AI News

Let’s Shape AI Before AI Shapes Us

Artificial intelligence is like a beautiful suitor who repeatedly brings his admirer to the edge of consummation only to vanish, dashing hopes and leaving an unrequited lover to wonder what might have been.

Physicist Stephen Hawking warns that AI “could spell the end of the human race.” Even Bill Gates, who usually obsesses over such prosaic tasks as eliminating malaria, advises careful management of digital forms of “superintelligence.” Will today’s outsized fears of AI become fodder for tomorrow’s computer comedy?

“Expert systems” similarly have experienced a long gestation, and even now these programs, built around knowledge gained from human experts, deliver little.

Computers can now pick faces out of a crowd and provide convincing customer service over the phone by simulating a real conversation.

According to Daniel H. Wilson, author of How to Survive a Robot Uprising, humans need not wait for the first AI catastrophe to install a “steel-reinforced panic room” to which they can escape from disobedient digital servants.

AI, Robotics, and the Future of Jobs

The vast majority of respondents to the 2014 Future of the Internet canvassing anticipate that robotics and artificial intelligence will permeate wide segments of daily life by 2025, with huge implications for a range of industries such as health care, transport and logistics, customer service, and home maintenance.

One respondent predicted, “There will be greater differentiation between what AI does and what humans do, but also much more realization that AI will not be able to engage the critical tasks that humans do.” Another group of experts feels that the impact on employment is likely to be minimal, for the simple reason that 10 years is too short a timeframe for automation to move substantially beyond the factory floor.

One respondent cautioned, “But there are only 12 years to 2025, some of these technologies will take a long time to deploy in significant scale… We’ve been living a relatively slow but certain progress in these fields from the 1960s.” Christopher Wilkinson, a retired European Union official, board member for EURid.eu, and Internet Society leader, said, “The vast majority of the population will be untouched by these technologies for the foreseeable future.”

Glenn Edens, a director of research in networking, security, and distributed systems within the Computer Science Laboratory at PARC, a Xerox Company, wrote, “There are significant technical and policy issues yet to resolve, however there is a relentless march on the part of commercial interests (businesses) to increase productivity so if the technical advances are reliable and have a positive ROI then there is a risk that workers will be displaced.

Another respondent warned, “The race between automation and human work is won by automation, and as long as we need fiat currency to pay the rent/mortgage, humans will fall out of the system in droves as this shift takes place… The safe zones are services that require local human effort (gardening, painting, babysitting), distant human effort (editing, coaching, coordinating), and high-level thinking/relationship building.”

The situation is exacerbated by total failure of the economics community to address to any serious degree sustainability issues that are destroying the modern ‘consumerist’ model and undermining the early 20th century notion of ‘a fair day’s pay for a fair day’s work.’ There is great pain down the road for everyone as new realities are addressed.

One respondent said, “The short answer is that if the job is one where that question cannot be answered positively, that job is not likely to exist.” Tom Standage, digital editor for The Economist, makes the point that the next wave of technology is likely to have a more profound impact than those that came before it: “Previous technological revolutions happened much more slowly, so people had longer to retrain, and [also] moved people from one kind of unskilled work to another.

Another respondent said, “I’m reminded of the line from Henry Ford, who understood he does no good to his business if his own people can’t afford to buy the car.” Alex Howard, a writer and editor based in Washington, D.C., said, “I expect that automation and AI will have had a substantial impact on white-collar jobs, particularly back-office functions in clinics, in law firms, like medical secretaries, transcriptionists, or paralegals.”

Another observed, “And education systems in the U.S. and much of the rest of the world are still sitting students in rows and columns, teaching them to keep quiet and memorize what is told to them, preparing them for life in a 20th century factory.” Bryan Alexander, technology consultant, futurist, and senior fellow at the National Institute for Technology in Liberal Education, wrote, “The education system is not well positioned to transform itself to help shape graduates who can ‘race against the machines.’ Not in time, and not at scale.”

Think outside the job.” Bob Frankston, an Internet pioneer and technology innovator whose work helped allow people to have control of the networking (internet) within their homes, wrote, “We’ll need to evolve the concept of a job as a means of wealth distribution as we did in response to the invention of the sewing machine displacing seamstressing as welfare.”

Jim Hendler, an architect of the evolution of the World Wide Web and professor of computer science at Rensselaer Polytechnic Institute, wrote, “The notion of work as a necessity for life cannot be sustained if the great bulk of manufacturing and such moves to machines—but humans will adapt by finding new models of payment as they did in the industrial revolution (after much upheaval).”

Tim Bray, an active participant in the IETF and technology industry veteran, wrote, “It seems inevitable to me that the proportion of the population that needs to engage in traditional full-time employment, in order to keep us fed, supplied, healthy, and safe, will decrease.

Kevin Carson, a senior fellow at the Center for a Stateless Society and contributor to the P2P Foundation blog, wrote, “I believe the concept of ‘jobs’ and ‘employment’ will be far less meaningful, because the main direction of technological advance is toward cheap production tools (e.g., desktop information processing tools or open-source CNC garage machine tools) that undermine the material basis of the wage system.

The real change will not be the stereotypical model of ‘technological unemployment,’ with robots displacing workers in the factories, but increased employment in small shops, increased project-based work on the construction industry model, and increased provisioning in the informal and household economies and production for gift, sharing, and barter.” Tony Siesfeld, director of the Monitor Institute, wrote, “I anticipate that there will be a backlash and we’ll see a continued growth of artisanal products and small-scale [efforts], done myself or with a small group of others, that reject robotics and digital technology.”

In the long run this trend will actually push toward the re-localization and re-humanization of the economy, with the 19th- and 20th-century economies of scale exploited where they make sense (cheap, identical, disposable goods), and human-oriented techniques (both older and newer) increasingly accounting for goods and services that are valuable, customized, or long-lasting.” In the end, a number of these experts took pains to note that none of these potential outcomes—from the most utopian to most dystopian—are etched in stone.

As one respondent put it, the outcome is not preordained; “rather it’s a political choice.” Jason Pontin, editor in chief and publisher of the MIT Technology Review, responded, “There’s no economic law that says the jobs eliminated by new technologies will inevitably be replaced by new jobs in new markets… All of this is manageable by states and economies: but it will require wrestling with ideologically fraught solutions, such as a guaranteed minimum income, and a broadening of our social sense of what is valuable work.”

Robots vs. humans: Will AI bring the advertising apocalypse?

A recent announcement by Coca-Cola also indicates that the company wants to use bots to create music for ads, write scripts, post spots on social media, and buy media – suggesting that the AI advertising revolution is closer to reality than ever.

In personalized retargeting, algorithms based on deep learning – a branch of AI loosely inspired by the human brain – can recognize sales peaks the way humans do, but they also notice hard-to-predict patterns and react quickly to better achieve goals.

The most powerful algorithms are already capable of answering millions of requests per second, which also includes the complex process of request analysis and bid estimation.

As more and more tasks are run by computers instead of analysts, marketers gain time to innovate and grow their brands, rather than worrying about how to analyze the data and make decisions that will influence millions of customers at a time.

Day-to-day activities that form the backbone of any media agency (reporting, auditing, spot-checking, and so on) can be fully automated, helping specialists focus on strategy and creativity.

In personalized retargeting, among other applications, decisions about which products to show in ads are typically made in less than 10 milliseconds, far faster than the blink of a human eye.
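To make that latency constraint concrete, here is a minimal sketch of a retargeting decision loop with a hard time budget. Everything in it (feature names, the scoring rule, the bid formula) is an invented illustration, not any ad platform's actual API.

```python
import time

# Hypothetical latency budget for a single retargeting decision, taken from
# the "less than 10 milliseconds" figure quoted above. All names here are
# illustrative assumptions, not a real bidder's interface.
LATENCY_BUDGET_MS = 10.0

def score_products(user_profile, candidate_products):
    """Toy relevance scorer: count overlapping interest tags."""
    return {
        p["id"]: len(set(p["tags"]) & set(user_profile["interests"]))
        for p in candidate_products
    }

def handle_bid_request(user_profile, candidate_products):
    """Pick a product to show and a bid, staying inside the time budget."""
    start = time.perf_counter()
    scores = score_products(user_profile, candidate_products)
    best = max(scores, key=scores.get)
    bid_cpm = 0.5 + 0.1 * scores[best]            # naive bid estimate
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > LATENCY_BUDGET_MS:            # too slow: sit out the auction
        return None
    return {"product_id": best, "bid_cpm": bid_cpm, "elapsed_ms": elapsed_ms}

if __name__ == "__main__":
    profile = {"interests": ["running", "coffee"]}
    products = [
        {"id": "shoe-1", "tags": ["running", "outdoor"]},
        {"id": "mug-7", "tags": ["coffee", "kitchen"]},
        {"id": "tv-3", "tags": ["electronics"]},
    ]
    print(handle_bid_request(profile, products))
```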

Although the computer was able to give creative direction for commercials by drawing on a database of tagged and analyzed TV commercials from the past, humanity appeared to have triumphed: Kuramoto won 54% of the vote to his AI counterpart’s 46%.

One instance of the future was imagined by computer scientist Eliezer Yudkowsky in a chapter of the 2008 book Global Catastrophic Risks: “It would be physically possible to build a brain that computed a million times as fast as a human brain…

If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours.” Yudkowsky thinks that if we don’t get on top of this now, it will be too late.
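The arithmetic behind those figures is easy to verify. Assuming a flat million-fold speed-up and a 365-day year, a few lines reproduce the quoted numbers:

```python
# Back-of-the-envelope check of the speed-up figures quoted above,
# assuming a 1,000,000x acceleration and a 365-day year.
SPEEDUP = 1_000_000
SECONDS_PER_YEAR = 365 * 24 * 60 * 60            # 31,536,000 s

seconds_per_subjective_year = SECONDS_PER_YEAR / SPEEDUP
hours_per_subjective_millennium = 1000 * seconds_per_subjective_year / 3600

print(f"One subjective year passes every {seconds_per_subjective_year:.1f} s")
print(f"A subjective millennium passes in {hours_per_subjective_millennium:.2f} h")
# Roughly 31.5 s and 8.76 h, matching the "31 physical seconds" and
# "eight-and-a-half hours" in the quote.
```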

As he puts it, “by the time your neurons finish thinking the words ‘I should do something’ you have already lost.” Lending weight to this stark vision, a recent PwC report found that 38% of US jobs are at high risk of being replaced by robots and artificial intelligence by the early 2030s.

What we already know about the marketing industry is that when algorithms can learn from data, it becomes easier for brands to understand customers on a larger, more global scale, rather than just as separate, local entities.

Companies Want to Replicate Your Dead Loved Ones With Robot Clones

Many grieving people feel an emotional connection to things that represent dead loved ones, such as headstones, urns and shrines, according to grief counselors.

In the future, people may take that phenomenon to stunning new heights: Artificial intelligence experts predict that humans will replace dead relatives with synthetic robot clones, complete with a digital copy of that person's brain.

'We don't stuff humans, but this is a way of "stuffing" their information, their personality and mannerisms,' said Bruce Duncan, managing director of Terasem Movement, a research foundation that aims to 'transfer human consciousness to computers and robots.'

The firm has already created thousands of highly detailed 'mind clones' to log the memories, values and attitudes of specific people.

Using the data, scientists created one of the world's most socially advanced robots, a replica of Terasem Movement founder Martine Rothblatt's wife, called Bina48, which sells for roughly $150,000.

She makes facial expressions, greets people and has conversations (including some awkward ones), made possible with facial and voice recognition software, motion tracking, and internet connectivity.

'The definition of "alive" may even evolve to mean "as long as your essential personal information continues to be organized and accessible."' Bina48 still has some social glitches, but she's a working proof of concept—the firm's almost-charming poster girl for the techno-immortality movement.

A more advanced version of robots like Bina48 could hit the market within 10 or 20 years for roughly $25,000 to $30,000, for a variety of uses including replicating dead loved ones, Duncan predicted.

At least 56,000 people have already handed over information to create mindfiles, a web-based storage space for preserving 'one's unique and essential characteristics for the future,' according to Lifenaut, a branch of the Terasem Movement that gathers human personality data for free.

The goal is to capture a person's attitudes, beliefs and memories and create a database that one day can be uploaded to a robot or holograph, according to the Lifenaut website.

'The robot personality may also be modifiable within a base personality construct to provide states or moods representing transitory conditions of happiness, fear, surprise,' the patent papers state.

Six years ago, the now-defunct company Intellitar launched a digital clone promising users 'virtual eternity,' and the ability to communicate with a person's digital self after death.

Is the AI apocalypse a tired Hollywood trope, or human destiny?

Why is it that every time humans develop a really clever computer system in the movies, it seems intent on killing every last one of us at its first opportunity?

In Stanley Kubrick’s masterpiece, 2001: A Space Odyssey, HAL 9000 starts off as an attentive, if somewhat creepy, custodian of the astronauts aboard Discovery One, before famously turning homicidal and trying to kill them all. In The Matrix, humanity’s invention of AI promptly results in human-machine warfare, leading to humans enslaved as a biological source of energy by the machines.

“If and when a takeoff occurs,” Bostrom writes, “it will likely be explosive.” Stephen Hawking has echoed this sentiment: “Once humans design artificial intelligence,” he says, “it will take off on its own and develop at an ever-increasing rate.

In fact, over the intervening decades, there were several boom-and-bust cycles in AI research (the busts often dubbed “AI winters”) that moved a few of these building blocks forward, but then failed to show ongoing progress after an initial period of excitement.

The predictive text options that appear as we type on our phones save us valuable taps, while autocorrect attempts to make up for the fact that on-screen keyboards and human thumbs are a recipe for inaccuracy (notwithstanding the hilarity that often ensues when it tries to come to our rescue).

So-called intelligent assistants from Apple, Google, Amazon, and Microsoft (Siri, Google Now, Alexa, and Cortana, respectively) all leverage recent advances in natural language processing (NLP), combined with sophisticated heuristics, which make questions like “what’s the weather going to be like today?” the hands-free equivalent of jumping into a weather app or Googling the same phrase.

Google’s CEO, Sundar Pichai, recently boasted that his company’s NLP AI is approaching human levels of understanding, which may explain why he told shareholders that “the next big step will be for the very concept of the ‘device’ to fade away.” The Waze GPS and crowdsourced mapping app is a great example of AI planning.

IBM continues to develop Watson, as well as its other investments in AI, in pursuit of what Banavar calls “grand challenges.” These are computing problems so difficult and complex, they often require dozens of researchers and a sustained investment over months or years.

The sheer number of scans being done is creating increasing demand for trained radiologists, whose numbers are limited simply because of the rigorous and lengthy training required to become one.

In order to significantly impact the number and quality of scans that can be processed, researchers are using Watson to understand the content of the images, within the full medical context of the patient.

“Within the next two years,” Banavar says, “we will see some very significant breakthroughs in this.” For IBM to succeed, it will have to solve a problem that has plagued AI efforts from their very beginnings: Computers tend to follow the instructions they’re given in such a literal way that, when the unexpected occurs —

Using a tool within the AI arsenal known as machine learning, Gupta and her colleagues are slowly training computers to filter information in a way that most humans find relatively simple.

“You can have a model that can learn from a billion examples,” Gupta explains, “but if you don’t have a billion examples, the machine has nothing to learn from.” Which is why YouTube, with its monster catalog of videos, is the perfect place to nurture a data-hungry process like machine learning.

Both feel like child’s play: Smoothness dictates that you shouldn’t let one small change throw off a decision that has been based on dozens of other factors, while monotonicity operates on an “all other things being equal, this one fact should make it the best choice” principle.

In practice, smoothness means that a potentially great video recommendation isn’t dismissed by the algorithm simply because it contained both cooking and traveling information, when the previously watched video was purely about cooking.

If you’ve identified that you like coffee shops that serve organic, fair trade coffee and that also have free Wi-Fi, then the one that is closest to you should top the recommended list, even though you never specified distance as important.
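As a rough illustration of those two properties (and not Google's actual recommendation code), a scoring function can blend several weak signals so that no single small change dominates, while remaining monotonic in proximity so that the closer shop wins when everything else is equal. The feature names and weights below are invented for the example:

```python
# Toy illustration of "smoothness" and "monotonicity" in a recommender score.
# The features, weights, and data are assumptions made up for this sketch.

def score_coffee_shop(shop):
    """Blend several weak signals so no single small change dominates the
    result (smoothness); the distance term is strictly decreasing, so being
    closer can only help, all else being equal (monotonicity)."""
    score = 0.0
    score += 2.0 if shop["organic"] else 0.0
    score += 2.0 if shop["fair_trade"] else 0.0
    score += 1.5 if shop["free_wifi"] else 0.0
    score += 3.0 / (1.0 + shop["distance_km"])   # monotonic in proximity
    return score

shops = [
    {"name": "Corner Beans", "organic": True, "fair_trade": True,
     "free_wifi": True, "distance_km": 0.3},
    {"name": "Roast Hub", "organic": True, "fair_trade": True,
     "free_wifi": True, "distance_km": 2.5},
    {"name": "Drive-Thru Joe", "organic": False, "fair_trade": False,
     "free_wifi": True, "distance_km": 0.1},
]

for shop in sorted(shops, key=score_coffee_shop, reverse=True):
    print(f"{shop['name']}: {score_coffee_shop(shop):.2f}")
# With identical amenities, the nearer shop ("Corner Beans") ranks first.
```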

“Minecraft is really interesting because it’s an open-world game,” Hofmann told us, which offers a unique space in which AI agents can deal with different environments that change over time, a key point if you’re trying to foster flexible learning.

But Eck was also struck by something else: “This stuff can be fun.” So Eck decided to lobby the senior brass at Google to let him build a small team to investigate how machine learning could be further leveraged in the world of art, only this time it would be focused on music, an area Eck has long been passionate about.

“The question is,” Eck asks, philosophically, “how do you build models that can generate [music] and can understand whether they’re good or not, based upon feedback from their audience, and then improve?”  It starts to sound like Magenta is going to unleash a horrible new wave of computer-generated Muzak, but Eck is quick to assure us that’s not the point.

Google’s Gupta points to a basic stumbling block that she thinks will hamstring the development of a strong AI for years to come: “Our best philosophers and neuroscientists aren’t sure what consciousness is,” she notes, “so how can we even start talking about what it means [for a machine to be conscious] or how we would go about replicating that digitally?” It’s hard to tell if she’s being sincere or coy —

“I believe that it may be possible in principle,” she says, “but just knowing the state of the art in AI, I don’t see us getting anywhere close to those predictions any time in the near future.” Google’s Eck finds the topic somewhat exasperating.

I don’t look at our brains as these computational boxes [in competition] with these other, stronger brains in boxes that happen to be inside computers.” When asked how far we might be from such a scenario, he laughs and says, “Twenty years!” because, as he points out, that’s the usual time frame experts give when they have no idea, but they need to say something.

Michael Chorost classifies Bostrom’s Superintelligence as “a brilliantly wrong book.” He believes we may create increasingly powerful AI agents, but he’s unconvinced these algorithms will ever become sentient, let alone sapient.

“The environment should be lethally complex,” he says, evoking images of AIs competing in a virtual gladiator’s arena, “so that it kills off ineffective systems and rewards effective ones.” The other benefit to this artificial Darwinism, if it succeeds, is that it will produce a moral AI with no genocidal tendencies.

Perhaps the essential ingredients for sentience will never be reproduced in silicon, and we’ll be able to live comfortably knowing that as incredibly capable as Siri becomes, she’s never going to follow her own desires instead of catering to ours, like in the movie Her.

It might be easy to dismiss Marchant’s concerns were it not for the fact that a federal advisory board to the Department of Defense just released a study on autonomy that echoes his words, almost verbatim: “Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike.

The study therefore concluded that DoD must take immediate action to accelerate its exploitation of autonomy while also preparing to counter autonomy employed by adversaries.” The other use of AI that Marchant believes is in need of examination is much closer to home: “The movement toward autonomous cars,” he says, is going to require thoughtful development and much better regulation.

He foresees “people being injured or killed by an autonomous system making decisions.” He highlights the very real ethical decisions that will be faced by AI-controlled cars: In an accident situation, whose life should be preserved —

“I was talking to a pathologist,” he recounts, “who said his field is drying up because machines are taking it over.” Recently, a prototype AI based on IBM’s Watson began working at a global law firm.

Its machine discovery and document review capabilities, once sufficiently advanced, could affect the jobs of young associate lawyers, which Marchant thinks demonstrates that it’s not only menial jobs that are at risk.

Her Google colleague, Eck, puts it into a historical (and of course, musical) frame, noting that the advent of drum machines didn’t create legions of unemployed drummers (or, at the very least, it didn’t add to their existing numbers).

That isn’t the case now.” Interestingly, the biggest players in AI aren’t deaf to these and other concerns regarding AI’s future impact on society, and they have recently joined forces to create a new nonprofit organization called the Partnership on Artificial Intelligence to Benefit People and Society, or, more briefly, the Partnership on AI.

“I’m almost worried that sometimes we move too quickly,” he says, “and start putting in place laws before we know what we’re trying to address.” Dr. Kathleen Richardson, Senior Research Fellow in the Ethics of Robotics at De Montfort University, knows exactly what she’s trying to address: The goal of an aware AI, or any AI designed to mimic living things, she believes, is a fundamentally flawed pursuit.

She likens the endeavor to “slavery.” For Richardson, using machines as a stand-in for a person, or indeed any other living entity, is a byproduct of a corrupt civilization that is still trying to find rationalizations to treat people as objects.

“We share properties with all life,” she says, “but we don’t share properties with human-made artifacts.” Reversing this logic, Richardson dismissed the notion that we will ever create an aware, sentient, or sapient algorithm.

“I’d try to talk him out of it, but if that’s what made him happy, I’d be more concerned about that than anything else.” Perhaps as a sign of the times, earlier this year a draft plan for the EU included wording that would give robots official standing as “electronic persons.” Facebook CEO Mark Zuckerberg has said that in 10 years, it’s likely that AI will be better than humans at basic sensory perception.

Li Deng, a principal researcher at Microsoft, agrees, and goes even further, saying, “Artificial Intelligence technologies will be used pervasively by ordinary people in their daily lives.” Eric Schmidt, executive chairman of Google parent Alphabet, and Google CEO Pichai see an enormous explosion in the number of applications, products, and companies that have machine learning at their core.

“One of the breakthroughs we need,” he says, “is how you combine the statistical technique [of machine learning] with the knowledge-based technique.” He refers to the fact that even though machines have proven powerful at sifting through huge volumes of data to determine patterns and predict outcomes, they still don’t understand its “meaning.” The other big challenge is being able to ramp up the computing power we need to make the next set of AI leaps possible.

“We are working on new architectures,” he reveals, “inspired by the natural structures of the brain.” The premise here is that if brain-inspired software, like neural nets, can yield powerful results in machine learning, then brain-inspired hardware might be equally (or more) powerful.

All this talk about brain-inspired technology inevitably leads us back to our first, spooky, concern: In the future, AI might be a collection of increasingly useful tools that can free us from drudgery, or it could evolve rapidly —

“When an engineering path [to sentient AI] becomes clear,” he says, “then we’ll have a sense of what not to do.” Banavar, despite being fairly certain that an AI with its own goals isn’t in our future, suggests that “it is a smart thing for us to have a way to turn off the machine.” The team at Google’s DeepMind agrees and has written a paper in conjunction with the Future of Humanity Institute that describes how to create the equivalent of a “big red button” that would let the human operator of an AI agent suspend its functions, even if the agent became smart enough to realize such a mechanism existed.

The paper, titled “Safely Interruptible Agents,” does not go so far as to position itself as the way to counter a runaway superintelligence, but it’s a step in the right direction as far as Tesla CEO Musk is concerned: He recently implied that Google is the “one” company whose AI efforts keep him awake at night.
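As a purely illustrative sketch (assuming nothing about DeepMind's actual implementation), the operator-facing side of such a mechanism can be pictured as an agent loop with a switch that forces a no-op. The hard problem the paper actually tackles, which this toy omits, is ensuring that a learning agent has no incentive to resist or disable that switch.

```python
import random

# Toy "big red button" plumbing: an agent loop an operator can suspend.
# This only shows the interface; it says nothing about how to train an
# agent that will not learn to avoid being interrupted.

class InterruptibleAgent:
    def __init__(self):
        self.interrupted = False

    def big_red_button(self):
        """Operator-facing switch that suspends the agent."""
        self.interrupted = True

    def act(self, observation):
        if self.interrupted:
            return "no-op"                          # suspended: do nothing
        return random.choice(["left", "right"])     # placeholder policy

agent = InterruptibleAgent()
for step in range(5):
    if step == 3:
        agent.big_red_button()                      # operator intervenes
    print(step, agent.act(observation=None))
```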

Or perhaps, to quote computer scientist and AI skeptic Peter Hassan, we will simply keep “pursuing an ever-increasing number of irrelevant activities as the original goal recedes ever further into the future — like the mirage it is.”

Uncanny valley

The concept of the uncanny valley suggests that humanoid objects which appear almost, but not exactly, like real human beings elicit uncanny, or strangely familiar, feelings of eeriness and revulsion in observers.[2] Valley denotes a dip in the human observer's affinity for the replica, a relation that otherwise increases with the replica's human likeness.[3] Examples can be found in robotics, 3D computer animations, and lifelike dolls among others.

The concept was identified by the robotics professor Masahiro Mori as Bukimi no Tani Genshō (不気味の谷現象) in 1970.[5] The term was first translated as uncanny valley in the 1978 book Robots: Fact, Fiction, and Prediction, written by Jasia Reichardt,[6] thus forging an unintended link to Ernst Jentsch's concept of the uncanny,[7] introduced in a 1906 essay entitled 'On the Psychology of the Uncanny.'[8][9][10] Jentsch's conception was elaborated by Sigmund Freud in a 1919 essay entitled 'The Uncanny' ('Das Unheimliche').[11]

Mori's original hypothesis states that as the appearance of a robot is made more human, some observers' emotional response to the robot becomes increasingly positive and empathetic, until it reaches a point beyond which the response quickly becomes strong revulsion.

However, as the robot's appearance continues to become less distinguishable from a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels.[13] This area of repulsive response aroused by a robot with appearance and motion between a 'barely human' and 'fully human' entity is the uncanny valley.
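Mori drew this relationship as a qualitative curve rather than deriving it from data. The short script below plots an invented affinity function with the same general shape (rising with human likeness, dipping sharply near "almost human," then recovering), purely to visualize the idea:

```python
import numpy as np
import matplotlib.pyplot as plt

# Schematic, made-up affinity curve for the uncanny valley: affinity rises
# with human likeness, plunges near "almost human," then recovers toward
# human-to-human levels. The formula is invented for visualization only.
likeness = np.linspace(0, 1, 500)
affinity = likeness - 1.8 * np.exp(-((likeness - 0.85) ** 2) / 0.003)

plt.plot(likeness, affinity)
plt.axvline(0.85, linestyle="--", label="uncanny valley")
plt.xlabel("human likeness")
plt.ylabel("affinity (arbitrary units)")
plt.legend()
plt.title("Schematic uncanny-valley curve")
plt.show()
```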

In other words, this aversive reaction to realism can be said to be evolutionary in origin.[39] As of 2011, researchers at University of California, San Diego and California Institute for Telecommunications and Information Technology are measuring human brain activations related to the uncanny valley.[40][41] In one study using fMRI, a group of cognitive scientists and roboticists found the biggest differences in brain responses for uncanny robots in parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain’s visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons.

A related concern applies to human enhancement technologies (e.g., body modification), which aim to improve the abilities of the human body beyond what would normally be possible, be it eyesight, muscle strength, or cognition.[66] So long as these enhancements remain within a perceived norm of human behavior, a negative reaction is unlikely, but once individuals supplant normal human variety, revulsion can be expected.

However, according to this theory, once such technologies gain further distance from human norms, 'transhuman' individuals would cease to be judged on human levels and instead be regarded as separate entities altogether (this point is what has been dubbed 'posthuman'), and it is here that acceptance would rise once again out of the uncanny valley.[66] Another example comes from 'pageant retouching' photos, especially of children, which some find disturbingly doll-like.[67]

Sophia robot is traveling around the world

Sophia defies conventional thinking of what a robot should look like. Designed to look like Audrey Hepburn, Sophia embodies Hepburn's classic beauty: porcelain skin, a slender...

A very human-like robot invented by Japanese engineers

Two human look-alike robots invented by Japanese engineers. They can talk to each other!

Singer vs Virtual Singer

Roomie challenges virtual singers to a battle. Will the computers take over? And most importantly, can they order pizza over the phone?

Humans Need Not Apply


CES preview: Lifelike robots built to win over humans

(4 Jan 2018) Artificial intelligence (AI) is set to be a top trend at the annual Consumer Electronics Show (CES) in Las Vegas. In Hong...

What happens when our computers get smarter than we are? | Nick Bostrom

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it...

Scientists Put the Brain of a Worm Into a Robot… and It MOVED

This robot contains the digitized brain of a worm, and without any outside input it just... works! Here's what this could mean for the future of AI.

Why Retail Businesses Should Hire Humans, Not Robots

Scott Galloway, professor of marketing at NYU Stern School of Business and the author of "The Four: The Hidden DNA of Amazon, Apple, Facebook and Google" says that employing a huge fleet of...

Could We Upload Our Consciousness To A Computer?

Would it ever be possible to one day upload our consciousness to a computer? How would we go about this?

EX MACHINA Official Trailer (2015) [HD]

Ex Machina - Official Trailer (2015): A young programmer is selected to participate in a breakthrough experiment in artificial..