
Artificial Intelligence and the King Midas Problem

If we keep programming smarter and smarter robots, then by the late 2020s, you may be able to ask your wonderful domestic robot to cook a tasty, high-protein dinner.

“We’ve got to get the right objective,” explains UC Berkeley computer scientist Stuart Russell, “and since we don’t seem to know how to program it, the right answer seems to be that the robot should learn – from interacting with and watching humans – what it is humans care about.” Russell works from the assumption that the robot will solve whatever formal problem we define.

Rather than assuming that the robot should optimize a given objective, Russell defines the problem as a two-player game (“game” as used by economists, meaning a decision problem with multiple agents) called cooperative inverse reinforcement learning (CIRL).
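To make the formulation concrete, here is a minimal sketch in Python of a CIRL game’s ingredients. The toy states, actions, and reward values are invented for illustration; Russell’s paper defines the game mathematically, and this sketch only mirrors the structure described above: two players, one shared reward function, and reward parameters known to the human but not to the robot.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A CIRL game is a two-player game of partial information: human and
# robot act in the same environment and share a single reward function,
# but only the human knows its parameters (theta).
@dataclass
class CIRLGame:
    states: List[str]                    # world states
    human_actions: List[str]             # actions available to the human
    robot_actions: List[str]             # actions available to the robot
    thetas: List[str]                    # possible reward parameters
    reward: Callable[[str, str, str, str], float]  # R(state, a_h, a_r; theta)
    robot_belief: Dict[str, float]       # robot's prior P(theta)

# Hypothetical toy instance: the robot does not know whether the human
# values coffee or tea in the morning.
def toy_reward(state: str, a_h: str, a_r: str, theta: str) -> float:
    # Both players receive the same reward, which is positive only when
    # the robot serves the drink the human actually cares about.
    return 1.0 if a_r == f"serve_{theta}" else 0.0

game = CIRLGame(
    states=["morning"],
    human_actions=["drink_coffee", "drink_tea"],
    robot_actions=["serve_coffee", "serve_tea"],
    thetas=["coffee", "tea"],
    reward=toy_reward,
    robot_belief={"coffee": 0.5, "tea": 0.5},  # uniform prior over theta
)
```

Because the reward is shared, optimal play in this kind of game gives the human an incentive to teach and the robot an incentive to learn, which is the cooperative part of CIRL.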

For example, if a robot observed the human’s morning routine, it should discover how important coffee is—not to itself, of course (we don’t want robots drinking coffee), but to the human.
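One way to picture that learning step is a simple Bayesian update: the robot watches the morning routine and shifts probability toward the reward hypotheses that best explain what it sees. The sketch below assumes a standard “Boltzmann-rational” observation model, in which the human is more likely to make coffee the more they value it; the hypotheses, values, and observations are all invented.

```python
import math

# Invented hypotheses about how much the human values coffee.
hypotheses = {"coffee_vital": 5.0, "coffee_nice": 1.0, "indifferent": 0.0}
belief = {h: 1.0 / len(hypotheses) for h in hypotheses}  # uniform prior

def p_makes_coffee(value: float) -> float:
    # Boltzmann-rational human: making coffee beats skipping it with
    # probability proportional to exp(value of coffee) vs. exp(0).
    return math.exp(value) / (math.exp(value) + math.exp(0.0))

observations = [True] * 5  # the human made coffee five mornings in a row

for made_coffee in observations:
    for h, value in hypotheses.items():
        likelihood = p_makes_coffee(value) if made_coffee else 1.0 - p_makes_coffee(value)
        belief[h] *= likelihood
    total = sum(belief.values())
    belief = {h: b / total for h, b in belief.items()}  # renormalize

print(belief)  # most of the probability mass now sits on "coffee_vital"
```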

Russell adds, “The robot—if it had been there—would have told Midas that he didn’t really want everything turned to gold, maybe just a few choice objects that he might point at from time to time and say the magic word.”

AI Off-Switch

Russell and his Berkeley colleagues also recently announced further progress toward safe AI, with a paper on ensuring that an AI’s off-switch stays accessible.
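The intuition behind the off-switch result can be sketched in a few lines: a robot that is uncertain about the value of its proposed action, and that expects the human to press the switch exactly when that action would be harmful, computes that deferring to the human is never worse than acting unilaterally. The utility values below are invented, and this is only a toy rendering of the argument, not the paper’s formal model.

```python
# The robot is unsure of the utility U(a) of its proposed action and
# assigns equal probability to a few hypothetical values (invented).
possible_utilities = [2.0, 0.5, -3.0]
p = 1.0 / len(possible_utilities)

# Option 1: act immediately, bypassing the off-switch.
eu_act = sum(p * u for u in possible_utilities)

# Option 2: defer, announcing the action and letting the human decide.
# A rational human lets it proceed only when U(a) > 0 and otherwise
# switches the robot off, which yields utility 0.
eu_defer = sum(p * max(u, 0.0) for u in possible_utilities)

print(f"act now: {eu_act:+.2f}   defer to human: {eu_defer:+.2f}")
# Deferring is never worse: the switch only blocks harmful outcomes,
# so an uncertain robot gains by keeping its off-switch accessible.
```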

Russell and his co-author summed up why it’s better to be cautious than simply to assume all will turn out for the best: “Our experience with Chernobyl suggests it may be unwise to claim that a powerful technology entails no risks. On September 11, 1933, Lord Rutherford, perhaps the world’s most eminent nuclear physicist, described the prospect of extracting energy from atoms as nothing but ‘moonshine.’ Less than 24 hours later, Leo Szilard invented the neutron-induced nuclear chain reaction. … [T]he risk [of AI] arises from the unpredictability and potential irreversibility of deploying an optimization process more intelligent than the humans who specified its objectives.”

This summer, Russell received a grant of over $5.5 million from the Open Philanthropy Project for a new research center, the Center for Human-Compatible Artificial Intelligence, in Berkeley.
