AI News, This Preschool Is for Robots

This Preschool Is for Robots

On the seventh floor of Berkeley’s technology research hall, a bright blue and yellow plastic ray gun sits on a long table, along with wooden spoons, model planes, and a set of square and round pegs.

Unlike most industrial robots, which are programmed to complete specific tasks, the Brett project teaches robots to learn using methods based partly on the ways young children discover the world.

Pieter Abbeel, who runs the robotics group at Berkeley, says his research has been partially inspired by watching child psychology tapes, which demonstrate how young children constantly adjust their approaches when solving tasks.

The project entered a new phase in the fall of 2014 when the team introduced a unique combination of two modern AI systems, and a roomful of toys, to a robot.

Since then, the team has published a series of papers outlining a software approach that lets any robot learn new tasks faster than traditional industrial machines while developing the sort of broad problem-solving know-how we associate with people.

These kinds of breakthroughs mean we’re on the cusp of an explosion in robotics and artificial intelligence, as machines become able to do anything people can do, including thinking, according to Gill Pratt, program director for robotics research at the U.S. Defense Advanced Research Projects Agency.

Many AI researchers, however, say giving robots brains based on deep learning is a necessary next step. It is expected to transform industrial machines the same way it enabled incredible strides in computer-vision applications, such as photo apps from Google and others that can recognize faces and buildings, says Andrej Karpathy, a computer science Ph.D.

When faced with uncertainty, today’s factory robots tend to shut down, which is not always the safest way to respond to a problem, says Avner Goren, a general manager at Texas Instruments.

“You put the baby down in front of some new task, and eventually it kind of plays around with it, figures out how to deal with the toy, and maybe by the end of the day, it’s figured out how to deal with a new piece of its environment that it’s never encountered.”

In the past few years, Google has acquired at least seven robotics companies, including two spinoffs of Willow Garage, which made the PR2 robot on which Brett lives.

Japan’s Fanuc, which makes robots that can assemble products, weld metal, paint walls, and package goods, said on Aug. 20 that it had acquired a stake in the startup Preferred Networks to inject AI into its robots.

It developed the habit as a shortcut for doing simple addition and subtraction, using its limbs as memory aids instead of relying on the custom software in its head.

Abbeel says that’s because the robot has learned to solve its tasks using the same types of trial-and-error approaches that pro sports players, such as Roger Federer, use to perfect their game.

The Swiss tennis champion’s serves are streamlined and elegant, unlike those of an amateur player, whose technique is riddled with “quirky motions that happen before they hit the ball.” Federer has practiced enough to know which parts of the pre-serve are pointless and which are key to a good hit.

The Education of Brett the Robot

The Berkeley Robot for the Elimination of Tedious Tasks—aka Brett, of course—holds one of those puzzle cubes for kids in one hand and with the other tries to jam a rectangular peg into a hole.

This is actually a big deal in robotics, because if humans want the machines of tomorrow to be truly intelligent and truly useful, the things are going to have to teach themselves not only to manipulate novel objects, but also to navigate new environments and solve problems on their own.

“If you think about something like reinforcement learning, where you learn from trial and error, the challenge is that often you need a lot of trial and error before you get somewhere,” says UC Berkeley roboticist Pieter Abbeel, who leads the learning research with Brett.
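
Abbeel’s “trial and error” maps directly onto the basic reinforcement-learning loop: act, observe a reward, update, repeat. The snippet below is a deliberately tiny, generic illustration of that loop, a tabular Q-learning agent on an invented one-dimensional “peg near a hole” toy problem; it is not the Berkeley group’s algorithm, and the environment, reward values, and hyperparameters are all made up for the sketch.

    # Illustrative only: a toy, discretized peg-positioning environment and a
    # tabular Q-learning agent, to show the trial-and-error loop in miniature.
    import numpy as np

    N_POSITIONS = 11          # peg position along one axis; the hole sits at index 5
    HOLE = 5
    ACTIONS = [-1, 0, +1]     # move left, stay, move right

    def step(pos, action):
        """Move the peg and return (new_pos, reward, done)."""
        new_pos = int(np.clip(pos + ACTIONS[action], 0, N_POSITIONS - 1))
        if new_pos == HOLE:
            return new_pos, 1.0, True     # the peg drops into the hole
        return new_pos, -0.01, False      # small cost for every wasted move

    q = np.zeros((N_POSITIONS, len(ACTIONS)))   # value estimate for each (state, action)
    alpha, gamma, epsilon = 0.1, 0.95, 0.2      # learning rate, discount, exploration rate
    rng = np.random.default_rng(0)

    for episode in range(500):
        pos = int(rng.integers(N_POSITIONS))    # start the peg somewhere random
        for _ in range(50):
            # Trial: mostly exploit what has worked, occasionally try something new.
            a = int(rng.integers(len(ACTIONS))) if rng.random() < epsilon else int(np.argmax(q[pos]))
            new_pos, reward, done = step(pos, a)
            # Error: nudge the estimate toward reward plus discounted future value.
            target = reward + (0.0 if done else gamma * np.max(q[new_pos]))
            q[pos, a] += alpha * (target - q[pos, a])
            pos = new_pos
            if done:
                break

    print("learned move at each position:", [ACTIONS[int(np.argmax(q[s]))] for s in range(N_POSITIONS)])

The point Abbeel is making is about sample efficiency: even this 11-state toy needs hundreds of episodes, which is why the raw version of the loop is far too slow for a physical robot.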

So what these researchers are chasing now is taking learning to the next level, specifically “learning to learn.” A programmer could keep tweaking Brett’s algorithm to get it to learn ever faster, sure.

“And you might have a reinforcement learning algorithm that maybe can have a robot learn to walk in a few hours rather than two weeks, maybe even faster.” This is essential for building a robotic future that isn’t totally maddening.

“Every living room is different in a home, and if we train a robot just on a single living room it's not going to be able to handle yours.” Solving peg puzzles, then, is literally and figuratively child’s play.
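
“Learning to learn” here means meta-learning: rather than tuning one policy for one living room, you tune the learner itself, for instance its starting weights, so that a handful of trials in a brand-new room is enough. The sketch below illustrates only the nested-loop structure of that idea, using a simple first-order Reptile-style update on a toy regression family; it is not Abbeel’s published method, and the task family, learning rates, and step counts are all invented for the example.

    # Illustrative meta-learning sketch (a Reptile-style update), not the Berkeley method.
    # Outer loop: adjust shared starting weights. Inner loop: a few steps of ordinary
    # learning on one sampled task. The payoff: a brand-new task adapts in a few steps.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_task():
        """Toy task family with shared structure: y = a*x + 3, with the slope a varying."""
        return rng.uniform(-2, 2), 3.0

    def inner_train(w, task, steps=10, lr=0.3, n_points=10):
        """A few steps of plain gradient descent on one task (the 'trial and error')."""
        a, b = task
        for _ in range(steps):
            x = rng.uniform(-1, 1, n_points)
            y = a * x + b
            err = (w[0] * x + w[1]) - y
            grad = np.array([np.mean(2 * err * x), np.mean(2 * err)])
            w = w - lr * grad
        return w

    meta_w = np.zeros(2)                 # shared initialization, learned across many tasks
    meta_lr = 0.1
    for _ in range(2000):                # outer loop: many different tasks
        task = sample_task()
        adapted = inner_train(meta_w.copy(), task)
        meta_w += meta_lr * (adapted - meta_w)   # move the init toward whatever worked

    # The learned init already encodes what the tasks share (the intercept), so a new
    # task only has to discover what is specific to it (its slope).
    new_task = sample_task()
    print("meta-learned init:", np.round(meta_w, 2),
          "adapted to new task", np.round(new_task, 2), "->", np.round(inner_train(meta_w.copy(), new_task), 2))

In Abbeel’s framing, the inner loop is the robot fumbling with your particular living room, and the outer loop is everything it learned from all the living rooms before yours.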

BRETT The Robot Learns To Do New Things Just Like A Kid Does

UC Berkeley researchers have come up with new algorithms to help robots learn like humans do – through trial and error.

The Berkeley robot, known affectionately as BRETT or the Berkeley Robot for the Elimination of Tedious Tasks, used this deep learning algorithm to complete a number of tasks, including putting a clothes hanger on a rack, assembling a toy plane and screwing a cap on a water bottle.

“The robot must be able to perceive and adapt to its surroundings,” the researchers say. Deep learning is a relatively new branch of AI research, loosely based on human neural circuitry and how our brains perceive and interact with the world.

To emulate our minds, deep learning algorithms create ‘neural nets’, in which layers of artificial neurons process raw sensory data like sound waves or image pixels and then try to interpret patterns and categories in the data they receive.
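
That layered processing is easy to see in code. Below is a minimal, generic feed-forward net in plain NumPy that pushes a fake 8x8 “image” through three layers of artificial neurons to produce scores for a few made-up categories; the layer sizes, the random (untrained) weights, and the categories are all invented for illustration, and real systems use much larger convolutional networks trained on real data.

    # Minimal illustration of "layers of artificial neurons" processing raw pixels.
    # Invented sizes and untrained random weights, purely to show the structure.
    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, w, b):
        """One layer: a weighted sum of its inputs followed by a simple nonlinearity (ReLU)."""
        return np.maximum(0.0, x @ w + b)

    pixels = rng.random(64)                       # a fake 8x8 grayscale image, flattened

    # Three stacked layers: 64 raw pixels -> 32 features -> 16 features -> 3 category scores.
    w1, b1 = rng.normal(scale=0.1, size=(64, 32)), np.zeros(32)
    w2, b2 = rng.normal(scale=0.1, size=(32, 16)), np.zeros(16)
    w3, b3 = rng.normal(scale=0.1, size=(16, 3)), np.zeros(3)

    h1 = layer(pixels, w1, b1)                    # early layers pick up low-level patterns
    h2 = layer(h1, w2, b2)                        # later layers combine them into higher-level ones
    scores = h2 @ w3 + b3                         # final layer: one score per (made-up) category

    print("category scores:", np.round(scores, 3))
    # Training would adjust w1..w3 so that the correct category ends up with the highest score.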

“There are no examples of the correct solution like one would have in speech and vision recognition programs,” the researchers note. The team used BRETT, a Willow Garage Personal Robot 2, in experiments much like the play of small children, such as placing blocks into matching openings or stacking Lego bricks.

“We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch.”

New ‘deep learning’ technique enables robot mastery of skills via trial and error

UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence.

They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks — putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more — without pre-programmed details about its surroundings.

The work is part of a new multi-campus, multidisciplinary research initiative that seeks to keep the dizzying advances in artificial intelligence, robotics and automation aligned to human needs.

Neural inspiration

Conventional, but impractical, approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios or creating simulated environments within which the robot operates.

Instead, the UC Berkeley researchers turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the neural circuitry of the human brain when it perceives and interacts with the world, and in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels.

Applying deep reinforcement learning to motor tasks has been far more challenging, however, since the task goes beyond the passive recognition of images and sounds.

When the robot is not given the location for the objects in the scene and needs to learn vision and control together, the learning process takes about three hours.
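
What makes the motor-control case harder is visible even in a toy version: the network has to map raw pixels to joint torques and is judged only by a scalar reward, with no labeled “correct” torques to imitate. The sketch below is a deliberately crude stand-in for that setup, a tiny linear vision-to-torque policy improved by random-search hill climbing; it is not the Berkeley team’s algorithm (their papers train deep visuomotor policies with far more sophisticated methods), and the fake images, the hidden target mapping, and the reward are all invented.

    # Crude, invented illustration of reward-driven learning of a vision-to-torque mapping.
    # Not the Berkeley algorithm: just random-search hill climbing on a tiny linear policy.
    import numpy as np

    rng = np.random.default_rng(0)
    N_PIXELS, N_JOINTS = 64, 3                       # invented sizes

    TARGET = rng.normal(size=(N_PIXELS, N_JOINTS))   # hidden "ideal" mapping, standing in for the task
    OBS = rng.random((20, N_PIXELS))                 # a fixed batch of fake camera images

    def reward(policy_w):
        """Higher is better: how close the policy's torques come to the (unknown) ideal ones."""
        return -np.mean((OBS @ policy_w - OBS @ TARGET) ** 2)

    w = np.zeros((N_PIXELS, N_JOINTS))               # start knowing nothing about the task
    best = reward(w)
    for _ in range(5000):
        candidate = w + 0.05 * rng.normal(size=w.shape)   # trial: a small random change
        r = reward(candidate)
        if r > best:                                      # error signal: keep it only if the score improves
            w, best = candidate, r

    print("reward at start:", round(reward(np.zeros((N_PIXELS, N_JOINTS))), 2), " after search:", round(best, 2))

Even in this toy, thousands of scored trials are needed; doing the equivalent on a physical robot is exactly why the three-hour figure above counts as fast.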

“We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch.”
