AI News

New ‘deep learning’ technique enables robot mastery of skills via trial and error

UC Berkeley researchers have developed algorithms that enable robots to learn motor tasks through trial and error using a process that more closely approximates the way humans learn, marking a major milestone in the field of artificial intelligence.

They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks — putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more — without pre-programmed details about its surroundings.

The work is connected to a new multi-campus, multidisciplinary research initiative that seeks to keep the dizzying advances in artificial intelligence, robotics and automation aligned to human needs.

Neural inspiration

Conventional, but impractical, approaches to helping a robot make its way through a 3D world include pre-programming it to handle the vast range of possible scenarios or creating simulated environments within which the robot operates.

Instead, the UC Berkeley researchers turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the neural circuitry of the human brain as it perceives and interacts with the world, and in which layers of artificial neurons process overlapping raw sensory data, whether sound waves or image pixels.
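The layered processing described above can be sketched as a tiny feed-forward network. The layer sizes, the ReLU activation, and the 32x32 input image are illustrative assumptions for this sketch, not the researchers' actual architecture.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Illustrative sizes: a 32x32 grayscale image flattened to 1024 inputs,
# two hidden layers of artificial neurons, and a small output layer.
rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.01, (1024, 64))
W2 = rng.normal(0, 0.01, (64, 32))
W3 = rng.normal(0, 0.01, (32, 4))

def forward(pixels):
    """Each layer transforms the previous layer's output, so raw pixels
    are progressively turned into higher-level features."""
    h1 = relu(pixels @ W1)   # first layer: low-level features
    h2 = relu(h1 @ W2)       # second layer: combinations of features
    return h2 @ W3           # output: e.g. scores for a few motor commands

image = rng.random(1024)     # stand-in for raw sensory data (pixels)
out = forward(image)
print(out.shape)             # (4,)
```

In a trained network the weights would be learned from data; here they are random, which is enough to show how raw sensory input flows through successive layers.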

Applying deep reinforcement learning to motor tasks has been far more challenging, however, since the task goes beyond the passive recognition of images and sounds.

When the robot is not given the location for the objects in the scene and needs to learn vision and control together, the learning process takes about three hours.
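Trial-and-error learning of the kind described above can be illustrated with a deliberately tiny sketch: random perturbations to a policy parameter are kept only when they improve a reward. This hill-climbing loop, the one-dimensional task, and the target value are all assumptions for illustration; the Berkeley team used a far more sophisticated form of deep reinforcement learning.

```python
import random

# Toy task: the "policy" is a single number the robot must drive toward a
# target position; reward is higher the closer it gets.
TARGET = 3.7

def reward(position):
    return -abs(position - TARGET)

random.seed(0)
policy = 0.0
for trial in range(200):
    candidate = policy + random.gauss(0, 0.5)   # perturb: try something new
    if reward(candidate) > reward(policy):      # keep it only if it helped
        policy = candidate

print(policy)  # ends up close to TARGET
```

The point of the sketch is the loop structure: the robot never sees the answer, only a reward signal, and improvement comes purely from keeping whichever attempts scored better.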

“We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch,” said Pieter Abbeel of UC Berkeley’s Department of Electrical Engineering and Computer Sciences.


“Deep Learning”: A Giant Step for Robots

The prospect of robots that can learn for themselves — through artificial intelligence and adaptive learning — has fascinated scientists and movie-goers alike.

It may not seem like much, but the challenge was daunting for the robot. After as many as a hundred trials, holding a towel in different places each time, BRETT knew the towel’s size and shape and could start folding.

“The algorithms instructed the robot to perform in a very specific set of conditions, and although it succeeded, it took 20 minutes to fold each towel,” laughs Abbeel, associate professor of electrical engineering and computer science.

This year, in a first for the field, Abbeel gave a new version of BRETT the ability to improve its performance through both deep learning and reinforcement learning. The deep learning component employs so-called neural networks to provide moment-to-moment visual and sensory feedback to the software that controls the robot’s movements.

With its onboard camera, BRETT could pinpoint the nail to be extracted, as well as the position of its own arms and hands. Through trial and error, it learned to adjust the vertical and horizontal position of the hammer claw, and to maneuver the angle into the right position to pull out the nail.
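The moment-to-moment feedback loop described here, camera observation in, motor adjustment out, can be sketched as a simple closed-loop controller. The 2-D positions, the proportional gain, and the perfect "camera" below are illustrative assumptions, not the actual vision-and-control system.

```python
# Minimal closed-loop sketch: at each step the "camera" reports where the
# nail is relative to the hammer claw, and the controller nudges the claw
# toward it, mirroring the robot's continuous visual feedback.

def camera_observation(claw, nail):
    """Stand-in for vision: the offset from claw to nail."""
    return (nail[0] - claw[0], nail[1] - claw[1])

def control_step(claw, offset, gain=0.3):
    """Adjust the claw's horizontal and vertical position toward the nail."""
    return (claw[0] + gain * offset[0], claw[1] + gain * offset[1])

nail = (5.0, 2.0)
claw = (0.0, 0.0)
for _ in range(30):                       # moment-to-moment feedback loop
    offset = camera_observation(claw, nail)
    claw = control_step(claw, offset)

print(claw)  # converges near the nail at (5.0, 2.0)
```

The real system must additionally learn the mapping from raw pixels to such offsets, which is exactly what the deep learning component supplies.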

Starting this year, the Bakar Fellows Program will support Abbeel’s lab with $75,000 a year for five years to help him refine the deep-learning strategy and move the research towards commercial viability.

This Robot Learns New Tasks Like a Human

You can pack plenty of information into a robotic brain, but ask a bot to teach itself a new motor task—even one as simple as stacking blocks or unscrewing a water bottle—and you’re probably shit out of luck.

While this works reasonably well in controlled environments—laboratories or medical facilities, for instance—learning to adapt to the unknown is a critical step our robots will need to take if they’re ever going to become more integrated into our daily lives.

To that end, researchers involved in Berkeley’s “People and Robotics Initiative” are turning to a new branch of artificial intelligence known as deep learning, which draws inspiration from how the human brain’s neural circuitry perceives and interacts with the world.

“We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch,” said Pieter Abbeel of UC Berkeley’s Department of Electrical Engineering and Computer Sciences.

“In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.” As someone who’s about to commit three solid days to spring cleaning, this is very heartening news.

BRETT the Robot learns to put things together on his own

Full Story: UC Berkeley researchers have developed ..

BRETT Robot learns motor tasks via Trial and Error

Researchers from the University of California, Berkeley have developed a new 'deep learning' technique that enables robots to learn motor tasks through trial ...

Deep Learning for Robotics - Prof. Pieter Abbeel - NIPS 2017 Keynote

Pieter Abbeel delivers his keynote: Deep Learning for Robotics, at NIPS 2017.


BRETT - Robot With Artificial Intelligence

BRETT - Robot With Artificial Intelligence Music: Isolated Mind By A Himitsu Creative Commons — Attribution 3.0 ..

Autonomous Multi-Throw Multilateral Surgical Suturing

Supplementary material, data and code is available at: Contributors: Siddarth Sen*, Animesh Garg*, David V. Gealy, ..

Reinforcement Learning of Bimanual Manipulation for a Robot

The video shows how a Baxter robot autonomously learns to manipulate, i.e., lift, a box within about 2 hours. No simulations were needed for this process.

Shared Latent Models for Zero-Shot Learning

We develop a framework for zero-shot learning (ZSL) with a goal towards understanding recognition for scenarios where certain class-specific training data ...

Berkeley startup to train Robots like Puppets using ML and VR

The Researchers from the University of California, Berkeley, have launched a start-up, Embodied Intelligence, Inc., to use the latest techniques of deep ...

Efficient Algorithms for Semantic Scene Parsing

Assistant Professor Raquel Urtasun | Toyota Technological Institute at Chicago. Abstract: Developing autonomous systems that are able to assist humans in ...