Google Wants Robots to Acquire New Skills by Learning From Each Other

Sergey Levine from the Google Brain team, along with collaborators from Alphabet subsidiaries DeepMind and X, published a blog post on Monday describing an approach for “general-purpose skill learning across multiple robots.” Teaching robots how to do even the most basic tasks in real-world settings like homes and offices has vexed roboticists for decades.

To tackle this challenge, the Google researchers decided to combine two recent technology advances. The first is cloud robotics, a concept that envisions robots sharing data and skills with each other through an online repository.

“The skills learned by the robots are still relatively simple—pushing objects and opening doors—but by learning such skills more quickly and efficiently through collective learning, robots might in the future acquire richer behavioral repertoires that could eventually make it possible for them to assist us in our daily lives.”

In that study, a group of robot arms went through some 800,000 grasp attempts, and though they failed a lot in the beginning, their success rate improved significantly as their neural net continuously retrained itself.

At regular intervals, the robots sent data about their performances to a central server, which used the data to build a new neural network that better captured how action and success were related.

The robots then shared their experiences with each other and together built what the researchers describe as a “single predictive model” that gives them an implicit understanding of the physics involved in interacting with the objects.

The idea is that people have strong intuitions about how objects and the world respond to their actions, and that by guiding robots through manipulation tasks we could transfer some of that intuition to them and help them learn those skills faster.

Besides faster learning times, this approach might benefit from the greater diversity of experience: a real-world deployment might involve multiple robots in different places and different settings, sharing heterogeneous, varied experiences to build a single highly generalizable representation.

Google Tasks Robots with Learning Skills from One Another via Cloud Robotics

Humans use language to tap into the knowledge of others and learn skills faster.

By sharing the learning process among multiple robots, the research team has considerably expedited the robots' acquisition of general-purpose skills.

For example, in order to teach a robotic arm how to grasp objects, we may need to let the robot experience as many as 800,000 grasps.

Robots which are designed to perform certain pre-defined actions or interact with pre-defined objects cannot easily respond to changes in the environment.

Every robot puts its own experience on the server and takes the latest version of the training model, which is the overall result obtained by all of the robots, from the server.
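This upload-experience, download-model loop can be sketched in a few lines. The classes and names below are illustrative stand-ins, not Google's actual system; a simple pooled success-rate estimate takes the place of the retrained neural network.

```python
# Minimal sketch of the collective-learning loop: each robot logs
# (features, success) experiences, uploads them to a central server,
# and pulls back the latest shared model built from everyone's data.

class CentralServer:
    def __init__(self):
        self.pool = []           # pooled experiences from every robot
        self.model_version = 0

    def upload(self, experiences):
        self.pool.extend(experiences)

    def retrain(self):
        # Stand-in for retraining a neural network: estimate the
        # overall grasp success rate from the pooled experiences.
        successes = sum(1 for _, ok in self.pool if ok)
        self.model_version += 1
        return {"success_rate": successes / len(self.pool)}

class Robot:
    def __init__(self, name):
        self.name = name
        self.buffer = []         # local, not-yet-shared experience

    def attempt(self, features, success):
        self.buffer.append((features, success))

    def sync(self, server):
        server.upload(self.buffer)   # share own experience
        self.buffer = []
        return server.retrain()      # take the latest shared model

server = CentralServer()
robots = [Robot("arm-0"), Robot("arm-1")]
robots[0].attempt([0.1, 0.4], True)
robots[0].attempt([0.9, 0.2], False)
robots[1].attempt([0.5, 0.5], True)
model = None
for r in robots:
    model = r.sync(server)
print(model["success_rate"], server.model_version)
```

The key design point is that each robot contributes raw experience but consumes a model built from the whole fleet's data, so every robot benefits from every other robot's attempts.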

In an attempt to teach robotic arms to grasp objects, Google observed that the robots had developed pre-grasp behaviors.

The robots' experiences were monitored via cameras, and the resulting data was used to train the system, which was based on a convolutional neural network (CNN).

For example, the research team slightly changed parameters such as camera position, lighting, and the gripper hardware for each robot.

In another experiment conducted recently, the Google team gives a group of robots the task of opening a door and investigates the idea of data sharing.

In the first experiment, the robots simply rely on reinforcement learning, or trial and error, combined with deep neural networks.
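The trial-and-error loop can be illustrated with a toy version of the door task. Everything here is a simplification for illustration: a tabular value estimate stands in for the deep neural network, and the action names and success probabilities are invented.

```python
# Toy trial-and-error (reinforcement learning) loop: try actions,
# observe success or failure, and shift toward what works.
import random

random.seed(0)
actions = ["push", "twist", "pull"]
q = {a: 0.0 for a in actions}        # estimated value of each action
counts = {a: 0 for a in actions}

def reward(action):
    # Invented dynamics: "twist" opens the door far more often.
    p = {"push": 0.1, "twist": 0.8, "pull": 0.2}[action]
    return 1.0 if random.random() < p else 0.0

epsilon = 0.1                        # exploration rate
for _ in range(2000):
    if random.random() < epsilon:    # explore: try something random
        a = random.choice(actions)
    else:                            # exploit: use the best estimate
        a = max(q, key=q.get)
    r = reward(a)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]   # incremental mean update

print(max(q, key=q.get))             # the action the robot settles on
```

Early on the estimates are uninformative and the robot fumbles; after enough attempts the value estimates separate and it reliably picks the successful action, mirroring the fumbling-then-success pattern in Google's videos.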

Nudging these objects around a table, the robots build a model that helps them predict what might happen if they take a certain course of action.

Then, the researchers use a computer interface showing the test environment to tell the robots to move an object to a certain location.
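Once a robot has such a predictive model, commanding it to move an object somewhere reduces to a search over candidate actions. The sketch below assumes a toy linear push model in place of the learned one; the function names and dynamics are illustrative only.

```python
# Using a forward model to satisfy a command like "move the object
# to target T": score each candidate push by where the model predicts
# the object will end up, and pick the push that lands closest.

def predict(pos, push):
    # Assumed toy dynamics: the object moves by the push vector, damped.
    return (pos[0] + 0.8 * push[0], pos[1] + 0.8 * push[1])

def choose_push(pos, target, candidates):
    def dist_to_target(push):
        q = predict(pos, push)
        return (q[0] - target[0]) ** 2 + (q[1] - target[1]) ** 2
    return min(candidates, key=dist_to_target)

# A small grid of candidate pushes in the table plane.
candidates = [(dx * 0.5, dy * 0.5) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
pos, target = (0.0, 0.0), (0.4, -0.4)
best = choose_push(pos, target, candidates)
print(best)
```

This is the simplest form of model-based action selection: the better the predictive model, the better the chosen push.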

By employing this method, we may soon witness robots capable of learning tasks much more complicated than simply opening a door.

Google's next big step for AI: Getting robots to teach each other new skills

Imagine if you could get better at a skill not just by learning and practicing it, but by tapping directly into the experiences of others.

In several demonstration videos published on Tuesday, Google shows robots using shared experiences to learn rapidly how to push objects and open doors.

A central server also records the robots' actions, behaviors, and final outcomes, and uses those experiences to build a better neural network that helps the robots improve at the task.

As Google demonstrates in two videos, after 20 minutes of training its robotic arms were still fumbling around for the handle, but they eventually managed to open the door.

Here, Google is teaching its robots to build mental models of how things move in response to certain actions, by accumulating experience of where pixels end up on screen after taking a certain action.
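Learning such a model amounts to fitting predicted motion to observed motion. As a hedged sketch: the synthetic data and single scalar coefficient below stand in for the pixel-level convolutional network used in the research, but the fit-to-observed-outcomes idea is the same.

```python
# Fitting a forward model from logged experience: given examples of
# (position, push, observed next position), fit k in
#     next = pos + k * push
# by closed-form least squares, pooling both axes.

# Synthetic experience generated with a "true" coefficient of 0.8.
logged = [
    ((0.0, 0.0), (1.0, 0.0), (0.8, 0.0)),
    ((0.2, 0.1), (0.0, 1.0), (0.2, 0.9)),
    ((0.5, 0.5), (-1.0, 1.0), (-0.3, 1.3)),
]

# k = sum(push_i * (next_i - pos_i)) / sum(push_i^2)
num = den = 0.0
for pos, push, nxt in logged:
    for i in (0, 1):
        num += push[i] * (nxt[i] - pos[i])
        den += push[i] ** 2
k = num / den

def predict(pos, push):
    return (pos[0] + k * push[0], pos[1] + k * push[1])

print(round(k, 3))  # recovered coefficient, close to the true 0.8
```

With more robots contributing logged pushes, the pooled dataset grows faster and the fitted model converges sooner, which is exactly the benefit of sharing experience.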

Also, by changing the position of the door slightly with each attempt, the robots gradually improved at the task, becoming more versatile within a few hours.

MarI/O - Machine Learning for Video Games

MarI/O is a program made of neural networks and genetic algorithms that kicks butt at Super Mario World. Source Code: "NEAT" Paper:

Collective Robot Reinforcement Learning, Human Demonstration

Full video: Research paper: Abstract: In principle, reinforcement learning and policy search methods can enable robots to learn.

Brain prints reveal children's reading difficulties - Science Nation

Tease: New test uses brain's electrical activity to pinpoint reading challenges early, increasing chances for success in school Description: Children who have difficulty learning to read,...

Collective Robot Reinforcement Learning, Training Phase

Full video: Research paper: Abstract: In principle, reinforcement learning and policy search methods can enable robots to learn.

Collective Robot Reinforcement Learning, End Result

Full video: Research paper: Abstract: In principle, reinforcement learning and policy search methods can enable robots to learn.

What can we learn from Robot Athletes? | Jacky Baltes | TEDxUManitoba

This talk introduces research into humanoid robotics and gives a short introduction to robots' athletic competitions. These prestigious international competitions attract thousands of participants...

UC Berkeley Building Smarter Everyday Robots with Deep Learning

Explore Berkeley's robotic research in "reinforcement learning" and the increased potentials enabled by 10x faster computational speed with Berkeley professors Sergey Levine and Pieter Abbeel:...

RI Seminar: Sergey Levine : Deep Robotic Learning

Sergey Levine Assistant Professor, UC Berkeley April 07, 2017 Abstract Deep learning methods have provided us with remarkably powerful, flexible, and robust solutions in a wide range of passive...

Vivienne Ming, Co-Founder, Socos - RE.WORK Deep Learning Summit 2015

This presentation took place at the Deep Learning Summit in San Francisco on 29-30 January 2015. The elusive quest to identify..

3D Estimation and Fully Automated Learning of Eye-Hand Coordination in Humanoid Robots

This work deals with the problem of 3D estimation and eye-hand calibration in humanoid robots. Using the iCub humanoid robot, we developed a fully automatic procedure based on optimization...