
How Google Wants to Solve Robotic Grasping by Letting Robots Learn for Themselves

For roboticists who don’t want to wait through the equivalent of an entire robotic childhood, there are ways to streamline the process: at Google Research, they’ve set up more than a dozen robotic arms and let them work for months on picking up objects that are heavy, light, flat, large, small, rigid, soft, and translucent (although not all at once).

In robotics, this is referred to as visual servoing, and in addition to improving grasping accuracy, it makes grasping possible when objects are moving around or changing orientation during the grasping process, a very common thing to have happen in those pesky “real-world situations.” Teaching robots this skill can be tricky, because there aren’t necessarily obvious connections between sensor data and actions, especially if you have gobs of sensor data coming in all the time (like you do with vision systems).
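One way to picture the learning problem this describes is a network that scores a candidate motor command against the current camera image. The sketch below is a minimal illustration in that spirit, not Google’s actual architecture; the class name, layer sizes, and five-dimensional action encoding are all assumptions made here for clarity.

```python
# Minimal sketch (not Google's actual architecture) of a network that maps
# (camera image, candidate motor command) to a grasp-success probability.
import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):
    def __init__(self, action_dim=5):  # e.g. (x, y, z, wrist rotation, gripper)
        super().__init__()
        # Small CNN over the camera image.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fuse image features with the candidate command and output the
        # probability that executing it yields a successful grasp.
        self.head = nn.Sequential(
            nn.Linear(64 + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, image, action):
        feat = self.conv(image).flatten(1)                    # (B, 64)
        return self.head(torch.cat([feat, action], dim=1))    # (B, 1)
```

Servoing then amounts to repeatedly sampling candidate commands, scoring them with such a network, and executing the best one, which is what lets the robot react if the object moves mid-grasp.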

Cameras are positioned slightly differently, lighting is a bit different for each robot, and each of the compliant, underactuated two-finger grippers exhibits different types of wear, affecting performance.

[Image: The grippers of the robots used for data collection, at the end of the experiments.]

They’d also like to investigate how this method could be applied to “real world” robots that are “exposed to a wide variety of environments, objects, lighting conditions, and wear and tear.” For more info, we spoke with Sergey Levine at Google Research about what they’ve been working on:

IEEE Spectrum: Can you describe how your work is related to similar efforts, like Brown’s Million Object Challenge or UC Berkeley’s Dex-Net?

The volume is important for two reasons: (1) there are many possible geometric configurations of objects and grippers, and (2) additional data was always collected using the latest model, which was effective at picking out precisely those situations where the latest model was confident but incorrect, and therefore appending samples to the dataset that could improve the latest model further.
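In outline, that collection scheme looks something like the loop below. This is a hypothetical skeleton, with every name (`observe`, `propose_grasp`, `execute_and_check`, `train_fn`) invented here for illustration; the point is only that each round gathers attempts with the current model, so its most informative failures flow back into the training set.

```python
# Hypothetical skeleton of the iterative collection scheme described above.
def collect_and_retrain(model, robots, dataset, train_fn,
                        rounds=10, attempts=1000):
    for _ in range(rounds):
        for _ in range(attempts):
            image = robots.observe()
            action = model.propose_grasp(image)         # latest model acts
            success = robots.execute_and_check(action)  # did the gripper hold on?
            dataset.append((image, action, success))    # self-labeled sample
        model = train_fn(model, dataset)                # retrain on everything
    return model
```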

A practical application is likely to require more extensive training in a variety of environments, with a variety of backgrounds, and possibly in other settings (on shelves, in drawers, etc), as well as a mechanism for higher-level direction to choose what to grasp, perhaps by constraining the sampled motor commands to specific parts of the workspace.
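One simple way to provide that higher-level direction, as the sentence above suggests, is to sample candidate motor commands only inside a chosen sub-region of the workspace before scoring them. The snippet below is illustrative only; `score` stands in for a learned grasp-success predictor like the one sketched earlier.

```python
import numpy as np

# Illustrative only: restrict candidate grasp commands to a sub-region of the
# workspace (e.g. one tray or one shelf), then keep the candidate the model
# scores highest.
def sample_constrained_grasp(score, image, region_lo, region_hi, n=64):
    lo, hi = np.asarray(region_lo), np.asarray(region_hi)
    candidates = np.random.uniform(lo, hi, size=(n, lo.size))
    scores = [score(image, c) for c in candidates]
    return candidates[int(np.argmax(scores))]
```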

Ken Goldberg’s AUTOLAB Research on Robot Grasping featured in MIT Tech Review

Inside a brightly decorated lab at the University of California, Berkeley, an ordinary-looking robot has developed an exceptional knack for picking up awkward and unusual objects.

The work shows how new approaches to robot learning, combined with the ability for robots to access information through the cloud, could advance the capabilities of robots in factories and warehouses, and might even enable these machines to do useful work in new settings like hospitals and homes (see “10 Breakthrough Technologies 2017: Robots That Teach Each Other”).

“We’re very excited about this.”

Instead of practicing in the real world, the robot learned by feeding on a data set of more than a thousand objects that includes their 3-D shape, visual appearance, and the physics of grasping them.

“We can generate sufficient training data for deep neural networks in a day or so instead of running months of physical trials on a real robot,” says Jeff Mahler, a postdoctoral researcher who worked on the project.
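To make the simulated-labeling idea concrete, here is a toy illustration, emphatically not the Dex-Net pipeline itself, of how grasp labels can be generated without a physical robot: sample pairs of surface points on an object model and mark a two-finger grasp as promising when both contacts satisfy a friction-cone (antipodal) condition.

```python
import numpy as np

# Toy illustration (not the Dex-Net pipeline) of fast simulated labeling:
# sample pairs of surface points with outward normals and label a two-finger
# grasp as promising if the grasp axis lies inside the friction cone at both
# contacts (cone half-angle = arctan(mu) for friction coefficient mu).
def label_grasps(points, normals, mu=0.5, n_samples=10000):
    cos_cone = 1.0 / np.sqrt(1.0 + mu**2)   # cos(arctan(mu))
    labeled = []
    for _ in range(n_samples):
        i, j = np.random.randint(len(points), size=2)
        axis = points[j] - points[i]
        dist = np.linalg.norm(axis)
        if dist < 1e-6:
            continue
        axis /= dist
        # Antipodal check at both contacts (inward normal is -normals[k]).
        ok = (np.dot(axis, -normals[i]) > cos_cone and
              np.dot(-axis, -normals[j]) > cos_cone)
        labeled.append((points[i], points[j], ok))
    return labeled
```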

“This paper is exciting because it shows that a simulated data set can be used to train a model for grasping. And this model translates to real successes on a physical robot.”

Advances in control algorithms and machine-learning approaches, together with new hardware, are steadily building a foundation on which a new generation of robots will operate.

Manual dexterity played a critical role in the evolution of human intelligence, forming a virtuous feedback loop with sharper vision and increasing brain power.

Originally published May 25, 2017, in MIT Technology Review.

UC Berkeley’s AUTOLAB, directed by Professor Ken Goldberg, is a world-renowned center for research in robotics and automation sciences, with 30+ postdocs, PhD students, and undergraduates pursuing projects in Cloud Robotics, Deep Reinforcement Learning, Learning from Demonstrations, Computer Assisted Surgery, Automated Manufacturing, and New Media Artforms.

Large-scale data collection with an array of robots


Google robots: Google wants its robots to grasp things correctly. A row of fourteen robotic arms, arranged side by side, with trays holding a multitude of different objects to handle. And trying...

Watch 14 Robotic Arms Teach Each Other How to Grasp Objects

These robots are teaching each other.

Google Develops Robot Arms that Learn to Pick Up Objects

Google recently created robots that use deep neural networks to learn how to pick up objects and move them.

Robotic Grasp Planning by Learning

This video demonstrates our one-shot learning approach for grasping unknown objects.

Robotic Grasping of Moving Objects by dynamic re-planning

Dynamic Grasp and Trajectory Planning for Moving Objects.
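A common structure for this kind of dynamic re-planning is a receding-horizon loop: re-estimate the object’s pose every control cycle, predict a short way ahead, and re-plan toward the predicted intercept pose. The sketch below is hypothetical; `tracker`, `planner`, and the arm interface are invented names, and the constant-velocity prediction is the simplest possible motion model.

```python
# Hypothetical control loop for grasping a moving object: re-estimate the
# object state each cycle, predict an intercept pose one horizon ahead,
# and re-plan the arm trajectory toward it.
def track_and_grasp(tracker, planner, arm, horizon=0.5, dt=0.05):
    while not arm.at_grasp_pose():
        pose, velocity = tracker.estimate()       # current object state
        intercept = pose + velocity * horizon     # constant-velocity guess
        trajectory = planner.plan(arm.state(), intercept)
        arm.execute_for(trajectory, duration=dt)  # follow briefly, then re-plan
    arm.close_gripper()
```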

Robots learn Grasping by sharing their hand-eye coordination experience with each other | QPT

A human child is able to reliably grasp objects after one year, and takes around four years to acquire more sophisticated precision grasps. However, networked robots can instantaneously share...
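The sharing this alludes to can be pictured as a pooled experience buffer: each robot contributes its grasp outcomes, one model is trained on the combined data, and every robot receives the update. The sketch below is hypothetical, with invented names, and elides the synchronization a real deployment would need.

```python
# Illustrative sketch of pooled learning across networked robots.
shared_buffer = []  # grasp outcomes from *all* robots

def robot_step(robot, model):
    image = robot.observe()
    action = model.propose_grasp(image)
    success = robot.execute_and_check(action)
    shared_buffer.append((image, action, success))  # experience is pooled

def sync_round(robots, model, train_fn):
    for robot in robots:
        robot_step(robot, model)
    return train_fn(model, shared_buffer)  # one model, everyone's data
```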

Interactive disambiguation of object references for grasping tasks

Using a 3D scene segmentation [1] to yield object hypotheses that are subsequently labeled by a simple NN classifier, the robot system can talk about objects and their properties (color, size,...
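A minimal version of the disambiguation step might look like the following sketch, which is our illustration rather than the system shown in the video: filter the labeled object hypotheses by the attributes the user mentioned, and if several candidates remain, ask about a property that splits them.

```python
def resolve_reference(objects, requested):
    """Filter object hypotheses by requested attributes; ask if ambiguous."""
    matches = [o for o in objects
               if all(o.get(k) == v for k, v in requested.items())]
    if len(matches) == 1:
        return matches[0]                      # unique referent found
    for prop in ("color", "size"):             # look for a splitting property
        values = sorted({o[prop] for o in matches if prop in o})
        if len(values) > 1:
            return "Which one: the " + " or the ".join(values) + " one?"
    return None                                # cannot disambiguate

# Example: two red objects differ in size, so the system asks about size.
objects = [{"color": "red", "size": "small"}, {"color": "red", "size": "large"}]
print(resolve_reference(objects, {"color": "red"}))
# -> Which one: the large or the small one?
```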

MoveIt Simple Grasps Generation and Filtering Test

Run with the Baxter robot, this ROS package generates basic grasp approaches and filters them based on the kinematic feasibility of the planning group.
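The filtering step can be illustrated abstractly: keep only those candidate grasp poses for which the planning group’s inverse kinematics returns a solution. The snippet below is not the moveit_simple_grasps API; `solve_ik` is a hypothetical stand-in for whatever IK interface is available.

```python
# Idea behind kinematic-feasibility filtering: discard unreachable poses.
def filter_grasps(grasp_poses, solve_ik):
    """Keep only grasp poses the arm can actually reach."""
    feasible = []
    for pose in grasp_poses:
        joints = solve_ik(pose)          # returns None if pose is unreachable
        if joints is not None:
            feasible.append((pose, joints))
    return feasible
```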

Robotic manipulation and grasping