
Robot can pick up any object after inspecting it

More recently, breakthroughs in computer vision have enabled robots to make basic distinctions between objects, but even then, they don't truly understand objects' shapes, so there's little they can do after a quick pick-up.

In a new paper, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) say that they've made a key development in this area of work: a system that lets robots inspect random objects and visually understand them enough to accomplish specific tasks without ever having seen them before.

This approach lets robots better understand and manipulate items and, most importantly, even allows them to pick up a specific object from a clutter of similar objects -- a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses.

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” says PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow PhD student Pete Florence, alongside MIT professor Russ Tedrake.

Existing techniques fall into two camps, and both have obstacles: task-specific methods are difficult to generalize to other tasks, and general grasping doesn't get specific enough to deal with the nuances of particular tasks, like putting objects in specific spots.

In the future, the team hopes to improve the system to the point where it can perform specific tasks with a deeper understanding of the corresponding objects, like learning how to grasp an object and move it with the ultimate goal of, say, cleaning a desk.

MIT CSAIL uses AI to teach robots to manipulate objects they’ve never seen before

San Francisco-based startup OpenAI developed a model that directs mechanical hands to manipulate objects with state-of-the-art precision, and SoftBank Robotics recently tapped sentiment analysis firm Affectiva to imbue its Pepper robot with emotional intelligence.


In one round of training, the system learned a descriptor for hats after seeing only six different types. Furthermore, the descriptors remain consistent despite differences in object color, texture, and shape, which gives DON (Dense Object Nets) a leg up on models that rely on raw RGB or depth data alone.
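To make the descriptor idea concrete, here is a minimal PyTorch sketch of a dense-descriptor network: a fully convolutional model that maps every pixel of an RGB image to a D-dimensional descriptor. The architecture below is a toy stand-in, not the actual DON model, which builds on a much deeper, pretrained backbone.

```python
import torch
import torch.nn as nn

class DenseDescriptorNet(nn.Module):
    """Map an RGB image (B, 3, H, W) to per-pixel descriptors (B, D, H, W).

    Toy stand-in for a dense-descriptor network like DON; the real system
    uses a pretrained fully convolutional backbone.
    """
    def __init__(self, descriptor_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, descriptor_dim, kernel_size=1),  # 1x1 head -> D channels
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.net(rgb)  # descriptors keep the input's spatial size

model = DenseDescriptorNet(descriptor_dim=3)
image = torch.rand(1, 3, 480, 640)   # dummy RGB frame
descriptors = model(image)           # shape (1, 3, 480, 640)
```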

“But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly.” In tests, the team selected a pixel in a reference image and had the system autonomously identify the corresponding point in new scenes.
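Assuming, as in dense-descriptor methods generally, that correspondence is resolved by nearest-neighbor search in descriptor space, that pixel lookup can be sketched as follows (the function name and shapes are illustrative):

```python
import torch

def find_match(ref_desc: torch.Tensor, target_desc: torch.Tensor,
               u: int, v: int) -> tuple[int, int]:
    """Given a pixel (u, v) in the reference image, return the pixel in the
    target image whose descriptor is closest in L2 distance.

    ref_desc, target_desc: (D, H, W) descriptor maps from the same network.
    """
    d = ref_desc[:, v, u]                                      # descriptor at the chosen pixel
    dist = ((target_desc - d[:, None, None]) ** 2).sum(dim=0)  # (H, W) distance map
    idx = torch.argmin(dist)                                   # flat index of best match
    h, w = dist.shape
    return int(idx % w), int(idx // w)                         # (u, v) of the match
```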

“Many of these objects were just grabbed from around the lab (including the authors’ and labmates’ shoes and hats), and we have been impressed with the variety of objects for which consistent dense visual models can be reliably learned with the same network architecture and training.”

“We are interested to explore new approaches to solving manipulation problems that exploit the dense visual information that learned dense descriptors provide and [to see] how these dense descriptors can benefit other types of robot learning, e.g. ...”

Robotics: Science and Systems XII

First, we explore the contextual relationship between grasp types and object attributes, and show how that context can be used to boost the recognition of both grasp types and object attributes.

Second, we propose to model actions with grasp types and object attributes based on the hypothesis that grasp types and object attributes contain complementary information for characterizing different actions.

Our proposed action model outperforms traditional appearance-based models, which are not designed to take into account semantic constraints such as grasp types or object attributes.
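As a toy illustration of that hypothesis (all names and data below are hypothetical, not the paper's setup), one can concatenate grasp-type and object-attribute predictions into a single feature vector so a simple action classifier sees both complementary cues at once:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-clip predictions from upstream recognizers:
# grasp-type probabilities (e.g. power/precision/lateral) and
# object-attribute probabilities (e.g. rigid/deformable/small/flat).
rng = np.random.default_rng(0)
grasp_probs = rng.dirichlet(np.ones(3), size=200)   # (200, 3)
attr_probs = rng.dirichlet(np.ones(4), size=200)    # (200, 4)
actions = rng.integers(0, 5, size=200)              # 5 action classes

# Complementary-cue model: the classifier is trained on both cues jointly.
features = np.hstack([grasp_probs, attr_probs])     # (200, 7)
clf = LogisticRegression(max_iter=1000).fit(features, actions)
print(clf.score(features, actions))                 # training accuracy on toy data
```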

New MIT Robot Can Delicately Handle Objects It’s Never Seen Before

Robots in factories are really good at picking up objects they’ve been pre-programmed to handle, but it’s a different story when new objects are thrown into the mix.

General grasping, on the other hand, allows robots to handle objects of varying shapes and sizes, but at the cost of being unable to perform more complicated and nuanced tasks.

If we’re ever going to develop robots that can clean out a garage or sort through a cluttered kitchen, we’re going to need machines that can teach themselves about the world and all the stuff that’s in it.

The DON neural network generates an internal impression, or visual roadmap, of an object following a brief visual inspection (typically around 20 minutes).
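That inspection phase amounts to self-supervised training: the robot gathers many views of the object, uses depth and known camera poses to find pixel pairs that correspond across views, and trains the network to pull matching descriptors together while pushing non-matches apart. A sketch of such a pixelwise contrastive loss follows; the names and margin value are illustrative, not the paper's exact formulation:

```python
import torch

def pixelwise_contrastive_loss(desc_a, desc_b, matches_a, matches_b,
                               nonmatches_a, nonmatches_b, margin=0.5):
    """desc_a, desc_b: (D, H, W) descriptor maps of two views of the object.
    matches_*: (N, 2) long tensors of (u, v) pixels known to correspond.
    nonmatches_*: (M, 2) long tensors of pixels known NOT to correspond."""
    def gather(desc, px):
        return desc[:, px[:, 1], px[:, 0]].t()   # (N, D) descriptors at pixels

    d_match = (gather(desc_a, matches_a) - gather(desc_b, matches_b)).norm(dim=1)
    d_non = (gather(desc_a, nonmatches_a) - gather(desc_b, nonmatches_b)).norm(dim=1)
    # Pull matches together; push non-matches at least `margin` apart.
    return (d_match ** 2).mean() + torch.clamp(margin - d_non, min=0).pow(2).mean()
```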

In the future, a more sophisticated version of DON could be used in a variety of settings, such as collecting and sorting objects at warehouses, working in dangerous settings, and performing odd clean-up tasks in homes and offices.

Researchers have been working on computer vision for the better part of four decades, but this new approach, in which a neural net teaches itself to understand the 3D shape of an object, seems particularly fruitful.

Robots learn Grasping by sharing their hand-eye coordination experience with each other | QPT

A human child is able to reliably grasp objects after one year, and takes around four years to acquire more sophisticated precision grasps. However, networked ...

Intelligent Task-Oriented Grasping | R3 Roboy's Research Reviews #1

In this video we discuss the paper “Task-oriented Grasping with Semantic and Geometric Scene Understanding” by Renaud Detry, Jeremie Papon, Larry ...

Interactive disambiguation of object references for grasping tasks

Using a 3D scene segmentation [1] to yield object hypotheses that are subsequently labeled by a simple NN classifier, the robot system can talk about objects ...

[ROS Tutorials] ROS Perception #Unit3: Object Recognition and flat surfaces

Here you will learn how to recognise objects and flat surfaces in ROS. One of the most useful perception skills is being able to recognize objects. This allows you ...
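Recognizing a flat surface typically reduces to fitting a dominant plane to a point cloud. The tutorial itself uses packaged ROS perception nodes, but a minimal RANSAC plane fit in plain NumPy conveys the core idea:

```python
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 200, thresh: float = 0.01):
    """Fit a dominant plane to an (N, 3) point cloud with RANSAC.
    Returns ((normal, d), inlier_mask) for the plane normal . p + d = 0."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                             # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p1
        inliers = np.abs(points @ n + d) < thresh   # points within `thresh` meters
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```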

Rigid 3D Geometry Matching for Grasping of Known Objects in Cluttered Scenes

Rigid 3D Geometry Matching for Grasping of Known Objects in Cluttered Scenes by Chavdar Papazov, Sami Haddadin, Sven Parusel and Darius Burschka.
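Matching a known rigid model into a cluttered scene is a point-cloud registration problem. As a stand-in sketch, the snippet below refines a model-to-scene pose with Open3D's ICP; the file names are placeholders, and the paper's own method is a RANSAC-style 3D geometry matcher rather than plain ICP:

```python
import numpy as np
import open3d as o3d

# Placeholder files: a scan/CAD model of the known object and a scene cloud.
model = o3d.io.read_point_cloud("object_model.pcd")
scene = o3d.io.read_point_cloud("cluttered_scene.pcd")

# Refine an initial pose guess (identity here) with point-to-point ICP.
result = o3d.pipelines.registration.registration_icp(
    model, scene,
    max_correspondence_distance=0.02,   # 2 cm matching radius
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(result.transformation)            # 4x4 model-to-scene transform
```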

Order picking of unknown items by robots

Deep learning for vision-guided robotics. Fizyr will bring your automation to the next level. Our deep learning algorithm adds a layer of understanding, bringing ...

Robots can now pick up any object after inspecting it

Breakthrough CSAIL system suggests robots could one day be able to see well enough to be useful in people's homes and offices. With the DON system, a robot ...

Scene Representation and Object Grasping using Active Vision

Object grasping and manipulation pose major challenges for perception and control and require rich interaction between these two fields. In this video, we ...

iCub learns objects

This video demonstrates an architecture allowing the humanoid robot iCub to learn new objects presented to it. This is done by adding the concept of object to an ...

Transferring Object Grasping Skills and Knowledge Across Different Robotic Platforms

This video demonstrates the transfer of object grasping skills between two different humanoid robots, iCub and ARMAR-III, with different software frameworks.