Robots can now pick up any object after inspecting it

  In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) say that they’ve made a key development in this area of work: a system that lets robots inspect random objects and visually understand them enough to accomplish specific tasks without ever having seen them before.

This approach lets robots better understand and manipulate items and, most importantly, allows them to pick up a specific object from a clutter of similar ones — a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses.

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” says PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow PhD student Pete Florence, alongside MIT Professor Russ Tedrake.

Imagine a child at 18 months old who doesn’t understand which toy you want it to play with but can still grab lots of items, versus a four-year-old who can respond to “go grab your truck by the red end of it.” In one set of tests done on a soft caterpillar toy, a Kuka robotic arm powered by DON (Dense Object Nets) could grasp the toy’s right ear from a range of different configurations.

“But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly.” In the future, the team hopes to improve the system to a place where it can perform specific tasks with a deeper understanding of the corresponding objects, like learning how to grasp an object and move it with the ultimate goal of, say, cleaning a desk.

New MIT Robot Can Delicately Handle Objects It’s Never Seen Before

Robots in factories are really good at picking up objects they’ve been pre-programmed to handle, but it’s a different story when new objects are thrown into the mix.

General grasping, on the other hand, allows robots to handle objects of varying shapes and sizes, but at the cost of being unable to perform more complicated and nuanced tasks.

If we’re ever going to develop robots that can clean out a garage or sort through a cluttered kitchen, we’re going to need machines that can teach themselves about the world and all the stuff that’s in it.

This neural network generates an internal impression, or visual roadmap, of an object following a brief visual inspection (typically around 20 minutes).
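
In the underlying paper, that inspection amounts to self-supervised training of a network that maps every pixel to a descriptor vector, using a pixelwise contrastive loss over pixel pairs that are known, from depth images and camera poses, to be the same or different points on the object. The following is a minimal, illustrative PyTorch-style sketch of that loss under those assumptions; the function name, tensor layout, and margin value are ours, not the authors’ code.

```python
import torch
import torch.nn.functional as F

def pixelwise_contrastive_loss(desc_a, desc_b,
                               match_idx_a, match_idx_b,
                               non_match_idx_a, non_match_idx_b,
                               margin=0.5):
    # desc_a, desc_b: (H*W, D) dense descriptor maps for two views of the
    # same object, flattened so an integer index selects one pixel's
    # D-dimensional descriptor.
    # match_idx_*: indices of pixel pairs known (from depth + camera pose)
    # to be the same physical point on the object.
    # non_match_idx_*: indices of pixel pairs known to be different points.
    # margin: hinge margin pushing non-match descriptors apart
    # (illustrative value, an assumption here).

    # Pull descriptors of matching pixels together (squared L2 distance).
    match_dist = (desc_a[match_idx_a] - desc_b[match_idx_b]).pow(2).sum(dim=1)
    match_loss = match_dist.mean()

    # Push descriptors of non-matching pixels at least `margin` apart.
    non_match_dist = (desc_a[non_match_idx_a] -
                      desc_b[non_match_idx_b]).pow(2).sum(dim=1).sqrt()
    non_match_loss = F.relu(margin - non_match_dist).pow(2).mean()

    return match_loss + non_match_loss
```

The key point is that the match and non-match pairs come for free from the robot’s own camera and depth data, which is what makes the 20-minute inspection self-supervised: no human labels the object.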

In the future, a more sophisticated version of DON could be used in a variety of settings, such as collecting and sorting objects at warehouses, working in dangerous settings, and performing odd clean-up tasks in homes and offices.

Researchers have been working on computer vision for the better part of four decades, but this new approach, in which a neural net teaches itself to understand the 3D shape of an object, seems particularly fruitful.

MIT CSAIL uses AI to teach robots to manipulate objects they’ve never seen before

San Francisco-based startup OpenAI developed a model that directs mechanical hands to manipulate objects with state-of-the-art precision, and SoftBank Robotics recently tapped sentiment analysis firm Affectiva to imbue its Pepper robot with emotional intelligence.

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” PhD student Lucas Manuelli, a lead author on the paper, said in a blog post published on MIT CSAIL’s website.

(In one round of training, the system learned a descriptor for hats after seeing only six different types.) Furthermore, the descriptors remain consistent despite differences in object color, texture, and shape, which gives DON a leg up on models that use RGB or depth data.

“But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly.” In tests, the team selected a pixel in a reference image for the system to autonomously identify.
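
Once the trained network has produced descriptor maps for both images, that test reduces to a nearest-neighbour search in descriptor space. Here is a minimal NumPy sketch of the lookup; the names and array shapes are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def find_correspondence(ref_desc, ref_pixel, target_desc):
    # ref_desc, target_desc: (H, W, D) dense descriptor maps produced by
    # the trained network for the reference and target images.
    # ref_pixel: (row, col) of the user-selected pixel in the reference image.
    query = ref_desc[ref_pixel[0], ref_pixel[1]]            # (D,)
    dist = np.linalg.norm(target_desc - query, axis=2)      # (H, W)
    # The best match is the pixel whose descriptor is closest to the query.
    return np.unravel_index(np.argmin(dist), dist.shape)
```

Because the descriptors are consistent across viewpoints, a point chosen this way stays attached to the same physical spot — the caterpillar’s right ear, say — even as the object is moved or reoriented.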

“Many of these objects were just grabbed from around the lab (including the authors’ and labmates’ shoes and hats), and we have been impressed with the variety of objects for which consistent dense visual models can be reliably learned with the same network architecture and training.”

“We are interested to explore new approaches to solving manipulation problems that exploit the dense visual information that learned dense descriptors provide and [to see] how these dense descriptors can benefit other types of robot learning, e.g. […]”

[ROS Tutorials] ROS Perception #Unit3: Object Recognition and flat surfaces

Here you will learn how to recognise objects and flat surfaces in ROS. One of the most useful perception skills is being able to recognize objects. This allows you ...

Robots learn Grasping by sharing their hand-eye coordination experience with each other | QPT

A human child is able to reliably grasp objects after one year, and takes around four years to acquire more sophisticated precision grasps. However, networked ...

Robots can now pick up any object after inspecting it

Breakthrough CSAIL system suggests robots could one day be able to see well enough to be useful in people's homes and offices. With the DON system, a robot ...

Object Recognition and Retrieval by an Articulated Robotic Arm using DCNN

An Articulated Robotic Arm capable of learning new objects in a fast and robust manner and performing near real-time object retrieval under robust ...

One-shot learning and generation of dexterous grasps for novel objects

This is the full video explaining the results of our paper on learning of a grasp type from a single shot. The technique is able to then transfer the grasps to novel ...

Intelligent Task-Oriented Grasping | R3 Roboy's Research Reviews #1

In this video we discuss the paper “Task-oriented Grasping with Semantic and Geometric Scene Understanding” by Renaud Detry, Jeremie Papon, Larry ...

Interactive disambiguation of object references for grasping tasks

Using a 3D scene segmentation [1] to yield object hypotheses that are subsequently labeled by a simple NN classifier, the robot system can talk about objects ...

Transferring Object Grasping Skills and Knowledge Across Different Robotic Platforms

This video demonstrates the transfer of object grasping skills between two different humanoid robots, iCub and ARMAR-III, with different software frameworks.

A Brief History of Robotics

Why don't we have robots taking care of our every need by now? A little history of the field of robotics might help you understand how hard it is to get machines to ...

Robot In a Room: Toward Perfect Object Recognition in Closed Environments

While general object recognition is still far from being solved, this paper proposes a way for a robot to recognize every object at ...