AI News, New interface allows more efficient, faster technique to remotely operate robots

For someone who isn't an expert, however, the traditional ring-and-arrow control system is cumbersome and error-prone.

'Instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we've shortened the process to just two clicks,' said Sonia Chernova, the Georgia Tech assistant professor in robotics who advised the research effort.

Her team tested college students on both systems, and found that the point-and-click method resulted in significantly fewer errors, allowing participants to perform tasks more quickly and reliably than using the traditional method.

In the traditional interface, the second view is a 3-D, interactive one in which the user adjusts the virtual gripper and tells the robot exactly where to go and grab.

After a person clicks on a region of an item, the robot's perception algorithm analyzes the object's 3-D surface geometry to determine where the gripper should be placed.

'The robot can analyze the geometry of shapes, including making assumptions about small regions where the camera can't see, such as the back of a bottle,' said Chernova.
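The article doesn't publish the team's algorithm, but the idea of turning a click into a grasp pose by analyzing local 3-D surface geometry can be sketched roughly. In this hypothetical example (the function name, parameters, and the PCA-based surface fit are all assumptions, not the researchers' actual method), the points near the clicked location are fit with a local plane, and the gripper approaches along the estimated surface normal:

```python
import numpy as np

def grasp_pose_from_click(points, click, radius=0.02):
    """Hypothetical sketch: derive a gripper approach pose from the
    3-D points near a user's click on an object's surface.

    points : (N, 3) array, object point cloud in metres
    click  : (3,) array, the clicked surface point
    radius : neighbourhood radius used for the local surface fit
    """
    # Gather the points within `radius` of the clicked location.
    nearby = points[np.linalg.norm(points - click, axis=1) < radius]
    # PCA on the local patch: the right-singular vector with the
    # smallest singular value approximates the surface normal.
    centered = nearby - nearby.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Orient the normal away from the cloud's centroid so the
    # gripper approaches from outside the object.
    if np.dot(normal, click - points.mean(axis=0)) < 0:
        normal = -normal
    # The fingers close along the direction of greatest local
    # spread (first principal axis); approach is along -normal.
    return {"position": click, "approach": -normal, "close_axis": vt[0]}
```

For a flat patch of points, the estimated approach direction comes out perpendicular to the patch, which is the behaviour a single click relies on.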

The point-and-click method also resulted in approximately one mistake per task, compared to nearly four for the ring-and-arrow technique.

New Interface Helps to Effectively Operate Robots

A new, simpler and more efficient interface for controlling robots allows non-experts to operate them without significant training time.

“Instead of a series of rotations, lowering and raising arrows, adjusting the grip and guessing the correct depth of field, we've shortened the process to just two clicks,” Sonia Chernova, the Georgia Tech assistant professor in robotics and advisor to the research team, said in a statement.

The researchers had college students test both the traditional method and the new point-and-click program and found that the new system resulted in significantly fewer errors and allowed participants to perform tasks more quickly and reliably.

Because the new method doesn't require 3D mapping, it provides only a single camera view; the robot's perception algorithm instead analyzes an object's 3D surface geometry to determine where the gripper should be placed.

Robot Fetches Objects With Just A Point And A Click

A team of researchers led by Charlie Kemp, director of the Center for Healthcare Robotics in the Health Systems Institute at the Georgia Institute of Technology and Emory University, has found a way to instruct a robot to find and deliver an item it may have never seen before using a more direct manner of communication: a laser pointer.

El-E (pronounced like the name Ellie), a robot designed to help users with limited mobility with everyday tasks, autonomously moves to an item selected with a green laser pointer, picks up the item and then delivers it to the user, another person or a selected location such as a table.

El-E, named for her ability to elevate her arm and for the arm’s resemblance to an elephant trunk, can grasp and deliver several types of household items including towels, pill bottles and telephones from floors or tables.

The verbal instructions a person gives to help someone find a desired object ("the cup over near the couch" or "the brush next to the red toothbrush") are very difficult for a robot to use.

These types of commands require the robot to understand everyday human language and the objects it describes at a level well beyond the state of the art in language recognition and object perception.

The laser pointer interface and methods developed by Kemp's team overcome this challenge by providing a direct way for people to communicate the location of interest to El-E, along with complementary methods that enable El-E to pick up an object found at this location.
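The article doesn't describe how El-E detects the laser spot, but a minimal, purely illustrative version of the idea is easy to sketch: scan a camera frame for the pixel where green most dominates the other channels. Everything here (the function name, the scoring rule, the acceptance threshold) is an assumption for illustration, not the team's actual detector:

```python
import numpy as np

def find_laser_spot(image, min_score=100):
    """Hypothetical sketch: locate a bright green laser dot in an RGB
    frame as the pixel where green most exceeds the other channels.

    image : (H, W, 3) uint8 RGB array
    returns (row, col) of the strongest candidate, or None
    """
    img = image.astype(np.int32)
    # Score each pixel by how much green dominates red and blue.
    score = 2 * img[..., 1] - img[..., 0] - img[..., 2]
    r, c = np.unravel_index(np.argmax(score), score.shape)
    # Require clear green dominance before accepting a detection,
    # so a uniformly lit scene yields no spurious spot.
    if score[r, c] < min_score:
        return None
    return (int(r), int(c))
```

A real system would need to be far more robust (temporal filtering, camera exposure control, projection of the pixel into 3-D), but the sketch shows why a laser dot is so much easier for a robot to localize than a verbally described object.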

“If you want a robot to cook a meal or brush your hair, you will probably want the robot to first fetch the items it will need, and for tasks such as cleaning up around the home, it is essential that the robot be able to pick up objects and move them to new locations,” said Kemp.

The Georgia Tech and Emory research team is now working to expand El-E's capabilities to include switching lights on and off when the user selects a light switch and opening and closing doors when the user selects a door knob.

Point-and-Click Method Makes Robot Grasping Control Less Tedious

For most grasping tasks, when a robot needs help, it means that a human needs to manually position every single degree of freedom of the gripper while squinting at some low-resolution 3D point cloud.

Between the full manual and point-and-click grasping approaches shown in the above video, the researchers, from Georgia Tech's Robot Autonomy and Interactive Learning (RAIL) Lab, led by Professor Sonia Chernova, also implemented a middle-ground "constrained positioning" method, which intelligently limits the number of degrees of freedom that a user needs to position.
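The article doesn't spell out which degrees of freedom "constrained positioning" removes, but one plausible illustration (an assumption, not the RAIL Lab's implementation) is to fix the gripper's approach direction to the object's surface normal, leaving the user only a single roll angle about that axis instead of three full rotations:

```python
import numpy as np

def constrained_gripper_frame(surface_normal, roll):
    """Hypothetical sketch of "constrained positioning": rather than
    letting the user set all six gripper degrees of freedom, fix the
    approach direction to the object's surface normal and expose only
    one rotation (roll about the approach axis).

    surface_normal : (3,) unit vector pointing out of the surface
    roll           : user-controlled rotation (radians) about approach
    returns a 3x3 rotation matrix whose z-axis points into the surface
    """
    z = -np.asarray(surface_normal, dtype=float)  # approach into surface
    z /= np.linalg.norm(z)
    # Seed an orthonormal frame with any vector not parallel to z.
    seed = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(seed, z)) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    x = np.cross(seed, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    # Apply the single user-controlled degree of freedom: roll about z.
    c, s = np.cos(roll), np.sin(roll)
    x_r = c * x + s * y
    y_r = -s * x + c * y
    return np.column_stack([x_r, y_r, z])
```

The user then positions the gripper in 3-D and turns one dial, four inputs instead of six, which is the kind of reduction a constrained interface trades against full manual control.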

In other words, it needs basic depth data in order to help you out with grasping, but it doesn’t need to understand what it’s looking at, making it easy to scale up and deploy to new environments without training.

A study of non-expert users showed that the point-and-click interface was the most effective, helping users to "complete a greater number of tasks more quickly, complete tasks more consistently, and make fewer mistakes," as the RAIL Lab researchers explain.
