
Researchers help robots 'think' and plan in the abstract

Brown University

Researchers from Brown University and MIT have developed a method for helping robots plan multi-step tasks by constructing abstract representations of the world around them.

A robot’s perception of the world consists of nothing more than the vast array of pixels collected by its cameras, and its ability to act is limited to setting the positions of the individual motors that control its joints and grippers.

“Imagine how hard it would be to plan something as simple as a trip to the grocery store if you had to think about each and every muscle you’d flex to get there, and imagine in advance and in detail the terabytes of visual data that would pass through your retinas along the way,” said George Konidaris, an assistant professor of computer science at Brown and lead author of the new study.

When we see demonstrations of robots planning for and performing multistep tasks, “it’s almost always the case that a programmer has explicitly told the robot how to think about the world in order for it to make a plan,” Konidaris said. “But if we want robots that can act more autonomously, they’re going to need the ability to learn abstractions on their own.”

In computer science terms, these kinds of abstractions fall into two categories: “procedural abstractions” and “perceptual abstractions.” Procedural abstractions are programs made out of low-level movements composed into higher-level skills. An example would be bundling all the little movements needed to open a door (reaching for the knob, turning it and pulling the door open) into a single “open the door” skill. There has been less progress in perceptual abstraction, which lets a robot make sense of its pixelated surroundings, and that is the focus of this new research.

“Our work shows that once a robot has high-level motor skills, it can automatically construct a compatible high-level symbolic representation of the world, one that is provably suitable for planning using those skills,” Konidaris said.

Learning abstract states of the world

For the study, the researchers introduced a robot named Anathema Device (or Ana, for short) to a room containing a cupboard, a cooler, a switch that controls a light inside the cupboard, and a bottle that could be left in either the cooler or the cupboard. They gave Ana a set of high-level motor skills for manipulating the objects in the room: opening and closing both the cooler and the cupboard, flipping the switch and picking up a bottle.

Ana learned the abstract states of the world that make each of her skills possible. For example, she learned the configuration of pixels in her visual field associated with the cooler lid being closed, which is the only configuration in which it is possible to open it. “And she was able to learn them just by executing her skills and seeing what happens.”

Planning in the abstract

Once Ana was armed with her learned abstract representation, the researchers asked her to do something that required some planning: take the bottle from the cooler and put it in the cupboard. Accepting the challenge, Ana navigated to the cooler. There she saw that the light switch was in the “on” position, and she realized that opening the cupboard would block the switch, so she turned the switch off before opening the cupboard, then returned to the cooler, retrieved the bottle, and finally placed it in the cupboard.
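The planning step described above can be sketched as a tiny symbolic planner. Everything in this sketch is illustrative: the propositions and operators are hand-written stand-ins for the representations Ana learns on her own in the study, and the goal (bottle in the cupboard with the switch off) is an assumption based on the story above.

```python
from collections import deque

# A state is a frozenset of the propositions currently true.
init = frozenset({"switch_on", "bottle_in_cooler"})
goal_pos = {"bottle_in_cupboard"}   # must be true at the end
goal_neg = {"switch_on"}            # must be false at the end

# Each operator: (name, must-be-true, must-be-false, adds, deletes).
# Note flip_switch_off requires the cupboard closed, since the open
# door blocks access to the switch.
ops = [
    ("flip_switch_off", {"switch_on"}, {"cupboard_open"}, set(), {"switch_on"}),
    ("open_cupboard", set(), {"cupboard_open"}, {"cupboard_open"}, set()),
    ("open_cooler", set(), {"cooler_open"}, {"cooler_open"}, set()),
    ("pick_bottle", {"cooler_open", "bottle_in_cooler"}, {"holding_bottle"},
     {"holding_bottle"}, {"bottle_in_cooler"}),
    ("place_in_cupboard", {"cupboard_open", "holding_bottle"}, set(),
     {"bottle_in_cupboard"}, {"holding_bottle"}),
]

def plan(state):
    """Breadth-first search over abstract states; returns a shortest plan."""
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        s, steps = frontier.popleft()
        if goal_pos <= s and not (goal_neg & s):
            return steps
        for name, pre, pre_not, add, delete in ops:
            if pre <= s and not (pre_not & s):
                nxt = frozenset((s - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan(init))
```

Because the open cupboard door blocks the switch and no operator closes it again, any shortest plan the search finds flips the switch off before opening the cupboard, just as Ana did.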

Teaching robots to understand their world through basic motor skills

To help robots plan complex multi-step tasks, robots can construct two kinds of abstract representations of the world around them, say Brown University and MIT researchers.

Building truly intelligent robots

According to George Konidaris, Ph.D., an assistant professor of computer science at Brown and the lead author of the new study, there has been less progress in perceptual abstraction, the ability of a robot to make sense of its pixelated surroundings on its own.

The researchers started by teaching a two-armed robot (Anathema Device, or “Ana”) “procedural abstractions” in a room containing a cupboard, a cooler, a switch that controls a light inside the cupboard, and a bottle that could be left in either the cooler or the cupboard. They gave Ana a set of high-level motor skills for manipulating the objects in the room, such as opening and closing both the cooler and the cupboard, flipping the switch, and picking up a bottle.

“She learned these abstractions on her own.” Once a robot has high-level motor skills, it can automatically construct a compatible high-level symbolic representation of the world by making sense of its pixelated surroundings, according to Konidaris. “She also learned that in order to turn the light off, the cupboard door needed to be closed, because the open door blocked her access to the switch.” Once processed, the robot associates a symbol with one of these abstract concepts.

Once Ana was armed with these learned abstract procedures and perceptions, the researchers gave her a challenge: “Take the bottle from the cooler and put it in the cupboard.” Accepting the challenge, Ana navigated to the cooler and planned out the necessary steps on her own.

This kind of adaptive quality means robots could become far more capable of performing a greater variety of tasks in more diverse environments, choosing the actions they need to perform in a given scenario. “We have to be able to give them goals and have them generate behavior on their own.” Of course, having every robot learn this way from scratch would be inefficient, so the researchers believe they can develop a common language and create skills that could be downloaded to new hardware.
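A procedural abstraction of the kind described above can be sketched as a named skill that bundles low-level motor primitives and is guarded by a precondition on what the robot observes (standing in for the learned "lid closed" pixel configuration). All function and object names here are hypothetical illustrations, not from the study.

```python
# Stand-ins for low-level motor commands; a real robot would drive motors here.
def move_arm_to(target):
    return f"move_arm_to({target})"

def grasp():
    return "grasp()"

def pull(distance):
    return f"pull({distance})"

class Skill:
    """A high-level skill: ordered primitives plus an applicability test."""
    def __init__(self, name, primitives, precondition):
        self.name = name
        self.primitives = primitives      # ordered low-level commands
        self.precondition = precondition  # learned test on observations

    def execute(self, observation):
        if not self.precondition(observation):
            return None                   # skill not applicable in this state
        return [step() for step in self.primitives]

# "Open the cooler" = reach for the lid, grasp it, pull it open,
# applicable only when the lid is observed to be closed.
open_cooler = Skill(
    "open_cooler",
    [lambda: move_arm_to("cooler_lid"), grasp, lambda: pull(0.3)],
    precondition=lambda obs: obs.get("cooler_lid") == "closed",
)

print(open_cooler.execute({"cooler_lid": "closed"}))
# → ['move_arm_to(cooler_lid)', 'grasp()', 'pull(0.3)']
```

The symbol the planner manipulates ("cooler lid closed") is exactly the precondition of this skill, which is why a symbolic representation built from skills is, as the article puts it, provably suitable for planning with them.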

