AI News

Model helps robots navigate more like humans do

MIT researchers have now devised a way to help robots navigate environments more like humans do.

Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they've learned before in similar situations.

A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints.

'But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents,' says co-author Andrei Barbu, a researcher at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT's McGovern Institute.

The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot's movement in an environment.

In their paper, 'Deep sequential models for sampling-based planning,' the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents.

Their model is a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT. The planner creates a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next.
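The guidance idea in this excerpt — an RRT whose random sampling is biased by a learned model's predictions — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `score` function stands in for the paper's neural network, and all parameters are invented.

```python
import math
import random

def neural_guided_rrt(start, goal, score, n_iters=2000, step=0.5, goal_tol=0.5):
    """RRT where candidate samples are ranked by a learned scorer.

    `score(p)` stands in for a trained neural network: it returns a
    higher value for points it judges more likely to lie on a good path.
    """
    random.seed(0)
    nodes = [start]
    parent = {0: None}
    for _ in range(n_iters):
        # Draw several random samples and keep the one the "network" prefers.
        candidates = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(5)]
        sample = max(candidates, key=score)
        # Standard RRT extension: step from the nearest tree node toward the sample.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d > step:
            new = (near[0] + step * (sample[0] - near[0]) / d,
                   near[1] + step * (sample[1] - near[1]) / d)
        else:
            new = sample
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

# A trivial stand-in "network": prefer points closer to the goal.
goal = (9.0, 9.0)
path = neural_guided_rrt((1.0, 1.0), goal, lambda p: -math.dist(p, goal))
```

Swapping the lambda for a real learned model — one that has seen dead ends and narrow passages — is what lets the tree avoid wasting samples in unpromising regions.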

For example, the researchers demonstrated the model in a simulation known as a 'bug trap,' where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room.

The neural network helps the robot find the exit of the trap, identify dead ends, and gain a sense of its surroundings so it can quickly reach the goal.

Results in the paper are based on three measures: the probability that a path is found within a given time, the total length of the path that reached the goal, and how consistent the paths were across runs.
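The three reported quantities can be computed from repeated planning trials. A hypothetical sketch (the trial data below is invented, not from the paper):

```python
import statistics

def planner_metrics(trials):
    """Summarise repeated planning trials.

    Each trial is (path_found: bool, path_length: float or None).
    Returns the success rate, the mean length of successful paths, and
    the standard deviation of those lengths as a consistency measure.
    """
    lengths = [length for found, length in trials if found]
    success_rate = len(lengths) / len(trials)
    mean_len = statistics.mean(lengths) if lengths else float("nan")
    spread = statistics.stdev(lengths) if len(lengths) > 1 else 0.0
    return success_rate, mean_len, spread

# Four hypothetical trials: three successes, one failure.
rate, mean_len, spread = planner_metrics(
    [(True, 12.0), (True, 13.0), (False, None), (True, 12.5)])
```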

Working with multiple agents

In another experiment, the researchers trained and tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially when navigating intersections and roundabouts.

Principles of goal-directed spatial robot navigation in biomimetic models

Where roboticists explicitly design methods for constructing world representations suitable for navigation, biologists must infer the nature of these representations in animals by observing animal behaviour and, in some cases, neural activity.

Given the preponderance of occupancy grid-based representations in robotics, derived largely from decidedly biologically implausible laser range finders, it is worth querying whether animals use occupancy grid-like representations to perform navigation.

Differentiating representations from behaviour can be challenging: for example, we know that bats are capable of precise navigation around obstacles in flight [82], but is this a result of pure sensory-action loops or some internal metric representation of three-dimensional space?

Perhaps the most suggestive evidence that animals encode the world in a manner similar to robotic occupancy grid maps comes from rodents, and specifically from boundary vector cells (BVCs), which fire at a certain range and relative orientation from an environmental boundary, and from border cells, which may add directionally specific, short-range boundary detection [83].
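A common way to model a boundary vector cell's response — not taken from [83], just a standard illustrative choice — is Gaussian tuning over both the distance and the bearing to a boundary, peaking when the boundary sits at the cell's preferred range and orientation:

```python
import math

def bvc_rate(d, theta, d_pref, theta_pref, sigma_d=0.2, sigma_theta=0.3):
    """Idealised boundary vector cell firing rate.

    Peaks (at 1.0) when a boundary lies at the preferred distance
    d_pref and bearing theta_pref (radians); falls off as a Gaussian
    in both dimensions. Tuning widths here are illustrative, not
    fitted to rodent data.
    """
    # Wrap the angular difference into (-pi, pi] before applying tuning.
    dtheta = math.atan2(math.sin(theta - theta_pref), math.cos(theta - theta_pref))
    return (math.exp(-((d - d_pref) ** 2) / (2 * sigma_d ** 2))
            * math.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2)))
```

A population of such cells with different preferred ranges and bearings yields something functionally close to a local, egocentric occupancy map.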

In one biomimetic mobile robot, the whiskered Shrewbot, mapping and localization were shown to be possible using only noisy dead-reckoning information and short-range tactile information from a three-dimensional array of active biomimetic whiskers.

If rodents do maintain an occupancy-based representation through BVCs without an explicit means of calculating boundary ranges, additional modelling in the vein of the Shrewbot system may provide further insights into the functional benefits of encoding this range-to-boundary information despite the lack of a long-range obstacle sensor.

The model of [36] encodes the average relative angle between the rat's heading and two specific visual landmarks, creating a phase precession effect, although there is no evidence that landmarks drive phase precession in this manner in actual rodents.

Topological maps encode useful transition information for robotic navigation such as connectivity between places, as well as sometimes storing additional navigationally relevant information in the links between map nodes, such as robot motor commands, movement behaviours or temporal transition information [26].
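The kind of link-annotated topological map described above can be represented very simply as an adjacency structure whose edges carry navigation metadata. The places, motor commands, and timings below are invented purely for illustration:

```python
# Each directed edge stores the navigationally relevant information
# the excerpt mentions: a motor command and a temporal transition cost.
topo_map = {
    "kitchen": {"hallway": {"action": "forward", "time_s": 4.0}},
    "hallway": {"kitchen": {"action": "backward", "time_s": 4.0},
                "office":  {"action": "turn_left_then_forward", "time_s": 6.0}},
    "office":  {"hallway": {"action": "turn_right_then_forward", "time_s": 6.0}},
}

def reachable(topo, start):
    """All places connected to `start` by following stored transitions."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(topo.get(node, {}))
    return seen
```

Planning over such a map reduces to graph search, with the per-edge metadata consulted only when an edge is actually executed.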

In an open area, the model learns multiple transition cells at a location, representing all possible movements from that location, while in a constrained linear track it learns only bi-directional transitions, providing a possible explanation for the directionality of place cells in linear track environments.

Where other models have postulated that conjunctive grid cells may encode a topological map, simulation experiments in a corridor paradigm with ambiguous landmark configurations with RatSLAM [98] have identified a possible role for conjunctive grid cells in filtering sensory uncertainty by maintaining and propagating multiple estimates of pose until the correct estimate can be resolved (figure 4).

PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning

We present PRM-RL, a hierarchical method for long-range navigation task completion that combines sampling-based path planning with reinforcement learning (RL) agents.

... knowledge of the large-scale topology, while the sampling-based planners provide an approximate map of the space of possible configurations of the robot from which collision-free ...

... successfully completes up to 215 m long trajectories under noisy sensor conditions, and the aerial cargo delivery completes flights over 1000 m without violating the task constraints in an environment ...
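The hierarchy in this excerpt — a sampling-based roadmap whose edges are kept only if a learned point-to-point policy can actually traverse them — can be sketched as follows. The policy check here is a stand-in for the trained RL agent, and the radius and samples are invented:

```python
import math

def build_prm_rl_roadmap(samples, policy_can_connect, radius=2.0):
    """PRM-style roadmap construction gated by a learned policy.

    An undirected edge between two sampled configurations is added only
    if they are within `radius` of each other AND `policy_can_connect`
    (a stand-in for the RL agent) reports it can drive between them.
    """
    edges = {i: [] for i in range(len(samples))}
    for i, a in enumerate(samples):
        for j, b in enumerate(samples):
            if i < j and math.dist(a, b) <= radius and policy_can_connect(a, b):
                edges[i].append(j)
                edges[j].append(i)
    return edges

# Example: a permissive stand-in policy that accepts any nearby pair.
roadmap = build_prm_rl_roadmap([(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)],
                               lambda a, b: True, radius=2.0)
```

Long-range navigation then becomes graph search over the roadmap, with the RL policy executing each edge at run time.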

Computational cognitive models of spatial memory in navigation space: A review

Spatial memory refers to the part of the memory system that encodes, stores, recognizes and recalls spatial information about the environment and the agent’s orientation within it.

Representative models from each category are described and compared in a number of dimensions along which simulation models can differ (level of modeling, types of representation, structural accuracy, generality and abstraction, environment complexity), including their possible mapping to the underlying neural substrate.

Quadcopter Navigation in the Forest using Deep Neural Networks

We study the problem of perceiving forest or mountain trails from a single monocular image acquired from the viewpoint of a robot traveling on the trail itself.

Neural Mechanisms of Insect Navigation

Barbara Webb's research involves behavioural and ethological study of insect navigation. She has used computational modelling, relating the computational ...

TEDxCaltech - Pete Trautman - Robotic Navigation in Human Crowds

Pete Trautman is a graduate student in Control and Dynamical Systems at Caltech. Prior to coming to Caltech, Pete was a Captain in the United States Air Force, ...

Multi-Agent Path Topology in Support of Socially Competent Navigation Planning

We present a navigation planning framework for dynamic, multi-agent environments, where no explicit communication takes place among agents. Inspired by ...

Adaptive Learning for Multi-Agent Navigation

When agents in a multi-robot system move, they need to adapt their paths to account for potential collisions with other agents and with static obstacles. Existing ...

RAAIS 2017 - Raia Hadsell, Senior Research Scientist at DeepMind

Raia Hadsell on "Deep Reinforcement Learning and Real World Challenges". Raia is a Senior Research Scientist at DeepMind. She joined DeepMind in 2014 ...

Socially Aware Motion Planning with Deep Reinforcement Learning

Intrinsically Motivated Goal Exploration Processes for Open-Ended Robot Learning

Intrinsically Motivated Multi-Task Reinforcement Learning with the open-source Explauto library and the Poppy Humanoid Robot. Sébastien Forestier, Yoan Mollard, ...

Learning Dexterity

We've trained a human-like robot hand to manipulate physical objects with unprecedented dexterity. Our system, called Dactyl, is trained entirely in simulation ...

Hierarchical Reinforcement Learning for Robot Navigation

Complex trajectories for robot navigation can be decomposed into elementary movement primitives. Reinforcement learning is employed to learn the movement ...