AI News

Model helps robots navigate more like humans do

When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much.

A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints.

Such planners, the researchers observe, are “always exploring, rarely observing, and never using what’s happened in the past.” In response, the researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths likely to lead to the best outcome, and uses that knowledge to guide the robot’s movement in an environment.

In their paper, “Deep sequential models for sampling-based planning,” the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents.

“The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient,” the researchers explain. Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL, is also a co-author on the paper.

The planner is a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT. It builds a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next.
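To make the mechanism concrete, here is a minimal 2-D RRT sketch with a pluggable sampling function. It illustrates the general technique only, not the authors’ implementation: the `make_sampler` goal bias below is a hand-written stand-in for the guidance a trained neural network would provide, and all names and parameters are assumptions.

```python
import math
import random

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def steer(from_node, to_point, step=0.5):
    """Move from `from_node` toward `to_point` by at most `step`."""
    d = distance(from_node, to_point)
    if d <= step:
        return to_point
    t = step / d
    return (from_node[0] + t * (to_point[0] - from_node[0]),
            from_node[1] + t * (to_point[1] - from_node[1]))

def rrt(start, goal, is_free, sampler, iters=5000, step=0.5, goal_tol=0.5):
    """Grow a search tree toward samples drawn from `sampler`."""
    tree = {start: None}  # node -> parent
    for _ in range(iters):
        sample = sampler()  # guided ("learned") or uniform sample
        nearest = min(tree, key=lambda n: distance(n, sample))
        new = steer(nearest, sample, step)
        if new in tree or not is_free(new):
            continue
        tree[new] = nearest
        if distance(new, goal) < goal_tol:  # goal reached: walk back to root
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

def make_sampler(goal, bias=0.2, bounds=(0.0, 10.0)):
    """Stand-in for the learned model: bias some samples toward the goal."""
    lo, hi = bounds
    def sampler():
        if random.random() < bias:
            return goal
        return (random.uniform(lo, hi), random.uniform(lo, hi))
    return sampler

if __name__ == "__main__":
    # A vertical wall with a gap at the top forces the tree around it.
    is_free = lambda p: not (4.0 < p[0] < 5.0 and p[1] < 8.0)
    path = rrt((1.0, 1.0), (9.0, 1.0), is_free, make_sampler((9.0, 1.0)))
    print(f"path with {len(path)} nodes" if path else "no path found")
```

Swapping `make_sampler` for a model that predicts promising regions from past experience is the hook the article describes: the tree-growing loop is unchanged, only the sampling distribution becomes learned.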

For example, the researchers demonstrated the model in a simulation known as a “bug trap,” where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room.

The neural network helps the robot find the exit of the trap, identifies dead ends, and gives the robot a sense of its surroundings so it can quickly find the goal.

Working with multiple agents

In another experiment, the researchers trained and tested the model on navigating environments with multiple moving agents, a useful test for autonomous cars, especially at intersections and roundabouts.

The problem, the researchers note, “gets exponentially worse the more cars you have to contend with.” Results indicate that their model can capture enough information about the future behavior of the other agents (cars) to cut off the search process early, while still making good navigation decisions.
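The early cutoff can be pictured as pruning during tree search: a learned predictor scores how promising each branch looks given the anticipated motion of the other agents, and low-scoring branches are never expanded. The sketch below is a hypothetical illustration of that idea only; `predict_value` stands in for the paper’s neural network, and the toy 1-D demo is our own assumption.

```python
def guided_search(root, expand, predict_value, is_goal,
                  prune_below=0.1, max_depth=20):
    """Best-first search that cuts off branches the predictor scores poorly.

    Rather than rolling every branch out to full depth, children whose
    predicted value falls below `prune_below` are discarded immediately,
    which is where the savings come from.
    """
    frontier = [(root, [root], 0)]  # (state, path so far, depth)
    while frontier:
        # Expand the most promising state first.
        frontier.sort(key=lambda item: predict_value(item[0]), reverse=True)
        state, path, depth = frontier.pop(0)
        if is_goal(state):
            return path
        if depth >= max_depth:
            continue
        for child in expand(state):
            if predict_value(child) >= prune_below:  # early cutoff
                frontier.append((child, path + [child], depth + 1))
    return None

if __name__ == "__main__":
    # Toy 1-D demo: step left/right toward position 5; the "predictor"
    # simply scores states by proximity to the goal.
    expand = lambda s: [s - 1, s + 1]
    predict_value = lambda s: 1.0 / (1.0 + abs(5 - s))
    print(guided_search(0, expand, predict_value, lambda s: s == 5))
```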

Socially compliant mobile robot navigation via inverse reinforcement learning

We model pedestrian behavior in terms of a mixture distribution that captures both the discrete navigation decisions, such as going left or going right, and the natural variance of human trajectories (see the sketch after this abstract).

Using the proposed model, our method is able to imitate the behavior of pedestrians or, alternatively, to replicate a specific behavior that was taught by tele-operation in the target environment of the robot.

An extensive set of experiments suggests that our technique outperforms state-of-the-art methods for modeling pedestrian behavior, which also makes it applicable to fields such as behavioral science and computer graphics.
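A minimal sketch of the mixture idea, under our own illustrative assumptions: a weighted discrete choice (pass left vs. pass right) selects a component, and a Gaussian within that component supplies the continuous trajectory variance. The component names, weights, and parameters below are invented for illustration, not the paper’s learned values.

```python
import random

# Illustrative mixture components: discrete decision -> continuous spread.
DECISIONS = {
    "left":  {"weight": 0.6, "mean_heading": +0.4, "std": 0.15},
    "right": {"weight": 0.4, "mean_heading": -0.4, "std": 0.15},
}

def sample_heading():
    """Sample a heading offset (radians) from the mixture distribution."""
    names = list(DECISIONS)
    weights = [DECISIONS[n]["weight"] for n in names]
    choice = random.choices(names, weights=weights, k=1)[0]  # discrete decision
    comp = DECISIONS[choice]
    # Continuous variance around the chosen decision's typical trajectory.
    return choice, random.gauss(comp["mean_heading"], comp["std"])

if __name__ == "__main__":
    for _ in range(3):
        decision, heading = sample_heading()
        print(f"decision={decision:<5} heading_offset={heading:+.2f} rad")
```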

Intrinsically Motivated Goal Exploration Processes for Open-Ended Robot Learning

Intrinsically Motivated Multi-Task Reinforcement Learning with the open-source Explauto library and the Poppy humanoid robot. Sébastien Forestier, Yoan Mollard, ...

Learning Dexterity

We've trained a human-like robot hand to manipulate physical objects with unprecedented dexterity. Our system, called Dactyl, is trained entirely in simulation ...

Stanford's 'Jackrabbot' Robot Learns to Navigate Crowds

Researchers at Stanford University's Computational Vision and Geometry Lab developed a robot prototype that's learning how to move among humans.

Neural Mechanisms of Insect Navigation

Barbara Webb's research involves the behavioral and ethological study of insect navigation. She has used computational modeling by relating the computational ...

RI Seminar: Joelle Pineau : Learning Socially Adaptive Navigation Strategies

Joelle Pineau, Associate Professor of Computer Science, McGill ..., presents "Learning Socially Adaptive Navigation Strategies: Lessons from the SmartWheeler Project."

Hierarchical Reinforcement Learning for Robot Navigation

Complex trajectories for robot navigation can be decomposed into elementary movement primitives. Reinforcement Learning is employed to learn the movement ...

Multi-Agent Robots Graduation Project

Multi-agent autonomous robots have lately been used to achieve a high level of automation in factories. Besides the typical use of industrial robots in a ...

Socially Aware Motion Planning with Deep Reinforcement Learning

Google's DeepMind AI Just Taught Itself To Walk

Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance ...

Robot Learns Contact-Rich Assembly Processes with Model-Based Reinforcement Learning

As part of the Research Training Group "Ramp-Up Management", the Cybernetics Lab is researching the use of reinforcement learning for self-learning ...