Simultaneous localization and mapping

In robotic mapping and navigation, simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.

Published approaches are employed in self-driving cars, unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newly emerging domestic robots and even inside the human body.[1]

Given a series of controls u_t and sensor observations o_t over discrete time steps t, the SLAM problem is to compute an estimate of the agent's state x_t and a map of the environment m_t. All quantities are usually probabilistic, so the objective is to compute

P(m_t, x_t \mid o_{1:t}, u_{1:t})

Applying Bayes' rule gives a framework for sequentially updating the location posteriors, given a map and a transition function P(x_t \mid x_{t-1}):

P(x_t \mid o_{1:t}, u_{1:t}, m_t) = \sum_{m_{t-1}} P(o_t \mid x_t, m_t, u_{1:t}) \sum_{x_{t-1}} P(x_t \mid x_{t-1}) P(x_{t-1} \mid m_t, o_{1:t-1}, u_{1:t}) / Z

Similarly, the map can be updated sequentially by

P(m_t \mid x_t, o_{1:t}, u_{1:t}) = \sum_{x_t} \sum_{m_t} P(m_t \mid x_t, m_{t-1}, o_t, u_{1:t}) P(m_{t-1}, x_t \mid o_{1:t-1}, m_{t-1}, u_{1:t})

Like many inference problems, the solution to inferring the two variables together can be found, to a local optimum, by alternating updates of the two beliefs in a form of expectation-maximization (EM) algorithm.
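To make the alternating structure concrete, here is a minimal sketch in Python of a one-dimensional, discrete SLAM toy problem. The corridor world, the sensor accuracy constant, and the anchored starting pose are all invented for the example, not part of any published system.

```python
import numpy as np

# Toy 1-D illustration of the alternating (EM-like) updates: localize
# against the current map estimate, then refresh the map using the
# location belief. World size, sensor accuracy, and the anchored start
# pose are all assumptions made for this sketch.

N = 20                            # corridor cells
P_HIT = 0.9                       # sensor reads the true cell content w.p. 0.9

rng = np.random.default_rng(0)
true_map = rng.integers(0, 2, N)  # unknown binary map (landmark / no landmark)
true_pos = 0

loc_belief = np.zeros(N)          # P(x_t); start pose known, fixing the gauge
loc_belief[0] = 1.0
map_belief = np.full(N, 0.5)      # P(m_i = 1); uninformative prior

def motion_update(belief):
    """Transition model P(x_t | x_{t-1}): move one cell right, wrapping."""
    return np.roll(belief, 1)

def obs_likelihood(z, map_belief):
    """P(z | x_t = i), marginalized over the current map belief."""
    p_occ = P_HIT if z == 1 else 1 - P_HIT
    p_free = 1 - P_HIT if z == 1 else P_HIT
    return map_belief * p_occ + (1 - map_belief) * p_free

for _ in range(100):
    true_pos = (true_pos + 1) % N
    z = true_map[true_pos] if rng.random() < P_HIT else 1 - true_map[true_pos]

    # Localization update (E-step analogue), given the map estimate.
    loc_belief = motion_update(loc_belief)
    loc_belief *= obs_likelihood(z, map_belief)
    loc_belief /= loc_belief.sum()

    # Map update (M-step analogue), weighted by where the robot likely is.
    if z == 1:
        post = map_belief * P_HIT / (map_belief * P_HIT + (1 - map_belief) * (1 - P_HIT))
    else:
        post = map_belief * (1 - P_HIT) / (map_belief * (1 - P_HIT) + (1 - map_belief) * P_HIT)
    map_belief = loc_belief * post + (1 - loc_belief) * map_belief

print("estimated map:", (map_belief > 0.5).astype(int))
print("true map:     ", true_map)
```

Anchoring the initial pose is what breaks the symmetry: without it, neither belief carries information the other can use.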

Set-membership techniques are mainly based on interval constraint propagation.[2][3] They provide a set which encloses the pose of the robot and a set approximation of the map.
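As a rough illustration of the idea (not any particular published contractor), the sketch below shrinks a box guaranteed to contain the robot using one bounded-error range measurement to a known landmark. The interval helpers, the sign assumption, and all numbers are assumptions; real systems use dedicated interval libraries and handle every sign case.

```python
# Minimal sketch of interval constraint propagation for set-membership
# localization: one forward-backward pass on d^2 = (x-lx)^2 + (y-ly)^2.

def sq(i):
    """Interval square."""
    lo, hi = i
    if lo >= 0: return (lo * lo, hi * hi)
    if hi <= 0: return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

def meet(a, b):
    """Interval intersection; an empty result would mean inconsistency."""
    return (max(a[0], b[0]), min(a[1], b[1]))

def contract(x, y, landmark, d):
    """Contract intervals x, y under the range constraint to `landmark`.
    Assumes the robot lies right of / above the landmark (dx, dy >= 0)
    so the backward square roots stay single-signed in this sketch."""
    lx, ly = landmark
    dx = (x[0] - lx, x[1] - lx)
    dy = (y[0] - ly, y[1] - ly)
    dx2, dy2 = sq(dx), sq(dy)
    d2 = meet((dx2[0] + dy2[0], dx2[1] + dy2[1]), sq(d))
    # Backward pass: dx2 = d2 - dy2, dy2 = d2 - dx2, then undo the squares.
    dx2 = meet(dx2, (d2[0] - dy2[1], d2[1] - dy2[0]))
    dy2 = meet(dy2, (d2[0] - dx2[1], d2[1] - dx2[0]))
    dx = meet(dx, (max(dx2[0], 0.0) ** 0.5, max(dx2[1], 0.0) ** 0.5))
    dy = meet(dy, (max(dy2[0], 0.0) ** 0.5, max(dy2[1], 0.0) ** 0.5))
    return (dx[0] + lx, dx[1] + lx), (dy[0] + ly, dy[1] + ly)

# Robot known to lie in [2,6] x [1,5]; range to landmark (0,0) is 4 +/- 0.2.
x, y = contract((2.0, 6.0), (1.0, 5.0), (0.0, 0.0), (3.8, 4.2))
print(x, y)   # contracted enclosure of the robot position
```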

Bundle adjustment is another popular technique for SLAM using image data, which jointly estimates poses and landmark positions, increasing map fidelity, and is used in commercialized SLAM systems such as Google's Project Tango.
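The toy example below shows the joint-estimation idea in miniature: hypothetical 2D poses and landmarks are refined together by nonlinear least squares. Real bundle adjustment minimizes image reprojection errors over camera poses; here the observation model is simplified to relative position offsets, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy bundle-adjustment-style refinement: jointly optimize 2D robot
# poses and landmark positions so predicted offsets match the
# observations. Setup and numbers are illustrative only.

poses0 = np.array([[0.0, 0.0], [1.1, 0.0], [2.2, 0.1]])   # initial pose guesses
lms0 = np.array([[1.0, 2.1], [3.0, 1.9]])                 # initial landmark guesses

# observations: (pose index, landmark index, measured landmark - pose offset)
obs = [
    (0, 0, np.array([1.0, 2.0])),
    (1, 0, np.array([0.0, 2.0])),
    (1, 1, np.array([2.0, 2.0])),
    (2, 1, np.array([1.0, 2.0])),
]

def residuals(params):
    poses = params[:6].reshape(3, 2)
    lms = params[6:].reshape(2, 2)
    r = [lms[j] - poses[i] - z for i, j, z in obs]
    r.append(poses[0] - np.array([0.0, 0.0]))   # gauge constraint: anchor pose 0
    return np.concatenate(r)

sol = least_squares(residuals, np.concatenate([poses0.ravel(), lms0.ravel()]))
print(sol.x[:6].reshape(3, 2))   # refined poses
print(sol.x[6:].reshape(2, 2))   # refined landmarks
```

Anchoring the first pose removes the gauge freedom that would otherwise make the joint problem rank-deficient.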

New SLAM algorithms remain an active research area, and are often driven by differing requirements and assumptions about the types of maps, sensors and models as detailed below.

Topological maps are a method of environment representation which capture the connectivity (i.e., topology) of the environment rather than creating a geometrically accurate map.

Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.[4] In contrast, grid maps use arrays (typically square or hexagonal) of discretized cells to represent a metric world, making inferences about which cells are occupied.
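A minimal sketch of the grid-map bookkeeping, assuming a log-odds cell representation and invented sensor constants:

```python
import numpy as np

# Occupancy grid in log-odds form: each cell stores log(p / (1 - p)),
# so Bayesian updates from an (assumed) inverse sensor model become
# simple additions. The increments below are placeholder values.

L_OCC, L_FREE = 0.85, -0.4       # log-odds increments for hit / pass-through
grid = np.zeros((50, 50))        # log-odds 0 == probability 0.5 everywhere

def update_cell(grid, i, j, hit):
    grid[i, j] += L_OCC if hit else L_FREE

def probability(grid):
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

update_cell(grid, 10, 12, hit=True)    # beam endpoint: evidence of occupancy
update_cell(grid, 10, 11, hit=False)   # cell the beam passed through
print(probability(grid)[10, 10:13])
```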

Modern self-driving cars mostly simplify the mapping problem to almost nothing by making extensive use of highly detailed map data collected in advance.

Essentially, such systems reduce SLAM to a simpler localization-only task, perhaps allowing only moving objects such as cars and people to be updated in the map at runtime.
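A compact illustration of the localization-only setting, using a small particle filter against a fixed, known map; the one-dimensional world, noise levels, and sensor model are invented for the example.

```python
import numpy as np

# Particle-filter localization against a fixed prior map. Only the
# structure (predict, weight, resample) is the point; all numbers
# are assumptions.

rng = np.random.default_rng(3)
landmarks = np.array([2.0, 5.0, 9.0])   # prior map: known landmark positions
particles = rng.uniform(0, 10, 500)     # pose hypotheses along a 10 m line

def step(particles, control, z_nearest, sensor_sigma=0.3):
    # Predict: apply the commanded displacement plus motion noise.
    particles = particles + control + rng.normal(0, 0.05, particles.size)
    # Weight: agreement with the measured range to the nearest landmark.
    pred = np.min(np.abs(landmarks[None, :] - particles[:, None]), axis=1)
    w = np.exp(-0.5 * ((pred - z_nearest) / sensor_sigma) ** 2) + 1e-12
    w /= w.sum()
    # Resample in proportion to the weights.
    return particles[rng.choice(particles.size, particles.size, p=w)]

true_pos = 1.0
for _ in range(15):
    true_pos += 0.5
    z = np.min(np.abs(landmarks - true_pos)) + rng.normal(0, 0.1)
    particles = step(particles, 0.5, z)
print(f"estimate {particles.mean():.2f} vs true {true_pos:.2f}")
```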

SLAM will always use several different types of sensors, and the powers and limits of various sensor types have been a major driver of new algorithms.[5] Statistical independence of sensor errors is the mandatory requirement for coping with metric bias and with measurement noise.

At one extreme, laser scans or visual features provide details of a great many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via image registration.
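One common flavor of such alignment is point-to-point ICP (iterative closest point); a self-contained 2D sketch with synthetic scans follows. The data, iteration count, and convergence behavior are illustrative, not representative of a production pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

# Point-to-point ICP in 2D: alternate nearest-neighbor matching with a
# closed-form rigid fit. Synthetic data, assumed for the example.

def best_rigid(src, dst):
    """Closed-form (Kabsch) rigid transform aligning src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)        # nearest-neighbor correspondences
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(1)
scan_a = rng.uniform(0, 5, size=(100, 2))
theta = 0.2                              # small true rotation
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
scan_b = scan_a @ R_true.T + np.array([0.3, -0.1])
aligned = icp(scan_a, scan_b)
print(np.abs(aligned - scan_b).max())    # should be small after convergence
```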

At the opposite extreme, tactile sensors are extremely sparse as they contain only information about points very close to the agent, so they require strong prior models to compensate in purely tactile SLAM.

Optical sensors may be one-dimensional (single-beam) or 2D (sweeping) laser rangefinders, 3D high-definition LiDAR, 3D flash LiDAR, 2D or 3D sonar sensors, and one or more 2D cameras.[5] Since 2005, there has been intense research into VSLAM (visual SLAM) using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices.[6] Visual and LiDAR sensors are informative enough to allow for landmark extraction in many cases.

Other recent forms of SLAM include tactile SLAM[7] (sensing by local touch only), radar SLAM,[8] and WiFi-SLAM (sensing by the strengths of nearby WiFi access points).

A kind of SLAM for human pedestrians uses a shoe-mounted inertial measurement unit as the main sensor and relies on the fact that pedestrians are able to avoid walls to automatically build floor plans of buildings.

The dynamic model balances the contributions from the various sensors and the various partial error models, and finally combines them into a sharp virtual depiction: a map with the location and heading of the robot represented as a cloud of probability.

An alternative approach is to ignore the kinematic term and read odometry data from robot wheels after each command—such data may then be treated as one of the sensors rather than as kinematics.
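A sketch of that idea for a differential-drive robot: wheel-encoder readings are integrated as dead reckoning, with an assumed per-step noise covariance that grows until some other sensor corrects it. The wheel base, tick values, and noise matrix are placeholders.

```python
import numpy as np

# Dead reckoning from wheel encoders, treated as a noisy sensor rather
# than trusted kinematics. All constants are assumptions.

def odometry_step(pose, d_left, d_right, wheel_base):
    """Integrate one tick of encoder readings into (x, y, theta)."""
    x, y, th = pose
    d = 0.5 * (d_left + d_right)            # forward distance
    dth = (d_right - d_left) / wheel_base   # heading change
    return np.array([x + d * np.cos(th + 0.5 * dth),
                     y + d * np.sin(th + 0.5 * dth),
                     th + dth])

pose = np.zeros(3)
cov = np.zeros((3, 3))
Q = np.diag([1e-4, 1e-4, 1e-5])             # per-tick odometry noise (assumed)

for d_l, d_r in [(0.10, 0.10), (0.10, 0.12), (0.11, 0.09)]:
    pose = odometry_step(pose, d_l, d_r, wheel_base=0.3)
    cov += Q    # uncertainty accumulates until another sensor corrects it
print(pose, np.diag(cov))
```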

The related problems of data association and computational complexity are among those yet to be fully resolved, for example the identification of multiple confusable landmarks.

A significant recent advance in the feature-based SLAM literature involved the re-examination of the probabilistic foundation of SLAM, posing it in terms of multi-object Bayesian filtering with random finite sets. These approaches provide superior performance to leading feature-based SLAM algorithms in challenging measurement scenarios with high false-alarm rates and high missed-detection rates, without the need for data association.[10] Popular techniques for handling multiple objects include the joint probabilistic data association filter (JPDAF) and the probability hypothesis density (PHD) filter.

SLAM with DATMO (detection and tracking of moving objects) is a model which tracks moving objects in a similar way to the agent itself.[11]

Loop closure is the problem of recognizing a previously visited location and updating beliefs accordingly.

Typical loop closure methods apply a second algorithm to compute some type of sensor measure similarity, and re-set the location priors when a match is detected.
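A minimal sketch of such a second algorithm, using an invented histogram "place signature" and a placeholder similarity threshold; production systems use far richer descriptors (e.g., bag-of-words over image features).

```python
import numpy as np

# Toy loop-closure test: compare the current scan's descriptor against
# descriptors stored for past keyframes, flagging a revisit when the
# similarity crosses a threshold. Descriptor and threshold are assumptions.

def descriptor(ranges, bins=16):
    """Crude place signature: normalized histogram of range readings."""
    h, _ = np.histogram(ranges, bins=bins, range=(0.0, 10.0))
    return h / max(h.sum(), 1)

def detect_loop(current, keyframes, threshold=0.9, skip_recent=10):
    """Return index of a matching past keyframe, or None."""
    d = descriptor(current)
    for i, past in enumerate(keyframes[:-skip_recent]):
        sim = np.minimum(d, past).sum()     # histogram intersection in [0, 1]
        if sim > threshold:
            return i                        # candidate loop closure
    return None

rng = np.random.default_rng(2)
places = np.linspace(0.5, 9.5, 10)               # 10 distinct "rooms"
scans = [p + rng.normal(0, 0.1, 360) for p in places]
keyframes = [descriptor(s) for s in scans]
current = scans[3] + rng.normal(0, 0.05, 360)    # robot re-enters room 3
print(detect_loop(current, keyframes, skip_recent=5))   # expected: 3
```

On a detected match, a typical system adds a constraint between the two poses and re-optimizes, which is what re-setting the location priors amounts to in a filtering framework.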

Bio-inspired methods are not currently competitive with engineering approaches. Researchers and experts in artificial intelligence have struggled to solve the SLAM problem in practical settings: that is, it requires a great deal of computational power to sense a sizable area and process the resulting data to both map and localize.[15] A 2008 review of the topic summarized: '[SLAM] is one of the fundamental challenges of robotics ... [but it] seems that almost all the current approaches can not perform consistent maps for large areas, mainly due to the increase of the computational cost and due to the uncertainties that become prohibitive when the scenario becomes larger.'[16]

Generally, complete 3D SLAM solutions are highly computationally intensive, as they use complex real-time particle filters, sub-mapping strategies, or hierarchical combinations of metric and topological representations.

The OrthoSLAM algorithm reduces SLAM to a linear estimation problem by processing only a single line at a time.[17]

Various SLAM algorithms are implemented in the open-source Robot Operating System (ROS) libraries, often used together with the Point Cloud Library for 3D maps or visual features from OpenCV.
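As one concrete entry point, the visual front end of many VSLAM pipelines starts with feature extraction and matching, which OpenCV provides directly; the image paths below are placeholders.

```python
import cv2

# Minimal VSLAM front-end step: ORB keypoints and descriptors on two
# consecutive frames, then brute-force Hamming matching. The frame
# filenames are placeholders for whatever the camera produces.

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "tentative correspondences for pose estimation")
```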

SLAM - Artificial Intelligence for Robotics

This video is part of an online course, Intro to Artificial Intelligence. Check out the course here:

ROS Mastering LIVE-Show#2: Merging Odometry and IMU data for robot localization

Today we deal with the problem of how to merge odometry and IMU data to obtain a more stable localization of the robot. We will show how to use the robot_localization package for that. This...

Autonomous robotic teamwork — mapping hazardous environments with heterogeneous robots

Creating diverse teams of robots that can autonomously map hazardous environments and assign tasks to each other is a notoriously hard problem in computer science. Virginia Tech's Pratap...

HowTo Solve the XIAOMI Vacuum LDS Problem

The LDS problem of the XIAOMI Mi Vacuum and how you can fix it. You can get a new vacuum from Gerbest: spare LDS sensor..

RI Seminar: Michael Kaess : Robust and Efficient Real-time Mapping for Autonomous Robots

Michael Kaess, Assistant Research Professor, Carnegie Mellon University, Robotics Institute. Robust and Efficient Real-time Mapping for Autonomous Robots. Abstract: We are starting to see the...

Localization Program - Artificial Intelligence for Robotics

This video is part of an online course, Intro to Artificial Intelligence. Check out the course here:

Mobile robot localisation with lego EV3 robot

For an autonomous robot deployed in an unknown environment, one of the main concerns is the answer to the question "What's the position of the robot right now?". After departure from its...

Event-based Vision for Autonomous High-Speed Robotics

This video summarizes the research carried out by the Robotics and Perception Group of the University of Zurich on Event-based Vision between 2013 and 2017. Event-based sensors enable the design...

SLAM

SLAM (Simultaneous Localization And Mapping) with ROS and the DUDE robot. With this demo the DUDE robot can build a map in real time. The DUDE robot works with: - NVIDIA Jetson TK

Roomba 980 Robot Vacuum Cleans a Whole Level of Your Home

Learn more about the new Roomba 980 at