Simultaneous localization and mapping

From Wikipedia, the free encyclopedia
[Image caption: A robot built by Technische Universität Darmstadt maps a maze using a LIDAR.]
[Image caption: A map generated by the Darmstadt robot.]

Simultaneous localization and mapping (SLAM) is a technique used by digital machines to construct a map of an unknown environment (or to update a map within a known environment) while simultaneously keeping track of the machine's location in the physical environment. Put differently, "SLAM is the process of building up a map of an unfamiliar building as you're navigating through it—where are the doors? where are the stairs? what are all the things I might trip over?—and also keeping track of where you are within it."[1]

Operational definition

Maps are used for determining a location within an environment and for depicting an environment for planning and navigation; they support the assessment of actual location by recording information obtained from a form of perception and comparing it to a current set of perceptions. The benefit of a map in aiding the assessment of a location increases as the precision and quality of the current perceptions decrease. Maps generally represent the state at the time that the map is drawn; this is not necessarily consistent with the state of the environment at the time the map is used.

The complexity of the technical processes of locating and mapping under conditions of errors and noise does not allow for a coherent solution of both tasks on its own. Simultaneous localization and mapping (SLAM) is a concept that binds these processes in a loop and thereby supports the continuity of both aspects as separate processes; iterative feedback from one process to the other enhances the results of both consecutive steps.

Mapping is the problem of integrating the information gathered by a set of sensors into a consistent model and depicting that information as a given representation. It can be described by the first characteristic question, What does the world look like? Central aspects in mapping are the representation of the environment and the interpretation of sensor data.

In contrast to this, localization is the problem of estimating the place (and pose) of the robot relative to a map; in other words, the robot has to answer the second characteristic question, Where am I? Typically, solutions comprise tracking, where the initial place of the robot is known, and global localization, in which no or just some a priori knowledge of the environmental characteristics of the starting position is given.

SLAM is therefore defined as the problem of building a model leading to a new map, or repetitively improving an existing map, while at the same time localizing the robot within that map. In practice, the answers to the two characteristic questions cannot be delivered independently of each other.

SLAM consists of multiple parts: landmark extraction, data association, state estimation, state update and landmark update. There are many ways to solve each of these smaller parts.
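
The loop over these parts can be sketched in a deliberately simplified one-dimensional form. All function names, the gating threshold and the correction gain below are illustrative choices, not part of any published SLAM system:

```python
def extract_landmarks(scan):
    """Landmark extraction (toy): each range reading is taken as a
    point feature at that relative distance from the robot."""
    return list(scan)

def associate(features, pose, landmarks, gate=0.5):
    """Data association: project each feature into the map frame and
    match it to the nearest known landmark within a gating distance."""
    matches, new = [], []
    for d in features:
        world = pose + d                       # predicted landmark position
        best = min(landmarks, key=lambda m: abs(m - world), default=None)
        if best is not None and abs(best - world) < gate:
            matches.append((d, best))
        else:
            new.append(world)
    return matches, new

def slam_step(pose, landmarks, odometry, scan, gain=0.5):
    """One iteration: predict, associate, correct, extend the map."""
    pose += odometry                           # state estimation (predict)
    matches, new = associate(extract_landmarks(scan), pose, landmarks)
    for d, m in matches:                       # state update (correct)
        pose += gain * (m - (pose + d))
    landmarks.extend(new)                      # landmark update
    return pose, landmarks
```

Starting at pose 0 with one known landmark at 2.0, moving 1.0 and measuring a range of 1.0 confirms the pose; an extra reading of 4.0 adds a new landmark at 5.0.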

Before a robot can contribute to answering the question of what the environment looks like, given a set of observations, it needs to know, for example:

  • the robot's own kinematics,
  • the quality of its autonomously acquired information, and
  • the sources from which additional supporting observations have been made.

It is a complex task to estimate the robot's current location without a map or without a directional reference.[2] "Location" may refer to simply the position of the robot or might also include its orientation.

Complexity of the SLAM problem

Researchers and experts in artificial intelligence struggled to solve the "SLAM problem": that is, it required a great deal of computational power to sense a sizable area and process the resulting data to both map and localize.[1]

A 2008 review of the topic summarized: "[SLAM] is one of the fundamental challenges of robotics . . . [but it] seems that almost all the current approaches can not perform consistent maps for large areas, mainly due to the increase of the computational cost and due to the uncertainties that become prohibitive when the scenario becomes larger."[3]

Generally, complete 3D SLAM solutions are highly computationally intensive, as they use complex real-time particle filters, sub-mapping strategies or hierarchical combinations of metric and topological representations.[4] Robots that use embedded systems cannot fully implement SLAM because of their limited computing power. Nguyen V., Harati A., & Siegwart R. (2007) contributed to embedded robotics by presenting a fast, lightweight solution called OrthoSLAM, which breaks down the complexity of the environment into orthogonal planes. By mapping only the planes that are orthogonal to each other, the structure of most indoor environments can be estimated fairly accurately. The OrthoSLAM algorithm reduces SLAM to a linear estimation problem, since only a single line is processed at a time.[4]
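
The payoff of restricting the map to axis-aligned structure can be illustrated with a toy sketch (loosely inspired by, but not identical to, the published OrthoSLAM algorithm): fitting an axis-aligned line to a point cluster collapses to averaging a single coordinate, a linear estimation problem.

```python
from statistics import mean, pstdev

def fit_axis_aligned_line(points):
    """Classify a 2D point cluster as a vertical (x = c) or horizontal
    (y = c) line and estimate c by least squares. For an axis-aligned
    line the least-squares solution is simply the mean of one
    coordinate, so the estimation problem is linear."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    if pstdev(xs) < pstdev(ys):   # x nearly constant -> vertical wall
        return ("vertical", mean(xs))
    return ("horizontal", mean(ys))
```

For noisy samples of a wall at x = 3, the fit recovers a vertical line near 3.0.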

Technical problems

SLAM can be thought of as a chicken-and-egg problem: an unbiased map is needed for localization, while an accurate pose estimate is needed to build that map. This is the starting condition for iterative mathematical solution strategies.

Beyond this, answering the two characteristic questions is not as straightforward as it might sound, owing to inherent uncertainties in discerning the robot's relative movement from its various sensors. Generally, because of the noise present in any technical environment, SLAM is not served by a single compact solution, but by a collection of physical concepts that each contribute to the result.

If, at the next iteration of map building, the measured distance and direction traveled carry inaccuracies, driven by the limited inherent precision of sensors and by additional ambient noise, then any features added to the map will contain corresponding errors. Over time and motion, locating and mapping errors build cumulatively, grossly distorting the map and therefore the robot's ability to determine its actual location and heading with sufficient accuracy.
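
This cumulative drift can be demonstrated with a toy dead-reckoning simulation; the step length and bias values below are arbitrary illustrations:

```python
def dead_reckon(steps, step_length=1.0, bias=0.01):
    """Integrate odometry that carries a small systematic bias per step
    and return the accumulated position error after each step."""
    true_x, est_x, errors = 0.0, 0.0, []
    for _ in range(steps):
        true_x += step_length          # what the robot actually did
        est_x += step_length + bias    # what its sensors reported
        errors.append(abs(est_x - true_x))
    return errors

# Without corrections, the drift grows with every step taken.
errors = dead_reckon(100)
```

After 100 steps the uncorrected estimate is off by a full step length, even though each individual measurement was only 1% wrong.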

There are various techniques to compensate for errors, such as recognizing features that the robot has come across previously (i.e., data association or loop closure detection) and re-skewing recent parts of the map to make the two instances of that feature become one. Statistical techniques used in SLAM include Kalman filters, particle filters (also known as Monte Carlo methods) and scan matching of range data. These provide an estimate of the posterior probability distribution for the pose of the robot and for the parameters of the map. Set-membership techniques are mainly based on interval constraint propagation.[5][6] They provide a set which encloses the pose of the robot and a set approximation of the map.
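
As an illustration of the filtering idea, one predict/update cycle of a one-dimensional Kalman filter shows how a noisy odometry prediction and a noisy position measurement are blended according to their variances. This is a minimal sketch of the principle, not a full SLAM filter:

```python
def kalman_1d(x, var, u, var_u, z, var_z):
    """One predict/update cycle of a 1-D Kalman filter.
    x, var   : prior pose estimate and its variance
    u, var_u : odometry motion and motion-noise variance
    z, var_z : measured pose (e.g. from a known landmark) and its variance
    """
    # Predict: motion shifts the estimate and adds uncertainty.
    x, var = x + u, var + var_u
    # Update: blend prediction and measurement by their certainties.
    k = var / (var + var_z)        # Kalman gain
    x = x + k * (z - x)
    var = (1.0 - k) * var
    return x, var
```

Note that the posterior variance is always smaller than the predicted one: each consistent measurement sharpens the pose estimate.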


SLAM in the mobile robotics community generally refers to the process of creating geometrically consistent maps of the environment. Topological maps are a method of environment representation that captures the connectivity (i.e., topology) of the environment rather than creating a geometrically accurate map. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms.[7]

SLAM is tailored to the available resources and hence is aimed not at perfection but at operational adequacy. The published approaches are employed in unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newly emerging domestic robots and even inside the human body.[8]

"Solving" the SLAM problem is generally considered one of the notable achievements of robotics research in the past decades.[9] The related problems of data association and computational complexity have yet to be fully resolved.

A significant recent advance in the feature-based SLAM literature involved re-examining the probabilistic foundation of simultaneous localization and mapping, posing it in terms of multi-object Bayesian filtering with random finite sets. These methods provide superior performance to leading feature-based SLAM algorithms in challenging measurement scenarios with high false-alarm and high missed-detection rates, without the need for data association.[10]


SLAM will always use several different types of sensors to acquire data with statistically independent errors.[11] Statistical independence is the mandatory requirement for coping with metric bias and with measurement noise.
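
Assuming independence, scalar estimates from different sensors can be fused by inverse-variance weighting, the one-dimensional special case of the Kalman update. This is a sketch; real systems fuse full state vectors with covariance matrices:

```python
def fuse(estimates):
    """Fuse independent measurements given as (value, variance) pairs
    by inverse-variance weighting. Valid only when the sensors' errors
    are statistically independent."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(v * w for (v, _), w in zip(estimates, weights)) / total
    return value, 1.0 / total   # fused variance is below every input's
```

Two equally trustworthy readings of 10.0 and 12.0 fuse to 11.0, with half the variance of either sensor alone.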

Optical sensors may be one-dimensional (single-beam) or 2D (sweeping) laser rangefinders, 3D high-definition LiDAR, 3D flash LiDAR, 2D or 3D sonar sensors, or one or more 2D cameras.[11] Since 2005, there has been intense research into VSLAM (visual SLAM) using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices.[12] Other recent forms of SLAM include tactile SLAM (sensing by local touch only),[13] radar SLAM,[14] and Wi-Fi SLAM (sensing by the strengths of nearby Wi-Fi access points).

Recent approaches apply quasi-optical wireless ranging for multilateration (RTLS) or multiangulation in conjunction with SLAM, to compensate for erratic wireless measurements.

A special kind of SLAM for human pedestrians uses a shoe-mounted inertial measurement unit as the main sensor and relies on the fact that pedestrians are able to avoid walls. This approach, called FootSLAM, can be used to automatically build floor plans of buildings that can then be used by an indoor positioning system.[15]


The results from sensing feed the algorithms for locating. According to propositions of geometry, any sensing must include at least one lateration and (n + 1) determining equations for an n-dimensional problem. In addition, some a priori knowledge is required to orient the results with respect to absolute or relative systems of coordinates, including rotation and mirroring.
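
For example, in the plane (n = 2), three range measurements (n + 1) to known beacons determine a position fix: subtracting the circle equations pairwise cancels the quadratic terms and leaves a linear 2×2 system. This sketch assumes noise-free ranges and non-collinear beacons:

```python
def trilaterate(beacons, ranges):
    """Position fix in the plane from ranges to three known beacons.
    Each range defines a circle (x - xi)^2 + (y - yi)^2 = ri^2;
    subtracting the equations pairwise yields a linear system."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # nonzero iff beacons not collinear
    return ((b1 * a22 - b2 * a12) / det,
            (a11 * b2 - a21 * b1) / det)
```

With beacons at (0, 0), (4, 0) and (0, 4) and ranges measured from the point (1, 2), the solver recovers that point.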


Contribution to mapping may work with 2D modeling and a corresponding representation, or with 3D modeling and a 2D projective representation. As part of the model, the kinematics of the robot is included, to improve estimates of sensing under conditions of inherent and ambient noise. The dynamic model balances the contributions from various sensors and various partial error models, and finally comprises a sharp virtual depiction as a map, with the location and heading of the robot represented as a cloud of probability. Mapping is the final depiction of such a model; the map is either that depiction or the abstract term for the model itself.
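
A common kinematic model included for this purpose is the unicycle (differential-drive) model, sketched below: the robot's pose is propagated from commanded velocities before any sensor correction is applied.

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler-integration step of the unicycle kinematic model.
    x, y, theta : current pose (position and heading in radians)
    v, omega    : forward and angular velocity commands
    dt          : time step"""
    x += v * math.cos(theta) * dt    # move along the current heading
    y += v * math.sin(theta) * dt
    theta += omega * dt              # then rotate
    return x, y, theta
```

Driving straight along the x-axis for two seconds at 1 m/s advances the pose by 2 m; turning in place changes only the heading.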


"Active SLAM" studies the combined problem of SLAM with deciding where to move next in order to build the map as efficiently as possible. The need for active exploration is especially pronounced in sparse sensing regimes such as tactile SLAM. Active SLAM is generally performed by approximating the entropy of the map under hypothetical actions.
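
A common uncertainty measure for this purpose is the Shannon entropy of an occupancy-grid map, sketched below; candidate actions are then scored by how much they are predicted to reduce it. The flat list of cell probabilities is an illustrative simplification:

```python
import math

def map_entropy(grid):
    """Shannon entropy (in bits) of an occupancy grid, where each cell
    holds P(occupied). Unknown cells (p = 0.5) contribute a full bit;
    cells known to be free or occupied (p = 0 or 1) contribute nothing."""
    h = 0.0
    for p in grid:
        for q in (p, 1.0 - p):
            if q > 0.0:
                h -= q * math.log2(q)
    return h
```

An entirely unknown two-cell map has entropy 2 bits; a fully observed one has entropy 0, so exploration should steer toward the unknown cells.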


A seminal work in SLAM is the research of R.C. Smith and P. Cheeseman on the representation and estimation of spatial uncertainty in 1986.[16][17] Other pioneering work in this field was conducted by the research group of Hugh F. Durrant-Whyte in the early 1990s.[18]

References


  1. ^ a b Brynjolfsson, Erik; McAfee, Andrew (Jan 20, 2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company. p. 52. ISBN 9780393239355. 
  2. ^ Definition according to, a platform for SLAM researchers
  3. ^ Aulinas, Josep (2008). "The SLAM Problem: A Survey". Proceedings of the 2008 Conference on Artificial Intelligence Research & Development: 363–71. Retrieved July 15, 2015. 
  4. ^ a b Nguyen, V.; Harati, A.; Siegwart, R. (2007). "A Lightweight SLAM Algorithm Using Orthogonal Planes for Indoor Mobile Robotics". IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007). pp. 658–663. 
  5. ^ Jaulin, L. (2009). "A nonlinear set-membership approach for the localization and map building of an underwater robot using interval constraint propagation". IEEE Transactions on Robotics. 
  6. ^ Jaulin, L. (2011). "Range-only SLAM with occupancy maps; A set-membership approach". IEEE Transactions on Robotics. 
  7. ^ Cummins, Mark; Newman, Paul (June 2008). "FAB-MAP: Probabilistic localization and mapping in the space of appearance". The International Journal of Robotics Research 27 (6): 647–665. doi:10.1177/0278364908090961. Retrieved 23 July 2014. 
  8. ^ Mountney, P.; Stoyanov, D. Davison, A. Yang, G-Z. (2006). "Simultaneous Stereoscope Localization and Soft-Tissue Mapping for Minimal Invasive Surgery". MICCAI 1: 347–354. doi:10.1007/11866565_43. Retrieved 2010-07-30. 
  9. ^ Durrant-Whyte, H.; Bailey, T. (2006). "Simultaneous Localization and Mapping (SLAM): Part I The Essential Algorithms". Robotics and Automation Magazine 13 (2): 99–110. doi:10.1109/MRA.2006.1638022. Retrieved 2008-04-08. 
  10. ^ J. Mullane, B.-N. Vo, M. D. Adams, and B.-T. Vo, (2011). "A random-finite-set approach to Bayesian SLAM,". IEEE Transactions on Robotics 27 (2): 268–282. doi:10.1109/TRO.2010.2101370. 
  11. ^ a b Magnabosco, M., Breckon, T.P. (February 2013). "Cross-Spectral Visual Simultaneous Localization And Mapping (SLAM) with Sensor Handover". Robotics and Autonomous Systems 63 (2): 195–208. doi:10.1016/j.robot.2012.09.023. Retrieved 5 November 2013. 
  12. ^ Karlsson, N.; Di Bernardo, E.;Ostrowski, J;Goncalves, L.;Pirjanian, P.;Munich, M. (2005). "The vSLAM Algorithm for Robust Localization and Mapping". Int. Conf. on Robotics and Automation (ICRA). 
  13. ^ Fox, C.; Evans, M.; Pearson, M.; Prescott, T. (2012). "Tactile SLAM with a biomimetic whiskered robot". Proc. IEEE Int. Conf. on Robotics and Automation (ICRA). 
  14. ^ Marck, J.W.; Mohamoud, A.; v.d. Houwen, E.; van Heijster, R. (2013). "Indoor radar SLAM: A radar application for vision and GPS denied environments". Radar Conference (EuRAD), 2013 European. 
  15. ^ Robertson, P.; Angermann, M.;Krach B. (2009). "Simultaneous Localization and Mapping for Pedestrians using only Foot-Mounted Inertial Sensors". Ubicomp 2009. Orlando, Florida, USA: ACM. doi:10.1145/1620545.1620560. 
  16. ^ Smith, R.C.; Cheeseman, P. (1986). "On the Representation and Estimation of Spatial Uncertainty". The International Journal of Robotics Research 5 (4): 56–68. doi:10.1177/027836498600500404. Retrieved 2008-04-08. 
  17. ^ Smith, R.C.; Self, M.;Cheeseman, P. (1986). "Estimating Uncertain Spatial Relationships in Robotics". "Proceedings of the Second Annual Conference on Uncertainty in Artificial Intelligence". UAI '86. University of Pennsylvania, Philadelphia, PA, USA: Elsevier. pp. 435–461. 
  18. ^ Leonard, J.J.; Durrant-whyte, H.F. (1991). "Simultaneous map building and localization for an autonomous mobile robot". Intelligent Robots and Systems' 91.'Intelligence for Mechanical Systems, Proceedings IROS'91. IEEE/RSJ International Workshop on: 1442–1447. doi:10.1109/IROS.1991.174711. Retrieved 2008-04-08.