Draft:SLAM for Decision Making


SLAM (Simultaneous Localization and Mapping) is a problem in robotics and autonomous vehicles that requires an agent placed at an unknown location in an unknown environment to build a map of that environment while simultaneously keeping track of its own location within the map[1]. The map is used by path-planning and obstacle-avoidance algorithms to navigate the agent through the environment, and it underpins the navigation of self-driving cars, autonomous robots, unmanned vehicles, and similar systems. The approach was popularized by the Stanford team's autonomous cars Stanley, which won the 2005 DARPA Grand Challenge, and Junior, which competed in the 2007 DARPA Urban Challenge; both used SLAM-related techniques for navigation.

How SLAM works[edit]

At any time instant $k$, $\mathbf{x}_k$ is the state vector describing the location and orientation of the agent, $\mathbf{u}_k$ is the control vector applied at time $k-1$ to drive the agent to state $\mathbf{x}_k$ at time $k$, $\mathbf{m}_i$ is the vector describing the location of the $i$-th landmark, and $\mathbf{z}_{ik}$ is the observation of the $i$-th landmark taken by the agent at time $k$. Since these quantities are probabilistic, the joint posterior $P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k}, \mathbf{U}_{0:k}, \mathbf{x}_0)$ must be computed at each time $k$[1]. Here $\mathbf{Z}_{0:k}$ is the set of observations and $\mathbf{U}_{0:k}$ is the set of control inputs up to time $k$.

Basic representation of autonomous agent states, control, observation and landmarks in a flow chart with respect to time t.


The SLAM algorithm is implemented as a two-step recursion:

Time-Update[edit]

                $P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k}, \mathbf{x}_0) = \int P(\mathbf{x}_k \mid \mathbf{x}_{k-1}, \mathbf{u}_k)\, P(\mathbf{x}_{k-1}, \mathbf{m} \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k-1}, \mathbf{x}_0)\, d\mathbf{x}_{k-1}$ [1] ... (1)

Measurement Update[edit]

                $P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k}, \mathbf{U}_{0:k}, \mathbf{x}_0) = \dfrac{P(\mathbf{z}_k \mid \mathbf{x}_k, \mathbf{m})\, P(\mathbf{x}_k, \mathbf{m} \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k}, \mathbf{x}_0)}{P(\mathbf{z}_k \mid \mathbf{Z}_{0:k-1}, \mathbf{U}_{0:k})}$ ... (2)

The above equations compute the joint posterior recursively. The localization problem is solved by computing the distribution $P(\mathbf{x}_k \mid \mathbf{Z}_{0:k}, \mathbf{U}_{0:k}, \mathbf{m})$, and the mapping problem is solved by computing the conditional density $P(\mathbf{m} \mid \mathbf{X}_{0:k}, \mathbf{Z}_{0:k}, \mathbf{U}_{0:k})$.
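As an illustration, the following Python sketch runs this two-step recursion on a coarsely discretized one-dimensional state. The grid size, motion noise, and sensor model are illustrative assumptions, not anything prescribed by the SLAM formulation itself.

```python
import numpy as np

# Minimal sketch of the two-step recursion (equations (1) and (2)) on a
# discretized 1-D robot position. All noise values are assumed.

N = 20                          # number of grid cells for the robot state
belief = np.full(N, 1.0 / N)    # uniform prior over the initial position

def motion_model(u, sigma=1.0):
    """P(x_k | x_{k-1}, u_k): transition matrix for a noisy move of u cells."""
    T = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            T[j, i] = np.exp(-0.5 * ((j - i - u) / sigma) ** 2)
    T /= T.sum(axis=0, keepdims=True)   # each column is a distribution
    return T

def time_update(belief, u):
    """Equation (1): predict by summing the motion model over x_{k-1}."""
    return motion_model(u) @ belief

def measurement_update(belief, z, sigma=2.0):
    """Equation (2): weight the prediction by the observation likelihood."""
    likelihood = np.exp(-0.5 * ((np.arange(N) - z) / sigma) ** 2)
    posterior = likelihood * belief
    return posterior / posterior.sum()  # normalize

belief = time_update(belief, u=2)         # agent commanded to move 2 cells
belief = measurement_update(belief, z=5)  # sensor suggests cell 5
print("most likely cell:", belief.argmax())
```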

Contributions of SLAM[edit]

SLAM has changed the field dramatically: it has opened up complex and enclosed spaces that were previously unreachable, and its effects have disrupted the geospatial and surveying industries by enabling new technologies. Because it addresses the core localization problem in autonomous vehicles with increased robustness, it yields more stable and accurate state estimation for both ground and aerial navigation. SLAM algorithms have also been developed for the localization and navigation of astronauts on the surface of Mars[2]. In everyday life, ARCore (Android) and ARKit (iOS) on smartphones detect feature points, track changes in location, and fuse this visual information with the IMU to estimate position and orientation. SLAM is at the core of upcoming augmented-reality technologies and has substantial future scope.

Importance of SLAM[edit]

(a) Real-world scenario (b) the map in RVIZ and (c) trajectory with obstacles using Active SLAM.

SLAM, when combined with path planning in an unknown, complex environment, becomes the Active SLAM problem. One paper[3] uses active SLAM based on deep reinforcement learning for path planning in unknown environments: a Dueling DQN navigates the robot along obstacle-avoiding paths, while FastSLAM maps and localizes the robot. The robot learns to navigate through deep reinforcement learning (DRL), which lets it interact with its environment, avoid both static and dynamic obstacles, and reach the goal along an optimal path generated over the 2D map created by FastSLAM for online navigation.

Another example of active SLAM is described in a paper on Monte Carlo uncertainty maps for autonomous mobile-robot SLAM navigation[4]. There, SLAM is implemented on an EKF (extended Kalman filter) that extracts contours from the surroundings, and the robot is steered toward the high-uncertainty regions of the environment. Features are extracted by Gaussian methods, and the uncertainty of the map is reconstructed through Monte Carlo experiments. The Monte Carlo method reduces the computational cost of navigation compared with gridding the SLAM-generated map, making it suitable for real-time operation.
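The following Python sketch illustrates the sampling idea only, not the paper's implementation: Gaussian landmark estimates (means and covariances, as an EKF back end would supply them) are sampled to score which map region is most uncertain. All values are made up for illustration.

```python
import numpy as np

# Monte Carlo scoring of map uncertainty from Gaussian landmark estimates.
# Landmark means and covariances are hypothetical.

rng = np.random.default_rng(seed=0)
landmarks = [                                       # (mean xy, covariance)
    (np.array([2.0, 3.0]), np.diag([0.05, 0.05])),  # well-observed feature
    (np.array([7.0, 8.0]), np.diag([1.50, 2.00])),  # poorly-observed feature
]

SAMPLES = 5000
spreads = []
for mean, cov in landmarks:
    pts = rng.multivariate_normal(mean, cov, SAMPLES)  # Monte Carlo draw
    spreads.append(pts.std(axis=0).mean())             # positional spread (m)

# Active SLAM would steer the robot toward the most uncertain region.
target = int(np.argmax(spreads))
print(f"steer toward landmark {target}, spread {spreads[target]:.2f} m")
```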

As described in another paper[5], a robot can use decision making to perform multiple tasks while all or part of the map is still unavailable. This increases the robot's efficiency and saves time, since it maps and performs tasks simultaneously. The proposed approach combines a probabilistic RRT with an online POMDP to direct the belief and navigate in a dynamic environment.
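A minimal RRT sketch in Python gives a feel for the planning half of this approach; it omits the probabilistic extensions and the online POMDP entirely, and the map bounds, step size, and goal bias are assumptions.

```python
import numpy as np

# Bare-bones 2-D RRT: grow a tree toward random samples until the goal
# region is reached, then walk parent links back to recover a path.
rng = np.random.default_rng(2)
start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
STEP, GOAL_TOL, LIMIT = 0.5, 0.5, 10.0

nodes, parents = [start], [None]
for _ in range(2000):
    sample = goal if rng.random() < 0.1 else rng.uniform(0, LIMIT, 2)
    near = min(range(len(nodes)),
               key=lambda i: np.linalg.norm(nodes[i] - sample))
    step = sample - nodes[near]
    new = nodes[near] + STEP * step / (np.linalg.norm(step) + 1e-9)
    nodes.append(new); parents.append(near)   # (collision check omitted)
    if np.linalg.norm(new - goal) < GOAL_TOL:
        break

path, i = [], len(nodes) - 1                  # backtrack from the last node
while i is not None:
    path.append(nodes[i]); i = parents[i]
print("path nodes:", len(path),
      "reached goal:", bool(np.linalg.norm(path[0] - goal) < GOAL_TOL))
```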

Variants of SLAM[edit]

Evolution of SLAM

Extended Kalman Filter (EKF) SLAM[edit]

It is one of the earliest approaches to SLAM. It estimates the state of the system, comprising the robot pose at every time step and the landmark locations, with the navigation environment represented as a discrete-time state-space vector. With the initial robot state $\mathbf{x}_0$, landmark observations $\mathbf{z}_{1:t}$, and control inputs $\mathbf{u}_{1:t}$, the joint posterior density of the landmarks and the robot state is given by the probability distribution:

                $p(\mathbf{x}_t, \mathbf{m} \mid \mathbf{z}_{1:t}, \mathbf{u}_{1:t})$

The state of the whole system is given by stacking the robot pose and the landmark locations:

                $\mathbf{y}_t = \left( \mathbf{x}_t,\ \mathbf{m}_1,\ \dots,\ \mathbf{m}_K \right)^{T}$

SLAM in general is posed as a Bayesian estimation problem: the goal is to maintain a belief, a probabilistic estimate of the landmark positions and the robot pose, that is as close as possible to the true distribution of the system. This belief is given as:

                $bel(\mathbf{y}_t) = p(\mathbf{y}_t \mid \mathbf{z}_{1:t}, \mathbf{u}_{1:t})$

This approach uses the extended Kalman filter under a Gaussian noise assumption; a Taylor-series expansion about the state vector is used to linearize the motion and sensor models[6]. The EKF follows a filter cycle with these steps: state prediction, measurement prediction, measurement, data association, and update. In the state prediction, the robot's state is propagated through the motion model using the Jacobian of the motion. In the measurement prediction, an expected observation is formed from the current estimate and the Jacobian of the observation model is computed. In the correction step, measurements are associated with known landmarks and previously unobserved landmarks are initialized. The measurement update requires only one full belief update per step. This cycle continues until loop closure, which reduces the uncertainty in both the robot and landmark estimates.[7]
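The cycle can be made concrete with a deliberately tiny Python sketch: a one-dimensional world with a single landmark, where state prediction and measurement update reduce to a few matrix operations. The linear motion model and the noise values are assumptions for illustration, not part of the EKF-SLAM formulation itself.

```python
import numpy as np

# Tiny EKF-SLAM cycle: state y = [robot position, landmark position].
y = np.array([0.0, 5.0])          # robot at 0, landmark guessed at 5
P = np.diag([0.0, 4.0])           # pose known exactly, landmark uncertain
Q, R = 0.1, 0.5                   # assumed motion and measurement noise

def predict(y, P, u):
    """State prediction: the robot moves by u; the landmark is static."""
    F = np.eye(2)                 # motion Jacobian (linear model here)
    y = y + np.array([u, 0.0])
    P = F @ P @ F.T + np.diag([Q, 0.0])
    return y, P

def update(y, P, z):
    """Measurement update: z is the observed range (landmark - robot)."""
    H = np.array([[-1.0, 1.0]])   # observation Jacobian of z = m - x
    S = H @ P @ H.T + R           # innovation covariance
    K = P @ H.T / S               # Kalman gain (scalar innovation)
    y = y + (K * (z - (y[1] - y[0]))).ravel()
    P = (np.eye(2) - K @ H) @ P
    return y, P

y, P = predict(y, P, u=1.0)       # commanded move of 1 unit
y, P = update(y, P, z=4.2)        # sensor reads landmark 4.2 units ahead
print("robot, landmark estimates:", y, "\ncovariance:\n", P)
```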

The drawback of the EKF is its limited ability to handle uncertainty under non-Gaussian noise and strong non-linearities; this limitation motivated FastSLAM.[8]

FastSLAM [9][10][edit]

The yellow line shows the estimated path of the robot, the blue line is the GPS ground truth, and the yellow dots represent the estimated positions of the landmarks.

Unlike plain EKF SLAM, FastSLAM estimates the posterior using a modified particle filter, with K independent Kalman filters estimating the K landmark positions. Using Rao-Blackwellization to exploit conditional independencies between variables, the posterior of equation (1) factors into low-dimensional estimation problems:

                  $p(\mathbf{x}_{0:t}, \mathbf{m} \mid \mathbf{z}_{1:t}, \mathbf{u}_{1:t}) = p(\mathbf{x}_{0:t} \mid \mathbf{z}_{1:t}, \mathbf{u}_{1:t}) \prod_{k=1}^{K} p(\mathbf{m}_k \mid \mathbf{x}_{0:t}, \mathbf{z}_{1:t})$

Each particle maintains K individual EKFs. The key steps of FastSLAM 1.0 are as follows. First, the path posterior is extended by sampling a new pose for each particle: the previous generation of the particle is taken and a new pose is drawn from the motion model using the odometry information, $\mathbf{x}_t^{[k]} \sim p(\mathbf{x}_t \mid \mathbf{x}_{t-1}^{[k]}, \mathbf{u}_t)$. Next, a particle weight is computed following the importance-sampling principle; the weight reflects the uncertainty of the landmark observation. The weight is given as:

                  $w_t^{[k]} = \dfrac{\text{target distribution}}{\text{proposal distribution}} \propto p(\mathbf{z}_t \mid \mathbf{x}_t^{[k]}, \mathbf{m}^{[k]})$

The last step updates the belief of the observed landmarks, following the usual EKF update rule. A data-association problem arises when the system has more than one possible association between an observation and a landmark. In such cases the multi-modal belief is kept, the data is associated with the most probable match, and if the probability of every assignment is too low a new landmark is created[10]. This per-particle data association is a major advantage of FastSLAM over EKF SLAM. FastSLAM 2.0 incorporates the current measurement along with the motion model into the proposal, shown below, resulting in a more peaked proposal distribution that improves the accuracy and robustness of the algorithm:

                   $\mathbf{x}_t^{[k]} \sim p(\mathbf{x}_t \mid \mathbf{x}_{1:t-1}^{[k]}, \mathbf{u}_{1:t}, \mathbf{z}_{1:t})$
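The following Python sketch implements the FastSLAM 1.0 cycle described above (sample, weight, per-particle EKF update, resample) in a one-dimensional world with a single landmark; the particle count, noise values, and world layout are illustrative assumptions.

```python
import numpy as np

# FastSLAM 1.0 sketch: each particle carries a robot pose plus its own
# tiny EKF (mean, variance) for the single landmark.
rng = np.random.default_rng(1)
N_P = 100
MOTION_STD, SENSOR_STD = 0.2, 0.3

poses = np.zeros(N_P)             # particle robot poses
lm_mu = np.full(N_P, 5.0)         # per-particle landmark mean
lm_var = np.full(N_P, 4.0)        # per-particle landmark variance

def step(u, z):
    # 1. sample a new pose per particle from the motion model
    poses[:] = poses + u + rng.normal(0, MOTION_STD, N_P)
    # 2. weight each particle by the observation likelihood p(z | x, m)
    pred = lm_mu - poses                          # expected range
    var = lm_var + SENSOR_STD**2                  # innovation variance
    w = np.exp(-0.5 * (z - pred)**2 / var) / np.sqrt(var)
    w /= w.sum()
    # 3. per-particle EKF update of the landmark estimate
    K = lm_var / var                              # Kalman gain
    lm_mu[:] = lm_mu + K * (z - pred)
    lm_var[:] = (1 - K) * lm_var
    # 4. resample particles in proportion to their weights
    idx = rng.choice(N_P, N_P, p=w)
    poses[:], lm_mu[:], lm_var[:] = poses[idx], lm_mu[idx], lm_var[idx]

step(u=1.0, z=3.8)                # move 1 unit, observe landmark 3.8 ahead
print("pose estimate:", poses.mean(), " landmark estimate:", lm_mu.mean())
```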

Graph SLAM [11][edit]

It is the second major category of SLAM and follows an optimization approach instead of a filter-based one. Graph-based SLAM represents the problem as a graph in which every node is a robot pose taken while mapping and every edge is a spatial constraint between two poses. The aim of this approach is to find the node configuration that minimizes the error introduced by the constraints. An edge is constructed when the robot moves from pose $\mathbf{x}_i$ to pose $\mathbf{x}_j$, or when it observes the same part of the environment from $\mathbf{x}_i$ and $\mathbf{x}_j$. The problem is built with matrices that capture the affine and projective transformations, with the uncertainty of each constraint represented by an information matrix $\Omega_{ij}$. An error function computes the difference between the expected observation $\hat{\mathbf{z}}_{ij}$ and the real observation $\mathbf{z}_{ij}$, given by:

                   $\mathbf{e}_{ij}(\mathbf{x}_i, \mathbf{x}_j) = \mathbf{z}_{ij} - \hat{\mathbf{z}}_{ij}(\mathbf{x}_i, \mathbf{x}_j)$

To find the minimum cost, the following function is minimized to obtain the optimal values for the vehicle states and the landmarks:

                   $\mathbf{x}^{*} = \underset{\mathbf{x}}{\operatorname{argmin}} \sum_{ij} \mathbf{e}_{ij}^{T}\, \Omega_{ij}\, \mathbf{e}_{ij}$
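As a worked example of this minimization, the Python sketch below optimizes a one-dimensional pose graph with three odometry edges and one loop-closure edge; because the error terms are linear in this toy case, a single Gauss-Newton step solves the problem exactly. The measurements and information weights are assumptions.

```python
import numpy as np

# 1-D pose graph: solve argmin over x of sum e_ij^T * Omega_ij * e_ij.
n = 4
x = np.zeros(n)                                   # initial pose guesses
# edges: (i, j, measured displacement z_ij, information weight omega)
edges = [(0, 1, 1.0, 1.0), (1, 2, 1.1, 1.0), (2, 3, 0.9, 1.0),
         (0, 3, 2.8, 2.0)]                        # loop closure, trusted more

H = np.zeros((n, n))                              # system information matrix
b = np.zeros(n)
for i, j, z, omega in edges:
    e = (x[j] - x[i]) - z                         # error e_ij at the guess
    J = np.zeros(n); J[i], J[j] = -1.0, 1.0       # Jacobian of e_ij wrt x
    H += omega * np.outer(J, J)                   # accumulate normal equations
    b += omega * J * e
H[0, 0] += 1e6                                    # anchor the first pose
dx = np.linalg.solve(H, -b)                       # Gauss-Newton step
print("optimized poses:", x + dx)
```

Note how the loop-closure edge redistributes the accumulated odometry error across all poses rather than dumping it on the last one.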

LSD SLAM or Large-Scale Direct SLAM [12][edit]

LSD-SLAM is a type of SLAM in which not only the local motion of the camera is tracked; the method also builds consistent large-scale maps of the surrounding environment. The algorithm consists of three components. Tracking uses camera images to estimate the body pose with respect to a keyframe, starting from the pose of the previous frame. Depth-map estimation recovers depth by filtering-based estimation of semi-dense depth maps. Map optimization takes keyframes whose depth maps are no longer refined and detects loop closures through scale-aware image alignment. LSD-SLAM applies not only to monocular cameras but also to stereo and omnidirectional (wide field-of-view) cameras.

In this algorithm the map is represented as a pose graph of keyframes, where each keyframe $i$ carries the camera image $I_i$, the inverse depth map $D_i$, and the variance of the inverse depth $V_i$. Starting from the first keyframe, the relative 3D pose $\xi_{ij}$ is estimated by minimizing the variance-normalized photometric error. For depth-map estimation, when the camera moves too far from the existing map a new keyframe is created from the most recent tracked image, based on a weighted combination of relative distance and angle to the current keyframe:

                   $\mathrm{dist}(\xi_{ji}) = \xi_{ji}^{T}\, W\, \xi_{ji}$

where $W$ is a diagonal weight matrix. Tracked frames that are not keyframes are used to further refine the depth map through a large number of small-baseline stereo comparisons, adding new pixels to the depth map with high accuracy. The algorithm uses direct image alignment for monocular SLAM to avoid scale-drift errors over long trajectories. An important limitation of direct image alignment is its need for an accurate initialization when searching for loop-closure constraints over large loops; this can be addressed by increasing the convergence radius using Efficient Second-Order Minimization (ESM) or a coarse-to-fine approach.
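The keyframe-selection criterion can be sketched in a few lines of Python; the weight matrix and threshold below are assumptions for illustration, not the values used by LSD-SLAM.

```python
import numpy as np

# Keyframe decision: create a new keyframe when the weighted distance
# dist(xi) = xi^T W xi to the current keyframe exceeds a threshold.
W = np.diag([1.0, 1.0, 1.0, 2.0, 2.0, 2.0])   # translation vs rotation weights
THRESHOLD = 0.5                               # assumed trigger value

def needs_new_keyframe(xi):
    """xi: 6-vector relative pose of the tracked frame w.r.t. the keyframe
    (3 translation components followed by 3 rotation components)."""
    return float(xi @ W @ xi) > THRESHOLD

xi = np.array([0.4, 0.1, 0.0, 0.05, 0.0, 0.02])  # hypothetical relative pose
print("create keyframe:", needs_new_keyframe(xi))
```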

As the robot moves counter-clockwise, visual odometry is generated from the robot's poses. The left image shows the position before loop closure and the right image the position after loop closure using V-SLAM.

Visual SLAM (V-SLAM)[edit]

Visual SLAM (V-SLAM) is the problem of localizing a robot in an unknown environment while creating a map of that environment using cameras. As the robot traverses the environment, it generates a trajectory from the poses of the on-board camera, which observes landmarks along the way. The landmark positions and camera poses are estimated using extended Kalman filters (EKF) or a bundle-adjustment variant.[13] V-SLAM creates dense local maps that are constantly updated; after loop closure, a 3D global map is produced that is more accurate than a plain stereo reconstruction. As the robot moves, the dense map is computed from the robot's motion and dense disparity images, which are corrected by the Kalman filter.

Other[edit]

Other notable and emerging SLAM variants include L-SLAM (low-dimensional SLAM)[14], ORB-SLAM[15], CT-SLAM[16], CoSLAM (collaborative visual SLAM)[17], and SeqSLAM (sequential SLAM)[18].

Applications of SLAM[edit]

SLAM has been applied to mapping, sensing, kinematic modeling, audio-visual SLAM, collaborative SLAM, moving-object tracking, loop closure, and exploration. Its major applications are robots (both manned and autonomous) operating in unknown terrain such as planetary surfaces, the deep sea, and the underground. SLAM is used both indoors and outdoors, from home vacuum cleaners to the exploration of underground mines.

Consider the example of robot vacuum cleaners. A robot without SLAM traverses the room randomly, which may leave parts of the room uncleaned while wasting power. A robot with SLAM localizes itself in the room and moves deliberately based on its sensors, mapping the room with a camera or LiDAR sensor so that it avoids moving blindly, keeps its power consumption down, and cleans the room completely.

In the V-SLAM paper[13], an autonomous car was driven counter-clockwise along a rectangular path, generating visual odometry of the car's pose. At loop closure, the drift was predicted using Monte Carlo simulation and Kalman filters and removed using the sparse mapping of the V-SLAM algorithm. V-SLAM is also used in satellite positioning and navigation services, deep-space exploration, indoor localization, and more[19]. SLAM is likewise used to navigate a fleet of warehouse robots performing a task, to park an autonomous car, or to guide a delivery drone through an unknown environment.

Open Research in SLAM[edit]

In the research done so far, SLAM is typically used to navigate a robot and generate a map of an unknown environment, while algorithms such as Monte Carlo simulation, deep reinforcement learning, and online POMDPs are used to create dense maps and generate paths for the robot through uncertain environments. One paper[20] describes path planning in belief space with Pose SLAM under uncertain environments; the maps generated by Pose SLAM can be used directly as belief roadmaps to plan paths through areas of low uncertainty. Similarly, as described in a survey[21], SLAM solves map creation and robot localization, but optimizing paths and building detailed, informative representations of dynamic environments with SLAM remains ongoing research.

Semantic SLAM comprises algorithms based on semantic mapping or on enhancing the accuracy of a semantic map. Semantic SLAM[22] opened up research on estimating poses, detecting loop closures, and building 3D maps more efficiently than traditional SLAM methods. Blending deep learning with semantic SLAM is a hot topic, focused on creating detailed environment maps that are robust to illumination changes, world dynamics, and fast-moving agents, and on generating optimal paths through dynamic environments.

As suggested by the multi-tasking SLAM paper[5], there are opportunities in simultaneous planning and execution of multiple tasks using various sensor types to obtain a better model of the environment, and there is further scope to improve accuracy by using other planning algorithms in place of RRTs in combination with an online POMDP.

AR SLAM is another rising topic, in which a device collects reference points from pictures of the environment. Google's AR platform uses advanced SLAM to distinguish floors from walls. In AR, a virtual image is blended with real-world information, and SLAM supplies the pose of the device relative to the world, as in MAXST's visual SLAM platform[23].

References[edit]

  1. ^ a b c Durrant-Whyte, H.; Bailey, T. (June 2006). "Simultaneous localization and mapping: part I". IEEE Robotics Automation Magazine. 13 (2): 99–110. doi:10.1109/MRA.2006.1638022. ISSN 1558-223X. S2CID 8061430.
  2. ^ Benzerrouk Hamza; Alexander Nebylov; Gennady Yatsevitch (21 April 2016). "Smart Algorithms for Astronaut Localization and Navigation on the Surface of Mars Planet Based on Natural constraints". 18th IFAC Symposium on Automatic Control in Aerospace. 43 (15): 511–516. doi:10.3182/20100906-5-JP-2022.00087.
  3. ^ Wen, Shuhuan; Zhao, Yanfang; Yuan, Xiao; Wang, Zongtao; Zhang, Dan; Manfredi, Luigi (2020-04-01). "Path planning for active SLAM based on deep reinforcement learning under unknown environments". Intelligent Service Robotics. 13 (2): 263–272. doi:10.1007/s11370-019-00310-w. ISSN 1861-2784. S2CID 213630342.
  4. ^ Cheein, Fernando A. Auat; Toibero, Juan M.; di Sciascio, Fernando; Carelli, Ricardo; Pereira, F. Lobo (March 2010). "Monte Carlo uncertainty maps-based for mobile robot autonomous SLAM navigation". 2010 IEEE International Conference on Industrial Technology: 1433–1438. doi:10.1109/ICIT.2010.5472495. ISBN 978-1-4244-5695-6. S2CID 39756289.
  5. ^ a b A. Guez; J. Pineau (3–7 May 2010). "Multi-tasking SLAM". 2010 IEEE International Conference on Robotics and Automation: 377–384. doi:10.1109/ROBOT.2010.5509969. ISBN 978-1-4244-5038-1. S2CID 907443.
  6. ^ J.A.Castellanos; R. Martinez-Cantin; J.D. Tardós, J. Neira (2007). "Robocentric map joining: Improving the consistency of EKF-SLAM". Robotics and Autonomous Systems. Elsevier. 55: 21–29. doi:10.1016/j.robot.2006.06.005. Retrieved 31 January 2007.
  7. ^ Sebastian Thrun; Wolfram Burgard; Dieter Fox. Probabilistic Robotics. MIT Press. pp. Chapter 10.
  8. ^ J. Z. Sasiadek; A. Monjazeb; D. Necsulescu (2008). "Navigation of an autonomous mobile robot using EKF-SLAM and FastSLAM". 16th Mediterranean Conference on Control and Automation: 517–522. doi:10.1109/MED.2008.4602213. ISBN 978-1-4244-2504-4. S2CID 18720777.
  9. ^ M. Montemerlo and S. Thrun (2003). "Simultaneous localization and mapping with unknown data association using FastSLAM". 2003 IEEE International Conference on Robotics and Automation. 2: 1985–1991. doi:10.1109/ROBOT.2003.1241885. ISBN 0-7803-7736-2. S2CID 14304373.
  10. ^ a b Sebastian Thrun; Wolfram Burgard; Dieter Fox. Probabilistic Robotics. MIT Press. pp. Chapter 13.
  11. ^ G. Grisetti; R. Kümmerle; C. Stachniss; W. Burgard (2010). "A Tutorial on Graph-Based SLAM". IEEE Intelligent Transportation Systems Magazine. 2 (4): 31–43. doi:10.1109/MITS.2010.939925. S2CID 5970112.
  12. ^ Engel J., Schöps T., Cremers D. (2014). "LSD-SLAM: Large-Scale Direct Monocular SLAM". ECCV 2014: Computer Vision – ECCV 2014. Lecture Notes in Computer Science. 8690: 834–849. doi:10.1007/978-3-319-10605-2_54. ISBN 978-3-319-10604-5.CS1 maint: multiple names: authors list (link)
  13. ^ a b Lategahn, Henning; Geiger, Andreas; Kitt, Bernd (May 2011). "Visual SLAM for autonomous ground vehicles". 2011 IEEE International Conference on Robotics and Automation: 1732–1737. doi:10.1109/ICRA.2011.5979711. ISBN 978-1-61284-386-5. S2CID 10996897.
  14. ^ Zikos, Nikos; Petridis, Vassilios (2014), "6-DoF Low Dimensionality SLAM (L-SLAM)", Journal of Intelligent & Robotic Systems, Springer, 79: 1–18, doi:10.1007/s10846-014-0029-6, ISSN 0921-0296, S2CID 40486562
  15. ^ R. Mur-Artal and J. M. M. Montiel and J. D. Tardós (2015). "ORB-SLAM: A Versatile and Accurate Monocular SLAM System". IEEE Transactions on Robotics. 31 (5): 1147–1163. arXiv:1502.00956. Bibcode:2015arXiv150200956M. doi:10.1109/TRO.2015.2463671. ISSN 1552-3098. S2CID 206775100.
  16. ^ M. Bosse and R. Zlot (2009). "Continuous 3D scan-matching with a spinning 2d laser". 2009 IEEE International Conference on Robotics and Automation: 4312–4319. doi:10.1109/ROBOT.2009.5152851. ISBN 978-1-4244-2788-8. ISSN 1050-4729. S2CID 2819117.
  17. ^ D. Zou and P. Tan (2013). "CoSLAM: Collaborative Visual SLAM in Dynamic Environments" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. IEEE. 35 (2): 354–66. doi:10.1109/TPAMI.2012.104. PMID 22547430. S2CID 9517281.
  18. ^ Michael J. Milford and Gordon. F. Wyeth. "SeqSLAM: Visual Route-Based Navigation for Sunny Summer Days and Stormy Winter Nights". Proc. Of Intl. Conf. On Robotics and Automation.
  19. ^ Kaichang DI, Wenhui WAN (2020-03-20). "Progress and Applications of Visual SLAM". 测绘学报(英文版) (in Chinese). 2 (2): 38–49. doi:10.11947/j.JGGS.2019.0205 (inactive 31 May 2021). ISSN 2096-5990.CS1 maint: DOI inactive as of May 2021 (link)
  20. ^ Valencia, Rafael; Andrade-Cetto, Juan (2018), Valencia, Rafael; Andrade-Cetto, Juan (eds.), "Path Planning in Belief Space with Pose SLAM", Mapping, Planning and Exploration with Pose SLAM, Springer Tracts in Advanced Robotics, Cham: Springer International Publishing, pp. 53–87, doi:10.1007/978-3-319-60603-3_4, ISBN 978-3-319-60603-3, retrieved 2020-10-06
  21. ^ Cornejo-Lupa, María A.; Ticona-Herrera, Regina P.; Cardinale, Yudith; Barrios-Aranibar, Dennis (2020-09-28). "A Survey of Ontologies for Simultaneous Localization and Mapping in Mobile Robots". ACM Computing Surveys. 53 (5): 103:1–103:26. doi:10.1145/3408316. ISSN 0360-0300.
  22. ^ Xia, Linlin; Cui, Jiashuo; Shen, Ran; Xu, Xun; Gao, Yiping; Li, Xinying (2020-05-13). "A survey of image semantics-based visual simultaneous localization and mapping: Application-oriented solutions to autonomous navigation of mobile robots". International Journal of Advanced Robotic Systems. 17 (3). doi:10.1177/1729881420919185. S2CID 219425725.
  23. ^ "3.4.x ㅣ MAXST Developer Site". developer.maxst.com. Retrieved 2020-10-07.