Structure from motion

From Wikipedia, the free encyclopedia

Structure from motion (SfM) is a range imaging technique; it refers to the process of estimating three-dimensional structures from two-dimensional image sequences which may be coupled with local motion signals. It is studied in the fields of computer vision and visual perception. In biological vision, SfM refers to the phenomenon by which humans (and other living creatures) can recover 3D structure from the projected 2D (retinal) motion field of a moving object or scene.

Obtaining 3D information from 2D images

[Figure captions: digital surface model of a motorway interchange construction site · real photo vs. SfM with texture color vs. SfM with a simple shader, made with Python Photogrammetry Toolbox GUI and rendered in Blender with Cycles · Bezmiechowa airfield 3D digital surface model extracted from data collected during a 30-minute flight of a Pteryx UAV]

Humans perceive a great deal of information about the three-dimensional structure of their environment by moving through it. As the observer moves and the objects around them move, information is obtained from images sensed over time.[1]

Finding structure from motion presents a problem similar to finding structure from stereo vision. In both cases, the correspondence between images and the reconstruction of the 3D object must be found.

To find correspondences between images, features such as corner points (edges with gradients in multiple directions) are tracked from one image to the next. One of the most widely used feature detectors is SIFT (scale-invariant feature transform). It uses the maxima of a difference-of-Gaussians (DoG) pyramid as features. The first step in SIFT is finding a dominant gradient direction; to make the descriptor rotation-invariant, it is rotated to fit this orientation.[2] Another common feature detector is SURF (speeded-up robust features).[3] In SURF, the DoG is replaced with a Hessian-matrix-based blob detector, and instead of evaluating gradient histograms, SURF computes the sums of gradient components and the sums of their absolute values.[4] The features detected in all the images are then matched. One algorithm that tracks features from one image to another is the Lucas–Kanade tracker.[5]
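The Lucas–Kanade tracking step can be illustrated with a minimal single-scale sketch in NumPy (an illustrative toy, not the pyramidal, iterative tracker used in practice): the flow of a small patch is the least-squares solution of the brightness-constancy constraints collected over a window around the feature.

```python
import numpy as np

def lucas_kanade_step(frame0, frame1, center, window=7):
    """Estimate the optical flow (vx, vy) of one patch between two frames
    using a single Lucas-Kanade least-squares step (illustrative sketch)."""
    y, x = center
    r = window // 2
    # Spatial image gradients from the first frame, temporal difference
    # between the frames (brightness-constancy: Ix*vx + Iy*vy + It = 0).
    Iy, Ix = np.gradient(frame0.astype(float))
    It = frame1.astype(float) - frame0.astype(float)
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # Least-squares solution of the stacked constraints over the window.
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy)
```

Real trackers iterate this step over an image pyramid to handle large displacements (e.g. OpenCV's `calcOpticalFlowPyrLK`); the single-scale version above only recovers small, sub-window motions.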

Some of the matched features are inevitably incorrect, so the matches must also be filtered. RANSAC (random sample consensus) is the algorithm usually used to remove outlier correspondences. In the paper of Fischler and Bolles, RANSAC was used to solve the location determination problem (LDP): given a set of landmarks with known locations and their projections onto an image, determine the point in space from which the image was obtained.[6]
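The RANSAC idea can be sketched as follows: repeatedly fit a model to a minimal random sample of correspondences, count how many correspondences it explains, and keep the largest consensus set. For simplicity the model below is a pure 2D translation between matched point sets (a hypothetical stand-in; SfM pipelines typically fit a fundamental or essential matrix instead).

```python
import numpy as np

def ransac_translation(src, dst, n_iters=200, threshold=1.0, seed=0):
    """Filter point matches with RANSAC, using a 2D translation model
    (illustrative sketch; real SfM fits epipolar geometry instead)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(src))      # minimal sample: one correspondence
        t = dst[i] - src[i]             # hypothesised translation
        residuals = np.linalg.norm(dst - (src + t), axis=1)
        inliers = residuals < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the model on the full consensus set of the best hypothesis.
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

The number of iterations needed grows with the outlier ratio and the size of the minimal sample; with a one-point model and a moderate outlier fraction, a few hundred iterations make sampling at least one all-inlier hypothesis near certain.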

The feature trajectories over time are then used to reconstruct their 3D positions and the camera's motion.[7] An alternative is given by so-called direct approaches, where geometric information (3D structure and camera motion) is directly estimated from the images, without intermediate abstraction to features or corners.[8]
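Once camera poses are known, reconstructing a feature's 3D position from its tracked image positions reduces to triangulation. A minimal sketch of linear (DLT) triangulation from two views, assuming known 3×4 projection matrices in normalized coordinates:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x1, x2 in two cameras with 3x4 projection matrices P1, P2."""
    # Each view contributes two rows of the homogeneous system A X = 0,
    # obtained by eliminating the projective depth from x ~ P X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest
    # singular value; dehomogenize to get Euclidean coordinates.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Practical pipelines triangulate every feature track this way (or from more than two views) and then refine all points and camera poses jointly by bundle adjustment, minimizing reprojection error.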

There are several approaches to structure from motion. In incremental SfM, camera poses are solved for and added one by one to the collection. In global SfM, the poses of all cameras are solved for simultaneously. A somewhat intermediate approach is out-of-core SfM, in which several partial reconstructions are computed and then integrated into a global solution.

Further reading

  • Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press. ISBN 0-521-54051-8. 
  • Olivier Faugeras and Quang-Tuan Luong and Theodore Papadopoulo (2001). The Geometry of Multiple Images. MIT Press. ISBN 0-262-06220-8. 
  • Yi Ma, Stefano Soatto, Jana Kosecka, S. Shankar Sastry (November 2003). An Invitation to 3-D Vision: From Images to Geometric Models. Interdisciplinary Applied Mathematics Series, #26. Springer-Verlag New York, LLC. ISBN 0-387-00893-4. 

References

  1. ^ Linda G. Shapiro, George C. Stockman (2001). Computer Vision. Prentice Hall. ISBN 0-13-030796-3. 
  2. ^ D. G. Lowe (2004). "Distinctive image features from scale-invariant keypoints". International Journal of Computer Vision. 
  3. ^ H. Bay, T. Tuytelaars, and L. Van Gool (2006). "Surf: Speeded up robust features". 9th European Conference on Computer Vision. 
  4. ^ K. Häming and G. Peters (2010). "The structure-from-motion reconstruction pipeline – a survey with focus on short image sequences". Kybernetika. 
  5. ^ B. D. Lucas and T. Kanade (1981). "An iterative image registration technique with an application to stereo vision". Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI '81). 
  6. ^ M. A. Fischler and R. C. Bolles (1981). "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography". Commun. ACM. 
  7. ^ F. Dellaert, S. Seitz, C. Thorpe, and S. Thrun (2000). "Structure from Motion without Correspondence". IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 
  8. ^ Engel, Jakob; Schöps, Thomas; Cremers, Daniel (2014). "LSD-SLAM: Large-Scale Direct Monocular SLAM". European Conference on Computer Vision (ECCV) 2014. 
