3D pose estimation

3D pose estimation is the problem of determining the transformation (rotation and translation) that maps an object's 3D model onto its appearance in a 2D image. The need for 3D pose estimation arises from the limitations of feature-based pose estimation: in some environments it is difficult to extract corners or edges from an image. To circumvent these issues, the object is treated as a whole through the use of free-form contours.[1]

3D pose estimation from an uncalibrated 2D camera

It is possible to estimate the 3D rotation and translation of a 3D object from a single 2D photo, if an approximate 3D model of the object is known and the corresponding points in the 2D image are known. A common technique for solving this is POSIT,[2] which estimates the 3D pose directly from the 3D model points and the 2D image points, and iteratively corrects the errors until a good estimate is found from a single image. Most implementations of POSIT work only on non-coplanar points (in other words, they will not work with flat objects or planes).[3]
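The core loop of POSIT can be sketched in a few lines of NumPy. This is a minimal illustration rather than a reference implementation: the function name and interface are chosen for the sketch, and it assumes noise-free correspondences, image coordinates measured from the principal point, and non-coplanar model points with the first point taken as the reference.

```python
import numpy as np

def posit(model_points, image_points, focal_length, n_iter=20):
    """Minimal POSIT sketch (after DeMenthon and Davis, 1995).

    model_points: (N, 3) non-coplanar 3D points; the first is the reference.
    image_points: (N, 2) matching pixels, relative to the principal point.
    Returns (R, T): rotation matrix and translation of the reference point.
    """
    M = model_points - model_points[0]          # vectors from the reference point
    A_pinv = np.linalg.pinv(M[1:])              # least-squares solve for I and J
    x, y = image_points[:, 0], image_points[:, 1]
    eps = np.zeros(len(model_points))           # perspective corrections, start at SOP
    for _ in range(n_iter):
        # Solve the scaled-orthographic equations M_i . I = x_i(1+eps_i) - x_0 (same for J)
        I = A_pinv @ ((x * (1 + eps) - x[0])[1:])
        J = A_pinv @ ((y * (1 + eps) - y[0])[1:])
        s = (np.linalg.norm(I) + np.linalg.norm(J)) / 2.0   # scale factor f / Z0
        i, j = I / np.linalg.norm(I), J / np.linalg.norm(J)
        k = np.cross(i, j)                      # third rotation row
        Z0 = focal_length / s                   # depth of the reference point
        eps = (M @ k) / Z0                      # updated perspective corrections
    R = np.vstack([i, j, k])
    T = np.array([x[0] * Z0 / focal_length, y[0] * Z0 / focal_length, Z0])
    return R, T
```

A production implementation would additionally test convergence of `eps` and enforce orthonormality of the recovered rotation.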

Another approach is to register a 3D CAD model over the photograph of a known object by optimizing a suitable distance measure with respect to the pose parameters.[4][5] The distance measure is computed between the object in the photograph and the projection of the 3D CAD model at a given pose. Perspective or orthogonal projection is possible depending on the pose representation used. This approach is appropriate for applications where a 3D CAD model of a known object (or object category) is available.
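A minimal sketch of this registration-by-optimization idea follows, assuming the distance measure is simply the 2D reprojection error between projected model points and observed image points (the cited works use richer, illumination-invariant measures). The Gauss-Newton loop, function names, and parameters here are illustrative.

```python
import numpy as np

def rodrigues(r):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def refine_pose(model, observed, f, pose0, n_iter=50, step=1e-6):
    """Gauss-Newton refinement of pose = [rx, ry, rz, tx, ty, tz] minimizing
    the 2D distance between projected model points and observed image points
    (a simple stand-in for the distance measures in the cited work)."""
    def residuals(p):
        R, t = rodrigues(p[:3]), p[3:]
        pc = model @ R.T + t                    # model points in camera frame
        proj = f * pc[:, :2] / pc[:, 2:3]       # perspective projection
        return (proj - observed).ravel()
    p = np.asarray(pose0, float)
    for _ in range(n_iter):
        r0 = residuals(p)
        J = np.empty((r0.size, 6))              # numerical Jacobian
        for j in range(6):
            dp = np.zeros(6); dp[j] = step
            J[:, j] = (residuals(p + dp) - r0) / step
        p = p - np.linalg.lstsq(J, r0, rcond=None)[0]
    return p
```

Like most local optimization, this needs a reasonable initial pose; the database-lookup systems described below are one way to obtain such an initialization.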

3D pose estimation from a calibrated 2D camera

Given a 2D image of an object and a camera that is calibrated with respect to a world coordinate system, it is also possible to find the pose which gives the 3D object in its object coordinate system.[6] This works as follows.

Extracting 3D from 2D

Starting with a 2D image, image points are extracted that correspond to corners in the image. The projection rays through these image points are reconstructed so that the 3D points, which must be incident with the reconstructed rays, can be determined.
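For a camera calibrated with intrinsic matrix K, the ray-reconstruction step amounts to back-projecting each pixel through the camera centre; the function name here is illustrative.

```python
import numpy as np

def projection_rays(image_points, K):
    """Back-project 2D pixels into 3D viewing rays through the camera centre.

    K is the 3x3 intrinsic calibration matrix; each returned row is a unit
    direction d such that the ray is {t * d : t >= 0} in camera coordinates."""
    pts_h = np.hstack([image_points, np.ones((len(image_points), 1))])
    dirs = pts_h @ np.linalg.inv(K).T           # K^{-1} [u, v, 1]^T per point
    return dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
```

A 3D point lying at depth t along a returned direction d projects back to the original pixel, which is the incidence constraint the pose algorithm below exploits.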

Pseudocode

The algorithm for determining pose is based on the Iterative Closest Point algorithm. The main idea is to determine the correspondences between 2D image features and points on the 3D model curve.

  (a) Reconstruct projection rays from the image points
  (b) Estimate the nearest point of each projection ray to a point on the 3D contour
  (c) Estimate the pose of the contour using this correspondence set
  (d) Go to (b)
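Step (c), estimating pose from an established correspondence set, can be illustrated with the standard least-squares rigid alignment (the Kabsch method). The cited work actually formulates this step in conformal geometric algebra, so this is only a stand-in for the pose update given matched 3D point pairs.

```python
import numpy as np

def estimate_pose(src, dst):
    """Least-squares rigid transform mapping the point set src onto dst.

    Returns (R, t) minimizing sum ||R @ src_i + t - dst_i||^2, via the
    Kabsch SVD construction."""
    sc, dc = src.mean(0), dst.mean(0)           # centroids
    H = (src - sc).T @ (dst - dc)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t
```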

The above algorithm does not account for images containing an object that is partially occluded. The following algorithm assumes that all contours are rigidly coupled, meaning the pose of one contour defines the pose of another contour.

  (a) Reconstruct projection rays from the image points
  (b) For each projection ray R:
  (c)   For each 3D contour:
  (c1)    Estimate the nearest point P1 of ray R to a point on the contour
  (c2)    If this is the first contour, choose P1 as the actual P for the point-line correspondence
  (c3)    Otherwise, compare P1 with the current P:
            if dist(P1, R) is smaller than dist(P, R),
            then choose P1 as the new P
  (d) Use (P, R) as the correspondence set
  (e) Estimate the pose with this correspondence set
  (f) Transform the contours and go to (b)
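Steps (c1)-(c3) reduce to a nearest-point search between each projection ray and the candidate contour points. A sketch, with rays taken through the camera centre at the origin and all names illustrative:

```python
import numpy as np

def closest_point_on_ray(ray_dir, point, origin=np.zeros(3)):
    """Orthogonal projection of a 3D point onto the ray origin + t*ray_dir, t >= 0."""
    d = ray_dir / np.linalg.norm(ray_dir)
    t = max(np.dot(point - origin, d), 0.0)     # clamp to the ray, not the full line
    return origin + t * d

def best_correspondence(ray_dir, contours):
    """Steps (c1)-(c3): over all contour points, keep the point P with the
    smallest distance dist(P, R) to the projection ray."""
    best, best_dist = None, np.inf
    for contour in contours:                    # each contour: (N, 3) array
        for p in contour:
            foot = closest_point_on_ray(ray_dir, p)
            dist = np.linalg.norm(p - foot)
            if dist < best_dist:
                best, best_dist = p, dist
    return best, best_dist
```

Because all contours are assumed rigidly coupled, the single pose estimated from the pooled (P, R) correspondences in step (e) transforms every contour at once.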

In practice, using a 2 GHz Intel Pentium processor, an average speed of 29 fps has been reached with the above algorithm.[6]

Estimating pose through comparison

Systems exist which use a database of images of an object at different rotations and translations, against which an input image is compared to estimate pose. The accuracy of these systems is limited to situations represented in their database of images; however, the goal is to recognize a pose rather than determine it.[7]
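Such a comparison system reduces to nearest-neighbour lookup in some image-descriptor space. The sketch below assumes a precomputed descriptor per database image; the descriptor itself (edge histograms, learned embeddings, etc.) is the hard part and is left entirely hypothetical here.

```python
import numpy as np

def recognize_pose(query_descriptor, database):
    """Pose recognition by lookup: return the stored pose whose image
    descriptor is nearest (Euclidean) to the query's.

    database: list of (descriptor, pose) pairs rendered at known rotations
    and translations. Accuracy is bounded by how densely the database
    samples pose space."""
    descs = np.array([d for d, _ in database])
    dists = np.linalg.norm(descs - query_descriptor, axis=1)
    return database[int(np.argmin(dists))][1]
```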

References

  1. ^ Bodo Rosenhahn. "Pose Estimation of 3D Free-form Contours in Conformal Geometry" (in English / German). Institut für Informatik und Praktische Mathematik, Christian-Albrechts-Universität zu Kiel. Archived from the original on 3 June 2008. Retrieved 2008-06-09. 
  2. ^ DeMenthon and Davis, 1995. "Model-based object pose in 25 lines of code". Kluwer Academic Publishers. Retrieved 2010-05-29. 
  3. ^ Javier Barandiaran. "POSIT tutorial with OpenCV and OpenGL". Archived from the original on 20 June 2010. Retrieved 2010-05-29. 
  4. ^ Srimal Jayawardena and Marcus Hutter and Nathan Brewer. "A Novel Illumination-Invariant Loss for Monocular 3D Pose Estimation". Retrieved 2013-06-01. 
  5. ^ Srimal Jayawardena and Di Yang and Marcus Hutter. "3D Model Assisted Image Segmentation". Retrieved 2013-06-01. 
  6. ^ a b Bodo Rosenhahn. "Foundations about 2D-3D Pose Estimation". CV Online. Retrieved 2008-06-09. 
  7. ^ Vassilis Athitsos. "Estimating 3D Hand Pose from a Cluttered Image". Boston University Computer Science Tech. 

External links

  • [1] Further reading on various computer vision topics, as well as more information on 3D pose estimation