Eight-point algorithm


The eight-point algorithm is an algorithm used in computer vision to estimate the essential matrix or the fundamental matrix related to a stereo camera pair from a set of corresponding image points. It was introduced by Christopher Longuet-Higgins in 1981 for the case of the essential matrix. In theory, this algorithm can be used also for the fundamental matrix, but in practice the normalized eight-point algorithm, described by Richard Hartley in 1997, is better suited for this case.

The algorithm's name derives from the fact that it estimates the essential matrix or the fundamental matrix from a set of eight (or more) corresponding image points. However, variations of the algorithm can be used for fewer than eight points.

Coplanarity constraint

Example of epipolar geometry. Two cameras, with their respective centers of projection points OL and OR, observe a point P. The projection of P onto each of the image planes is denoted pL and pR. Points EL and ER are the epipoles.

One may express the epipolar geometry of two cameras and a point in space with an algebraic equation. Observe that, no matter where the point P is in space, the vectors \overline{O_L P}, \overline{O_R P} and \overline{O_R O_L} belong to the same plane. Let X_L be the coordinates of point P in the left eye's reference frame, let X_R be its coordinates in the right eye's reference frame, and let R, T be the rotation and translation between the two reference frames such that X_L = R^T (X_R - T) relates the coordinates of P in the two frames. The vector T may be thought of as the coordinates of O_L in the right eye's reference frame, so O_R has coordinates -R^T T in the left eye's reference frame. If three vectors are coplanar, then their scalar triple product is equal to zero, hence \overline{O_R P} \cdot \left( \overline{O_R O_L}  \wedge \overline{O_L P} \right) = 0. In order to express this constraint as an algebraic equation, write the three vectors in the coordinates of the left eye's reference frame: \overline{O_L P} = X_L, \overline{O_R O_L} = R^T T and \overline{O_R P} = R^T X_R. The triple product then becomes


 \left( R^T X_R \right)^T \left( (R^T T) \wedge X_L \right) =  X_R^T \left( T \wedge R \, X_L \right) = X_R^T S \, R \, X_L = 0

where the middle step uses the identity (R \, a) \wedge (R \, b) = R \, (a \wedge b) for a rotation R. Observe that T \wedge may be thought of as multiplication by a matrix: T \wedge x = S x, where S is the skew-symmetric matrix built from T; Longuet-Higgins used the symbol S to denote it. The product  S R  is called the essential matrix and denoted with  E .
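The matrix form of T \wedge can be checked numerically. The sketch below (the helper name `skew` is chosen here, not taken from the article) builds S from T and compares it against the cross product:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix S built from t, so that S @ x equals the cross product t ∧ x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

T = np.array([1.0, 2.0, 3.0])
X = np.array([-0.5, 4.0, 1.5])
S = skew(T)
print(np.allclose(S @ X, np.cross(T, X)))  # → True
```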

The vectors \overline{O_L p_L}, \overline{O_R p_R} are parallel to the vectors  \overline{O_L P}, \overline{O_R P} and therefore the coplanarity constraint also holds if we substitute these vectors: the scalar triple product is linear in each argument, so rescaling its arguments leaves a zero product zero. If we call y, y' the homogeneous coordinates of the projections of P onto the left and right image planes, then the coplanarity constraint may be written as


 y'^T \mathbf{E} y = 0

The basic algorithm

The basic eight-point algorithm is here described for the case of estimating the essential matrix  \mathbf{E} . It consists of three steps. First, it formulates a homogeneous linear equation, where the solution is directly related to  \mathbf{E} , and then solves the equation, taking into account that it may not have an exact solution. Finally, the internal constraints of the resulting matrix are managed. The first step is described in Longuet-Higgins' paper, the second and third steps are standard approaches in estimation theory.

The constraint defined by the essential matrix  \mathbf{E} is

 (\mathbf{y}')^{T} \, \mathbf{E} \, \mathbf{y} = 0

for corresponding image points represented in normalized image coordinates  \mathbf{y}, \mathbf{y}' . The problem which the algorithm solves is to determine  \mathbf{E} for a set of matching image points. In practice, the image coordinates are affected by noise and the system of equations may be over-determined, which means that it may not be possible to find an  \mathbf{E} which satisfies the above constraint exactly for all points. This issue is addressed in the second step of the algorithm.

Step 1: Formulating a homogeneous linear equation

With

 \mathbf{y} = \begin{pmatrix} y_{1} \\ y_{2} \\ 1 \end{pmatrix}   and    \mathbf{y}' = \begin{pmatrix} y'_{1} \\ y'_{2} \\ 1 \end{pmatrix}   and    \mathbf{E} = \begin{pmatrix} e_{11} & e_{12} & e_{13} \\ e_{21} & e_{22} & e_{23} \\ e_{31} & e_{32} & e_{33} \end{pmatrix}

the constraint can also be rewritten as

 y'_1 y_1 e_{11} + y'_1 y_2 e_{12} + y'_1 e_{13} + y'_2 y_1 e_{21} + y'_2 y_2 e_{22} + y'_2 e_{23} + y_1 e_{31} + y_2 e_{32} + e_{33} = 0 \,

or

 \mathbf{e} \cdot \tilde{\mathbf{y}} = 0

where

 \tilde{\mathbf{y}} = \begin{pmatrix} y'_1 y_1 \\ y'_1 y_2 \\ y'_1 \\ y'_2 y_1 \\ y'_2 y_2 \\ y'_2 \\ y_1 \\ y_2 \\ 1 \end{pmatrix}   and    \mathbf{e} = \begin{pmatrix} e_{11} \\ e_{12} \\ e_{13} \\ e_{21} \\ e_{22} \\ e_{23} \\ e_{31} \\ e_{32} \\ e_{33} \end{pmatrix}

that is,  \mathbf{e} represents the essential matrix in the form of a 9-dimensional vector and this vector must be orthogonal to the vector  \tilde{\mathbf{y}} which can be seen as a vector representation of the  3 \times 3 matrix  \mathbf{y}' \, \mathbf{y}^{T} .

Each pair of corresponding image points produces a vector  \tilde{\mathbf{y}} . Given a set of 3D points  \mathbf{P}_k this corresponds to a set of vectors  \tilde{\mathbf{y}}_{k} and all of them must satisfy

 \mathbf{e} \cdot \tilde{\mathbf{y}}_{k} = 0

for the vector  \mathbf{e} . Given sufficiently many (at least eight) linearly independent vectors  \tilde{\mathbf{y}}_{k} it is possible to determine  \mathbf{e} in a straightforward way. Collecting all vectors  \tilde{\mathbf{y}}_{k} as the columns of a matrix  \mathbf{Y} , it must then be the case that

 \mathbf{e}^{T} \, \mathbf{Y} = \mathbf{0}

This means that  \mathbf{e} is the solution to a homogeneous linear equation.
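The construction of the vectors \tilde{\mathbf{y}}_{k} and the matrix \mathbf{Y} can be sketched in NumPy as follows (the function name `ytilde` and the sample points are illustrative, not from the article):

```python
import numpy as np

def ytilde(y, yp):
    """9-vector built from a point pair y = (y1, y2), y' = (y'1, y'2).
    It is the row-major flattening of the 3x3 outer product y' y^T in
    homogeneous coordinates, matching the ordering (e11, e12, ..., e33) of e."""
    yh = np.array([y[0], y[1], 1.0])     # homogeneous left point
    yph = np.array([yp[0], yp[1], 1.0])  # homogeneous right point
    return np.outer(yph, yh).ravel()

# Y collects one such vector per correspondence as its columns (9 x N)
left = [(0.1, 0.2), (0.3, -0.1), (0.5, 0.4), (-0.2, 0.6),
        (0.0, 0.9), (0.7, 0.1), (-0.4, -0.3), (0.2, 0.8)]
right = [(0.2, 0.1), (0.4, 0.0), (0.6, 0.5), (-0.1, 0.7),
         (0.1, 1.0), (0.8, 0.2), (-0.3, -0.2), (0.3, 0.9)]
Y = np.column_stack([ytilde(y, yp) for y, yp in zip(left, right)])
print(Y.shape)  # → (9, 8)
```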

Step 2: Solving the equation

A standard approach to solving this equation implies that  \mathbf{e} is a left singular vector of  \mathbf{Y} corresponding to a singular value that equals zero. Provided that at least eight linearly independent vectors  \tilde{\mathbf{y}}_{k} are used to construct  \mathbf{Y} it follows that this singular vector is unique (disregarding scalar multiplication) and, consequently,  \mathbf{e} and then  \mathbf{E} can be determined.

In the case that more than eight corresponding points are used to construct  \mathbf{Y} it is possible that it does not have any singular value equal to zero. This case occurs in practice when the image coordinates are affected by various types of noise. A common approach to deal with this situation is to describe it as a total least squares problem; find  \mathbf{e} which minimizes

 \| \mathbf{e}^{T} \, \mathbf{Y} \|

subject to  \| \mathbf{e} \| = 1 . The solution is to choose  \mathbf{e} as the left singular vector corresponding to the smallest singular value of  \mathbf{Y} . A reordering of this  \mathbf{e} back into a  3 \times 3 matrix gives the result of this step, here referred to as  \mathbf{E}_{\rm est} .
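This total least squares step might be sketched in NumPy as follows; `solve_e` is an illustrative name, and the synthetic Y is built so that a known unit vector e is its exact null direction:

```python
import numpy as np

def solve_e(Y):
    """Total least squares solution of e^T Y = 0: the left singular vector of Y
    belonging to the smallest singular value, reshaped row-major into E_est."""
    U, _, _ = np.linalg.svd(Y)   # singular values come sorted in decreasing order
    return U[:, -1].reshape(3, 3)

# Synthetic check: make the columns of Y exactly orthogonal to a known unit vector
rng = np.random.default_rng(0)
e_true = rng.standard_normal(9)
e_true /= np.linalg.norm(e_true)
Y = (np.eye(9) - np.outer(e_true, e_true)) @ rng.standard_normal((9, 12))
E_est = solve_e(Y)
# The estimate equals e_true up to sign, so the dot product has magnitude 1
print(np.abs(E_est.ravel() @ e_true))
```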

Step 3: Enforcing the internal constraint

Another consequence of dealing with noisy image coordinates is that the resulting matrix may not satisfy the internal constraint of the essential matrix, that is, two of its singular values are equal and nonzero and the other is zero. Depending on the application, smaller or larger deviations from the internal constraint may or may not be a problem. If it is critical that the estimated matrix satisfies the internal constraints, this can be accomplished by finding the matrix  \mathbf{E}' of rank 2 which minimizes

 \| \mathbf{E}' - \mathbf{E}_{\rm est} \|

where  \mathbf{E}_{\rm est} is the resulting matrix from Step 2 and the Frobenius matrix norm is used. The solution to the problem is given by first computing a singular value decomposition of  \mathbf{E}_{\rm est} :

 \mathbf{E}_{\rm est} = \mathbf{U} \, \mathbf{S} \, \mathbf{V}^{T}

where  \mathbf{U}, \mathbf{V} are orthogonal matrices and  \mathbf{S} is a diagonal matrix which contains the singular values of  \mathbf{E}_{\rm est} . In the ideal case, one of the diagonal elements of  \mathbf{S} should be zero, or at least small compared to the other two which should be equal. In any case, set

 \mathbf{S}' = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}

Finally,  \mathbf{E}' is given by

 \mathbf{E}' = \mathbf{U} \, \mathbf{S}' \, \mathbf{V}^{T}

The matrix  \mathbf{E}' is the resulting estimate of the essential matrix provided by the algorithm.
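This projection step can be sketched in NumPy as follows (the function name is chosen here; as in the text, the singular values of the result are fixed to 1, 1, 0):

```python
import numpy as np

def enforce_internal_constraint(E_est):
    """Replace the singular values of E_est by diag(1, 1, 0), yielding a matrix
    with two equal nonzero singular values and one zero singular value."""
    U, s, Vt = np.linalg.svd(E_est)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

E_est = np.array([[0.9, 0.1, 0.0],
                  [-0.1, 1.1, 0.2],
                  [0.05, -0.2, 0.3]])
E_prime = enforce_internal_constraint(E_est)
print(np.linalg.svd(E_prime, compute_uv=False))  # two equal singular values, one zero
```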

Determining R and t from E

This topic is covered in the page on the Essential matrix (section on determining R and t from E).

The normalized eight-point algorithm

The basic eight-point algorithm can in principle be used also for estimating the fundamental matrix  \mathbf{F} . The defining constraint for  \mathbf{F} is

 (\mathbf{y}')^{T} \, \mathbf{F} \, \mathbf{y} = 0

where  \mathbf{y}, \mathbf{y}' are the homogeneous representations of corresponding image coordinates (not necessarily normalized). This means that it is possible to form a matrix  \mathbf{Y} in a similar way as for the essential matrix and solve the equation

 \mathbf{f}^{T} \, \mathbf{Y} = \mathbf{0}

for  \mathbf{f} which is a reshaped version of  \mathbf{F} . By following the procedure outlined above, it is then possible to determine  \mathbf{F} from a set of eight matching points. In practice, however, the resulting fundamental matrix may not be useful for determining epipolar constraints.

The problem

The problem is that the resulting  \mathbf{Y} often is ill-conditioned. In theory,  \mathbf{Y} should have one singular value equal to zero and the rest are non-zero. In practice, however, some of the non-zero singular values can become small relative to the larger ones. If more than eight corresponding points are used to construct  \mathbf{Y} , where the coordinates are only approximately correct, there may not be a well-defined singular value which can be identified as approximately zero. Consequently, the solution of the homogeneous linear system of equations may not be sufficiently accurate to be useful.

What's causing the problem

Hartley addressed this estimation problem in his 1997 article. His analysis shows that the problem is caused by the poor distribution of the homogeneous image coordinates in their space,  \mathbb{R}^{3} . A typical homogeneous representation of the 2D image coordinate  (y_{1}, y_{2}) \, is

 \mathbf{y} = \begin{pmatrix} y_{1} \\ y_{2} \\ 1 \end{pmatrix}

where both  y_{1}, y_{2} \, typically range from 0 up to 1000–2000 for a modern digital camera. This means that the first two coordinates in  \mathbf{y} vary over a much larger range than the third coordinate. Furthermore, if the image points which are used to construct  \mathbf{Y} lie in a relatively small region of the image, for example at  (700,700) \pm (100,100) \, , the vectors  \mathbf{y} point in more or less the same direction for all points. As a consequence,  \mathbf{Y} will have one large singular value and the remaining ones are small.
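This conditioning problem is easy to reproduce numerically. The sketch below builds Y from point pairs clustered around (700, 700) ± (100, 100), as in the example above, and prints the ratio between the largest and smallest singular values (the pairs are random, since only the conditioning of Y matters here):

```python
import numpy as np

rng = np.random.default_rng(2)
left = np.array([700.0, 700.0]) + rng.uniform(-100.0, 100.0, (20, 2))
right = np.array([700.0, 700.0]) + rng.uniform(-100.0, 100.0, (20, 2))

def ytilde(y, yp):
    # flattening of the outer product of the homogeneous coordinates
    return np.outer([yp[0], yp[1], 1.0], [y[0], y[1], 1.0]).ravel()

Y = np.column_stack([ytilde(y, yp) for y, yp in zip(left, right)])
s = np.linalg.svd(Y, compute_uv=False)
print(s[0] / s[-1])  # very large: Y is severely ill-conditioned
```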

How it can be solved

As a solution to this problem, Hartley proposed that the coordinate system of each of the two images should be transformed, independently, into a new coordinate system according to the following principle.

  • The new coordinate system should have its origin at the centroid (center of gravity) of the image points. This is accomplished by a translation of the original origin to the new one.
  • After the translation the coordinates are uniformly scaled so that the mean distance from the origin to a point equals  \sqrt{2} .

This principle results, normally, in a distinct coordinate transformation for each of the two images. As a result, new homogeneous image coordinates  \mathbf{\bar y}, \mathbf{\bar y}' are given by

 \mathbf{\bar y} = \mathbf{T} \, \mathbf{y}
 \mathbf{\bar y}' = \mathbf{T}' \, \mathbf{y}'

where  \mathbf{T}, \mathbf{T}' are the transformations (translation and scaling) from the old to the new normalized image coordinates. This normalization is only dependent on the image points which are used in a single image and is, in general, distinct from normalized image coordinates produced by a normalized camera.
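A transform satisfying these two principles can be computed directly from the image points; the sketch below (the function name is chosen here) builds T and verifies both properties:

```python
import numpy as np

def normalizing_transform(points):
    """3x3 homogeneous transform: translate the centroid of the points to the
    origin, then scale uniformly so the mean distance from the origin is sqrt(2)."""
    points = np.asarray(points, dtype=float)              # N x 2 image points
    centroid = points.mean(axis=0)
    mean_dist = np.linalg.norm(points - centroid, axis=1).mean()
    s = np.sqrt(2.0) / mean_dist
    return np.array([[s, 0.0, -s * centroid[0]],
                     [0.0, s, -s * centroid[1]],
                     [0.0, 0.0, 1.0]])

pts = np.array([[700.0, 650.0], [720.0, 710.0], [655.0, 690.0], [610.0, 600.0]])
T = normalizing_transform(pts)
bar = (T @ np.column_stack([pts, np.ones(len(pts))]).T).T   # normalized points
print(bar[:, :2].mean(axis=0))                    # centroid at the origin
print(np.linalg.norm(bar[:, :2], axis=1).mean())  # mean distance sqrt(2)
```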

The epipolar constraint based on the fundamental matrix can now be rewritten as

 0 = (\mathbf{\bar y}')^{T} \, ((\mathbf{T}')^{T})^{-1} \, \mathbf{F} \, \mathbf{T}^{-1}\, \mathbf{\bar y} = (\mathbf{\bar y}')^{T} \, \mathbf{\bar F} \, \mathbf{\bar y}

where  \mathbf{\bar F} = ((\mathbf{T}')^{T})^{-1} \, \mathbf{F} \, \mathbf{T}^{-1} . This means that it is possible to use the normalized homogeneous image coordinates  \mathbf{\bar y}, \mathbf{\bar y}' to estimate the transformed fundamental matrix  \mathbf{\bar F} using the basic eight-point algorithm described above.

The purpose of the normalization transformations is that the matrix  \mathbf{\bar Y} , constructed from the normalized image coordinates, in general has a better condition number than  \mathbf{Y} has. This means that the solution  \mathbf{\bar f} is more well-defined as a solution of the homogeneous equation  \mathbf{\bar f}^{T} \, \mathbf{\bar Y} = \mathbf{0} than  \mathbf{f} is relative to  \mathbf{Y} . Once  \mathbf{\bar f} has been determined and reshaped into  \mathbf{\bar F} the latter can be de-normalized to give  \mathbf{F} according to

 \mathbf{F} = (\mathbf{T}')^{T} \, \mathbf{\bar F} \, \mathbf{T}

In general, this estimate of the fundamental matrix is a better one than would have been obtained by estimating from the un-normalized coordinates.
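Putting the pieces together, the whole normalized eight-point estimate of F can be sketched as follows (all names are illustrative; the demo projects noise-free synthetic 3D points through two assumed cameras, so the epipolar residuals should come out near zero):

```python
import numpy as np

def hartley_transform(pts):
    """Translation plus uniform scaling: centroid to the origin,
    mean distance from the origin equal to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.linalg.norm(pts - c, axis=1).mean()
    return np.array([[s, 0.0, -s * c[0]],
                     [0.0, s, -s * c[1]],
                     [0.0, 0.0, 1.0]])

def normalized_eight_point(left, right):
    """Estimate F from N >= 8 point pairs (N x 2 arrays of pixel coordinates)."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    T, Tp = hartley_transform(left), hartley_transform(right)
    lh = np.column_stack([left, np.ones(len(left))]) @ T.T     # normalized left
    rh = np.column_stack([right, np.ones(len(right))]) @ Tp.T  # normalized right
    # Columns of Y are the 9-vectors built from the normalized coordinates
    Y = np.column_stack([np.outer(rp, lp).ravel() for lp, rp in zip(lh, rh)])
    U, _, _ = np.linalg.svd(Y)
    Fbar = U[:, -1].reshape(3, 3)
    # Rank-2 enforcement (for F, only the zero singular value is imposed)
    Uf, sf, Vft = np.linalg.svd(Fbar)
    Fbar = Uf @ np.diag([sf[0], sf[1], 0.0]) @ Vft
    return Tp.T @ Fbar @ T                                     # de-normalization

# Demo with noise-free synthetic correspondences from two assumed cameras
rng = np.random.default_rng(1)
Xw = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 6.0])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.1
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([1.0, 0.2, 0.1])
p1 = Xw @ K.T                  # first camera: K [I | 0]
p2 = (Xw @ R.T + t) @ K.T      # second camera: K [R | t]
x1 = p1[:, :2] / p1[:, 2:]
x2 = p2[:, :2] / p2[:, 2:]

F = normalized_eight_point(x1, x2)
Fn = F / np.linalg.norm(F)
h1 = np.column_stack([x1, np.ones(20)])
h2 = np.column_stack([x2, np.ones(20)])
residual = np.abs(np.einsum('ij,jk,ik->i', h2, Fn, h1)).max()
print(residual)  # near zero: all pairs satisfy the epipolar constraint
```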

Using fewer than eight points

Each point pair contributes one constraint on the elements of  \mathbf{E} . Since  \mathbf{E} has five degrees of freedom, five point pairs should in principle suffice to determine  \mathbf{E} . Though possible from a theoretical point of view, a practical implementation of this is not straightforward and relies on solving various non-linear equations.

References

  • Richard I. Hartley (June 1997). "In Defense of the Eight-Point Algorithm". IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (6): 580–593. doi:10.1109/34.601246.
  • Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press. ISBN 978-0-521-54051-3.
  • H. Christopher Longuet-Higgins (September 1981). "A computer algorithm for reconstructing a scene from two projections". Nature 293 (5828): 133–135. doi:10.1038/293133a0.