Essential matrix

In computer vision, the essential matrix is a $3 \times 3$ matrix, $\mathbf{E}$, with some additional properties described below, which relates corresponding points in stereo images assuming that the cameras satisfy the pinhole camera model.

Function

More specifically, if $\mathbf{y}$ and $\mathbf{y}'$ are homogeneous normalized image coordinates in image 1 and image 2, respectively, then

  $(\mathbf{y}')^{\top} \, \mathbf{E} \, \mathbf{y} = 0$

if $\mathbf{y}$ and $\mathbf{y}'$ correspond to the same 3D point in the scene.

The above relation which defines the essential matrix was published in 1981 by H. Christopher Longuet-Higgins, introducing the concept to the computer vision community. Richard Hartley and Andrew Zisserman's book reports that an analogous matrix appeared in photogrammetry long before that. Longuet-Higgins' paper includes an algorithm for estimating $\mathbf{E}$ from a set of corresponding normalized image coordinates as well as an algorithm for determining the relative position and orientation of the two cameras given that $\mathbf{E}$ is known. Finally, it shows how the 3D coordinates of the image points can be determined with the aid of the essential matrix.

Use

The essential matrix can be seen as a precursor to the fundamental matrix. Both matrices can be used for establishing constraints between matching image points, but the essential matrix can only be used in relation to calibrated cameras since the inner camera parameters must be known in order to achieve the normalization. If, however, the cameras are calibrated, the essential matrix can be useful for determining both the relative position and orientation between the cameras and the 3D position of corresponding image points.

Derivation and definition

This derivation follows the paper by Longuet-Higgins.

Two normalized cameras project the 3D world onto their respective image planes. Let the 3D coordinates of a point P be $(x_1, x_2, x_3)$ and $(x_1', x_2', x_3')$ relative to each camera's coordinate system. Since the cameras are normalized, the corresponding image coordinates are

  $y_1 = \frac{x_1}{x_3}, \quad y_2 = \frac{x_2}{x_3}$  and  $y_1' = \frac{x_1'}{x_3'}, \quad y_2' = \frac{x_2'}{x_3'}$

A homogeneous representation of the two image coordinates is then given by

  $\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ 1 \end{pmatrix}$  and  $\mathbf{y}' = \begin{pmatrix} y_1' \\ y_2' \\ 1 \end{pmatrix}$

which also can be written more compactly as

  $\mathbf{y} = \frac{1}{x_3} \, \tilde{\mathbf{x}}$  and  $\mathbf{y}' = \frac{1}{x_3'} \, \tilde{\mathbf{x}}'$

where $\mathbf{y}$ and $\mathbf{y}'$ are homogeneous representations of the 2D image coordinates and $\tilde{\mathbf{x}} = (x_1, x_2, x_3)^{\top}$ and $\tilde{\mathbf{x}}' = (x_1', x_2', x_3')^{\top}$ are proper 3D coordinates but in two different coordinate systems.

Another consequence of the normalized cameras is that their respective coordinate systems are related by means of a translation and rotation. This implies that the two sets of 3D coordinates are related as

  $\tilde{\mathbf{x}}' = \mathbf{R} \, (\tilde{\mathbf{x}} - \mathbf{t})$

where $\mathbf{R}$ is a $3 \times 3$ rotation matrix and $\mathbf{t}$ is a 3-dimensional translation vector.

The essential matrix is then defined as:

  $\mathbf{E} = \mathbf{R} \, [\mathbf{t}]_{\times}$

where $[\mathbf{t}]_{\times}$ is the matrix representation of the cross product with $\mathbf{t}$.

To see that this definition of the essential matrix describes a constraint on corresponding image coordinates, multiply $\mathbf{E}$ from left and right with the 3D coordinates of point P in the two different coordinate systems:

  $(\tilde{\mathbf{x}}')^{\top} \, \mathbf{E} \, \tilde{\mathbf{x}} \;\overset{1}{=}\; (\tilde{\mathbf{x}} - \mathbf{t})^{\top} \, \mathbf{R}^{\top} \, \mathbf{R} \, [\mathbf{t}]_{\times} \, \tilde{\mathbf{x}} \;\overset{2}{=}\; (\tilde{\mathbf{x}} - \mathbf{t})^{\top} \, [\mathbf{t}]_{\times} \, \tilde{\mathbf{x}} \;\overset{3}{=}\; 0$

  1. Insert the above relations between $\tilde{\mathbf{x}}'$ and $\tilde{\mathbf{x}}$ and the definition of $\mathbf{E}$ in terms of $\mathbf{R}$ and $\mathbf{t}$.
  2. $\mathbf{R}^{\top} \mathbf{R} = \mathbf{I}$ since $\mathbf{R}$ is a rotation matrix.
  3. Properties of the matrix representation of the cross product: $\tilde{\mathbf{x}}^{\top} \, [\mathbf{t}]_{\times} \, \tilde{\mathbf{x}} = \tilde{\mathbf{x}} \cdot (\mathbf{t} \times \tilde{\mathbf{x}}) = 0$ and $\mathbf{t}^{\top} \, [\mathbf{t}]_{\times} \, \tilde{\mathbf{x}} = \mathbf{t} \cdot (\mathbf{t} \times \tilde{\mathbf{x}}) = 0$.

Finally, it can be assumed that both $x_3$ and $x_3'$ are > 0, otherwise the point is not visible in both cameras. Dividing by the positive factor $x_3 \, x_3'$ then gives

  $(\mathbf{y}')^{\top} \, \mathbf{E} \, \mathbf{y} = 0$

which is the constraint that the essential matrix defines between corresponding image points.
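
As a concrete numerical check of this constraint, here is a minimal sketch in Python with NumPy; the function name skew and all pose and point values are chosen purely for illustration. It builds $\mathbf{E}$ from a rotation and a translation and verifies that the product vanishes for a synthetic point:

    import numpy as np

    def skew(t):
        # Matrix representation [t]_x of the cross product with t
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    # Illustrative relative pose: rotation about the y-axis plus a translation
    a = np.deg2rad(10.0)
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
    t = np.array([1.0, 0.2, 0.1])

    E = R @ skew(t)                   # E = R [t]_x

    x = np.array([0.5, -0.3, 4.0])    # point P in camera 1 coordinates
    xp = R @ (x - t)                  # the same point in camera 2 coordinates
    y, yp = x / x[2], xp / xp[2]      # normalized homogeneous image coordinates

    print(yp @ E @ y)                 # approximately 0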

Properties of the essential matrix

Not every arbitrary $3 \times 3$ matrix can be an essential matrix for some stereo cameras. To see this, notice that it is defined as the matrix product of one rotation matrix and one skew-symmetric matrix, both $3 \times 3$. The skew-symmetric matrix must have two singular values which are equal and another which is zero. The multiplication by the rotation matrix does not change the singular values, which means that the essential matrix also has two singular values which are equal and one which is zero. The properties described here are sometimes referred to as internal constraints of the essential matrix.

If the essential matrix $\mathbf{E}$ is multiplied by a non-zero scalar, the result is again an essential matrix which defines exactly the same constraint as $\mathbf{E}$ does. This means that $\mathbf{E}$ can be seen as an element of a projective space, that is, two such matrices are considered equivalent if one is a non-zero scalar multiplication of the other. This is a relevant position, for example, if $\mathbf{E}$ is estimated from image data. However, it is also possible to take the position that $\mathbf{E}$ is defined as

  $\mathbf{E} = \mathbf{R} \, [\mathbf{t}]_{\times}$

where $\| \mathbf{t} \| = 1$, and then $\mathbf{E}$ has a well-defined "scaling". It depends on the application which position is the more relevant.

The constraints can also be expressed as

  $\det \mathbf{E} = 0$

and

  $2 \, \mathbf{E} \, \mathbf{E}^{\top} \, \mathbf{E} - \operatorname{tr}(\mathbf{E} \, \mathbf{E}^{\top}) \, \mathbf{E} = \mathbf{0}$

Here the last equation is a matrix constraint, which can be seen as 9 constraints, one for each matrix element. These constraints are often used for determining the essential matrix from five corresponding point pairs.
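
Both forms of the internal constraints are straightforward to verify numerically. A minimal sketch (NumPy; the function name is hypothetical) that checks the singular values as well as the two algebraic constraints above:

    import numpy as np

    def satisfies_internal_constraints(E, tol=1e-8):
        # Two equal singular values and one zero value
        s = np.linalg.svd(E, compute_uv=False)   # sorted in decreasing order
        singular_ok = np.isclose(s[0], s[1]) and np.isclose(s[2], 0.0, atol=tol)
        # det E = 0 and the cubic matrix constraint (9 scalar equations)
        det_ok = np.isclose(np.linalg.det(E), 0.0, atol=tol)
        cubic = 2.0 * E @ E.T @ E - np.trace(E @ E.T) * E
        return singular_ok and det_ok and np.allclose(cubic, 0.0, atol=tol)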

The essential matrix has five or six degrees of freedom, depending on whether or not it is seen as a projective element. The rotation matrix and the translation vector have three degrees of freedom each, in total six. If the essential matrix is considered as a projective element, however, one degree of freedom related to scalar multiplication must be subtracted leaving five degrees of freedom in total.

Estimation of the essential matrix

Given a set of corresponding image points it is possible to estimate an essential matrix which satisfies the defining epipolar constraint for all the points in the set. However, if the image points are subject to noise, which is the common case in any practical situation, it is not possible to find an essential matrix which satisfies all constraints exactly.

Depending on how the error related to each constraint is measured, it is possible to determine or estimate an essential matrix which optimally satisfies the constraints for a given set of corresponding image points. The most straightforward approach is to set up a total least squares problem, commonly known as the eight-point algorithm.
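
A minimal sketch of such a linear estimate follows (NumPy; the function name is hypothetical, and the usual coordinate normalization and robustness measures are omitted). Each correspondence contributes one linear equation in the nine entries of $\mathbf{E}$, and the total least squares solution is the right singular vector of the equation matrix belonging to its smallest singular value. As a final step the estimate is projected onto the internal constraints by forcing the singular values to $(1, 1, 0)$, which is admissible since the scale of $\mathbf{E}$ is undefined anyway:

    import numpy as np

    def estimate_E(ys, yps):
        # ys, yps: (N, 3) arrays of homogeneous normalized image points,
        # N >= 8, such that yp^T E y = 0 for each corresponding pair
        A = np.array([np.kron(yp, y) for y, yp in zip(ys, yps)])
        _, _, Vt = np.linalg.svd(A)
        E = Vt[-1].reshape(3, 3)              # total least squares solution
        # Project onto the internal constraints of the essential matrix
        U, _, Vt = np.linalg.svd(E)
        return U @ np.diag([1.0, 1.0, 0.0]) @ Vt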

Determining R and t from E

Given that the essential matrix has been determined for a stereo camera pair, for example, using the estimation method above, this information can be used for determining also the rotation and translation (up to a scaling) between the two cameras' coordinate systems. In these derivations $\mathbf{E}$ is seen as a projective element rather than having a well-determined scaling.

The following method for determining $\mathbf{R}$ and $\mathbf{t}$ is based on performing an SVD of $\mathbf{E}$; see Hartley & Zisserman's book. It is also possible to determine $\mathbf{R}$ and $\mathbf{t}$ without an SVD, for example, following Longuet-Higgins' paper.

Finding one solution

An SVD of $\mathbf{E}$ gives

  $\mathbf{E} = \mathbf{U} \, \mathbf{\Sigma} \, \mathbf{V}^{\top}$

where $\mathbf{U}$ and $\mathbf{V}$ are orthogonal $3 \times 3$ matrices and $\mathbf{\Sigma}$ is a $3 \times 3$ diagonal matrix with

  $\mathbf{\Sigma} = \begin{pmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & 0 \end{pmatrix}$

The diagonal entries of $\mathbf{\Sigma}$ are the singular values of $\mathbf{E}$ which, according to the internal constraints of the essential matrix, must consist of two identical and one zero value. Define

  $\mathbf{W} = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$  with  $\mathbf{W}^{-1} = \mathbf{W}^{\top} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

and make the following ansatz

  $[\mathbf{t}]_{\times} = \mathbf{V} \, \mathbf{W} \, \mathbf{\Sigma} \, \mathbf{V}^{\top}$
  $\mathbf{R} = \mathbf{U} \, \mathbf{W}^{-1} \, \mathbf{V}^{\top}$

Since $\mathbf{\Sigma}$ may not completely fulfill the internal constraints when dealing with real-world data (e.g., camera images), the alternative

  $[\mathbf{t}]_{\times} = \mathbf{V} \, \mathbf{Z} \, \mathbf{V}^{\top}$  with  $\mathbf{Z} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$

may help.
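
In code, the ansatz translates directly into a few lines (NumPy; the function name is hypothetical), using $\mathbf{W}^{-1} = \mathbf{W}^{\top}$:

    import numpy as np

    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])

    def decompose_E(E):
        # One (R, [t]_x) pair from the ansatz; the remaining candidates
        # follow from the sign and W / W^T ambiguities discussed below
        U, s, Vt = np.linalg.svd(E)
        V = Vt.T
        Sigma = np.diag([s[0], s[0], 0.0])   # enforce the internal constraints
        t_cross = V @ W @ Sigma @ V.T        # [t]_x = V W Sigma V^T
        R = U @ W.T @ Vt                     # R = U W^{-1} V^T
        if np.linalg.det(R) < 0:
            # det(R) = -1 corresponds to decomposing -E; since E is only
            # defined up to sign, flip both factors (their product is kept)
            R, t_cross = -R, -t_cross
        return R, t_cross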

Proof

First, these expressions for $[\mathbf{t}]_{\times}$ and $\mathbf{R}$ do satisfy the defining equation for the essential matrix:

  $\mathbf{R} \, [\mathbf{t}]_{\times} = \mathbf{U} \, \mathbf{W}^{-1} \, \mathbf{V}^{\top} \, \mathbf{V} \, \mathbf{W} \, \mathbf{\Sigma} \, \mathbf{V}^{\top} = \mathbf{U} \, \mathbf{\Sigma} \, \mathbf{V}^{\top} = \mathbf{E}$

Second, it must be shown that $[\mathbf{t}]_{\times}$ is a matrix representation of the cross product for some $\mathbf{t}$. Since

  $\mathbf{W} \, \mathbf{\Sigma} = \begin{pmatrix} 0 & -s & 0 \\ s & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$

it is the case that $\mathbf{W} \mathbf{\Sigma}$ is skew-symmetric, i.e., $(\mathbf{W} \mathbf{\Sigma})^{\top} = -\mathbf{W} \mathbf{\Sigma}$. This is also the case for our $[\mathbf{t}]_{\times}$, since

  $([\mathbf{t}]_{\times})^{\top} = \mathbf{V} \, (\mathbf{W} \mathbf{\Sigma})^{\top} \, \mathbf{V}^{\top} = -\mathbf{V} \, \mathbf{W} \, \mathbf{\Sigma} \, \mathbf{V}^{\top} = -[\mathbf{t}]_{\times}$

According to the general properties of the matrix representation of the cross product it then follows that $[\mathbf{t}]_{\times}$ must be the cross product operator of exactly one vector $\mathbf{t}$.

Third, it must also be shown that the above expression for $\mathbf{R}$ is a rotation matrix. It is the product of three matrices which all are orthogonal, which means that $\mathbf{R}$, too, is orthogonal, or $\mathbf{R}^{\top} \mathbf{R} = \mathbf{I}$. To be a proper rotation matrix it must also satisfy $\det \mathbf{R} = 1$. Since, in this case, $\mathbf{E}$ is seen as a projective element, this can be accomplished by reversing the sign of $\mathbf{E}$ if necessary.

Finding all solutions

So far one possible solution for $\mathbf{R}$ and $\mathbf{t}$ has been established given $\mathbf{E}$. It is, however, not the only possible solution and it may not even be a valid solution from a practical point of view. To begin with, since the scaling of $\mathbf{E}$ is undefined, the scaling of $\mathbf{t}$ is also undefined. It must lie in the null space of $\mathbf{E}$ since

  $\mathbf{E} \, \mathbf{t} = \mathbf{R} \, [\mathbf{t}]_{\times} \, \mathbf{t} = \mathbf{R} \, (\mathbf{t} \times \mathbf{t}) = \mathbf{0}$

For the subsequent analysis of the solutions, however, the exact scaling of $\mathbf{t}$ is not so important as its "sign", i.e., in which direction it points. Let $\hat{\mathbf{t}}$ be a normalized vector in the null space of $\mathbf{E}$. It is then the case that both $\hat{\mathbf{t}}$ and $-\hat{\mathbf{t}}$ are valid translation vectors relative to $\mathbf{E}$. It is also possible to change $\mathbf{W}$ into $\mathbf{W}^{\top}$ in the derivations of $[\mathbf{t}]_{\times}$ and $\mathbf{R}$ above. For the translation vector this only causes a change of sign, which has already been described as a possibility. For the rotation, on the other hand, this will produce a different transformation, at least in the general case.

To summarize, given $\mathbf{E}$ there are two opposite directions which are possible for $\mathbf{t}$ and two different rotations which are compatible with this essential matrix. In total this gives four classes of solutions for the rotation and translation between the two camera coordinate systems. On top of that, there is also an unknown scaling for the chosen translation direction.

It turns out, however, that only one of the four classes of solutions can be realized in practice. Given a pair of corresponding image coordinates, three of the solutions will always produce a 3D point which lies behind at least one of the two cameras and therefore cannot be seen. Only one of the four classes will consistently produce 3D points which are in front of both cameras. This must then be the correct solution. Still, however, it has an undetermined positive scaling related to the translation component.

The above determination of $\mathbf{R}$ and $\mathbf{t}$ assumes that $\mathbf{E}$ satisfies the internal constraints of the essential matrix. If this is not the case, which typically happens when $\mathbf{E}$ has been estimated from real (and noisy) image data, it has to be assumed that it approximately satisfies the internal constraints. The vector $\hat{\mathbf{t}}$ is then chosen as the right singular vector of $\mathbf{E}$ corresponding to the smallest singular value.
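
A sketch that enumerates the four classes (NumPy; the function name is hypothetical) by combining the $\pm\hat{\mathbf{t}}$ ambiguity with the choice between $\mathbf{W}$ and $\mathbf{W}^{\top}$; selecting the physically valid class is then done with the depth computation of the next section:

    import numpy as np
    from itertools import product

    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])

    def four_solutions(E):
        # All four (R, t_hat) classes compatible with E; t up to positive scale
        U, _, Vt = np.linalg.svd(E)
        if np.linalg.det(U) < 0:
            U = -U                    # keep the orthogonal factors proper,
        if np.linalg.det(Vt) < 0:     # so every candidate R has det R = 1
            Vt = -Vt
        t_hat = Vt[2]                 # unit vector spanning the null space of E
        return [(U @ Wk @ Vt, sign * t_hat)
                for Wk, sign in product((W.T, W), (1.0, -1.0))]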

3D points from corresponding image points

The problem to be solved is how to compute the 3D coordinates $(x_1, x_2, x_3)$ of point P given corresponding normalized image coordinates $(y_1, y_2)$ and $(y_1', y_2')$. If the essential matrix is known and the corresponding rotation and translation transformations have been determined, this algorithm (described in Longuet-Higgins' paper) provides a solution.

Let $\mathbf{r}_k$ denote row $k$ of the rotation matrix $\mathbf{R}$:

  $\mathbf{R} = \begin{pmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \mathbf{r}_3 \end{pmatrix}$

Combining the above relations between 3D coordinates in the two coordinate systems and the mapping between 3D and 2D points described earlier gives

  $y_1' = \frac{\mathbf{r}_1 \cdot (\tilde{\mathbf{x}} - \mathbf{t})}{\mathbf{r}_3 \cdot (\tilde{\mathbf{x}} - \mathbf{t})} = \frac{\mathbf{r}_1 \cdot (\mathbf{y} - \mathbf{t}/x_3)}{\mathbf{r}_3 \cdot (\mathbf{y} - \mathbf{t}/x_3)}$

or

  $x_3 = \frac{(\mathbf{r}_1 - y_1' \, \mathbf{r}_3) \cdot \mathbf{t}}{(\mathbf{r}_1 - y_1' \, \mathbf{r}_3) \cdot \mathbf{y}}$

Once $x_3$ is determined, the other two coordinates can be computed as

  $x_1 = y_1 \, x_3$  and  $x_2 = y_2 \, x_3$

The above derivation is not unique. It is also possible to start with an expression for $y_2'$ and derive an expression for $x_3$ according to

  $x_3 = \frac{(\mathbf{r}_2 - y_2' \, \mathbf{r}_3) \cdot \mathbf{t}}{(\mathbf{r}_2 - y_2' \, \mathbf{r}_3) \cdot \mathbf{y}}$

In the ideal case, when the camera maps the 3D points according to a perfect pinhole camera and the resulting 2D points can be detected without any noise, the two expressions for $x_3$ are equal. In practice, however, they are not, and it may be advantageous to combine the two estimates of $x_3$, for example, in terms of some sort of average.
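
A minimal sketch of this computation (NumPy; the function name is hypothetical, and the conventions are the same as above), evaluating both expressions for $x_3$ and averaging them:

    import numpy as np

    def triangulate(y, yp, R, t):
        # y, yp: homogeneous normalized image points in camera 1 and 2;
        # R, t: relative pose with x' = R (x - t)
        r1, r2, r3 = R                    # rows of the rotation matrix
        x3_a = ((r1 - yp[0] * r3) @ t) / ((r1 - yp[0] * r3) @ y)
        x3_b = ((r2 - yp[1] * r3) @ t) / ((r2 - yp[1] * r3) @ y)
        x3 = 0.5 * (x3_a + x3_b)          # simple average of the two estimates
        return np.array([y[0] * x3, y[1] * x3, x3])

The same routine also supports the solution selection described in the previous section: for the correct $(\mathbf{R}, \mathbf{t})$ candidate, both the returned $x_3$ and the third coordinate of $\mathbf{R}(\tilde{\mathbf{x}} - \mathbf{t})$ are positive.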

Other extensions of the above computation are also possible. The derivation starts with an expression for the primed image coordinates and derives 3D coordinates in the unprimed system. It is also possible to start with unprimed image coordinates and obtain primed 3D coordinates, which finally can be transformed into unprimed 3D coordinates. Again, in the ideal case the results should be equal to the above expressions, but in practice they may deviate.

A final remark relates to the fact that if the essential matrix is determined from corresponding image coordinates, which often is the case when 3D points are determined in this way, the translation vector $\mathbf{t}$ is known only up to an unknown positive scaling. As a consequence, the reconstructed 3D points, too, are undetermined with respect to a positive scaling.

References

  • David Nistér (June 2004). "An efficient solution to the five-point relative pose problem". IEEE Transactions on Pattern Analysis and Machine Intelligence. 26 (6): 756–777. doi:10.1109/TPAMI.2004.17. PMID 18579936.
  • H. Stewénius, C. Engels and D. Nistér (June 2006). "Recent Developments on Direct Relative Orientation". ISPRS Journal of Photogrammetry and Remote Sensing. 60 (4): 284–294. Bibcode:2006JPRS...60..284S. CiteSeerX 10.1.1.61.9329. doi:10.1016/j.isprsjprs.2006.03.005.
  • H. Christopher Longuet-Higgins (September 1981). "A computer algorithm for reconstructing a scene from two projections". Nature. 293 (5828): 133–135. Bibcode:1981Natur.293..133L. doi:10.1038/293133a0.
  • Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press. ISBN 978-0-521-54051-3.
  • Yi Ma; Stefano Soatto; Jana Košecká; S. Shankar Sastry (2004). An Invitation to 3-D Vision. Springer. ISBN 978-0-387-00893-6.
  • Gang Xu and Zhengyou Zhang (1996). Epipolar geometry in Stereo, Motion and Object Recognition. Kluwer Academic Publishers. ISBN 978-0-7923-4199-4.