Camera resectioning

From Wikipedia, the free encyclopedia

Camera resectioning is the process of estimating the parameters of a pinhole camera model approximating the camera that produced a given photograph or video. Usually, the pinhole camera parameters are represented in a 3 × 4 matrix called the camera matrix.

This process is often called camera calibration, but "camera calibration" can also mean photometric camera calibration.

Parameters of camera model

Often, we use [u\, v\, 1]^T to represent a 2D point position in pixel coordinates, and [x_w\, y_w\, z_w\, 1]^T to represent a 3D point position in world coordinates. Note that these are expressed in the augmented notation of homogeneous coordinates, which is the most common notation in robotics and rigid-body transforms. Referring to the pinhole camera model, a camera matrix is used to denote a projective mapping from world coordinates to pixel coordinates.

z_c \begin{bmatrix}
u \\
v \\
1\end{bmatrix} = A \begin{bmatrix}
R & T\end{bmatrix} \begin{bmatrix}
x_w \\
y_w \\
z_w \\
1\end{bmatrix}
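As a concrete sketch of this mapping, the following projects a world point into pixel coordinates using assumed values (an 800-pixel focal length, principal point at (320, 240), and a camera sitting at the world origin; all numbers are illustrative):

```python
import numpy as np

# Assumed intrinsic matrix A: 800 px focal length, principal point (320, 240).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)            # camera aligned with the world axes
T = np.zeros((3, 1))     # camera at the world origin

P = A @ np.hstack([R, T])  # the 3x4 camera matrix

Xw = np.array([0.5, -0.25, 2.0, 1.0])  # a world point, in homogeneous form
x = P @ Xw                             # homogeneous pixel coordinates
u, v = x[0] / x[2], x[1] / x[2]        # perspective division
```

The final division by the third homogeneous coordinate de-homogenises the projected point, yielding ordinary pixel coordinates.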

Intrinsic parameters

A = \begin{bmatrix}
\alpha_{x} & \gamma & u_{0}\\
0 & \alpha_{y} & v_{0}\\
0 & 0 & 1\end{bmatrix}

The intrinsic matrix contains 5 intrinsic parameters. These parameters encompass focal length, image sensor format, and principal point. The parameters \alpha_{x} = f \cdot m_{x} and \alpha_{y} = f \cdot m_{y} represent focal length in terms of pixels, where m_{x} and m_{y} are the scale factors relating pixels to distance and f is the focal length in terms of distance.[1] \gamma represents the skew coefficient between the x and the y axis, and is often 0. u_{0} and v_{0} represent the principal point, which would ideally be in the centre of the image.
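As an illustration, the intrinsic matrix can be assembled from hypothetical physical values (a 4 mm lens and a sensor resolving 200,000 pixels per metre; both figures are assumptions for the example, not values from the article):

```python
import numpy as np

# Illustrative physical values (assumptions, not from the article):
f = 0.004              # focal length: 4 mm, expressed in metres
m_x = m_y = 200000.0   # pixels per metre on the sensor
u0, v0 = 320.0, 240.0  # principal point (centre of a 640x480 image)
gamma = 0.0            # skew coefficient, usually 0

alpha_x = f * m_x      # focal length in pixel units along x
alpha_y = f * m_y      # ... and along y
A = np.array([[alpha_x, gamma,   u0],
              [0.0,     alpha_y, v0],
              [0.0,     0.0,     1.0]])
```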

Nonlinear intrinsic parameters such as lens distortion are also important although they cannot be included in the linear camera model described by the intrinsic parameter matrix. Many modern camera calibration algorithms estimate these intrinsic parameters as well[citation needed].

Extrinsic parameters

R, T are the extrinsic parameters which denote the coordinate system transformations from 3D world coordinates to 3D camera coordinates. Equivalently, the extrinsic parameters define the position of the camera center and the camera's heading in world coordinates. T is the position of the origin of the world coordinate system expressed in coordinates of the camera-centered coordinate system, and is often mistakenly taken to be the position of the camera. The position, C, of the camera expressed in world coordinates is C = -R^{-1}T = -R^T T (since R is a rotation matrix).
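A small numeric sketch of the relationship between T and the camera centre, using an assumed rotation and translation:

```python
import numpy as np

# Assumed extrinsics: a 90-degree rotation about the z-axis and a translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([1.0, 2.0, 3.0])

# Camera centre in world coordinates: C = -R^T T (R^{-1} = R^T for rotations).
C = -R.T @ T

# Sanity check: mapping C back into camera coordinates gives the origin,
# i.e. the camera centre lies at the origin of its own frame.
assert np.allclose(R @ C + T, 0.0)
```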

Camera calibration is often used as an early stage in computer vision.

When a camera is used, light from the environment is focused on an image plane and captured. This process reduces the dimensions of the data taken in by the camera from three to two (light from a 3D scene is stored on a 2D image). Each pixel on the image plane therefore corresponds to a shaft of light from the original scene. Camera resectioning determines which incoming light is associated with each pixel on the resulting image. In an ideal pinhole camera, a simple projection matrix is enough to do this. With more complex camera systems, errors resulting from misaligned lenses and deformations in their structures can result in more complex distortions in the final image. The camera projection matrix is derived from the intrinsic and extrinsic parameters of the camera, and is often represented as a series of transformations; e.g., a matrix of camera intrinsic parameters, a 3 × 3 rotation matrix, and a translation vector. The camera projection matrix can be used to associate points in a camera's image space with locations in 3D world space.

Camera resectioning is often used in the application of stereo vision where the camera projection matrices of two cameras are used to calculate the 3D world coordinates of a point viewed by both cameras.
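A minimal sketch of such a triangulation, using the standard linear (DLT) formulation and a hypothetical two-camera rig (all numeric values below are assumptions for the example):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: recover a 3D point from its pixel coordinates
    x1, x2 in two views with camera projection matrices P1, P2."""
    # Each view contributes two equations of the form u*(p3 . X) - (p1 . X) = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]            # null vector = homogeneous 3D point
    return X[:3] / X[3]   # de-homogenise

# Hypothetical rig: identical intrinsics, second camera shifted along x.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

Xw = np.array([0.2, -0.1, 4.0, 1.0])     # ground-truth world point
p1, p2 = P1 @ Xw, P2 @ Xw
x1, x2 = p1[:2] / p1[2], p2[:2] / p2[2]  # its projection in each view
X_rec = triangulate(P1, P2, x1, x2)      # recovers Xw[:3]
```

With noisy real measurements the same least-squares machinery applies; the SVD simply returns the best solution in a linear sense rather than an exact one.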

Some people call this camera calibration, but many restrict the term camera calibration to the estimation of internal, or intrinsic, parameters only.


There are many different approaches to calculating the intrinsic and extrinsic parameters for a specific camera setup. These include:

  1. The direct linear transformation (DLT) method
  2. The classical approach of Roger Y. Tsai's algorithm: a two-stage algorithm that calculates the pose (3D orientation, and x-axis and y-axis translation) in the first stage, then computes the focal length, distortion coefficients and the z-axis translation in the second stage.
  3. Zhang's method, "a flexible new technique for camera calibration".
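As a sketch of the first approach, a minimal direct linear transformation estimate of the camera matrix from known world-to-image correspondences might look like the following (synthetic data, no lens distortion modelled, and all numeric values assumed for the example):

```python
import numpy as np

def dlt_camera_matrix(world_pts, image_pts):
    """Direct linear transformation (DLT): estimate the 3x4 camera matrix
    from n >= 6 world-to-image point correspondences (no distortion model)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = [X, Y, Z, 1.0]
        # Each correspondence contributes two linear equations in the 12
        # entries of P (stacked row by row).
        rows.append([0.0] * 4 + [-w for w in Xh] + [v * w for w in Xh])
        rows.append(Xh + [0.0] * 4 + [-u * w for w in Xh])
    # The flattened camera matrix is the right null vector of the system.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

# Synthetic check against an assumed ground-truth camera (values are
# illustrative, not from any real calibration):
K = np.array([[700.0, 0.0, 300.0],
              [0.0, 700.0, 200.0],
              [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [0.2], [1.0]])])
world = [(0, 0, 1), (1, 0, 2), (0, 1, 3), (1, 1, 2.5),
         (0.5, -0.3, 1.8), (-0.4, 0.6, 2.2), (0.7, 0.9, 3.1)]
image = []
for X, Y, Z in world:
    x = P_true @ np.array([X, Y, Z, 1.0])
    image.append((x[0] / x[2], x[1] / x[2]))
P_est = dlt_camera_matrix(world, image)
P_est *= P_true[2, 3] / P_est[2, 3]  # remove the arbitrary overall scale
```

Note that the DLT solution is only defined up to scale and degrades if all calibration points are coplanar; practical implementations normalise the data and refine the result nonlinearly.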

Zhang's method

Zhang's camera calibration method[2][3] employs abstract concepts like the image of the absolute conic and circular points.


Assume we have a homography \textbf{H} that maps points x_\pi on a "probe plane" \pi to points x on the image.

The circular points I, J = [1\, \pm j\, 0]^T lie on both our probe plane \pi and on the absolute conic \Omega_\infty. Lying on \Omega_\infty of course means they are also projected onto the image of the absolute conic (IAC) \omega, thus x_1^T \omega x_1 = 0 and x_2^T \omega x_2 = 0. The circular points project as

\begin{align}
x_1 &= \textbf{H} I = \begin{bmatrix} h_1 & h_2 & h_3 \end{bmatrix} \begin{bmatrix} 1 \\ j \\ 0 \end{bmatrix} = h_1 + j h_2 \\
x_2 &= \textbf{H} J = \begin{bmatrix} h_1 & h_2 & h_3 \end{bmatrix} \begin{bmatrix} 1 \\ -j \\ 0 \end{bmatrix} = h_1 - j h_2
\end{align}

We can actually ignore x_2, since substituting our new expression for x_1 yields the same constraints:

\begin{align}
x_1^T \omega x_1 &= \left( h_1 + j h_2 \right)^T \omega \left( h_1 + j h_2 \right) \\
 &= \left( h_1^T + j h_2^T \right) \omega \left( h_1 + j h_2 \right) \\
 &= h_1^T \omega h_1 - h_2^T \omega h_2 + j \left( 2 h_1^T \omega h_2 \right) \\
 &= 0
\end{align}

where the symmetry of \omega gives h_2^T \omega h_1 = h_1^T \omega h_2. Since both the real and the imaginary part must vanish, each observed homography supplies two linear constraints on \omega: h_1^T \omega h_1 = h_2^T \omega h_2 and h_1^T \omega h_2 = 0.
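The constraint x_1^T \omega x_1 = 0 (and similarly for x_2) can be checked numerically: for points on the probe plane (taken as z = 0 in plane coordinates), the homography is H = K [r_1\; r_2\; T], and both the real-part and imaginary-part expressions evaluate to zero. All values below are assumptions for the example:

```python
import numpy as np

# Assumed intrinsic matrix (including a nonzero skew for generality).
K = np.array([[800.0, 20.0, 320.0],
              [  0.0, 780.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Assumed pose of the probe plane: rotation about the y-axis plus translation.
t = 0.3
R = np.array([[ np.cos(t), 0.0, np.sin(t)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(t), 0.0, np.cos(t)]])
T = np.array([[0.1], [0.2], [2.0]])

# For plane points [x, y, 1]^T, the homography is H = K [r1 r2 T].
H = K @ np.hstack([R[:, :1], R[:, 1:2], T])
h1, h2 = H[:, 0], H[:, 1]

# Image of the absolute conic: omega = K^{-T} K^{-1}.
Kinv = np.linalg.inv(K)
omega = Kinv.T @ Kinv

# Both constraint expressions vanish (up to floating-point error), because
# h1^T w h2 = r1 . r2 = 0 and h1^T w h1 - h2^T w h2 = |r1|^2 - |r2|^2 = 0.
c1 = h1 @ omega @ h2
c2 = h1 @ omega @ h1 - h2 @ omega @ h2
```

In the actual calibration, these constraints are collected over several views of the plane and solved linearly for \omega, from which K is recovered by Cholesky-type factorisation.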

Selby's method (for X-ray cameras)

Selby's camera calibration method[4] addresses the auto-calibration of X-ray camera systems. X-ray camera systems, consisting of the X-ray generating tube and a solid-state detector, can be modelled as pinhole camera systems comprising 9 intrinsic and extrinsic camera parameters. Intensity-based registration of an arbitrary X-ray image against a reference model (such as a tomographic dataset) can then be used to determine the relative camera parameters without the need for a special calibration body or any ground-truth data.

References

  1. ^ Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press. pp. 155–157. ISBN 0-521-54051-8. 
  2. ^ Z. Zhang, "A flexible new technique for camera calibration", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 11, pages 1330–1334, 2000
  3. ^ P. Sturm and S. Maybank, "On plane-based camera calibration: a general algorithm, singularities, applications", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 432–437, Fort Collins, CO, USA, June 1999
  4. ^ Boris Peter Selby et al., "Patient positioning with X-ray detector self-calibration for image guided therapy", Australasian Physical & Engineering Science in Medicine, Vol.34, No.3, pages 391–400, 2011