3D projection

3D projection is any method of mapping three-dimensional points to a two-dimensional plane. As most current methods for displaying graphical data are based on planar two-dimensional media, the use of this type of projection is widespread, especially in computer graphics, engineering and drafting.

Orthographic projection

When the human eye looks at a scene, objects in the distance appear smaller than objects close by. Orthographic projection ignores this effect to allow the creation of to-scale drawings for construction and engineering.

Orthographic projections are a small set of transforms often used to show profile, detail or precise measurements of a three-dimensional object. Common names for orthographic projections include plane, cross-section, bird's-eye, and elevation.

If the normal of the viewing plane (the camera direction) is parallel to one of the primary axes (the x, y, or z axis), the mathematical transformation is as follows. To project the 3D point (a_x, a_y, a_z) onto the 2D point (b_x, b_y) using an orthographic projection parallel to the y axis (profile view), the following equations can be used:


b_x = s_x a_x + c_x

b_y = s_z a_z + c_z

where the vector s is an arbitrary scale factor, and c is an arbitrary offset. These constants are optional, and can be used to properly align the viewport. Using matrix multiplication, the equations become:


 \begin{bmatrix}
   {b_x }  \\
   {b_y }  \\
\end{bmatrix} = \begin{bmatrix}
   {s_x } & 0 & 0 \\
   0 & 0 & {s_z }  \\
\end{bmatrix}\begin{bmatrix}
   {a_x }  \\
   {a_y }  \\
   {a_z }  \\
\end{bmatrix} + \begin{bmatrix}
   {c_x }  \\
   {c_z }  \\
\end{bmatrix}
.
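The orthographic equations above can be sketched directly in code. This is a minimal illustration; the function name and default arguments are assumptions, not a standard API:

```python
def orthographic_project(a, s=(1.0, 1.0), c=(0.0, 0.0)):
    """Orthographic projection along the y axis (profile view):
    b_x = s_x * a_x + c_x,  b_y = s_z * a_z + c_z.
    a is the 3D point (a_x, a_y, a_z); s scales, c offsets the viewport."""
    ax, ay, az = a
    # a_y is simply discarded: depth along the view direction has no effect.
    return (s[0] * ax + c[0], s[1] * az + c[1])
```

Note that the point's coordinate along the viewing axis (here a_y) does not appear in the result, which is exactly why orthographic images show no foreshortening.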

While orthographically projected images represent the three-dimensional nature of the projected object, they do not represent it as it would be recorded photographically or perceived by a viewer observing it directly. In particular, parallel lengths in an orthographically projected image are drawn at the same scale regardless of whether they are far from or near to the virtual viewer. As a result, lengths near the viewer are not foreshortened as they would be in a perspective projection.

Weak perspective projection

A "weak" perspective projection uses the same principles as an orthographic projection, but requires the scaling factor to be specified, thus ensuring that closer objects appear bigger in the projection, and vice versa. It can be seen as a hybrid between an orthographic and a perspective projection, and described either as a perspective projection with individual point depths Z_{i} replaced by an average constant depth Z_{ave},[1] or simply as an orthographic projection plus a scaling.[2]

The weak-perspective model thus approximates perspective projection while using a simpler model, similar to the pure (unscaled) orthographic projection. It is a reasonable approximation when the depth of the object along the line of sight is small compared to the distance from the camera, and the field of view is small. Under these conditions, it can be assumed that all points on a 3D object are at the same distance Z_{ave} from the camera without significant errors in the projection (compared to the full perspective model).
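The depth-averaging step can be sketched in a few lines of Python. The function name and the focal parameter f are illustrative assumptions:

```python
def weak_perspective(points, f=1.0):
    """Weak-perspective projection: every point (x, y, z) is scaled by
    the common factor f / z_ave, where z_ave is the average depth."""
    z_ave = sum(z for _, _, z in points) / len(points)
    s = f / z_ave
    # Individual depths are ignored after computing z_ave: this is an
    # orthographic projection plus a single uniform scaling.
    return [(s * x, s * y) for x, y, _ in points]
```

Because the scale factor is shared across all points, the result is exact only when every point actually lies at depth z_ave; the error grows with the object's depth extent relative to its distance from the camera.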

Perspective projection

When the human eye views a scene, objects in the distance appear smaller than objects close by; this is known as perspective. While orthographic projection ignores this effect to allow accurate measurements, perspective projection shows distant objects as smaller to provide additional realism.

The perspective projection requires a more involved definition as compared to orthographic projections. A conceptual aid to understanding the mechanics of this projection is to imagine the 2D projection as though the object(s) are being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. The following variables are defined to describe this transformation:

  • \mathbf{a}_{x,y,z} - the 3D position of a point A that is to be projected.
  • \mathbf{c}_{x,y,z} - the 3D position of a point C representing the camera.
  • \mathbf{\theta}_{x,y,z} - the orientation of the camera (represented, for instance, by Tait–Bryan angles).
  • \mathbf{e}_{x,y,z} - the viewer's position relative to the display surface.[3]

This results in:

  • \mathbf{b}_{x,y} - the 2D projection of \mathbf{a}.

When \mathbf{c}_{x,y,z}=\langle 0,0,0\rangle, and \mathbf{\theta}_{x,y,z} = \langle 0,0,0\rangle, the 3D vector \langle 1,2,0 \rangle is projected to the 2D vector \langle 1,2 \rangle.

Otherwise, to compute \mathbf{b}_{x,y} we first define a vector \mathbf{d}_{x,y,z} as the position of point A with respect to a coordinate system defined by the camera, with origin at C and rotated by \mathbf{\theta} with respect to the initial coordinate system. This is achieved by subtracting \mathbf{c} from \mathbf{a} and then applying a rotation by -\mathbf{\theta} to the result. This transformation is often called a camera transform, and can be expressed as follows, writing the rotation in terms of rotations about the x, y, and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):[4][5]


\begin{bmatrix}
   \mathbf{d}_x \\
   \mathbf{d}_y \\
   \mathbf{d}_z \\
\end{bmatrix}=\begin{bmatrix}
   1 & 0 & 0  \\
   0 & {\cos ( \mathbf{- \theta}_x ) } & { - \sin ( \mathbf{- \theta}_x ) }  \\
   0 & { \sin ( \mathbf{- \theta}_x ) } & { \cos ( \mathbf{- \theta}_x ) }  \\
\end{bmatrix}\begin{bmatrix}
   { \cos ( \mathbf{- \theta}_y ) } & 0 & { \sin ( \mathbf{- \theta}_y ) }  \\
   0 & 1 & 0  \\
   { - \sin ( \mathbf{- \theta}_y ) } & 0 & { \cos ( \mathbf{- \theta}_y ) }  \\
\end{bmatrix}\begin{bmatrix}
   { \cos ( \mathbf{- \theta}_z ) } & { - \sin ( \mathbf{- \theta}_z ) } & 0  \\
   { \sin ( \mathbf{- \theta}_z ) } & { \cos ( \mathbf{- \theta}_z ) } & 0  \\
   0 & 0 & 1  \\
\end{bmatrix}\left( {\begin{bmatrix}
   \mathbf{a}_x  \\
   \mathbf{a}_y  \\
   \mathbf{a}_z  \\
\end{bmatrix} - \begin{bmatrix}
   \mathbf{c}_x  \\
   \mathbf{c}_y  \\
   \mathbf{c}_z  \\
\end{bmatrix}} \right)

This representation corresponds to rotating by three Euler angles (more properly, Tait–Bryan angles), using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". Note that if the camera is not rotated (\mathbf{\theta}_{x,y,z} = \langle 0,0,0\rangle), then the matrices drop out (as identities), and this reduces to simply a shift: \mathbf{d} = \mathbf{a} - \mathbf{c}.

Alternatively, without using matrices (here we replace a_x - c_x with x and so on, and abbreviate \cos(\theta_\alpha) to c_\alpha and \sin(\theta_\alpha) to s_\alpha):


\begin{array}{lcl}
	d_x = c_y (s_z \mathbf{y}+c_z \mathbf{x})-s_y \mathbf{z} \\
	d_y = s_x (c_y \mathbf{z}+s_y (s_z \mathbf{y}+c_z \mathbf{x}))+c_x (c_z \mathbf{y}-s_z \mathbf{x}) \\
	d_z = c_x (c_y \mathbf{z}+s_y (s_z \mathbf{y}+c_z \mathbf{x}))-s_x (c_z \mathbf{y}-s_z \mathbf{x}) \\
\end{array}
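As a numerical check on the sign conventions, the camera transform can be written directly with a matrix library. This is a minimal NumPy sketch; the function name and argument order are illustrative, not from the article:

```python
import numpy as np

def camera_transform(a, c, theta):
    """Camera transform: d = Rx(-tx) @ Ry(-ty) @ Rz(-tz) @ (a - c),
    rotating about the intrinsic axes in the order x, y, z."""
    tx, ty, tz = theta

    def rx(t):  # rotation about the x axis
        return np.array([[1, 0, 0],
                         [0, np.cos(t), -np.sin(t)],
                         [0, np.sin(t),  np.cos(t)]])

    def ry(t):  # rotation about the y axis
        return np.array([[ np.cos(t), 0, np.sin(t)],
                         [0, 1, 0],
                         [-np.sin(t), 0, np.cos(t)]])

    def rz(t):  # rotation about the z axis
        return np.array([[np.cos(t), -np.sin(t), 0],
                         [np.sin(t),  np.cos(t), 0],
                         [0, 0, 1]])

    return rx(-tx) @ ry(-ty) @ rz(-tz) @ (np.asarray(a, float) - np.asarray(c, float))
```

With \theta = (0, 0, 0) the rotation matrices become identities and the function reduces to the simple shift d = a - c, matching the earlier remark.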

This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; literature also may use x/z):[6]


\begin{array}{lcl}
 \mathbf{b}_x &= & \frac{\mathbf{e}_z}{\mathbf{d}_z} \mathbf{d}_x - \mathbf{e}_x \\
 \mathbf{b}_y &= & \frac{\mathbf{e}_z}{\mathbf{d}_z} \mathbf{d}_y - \mathbf{e}_y\\
\end{array}.

Or, in matrix form using homogeneous coordinates, the system


\begin{bmatrix}
   \mathbf{f}_x \\
   \mathbf{f}_y \\
   \mathbf{f}_z \\
   \mathbf{f}_w \\
\end{bmatrix}=\begin{bmatrix}
   1 & 0 & -\frac{\mathbf{e}_x}{\mathbf{e}_z} & 0 \\
   0 & 1 & -\frac{\mathbf{e}_y}{\mathbf{e}_z} & 0 \\
   0 & 0 & 1 & 0 \\
   0 & 0 & 1/\mathbf{e}_z & 0 \\
\end{bmatrix}\begin{bmatrix}
   \mathbf{d}_x  \\
   \mathbf{d}_y  \\
   \mathbf{d}_z  \\
   1 \\
\end{bmatrix}

in conjunction with an argument using similar triangles, leads to division by the homogeneous coordinate, giving


\begin{array}{lcl}
 \mathbf{b}_x &= &\mathbf{f}_x / \mathbf{f}_w \\
 \mathbf{b}_y &= &\mathbf{f}_y / \mathbf{f}_w \\
\end{array}.
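The homogeneous formulation and the perspective divide can be sketched together as follows. NumPy and the function name are assumptions for illustration:

```python
import numpy as np

def perspective_project(d, e):
    """Map a camera-space point d to screen coordinates b for a viewer
    at position e relative to the display surface, via the homogeneous
    projection matrix followed by division by f_w."""
    ex, ey, ez = e
    proj = np.array([[1, 0, -ex / ez, 0],
                     [0, 1, -ey / ez, 0],
                     [0, 0, 1,        0],
                     [0, 0, 1 / ez,   0]], dtype=float)
    f = proj @ np.array([d[0], d[1], d[2], 1.0])
    # Divide by the homogeneous coordinate f_w = d_z / e_z.
    return f[0] / f[3], f[1] / f[3]
```

For d = (1, 2, 2) and a viewer at e = (0, 0, 1) this gives b = (0.5, 1.0), in agreement with the explicit formula b_x = (e_z/d_z) d_x - e_x.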

The distance of the viewer from the display surface, \mathbf{e}_z, directly relates to the field of view, where \alpha=2 \cdot \tan^{-1}(1/\mathbf{e}_z) is the viewed angle. (Note: This assumes that you map the points (-1,-1) and (1,1) to the corners of your viewing surface)
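This relation between viewer distance and viewing angle is easy to compute directly (a sketch; the function names are illustrative):

```python
import math

def fov(ez):
    """Viewing angle alpha = 2 * atan(1 / e_z), in radians, assuming the
    points (-1,-1) and (1,1) map to the corners of the viewing surface."""
    return 2 * math.atan(1 / ez)

def viewer_distance(alpha):
    """Inverse relation: e_z = 1 / tan(alpha / 2)."""
    return 1 / math.tan(alpha / 2)
```

For example, a viewer distance of e_z = 1 corresponds to a 90° field of view, and larger e_z values give narrower, more telephoto-like projections.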

The above equations can also be rewritten as:


\begin{array}{lcl}
 \mathbf{b}_x= (\mathbf{d}_x \mathbf{s}_x ) / (\mathbf{d}_z \mathbf{r}_x) \mathbf{r}_z\\
 \mathbf{b}_y= (\mathbf{d}_y \mathbf{s}_y ) / (\mathbf{d}_z \mathbf{r}_y) \mathbf{r}_z\\
\end{array}.

In which \mathbf{s}_{x,y} is the display size, \mathbf{r}_{x,y} is the recording surface size (CCD or film), \mathbf{r}_z is the distance from the recording surface to the entrance pupil (camera center), and \mathbf{d}_z is the distance, from the 3D point being projected, to the entrance pupil.
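The rewritten form maps onto code directly (a sketch; names are illustrative: s is the display size, r the recording-surface size, and rz the recording-surface-to-entrance-pupil distance):

```python
def project_to_display(d, s, r, rz):
    """b_{x,y} = d_{x,y} * s_{x,y} * r_z / (d_z * r_{x,y}).
    d is the camera-space point; its depth d_z drives the perspective."""
    dx, dy, dz = d
    return (dx * s[0] * rz / (dz * r[0]),
            dy * s[1] * rz / (dz * r[1]))
```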

Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display media.

Diagram

[Diagram: perspective transform]

To determine which screen x-coordinate corresponds to a point at (A_x, A_z), multiply the point coordinates by:

B_x = A_x \frac{B_z}{A_z}

where

B_x is the screen x coordinate
A_x is the model x coordinate
B_z is the focal length—the axial distance from the camera center to the image plane
A_z is the subject distance.

Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above diagram and equation.
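The diagram's relation is a one-liner in code (the function name is illustrative):

```python
def screen_coordinate(a, az, bz):
    """B = A * B_z / A_z, valid for either the x or the y coordinate.
    a: model coordinate; az: subject distance; bz: focal length."""
    return a * bz / az
```

Doubling the subject distance A_z halves the projected size, as expected for a perspective projection.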

References

  1. ^ Subhashis Banerjee (2002-02-18). "The Weak-Perspective Camera". 
  2. ^ Alter, T. D. (July 1992). 3D Pose from 3 Corresponding Points under Weak-Perspective Projection (Technical report). MIT AI Lab. 
  3. ^ Ingrid Carlbom, Joseph Paciorek (1978). "Planar Geometric Projections and Viewing Transformations". ACM Computing Surveys 10 (4): 465–502. doi:10.1145/356744.356750.
  4. ^ Riley, K F (2006). Mathematical Methods for Physics and Engineering. Cambridge University Press. pp. 931, 942. doi:10.2277/0521679710. ISBN 0-521-67971-0. 
  5. ^ Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). Reading, Mass.: Addison-Wesley Pub. Co. pp. 146–148. ISBN 0-201-02918-9. 
  6. ^ Sonka, M; Hlavac, V; Boyle, R (1995). Image Processing, Analysis & Machine Vision (2nd ed.). Chapman and Hall. p. 14. ISBN 0-412-45570-6. 
