Match moving


In visual effects, match moving is a technique that allows the insertion of computer graphics into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. The term is used loosely to describe several different methods of extracting camera motion information from a motion picture. Sometimes referred to as motion tracking or camera solving, match moving is related to rotoscoping and photogrammetry. Match moving is sometimes confused with motion capture, which records the motion of objects, often human actors, rather than the camera. Typically, motion capture requires special cameras and sensors and a controlled environment (although recent developments such as the Kinect camera have begun to change this). Match moving is also distinct from motion control photography, which uses mechanical hardware to execute multiple identical camera moves. Match moving, by contrast, is typically a software-based technology, applied after the fact to normal footage recorded in uncontrolled environments with an ordinary camera.

Match moving is primarily used to track the movement of a camera through a shot so that an identical virtual camera move can be reproduced in a 3D animation program. When new animated elements are composited back into the original live-action shot, they will appear in perfectly-matched perspective and therefore appear seamless.

As it is mostly software-based, match moving has become increasingly affordable as the cost of computer power has declined; it is now an established visual-effects tool and is even used in live television broadcasts as part of providing effects such as the virtual yellow first-down line in American football.


The process of match moving can be broken down into two steps.


The first step is identifying and tracking features. A feature is a specific point in the image that a tracking algorithm can lock onto and follow through multiple frames (SynthEyes calls them blips). Features are often selected because they are bright or dark spots, edges, or corners, depending on the particular tracking algorithm. Popular programs use template matching based on a normalized cross-correlation (NCC) score and root-mean-square (RMS) error. What is important is that each feature represents a specific point on the surface of a real object. As a feature is tracked it becomes a series of two-dimensional coordinates that represent the position of the feature across a series of frames. This series is referred to as a "track". Once tracks have been created they can be used immediately for 2D motion tracking, or they can be used to calculate 3D information.
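
As an illustration of this first step, the following sketch shows a brute-force template-matching tracker scored by normalized cross-correlation, in the spirit described above. It is a minimal sketch, not the algorithm of any particular program; the frame format, patch size, and search radius are illustrative assumptions.

    import numpy as np

    def ncc(patch_a, patch_b):
        """Normalized cross-correlation between two equal-sized grayscale patches."""
        a = patch_a - patch_a.mean()
        b = patch_b - patch_b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def track_feature(frames, start_xy, patch_size=15, search_radius=10):
        """Follow one feature through a list of 2D numpy frames; return its 2D track."""
        half = patch_size // 2
        x, y = start_xy
        template = frames[0][y - half:y + half + 1, x - half:x + half + 1]
        track = [(x, y)]
        for frame in frames[1:]:
            best_score, best_xy = -1.0, (x, y)
            for dy in range(-search_radius, search_radius + 1):
                for dx in range(-search_radius, search_radius + 1):
                    cx, cy = x + dx, y + dy
                    candidate = frame[cy - half:cy + half + 1, cx - half:cx + half + 1]
                    if candidate.shape != template.shape:
                        continue  # candidate window fell outside the frame
                    score = ncc(template, candidate)
                    if score > best_score:
                        best_score, best_xy = score, (cx, cy)
            x, y = best_xy
            track.append((x, y))
            template = frame[y - half:y + half + 1, x - half:x + half + 1]
        return track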


The second step involves solving for 3D motion. This process attempts to derive the motion of the camera by solving the inverse-projection of the 2D paths for the position of the camera. This process is referred to as calibration.

To explain further: when a point on the surface of a three-dimensional object is photographed, its position in the 2D frame can be calculated by a 3D projection function. We can consider a camera to be an abstraction that holds all the parameters necessary to model a camera in a real or virtual world. Therefore, a camera is a vector that includes as its elements the position of the camera, its orientation, focal length, and other possible parameters that define how the camera focuses light onto the film plane. Exactly how this vector is constructed is not important as long as there is a compatible projection function P.

The projection function P takes as its input a camera vector (denoted camera) and another vector, the position of a 3D point in space (denoted xyz), and returns a 2D point that has been projected onto a plane in front of the camera (denoted XY). We can express this as:

XY = P(camera, xyz)
An illustration of feature projection. Around the rendering of a 3D structure, red dots represent points that are chosen by the tracking process. Cameras at frame i and j project the view onto a plane depending on the parameters of the camera. In this way features tracked in 2D correspond to real points in a 3D space. Although this particular illustration is computer-generated, match moving is normally done on real objects.
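
The following is a minimal sketch of such a projection function P for an idealised pinhole camera, in which the camera vector is reduced to a position, a world-to-camera rotation matrix, and a focal length; real solvers also model lens distortion and the principal point, which are omitted here.

    import numpy as np

    def project(camera, xyz):
        """P(camera, xyz): return the 2D image point XY for a 3D world point xyz."""
        position, rotation, focal_length = camera
        # Transform the point into the camera's coordinate frame.
        p_cam = rotation @ (np.asarray(xyz, dtype=float) - position)
        # The perspective divide strips away the depth component.
        return focal_length * p_cam[:2] / p_cam[2]

    camera = (np.array([0.0, 0.0, -10.0]), np.eye(3), 35.0)  # illustrative values
    XY = project(camera, [1.0, 2.0, 5.0])                    # 2D point on the image plane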

The projection function transforms the 3D point and strips away the depth component. Without knowing the depth of the point, an inverse projection function can only return a set of possible 3D points that form a line emanating from the nodal point of the camera lens and passing through the projected 2D point. We can express the inverse projection as:

xyz ∈ P'(camera, XY) = {xyz : P(camera, xyz) = XY}
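
Continuing the pinhole sketch above, the inverse projection P' can only be written as a function of an unknown depth t along the ray; every positive t yields a 3D point that projects back onto the same XY. This is a sketch under the same simplifying assumptions as project().

    import numpy as np

    def inverse_project(camera, XY, t):
        """One candidate 3D point on the ray P'(camera, XY), at depth parameter t."""
        position, rotation, focal_length = camera
        # Direction, in camera space, of the ray through the image point (XY, focal_length).
        direction_cam = np.array([XY[0], XY[1], focal_length]) / focal_length
        # Rotate back into world space and step out from the nodal point.
        return position + t * (rotation.T @ direction_cam)

    # For any t > 0, project(camera, inverse_project(camera, XY, t)) returns XY again.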

Let's say we are in a situation where the features we are tracking are on the surface of a rigid object such as a building. Since we know that the real point xyz will remain in the same place in real space from one frame of the image to the next we can make the point a constant even though we do not know where it is. So:

xyz_i = xyz_j

where the subscripts i and j refer to arbitrary frames in the shot we are analyzing. Since this is always true, we know that:

P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) ≠ {}

Because the value of XY_i has been determined by the tracking program for all frames the feature is tracked through, we can solve the reverse projection function between any two frames as long as P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) is a small set. The set of possible camera vector pairs that solve the equation at frames i and j is denoted C_ij:

C_ij = {(camera_i, camera_j) : P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) ≠ {}}

So there is a set of camera vector pairs C_ij for which the intersection of the inverse projections of two points XY_i and XY_j is a non-empty, hopefully small, set centering around a theoretical stationary point xyz.

In other words, imagine a black point floating in a white void and a camera. For any position in space that we place the camera, there is a set of corresponding parameters (orientation, focal length, etc.) that will photograph that black point exactly the same way. Since C_ij has an infinite number of members, one point is never enough to determine the actual camera position.
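
The membership test behind C_ij can be sketched geometrically: for a candidate pair of camera vectors, the two back-projected rays for XY_i and XY_j should pass close to a common point, and the smallest distance between the rays serves as the residual. The helpers below build on the hypothetical inverse_project() sketch above; production solvers express the same two-view constraint through epipolar geometry.

    import numpy as np

    def ray(camera, XY):
        """Origin and unit direction of the back-projection ray for an image point."""
        origin = inverse_project(camera, XY, 0.0)
        direction = inverse_project(camera, XY, 1.0) - origin
        return origin, direction / np.linalg.norm(direction)

    def ray_distance(camera_i, XY_i, camera_j, XY_j):
        """Smallest distance between the two back-projected rays (zero if they intersect)."""
        o1, d1 = ray(camera_i, XY_i)
        o2, d2 = ray(camera_j, XY_j)
        n = np.cross(d1, d2)
        if np.linalg.norm(n) < 1e-9:                   # parallel rays: point-to-line distance
            return np.linalg.norm(np.cross(o2 - o1, d1))
        return abs(np.dot(o2 - o1, n)) / np.linalg.norm(n)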

As we start adding tracking points, we can narrow the possible camera positions. For example, if we have sets of points {xyz_{i,0}, ..., xyz_{i,n}} and {xyz_{j,0}, ..., xyz_{j,n}}, where i and j still refer to frames and n is an index to one of many tracking points we are following, we can derive a set of camera vector pair sets {C_{i,j,0}, ..., C_{i,j,n}}.

In this way multiple tracks allow us to narrow the possible camera parameters. The set of possible camera parameters that fit, F, is the intersection of all sets:

F = C_{i,j,0} ∩ ... ∩ C_{i,j,n}

The fewer elements there are in this set, the closer we can come to extracting the actual parameters of the camera. In reality, errors introduced by the tracking process require a more statistical approach to determining a good camera vector for each frame; optimization algorithms and bundle block adjustment are often utilized. Unfortunately, a camera vector has so many elements that, when every parameter is free, we still might not be able to narrow F down to a single possibility no matter how many features we track. The more we can restrict the various parameters, especially focal length, the easier it becomes to pinpoint the solution.
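
One common way to take that statistical approach is to minimise the total reprojection error with a generic least-squares optimiser, as sketched below. The sketch frees only the camera positions and the 3D points, holding rotation and focal length fixed purely to stay short; a real bundle adjustment also optimises orientation, focal length, and distortion. project() is the hypothetical pinhole function sketched earlier.

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(params, n_frames, n_points, observations, rotation, focal_length):
        """Stack the reprojection errors of every observed (frame, track) pair."""
        cams = params[:n_frames * 3].reshape(n_frames, 3)
        pts = params[n_frames * 3:].reshape(n_points, 3)
        errors = []
        for (frame, point), xy_observed in observations.items():
            camera = (cams[frame], rotation, focal_length)
            errors.extend(project(camera, pts[point]) - np.asarray(xy_observed))
        return np.asarray(errors)

    def solve(initial_cams, initial_pts, observations, rotation, focal_length):
        """observations maps (frame index, track index) -> observed 2D coordinates."""
        x0 = np.concatenate([initial_cams.ravel(), initial_pts.ravel()])
        result = least_squares(residuals, x0,
                               args=(len(initial_cams), len(initial_pts),
                                     observations, rotation, focal_length))
        return result.x   # refined camera positions and 3D points, still packed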

In all, the 3D solving process is the process of narrowing down the possible solutions to the motion of the camera until we reach one that suits the needs of the composite we are trying to create.

Point-cloud projection[edit]

Once the camera position has been determined for every frame, it is then possible to estimate the position of each feature in real space by inverse projection. The resulting set of points is often referred to as a point cloud because of its raw, nebula-like appearance. Since point clouds often reveal some of the shape of the 3D scene, they can be used as a reference for placing synthetic objects or by a reconstruction program to create a 3D version of the actual scene.
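
A minimal sketch of this step, reusing the hypothetical ray() helper above: each track's 3D position is estimated as the point that is closest, in a least-squares sense, to all of its back-projected rays.

    import numpy as np

    def triangulate(cameras, track):
        """cameras and track hold the per-frame camera vectors and 2D coordinates of one feature."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for camera, XY in zip(cameras, track):
            origin, d = ray(camera, XY)
            M = np.eye(3) - np.outer(d, d)   # projection onto the plane perpendicular to the ray
            A += M
            b += M @ origin
        return np.linalg.solve(A, b)

    # point_cloud = [triangulate(cameras, track) for track in tracks]  # one 3D point per track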

Ground-plane determination[edit]

The camera and point cloud need to be oriented in some kind of space. Therefore, once calibration is complete, it is necessary to define a ground plane. Normally, this is a unit plane that determines the scale, orientation and origin of the projected space. Some programs attempt to do this automatically, though more often the user defines this plane. Since shifting the ground plane performs a simple transformation of all of the points, the actual position of the plane is really a matter of convenience.
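
A minimal sketch of a user-defined ground plane: three point-cloud points picked as lying on the ground define a rotation and origin that re-orient the scene so that plane becomes z = 0, and the same transform is then applied to every point (and camera position). The three-point interface is an illustrative assumption.

    import numpy as np

    def ground_plane_transform(p0, p1, p2):
        """Rotation and origin that map the plane through p0, p1, p2 onto z = 0."""
        x_axis = (p1 - p0) / np.linalg.norm(p1 - p0)
        z_axis = np.cross(p1 - p0, p2 - p0)
        z_axis = z_axis / np.linalg.norm(z_axis)       # plane normal becomes the new "up"
        y_axis = np.cross(z_axis, x_axis)
        rotation = np.stack([x_axis, y_axis, z_axis])  # rows: new axes in world coordinates
        return rotation, p0

    def to_ground(points, rotation, origin):
        """Express points (N x 3) in the ground-plane coordinate system."""
        return (np.asarray(points) - origin) @ rotation.T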


Reconstruction is the interactive process of recreating a photographed object using tracking data. This technique is related to photogrammetry. In this particular case we are referring to using match moving software to reconstruct a scene from incidental footage.

A reconstruction program can create three-dimensional objects that mimic the real objects from the photographed scene. Using data from the point cloud and the user's estimation, the program can create a virtual object and then extract a texture from the footage that can be projected onto the virtual object as a surface texture.

2D vs. 3D[edit]

Match moving has two forms. Some compositing programs, such as Shake, Adobe After Effects, and Discreet Combustion, include two-dimensional motion tracking capabilities. Two-dimensional match moving only tracks features in two-dimensional space, without any concern for camera movement or distortion. It can be used to add motion blur or image stabilization effects to footage. This technique is sufficient to create realistic effects when the original footage does not include major changes in camera perspective. For example, a billboard deep in the background of a shot can often be replaced using two-dimensional tracking.
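
A minimal sketch of such a purely two-dimensional match move: an inserted graphic simply follows a 2D track from frame to frame, with no camera solve involved. The frame and graphic formats are illustrative assumptions, and the paste is a naive replacement rather than a proper matte-based composite.

    import numpy as np

    def composite_2d(frames, graphic, track):
        """Paste graphic into each frame so its top-left corner follows the 2D track."""
        h, w = graphic.shape[:2]
        output = []
        for frame, (x, y) in zip(frames, track):
            result = frame.copy()
            result[y:y + h, x:x + w] = graphic   # assumes the graphic stays inside the frame
            output.append(result)
        return output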

Three-dimensional match moving tools make it possible to extrapolate three-dimensional information from two-dimensional photography. These tools allow users to derive camera movement and other relative motion from arbitrary footage. The tracking information can be transferred to computer graphics software and used to animate virtual cameras and simulated objects. Several dedicated programs are capable of 3D match moving.

Automatic vs. interactive tracking[edit]

There are two methods by which motion information can be extracted from an image. Interactive tracking, sometimes referred to as "supervised tracking", relies on the user to follow features through a scene. Automatic tracking relies on computer algorithms to identify and track features through a shot. The tracked points' movements are then used to calculate a "solution". This solution is composed of all the camera's information, such as its motion, focal length, and lens distortion.
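
As an illustration, a per-frame solution might be represented by a record such as the one sketched below; the field names are hypothetical and do not correspond to any particular program's output format.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class CameraSolution:
        position: np.ndarray      # 3D camera position for this frame
        rotation: np.ndarray      # 3x3 orientation matrix
        focal_length: float       # in the chosen image units
        lens_distortion: float    # e.g. a single radial distortion coefficient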

The advantage of automatic tracking is that the computer can create many points faster than a human can. A large number of points can be analyzed with statistics to determine the most reliable data. The disadvantage of automatic tracking is that, depending on the algorithm, the computer can be easily confused as it tracks objects through the scene. Automatic tracking methods are particularly ineffective in shots involving fast camera motion such as that seen with hand-held camera work and in shots with repetitive subject matter like small tiles or any sort of regular pattern where one area is not very distinct. This tracking method also suffers when a shot contains a large amount of motion blur, making the small details it needs harder to distinguish.

The advantage of interactive tracking is that a human user can follow features through an entire scene and will not be confused by features that are not rigid. A human user can also determine where features are in a shot that suffers from motion blur; it is extremely difficult for an automatic tracker to correctly find features with high amounts of motion blur. The disadvantage of interactive tracking is that the user will inevitably introduce small errors as they follow objects through the scene, which can lead to what is called "drift".

Professional-level motion tracking is usually achieved using a combination of interactive and automatic techniques. An artist can remove points that are clearly anomalous and use "tracking mattes" to block confusing information out of the automatic tracking process. Tracking mattes are also employed to cover areas of the shot which contain moving elements such as an actor or a spinning ceiling fan.

Tracking mattes[edit]

A tracking matte is similar in concept to a garbage matte used in traveling matte compositing. However, the purpose of a tracking matte is to prevent tracking algorithms from using unreliable, irrelevant, or non-rigid tracking points. For example, in a scene where an actor walks in front of a background, the tracking artist will want to use only the background to track the camera through the scene, knowing that motion of the actor will throw off the calculations. In this case, the artist will construct a tracking matte to follow the actor through the scene, blocking that information from the tracking process.
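
In software, the effect of a tracking matte can be as simple as discarding candidate features that fall inside the matted region before the solver sees them, as in the sketch below; the per-frame rectangle stands in for a hand-animated roto shape.

    def filter_features(features, matte_rect):
        """Keep only candidate features (x, y) that lie outside the matte rectangle."""
        x0, y0, x1, y1 = matte_rect
        return [(x, y) for (x, y) in features
                if not (x0 <= x <= x1 and y0 <= y <= y1)]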


Since there are often multiple possible solutions to the calibration process and a significant amount of error can accumulate, the final step to match moving often involves refining the solution by hand. This could mean altering the camera motion itself or giving hints to the calibration mechanism. This interactive calibration is referred to as "refining".

Most match moving applications seem to be based on similar algorithms for tracking and calibration. Often, the initial results obtained are similar. However, each program appears to have different refining capabilities. Therefore, when choosing software, it is worth looking closely at the refining process.

Real time[edit]

On-set, real-time camera tracking is becoming more widely used in feature film production, allowing elements that will be inserted in post-production to be visualised live on-set. This has the benefit of helping the director and actors improve performances by actually seeing set extensions or CGI characters whilst (or shortly after) they do a take. They no longer need to perform to green/blue screens with no feedback on the end result. Eye-line references, actor positioning, and CGI interaction can now be done live on-set, giving everyone confidence that the shot is correct and will work in the final composite.

To achieve this, a number of components, from hardware to software, need to be combined. Software collects all six degrees of freedom of camera movement as well as metadata such as zoom, focus, iris and shutter settings from many different types of hardware devices, ranging from motion capture systems, such as the active LED marker-based system from PhaseSpace or passive systems such as Motion Analysis or Vicon, to rotary encoders fitted to camera cranes and dollies such as Technocranes and Fisher dollies, to inertial and gyroscopic sensors mounted directly to the camera. There are also laser-based tracking systems that can be attached to anything, including Steadicams, to track cameras outdoors in the rain at distances of up to 30 meters.
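
The kind of per-frame record such a pipeline streams to a 3D application can be sketched as below: the six degrees of freedom of the camera plus the lens metadata. The field names and the JSON-over-UDP transport are illustrative assumptions, not any vendor's actual protocol.

    import json
    import socket
    import time

    def send_camera_sample(sock, address, pose, lens):
        """Send one frame's worth of camera tracking data to a listening 3D application."""
        sample = {
            "timestamp": time.time(),
            "translation": pose["translation"],   # x, y, z in meters
            "rotation": pose["rotation"],         # pan, tilt, roll in degrees
            "zoom": lens["zoom"],
            "focus": lens["focus"],
            "iris": lens["iris"],
            "shutter": lens["shutter"],
        }
        sock.sendto(json.dumps(sample).encode("utf-8"), address)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # send_camera_sample(sock, ("127.0.0.1", 9000), pose, lens)  # called once per frame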

Motion control cameras can also be used as a source or destination for 3D camera data. Camera moves can be pre-visualised in advance and then converted into motion control data that drives a camera crane along precisely the same path as the 3D camera. Encoders on the crane can also be used in real time on-set to reverse this process and generate live 3D cameras. The data can be sent to any number of different 3D applications, allowing 3D artists to modify their CGI elements live on set as well. The main advantage is that set design issues that would be time-consuming and costly to address later down the line can be sorted out during the shooting process, ensuring that the actors "fit" within each environment for each shot whilst they do their performances.

Real-time motion capture systems can also be mixed into the camera data stream, allowing virtual characters to be inserted into live shots on-set. This dramatically improves the interaction between real actors and MoCap-driven virtual characters, as both plate and CG performances can be choreographed together.
