In computer vision, the Lucas–Kanade method is a widely used differential method for optical flow estimation developed by Bruce D. Lucas and Takeo Kanade. It assumes that the flow is essentially constant in a local neighbourhood of the pixel under consideration, and solves the basic optical flow equations for all the pixels in that neighbourhood, by the least squares criterion.

By combining information from several nearby pixels, the Lucas–Kanade method can often resolve the inherent ambiguity of the optical flow equation. It is also less sensitive to image noise than point-wise methods. On the other hand, since it is a purely local method, it cannot provide flow information in the interior of uniform regions of the image.

## Concept

The Lucas–Kanade method assumes that the displacement of the image contents between two nearby instants (frames) is small and approximately constant within a neighborhood of the point p under consideration. Thus the optical flow equation can be assumed to hold for all pixels within a window centered at p. Namely, the local image flow (velocity) vector $(V_{x},V_{y})$ must satisfy

$I_{x}(q_{1})V_{x}+I_{y}(q_{1})V_{y}=-I_{t}(q_{1})$

$I_{x}(q_{2})V_{x}+I_{y}(q_{2})V_{y}=-I_{t}(q_{2})$

$\vdots$

$I_{x}(q_{n})V_{x}+I_{y}(q_{n})V_{y}=-I_{t}(q_{n})$

where $q_{1},q_{2},\dots ,q_{n}$ are the pixels inside the window, and $I_{x}(q_{i}),I_{y}(q_{i}),I_{t}(q_{i})$ are the partial derivatives of the image $I$ with respect to position $x$, $y$ and time $t$, evaluated at the point $q_{i}$ at the current time.

These equations can be written in matrix form $Av=b$ , where

$A={\begin{bmatrix}I_{x}(q_{1})&I_{y}(q_{1})\\[10pt]I_{x}(q_{2})&I_{y}(q_{2})\\[10pt]\vdots &\vdots \\[10pt]I_{x}(q_{n})&I_{y}(q_{n})\end{bmatrix}}\quad \quad \quad v={\begin{bmatrix}V_{x}\\[10pt]V_{y}\end{bmatrix}}\quad \quad \quad b={\begin{bmatrix}-I_{t}(q_{1})\\[10pt]-I_{t}(q_{2})\\[10pt]\vdots \\[10pt]-I_{t}(q_{n})\end{bmatrix}}$

This system has more equations than unknowns and thus is usually over-determined. The Lucas–Kanade method obtains a compromise solution by the least squares principle. Namely, it solves the 2×2 system

$A^{T}Av=A^{T}b$ or
$\mathrm {v} =(A^{T}A)^{-1}A^{T}b$ where $A^{T}$ is the transpose of matrix $A$ . That is, it computes

${\begin{bmatrix}V_{x}\\[10pt]V_{y}\end{bmatrix}}={\begin{bmatrix}\sum _{i}I_{x}(q_{i})^{2}&\sum _{i}I_{x}(q_{i})I_{y}(q_{i})\\[10pt]\sum _{i}I_{y}(q_{i})I_{x}(q_{i})&\sum _{i}I_{y}(q_{i})^{2}\end{bmatrix}}^{-1}{\begin{bmatrix}-\sum _{i}I_{x}(q_{i})I_{t}(q_{i})\\[10pt]-\sum _{i}I_{y}(q_{i})I_{t}(q_{i})\end{bmatrix}}$

where the central matrix in the equation is a matrix inverse. The sums run from $i=1$ to $n$.
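The computation above can be sketched in a few lines of NumPy. This is an illustrative, minimal implementation (function and variable names are my own, not from any library): it forms the sums $A^{T}A$ and $A^{T}b$ over a square window and solves the 2×2 system.

```python
import numpy as np

def lucas_kanade_point(I0, I1, x, y, win=7):
    """Estimate the flow (Vx, Vy) at pixel (x, y) by solving the
    2x2 normal equations A^T A v = A^T b over a win x win window.
    I0, I1 are consecutive grayscale frames as float arrays."""
    # Central-difference spatial gradients and a forward temporal difference.
    Iy_full, Ix_full = np.gradient(I0)   # np.gradient returns (d/dy, d/dx)
    It_full = I1 - I0

    r = win // 2
    Ix = Ix_full[y - r:y + r + 1, x - r:x + r + 1].ravel()
    Iy = Iy_full[y - r:y + r + 1, x - r:x + r + 1].ravel()
    It = It_full[y - r:y + r + 1, x - r:x + r + 1].ravel()

    # Structure tensor A^T A and right-hand side A^T b.
    ATA = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                    [np.sum(Iy * Ix), np.sum(Iy * Iy)]])
    ATb = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(ATA, ATb)   # v = (A^T A)^{-1} A^T b
```

For example, on a smooth image translated by one pixel in $x$, the recovered vector should be close to $(1, 0)$ at any point with sufficient gradient.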

The matrix $A^{T}A$ is often called the structure tensor of the image at the point p.

## Weighted window

The plain least squares solution above gives the same importance to all n pixels $q_{i}$ in the window. In practice it is usually better to give more weight to the pixels that are closer to the central pixel p. For that, one uses the weighted version of the least squares equation,

$A^{T}WAv=A^{T}Wb$ or

$\mathrm {v} =(A^{T}WA)^{-1}A^{T}Wb$ where $W$ is an n×n diagonal matrix containing the weights $W_{ii}=w_{i}$ to be assigned to the equation of pixel $q_{i}$ . That is, it computes

${\begin{bmatrix}V_{x}\\[10pt]V_{y}\end{bmatrix}}={\begin{bmatrix}\sum _{i}w_{i}I_{x}(q_{i})^{2}&\sum _{i}w_{i}I_{x}(q_{i})I_{y}(q_{i})\\[10pt]\sum _{i}w_{i}I_{x}(q_{i})I_{y}(q_{i})&\sum _{i}w_{i}I_{y}(q_{i})^{2}\end{bmatrix}}^{-1}{\begin{bmatrix}-\sum _{i}w_{i}I_{x}(q_{i})I_{t}(q_{i})\\[10pt]-\sum _{i}w_{i}I_{y}(q_{i})I_{t}(q_{i})\end{bmatrix}}$ The weight $w_{i}$ is usually set to a Gaussian function of the distance between $q_{i}$ and p.
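The weighted solution only changes the sums: each term is multiplied by $w_{i}$. A minimal sketch, assuming the window's derivatives have already been flattened into vectors (the window size and $\sigma$ below are arbitrary illustrative choices):

```python
import numpy as np

def gaussian_weights(win=7, sigma=2.0):
    """Weights w_i: a Gaussian function of the distance between
    pixel q_i and the window center p."""
    r = win // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)).ravel()

def weighted_lk(Ix, Iy, It, w):
    """Solve A^T W A v = A^T W b given the window's flattened
    derivatives Ix, Iy, It and per-pixel weights w."""
    ATA = np.array([[np.sum(w * Ix * Ix), np.sum(w * Ix * Iy)],
                    [np.sum(w * Ix * Iy), np.sum(w * Iy * Iy)]])
    ATb = -np.array([np.sum(w * Ix * It), np.sum(w * Iy * It)])
    return np.linalg.solve(ATA, ATb)
```

Because $W$ is diagonal, it never needs to be formed explicitly; elementwise multiplication by the weight vector is equivalent.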

## Use conditions and techniques

In order for the equation $A^{T}Av=A^{T}b$ to be solvable, $A^{T}A$ must be invertible; equivalently, its eigenvalues must satisfy $\lambda _{1}\geq \lambda _{2}>0$. To avoid noise issues, $\lambda _{2}$ is usually required not to be too small. Also, if $\lambda _{1}/\lambda _{2}$ is too large, the point p lies on an edge, and the method suffers from the aperture problem. For the method to work properly, $\lambda _{1}$ and $\lambda _{2}$ must therefore be large enough and of similar magnitude. This is also the condition used in corner detection. This observation shows that one can easily tell, by inspecting a single image, which pixels are suitable for the Lucas–Kanade method.
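This suitability test is easy to express in code. A minimal sketch, where the thresholds `tau` and `ratio_max` are illustrative placeholders rather than standard values:

```python
import numpy as np

def trackable(ATA, tau=1e-2, ratio_max=10.0):
    """Check whether a point is suitable for Lucas-Kanade:
    both eigenvalues of the structure tensor A^T A must be
    large enough (lambda_2 > tau) and of similar magnitude
    (lambda_1 / lambda_2 < ratio_max)."""
    l1, l2 = np.linalg.eigvalsh(ATA)[::-1]   # sorted so l1 >= l2
    return l2 > tau and l1 / l2 < ratio_max
```

A nearly rank-deficient tensor (one tiny eigenvalue) corresponds to an edge, and a tensor with two tiny eigenvalues to a uniform region; both are rejected.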

One main assumption of this method is that the motion is small (for example, less than one pixel between the two frames). If the motion is large and violates this assumption, one technique is to reduce the resolution of the images first and then apply the Lucas–Kanade method.

## Improvements and extensions

The least-squares approach implicitly assumes that the errors in the image data have a Gaussian distribution with zero mean. If one expects the window to contain a certain percentage of "outliers" (grossly wrong data values, that do not follow the "ordinary" Gaussian error distribution), one may use statistical analysis to detect them, and reduce their weight accordingly.

The Lucas–Kanade method per se can be used only when the image flow vector $V_{x},V_{y}$ between the two frames is small enough for the differential optical flow equation to hold, which often means less than the pixel spacing. When the flow vector may exceed this limit, such as in stereo matching or warped document registration, the Lucas–Kanade method can still be used to refine a coarse estimate obtained by other means; for example, by extrapolating the flow vectors computed for previous frames, or by running the Lucas–Kanade algorithm on reduced-scale versions of the images. Indeed, the latter approach is the basis of the popular Kanade–Lucas–Tomasi (KLT) feature tracking algorithm.
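The coarse-to-fine scheme can be sketched as follows. This is illustrative scaffolding, not the exact KLT implementation: `estimate_flow` is an assumed single-level solver that refines an initial guess, and the pyramid here uses simple 2×2 block averaging.

```python
import numpy as np

def downsample(I):
    """Halve resolution by 2x2 block averaging (one pyramid level)."""
    return 0.25 * (I[0::2, 0::2] + I[1::2, 0::2]
                   + I[0::2, 1::2] + I[1::2, 1::2])

def coarse_to_fine(I0, I1, estimate_flow, levels=3):
    """Estimate flow at the coarsest level, then refine level by
    level, doubling the estimate each time the resolution doubles.
    `estimate_flow(I0, I1, guess)` is an assumed callback that
    refines an initial flow guess at a single scale."""
    pyr0, pyr1 = [I0], [I1]
    for _ in range(levels - 1):
        pyr0.append(downsample(pyr0[-1]))
        pyr1.append(downsample(pyr1[-1]))
    v = np.zeros(2)
    for a, b in zip(reversed(pyr0), reversed(pyr1)):
        v = estimate_flow(a, b, 2.0 * v)   # upscale the coarser estimate
    return v
```

At each finer level the previous estimate is doubled (since the pixel spacing halves) and used as the starting point, so the residual motion the single-level solver must handle stays small.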

A similar technique can be used to compute differential affine deformations of the image contents.