# Bicubic interpolation

Comparison of bicubic interpolation with some 1- and 2-dimensional interpolations. Black and red/yellow/green/blue dots correspond to the interpolated point and its neighbouring samples, respectively; their heights above the ground correspond to their values.

In mathematics, bicubic interpolation is an extension of cubic interpolation for interpolating data points on a two-dimensional regular grid. The interpolated surface is smoother than corresponding surfaces obtained by bilinear interpolation or nearest-neighbor interpolation. Bicubic interpolation can be accomplished using Lagrange polynomials, cubic splines, or the cubic convolution algorithm.

In image processing, bicubic interpolation is often chosen over bilinear or nearest-neighbor interpolation in image resampling, when speed is not an issue. In contrast to bilinear interpolation, which only takes 4 pixels (2×2) into account, bicubic interpolation considers 16 pixels (4×4). Images resampled with bicubic interpolation are smoother and have fewer interpolation artifacts.

## Computation

- Bicubic interpolation on the square ${\displaystyle [0,4]\times [0,4]}$, consisting of 25 unit squares patched together, as per Matplotlib's implementation. Colour indicates function value; the black dots are the locations of the prescribed data being interpolated. Note how the colour samples are not radially symmetric.
- Bilinear interpolation on the same dataset. Derivatives of the surface are not continuous across square boundaries.
- Nearest-neighbor interpolation on the same dataset.

Suppose the function values ${\displaystyle f}$ and the derivatives ${\displaystyle f_{x}}$, ${\displaystyle f_{y}}$ and ${\displaystyle f_{xy}}$ are known at the four corners ${\displaystyle (0,0)}$, ${\displaystyle (1,0)}$, ${\displaystyle (0,1)}$, and ${\displaystyle (1,1)}$ of the unit square. The interpolated surface can then be written as

${\displaystyle p(x,y)=\sum \limits _{i=0}^{3}\sum _{j=0}^{3}a_{ij}x^{i}y^{j}.}$

The interpolation problem consists of determining the 16 coefficients ${\displaystyle a_{ij}}$. Matching ${\displaystyle p(x,y)}$ with the function values yields four equations:

1. ${\displaystyle f(0,0)=p(0,0)=a_{00},}$
2. ${\displaystyle f(1,0)=p(1,0)=a_{00}+a_{10}+a_{20}+a_{30},}$
3. ${\displaystyle f(0,1)=p(0,1)=a_{00}+a_{01}+a_{02}+a_{03},}$
4. ${\displaystyle f(1,1)=p(1,1)=\textstyle \sum \limits _{i=0}^{3}\sum \limits _{j=0}^{3}a_{ij}.}$

Likewise, matching the derivatives in the ${\displaystyle x}$ and ${\displaystyle y}$ directions yields eight equations:

1. ${\displaystyle f_{x}(0,0)=p_{x}(0,0)=a_{10},}$
2. ${\displaystyle f_{x}(1,0)=p_{x}(1,0)=a_{10}+2a_{20}+3a_{30},}$
3. ${\displaystyle f_{x}(0,1)=p_{x}(0,1)=a_{10}+a_{11}+a_{12}+a_{13},}$
4. ${\displaystyle f_{x}(1,1)=p_{x}(1,1)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=0}^{3}a_{ij}i,}$
5. ${\displaystyle f_{y}(0,0)=p_{y}(0,0)=a_{01},}$
6. ${\displaystyle f_{y}(1,0)=p_{y}(1,0)=a_{01}+a_{11}+a_{21}+a_{31},}$
7. ${\displaystyle f_{y}(0,1)=p_{y}(0,1)=a_{01}+2a_{02}+3a_{03},}$
8. ${\displaystyle f_{y}(1,1)=p_{y}(1,1)=\textstyle \sum \limits _{i=0}^{3}\sum \limits _{j=1}^{3}a_{ij}j.}$

Finally, matching the mixed partial derivative ${\displaystyle f_{xy}}$ yields four equations:

1. ${\displaystyle f_{xy}(0,0)=p_{xy}(0,0)=a_{11},}$
2. ${\displaystyle f_{xy}(1,0)=p_{xy}(1,0)=a_{11}+2a_{21}+3a_{31},}$
3. ${\displaystyle f_{xy}(0,1)=p_{xy}(0,1)=a_{11}+2a_{12}+3a_{13},}$
4. ${\displaystyle f_{xy}(1,1)=p_{xy}(1,1)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=1}^{3}a_{ij}ij.}$

The expressions above have used the following identities:

${\displaystyle p_{x}(x,y)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=0}^{3}a_{ij}ix^{i-1}y^{j},}$
${\displaystyle p_{y}(x,y)=\textstyle \sum \limits _{i=0}^{3}\sum \limits _{j=1}^{3}a_{ij}x^{i}jy^{j-1},}$
${\displaystyle p_{xy}(x,y)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=1}^{3}a_{ij}ix^{i-1}jy^{j-1}.}$

This procedure yields a surface ${\displaystyle p(x,y)}$ on the unit square ${\displaystyle [0,1]\times [0,1]}$ that is continuous and has continuous derivatives. Bicubic interpolation on an arbitrarily sized regular grid can then be accomplished by patching together such bicubic surfaces, ensuring that the derivatives match on the boundaries.

Grouping the unknown parameters ${\displaystyle a_{ij}}$ in a vector

${\displaystyle \alpha =\left[{\begin{smallmatrix}a_{00}&a_{10}&a_{20}&a_{30}&a_{01}&a_{11}&a_{21}&a_{31}&a_{02}&a_{12}&a_{22}&a_{32}&a_{03}&a_{13}&a_{23}&a_{33}\end{smallmatrix}}\right]^{T}}$

and letting

${\displaystyle x=\left[{\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&f_{x}(0,0)&f_{x}(1,0)&f_{x}(0,1)&f_{x}(1,1)&f_{y}(0,0)&f_{y}(1,0)&f_{y}(0,1)&f_{y}(1,1)&f_{xy}(0,0)&f_{xy}(1,0)&f_{xy}(0,1)&f_{xy}(1,1)\end{smallmatrix}}\right]^{T},}$

the above system of equations can be reformulated as the linear system ${\displaystyle A\alpha =x}$.

Inverting ${\displaystyle A}$ gives the more useful relation ${\displaystyle \alpha =A^{-1}x}$, where

${\displaystyle A^{-1}=\left[{\begin{smallmatrix}1&0&0&0&0&0&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&1&0&0&0&0&0&0&0&0&0&0&0\\-3&3&0&0&-2&-1&0&0&0&0&0&0&0&0&0&0\\2&-2&0&0&1&1&0&0&0&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&1&0&0&0&0&0&0&0\\0&0&0&0&0&0&0&0&0&0&0&0&1&0&0&0\\0&0&0&0&0&0&0&0&-3&3&0&0&-2&-1&0&0\\0&0&0&0&0&0&0&0&2&-2&0&0&1&1&0&0\\-3&0&3&0&0&0&0&0&-2&0&-1&0&0&0&0&0\\0&0&0&0&-3&0&3&0&0&0&0&0&-2&0&-1&0\\9&-9&-9&9&6&3&-6&-3&6&-6&3&-3&4&2&2&1\\-6&6&6&-6&-3&-3&3&3&-4&4&-2&2&-2&-2&-1&-1\\2&0&-2&0&0&0&0&0&1&0&1&0&0&0&0&0\\0&0&0&0&2&0&-2&0&0&0&0&0&1&0&1&0\\-6&6&6&-6&-4&-2&4&2&-3&3&-3&3&-2&-1&-2&-1\\4&-4&-4&4&2&2&-2&-2&2&-2&2&-2&1&1&1&1\end{smallmatrix}}\right],}$

which allows ${\displaystyle \alpha }$ to be calculated quickly and easily.

The 16 coefficients can also be arranged in a more concise matrix form:

${\displaystyle {\begin{bmatrix}f(0,0)&f(0,1)&f_{y}(0,0)&f_{y}(0,1)\\f(1,0)&f(1,1)&f_{y}(1,0)&f_{y}(1,1)\\f_{x}(0,0)&f_{x}(0,1)&f_{xy}(0,0)&f_{xy}(0,1)\\f_{x}(1,0)&f_{x}(1,1)&f_{xy}(1,0)&f_{xy}(1,1)\end{bmatrix}}={\begin{bmatrix}1&0&0&0\\1&1&1&1\\0&1&0&0\\0&1&2&3\end{bmatrix}}{\begin{bmatrix}a_{00}&a_{01}&a_{02}&a_{03}\\a_{10}&a_{11}&a_{12}&a_{13}\\a_{20}&a_{21}&a_{22}&a_{23}\\a_{30}&a_{31}&a_{32}&a_{33}\end{bmatrix}}{\begin{bmatrix}1&1&0&0\\0&1&1&1\\0&1&0&2\\0&1&0&3\end{bmatrix}},}$

or

${\displaystyle {\begin{bmatrix}a_{00}&a_{01}&a_{02}&a_{03}\\a_{10}&a_{11}&a_{12}&a_{13}\\a_{20}&a_{21}&a_{22}&a_{23}\\a_{30}&a_{31}&a_{32}&a_{33}\end{bmatrix}}={\begin{bmatrix}1&0&0&0\\0&0&1&0\\-3&3&-2&-1\\2&-2&1&1\end{bmatrix}}{\begin{bmatrix}f(0,0)&f(0,1)&f_{y}(0,0)&f_{y}(0,1)\\f(1,0)&f(1,1)&f_{y}(1,0)&f_{y}(1,1)\\f_{x}(0,0)&f_{x}(0,1)&f_{xy}(0,0)&f_{xy}(0,1)\\f_{x}(1,0)&f_{x}(1,1)&f_{xy}(1,0)&f_{xy}(1,1)\end{bmatrix}}{\begin{bmatrix}1&0&-3&2\\0&0&3&-2\\0&1&-2&1\\0&0&-1&1\end{bmatrix}},}$

where

${\displaystyle p(x,y)={\begin{bmatrix}1&x&x^{2}&x^{3}\end{bmatrix}}{\begin{bmatrix}a_{00}&a_{01}&a_{02}&a_{03}\\a_{10}&a_{11}&a_{12}&a_{13}\\a_{20}&a_{21}&a_{22}&a_{23}\\a_{30}&a_{31}&a_{32}&a_{33}\end{bmatrix}}{\begin{bmatrix}1\\y\\y^{2}\\y^{3}\end{bmatrix}}.}$
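The second matrix form above translates directly into code. The following NumPy sketch (function and array names are illustrative, not from any standard library) recovers the coefficients from the 4×4 corner-data matrix laid out as in the equations above, then evaluates ${\displaystyle p(x,y)}$:

```python
import numpy as np

# Inverse basis matrix from the second matrix form above;
# the coefficient matrix is A = B @ F @ B.T.
B = np.array([[ 1,  0,  0,  0],
              [ 0,  0,  1,  0],
              [-3,  3, -2, -1],
              [ 2, -2,  1,  1]], dtype=float)

def bicubic_unit_square(F, x, y):
    """Evaluate p(x, y) on the unit square.

    F is the 4x4 matrix of corner data laid out as in the text:
        [[f(0,0),  f(0,1),  fy(0,0),  fy(0,1)],
         [f(1,0),  f(1,1),  fy(1,0),  fy(1,1)],
         [fx(0,0), fx(0,1), fxy(0,0), fxy(0,1)],
         [fx(1,0), fx(1,1), fxy(1,0), fxy(1,1)]]
    """
    A = B @ F @ B.T                       # coefficient matrix a_ij
    xv = np.array([1.0, x, x**2, x**3])   # row vector [1, x, x^2, x^3]
    yv = np.array([1.0, y, y**2, y**3])   # column vector [1, y, y^2, y^3]
    return xv @ A @ yv
```

For example, for ${\displaystyle f(x,y)=xy}$ the interpolant reproduces the function exactly, since exact derivative data for a polynomial of degree at most three in each variable is matched exactly.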

## Extension to rectilinear grids

Often, we wish to perform bicubic interpolation using data on a rectilinear grid, rather than the unit square. In this case, the identities for ${\displaystyle p_{x},p_{y},}$ and ${\displaystyle p_{xy}}$ become

${\displaystyle p_{x}(x,y)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=0}^{3}{\frac {a_{ij}ix^{i-1}y^{j}}{\Delta x}},}$
${\displaystyle p_{y}(x,y)=\textstyle \sum \limits _{i=0}^{3}\sum \limits _{j=1}^{3}{\frac {a_{ij}x^{i}jy^{j-1}}{\Delta y}},}$
${\displaystyle p_{xy}(x,y)=\textstyle \sum \limits _{i=1}^{3}\sum \limits _{j=1}^{3}{\frac {a_{ij}ix^{i-1}jy^{j-1}}{\Delta x\Delta y}},}$

where ${\displaystyle \Delta x}$ is the ${\displaystyle x}$ spacing of the cell containing the point ${\displaystyle (x,y)}$, and similarly for ${\displaystyle \Delta y}$. The most practical approach to computing the coefficients ${\displaystyle \alpha }$ is then to let

${\displaystyle x=\left[{\begin{smallmatrix}f(0,0)&f(1,0)&f(0,1)&f(1,1)&\Delta xf_{x}(0,0)&\Delta xf_{x}(1,0)&\Delta xf_{x}(0,1)&\Delta xf_{x}(1,1)&\Delta yf_{y}(0,0)&\Delta yf_{y}(1,0)&\Delta yf_{y}(0,1)&\Delta yf_{y}(1,1)&\Delta x\Delta yf_{xy}(0,0)&\Delta x\Delta yf_{xy}(1,0)&\Delta x\Delta yf_{xy}(0,1)&\Delta x\Delta yf_{xy}(1,1)\end{smallmatrix}}\right]^{T},}$

then to solve ${\displaystyle \alpha =A^{-1}x}$ with ${\displaystyle A}$ as before. Next, we compute normalized interpolating variables

${\displaystyle {\overline {x}}={\frac {x-x_{0}}{x_{1}-x_{0}}}}$,
${\displaystyle {\overline {y}}={\frac {y-y_{0}}{y_{1}-y_{0}}}}$

where ${\displaystyle x_{0},x_{1},y_{0},}$ and ${\displaystyle y_{1}}$ are the ${\displaystyle x}$ and ${\displaystyle y}$ coordinates of the grid points surrounding the point ${\displaystyle (x,y)}$. Then, the interpolating surface becomes

${\displaystyle p(x,y)=\sum \limits _{i=0}^{3}\sum _{j=0}^{3}a_{ij}{\overline {x}}^{i}{\overline {y}}^{j}.}$
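Following the recipe above, a cell of a rectilinear grid can be handled by scaling the corner derivatives by the cell size and evaluating the unit-square formula at the normalized coordinates. A minimal sketch (names are illustrative):

```python
import numpy as np

# Inverse basis matrix from the unit-square matrix form.
B = np.array([[ 1,  0,  0,  0],
              [ 0,  0,  1,  0],
              [-3,  3, -2, -1],
              [ 2, -2,  1,  1]], dtype=float)

def bicubic_cell(F, fx, fy, fxy, x0, x1, y0, y1, x, y):
    """Interpolate inside the cell [x0, x1] x [y0, y1].

    F, fx, fy, fxy are 2x2 arrays of corner values, with index [i][j]
    giving the value at (x_i, y_j).
    """
    dx, dy = x1 - x0, y1 - y0
    # Assemble the 4x4 corner-data matrix of the unit-square
    # formulation, with derivatives scaled by the cell size.
    G = np.block([[F,       dy * fy],
                  [dx * fx, dx * dy * fxy]])
    A = B @ G @ B.T
    xb, yb = (x - x0) / dx, (y - y0) / dy   # normalized coordinates
    return (np.array([1.0, xb, xb**2, xb**3]) @ A
            @ np.array([1.0, yb, yb**2, yb**3]))
```

As a sanity check, exact data for ${\displaystyle f(x,y)=x+y}$ on any cell is reproduced exactly.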

## Finding derivatives from function values

If the derivatives are unknown, they are typically approximated from the function values at points neighbouring the corners of the unit square, e.g. using finite differences.

To find either of the single derivatives, ${\displaystyle f_{x}}$ or ${\displaystyle f_{y}}$, using that method, find the slope between the two surrounding points in the appropriate axis. For example, to calculate ${\displaystyle f_{x}}$ for one of the points, find ${\displaystyle f(x,y)}$ for the points to the left and right of the target point and calculate their slope, and similarly for ${\displaystyle f_{y}}$.

To find the cross derivative ${\displaystyle f_{xy}}$, take the derivative in both axes, one at a time. For example, one can first use the ${\displaystyle f_{x}}$ procedure to find the ${\displaystyle x}$ derivatives of the points above and below the target point, then use the ${\displaystyle f_{y}}$ procedure on those values (rather than, as usual, the values of ${\displaystyle f}$ for those points) to obtain the value of ${\displaystyle f_{xy}(x,y)}$ for the target point. (Or one can do it in the opposite direction, first calculating ${\displaystyle f_{y}}$ and then ${\displaystyle f_{x}}$ from those. The two give equivalent results.)

At the edges of the dataset, where some of the surrounding points are missing, the missing values can be approximated by a number of methods. A simple and common method is to assume that the slope from the existing point to the target point continues without further change, and to use this slope to calculate a hypothetical value for the missing point.
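For interior grid nodes on a unit-spaced grid, the central-difference recipe described above can be sketched as follows (edge handling omitted; the function name is illustrative):

```python
import numpy as np

def corner_derivatives(f, i, j):
    """Estimate f_x, f_y, f_xy at interior grid node (i, j) by central
    differences, where f is a 2-D array of samples on a unit-spaced grid."""
    fx = (f[i + 1, j] - f[i - 1, j]) / 2.0   # slope between x-neighbours
    fy = (f[i, j + 1] - f[i, j - 1]) / 2.0   # slope between y-neighbours
    # Cross derivative: difference the x-slopes in the y direction.
    fxy = (f[i + 1, j + 1] - f[i - 1, j + 1]
           - f[i + 1, j - 1] + f[i - 1, j - 1]) / 4.0
    return fx, fy, fxy
```

The symmetric `fxy` formula gives the same result whichever axis is differenced first, matching the equivalence noted above.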

## Bicubic convolution algorithm

Bicubic spline interpolation requires the solution of the linear system described above for each grid cell. An interpolator with similar properties can be obtained by applying a convolution with the following kernel in both dimensions:

${\displaystyle W(x)={\begin{cases}(a+2)|x|^{3}-(a+3)|x|^{2}+1&{\text{for }}|x|\leq 1,\\a|x|^{3}-5a|x|^{2}+8a|x|-4a&{\text{for }}1<|x|<2,\\0&{\text{otherwise}},\end{cases}}}$

where ${\displaystyle a}$ is usually set to −0.5 or −0.75. Note that ${\displaystyle W(0)=1}$ and ${\displaystyle W(n)=0}$ for all nonzero integers ${\displaystyle n}$.
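The kernel can be written directly as a function; `keys_kernel` and its default parameter value are illustrative choices, not a library API:

```python
def keys_kernel(x, a=-0.5):
    """Cubic convolution kernel W(x) as defined above (a = -0.5 by default)."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0
```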

This approach was proposed by Keys, who showed that ${\displaystyle a=-0.5}$ produces third-order convergence with respect to the sampling interval of the original function.[1]

Using matrix notation for the common case ${\displaystyle a=-0.5}$, the interpolant can be expressed more conveniently:

${\displaystyle p(t)={\tfrac {1}{2}}{\begin{bmatrix}1&t&t^{2}&t^{3}\\\end{bmatrix}}{\begin{bmatrix}0&2&0&0\\-1&0&1&0\\2&-5&4&-1\\-1&3&-3&1\\\end{bmatrix}}{\begin{bmatrix}f_{-1}\\f_{0}\\f_{1}\\f_{2}\\\end{bmatrix}}}$

for ${\displaystyle t}$ between 0 and 1 in one dimension. Note that one-dimensional cubic convolution requires 4 sample points: for each query point, two samples lie to its left and two to its right. These points are indexed from −1 to 2 in this text, and the distance from the point indexed 0 to the query point is denoted by ${\displaystyle t}$ here.

In two dimensions, the interpolation is applied first in the ${\displaystyle x}$ direction and then in the ${\displaystyle y}$ direction:

${\displaystyle b_{-1}=p(t_{x},f_{(-1,-1)},f_{(0,-1)},f_{(1,-1)},f_{(2,-1)}),}$
${\displaystyle b_{0}=p(t_{x},f_{(-1,0)},f_{(0,0)},f_{(1,0)},f_{(2,0)}),}$
${\displaystyle b_{1}=p(t_{x},f_{(-1,1)},f_{(0,1)},f_{(1,1)},f_{(2,1)}),}$
${\displaystyle b_{2}=p(t_{x},f_{(-1,2)},f_{(0,2)},f_{(1,2)},f_{(2,2)}),}$
${\displaystyle p(x,y)=p(t_{y},b_{-1},b_{0},b_{1},b_{2}).}$
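The separable procedure above, with the ${\displaystyle a=-0.5}$ matrix form expanded, can be sketched as follows (names are illustrative):

```python
import numpy as np

def cubic1d(t, fm1, f0, f1, f2):
    """One-dimensional cubic convolution (a = -0.5), 0 <= t <= 1,
    expanded from the matrix form given above."""
    return 0.5 * (2.0 * f0
                  + t * (f1 - fm1)
                  + t**2 * (2.0 * fm1 - 5.0 * f0 + 4.0 * f1 - f2)
                  + t**3 * (3.0 * f0 - fm1 - 3.0 * f1 + f2))

def bicubic_conv(f, tx, ty):
    """Separable 2-D cubic convolution on a 4x4 patch.

    f[i + 1, j + 1] holds the sample f(i, j) for i, j in {-1, 0, 1, 2};
    (tx, ty) is the query point's offset from the sample indexed (0, 0).
    """
    # First pass: interpolate along x at each of the four y-levels j.
    b = [cubic1d(tx, *f[:, j + 1]) for j in (-1, 0, 1, 2)]
    # Second pass: interpolate the intermediate values along y.
    return cubic1d(ty, *b)
```

Because ${\displaystyle a=-0.5}$ reproduces polynomials up to degree two, linear test data is interpolated exactly.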

## Use in computer graphics

The lower half of this figure is a magnification of the upper half, showing how the apparent sharpness of the left-hand line is created. Bicubic interpolation causes overshoot, which increases acutance.

The bicubic algorithm is frequently used for scaling images and video for display (see bitmap resampling). It preserves fine detail better than the common bilinear algorithm.

However, due to the negative lobes of the kernel, it causes overshoot (haloing). This can cause clipping and is itself an artifact (see also ringing artifacts), but it increases acutance (apparent sharpness) and can be desirable.