# Binomial inverse theorem

In mathematics, the binomial inverse theorem is useful for expressing matrix inverses in different ways.

If A, U, B, V are matrices of sizes p×p, p×q, q×q, q×p, respectively, then

${\displaystyle \left(\mathbf {A} +\mathbf {UBV} \right)^{-1}=\mathbf {A} ^{-1}-\mathbf {A} ^{-1}\mathbf {UB} \left(\mathbf {B} +\mathbf {BVA} ^{-1}\mathbf {UB} \right)^{-1}\mathbf {BVA} ^{-1}}$

provided A and B + BVA−1UB are nonsingular. Nonsingularity of the latter requires that B−1 exist, since B + BVA−1UB = B(I + VA−1UB) and the rank of this product cannot exceed the rank of B.[1]
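The identity can be checked numerically. The following sketch uses NumPy with small random matrices (shapes and seed are arbitrary choices for illustration); with probability 1, the random matrices satisfy the nonsingularity conditions:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 4, 2

# Matrices of the sizes required by the theorem: p×p, p×q, q×q, q×p.
A = rng.standard_normal((p, p))
U = rng.standard_normal((p, q))
B = rng.standard_normal((q, q))
V = rng.standard_normal((q, p))

Ainv = np.linalg.inv(A)
inner = B + B @ V @ Ainv @ U @ B  # B + BVA^{-1}UB
rhs = Ainv - Ainv @ U @ B @ np.linalg.inv(inner) @ B @ V @ Ainv
lhs = np.linalg.inv(A + U @ B @ V)

print(np.allclose(lhs, rhs))  # True (up to floating-point tolerance)
```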

Since B is invertible, the two B factors flanking the inverted quantity on the right-hand side can be absorbed into it as (B−1)−1, which results in

${\displaystyle \left(\mathbf {A} +\mathbf {UBV} \right)^{-1}=\mathbf {A} ^{-1}-\mathbf {A} ^{-1}\mathbf {U} \left(\mathbf {B} ^{-1}+\mathbf {VA} ^{-1}\mathbf {U} \right)^{-1}\mathbf {VA} ^{-1}.}$

This is the Woodbury matrix identity, which can also be derived using matrix blockwise inversion.

A more general formula exists when B is singular and possibly even non-square:[1]

${\displaystyle (\mathbf {A+UBV} )^{-1}=\mathbf {A} ^{-1}-\mathbf {A} ^{-1}\mathbf {U} (\mathbf {I+BVA} ^{-1}\mathbf {U} )^{-1}\mathbf {BVA} ^{-1}.}$
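This more general formula can likewise be checked numerically even when B has deficient rank. A minimal sketch (the rank-1 construction of B is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 4, 3

A = rng.standard_normal((p, p))
U = rng.standard_normal((p, q))
V = rng.standard_normal((q, p))

# A rank-1 (hence singular) q×q matrix B, so B^{-1} does not exist.
b = rng.standard_normal((q, 1))
B = b @ b.T

Ainv = np.linalg.inv(A)
core = np.linalg.inv(np.eye(q) + B @ V @ Ainv @ U)  # (I + BVA^{-1}U)^{-1}
rhs = Ainv - Ainv @ U @ core @ B @ V @ Ainv
lhs = np.linalg.inv(A + U @ B @ V)

print(np.allclose(lhs, rhs))  # True (up to floating-point tolerance)
```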

Formulas also exist for certain cases in which A is singular.[2]

## Verification

First notice that

${\displaystyle \left(\mathbf {A} +\mathbf {UBV} \right)\mathbf {A} ^{-1}\mathbf {UB} =\mathbf {UB} +\mathbf {UBVA} ^{-1}\mathbf {UB} =\mathbf {U} \left(\mathbf {B} +\mathbf {BVA} ^{-1}\mathbf {UB} \right).}$

Now multiply the matrix we wish to invert by its alleged inverse:

${\displaystyle \left(\mathbf {A} +\mathbf {UBV} \right)\left(\mathbf {A} ^{-1}-\mathbf {A} ^{-1}\mathbf {UB} \left(\mathbf {B} +\mathbf {BVA} ^{-1}\mathbf {UB} \right)^{-1}\mathbf {BVA} ^{-1}\right)}$
${\displaystyle =\mathbf {I} _{p}+\mathbf {UBVA} ^{-1}-\mathbf {U} \left(\mathbf {B} +\mathbf {BVA} ^{-1}\mathbf {UB} \right)\left(\mathbf {B} +\mathbf {BVA} ^{-1}\mathbf {UB} \right)^{-1}\mathbf {BVA} ^{-1}}$
${\displaystyle =\mathbf {I} _{p}+\mathbf {UBVA} ^{-1}-\mathbf {UBVA} ^{-1}=\mathbf {I} _{p}\!}$

which verifies that it is the inverse.

Thus, if A−1 and ${\displaystyle \left(\mathbf {B} +\mathbf {BVA} ^{-1}\mathbf {UB} \right)^{-1}}$ exist, then ${\displaystyle \left(\mathbf {A} +\mathbf {UBV} \right)^{-1}}$ exists and is given by the theorem above.[3]

## Special cases

### First

If p = q and U = V = Ip is the identity matrix, then

${\displaystyle \left(\mathbf {A} +\mathbf {B} \right)^{-1}=\mathbf {A} ^{-1}-\mathbf {A} ^{-1}\mathbf {B} \left(\mathbf {B} +\mathbf {BA} ^{-1}\mathbf {B} \right)^{-1}\mathbf {BA} ^{-1}.}$

Remembering the identity

${\displaystyle \left(\mathbf {C} \mathbf {D} \right)^{-1}=\mathbf {D} ^{-1}\mathbf {C} ^{-1},}$

we can also express the previous equation in the simpler form

${\displaystyle \left(\mathbf {A} +\mathbf {B} \right)^{-1}=\mathbf {A} ^{-1}-\mathbf {A} ^{-1}\left(\mathbf {I} +\mathbf {B} \mathbf {A} ^{-1}\right)^{-1}\mathbf {B} \mathbf {A} ^{-1}.}$

Merging the two terms on the right-hand side of the above equation yields Hua's identity

${\displaystyle \left(\mathbf {A} +\mathbf {B} \right)^{-1}=\mathbf {A} ^{-1}-\left(\mathbf {A} +\mathbf {A} \mathbf {B} ^{-1}\mathbf {A} \right)^{-1}.}$
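Hua's identity can be spot-checked numerically as well; this sketch assumes A, B, A + B, and A + AB−1A are all nonsingular, which holds with probability 1 for random matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Hua's identity: (A+B)^{-1} = A^{-1} - (A + A B^{-1} A)^{-1}
lhs = np.linalg.inv(A + B)
rhs = np.linalg.inv(A) - np.linalg.inv(A + A @ np.linalg.inv(B) @ A)

print(np.allclose(lhs, rhs))  # True (up to floating-point tolerance)
```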

### Second

If B = Iq is the identity matrix and q = 1, then U is a column vector, written u, and V is a row vector, written vT. Then the theorem implies the Sherman-Morrison formula:

${\displaystyle \left(\mathbf {A} +\mathbf {uv} ^{\mathrm {T} }\right)^{-1}=\mathbf {A} ^{-1}-{\frac {\mathbf {A} ^{-1}\mathbf {uv} ^{\mathrm {T} }\mathbf {A} ^{-1}}{1+\mathbf {v} ^{\mathrm {T} }\mathbf {A} ^{-1}\mathbf {u} }}.}$

This is useful if one has a matrix A with a known inverse A−1 and one needs to invert matrices of the form A+uvT quickly for various u and v.
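The practical point is that the rank-1 update costs O(n²) once A−1 is known, versus O(n³) for a fresh inversion. A minimal NumPy sketch of the Sherman-Morrison update (sizes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n))
Ainv = np.linalg.inv(A)  # assumed already known

u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

# Sherman-Morrison: O(n^2) update of the known inverse.
denom = 1.0 + (v.T @ Ainv @ u).item()
updated = Ainv - (Ainv @ u @ v.T @ Ainv) / denom

# Agrees with directly inverting the updated matrix.
print(np.allclose(updated, np.linalg.inv(A + u @ v.T)))  # True
```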

### Third

If we set A = Ip and B = Iq, we get

${\displaystyle \left(\mathbf {I} _{p}+\mathbf {UV} \right)^{-1}=\mathbf {I} _{p}-\mathbf {U} \left(\mathbf {I} _{q}+\mathbf {VU} \right)^{-1}\mathbf {V} .}$
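This case is computationally attractive when q is much smaller than p, since only a q×q matrix needs to be inverted. A brief sketch (dimensions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
p, q = 6, 2
U = rng.standard_normal((p, q))
V = rng.standard_normal((q, p))

# Invert a q×q matrix instead of a p×p one.
small = np.linalg.inv(np.eye(q) + V @ U)
rhs = np.eye(p) - U @ small @ V
lhs = np.linalg.inv(np.eye(p) + U @ V)

print(np.allclose(lhs, rhs))  # True (up to floating-point tolerance)
```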

In particular, if q = 1, then

${\displaystyle \left(\mathbf {I} +\mathbf {uv} ^{\mathrm {T} }\right)^{-1}=\mathbf {I} -{\frac {\mathbf {uv} ^{\mathrm {T} }}{1+\mathbf {v} ^{\mathrm {T} }\mathbf {u} }},}$

which is a particular case of the Sherman-Morrison formula given above.