# Moore–Penrose inverse

In mathematics, and in particular linear algebra, the Moore–Penrose inverse ${\displaystyle A^{+}}$ of a matrix ${\displaystyle A}$ is the most widely known generalization of the inverse matrix.[1][2][3][4] It was independently described by E. H. Moore[5] in 1920, Arne Bjerhammar[6] in 1951, and Roger Penrose[7] in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse.

A common use of the pseudoinverse is to compute a "best fit" (least squares) solution to a system of linear equations that lacks a solution (see below under § Applications). Another use is to find the minimum (Euclidean) norm solution to a system of linear equations with multiple solutions. The pseudoinverse facilitates the statement and proof of results in linear algebra.

The pseudoinverse is defined and unique for all matrices whose entries are real or complex numbers. It can be computed using the singular value decomposition.

## Notation

The following conventions are adopted throughout this discussion.

• ${\displaystyle \mathbb {K} }$ denotes one of the fields of real or complex numbers, ${\displaystyle \mathbb {R} }$ or ${\displaystyle \mathbb {C} }$, respectively. The vector space of ${\displaystyle m\times n}$ matrices over ${\displaystyle \mathbb {K} }$ is denoted by ${\displaystyle \mathbb {K} ^{m\times n}}$.
• For ${\displaystyle A\in \mathbb {K} ^{m\times n}}$, ${\displaystyle A^{\top }}$ and ${\displaystyle A^{*}}$ denote the transpose and Hermitian transpose (also called conjugate transpose) respectively. If ${\displaystyle \mathbb {K} =\mathbb {R} }$, then ${\displaystyle A^{*}=A^{\top }}$.
• For ${\displaystyle A\in \mathbb {K} ^{m\times n}}$, ${\displaystyle \operatorname {ran} (A)}$ denotes the column space (image) of ${\displaystyle A}$ (the space spanned by the column vectors of ${\displaystyle A}$) and ${\displaystyle \operatorname {ker} (A)}$ denotes the kernel (null space) of ${\displaystyle A}$.
• Finally, for any positive integer ${\displaystyle n}$, ${\displaystyle I_{n}\in \mathbb {K} ^{n\times n}}$ denotes the ${\displaystyle n\times n}$ identity matrix.

## Definition

For ${\displaystyle A\in \mathbb {K} ^{m\times n}}$, a pseudoinverse of ${\displaystyle A}$ is defined as a matrix ${\displaystyle A^{+}\in \mathbb {K} ^{n\times m}}$ satisfying all of the following four criteria, known as the Moore–Penrose conditions:[7][8]

1. ${\displaystyle AA^{+}A=A}$   (${\displaystyle AA^{+}}$ need not be the general identity matrix, but it maps all column vectors of ${\displaystyle A}$ to themselves);
2. ${\displaystyle A^{+}AA^{+}=A^{+}}$   (${\displaystyle A^{+}}$ acts like a weak inverse);
3. ${\displaystyle (AA^{+})^{*}=AA^{+}}$   (${\displaystyle AA^{+}}$ is Hermitian);
4. ${\displaystyle (A^{+}A)^{*}=A^{+}A}$   (${\displaystyle A^{+}A}$ is also Hermitian).

${\displaystyle A^{+}}$ exists for any matrix ${\displaystyle A}$. When ${\displaystyle A}$ has full rank (that is, the rank of ${\displaystyle A}$ is ${\displaystyle \min\{m,n\}}$), ${\displaystyle A^{+}}$ can be expressed as a simple algebraic formula.

In particular, when ${\displaystyle A}$ has linearly independent columns (and thus matrix ${\displaystyle A^{*}A}$ is invertible), ${\displaystyle A^{+}}$ can be computed as

${\displaystyle A^{+}=(A^{*}A)^{-1}A^{*}.}$

This particular pseudoinverse constitutes a left inverse, since, in this case, ${\displaystyle A^{+}A=I}$.

When ${\displaystyle A}$ has linearly independent rows (matrix ${\displaystyle AA^{*}}$ is invertible), ${\displaystyle A^{+}}$ can be computed as

${\displaystyle A^{+}=A^{*}(AA^{*})^{-1}.}$

This is a right inverse, as ${\displaystyle AA^{+}=I}$.
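
As a concrete illustration (a sketch, not part of the formal treatment), the four conditions and the full-column-rank formula can be checked numerically with NumPy; the matrix below is an arbitrary example.

```python
# Sketch: verify the four Moore–Penrose conditions and the formula A+ = (A* A)^{-1} A*.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))                 # tall matrix; columns are (almost surely) independent
A_pinv = np.linalg.pinv(A)

assert np.allclose(A @ A_pinv @ A, A)                   # 1. A A+ A = A
assert np.allclose(A_pinv @ A @ A_pinv, A_pinv)         # 2. A+ A A+ = A+
assert np.allclose((A @ A_pinv).conj().T, A @ A_pinv)   # 3. A A+ is Hermitian
assert np.allclose((A_pinv @ A).conj().T, A_pinv @ A)   # 4. A+ A is Hermitian

# Full column rank: (A* A)^{-1} A* reproduces A+ and is a left inverse of A.
left_formula = np.linalg.inv(A.conj().T @ A) @ A.conj().T
assert np.allclose(left_formula, A_pinv)
assert np.allclose(A_pinv @ A, np.eye(3))
```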

## Properties

### Existence and uniqueness

The pseudoinverse exists and is unique: for any matrix ${\displaystyle A}$, there is precisely one matrix ${\displaystyle A^{+}}$ that satisfies the four properties of the definition.[8]

A matrix satisfying the first condition of the definition is known as a generalized inverse. If the matrix also satisfies the second condition, it is called a generalized reflexive inverse. Generalized inverses always exist but are not in general unique. Uniqueness is a consequence of the last two conditions.

### Basic properties

• If ${\displaystyle A}$ has real entries, then so does ${\displaystyle A^{+}}$.
• If ${\displaystyle A}$ is invertible, its pseudoinverse is its inverse. That is, ${\displaystyle A^{+}=A^{-1}}$.[9]:243
• The pseudoinverse of a zero matrix is its transpose.
• The pseudoinverse of the pseudoinverse is the original matrix: ${\displaystyle \left(A^{+}\right)^{+}=A}$.[9]:245
• Pseudoinversion commutes with transposition, conjugation, and taking the conjugate transpose:[9]:245
${\displaystyle \left(A^{\top }\right)^{+}=\left(A^{+}\right)^{\top }}$, ${\displaystyle \left({\overline {A}}\right)^{+}={\overline {A^{+}}}}$, ${\displaystyle \left(A^{*}\right)^{+}=\left(A^{+}\right)^{*}}$.
• The pseudoinverse of a scalar multiple of ${\displaystyle A}$ is the reciprocal multiple of ${\displaystyle A^{+}}$:
${\displaystyle \left(\alpha A\right)^{+}=\alpha ^{-1}A^{+}}$ for ${\displaystyle \alpha \neq 0}$.

#### Identities

The following identities can be used to cancel certain subexpressions or expand expressions involving pseudoinverses.

${\displaystyle {\begin{alignedat}{3}A^{+}={}&A^{+}&&A^{+*}&&A^{*}\\={}&A^{*}&&A^{+*}&&A^{+},\\A={}&A^{+*}&&A^{*}&&A\\={}&A&&A^{*}&&A^{+*},\\A^{*}={}&A^{*}&&A&&A^{+}\\={}&A^{+}&&A&&A^{*}.\end{alignedat}}}$

### Reduction to Hermitian case

The computation of the pseudoinverse is reducible to its construction in the Hermitian case. This is possible through the equivalences:

${\displaystyle A^{+}=\left(A^{*}A\right)^{+}A^{*},}$
${\displaystyle A^{+}=A^{*}\left(AA^{*}\right)^{+},}$

as ${\displaystyle A^{*}A}$ and ${\displaystyle AA^{*}}$ are Hermitian.

### Products

If ${\displaystyle A\in \mathbb {K} ^{m\times n},\ B\in \mathbb {K} ^{n\times p}}$, and if

1. ${\displaystyle A}$ has orthonormal columns (that is, ${\displaystyle A^{*}A=I_{n}}$),   or
2. ${\displaystyle B}$ has orthonormal rows (that is, ${\displaystyle BB^{*}=I_{n}}$),   or
3. ${\displaystyle A}$ has all columns linearly independent (full column rank) and ${\displaystyle B}$ has all rows linearly independent (full row rank),   or
4. ${\displaystyle B=A^{*}}$ (that is, ${\displaystyle B}$ is the conjugate transpose of ${\displaystyle A}$),

then

${\displaystyle (AB)^{+}=B^{+}A^{+}.}$

The last property yields the equalities

${\displaystyle {\begin{aligned}\left(AA^{*}\right)^{+}&=A^{+*}A^{+},\\\left(A^{*}A\right)^{+}&=A^{+}A^{+*}.\end{aligned}}}$

NB: The equality ${\displaystyle (AB)^{+}=B^{+}A^{+}}$ does not hold in general. See the counterexample:

${\displaystyle {\Biggl (}{\begin{pmatrix}1&1\\0&0\end{pmatrix}}{\begin{pmatrix}0&0\\1&1\end{pmatrix}}{\Biggr )}^{+}={\begin{pmatrix}1&1\\0&0\end{pmatrix}}^{+}={\begin{pmatrix}{\tfrac {1}{2}}&0\\{\tfrac {1}{2}}&0\end{pmatrix}}\quad \neq \quad {\begin{pmatrix}{\tfrac {1}{4}}&0\\{\tfrac {1}{4}}&0\end{pmatrix}}={\begin{pmatrix}0&{\tfrac {1}{2}}\\0&{\tfrac {1}{2}}\end{pmatrix}}{\begin{pmatrix}{\tfrac {1}{2}}&0\\{\tfrac {1}{2}}&0\end{pmatrix}}={\begin{pmatrix}0&0\\1&1\end{pmatrix}}^{+}{\begin{pmatrix}1&1\\0&0\end{pmatrix}}^{+}}$
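
This counterexample, and case 4 above where the product rule does hold, can be confirmed numerically (an illustrative sketch):

```python
# Sketch: (AB)+ != B+ A+ in general, but equality holds when B = A*.
import numpy as np

A = np.array([[1., 1.], [0., 0.]])
B = np.array([[0., 0.], [1., 1.]])

lhs = np.linalg.pinv(A @ B)                      # [[0.5, 0], [0.5, 0]]
rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)      # [[0.25, 0], [0.25, 0]]
print(np.allclose(lhs, rhs))                     # False

# Case 4: B = A* (here A is real, so A* = A.T), and the identity (A A*)+ = A+* A+ holds.
assert np.allclose(np.linalg.pinv(A @ A.T), np.linalg.pinv(A.T) @ np.linalg.pinv(A))
```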

### Projectors

${\displaystyle P=AA^{+}}$ and ${\displaystyle Q=A^{+}A}$ are orthogonal projection operators, that is, they are Hermitian (${\displaystyle P=P^{*}}$, ${\displaystyle Q=Q^{*}}$) and idempotent (${\displaystyle P^{2}=P}$ and ${\displaystyle Q^{2}=Q}$). The following hold:

• ${\displaystyle PA=AQ=A}$ and ${\displaystyle A^{+}P=QA^{+}=A^{+}}$
• ${\displaystyle P}$ is the orthogonal projector onto the range of ${\displaystyle A}$ (which equals the orthogonal complement of the kernel of ${\displaystyle A^{*}}$).
• ${\displaystyle Q}$ is the orthogonal projector onto the range of ${\displaystyle A^{*}}$ (which equals the orthogonal complement of the kernel of ${\displaystyle A}$).
• ${\displaystyle (I-Q)=\left(I-A^{+}A\right)}$ is the orthogonal projector onto the kernel of ${\displaystyle A}$.
• ${\displaystyle (I-P)=\left(I-AA^{+}\right)}$ is the orthogonal projector onto the kernel of ${\displaystyle A^{*}}$.[8]

The last two properties imply the following identities:

• ${\displaystyle A\,\ \left(I-A^{+}A\right)=\left(I-AA^{+}\right)A\ \ =0}$
• ${\displaystyle A^{*}\left(I-AA^{+}\right)=\left(I-A^{+}A\right)A^{*}=0}$
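
These projector properties are easy to check numerically; the following sketch uses a rank-deficient matrix built purely for illustration:

```python
# Sketch: P = A A+ and Q = A+ A are orthogonal projectors onto ran(A) and ran(A*).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 6))   # 5x6 matrix of rank 3
A_pinv = np.linalg.pinv(A)

P = A @ A_pinv                                   # 5x5 projector onto ran(A)
Q = A_pinv @ A                                   # 6x6 projector onto ran(A*)
for M in (P, Q):
    assert np.allclose(M, M.conj().T)            # Hermitian
    assert np.allclose(M @ M, M)                 # idempotent

assert np.allclose(P @ A, A) and np.allclose(A @ Q, A)                    # P A = A Q = A
assert np.allclose(A_pinv @ P, A_pinv) and np.allclose(Q @ A_pinv, A_pinv)
assert np.allclose(A @ (np.eye(6) - Q), 0)       # A (I - A+ A) = 0
assert np.allclose((np.eye(5) - P) @ A, 0)       # (I - A A+) A = 0
```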

Another property is the following: if ${\displaystyle A\in \mathbb {K} ^{n\times n}}$ is Hermitian and idempotent (true if and only if it represents an orthogonal projection), then, for any matrix ${\displaystyle B\in \mathbb {K} ^{m\times n}}$ the following equation holds:[10]

${\displaystyle A(BA)^{+}=(BA)^{+}}$

This can be proven by defining matrices ${\displaystyle C=BA}$, ${\displaystyle D=A(BA)^{+}}$, and checking that ${\displaystyle D}$ is indeed a pseudoinverse for ${\displaystyle C}$ by verifying that the defining properties of the pseudoinverse hold, when ${\displaystyle A}$ is Hermitian and idempotent.

From the last property it follows that, if ${\displaystyle A\in \mathbb {K} ^{n\times n}}$ is Hermitian and idempotent, for any matrix ${\displaystyle B\in \mathbb {K} ^{n\times m}}$

${\displaystyle (AB)^{+}A=(AB)^{+}}$

Finally, if ${\displaystyle A}$ is an orthogonal projection matrix, then its pseudoinverse trivially coincides with the matrix itself, that is, ${\displaystyle A^{+}=A}$.

### Geometric construction

If we view the matrix as a linear map ${\displaystyle A:K^{n}\to K^{m}}$ over a field ${\displaystyle K}$ then ${\displaystyle A^{+}:K^{m}\to K^{n}}$ can be decomposed as follows. We write ${\displaystyle \oplus }$ for the direct sum, ${\displaystyle \perp }$ for the orthogonal complement, ${\displaystyle \operatorname {ker} }$ for the kernel of a map, and ${\displaystyle \operatorname {ran} }$ for the image of a map. Notice that ${\displaystyle K^{n}=\left(\operatorname {ker} A\right)^{\perp }\oplus \operatorname {ker} A}$ and ${\displaystyle K^{m}=\operatorname {ran} A\oplus \left(\operatorname {ran} A\right)^{\perp }}$. The restriction ${\displaystyle A:\left(\operatorname {ker} A\right)^{\perp }\to \operatorname {ran} A}$ is then an isomorphism. This implies that ${\displaystyle A^{+}}$ on ${\displaystyle \operatorname {ran} A}$ is the inverse of this isomorphism, and is zero on ${\displaystyle \left(\operatorname {ran} A\right)^{\perp }.}$

In other words: To find ${\displaystyle A^{+}b}$ for given ${\displaystyle b}$ in ${\displaystyle K^{m}}$, first project ${\displaystyle b}$ orthogonally onto the range of ${\displaystyle A}$, finding a point ${\displaystyle p(b)}$ in the range. Then form ${\displaystyle A^{-1}(\{p(b)\})}$, that is, find those vectors in ${\displaystyle K^{n}}$ that ${\displaystyle A}$ sends to ${\displaystyle p(b)}$. This will be an affine subspace of ${\displaystyle K^{n}}$ parallel to the kernel of ${\displaystyle A}$. The element of this subspace that has the smallest length (that is, is closest to the origin) is the answer ${\displaystyle A^{+}b}$ we are looking for. It can be found by taking an arbitrary member of ${\displaystyle A^{-1}(\{p(b)\})}$ and projecting it orthogonally onto the orthogonal complement of the kernel of ${\displaystyle A}$.

This description is closely related to the minimum norm solution to a linear system, discussed below.

### Subspaces

${\displaystyle {\begin{aligned}\operatorname {ker} \left(A^{+}\right)&=\operatorname {ker} \left(A^{*}\right)\\\operatorname {ran} \left(A^{+}\right)&=\operatorname {ran} \left(A^{*}\right)\end{aligned}}}$

### Limit relations

The pseudoinverse can be obtained as a limit:

${\displaystyle A^{+}=\lim _{\delta \searrow 0}\left(A^{*}A+\delta I\right)^{-1}A^{*}=\lim _{\delta \searrow 0}A^{*}\left(AA^{*}+\delta I\right)^{-1}}$
(see Tikhonov regularization). These limits exist even if ${\displaystyle \left(AA^{*}\right)^{-1}}$ or ${\displaystyle \left(A^{*}A\right)^{-1}}$ do not exist.[8]:263
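
A numerical illustration of the limit (a sketch; the matrix is a rank-deficient example for which neither ${\displaystyle A^{*}A}$ nor ${\displaystyle AA^{*}}$ is invertible):

```python
# Sketch: (A*A + delta I)^{-1} A* approaches pinv(A) as delta -> 0, even for singular A*A.
import numpy as np

A = np.array([[1., 1.], [0., 0.]])               # rank 1
A_pinv = np.linalg.pinv(A)                       # [[0.5, 0], [0.5, 0]]

for delta in (1e-2, 1e-4, 1e-6, 1e-8):
    approx = np.linalg.solve(A.T @ A + delta * np.eye(2), A.T)
    print(delta, np.linalg.norm(approx - A_pinv))   # error decreases with delta
```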

### Continuity

In contrast to ordinary matrix inversion, the process of taking pseudoinverses is not continuous: if the sequence ${\displaystyle \left(A_{n}\right)}$ converges to the matrix ${\displaystyle A}$ (in the maximum norm or Frobenius norm, say), then ${\displaystyle (A_{n})^{+}}$ need not converge to ${\displaystyle A^{+}}$. However, if all the matrices have the same rank, ${\displaystyle (A_{n})^{+}}$ will converge to ${\displaystyle A^{+}}$.[11]

### Derivative

For a real-valued matrix function ${\displaystyle A(x)}$ of constant rank at a point ${\displaystyle x}$, the derivative of its pseudoinverse may be calculated in terms of the derivative of the original matrix:[12]

${\displaystyle {\frac {\mathrm {d} }{\mathrm {d} x}}A^{+}(x)=-A^{+}\left({\frac {\mathrm {d} }{\mathrm {d} x}}A\right)A^{+}~+~A^{+}A^{+{\text{T}}}\left({\frac {\mathrm {d} }{\mathrm {d} x}}A^{\text{T}}\right)\left(I-AA^{+}\right)~+~\left(I-A^{+}A\right)\left({\frac {\mathrm {d} }{\mathrm {d} x}}A^{\text{T}}\right)A^{+{\text{T}}}A^{+}}$
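
The formula can be checked against a finite-difference approximation; the sketch below uses a real matrix function of constant rank 1 chosen purely for illustration:

```python
# Sketch: compare the derivative formula with a central finite difference of pinv(A(x)).
import numpy as np

def A_of(x):
    return np.array([[1.0, x], [x, x * x]])      # rank 1 for every x

def dA_of(x):
    return np.array([[0.0, 1.0], [1.0, 2 * x]])  # elementwise derivative dA/dx

x, h = 0.7, 1e-6
A, dA = A_of(x), dA_of(x)
Ap = np.linalg.pinv(A)
I = np.eye(2)

# The formula above, written for real matrices (conjugate transpose = transpose).
dAp = (-Ap @ dA @ Ap
       + Ap @ Ap.T @ dA.T @ (I - A @ Ap)
       + (I - Ap @ A) @ dA.T @ Ap.T @ Ap)

dAp_fd = (np.linalg.pinv(A_of(x + h)) - np.linalg.pinv(A_of(x - h))) / (2 * h)
print(np.max(np.abs(dAp - dAp_fd)))              # near zero, up to finite-difference error
```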

## Examples

Since for invertible matrices the pseudoinverse equals the usual inverse, only examples of non-invertible matrices are considered below.

• For ${\displaystyle \mathbf {A} ={\begin{pmatrix}0&0\\0&0\end{pmatrix}},\,}$ the pseudoinverse is ${\displaystyle \mathbf {A^{+}} ={\begin{pmatrix}0&0\\0&0\end{pmatrix}}.}$ (Generally, the pseudoinverse of a zero matrix is its transpose.) The uniqueness of this pseudoinverse can be seen from the requirement ${\displaystyle A^{+}=A^{+}AA^{+}}$, since multiplication by a zero matrix would always produce a zero matrix.
• For ${\displaystyle \mathbf {A} ={\begin{pmatrix}1&0\\1&0\end{pmatrix}},\,}$ the pseudoinverse is ${\displaystyle \mathbf {A^{+}} ={\begin{pmatrix}{\frac {1}{2}}&{\frac {1}{2}}\\0&0\end{pmatrix}}.}$
Indeed, ${\displaystyle \mathbf {A\,A^{+}} ={\begin{pmatrix}{\frac {1}{2}}&{\frac {1}{2}}\\{\frac {1}{2}}&{\frac {1}{2}}\end{pmatrix}},\,}$ and thus ${\displaystyle \mathbf {A\,A^{+}A} ={\begin{pmatrix}1&0\\1&0\end{pmatrix}}=A.}$
Similarly, ${\displaystyle \mathbf {A^{+}A} ={\begin{pmatrix}1&0\\0&0\end{pmatrix}},\,}$ and thus ${\displaystyle \mathbf {A^{+}A\,A^{+}} ={\begin{pmatrix}{\frac {1}{2}}&{\frac {1}{2}}\\0&0\end{pmatrix}}=A^{+}.}$
• For ${\displaystyle \mathbf {A} ={\begin{pmatrix}1&0\\-1&0\end{pmatrix}},\ }$ ${\displaystyle \mathbf {A^{+}} ={\begin{pmatrix}{\frac {1}{2}}&-{\frac {1}{2}}\\0&0\end{pmatrix}}.}$
• For ${\displaystyle \mathbf {A} ={\begin{pmatrix}1&0\\2&0\end{pmatrix}},\ }$ ${\displaystyle \mathbf {A^{+}} ={\begin{pmatrix}{\frac {1}{5}}&{\frac {2}{5}}\\0&0\end{pmatrix}}.}$ (The denominators are ${\displaystyle 5=1^{2}+2^{2}}$.)
• For ${\displaystyle \mathbf {A} ={\begin{pmatrix}1&1\\1&1\end{pmatrix}},\ }$ ${\displaystyle \mathbf {A^{+}} ={\begin{pmatrix}{\frac {1}{4}}&{\frac {1}{4}}\\{\frac {1}{4}}&{\frac {1}{4}}\end{pmatrix}}.}$
• For ${\displaystyle \mathbf {A} ={\begin{pmatrix}1&0\\0&1\\0&1\end{pmatrix}},\,}$ the pseudoinverse is ${\displaystyle \mathbf {A^{+}} ={\begin{pmatrix}1&0&0\\0&{\frac {1}{2}}&{\frac {1}{2}}\end{pmatrix}}.}$
For this matrix, ${\displaystyle \mathbf {A} }$ has linearly independent columns, so ${\displaystyle \mathbf {A^{+}} }$ is a left inverse; indeed, ${\displaystyle \mathbf {A^{+}A} ={\begin{pmatrix}1&0\\0&1\end{pmatrix}}.}$
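
The examples above can be reproduced with, for instance, NumPy's pinv (an illustrative sketch):

```python
# Sketch: reproduce two of the examples above.
import numpy as np

A = np.array([[1., 0.], [1., 0.]])
print(np.linalg.pinv(A))                 # [[0.5, 0.5], [0., 0.]]

B = np.array([[1., 0.], [0., 1.], [0., 1.]])
print(np.linalg.pinv(B))                 # [[1., 0., 0.], [0., 0.5, 0.5]]
print(np.linalg.pinv(B) @ B)             # the 2x2 identity: a left inverse, as noted above
```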

## Special cases

### Scalars

It is also possible to define a pseudoinverse for scalars and vectors. This amounts to treating these as matrices. The pseudoinverse of a scalar ${\displaystyle x}$ is zero if ${\displaystyle x}$ is zero and the reciprocal of ${\displaystyle x}$ otherwise:

${\displaystyle x^{+}={\begin{cases}0,&{\mbox{if }}x=0;\\x^{-1},&{\mbox{otherwise}}.\end{cases}}}$

### Vectors

The pseudoinverse of the null (all zero) vector is the transposed null vector. The pseudoinverse of a non-null vector is the conjugate transposed vector divided by its squared magnitude:

${\displaystyle x^{+}={\begin{cases}0^{\mathrm {T} },&{\mbox{if }}x=0;\\{x^{*} \over x^{*}x},&{\mbox{otherwise}}.\end{cases}}}$

### Linearly independent columns

If the columns of ${\displaystyle A}$ are linearly independent (so that ${\displaystyle m\geq n}$), then ${\displaystyle A^{*}A}$ is invertible. In this case, an explicit formula is:[13]

${\displaystyle A^{+}=\left(A^{*}A\right)^{-1}A^{*}}$.

It follows that ${\displaystyle A^{+}}$ is then a left inverse of ${\displaystyle A}$:   ${\displaystyle A^{+}A=I_{n}}$.

### Linearly independent rows

If the rows of ${\displaystyle A}$ are linearly independent (so that ${\displaystyle m\leq n}$), then ${\displaystyle AA^{*}}$ is invertible. In this case, an explicit formula is:

${\displaystyle A^{+}=A^{*}\left(AA^{*}\right)^{-1}}$.

It follows that ${\displaystyle A^{+}}$ is a right inverse of ${\displaystyle A}$:   ${\displaystyle AA^{+}=I_{m}}$.

### Orthonormal columns or rows

This is a special case of either full column rank or full row rank (treated above). If ${\displaystyle A}$ has orthonormal columns (${\displaystyle A^{*}A=I_{n}}$) or orthonormal rows (${\displaystyle AA^{*}=I_{m}}$), then:

${\displaystyle A^{+}=A^{*}}$.

### Orthogonal projection matrices

If ${\displaystyle A}$ is an orthogonal projection matrix, that is, ${\displaystyle A=A^{*}}$ and ${\displaystyle A^{2}=A}$, then the pseudoinverse trivially coincides with the matrix itself:

${\displaystyle A^{+}=A}$.

### Circulant matrices

For a circulant matrix ${\displaystyle C}$, the singular value decomposition is given by the Fourier transform, that is, the singular values are the Fourier coefficients. Let ${\displaystyle {\mathcal {F}}}$ be the Discrete Fourier Transform (DFT) matrix; then[14]

${\displaystyle {\begin{aligned}C&={\mathcal {F}}\cdot \Sigma \cdot {\mathcal {F}}^{*}\\C^{+}&={\mathcal {F}}\cdot \Sigma ^{+}\cdot {\mathcal {F}}^{*}\end{aligned}}}$
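
In practice this means the pseudoinverse of a circulant matrix can be formed from the discrete Fourier transform of its first column. A small sketch (the example matrix is illustrative; scipy.linalg.circulant is used only to build it):

```python
# Sketch: C+ for a singular circulant C, via the DFT of its first column.
import numpy as np
from scipy.linalg import circulant

c = np.array([2., -1., 0., -1.])          # circulant second-difference matrix; row sums are 0, so C is singular
C = circulant(c)

lam = np.fft.fft(c)                       # eigenvalues of C (here 0, 2, 4, 2)
lam_pinv = np.zeros_like(lam)
nz = np.abs(lam) > 1e-12 * np.abs(lam).max()
lam_pinv[nz] = 1.0 / lam[nz]              # pseudo-reciprocal: invert nonzero eigenvalues only

C_pinv = circulant(np.fft.ifft(lam_pinv).real)   # C+ is the circulant with eigenvalues lam_pinv
assert np.allclose(C_pinv, np.linalg.pinv(C))
```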

## Construction

### Rank decomposition

Let ${\displaystyle r\leq \min(m,n)}$ denote the rank of ${\displaystyle A\in K^{m\times n}}$. Then ${\displaystyle A}$ can be (rank) decomposed as ${\displaystyle A=BC}$ where ${\displaystyle B\in K^{m\times r}}$ and ${\displaystyle C\in K^{r\times n}}$ are of rank ${\displaystyle r}$. Then ${\displaystyle A^{+}=C^{+}B^{+}=C^{*}\left(CC^{*}\right)^{-1}\left(B^{*}B\right)^{-1}B^{*}}$.
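
A sketch of this construction, with the rank factors ${\displaystyle B}$ and ${\displaystyle C}$ obtained here from a truncated SVD (any full-rank factorization works):

```python
# Sketch: A+ = C*(C C*)^{-1} (B* B)^{-1} B* from a rank decomposition A = B C.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 5))    # 6x5 matrix of rank 3

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-12 * s.max()))      # numerical rank
B = U[:, :r] * s[:r]                      # 6xr, full column rank
C = Vh[:r, :]                             # rx5, full row rank
assert np.allclose(B @ C, A)

A_pinv = C.conj().T @ np.linalg.inv(C @ C.conj().T) @ np.linalg.inv(B.conj().T @ B) @ B.conj().T
assert np.allclose(A_pinv, np.linalg.pinv(A))
```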

### The QR method

For ${\displaystyle \mathbb {K} \in \{\mathbb {R} ,\mathbb {C} \}}$ computing the product ${\displaystyle AA^{*}}$ or ${\displaystyle A^{*}A}$ and their inverses explicitly is often a source of numerical rounding errors and computational cost in practice. An alternative approach using the QR decomposition of ${\displaystyle A}$ may be used instead.

Consider the case when ${\displaystyle A}$ is of full column rank, so that ${\displaystyle A^{+}=(A^{*}A)^{-1}A^{*}}$. Then the Cholesky decomposition ${\displaystyle A^{*}A=R^{*}R}$, where ${\displaystyle R}$ is an upper triangular matrix, may be used. Multiplication by the inverse is then done easily by solving a system with multiple right-hand sides,

${\displaystyle A^{+}=(A^{*}A)^{-1}A^{*}\quad \Leftrightarrow \quad (A^{*}A)A^{+}=A^{*}\quad \Leftrightarrow \quad R^{*}RA^{+}=A^{*}}$

which may be solved by forward substitution followed by back substitution.

The Cholesky decomposition may be computed without forming ${\displaystyle A^{*}A}$ explicitly, by alternatively using the QR decomposition of ${\displaystyle A=QR}$, where ${\displaystyle Q}$ has orthonormal columns, ${\displaystyle Q^{*}Q=I}$, and ${\displaystyle R}$ is upper triangular. Then

${\displaystyle A^{*}A\,=\,(QR)^{*}(QR)\,=\,R^{*}Q^{*}QR\,=\,R^{*}R}$,

so ${\displaystyle R}$ is the Cholesky factor of ${\displaystyle A^{*}A}$.

The case of full row rank is treated similarly by using the formula ${\displaystyle A^{+}=A^{*}(AA^{*})^{-1}}$ and using a similar argument, swapping the roles of ${\displaystyle A}$ and ${\displaystyle A^{*}}$.
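
A sketch of the full-column-rank case, using standard routines (numpy.linalg.qr and scipy.linalg.solve_triangular) for the two triangular solves described above:

```python
# Sketch: pinv via QR for full column rank, solving R* R A+ = A* by two triangular solves.
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(3)
A = rng.standard_normal((7, 4))                  # full column rank (almost surely)

Q, R = np.linalg.qr(A)                           # A = Q R, with R* R = A* A
Y = solve_triangular(R.conj().T, A.conj().T, lower=True)    # forward substitution: R* Y = A*
A_pinv = solve_triangular(R, Y, lower=False)                 # back substitution:    R A+ = Y

assert np.allclose(A_pinv, np.linalg.pinv(A))
assert np.allclose(A_pinv @ A, np.eye(4))        # a left inverse, as expected
```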

### Singular value decomposition (SVD)

A computationally simple and accurate way to compute the pseudoinverse is by using the singular value decomposition.[13][8][15] If ${\displaystyle A=U\Sigma V^{*}}$ is the singular value decomposition of ${\displaystyle A}$, then ${\displaystyle A^{+}=V\Sigma ^{+}U^{*}}$. For a rectangular diagonal matrix such as ${\displaystyle \Sigma }$, we get the pseudoinverse by taking the reciprocal of each non-zero element on the diagonal, leaving the zeros in place, and then transposing the matrix. In numerical computation, only elements larger than some small tolerance are taken to be nonzero, and the others are replaced by zeros. For example, in the pinv function of MATLAB, GNU Octave, or NumPy, the tolerance is taken to be t = ε⋅max(m, n)⋅max(Σ), where ε is the machine epsilon.
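
A sketch of this procedure (the function name and the test matrix are illustrative; the tolerance is the rule quoted above):

```python
# Sketch: pseudoinverse via the SVD, with tolerance t = eps * max(m, n) * max(Sigma).
import numpy as np

def pinv_svd(A):
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    tol = np.finfo(float).eps * max(A.shape) * (s.max() if s.size else 0.0)
    s_inv = np.zeros_like(s)
    keep = s > tol                                # treat singular values below tol as zero
    s_inv[keep] = 1.0 / s[keep]
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T   # A+ = V Sigma+ U*

A = np.array([[1., 1.], [0., 0.]])
print(pinv_svd(A))                                  # [[0.5, 0.], [0.5, 0.]]
print(np.allclose(pinv_svd(A), np.linalg.pinv(A)))  # True
```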

The computational cost of this method is dominated by the cost of computing the SVD, which is several times higher than that of matrix–matrix multiplication, even if a state-of-the-art implementation (such as that of LAPACK) is used.

The above procedure shows why taking the pseudoinverse is not a continuous operation: if the original matrix ${\displaystyle A}$ has a singular value 0 (a diagonal entry of the matrix ${\displaystyle \Sigma }$ above), then modifying ${\displaystyle A}$ slightly may turn this zero into a tiny positive number, thereby affecting the pseudoinverse dramatically as we now have to take the reciprocal of a tiny number.

### Block matrices

Optimized approaches exist for calculating the pseudoinverse of block structured matrices.

### The iterative method of Ben-Israel and Cohen

Another method for computing the pseudoinverse (cf. Drazin inverse) uses the recursion

${\displaystyle A_{i+1}=2A_{i}-A_{i}AA_{i},}$

which is sometimes referred to as the hyper-power sequence. This recursion produces a sequence converging quadratically to the pseudoinverse of ${\displaystyle A}$ if it is started with an appropriate ${\displaystyle A_{0}}$ satisfying ${\displaystyle A_{0}A=\left(A_{0}A\right)^{*}}$. The choice ${\displaystyle A_{0}=\alpha A^{*}}$ (where ${\displaystyle 0<\alpha <2/\sigma _{1}^{2}(A)}$, with ${\displaystyle \sigma _{1}(A)}$ denoting the largest singular value of ${\displaystyle A}$)[16] has been argued not to be competitive with the method using the SVD mentioned above, because even for moderately ill-conditioned matrices it takes a long time before ${\displaystyle A_{i}}$ enters the region of quadratic convergence.[17] However, if started with ${\displaystyle A_{0}}$ already close to the Moore–Penrose inverse and ${\displaystyle A_{0}A=\left(A_{0}A\right)^{*}}$, for example ${\displaystyle A_{0}:=\left(A^{*}A+\delta I\right)^{-1}A^{*}}$, convergence is fast (quadratic).
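
A sketch of the recursion on a small, well-conditioned, full-column-rank example (the iteration count is generous rather than tuned):

```python
# Sketch: hyper-power iteration with A0 = alpha A*, 0 < alpha < 2 / sigma_1(A)^2.
import numpy as np

A = np.array([[1., 2.], [3., 4.], [5., 6.]])     # full column rank
alpha = 1.0 / np.linalg.norm(A, 2) ** 2          # spectral norm gives sigma_1(A)
X = alpha * A.conj().T                           # A0 = alpha A*

for _ in range(25):
    X = 2 * X - X @ A @ X                        # quadratic convergence once X is close

print(np.max(np.abs(X - np.linalg.pinv(A))))     # essentially zero
```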

### Updating the pseudoinverse

For the cases where ${\displaystyle A}$ has full row or column rank, and the inverse of the correlation matrix (${\displaystyle AA^{*}}$ for ${\displaystyle A}$ with full row rank or ${\displaystyle A^{*}A}$ for full column rank) is already known, the pseudoinverse for matrices related to ${\displaystyle A}$ can be computed by applying the Sherman–Morrison–Woodbury formula to update the inverse of the correlation matrix, which may need less work. In particular, if the related matrix differs from the original one by only a changed, added or deleted row or column, incremental algorithms exist that exploit the relationship.[18][19]

Similarly, it is possible to update the Cholesky factor when a row or column is added, without creating the inverse of the correlation matrix explicitly. However, updating the pseudoinverse in the general rank-deficient case is much more complicated.[20][21]

### Software libraries

The Python package NumPy provides a pseudoinverse calculation through its functions matrix.I and linalg.pinv; its pinv uses the SVD-based algorithm. SciPy adds a function scipy.linalg.pinv that uses a least-squares solver. High-quality implementations of SVD, QR, and back substitution are available in standard libraries, such as LAPACK. Writing one's own implementation of SVD is a major programming project that requires significant numerical expertise. In special circumstances, such as parallel computing or embedded computing, however, alternative implementations by QR or even the use of an explicit inverse might be preferable, and custom implementations may be unavoidable.

The MASS package for R provides a calculation of the Moore–Penrose inverse through the ginv function.[22] The ginv function calculates a pseudoinverse using the singular value decomposition provided by the svd function in the base R package. An alternative is to employ the pinv function available in the pracma package.

The Octave programming language provides a pseudoinverse through the standard package function pinv and the pseudo_inverse() method.

In Julia, the LinearAlgebra package of the standard library provides an implementation of the Moore–Penrose inverse, pinv(), computed via the singular value decomposition.[23]

## Applications

### Linear least-squares

The pseudoinverse provides a least squares solution to a system of linear equations.[24] For ${\displaystyle A\in \mathbb {K} ^{m\times n}}$, given a system of linear equations

${\displaystyle Ax=b,}$

in general, a vector ${\displaystyle x}$ that solves the system may not exist, or if one does exist, it may not be unique. The pseudoinverse solves the "least-squares" problem as follows:

• ${\displaystyle \forall x\in \mathbb {K} ^{n}}$, we have ${\displaystyle \|Ax-b\|_{2}\geq \|Az-b\|_{2}}$ where ${\displaystyle z=A^{+}b}$ and ${\displaystyle \|\cdot \|_{2}}$ denotes the Euclidean norm. This weak inequality holds with equality if and only if ${\displaystyle x=A^{+}b+\left(I-A^{+}A\right)w}$ for some vector ${\displaystyle w}$; this provides an infinitude of minimizing solutions unless ${\displaystyle A}$ has full column rank, in which case ${\displaystyle \left(I-A^{+}A\right)}$ is a zero matrix.[25] The solution with minimum Euclidean norm is ${\displaystyle z.}$[25]

This result is easily extended to systems with multiple right-hand sides, when the Euclidean norm is replaced by the Frobenius norm. Let ${\displaystyle B\in \mathbb {K} ^{m\times p}}$.

• ${\displaystyle \forall X\in \mathbb {K} ^{n\times p}}$, we have ${\displaystyle \|AX-B\|_{\mathrm {F} }\geq \|AZ-B\|_{\mathrm {F} }}$ where ${\displaystyle Z=A^{+}B}$ and ${\displaystyle \|\cdot \|_{\mathrm {F} }}$ denotes the Frobenius norm.
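
For instance (an illustrative sketch), for an inconsistent overdetermined system, ${\displaystyle A^{+}b}$ coincides with the solution returned by a least-squares solver:

```python
# Sketch: z = A+ b minimizes ||Ax - b||_2 for an inconsistent system.
import numpy as np

A = np.array([[1., 0.], [0., 1.], [1., 1.]])
b = np.array([1., 1., 0.])                       # b is not in the range of A

z = np.linalg.pinv(A) @ b                        # [1/3, 1/3]
z_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(z, z_lstsq)

x = np.array([0.3, -0.2])                        # any other candidate has a larger residual
assert np.linalg.norm(A @ x - b) >= np.linalg.norm(A @ z - b)
```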

### Obtaining all solutions of a linear system

If the linear system

${\displaystyle Ax=b}$

has any solutions, they are all given by[26]

${\displaystyle x=A^{+}b+\left[I-A^{+}A\right]w}$

for arbitrary vector ${\displaystyle w}$. Solution(s) exist if and only if ${\displaystyle AA^{+}b=b}$.[26] If the latter holds, then the solution is unique if and only if ${\displaystyle A}$ has full column rank, in which case ${\displaystyle \left[I-A^{+}A\right]}$ is a zero matrix. If solutions exist but ${\displaystyle A}$ does not have full column rank, then we have an indeterminate system, all of whose infinitude of solutions are given by this last equation.
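
A sketch generating several exact solutions of a consistent underdetermined system from this parametrization:

```python
# Sketch: every x = A+ b + (I - A+ A) w solves A x = b when the system is consistent.
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[1., 2., 0.], [0., 0., 1.]])       # full row rank, so A x = b is always solvable
b = np.array([3., 4.])

A_pinv = np.linalg.pinv(A)
assert np.allclose(A @ A_pinv @ b, b)            # solvability criterion A A+ b = b

N = np.eye(3) - A_pinv @ A                       # projector onto ker(A)
for _ in range(3):
    w = rng.standard_normal(3)
    assert np.allclose(A @ (A_pinv @ b + N @ w), b)
```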

### Minimum norm solution to a linear system

For linear systems ${\displaystyle Ax=b,}$ with non-unique solutions (such as under-determined systems), the pseudoinverse may be used to construct the solution of minimum Euclidean norm ${\displaystyle \|x\|_{2}}$ among all solutions.

• If ${\displaystyle Ax=b}$ is satisfiable, the vector ${\displaystyle z=A^{+}b}$ is a solution, and satisfies ${\displaystyle \|z\|_{2}\leq \|x\|_{2}}$ for all solutions.

This result is easily extended to systems with multiple right-hand sides, when the Euclidean norm is replaced by the Frobenius norm. Let ${\displaystyle B\in \mathbb {K} ^{m\times p}}$.

• If ${\displaystyle AX=B}$ is satisfiable, the matrix ${\displaystyle Z=A^{+}B}$ is a solution, and satisfies ${\displaystyle \|Z\|_{\mathrm {F} }\leq \|X\|_{\mathrm {F} }}$ for all solutions.
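
A minimal sketch comparing ${\displaystyle A^{+}b}$ with another exact solution of the same underdetermined system:

```python
# Sketch: z = A+ b has the smallest Euclidean norm among all solutions.
import numpy as np

A = np.array([[1., 2., 0.], [0., 0., 1.]])
b = np.array([3., 4.])

A_pinv = np.linalg.pinv(A)
z = A_pinv @ b
x = z + (np.eye(3) - A_pinv @ A) @ np.array([1., -1., 2.])   # another exact solution

assert np.allclose(A @ z, b) and np.allclose(A @ x, b)
assert np.linalg.norm(z) <= np.linalg.norm(x)
```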

### Condition number

Using the pseudoinverse and a matrix norm, one can define a condition number for any matrix:

${\displaystyle {\mbox{cond}}(A)=\|A\|\left\|A^{+}\right\|.\ }$

A large condition number implies that the problem of finding least-squares solutions to the corresponding system of linear equations is ill-conditioned in the sense that small errors in the entries of ${\displaystyle A}$ can lead to huge errors in the entries of the solution.[27]
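
For an invertible matrix this definition agrees with the usual 2-norm condition number (a quick sketch):

```python
# Sketch: cond(A) = ||A||_2 * ||A+||_2, compared with np.linalg.cond.
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
cond = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.pinv(A), 2)
print(cond, np.linalg.cond(A, 2))                # the two values agree
```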

## Generalizations

Besides matrices over the real and complex numbers, the above conditions also define a Moore–Penrose inverse for matrices over the biquaternions, also called "complex quaternions".[28]

In order to solve more general least-squares problems, one can define Moore–Penrose inverses for all continuous linear operators ${\displaystyle A:H_{1}\rightarrow H_{2}}$ between two Hilbert spaces ${\displaystyle H_{1}}$ and ${\displaystyle H_{2}}$, using the same four conditions as in our definition above. It turns out that not every continuous linear operator has a continuous linear pseudoinverse in this sense.[27] Those that do are precisely the ones whose range is closed in ${\displaystyle H_{2}}$.

In abstract algebra, a Moore–Penrose inverse may be defined on a *-regular semigroup. This abstract definition coincides with the one in linear algebra.

## Notes

1. ^ Ben-Israel & Greville 2003, p. 7.
2. ^ Campbell & Meyer, Jr. 1991, p. 10.
3. ^ Nakamura 1991, p. 42.
4. ^ Rao & Mitra 1971, p. 50–51.
5. ^ Moore, E. H. (1920). "On the reciprocal of the general algebraic matrix". Bulletin of the American Mathematical Society. 26 (9): 394–95. doi:10.1090/S0002-9904-1920-03322-7.
6. ^ Bjerhammar, Arne (1951). "Application of calculus of matrices to method of least squares; with special references to geodetic calculations". Trans. Roy. Inst. Tech. Stockholm. 49.
7. ^ a b Penrose, Roger (1955). "A generalized inverse for matrices". Proceedings of the Cambridge Philosophical Society. 51 (3): 406–413.
8. Golub, Gene H.; Charles F. Van Loan (1996). Matrix computations (3rd ed.). Baltimore: Johns Hopkins. pp. 257–258. ISBN 978-0-8018-5414-9.
9. ^ a b c Stoer, Josef; Bulirsch, Roland (2002). Introduction to Numerical Analysis (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-95452-3..
10. ^ Maciejewski, Anthony A.; Klein, Charles A. (1985). "Obstacle Avoidance for Kinematically Redundant Manipulators in Dynamically Varying Environments". International Journal of Robotics Research. 4 (3): 109–117. doi:10.1177/027836498500400308. hdl:10217/536.
11. ^ Rakočević, Vladimir (1997). "On continuity of the Moore–Penrose and Drazin inverses" (PDF). Matematički Vesnik. 49: 163–72.
12. ^ Golub, G. H.; Pereyra, V. (April 1973). "The Differentiation of Pseudo-Inverses and Nonlinear Least Squares Problems Whose Variables Separate". SIAM Journal on Numerical Analysis. 10 (2): 413–32. Bibcode:1973SJNA...10..413G. doi:10.1137/0710036. JSTOR 2156365.
13. ^ a b
14. ^ Stallings, W. T.; Boullion, T. L. (1972). "The Pseudoinverse of an r-Circulant Matrix". Proceedings of the American Mathematical Society. 34 (2): 385–88. doi:10.2307/2038377. JSTOR 2038377.
15. ^ Linear Systems & Pseudo-Inverse
16. ^ Ben-Israel, Adi; Cohen, Dan (1966). "On Iterative Computation of Generalized Inverses and Associated Projections". SIAM Journal on Numerical Analysis. 3 (3): 410–19. Bibcode:1966SJNA....3..410B. doi:10.1137/0703035. JSTOR 2949637.
17. ^ Söderström, Torsten; Stewart, G. W. (1974). "On the Numerical Properties of an Iterative Method for Computing the Moore–Penrose Generalized Inverse". SIAM Journal on Numerical Analysis. 11 (1): 61–74. Bibcode:1974SJNA...11...61S. doi:10.1137/0711008. JSTOR 2156431.
18. ^ Gramß, Tino (1992). Worterkennung mit einem künstlichen neuronalen Netzwerk (PhD dissertation). Georg-August-Universität zu Göttingen. OCLC 841706164.
19. ^ Emtiyaz, Mohammad (February 27, 2008). "Updating Inverse of a Matrix When a Column is Added/Removed" (PDF).
20. ^ Meyer, Jr., Carl D. (1973). "Generalized inverses and ranks of block matrices". SIAM J. Appl. Math. 25 (4): 597–602. doi:10.1137/0125057.
21. ^ Meyer, Jr., Carl D. (1973). "Generalized inversion of modified matrices". SIAM J. Appl. Math. 24 (3): 315–23. doi:10.1137/0124033.
22. ^
23. ^
24. ^ Penrose, Roger (1956). "On best approximate solution of linear matrix equations". Proceedings of the Cambridge Philosophical Society. 52 (1): 17–19. Bibcode:1956PCPS...52...17P. doi:10.1017/S0305004100030929.
25. ^ a b Planitz, M. (October 1979). "Inconsistent systems of linear equations". Mathematical Gazette. 63 (425): 181–85. doi:10.2307/3617890. JSTOR 3617890.
26. ^ a b James, M. (June 1978). "The generalised inverse". Mathematical Gazette. 62 (420): 109–14. doi:10.1017/S0025557200086460.
27. ^ a b Hagen, Roland; Roch, Steffen; Silbermann, Bernd (2001). "Section 2.1.2". C*-algebras and Numerical Analysis. CRC Press.
28. ^ Tian, Yongge (2000). "Matrix Theory over the Complex Quaternion Algebra". p.8, Theorem 3.5. arXiv:math/0004005.