# Generalized inverse

In mathematics, and in particular algebra, a generalized inverse (or g-inverse) of an element x is an element y that has some properties of an inverse element but not necessarily all of them. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix ${\displaystyle A}$.

A matrix ${\displaystyle A^{\mathrm {g} }\in \mathbb {R} ^{n\times m}}$ is a generalized inverse of a matrix ${\displaystyle A\in \mathbb {R} ^{m\times n}}$ if ${\displaystyle AA^{\mathrm {g} }A=A.}$[1][2][3]

The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse.[1]

## Motivation

Consider the linear system

${\displaystyle Ax=y}$

where ${\displaystyle A}$ is an ${\displaystyle n\times m}$ matrix and ${\displaystyle y\in {\mathcal {R}}(A),}$ the column space of ${\displaystyle A}$. If ${\displaystyle A}$ is nonsingular (which implies ${\displaystyle n=m}$) then ${\displaystyle x=A^{-1}y}$ will be the solution of the system. Note that, if ${\displaystyle A}$ is nonsingular, then

${\displaystyle AA^{-1}A=A.}$

Now suppose ${\displaystyle A}$ is rectangular (${\displaystyle n\neq m}$), or square and singular. Then we need a right candidate ${\displaystyle G}$ of order ${\displaystyle m\times n}$ such that for all ${\displaystyle y\in {\mathcal {R}}(A),}$

${\displaystyle AGy=y.}$[4]

That is, ${\displaystyle x=Gy}$ is a solution of the linear system ${\displaystyle Ax=y}$. Equivalently, we need a matrix ${\displaystyle G}$ of order ${\displaystyle m\times n}$ such that

${\displaystyle AGA=A.}$

Hence we can define the generalized inverse as follows: Given an ${\displaystyle m\times n}$ matrix ${\displaystyle A}$, an ${\displaystyle n\times m}$ matrix ${\displaystyle G}$ is said to be a generalized inverse of ${\displaystyle A}$ if ${\displaystyle AGA=A.}$[1][2][3] The matrix ${\displaystyle A^{-1}}$ has been termed a regular inverse of ${\displaystyle A}$ by some authors.[5]
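The defining identity ${\displaystyle AGA=A}$ is easy to check numerically. As an illustrative sketch (NumPy is an assumption here, not part of the cited sources), the Moore–Penrose pseudoinverse of a singular matrix serves as one generalized inverse:

```python
import numpy as np

# A singular (rank-1) matrix: it has no regular inverse, but np.linalg.pinv
# returns its Moore-Penrose pseudoinverse, which is one generalized inverse.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
G = np.linalg.pinv(A)

print(np.allclose(A @ G @ A, A))   # True: the defining property A G A = A holds
```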

## Types

Important types of generalized inverse include:

• One-sided inverse (right inverse or left inverse)
• Right inverse: If the matrix ${\displaystyle A}$ has dimensions ${\displaystyle n\times m}$ and ${\displaystyle {\textrm {rank}}(A)=n}$, then there exists an ${\displaystyle m\times n}$ matrix ${\displaystyle A_{\mathrm {R} }^{-1}}$ called the right inverse of ${\displaystyle A}$ such that ${\displaystyle AA_{\mathrm {R} }^{-1}=I_{n}}$, where ${\displaystyle I_{n}}$ is the ${\displaystyle n\times n}$ identity matrix.
• Left inverse: If the matrix ${\displaystyle A}$ has dimensions ${\displaystyle n\times m}$ and ${\displaystyle {\textrm {rank}}(A)=m}$, then there exists an ${\displaystyle m\times n}$ matrix ${\displaystyle A_{\mathrm {L} }^{-1}}$ called the left inverse of ${\displaystyle A}$ such that ${\displaystyle A_{\mathrm {L} }^{-1}A=I_{m}}$, where ${\displaystyle I_{m}}$ is the ${\displaystyle m\times m}$ identity matrix.[6]
• Bott–Duffin inverse
• Drazin inverse
• Moore–Penrose inverse

Some generalized inverses are defined and classified based on the Penrose conditions:

1. ${\displaystyle AA^{\mathrm {g} }A=A}$
2. ${\displaystyle A^{\mathrm {g} }AA^{\mathrm {g} }=A^{\mathrm {g} }}$
3. ${\displaystyle (AA^{\mathrm {g} })^{*}=AA^{\mathrm {g} }}$
4. ${\displaystyle (A^{\mathrm {g} }A)^{*}=A^{\mathrm {g} }A,}$

where ${\displaystyle {}^{*}}$ denotes conjugate transpose. If ${\displaystyle A^{\mathrm {g} }}$ satisfies the first condition, then it is a generalized inverse of ${\displaystyle A}$. If it satisfies the first two conditions, then it is a reflexive generalized inverse of ${\displaystyle A}$. If it satisfies all four conditions, then it is the pseudoinverse of ${\displaystyle A}$, which is denoted by ${\displaystyle A^{+}}$ and also known as the Moore–Penrose inverse, after the pioneering works by E. H. Moore and Roger Penrose.[2][7][8][9][10][11] It is convenient to define an ${\displaystyle I}$-inverse of ${\displaystyle A}$ as an inverse that satisfies the subset ${\displaystyle I\subset \{1,2,3,4\}}$ of the Penrose conditions listed above. Relations, such as ${\displaystyle A^{(1,4)}AA^{(1,3)}=A^{+}}$, can be established between these different classes of ${\displaystyle I}$-inverses.[1]
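The four conditions can be checked mechanically for any candidate pair. A minimal NumPy sketch (the helper name `penrose_conditions` is hypothetical, introduced only for illustration):

```python
import numpy as np

def penrose_conditions(A, G, tol=1e-10):
    """Report which of the four Penrose conditions the pair (A, G) satisfies."""
    return {
        1: np.allclose(A @ G @ A, A, atol=tol),           # A G A = A
        2: np.allclose(G @ A @ G, G, atol=tol),           # G A G = G
        3: np.allclose((A @ G).conj().T, A @ G, atol=tol),  # (A G)* = A G
        4: np.allclose((G @ A).conj().T, G @ A, atol=tol),  # (G A)* = G A
    }

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
# The Moore-Penrose inverse satisfies all four conditions.
print(penrose_conditions(A, np.linalg.pinv(A)))
# {1: True, 2: True, 3: True, 4: True}
```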

When ${\displaystyle A}$ is non-singular, any generalized inverse satisfies ${\displaystyle A^{\mathrm {g} }=A^{-1}}$, so the generalized inverse is unique. For a singular ${\displaystyle A}$, some generalized inverses, such as the Drazin inverse and the Moore–Penrose inverse, are unique, while others are not necessarily uniquely defined.

## Examples

### Reflexive generalized inverse

Let

${\displaystyle A={\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}},\quad G={\begin{bmatrix}-{\frac {5}{3}}&{\frac {2}{3}}&0\\[4pt]{\frac {4}{3}}&-{\frac {1}{3}}&0\\[4pt]0&0&0\end{bmatrix}}.}$

Since ${\displaystyle \det(A)=0}$, ${\displaystyle A}$ is singular and has no regular inverse. However, ${\displaystyle A}$ and ${\displaystyle G}$ satisfy Penrose conditions (1) and (2), but not (3) or (4). Hence, ${\displaystyle G}$ is a reflexive generalized inverse of ${\displaystyle A}$.
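These claims can be confirmed numerically; the following NumPy check is an illustrative sketch, not taken from the cited sources:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
G = np.array([[-5/3,  2/3, 0.],
              [ 4/3, -1/3, 0.],
              [ 0.,   0.,  0.]])

print(np.allclose(A @ G @ A, A))      # True:  Penrose condition (1)
print(np.allclose(G @ A @ G, G))      # True:  Penrose condition (2)
print(np.allclose((A @ G).T, A @ G))  # False: condition (3) fails (A G not symmetric)
print(np.allclose((G @ A).T, G @ A))  # False: condition (4) fails (G A not symmetric)
```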

### One-sided inverse

Let

${\displaystyle A={\begin{bmatrix}1&2&3\\4&5&6\end{bmatrix}},\quad A_{\mathrm {R} }^{-1}={\begin{bmatrix}-{\frac {17}{18}}&{\frac {8}{18}}\\[4pt]-{\frac {2}{18}}&{\frac {2}{18}}\\[4pt]{\frac {13}{18}}&-{\frac {4}{18}}\end{bmatrix}}.}$

Since ${\displaystyle A}$ is not square, ${\displaystyle A}$ has no regular inverse. However, ${\displaystyle A_{\mathrm {R} }^{-1}}$ is a right inverse of ${\displaystyle A}$. The matrix ${\displaystyle A}$ has no left inverse.
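A quick numerical check of this example (an illustrative NumPy sketch):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
A_R = np.array([[-17.,  8.],
                [ -2.,  2.],
                [ 13., -4.]]) / 18

print(np.allclose(A @ A_R, np.eye(2)))  # True: A_R is a right inverse of A

# A left inverse L would need L @ A = I_3, which is impossible here
# because rank(A) = 2 < 3, the number of columns.
print(np.linalg.matrix_rank(A))         # 2
```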

### Inverse of other semigroups (or rings)

In any semigroup (or ring, since any ring is a semigroup under its multiplication), an element b is a generalized inverse of an element a if and only if ${\displaystyle a\cdot b\cdot a=a}$.

The generalized inverses of the element 3 in the ring ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$ are 3, 7, and 11, since in the ring ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$:

${\displaystyle 3\cdot 3\cdot 3=3}$
${\displaystyle 3\cdot 7\cdot 3=3}$
${\displaystyle 3\cdot 11\cdot 3=3}$

The generalized inverses of the element 4 in the ring ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$ are 1, 4, 7, and 10, since in the ring ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$:

${\displaystyle 4\cdot 1\cdot 4=4}$
${\displaystyle 4\cdot 4\cdot 4=4}$
${\displaystyle 4\cdot 7\cdot 4=4}$
${\displaystyle 4\cdot 10\cdot 4=4}$

If an element a in a semigroup (or ring) has an inverse, then that inverse must be the only generalized inverse of a, as is the case for the units 1, 5, 7, and 11 in the ring ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$.

In the ring ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$, every element is a generalized inverse of 0; however, 2 has no generalized inverse, since there is no b in ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$ such that ${\displaystyle 2\cdot b\cdot 2=2}$.
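A brute-force search over ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$ reproduces all of the above; the helper name `g_inverses` in this short Python sketch is hypothetical:

```python
# b is a generalized inverse of a in Z/nZ iff a*b*a == a (mod n).
def g_inverses(a, n=12):
    return [b for b in range(n) if (a * b * a) % n == a % n]

print(g_inverses(3))   # [3, 7, 11]
print(g_inverses(4))   # [1, 4, 7, 10]
print(g_inverses(2))   # []  (2 has no generalized inverse mod 12)
print(g_inverses(0))   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```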

## Construction

The following characterizations are easy to verify:

• A right inverse of a non-square matrix ${\displaystyle A}$ is given by ${\displaystyle A_{\mathrm {R} }^{-1}=A^{\intercal }\left(AA^{\intercal }\right)^{-1}}$, provided ${\displaystyle A}$ has full row rank.[6]
• A left inverse of a non-square matrix ${\displaystyle A}$ is given by ${\displaystyle A_{\mathrm {L} }^{-1}=\left(A^{\intercal }A\right)^{-1}A^{\intercal }}$, provided ${\displaystyle A}$ has full column rank.[6]
• If ${\displaystyle A=BC}$ is a rank factorization, then ${\displaystyle G=C_{\mathrm {R} }^{-1}B_{\mathrm {L} }^{-1}}$ is a g-inverse of ${\displaystyle A}$, where ${\displaystyle C_{\mathrm {R} }^{-1}}$ is a right inverse of ${\displaystyle C}$ and ${\displaystyle B_{\mathrm {L} }^{-1}}$ is a left inverse of ${\displaystyle B}$.
• If ${\displaystyle A=P{\begin{bmatrix}I_{r}&0\\0&0\end{bmatrix}}Q}$ for any non-singular matrices ${\displaystyle P}$ and ${\displaystyle Q}$, then ${\displaystyle G=Q^{-1}{\begin{bmatrix}I_{r}&U\\W&V\end{bmatrix}}P^{-1}}$ is a generalized inverse of ${\displaystyle A}$ for arbitrary ${\displaystyle U,V}$ and ${\displaystyle W}$.
• Let ${\displaystyle A}$ be of rank ${\displaystyle r}$. Without loss of generality, let
${\displaystyle A={\begin{bmatrix}B&C\\D&E\end{bmatrix}},}$
where ${\displaystyle B_{r\times r}}$ is the non-singular submatrix of ${\displaystyle A}$. Then,
${\displaystyle G={\begin{bmatrix}B^{-1}&0\\0&0\end{bmatrix}}}$
is a generalized inverse of ${\displaystyle A}$ if and only if ${\displaystyle E=DB^{-1}C}$.
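The first three constructions can be combined into a working recipe. The NumPy sketch below (helper names `right_inv` and `left_inv` are hypothetical) builds a g-inverse of the singular matrix from the earlier example via a rank factorization:

```python
import numpy as np

# Right inverse of a full-row-rank C: C^T (C C^T)^(-1).
def right_inv(C):
    return C.T @ np.linalg.inv(C @ C.T)

# Left inverse of a full-column-rank B: (B^T B)^(-1) B^T.
def left_inv(B):
    return np.linalg.inv(B.T @ B) @ B.T

# Rank factorization A = B C of the singular 3 x 3 matrix [[1,2,3],[4,5,6],[7,8,9]].
B = np.array([[1., 2.],
              [4., 5.],
              [7., 8.]])
C = np.array([[1., 0., -1.],
              [0., 1.,  2.]])
A = B @ C

G = right_inv(C) @ left_inv(B)     # g-inverse from the factorization
print(np.allclose(A @ G @ A, A))   # True
```

The verification is immediate: ${\displaystyle AGA=BC\,C_{\mathrm {R} }^{-1}B_{\mathrm {L} }^{-1}BC=B\,I\,I\,C=A}$.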

## Uses

Any generalized inverse can be used to determine whether a system of linear equations has any solutions, and if so to give all of them. If any solutions exist for the n × m linear system

${\displaystyle Ax=b}$,

with vector ${\displaystyle x}$ of unknowns and vector ${\displaystyle b}$ of constants, all solutions are given by

${\displaystyle x=A^{\mathrm {g} }b+\left[I-A^{\mathrm {g} }A\right]w}$,

parametric on the arbitrary vector ${\displaystyle w}$, where ${\displaystyle A^{\mathrm {g} }}$ is any generalized inverse of ${\displaystyle A}$. Solutions exist if and only if ${\displaystyle A^{\mathrm {g} }b}$ is a solution, that is, if and only if ${\displaystyle AA^{\mathrm {g} }b=b}$. If A has full column rank, the bracketed expression in this equation is the zero matrix and so the solution is unique.[12]
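The solution formula can be exercised directly; a NumPy sketch under the assumption that the pseudoinverse stands in for "any" generalized inverse:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])   # singular, rank 2
b = np.array([6., 15., 24.])   # b = A @ [1, 1, 1], so the system is consistent
Ag = np.linalg.pinv(A)         # any generalized inverse works; pinv is one

# Existence test: solutions exist iff A @ (Ag @ b) == b.
print(np.allclose(A @ (Ag @ b), b))   # True

# Every choice of w yields a solution x = Ag b + (I - Ag A) w.
rng = np.random.default_rng(0)
for _ in range(3):
    w = rng.standard_normal(3)
    x = Ag @ b + (np.eye(3) - Ag @ A) @ w
    assert np.allclose(A @ x, b)
```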

## Generalized inverses of matrices

The generalized inverses of matrices can be characterized as follows. Let ${\displaystyle A\in \mathbb {R} ^{m\times n}}$, and

${\displaystyle A=U{\begin{bmatrix}\Sigma _{1}&0\\0&0\end{bmatrix}}V^{\textsf {T}}}$

be its singular-value decomposition. Then for any generalized inverse ${\displaystyle A^{g}}$, there exist[1] matrices ${\displaystyle X}$, ${\displaystyle Y}$, and ${\displaystyle Z}$ such that

${\displaystyle A^{g}=V{\begin{bmatrix}\Sigma _{1}^{-1}&X\\Y&Z\end{bmatrix}}U^{\textsf {T}}.}$

Conversely, for any choice of ${\displaystyle X}$, ${\displaystyle Y}$, and ${\displaystyle Z}$, a matrix of this form is a generalized inverse of ${\displaystyle A}$.[1] The ${\displaystyle \{1,2\}}$-inverses are exactly those for which ${\displaystyle Z=Y\Sigma _{1}X}$, the ${\displaystyle \{1,3\}}$-inverses are exactly those for which ${\displaystyle X=0}$, and the ${\displaystyle \{1,4\}}$-inverses are exactly those for which ${\displaystyle Y=0}$. In particular, the pseudoinverse is given by ${\displaystyle X=Y=Z=0}$:

${\displaystyle A^{+}=V{\begin{bmatrix}\Sigma _{1}^{-1}&0\\0&0\end{bmatrix}}U^{\textsf {T}}.}$
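This characterization can be tested numerically. In the NumPy sketch below, the sample matrix has full row rank, so the blocks ${\displaystyle X}$ and ${\displaystyle Z}$ are empty and only ${\displaystyle Y}$ remains free (the particular value of `Y` is an arbitrary assumption for illustration):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])    # 2 x 3, rank r = 2
U, s, Vt = np.linalg.svd(A)
r = 2

# Any generalized inverse of this A has the form V [[Sigma_1^(-1)], [Y]] U^T,
# with Y an arbitrary 1 x 2 block (X and Z are empty since m = r).
Y = np.array([[7., -3.]])
G = Vt.T @ np.vstack([np.diag(1 / s[:r]), Y]) @ U.T

print(np.allclose(A @ G @ A, A))   # True: G is a generalized inverse

# Choosing Y = 0 recovers the Moore-Penrose pseudoinverse.
G0 = Vt.T @ np.vstack([np.diag(1 / s[:r]), np.zeros((1, r))]) @ U.T
print(np.allclose(G0, np.linalg.pinv(A)))   # True
```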

## Transformation consistency properties

In practical applications it is necessary to identify the class of matrix transformations that must be preserved by a generalized inverse. For example, the Moore–Penrose inverse, ${\displaystyle A^{+},}$ satisfies the following definition of consistency with respect to transformations involving unitary matrices U and V:

${\displaystyle (UAV)^{+}=V^{*}A^{+}U^{*}}$.

The Drazin inverse, ${\displaystyle A^{\mathrm {D} }}$ satisfies the following definition of consistency with respect to similarity transformations involving a nonsingular matrix S:

${\displaystyle \left(SAS^{-1}\right)^{\mathrm {D} }=SA^{\mathrm {D} }S^{-1}}$.

The unit-consistent (UC) inverse,[13] ${\displaystyle A^{\mathrm {U} },}$ satisfies the following definition of consistency with respect to transformations involving nonsingular diagonal matrices D and E:

${\displaystyle (DAE)^{\mathrm {U} }=E^{-1}A^{\mathrm {U} }D^{-1}}$.

The fact that the Moore–Penrose inverse provides consistency with respect to rotations (which are orthonormal transformations) explains its widespread use in physics and other applications in which Euclidean distances must be preserved. The UC inverse, by contrast, is applicable when system behavior is expected to be invariant with respect to the choice of units on different state variables, e.g., miles versus kilometers.
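The unitary-consistency property of the Moore–Penrose inverse can be spot-checked numerically; in this real-valued NumPy sketch the conjugate transposes reduce to ordinary transposes:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))

# Random orthogonal (hence unitary) matrices via QR decomposition.
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))
V, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# Moore-Penrose consistency under unitary transformations: (U A V)^+ = V* A^+ U*.
lhs = np.linalg.pinv(U @ A @ V)
rhs = V.T @ np.linalg.pinv(A) @ U.T
print(np.allclose(lhs, rhs))   # True
```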

## Citations

1. Ben-Israel & Greville 2003, pp. 2, 7
2. Nakamura 1991, pp. 41–42
3. Rao & Mitra 1971, pp. vii, 20
4. Rao & Mitra 1971, p. 24
5. Rao & Mitra 1971, pp. 19–20
6. Rao & Mitra 1971, p. 19
7. Rao & Mitra 1971, pp. 20, 28, 50–51
8.
9. Campbell & Meyer 1991, p. 10
10. James 1978, p. 114
11. Nakamura 1991, p. 42
12. James 1978, pp. 109–110
13. Uhlmann 2018

## Sources

### Textbooks

• Ben-Israel, Adi; Greville, Thomas Nall Eden (2003). Generalized Inverses: Theory and Applications (2nd ed.). New York, NY: Springer. doi:10.1007/b97366. ISBN 978-0-387-00293-4.
• Campbell, Stephen L.; Meyer, Carl D. (1991). Generalized Inverses of Linear Transformations. Dover. ISBN 978-0-486-66693-8.
• Horn, Roger Alan; Johnson, Charles Royal (1985). Matrix Analysis. Cambridge University Press. ISBN 978-0-521-38632-6.
• Nakamura, Yoshihiko (1991). Advanced Robotics: Redundancy and Optimization. Addison-Wesley. ISBN 978-0201151985.
• Rao, C. Radhakrishna; Mitra, Sujit Kumar (1971). Generalized Inverse of Matrices and its Applications. New York: John Wiley & Sons. pp. 240. ISBN 978-0-471-70821-6.