# Sylvester equation

In mathematics, in the field of control theory, a Sylvester equation is a matrix equation of the form:

$AX+XB=C.$

Given matrices A, B, and C, the problem is to find the possible matrices X that obey this equation. All matrices are assumed to have entries in the complex numbers. For the equation to make sense, the matrices must have appropriate sizes; for example, they could all be square matrices of the same size. More generally, A and B must be square matrices of sizes n and m respectively, and then X and C both have n rows and m columns.

A Sylvester equation has a unique solution for X exactly when there are no common eigenvalues of A and −B. More generally, the equation AX + XB = C has been considered as an equation of bounded operators on a (possibly infinite-dimensional) Banach space. In this case, the condition for the uniqueness of a solution X is almost the same: There exists a unique solution X exactly when the spectra of A and −B are disjoint.

## Existence and uniqueness of the solutions

Using the Kronecker product notation and the vectorization operator $\operatorname {vec}$ , we can rewrite Sylvester's equation in the form

$(I_{m}\otimes A+B^{T}\otimes I_{n})\operatorname {vec} X=\operatorname {vec} C,$

where $A$ is of dimension $n\times n$, $B$ is of dimension $m\times m$, $X$ of dimension $n\times m$ and $I_{k}$ is the $k\times k$ identity matrix. In this form, the equation can be seen as a linear system of dimension $mn\times mn$.
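As a minimal sketch, the Kronecker formulation above can be checked numerically with NumPy. Note that $\operatorname{vec}$ stacks the columns of a matrix, which corresponds to Fortran (column-major) order in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
C = rng.standard_normal((n, m))

# Build the mn x mn system matrix (I_m ⊗ A + B^T ⊗ I_n)
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))

# Solve for vec(X); vec stacks columns, hence order="F"
x = np.linalg.solve(K, C.flatten(order="F"))
X = x.reshape((n, m), order="F")

print(np.allclose(A @ X + X @ B, C))  # the recovered X solves AX + XB = C
```

Solving via this dense $mn\times mn$ system is only practical for small problems; dedicated algorithms (see the section on numerical solutions) scale much better.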

Theorem. Given matrices $A\in \mathbb {C} ^{n\times n}$ and $B\in \mathbb {C} ^{m\times m}$ , the Sylvester equation $AX+XB=C$ has a unique solution $X\in \mathbb {C} ^{n\times m}$ for any $C\in \mathbb {C} ^{n\times m}$ if and only if $A$ and $-B$ do not share any eigenvalue.
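The eigenvalues of the system matrix $I_{m}\otimes A+B^{T}\otimes I_{n}$ are exactly the sums $\lambda _{i}+\mu _{j}$ of eigenvalues of $A$ and $B$, so the system is singular precisely when $A$ and $-B$ share an eigenvalue. A small illustration, with matrices chosen for the example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # eigenvalues 2, 3
B = np.array([[-2.0, 0.0],
              [0.0, 5.0]])   # eigenvalues -2, 5, so -B has eigenvalues 2, -5

# A and -B share the eigenvalue 2, so the Kronecker system matrix is singular
K = np.kron(np.eye(2), A) + np.kron(B.T, np.eye(2))
print(np.isclose(np.linalg.det(K), 0.0))  # singular: no unique solution
```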

Proof. The equation $AX+XB=C$ is a linear system with $mn$ unknowns and the same amount of equations. Hence it is uniquely solvable for any given $C$ if and only if the homogeneous equation $AX+XB=0$ admits only the trivial solution $0$ .

(i) Assume that $A$ and $-B$ do not share any eigenvalue. Let $X$ be a solution to this homogeneous equation. Then $AX=X(-B)$ , which can be lifted to $A^{k}X=X(-B)^{k}$ for each $k\geq 0$ by mathematical induction. Consequently, $p(A)X=Xp(-B)$ for any polynomial $p$ . In particular, let $p$ be the characteristic polynomial of $A$ . Then $p(A)=0$ due to the Cayley–Hamilton theorem; meanwhile, the spectral mapping theorem tells us $\sigma (p(-B))=p(\sigma (-B)),$ where $\sigma (\cdot )$ denotes the spectrum of a matrix. Since $A$ and $-B$ do not share any eigenvalue, $p(\sigma (-B))$ does not contain zero, and hence $p(-B)$ is nonsingular. Thus $X=Xp(-B)p(-B)^{-1}=p(A)Xp(-B)^{-1}=0$ as desired. This proves the "if" part of the theorem.
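The key step of part (i), that $p(A)=0$ while $p(-B)$ is nonsingular when no eigenvalue is shared, can be illustrated numerically. The matrices below are chosen for the example:

```python
import numpy as np

def matpoly(coeffs, M):
    # Evaluate a polynomial (highest-degree coefficient first) at a square matrix
    R = np.zeros_like(M)
    for c in coeffs:
        R = R @ M + c * np.eye(len(M))
    return R

A = np.array([[1.0, 2.0],
              [0.0, 4.0]])   # eigenvalues 1, 4
B = np.array([[3.0, 0.0],
              [1.0, 5.0]])   # -B has eigenvalues -3, -5: disjoint from those of A

p = np.poly(A)               # coefficients of the characteristic polynomial of A
print(np.allclose(matpoly(p, A), 0))        # Cayley–Hamilton: p(A) = 0
print(np.linalg.det(matpoly(p, -B)) != 0)   # p(-B) is nonsingular
```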

(ii) Now assume that $A$ and $-B$ share an eigenvalue $\lambda$ . Let $u$ be a corresponding right eigenvector for $A$ , $v$ be a corresponding left eigenvector for $-B$ , and $X=u{v}^{*}$ . Then $X\neq 0$ , and $AX+XB=A(uv^{*})-(uv^{*})(-B)=\lambda uv^{*}-\lambda uv^{*}=0.$ Hence $X$ is a nontrivial solution to the aforesaid homogeneous equation, justifying the "only if" part of the theorem. Q.E.D.
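The construction in part (ii) can also be made concrete. In this sketch the matrices are diagonal (chosen for the example) so that the eigenvectors are standard basis vectors, and the real case makes $v^{*}=v^{T}$:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])    # u = e1 is a right eigenvector of A for λ = 2
B = np.array([[-2.0, 0.0],
              [0.0, 1.0]])    # v = e1 is a left eigenvector of -B for λ = 2

u = np.array([[1.0], [0.0]])
v = np.array([[1.0], [0.0]])
X = u @ v.T                   # X = u v*, a rank-one nonzero matrix

# X is a nontrivial solution of the homogeneous equation AX + XB = 0
print(np.allclose(A @ X + X @ B, 0))
```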

As an alternative to the spectral mapping theorem, the nonsingularity of $p(-B)$ in part (i) of the proof can also be demonstrated by Bézout's identity for coprime polynomials. Let $q$ be the characteristic polynomial of $-B$ . Since $A$ and $-B$ do not share any eigenvalue, $p$ and $q$ are coprime. Hence there exist polynomials $f$ and $g$ such that $p(z)f(z)+q(z)g(z)\equiv 1$ . By the Cayley–Hamilton theorem, $q(-B)=0$ . Thus $p(-B)f(-B)=I$ , implying that $p(-B)$ is nonsingular.

The theorem remains true for real matrices with the caveat that one considers their complex eigenvalues. The proof for the "if" part is still applicable; for the "only if" part, note that both $\mathrm {Re} (uv^{*})$ and $\mathrm {Im} (uv^{*})$ satisfy the homogeneous equation $AX+XB=0$ , and they cannot be zero simultaneously since $uv^{*}=\mathrm {Re} (uv^{*})+i\,\mathrm {Im} (uv^{*})\neq 0$ .

## Roth's removal rule

Given two square complex matrices A and B, of size n and m, and a matrix C of size n by m, then one can ask when the following two square matrices of size n + m are similar to each other: ${\begin{bmatrix}A&C\\0&B\end{bmatrix}}$ and ${\begin{bmatrix}A&0\\0&B\end{bmatrix}}$ . The answer is that these two matrices are similar exactly when there exists a matrix X such that AX − XB = C. In other words, X is a solution to a Sylvester equation. This is known as Roth's removal rule.

One easily checks one direction: If AX − XB = C then

${\begin{bmatrix}I_{n}&X\\0&I_{m}\end{bmatrix}}{\begin{bmatrix}A&C\\0&B\end{bmatrix}}{\begin{bmatrix}I_{n}&-X\\0&I_{m}\end{bmatrix}}={\begin{bmatrix}A&0\\0&B\end{bmatrix}}.$

Roth's removal rule does not generalize to infinite-dimensional bounded operators on a Banach space.
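The similarity identity above is easy to verify numerically. In this sketch, $X$ is drawn at random and $C$ is defined as $AX-XB$, so a solution exists by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 2, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
X = rng.standard_normal((n, m))
C = A @ X - X @ B   # X solves AX - XB = C by construction

# Similarity transform S = [[I, X], [0, I]], with inverse [[I, -X], [0, I]]
S = np.block([[np.eye(n), X], [np.zeros((m, n)), np.eye(m)]])
Sinv = np.block([[np.eye(n), -X], [np.zeros((m, n)), np.eye(m)]])

M1 = np.block([[A, C], [np.zeros((m, n)), B]])              # [[A, C], [0, B]]
M2 = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), B]])  # [[A, 0], [0, B]]

print(np.allclose(S @ M1 @ Sinv, M2))  # the two block matrices are similar
```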

## Numerical solutions

A classical algorithm for the numerical solution of the Sylvester equation is the Bartels–Stewart algorithm, which consists of transforming $A$ and $B$ into Schur form by the QR algorithm and then solving the resulting triangular system via back-substitution. This algorithm, whose computational cost is ${\mathcal {O}}(n^{3})$ arithmetical operations, is used, among others, by LAPACK and the lyap function in GNU Octave; see also the sylvester function in that language. In some specific image processing applications, the derived Sylvester equation has a closed-form solution.
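A simplified sketch of the Bartels–Stewart idea follows, using SciPy's Schur decomposition. For brevity it works in complex arithmetic throughout, whereas production implementations such as LAPACK's routine use the real Schur form:

```python
import numpy as np
from scipy.linalg import schur, solve_triangular

def sylvester_bartels_stewart(A, B, C):
    """Solve AX + XB = C via Schur forms of A and B (complex arithmetic)."""
    # 1. Schur decompositions: A = U Ta U*, B = V Tb V*, with Ta, Tb upper triangular
    Ta, U = schur(A.astype(complex), output="complex")
    Tb, V = schur(B.astype(complex), output="complex")
    # 2. Transformed equation: Ta Y + Y Tb = F, where Y = U* X V and F = U* C V
    F = U.conj().T @ C @ V
    n, m = F.shape
    Y = np.zeros((n, m), dtype=complex)
    # 3. Back-substitution column by column: column k of Y only involves
    #    already-computed columns, and each solve is upper triangular
    for k in range(m):
        rhs = F[:, k] - Y[:, :k] @ Tb[:k, k]
        Y[:, k] = solve_triangular(Ta + Tb[k, k] * np.eye(n), rhs)
    # 4. Transform back: X = U Y V*
    return U @ Y @ V.conj().T

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((4, 3))
X = sylvester_bartels_stewart(A, B, C)
print(np.allclose(A @ X + X @ B, C))
```

In practice one would call a ready-made routine instead, such as SciPy's `scipy.linalg.solve_sylvester`, which is based on the same approach.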