# Talk:Eigendecomposition of a matrix

WikiProject Mathematics (Rated Start-class, Mid-priority)
This article is within the scope of WikiProject Mathematics, a collaborative effort to improve the coverage of Mathematics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Mathematics rating: Start-Class, Mid-priority
Field: Algebra

# Items needing attention

• The section on decomposition of special matrices needs expansion.
• The section on useful facts needs expansion.
• The section on advanced topics is haphazard and needs cleanup.

Repliedthemockturtle 02:27, 7 October 2007 (UTC)

• The section on "Fundamental theory of matrix eigenvectors and eigenvalues" under "summary" doesn't treat eigenvectors as solutions of the eigenproblem properly: the solution set is generally infinite, not finite; the dimensions of the eigenspaces are finite. Language on "eigenspaces" is needed to clean this up.

63.228.55.134 17:19, 10 October 2007 (UTC)Craig Calcaterra

# Text that was removed

The following text may contain some useful additional information, but needs to be reworked. It might also be better placed somewhere else.

Repliedthemockturtle 00:52, 7 October 2007 (UTC)

## Decomposition theorems for general matrices

The decomposition theorem is a version of the spectral theorem in the particular case of matrices. It is usually introduced in terms of coordinate transformations. If U is an invertible matrix, it can be seen as a transformation from one coordinate system to another, with the columns of U being the components of the new basis vectors within the old basis. In this new system the coordinates of the vector ${\displaystyle v}$ are labeled ${\displaystyle v'}$. These are obtained from the coordinates v in the original coordinate system by the relation ${\displaystyle v'=Uv}$ and, conversely, ${\displaystyle v=U^{-1}v'}$. Applying ${\displaystyle v'=Uv}$, ${\displaystyle w'=Uw}$ and ${\displaystyle U^{-1}U=I}$ successively to the relation ${\displaystyle Av=w}$ defining the action of the matrix yields ${\displaystyle A'v'=w'}$ with ${\displaystyle A'=UAU^{-1}}$, the representation of A in the new basis. In this situation, the matrices A and ${\displaystyle A'}$ are said to be similar.

The decomposition theorem states that, if one chooses as columns of ${\displaystyle U^{-1}}$ n linearly independent eigenvectors of A, the new matrix ${\displaystyle A'=UAU^{-1}}$ is diagonal, and its diagonal elements are the eigenvalues of A. If this is possible, the matrix A is said to be diagonalizable. An example of a non-diagonalizable matrix is given by the matrix A above. There are several generalizations of this decomposition which can cope with the non-diagonalizable case, suited for different purposes:

• the Schur decomposition (triangular form): any matrix is unitarily equivalent to an upper triangular one;
• the singular value decomposition, ${\displaystyle A=U\Sigma V^{*}}$, where ${\displaystyle \Sigma }$ is diagonal and U and V are unitary matrices. The diagonal entries of ${\displaystyle \Sigma }$ are nonnegative; they are called the singular values of A. This decomposition also applies to non-square matrices;
• the Jordan normal form, ${\displaystyle A=X\Lambda X^{-1}}$, where ${\displaystyle \Lambda }$ is not diagonal but block-diagonal. The number and sizes of the Jordan blocks are dictated by the geometric and algebraic multiplicities of the eigenvalues. The Jordan decomposition is a fundamental result: it shows that a square matrix is described completely, up to similarity, by its eigenvalues together with the sizes of the corresponding Jordan blocks. This shows mathematically the important role played by eigenvalues in the study of matrices;
• as an immediate consequence of the Jordan decomposition, any matrix A can be written uniquely as A = S + N, where S is diagonalizable, N is nilpotent (i.e., ${\displaystyle N^{q}=0}$ for some q), and S commutes with N (SN = NS).
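As a quick illustration of the diagonalizable case, the factorization can be checked numerically. This is only a sketch in Python with NumPy; the 2×2 matrix is an arbitrary example, not taken from the article:

```python
import numpy as np

# Arbitrary diagonalizable example (not from the article).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix Q whose columns
# are eigenvectors. In the notation above, take U^{-1} = Q; then
# A' = U A U^{-1} = Q^{-1} A Q is diagonal with the eigenvalues of A.
eigvals, Q = np.linalg.eig(A)
A_prime = np.linalg.inv(Q) @ A @ Q

assert np.allclose(A_prime, np.diag(eigvals))
```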

## Some other properties of eigenvalues

The spectrum is invariant under similarity transformations: the matrices A and ${\displaystyle P^{-1}AP}$ have the same eigenvalues for any matrix A and any invertible matrix P. The spectrum is also invariant under transposition: the matrices A and ${\displaystyle A^{T}}$ have the same eigenvalues.

Since a linear transformation on finite dimensional spaces is bijective if and only if it is injective, a matrix is invertible if and only if zero is not an eigenvalue of the matrix.

Some more consequences of the Jordan decomposition are as follows:

• a matrix is diagonalizable if and only if the algebraic and geometric multiplicities coincide for all its eigenvalues. In particular, an n×n matrix with n distinct eigenvalues is always diagonalizable. By the same reasoning, if n linearly independent eigenvectors of such a matrix A are stored as the columns of a matrix P, then ${\displaystyle P^{-1}AP=\Lambda }$, where ${\displaystyle \Lambda }$ is a diagonal matrix with the eigenvalues of A along the diagonal;
• the vector space on which the matrix acts can be viewed as a direct sum of invariant subspaces spanned by its generalized eigenvectors. Each block on the diagonal corresponds to a subspace in the direct sum. When a block is diagonal, its invariant subspace is an eigenspace; otherwise it is a generalized eigenspace, defined above;
• since the trace, the sum of the elements on the main diagonal of a matrix, is preserved by similarity transformations, the Jordan normal form shows that it equals the sum of the eigenvalues;
• Similarly, because the eigenvalues of a triangular matrix are the entries on the main diagonal, the determinant equals the product of the eigenvalues (counted according to algebraic multiplicity).
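The trace and determinant identities in the last two points are easy to spot-check numerically. A sketch; the random matrix is an arbitrary assumption:

```python
import numpy as np

# Random matrix, used only to spot-check the two identities.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
eigvals = np.linalg.eigvals(A)  # may be complex even for a real matrix

# Trace = sum of eigenvalues; determinant = product of eigenvalues.
assert np.isclose(np.trace(A), eigvals.sum())
assert np.isclose(np.linalg.det(A), eigvals.prod())
```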

The locations of the spectrum for a few subclasses of normal matrices are:

• the eigenvalues of a Hermitian matrix are real;
• the eigenvalues of a skew-Hermitian matrix are purely imaginary;
• the eigenvalues of a unitary matrix lie on the unit circle (they have modulus 1).

Suppose that A is an m×n matrix, with m ≤ n, and that B is an n×m matrix. Then BA has the same eigenvalues as AB plus n − m eigenvalues equal to zero.
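This AB-versus-BA relationship can also be verified numerically. A sketch; the sizes m = 2, n = 4 are arbitrary:

```python
import numpy as np

# A is m x n with m <= n; B is n x m. Sizes chosen arbitrarily.
rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

ab = np.linalg.eigvals(A @ B)  # m eigenvalues
ba = np.linalg.eigvals(B @ A)  # n eigenvalues

# BA should have n - m (near-)zero eigenvalues; the rest match those of AB.
ba = ba[np.argsort(np.abs(ba))]
assert np.allclose(ba[: n - m], 0.0, atol=1e-10)
assert np.allclose(np.sort_complex(ba[n - m:]), np.sort_complex(ab))
```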

Each matrix can be assigned an operator norm, which depends on the norm of its domain. The operator norm of a square matrix is an upper bound for the moduli of its eigenvalues, and thus also for its spectral radius. This norm is directly related to the power method for calculating the eigenvalue of largest modulus mentioned above. For normal matrices, the operator norm induced by the Euclidean norm equals the largest modulus among the eigenvalues, i.e., the spectral radius.
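The power method mentioned here is short enough to sketch. Assumptions: a symmetric example matrix (so the induced 2-norm equals the spectral radius) and generic convergence from a random start vector:

```python
import numpy as np

def power_method(A, iters=500, seed=0):
    """Repeatedly apply A and renormalize; generically converges to an
    eigenvector for the eigenvalue of largest modulus."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v  # Rayleigh quotient estimate of the eigenvalue

# Symmetric (hence normal) example: induced 2-norm = spectral radius.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam = power_method(A)

assert np.isclose(abs(lam), max(abs(np.linalg.eigvals(A))))
assert np.isclose(np.linalg.norm(A, 2), abs(lam))
```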

### Algebraic multiplicity

The algebraic multiplicity of an eigenvalue λ of A is the order of λ as a zero of the characteristic polynomial of A; in other words, if λ is a root of the polynomial, it is the number of factors (t − λ) in the characteristic polynomial after factorization. An n×n matrix has n eigenvalues, counted according to their algebraic multiplicities, because its characteristic polynomial has degree n.

An eigenvalue of algebraic multiplicity 1 is called a "simple eigenvalue".

In an article on matrix theory, a statement like the one below might be encountered:

"the eigenvalues of a matrix A are 4,4,3,3,3,2,2,1,"

meaning that the algebraic multiplicity of 4 is two, of 3 is three, of 2 is two and of 1 is one. This style is used because algebraic multiplicity is the key to many mathematical proofs in matrix theory.

Recall that above we defined the geometric multiplicity of an eigenvalue to be the dimension of the associated eigenspace, the nullspace of λI − A. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), the nullspace of the matrix (λI − A)^k for any sufficiently large k. That is, it is the dimension of the space of generalized eigenvectors (1st sense), where a generalized eigenvector is any vector which eventually becomes 0 when λI − A is applied to it enough times in succession. Any eigenvector is a generalized eigenvector, so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity. The first sense should not be confused with the generalized eigenvalue problem stated below.

For example:

${\displaystyle A={\begin{bmatrix}1&1\\0&1\end{bmatrix}}.}$

It has only one eigenvalue, namely λ = 1. The characteristic polynomial is ${\displaystyle (\lambda -1)^{2}}$, so this eigenvalue has algebraic multiplicity 2. However, the associated eigenspace is the axis usually called the x axis, spanned by the unit vector ${\displaystyle {\begin{bmatrix}1\\0\end{bmatrix}}}$, so the geometric multiplicity is only 1.

Generalized eigenvectors can be used to calculate the Jordan normal form of a matrix (see discussion below). The fact that Jordan blocks in general are not diagonal but nilpotent is directly related to the distinction between eigenvectors and generalized eigenvectors.
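The multiplicities in the 2×2 example above can be computed directly. A sketch using nullspace dimensions via matrix rank:

```python
import numpy as np

# The example above: eigenvalue 1 with algebraic multiplicity 2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam, n = 1.0, 2
M = lam * np.eye(n) - A

# Geometric multiplicity: dimension of the nullspace of (lam*I - A).
geometric = n - np.linalg.matrix_rank(M)

# Algebraic multiplicity: dimension of the nullspace of (lam*I - A)^k
# for sufficiently large k (k = n always suffices).
algebraic = n - np.linalg.matrix_rank(np.linalg.matrix_power(M, n))

assert geometric == 1   # eigenspace is one-dimensional (the x axis)
assert algebraic == 2   # generalized eigenspace is the whole plane
```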

# Recent edits

I've made a number of recent edits to try to change the tone of the article. It reads like a textbook, which wikipedia is not. I've removed a lot of material in the process, usually stuff that's spelled out in greater detail than is suitable, summaries, or stuff that's just not relevant. James pic 16:58, 12 October 2007 (UTC)

good show!!!86.164.224.41 (talk) 19:29, 6 August 2008 (UTC)

## Largest: algebraic vs. magnitude?

I'm reading documentation for ARPACK and I come across this:

c  WHICH   Character*2.  (INPUT)
c          Specify which of the Ritz values of OP to compute.
c
c          'LA' - compute the NEV largest (algebraic) eigenvalues.
c          'SA' - compute the NEV smallest (algebraic) eigenvalues.
c          'LM' - compute the NEV largest (in magnitude) eigenvalues.
c          'SM' - compute the NEV smallest (in magnitude) eigenvalues.
c          'BE' - compute NEV eigenvalues, half from each end of the
c                 spectrum.  When NEV is odd, compute one more from the
c                 high end than from the low end.
c           (see remark 1 below)


It's clear what "largest (in magnitude)" and "smallest (in magnitude)" mean, but what are the "algebraic" ones? Is this algebraic multiplicity or something else? —Ben FrantzDale (talk) 03:51, 13 February 2009 (UTC)
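For a symmetric/Hermitian problem the eigenvalues are real, and my understanding is that "algebraic" there means the signed value, as opposed to the modulus (so 'LA' can return a smaller-magnitude eigenvalue than 'LM' if a large negative one exists). A sketch with SciPy's eigsh wrapper, which forwards these WHICH codes to ARPACK:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Diagonal (hence symmetric) example with eigenvalues -3, 1, 2.
A = np.diag([-3.0, 1.0, 2.0])

la = eigsh(A, k=1, which='LA', return_eigenvectors=False)  # largest signed
lm = eigsh(A, k=1, which='LM', return_eigenvectors=False)  # largest |value|

assert np.isclose(la[0], 2.0)   # algebraically largest
assert np.isclose(lm[0], -3.0)  # largest in magnitude
```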

## Not any basis is orthogonal

"Any eigenvector basis for a real symmetric matrix is orthogonal" --- really? The zero matrix is symmetric, and all vectors are its eigenvectors. Yes, the orthogonal basis can be chosen; but the given formulation is inaccurate. Boris Tsirelson (talk) 20:06, 15 April 2009 (UTC)

I revert the edit of 143.167.67.175. It is not nearly enough to require that the matrix be nonzero; the zero matrix is only the simplest example. Please do not add statements unless you find them in a source. Boris Tsirelson (talk) 15:16, 21 April 2009 (UTC)

Just out of interest, can you specify a counterexample? Oli Filth(talk|contribs) 16:59, 21 April 2009 (UTC)
Sure. It is a matter of multiplicity. If at least one eigenvalue is of multiplicity 2 or more then we can choose non-orthogonal basis vectors from the corresponding eigenspace. Boris Tsirelson (talk) 17:39, 21 April 2009 (UTC)
Then can we add in parentheses: (If there are n distinct eigenvalues, the eigenvectors must be orthogonal.)? 132.67.97.20 (talk) 13:39, 28 November 2010 (UTC)
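The counterexample class discussed above is easy to exhibit concretely. A sketch using the identity matrix, the simplest case of a repeated eigenvalue:

```python
import numpy as np

# The 2x2 identity is symmetric; every nonzero vector is an eigenvector,
# so a non-orthogonal eigenvector basis exists.
A = np.eye(2)
v1 = np.array([1.0, 0.0])
v2 = np.array([1.0, 1.0])  # also an eigenvector, but not orthogonal to v1

assert np.allclose(A @ v1, 1.0 * v1)
assert np.allclose(A @ v2, 1.0 * v2)
assert not np.isclose(v1 @ v2, 0.0)  # {v1, v2} is a basis, not orthogonal
```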

## spectral

Why is this called "spectral" decomp./theorem? 150.203.48.23 (talk) 06:37, 18 March 2010 (UTC)

Because the set of eigenvalues is sometimes call'd the spectrum. Sometimes they correspond to energy levels. The absorption/emission spectrum of an atom is based on the differences between its energy levels, and a particular "series" is based on the energy differences between various states and a certain given state. That's the connexion between energy levels and a spectrum. Eric Kvaalen (talk) 06:38, 2 November 2010 (UTC)

## Merge proposal

I think this article and Diagonalizable matrix should be merged. Eric Kvaalen (talk) 06:38, 2 November 2010 (UTC)

## Symmetric Matrices

If you relax the constraint that the matrix be real, symmetric matrices can produce mutually orthogonal eigenvectors in the complex space, which is very nice in quantum mechanics since eigenvector1* times eigenvector2 should equal zero. --24.80.113.218 (talk) 05:48, 10 July 2011 (UTC)

## Regarding "Eigendecomposition of a matrix"

Is the beginning of the "Eigendecomposition of a matrix" paragraph correct?

I tend to think it should be phrased

"Let A be a square (N×N) matrix with N linearly independent eigenvectors, q_i columns, a_i \,\, (i = 1, \dots, N). Then A can be factorized as \mathbf{A}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1} where Q is the square (N×N) matrix whose ith column is the eigenvector q_i of A ..."

Chlopisko (talk) 04:52, 12 July 2012 (UTC)

Correct. N linearly independent columns is merely equivalent to A being invertible, which is insufficient to make A diagonalizable (for example, a defective matrix can be invertible). N linearly independent eigenvectors, however, means that A cannot have any strictly generalized eigenvectors, so its Jordan form has to be diagonal. I'll change it. 68.102.164.94 (talk) 03:17, 2 October 2013 (UTC)

## Dimension (Generalized eigenvalue problem section)

Hello,

in the Generalized eigenvalue problem section, do the vi vectors have to be of dimension n? Shouldn't we write

${\displaystyle P={\begin{pmatrix}|&&|\\\mathbf {v_{1}} &\cdots &\mathbf {v_{n}} \\|&&|\\\end{pmatrix}}={\begin{pmatrix}(\mathbf {v_{1}} )_{1}&\cdots &(\mathbf {v_{n}} )_{1}\\\vdots &&\vdots \\(\mathbf {v_{1}} )_{m}&\cdots &(\mathbf {v_{n}} )_{m}\\\end{pmatrix}}}$

I see no reason why the number of generalised eigenvectors should be equal to their dimension, but as this is a bit beyond my math skills, I prefer to ask before modifying.

cdang|write me 12:49, 2 January 2013 (UTC)

Mmmm, I found a source telling me that A and B are n×n matrices. This is consistent with the beginning of the article, but as this section is about a "generalised" case, it is not obvious that this limitation also applies here. So, I reformulate: would it be a good idea to remind the reader that we are still in the case of square matrices?
cdang|write me 13:15, 2 January 2013 (UTC)

## Confusion about (in?) Eigendecomposition of a matrix: Example

I am trying to understand this topic/example. In ${\displaystyle \mathbf {A} =\mathbf {Q} \mathbf {\Lambda } \mathbf {Q} ^{-1}}$ it is Λ (the matrix with eigenvalues on the diagonal) which is surrounded by Q and Q−1, while A is on the other side of the equation. But in the example, it is the other way around. What am I missing? Or what is the article missing? Orehet (talk) 11:24, 13 November 2013 (UTC)

## Functional Calculus and orthogonality condition for Q

136.159.49.121 (talk) 21:24, 2 December 2015 (UTC) In the section "Eigendecomposition of a matrix" it is mentioned that the eigenvectors are usually normalized, but they need not be. While that's true, we should note that in the "Functional calculus" section, for the formulas to hold, we need Q to be an orthogonal matrix. I just found that it's never mentioned in the text, which could be misleading.

## Example correction

Shouldn't we add the condition c!=0 and d!=0 in the Example? Pokyrek 8. March 2016 —Preceding undated comment added 07:37, 8 March 2016 (UTC)