Jacobi eigenvalue algorithm


In numerical linear algebra, the Jacobi eigenvalue algorithm is an iterative method for the calculation of the eigenvalues and eigenvectors of a real symmetric matrix (a process known as diagonalization). It is named after Carl Gustav Jacob Jacobi, who first proposed the method in 1846,[1] but the method only became widely used in the 1950s with the advent of computers.[2]

Description

Let S be a symmetric matrix, and G = G(i,j,θ) be a Givens rotation matrix. Then:

S'=G S G^\top \,

is symmetric and similar to S.

Furthermore, S′ has entries:

\begin{align}
 S'_{ii} &= c^2\, S_{ii}  -  2\, s c \,S_{ij}  +  s^2\, S_{jj} \\
 S'_{jj} &= s^2 \,S_{ii}  +  2 s c\, S_{ij}  +  c^2 \, S_{jj} \\
 S'_{ij} &= S'_{ji} = (c^2 - s^2 ) \, S_{ij}  +  s c \, (S_{ii} - S_{jj} ) \\
 S'_{ik} &= S'_{ki} = c \, S_{ik}  -  s \, S_{jk} & k \ne i,j \\
 S'_{jk} &= S'_{kj} = s \, S_{ik}  + c \, S_{jk} & k \ne i,j \\
 S'_{kl} &= S_{kl} &k,l \ne i,j
\end{align}

where s = sin(θ) and c = cos(θ).
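
These entry formulas can be verified numerically. The sketch below (assuming NumPy; the helper name givens is not part of the article, and its sign convention is chosen to match the formulas above) builds G(i,j,θ) for a small symmetric S and checks a few entries of G S Gᵀ against the expressions above.

import numpy as np

def givens(n, i, j, theta):
    """Givens rotation G(i,j,theta): identity except in rows and columns i, j."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

rng = np.random.default_rng(0)
n, i, j, theta = 5, 1, 3, 0.37
A = rng.random((n, n))
S = (A + A.T) / 2                     # random symmetric test matrix
G = givens(n, i, j, theta)
Sp = G @ S @ G.T                      # S' = G S G^T

c, s = np.cos(theta), np.sin(theta)
print(np.isclose(Sp[i, i], c*c*S[i, i] - 2*s*c*S[i, j] + s*s*S[j, j]))      # S'_ii
print(np.isclose(Sp[j, j], s*s*S[i, i] + 2*s*c*S[i, j] + c*c*S[j, j]))      # S'_jj
print(np.isclose(Sp[i, j], (c*c - s*s)*S[i, j] + s*c*(S[i, i] - S[j, j])))  # S'_ij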

Since G is orthogonal, S and S′ have the same Frobenius norm ||·||F (the square-root sum of squares of all components); however, we can choose θ such that S′ij = 0, in which case S′ has a larger sum of squares on the diagonal:

 S'_{ij} = \cos(2\theta) S_{ij} + \tfrac{1}{2} \sin(2\theta) (S_{ii} - S_{jj})

Set this equal to 0, and rearrange:

 \tan(2\theta) = \frac{2 S_{ij}}{S_{jj} - S_{ii}}

If  S_{jj} = S_{ii} , take

 \theta = \frac{\pi}{4}

In order to optimize this effect, Sij should be the off-diagonal element with the largest absolute value, called the pivot.

The Jacobi eigenvalue method repeatedly performs rotations until the matrix becomes almost diagonal. Then the elements in the diagonal are approximations of the (real) eigenvalues of S.
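
To make one step of this process concrete, the following sketch (assuming NumPy; all variable names are illustrative) selects the largest off-diagonal entry of a random symmetric matrix as pivot, computes θ from the tangent formula above, and applies the rotation; the pivot entry of the rotated matrix vanishes while the sum of squares on the diagonal grows.

import numpy as np

rng = np.random.default_rng(1)
A = rng.random((4, 4))
S = (A + A.T) / 2                                   # symmetric test matrix

off = np.abs(np.triu(S, 1))                         # strict upper triangle
i, j = np.unravel_index(np.argmax(off), off.shape)  # pivot indices, i < j

if S[j, j] == S[i, i]:
    theta = np.pi / 4                               # special case from the text
else:
    theta = 0.5 * np.arctan2(2 * S[i, j], S[j, j] - S[i, i])

c, s = np.cos(theta), np.sin(theta)
G = np.eye(4)
G[i, i] = G[j, j] = c
G[i, j], G[j, i] = -s, s

Sp = G @ S @ G.T
print(Sp[i, j])                                          # ~0: pivot annihilated
print(np.sum(np.diag(Sp)**2) >= np.sum(np.diag(S)**2))   # diagonal weight grew: True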

Convergence

If  p = S_{kl} is a pivot element, then by definition  |S_{ij}| \le |p| for  1 \le i, j \le n, i \ne j . Let  \Gamma(S)^2 denote the sum of squares of all off-diagonal entries of S. Since S has exactly 2N := n(n − 1) off-diagonal elements, we have  p^2 \le \Gamma(S)^2 \le 2 N p^2 , or  2 p^2 \ge \Gamma(S)^2 / N . For the result  S^J of a single Jacobi rotation this implies  \Gamma(S^J)^2 \le (1 - 1/N) \Gamma(S)^2 , or  \Gamma(S^J) \le (1 - 1/N)^{1/2}\, \Gamma(S) ; i.e. the sequence of Jacobi rotations converges at least linearly, by a factor  (1 - 1/N)^{1/2} , to a diagonal matrix.
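
Written out, the rotation leaves the sum of squares of every other affected off-diagonal pair unchanged and annihilates only  S_{kl} and  S_{lk} , so the single-rotation estimate is

  \Gamma(S^J)^2 = \Gamma(S)^2 - 2 p^2 \le \Gamma(S)^2 - \tfrac{1}{N}\, \Gamma(S)^2 = \left( 1 - \tfrac{1}{N} \right) \Gamma(S)^2 .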

A number of N Jacobi rotations is called a sweep; let  S^{\sigma} denote the result. The previous estimate yields

  \Gamma(S^{\sigma} )  \le  (1 - 1 / N )^{N / 2} \Gamma(S )  ,

i.e. the sequence of sweeps converges at least linearly with a factor ≈  e^{-1/2} , since  (1 - 1/N)^{N/2} \to e^{-1/2} for large N.

However, the following result of Schönhage[3] yields locally quadratic convergence. To this end let S have m distinct eigenvalues  \lambda_1, ... , \lambda_m with multiplicities  \nu_1, ... , \nu_m and let d > 0 be the smallest distance between two distinct eigenvalues. Let us call a number of

  N_S := \frac{1}{2} n (n - 1) - \sum_{\mu = 1}^{m} \frac{1}{2} \nu_{\mu} (\nu_{\mu} - 1) \le N

Jacobi rotations a Schönhage-sweep. If  S^s denotes the result, then

  \Gamma(S^ s ) \le\sqrt{\frac{n}{2} - 1} \frac{\gamma^2}{d - 2\gamma}, \quad \gamma :=  \Gamma(S )  .

Thus convergence becomes quadratic as soon as  \Gamma(S) < d / (2 + \sqrt{\frac{n}{2} - 1}) .

Cost

Each Jacobi rotation can be done in n steps when the pivot element p is known. However, the search for p requires inspection of all N ≈ ½n² off-diagonal elements. We can reduce this to n steps too if we introduce an additional index array  m_1, \, \dots \, , \, m_{n - 1} with the property that  m_i is the index of the largest element in row i (i = 1, …, n − 1) of the current S. Then the pivot (k, l) must be one of the pairs  (i, m_i) . Since only columns k and l change, only  m_k \mbox{ and } m_l must be updated, which again can be done in n steps. Thus each rotation has O(n) cost and one sweep has O(n³) cost, which is equivalent to one matrix multiplication. Additionally, the  m_i must be initialized before the process starts; this can be done in n² steps.

Typically the Jacobi method converges within numerical precision after a small number of sweeps. Note that multiple eigenvalues reduce the number of iterations since N_S < N.

Algorithm

The following algorithm is a description of the Jacobi method in math-like notation. It calculates a vector e which contains the eigenvalues and a matrix E which contains the corresponding eigenvectors, i.e.  e_i is an eigenvalue and the column  E_i an orthonormal eigenvector for  e_i , i = 1, …, n.

procedure jacobi(S ∈ Rn×n; out e ∈ Rn; out E ∈ Rn×n)
  var
    i, k, l, m, state ∈ N
    s, c, t, p, y, d, r ∈ R
    ind ∈ Nn
    changed ∈ Ln

  function maxind(k ∈ N) ∈ N ! index of largest off-diagonal element in row k
    m := k+1
    for i := k+2 to n do
      if │Ski│ > │Skm│ then m := i endif
    endfor
    return m
  endfunc

  procedure update(k ∈ N; t ∈ R) ! update ek and its status
    y := ek; ek := y+t
    if changedk and (y=ek) then changedk := false; state := state−1
    elsif (not changedk) and (y≠ek) then changedk := true; state := state+1
    endif
  endproc

  procedure rotate(k,l,i,j ∈ N) ! perform rotation of Skl, Sij
    ┌   ┐    ┌     ┐┌   ┐
    │Skl│    │c  −s││Skl│
    │   │ := │     ││   │
    │Sij│    │s   c││Sij│
    └   ┘    └     ┘└   ┘
  endproc

  ! init e, E, and arrays ind, changed
  E := I; state := n
  for k := 1 to n do indk := maxind(k); ek := Skk; changedk := true endfor
  while state≠0 do ! next rotation
    m := 1 ! find index (k,l) of pivot p
    for k := 2 to n−1 do
      if │Sk indk│ > │Sm indm│ then m := k endif
    endfor
    k := m; l := indm; p := Skl
    ! calculate c = cos φ, s = sin φ
    y := (el−ek)/2; d := │y│+√(p²+y²)
    r := √(p²+d²); c := d/r; s := p/r; t := p²/d
    if y<0 then s := −s; t := −t endif
    Skl := 0.0; update(k,−t); update(l,t)
    ! rotate rows and columns k and l
    for i := 1 to k−1 do rotate(i,k,i,l) endfor
    for i := k+1 to l−1 do rotate(k,i,i,l) endfor
    for i := l+1 to n do rotate(k,i,l,i) endfor
    ! rotate eigenvectors
    for i := 1 to n do
      ┌   ┐    ┌     ┐┌   ┐
      │Eki│    │c  −s││Eki│
      │   │ := │     ││   │
      │Eli│    │s   c││Eli│
      └   ┘    └     ┘└   ┘
    endfor
    ! rows k, l have changed, update rows indk, indl
    indk := maxind(k); indl := maxind(l)
  loop
endproc

Notes

1. The logical array changed holds the status of each eigenvalue. If the numerical value of  e_k or  e_l changes during an iteration, the corresponding component of changed is set to true, otherwise to false. The integer state counts the number of components of changed which have the value true. Iteration stops as soon as state = 0. This means that none of the approximations  e_1,\, ...\, , e_n has recently changed its value and thus it is not very likely that this will happen if iteration continues. Here it is assumed that floating point operations are optimally rounded to the nearest floating point number.

2. The upper triangle of the matrix S is destroyed while the lower triangle and the diagonal are unchanged. Thus it is possible to restore S if necessary according to

for k := 1 to n−1 do ! restore matrix S
  for l := k+1 to n do Skl := Slk endfor
endfor

3. The eigenvalues are not necessarily in descending order. This can be achieved by a simple sorting algorithm.

for k := 1 to n−1 do
  m := k
  for l := k+1 to n do
    if el > em then m := l endif
  endfor
  if k≠m then swap em,ek; swap Em,Ek endif
endfor

4. The algorithm is written using matrix notation (1 based arrays instead of 0 based).

5. When implementing the algorithm, the part specified using matrix notation must be performed simultaneously.

6. This implementation does not correctly account for the case in which one dimension is an independent subspace. For example, if given a diagonal matrix, the above implementation will never terminate, as none of the eigenvalues will change. Hence, in real implementations, extra logic must be added to account for this case.
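
For readers who prefer an executable version, the following is a compact Python/NumPy rendering of the method. It is a sketch rather than a literal translation of the pseudocode above: it searches the whole matrix for the pivot at each step and stops when the off-diagonal norm falls below a tolerance instead of using the changed/state bookkeeping, which also sidesteps the termination issue of note 6. All names (jacobi_eigen, tol, max_sweeps) are illustrative.

import numpy as np

def jacobi_eigen(S, tol=1e-12, max_sweeps=50):
    """Classical Jacobi method for a real symmetric matrix S.

    Returns (e, E) with eigenvalue approximations e and an orthogonal E whose
    rows are the corresponding eigenvectors, so that S ≈ E.T @ np.diag(e) @ E
    (the relation used in the Applications section).
    """
    S = np.array(S, dtype=float)              # work on a copy
    n = S.shape[0]
    E = np.eye(n)
    N = n * (n - 1) // 2                      # rotations per sweep

    def off_norm(M):                          # Gamma(M): off-diagonal norm
        return np.sqrt(np.sum(M**2) - np.sum(np.diag(M)**2))

    rotations = 0
    while off_norm(S) > tol and rotations < max_sweeps * N:
        # pivot: off-diagonal element of largest absolute value
        off = np.abs(S - np.diag(np.diag(S)))
        k, l = np.unravel_index(np.argmax(off), off.shape)
        p = S[k, l]
        if p == 0.0:
            break
        # rotation angle in the numerically stable form of the pseudocode
        y = (S[l, l] - S[k, k]) / 2.0
        d = abs(y) + np.sqrt(p * p + y * y)
        r = np.sqrt(p * p + d * d)
        c, s = d / r, p / r
        if y < 0:
            s = -s
        # apply S := G S G^T and accumulate E := G E
        G = np.eye(n)
        G[k, k] = G[l, l] = c
        G[k, l], G[l, k] = -s, s
        S = G @ S @ G.T
        E = G @ E
        rotations += 1

    return np.diag(S).copy(), E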

Example

Let 
	S = \begin{pmatrix} 4 & -30 & 60 & -35 \\ -30 & 300 & -675 & 420 \\ 60 & -675 & 1620 & -1050 \\ -35 & 420 & -1050 & 700 \end{pmatrix}

Then jacobi produces the following eigenvalues and eigenvectors after 3 sweeps (19 iterations):


	e_1 = 2585.25381092892231


	E_1 = \begin{pmatrix}0.0291933231647860588\\ -0.328712055763188997\\ 0.791411145833126331\\ -0.514552749997152907\end{pmatrix}


	e_2 = 37.1014913651276582


	E_2 = \begin{pmatrix}-0.179186290535454826\\ 0.741917790628453435\\ -0.100228136947192199\\ -0.638282528193614892\end{pmatrix}


	e_3 = 1.4780548447781369


	E_3 = \begin{pmatrix}-0.582075699497237650\\ 0.370502185067093058\\ 0.509578634501799626\\ 0.514048272222164294\end{pmatrix}


	e_4 = 0.1666428611718905


	E_4 = \begin{pmatrix}0.792608291163763585\\ 0.451923120901599794\\ 0.322416398581824992\\ 0.252161169688241933\end{pmatrix}

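The example can be checked against a standard library routine. The sketch below (assuming NumPy) computes the spectrum of S with numpy.linalg.eigh; up to ordering, the signs of the eigenvectors, and roundoff it should reproduce the values listed above.

import numpy as np

S = np.array([[  4.,  -30.,    60.,   -35.],
              [-30.,  300.,  -675.,   420.],
              [ 60., -675.,  1620., -1050.],
              [-35.,  420., -1050.,   700.]])

w, V = np.linalg.eigh(S)   # eigenvalues in ascending order, orthonormal columns
print(w)                   # ≈ [0.1666..., 1.4780..., 37.1014..., 2585.2538...]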

Applications for real symmetric matrices

When the eigenvalues (and eigenvectors) of a symmetric matrix are known, the following values are easily calculated.

Singular values
The singular values of a (square) matrix A are the square roots of the (non-negative) eigenvalues of  A^T A . In the case of a symmetric matrix S we have  S^T S = S^2 , hence the singular values of S are the absolute values of the eigenvalues of S.
2-norm and spectral radius
The 2-norm of a matrix A is the norm based on the Euclidean vector norm, i.e. the largest value  \| A x\|_2 when x runs through all vectors with  \|x\|_2 = 1 . It is the largest singular value of A. In the case of a symmetric matrix it is the largest absolute value of its eigenvalues and thus equal to its spectral radius.
Condition number
The condition number of a nonsingular matrix A is defined as  \mbox{cond} (A) = \| A \|_2 \| A^{-1}\|_2 . In the case of a symmetric matrix it is the absolute value of the quotient of the largest and smallest eigenvalue. Matrices with large condition numbers can cause numerically unstable results: small perturbations can result in large errors. Hilbert matrices are the most famous ill-conditioned matrices. For example, the fourth-order Hilbert matrix has a condition number of 15514, while for order 8 it is 2.7 × 10⁸.
Rank
A matrix A has rank r if it has r columns that are linearly independent while the remaining columns are linearly dependent on these. Equivalently, r is the dimension of the range of A. Furthermore it is the number of nonzero singular values.
In case of a symmetric matrix r is the number of nonzero eigenvalues. Unfortunately because of rounding errors numerical approximations of zero eigenvalues may not be zero (it may also happen that a numerical approximation is zero while the true value is not). Thus one can only calculate the numerical rank by making a decision which of the eigenvalues are close enough to zero.
Pseudo-inverse
The pseudo-inverse of a matrix A is the unique matrix  X = A^+ for which AX and XA are symmetric and for which AXA = A, XAX = X holds. If A is nonsingular, then  A^+ = A^{-1} .
When procedure jacobi (S, e, E) is called, then the relation  S = E^T \mbox{Diag} (e) E holds, where Diag(e) denotes the diagonal matrix with vector e on the diagonal. Let  e^+ denote the vector where  e_i is replaced by  1/e_i if  e_i \ne 0 and by 0 if  e_i is (numerically close to) zero. Since matrix E is orthogonal, it follows that the pseudo-inverse of S is given by  S^+ = E^T \mbox{Diag} (e^+) E (see the sketch following this list).
Least squares solution
If matrix A does not have full rank, there may not be a solution of the linear system Ax = b. However one can look for a vector x for which  \| Ax - b \|_2 is minimal. The solution is  x = A^+ b . In case of a symmetric matrix S as before, one has  x = S^+ b = E^T \mbox{Diag} (e^+) E b .
Matrix exponential
From  S = E^T \mbox{Diag} (e) E one finds  \exp S = E^T \mbox{Diag} (\exp e) E , where exp e is the vector where  e_i is replaced by  \exp e_i . In the same way, f(S) can be calculated for any analytic function f.
Linear differential equations
The differential equation  x' = Ax ,  x(0) = a has the solution  x(t) = \exp(t A)\, a . For a symmetric matrix S, it follows that  x(t) = E^T \mbox{Diag} (\exp t e) E a . If  a = \sum_{i = 1}^n a_i E_i is the expansion of a by the eigenvectors of S, then  x(t) = \sum_{i = 1}^n a_i \exp(t e_i) E_i .
Let  W^s be the vector space spanned by the eigenvectors of S which correspond to a negative eigenvalue and  W^u analogously for the positive eigenvalues. If  a \in W^s then  \lim_{t \to \infty} x(t) = 0 , i.e. the equilibrium point 0 is attractive to x(t). If  a \in W^u then  \lim_{t \to \infty} x(t) = \infty , i.e. 0 is repulsive to x(t).  W^s and  W^u are called stable and unstable manifolds for S. If a has components in both manifolds, then one component is attracted and one component is repelled. Hence x(t) approaches  W^u as  t \to \infty .
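
A sketch of how several of the quantities above follow from a single eigendecomposition (assuming NumPy; numpy.linalg.eigh is used here for brevity, and the jacobi_eigen sketch from the Algorithm section could be substituted; all names and the tolerance are illustrative):

import numpy as np

def symmetric_matrix_functions(S, tol=1e-10):
    """Derive the quantities of this section from one eigendecomposition of S."""
    e, V = np.linalg.eigh(S)      # S = V @ diag(e) @ V.T
    E = V.T                       # rows of E are eigenvectors: S = E.T @ diag(e) @ E

    singular_values = np.abs(e)                    # singular values of a symmetric S
    two_norm = np.max(np.abs(e))                   # 2-norm = spectral radius
    rank = int(np.sum(np.abs(e) > tol))            # numerical rank
    cond = two_norm / np.min(np.abs(e)) if rank == len(e) else np.inf

    e_plus = np.zeros_like(e)                      # pseudo-inverted spectrum e^+
    nonzero = np.abs(e) > tol
    e_plus[nonzero] = 1.0 / e[nonzero]
    S_pinv = E.T @ np.diag(e_plus) @ E             # pseudo-inverse S^+
    S_exp = E.T @ np.diag(np.exp(e)) @ E           # matrix exponential exp(S)

    return singular_values, two_norm, cond, rank, S_pinv, S_exp

# Least-squares solution of S x = b:  x = S_pinv @ b
# Solution of x' = S x, x(0) = a:     x(t) = E.T @ np.diag(np.exp(t * e)) @ E @ a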

Generalizations

The Jacobi Method has been generalized to complex Hermitian matrices, general nonsymmetric real and complex matrices as well as block matrices.

Since the singular values of a real matrix A are the square roots of the eigenvalues of the symmetric matrix  S = A^T A , the method can also be used for the calculation of these values. For this case, the method is modified in such a way that S need not be explicitly calculated, which reduces the danger of round-off errors. Note that  J S J^T = J A^T A J^T = J A^T J^T J A J^T = B^T B  with  B \, := J A J^T .
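
As a quick numerical check of this relationship (assuming NumPy; note that this naive version forms AᵀA explicitly, which is exactly what the modified method avoids):

import numpy as np

rng = np.random.default_rng(2)
A = rng.random((5, 3))
w = np.linalg.eigvalsh(A.T @ A)                       # eigenvalues of A^T A, ascending
sv_from_eig = np.sqrt(np.clip(w, 0.0, None))[::-1]    # square roots, descending
print(np.allclose(sv_from_eig, np.linalg.svd(A, compute_uv=False)))   # True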

The Jacobi Method is also well suited for parallelism.

References

  1. ^ Jacobi, C.G.J. (1846). "Über ein leichtes Verfahren, die in der Theorie der Säkularstörungen vorkommenden Gleichungen numerisch aufzulösen". Crelle's Journal (in German) 30: 51–94. 
  2. ^ Golub, G.H.; van der Vorst, H.A. (2000). "Eigenvalue computation in the 20th century". Journal of Computational and Applied Mathematics 123 (1-2): 35–65. doi:10.1016/S0377-0427(00)00413-1. 
  3. ^ Schönhage, A. (1964). "Zur quadratischen Konvergenz des Jacobi-Verfahrens". Numerische Mathematik (in German) 6 (1): 410–412. doi:10.1007/BF01386091. MR 174171. 
