In mathematics, the spectral radius of a square matrix or a bounded linear operator is the supremum of the absolute values of the elements of its spectrum. It is sometimes denoted by ρ(·).

## Matrices

Let λ1, ..., λn be the (real or complex) eigenvalues of a matrix ${\displaystyle A\in \mathbf {C} ^{n\times n}}$. Then its spectral radius ρ(A) is defined as:

${\displaystyle \rho (A)=\max \left\{|\lambda _{1}|,\cdots ,|\lambda _{n}|\right\}.}$

The following lemma shows a simple yet useful upper bound for the spectral radius of a matrix:

Lemma. Let ${\displaystyle A\in \mathbf {C} ^{n\times n}}$ with spectral radius ρ(A), and let ||⋅|| be a consistent matrix norm; then, for each ${\displaystyle k\in \mathbf {N} }$:
${\displaystyle \rho (A)\leq \|A^{k}\|^{\frac {1}{k}}.}$

Proof: Let (v, λ) be an eigenvector-eigenvalue pair for the matrix A. By the sub-multiplicative property of the matrix norm, we get:

${\displaystyle |\lambda |^{k}\|\mathbf {v} \|=\|\lambda ^{k}\mathbf {v} \|=\|A^{k}\mathbf {v} \|\leq \|A^{k}\|\cdot \|\mathbf {v} \|}$

and since v ≠ 0 we have

${\displaystyle |\lambda |^{k}\leq \|A^{k}\|}$

and therefore

${\displaystyle \rho (A)\leq \|A^{k}\|^{\frac {1}{k}}.}$
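The lemma is easy to check numerically. The sketch below (assuming NumPy; the test matrix is an arbitrary choice for illustration) verifies the bound for the spectral (2-)norm, which is consistent:

```python
import numpy as np

# Arbitrary test matrix chosen for illustration; its eigenvalues are 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Spectral radius: largest absolute value among the eigenvalues.
rho = max(abs(np.linalg.eigvals(A)))

# The spectral (2-)norm is consistent, so by the lemma
# rho(A) <= ||A^k||^(1/k) for every k >= 1.
for k in range(1, 11):
    bound = np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k)
    assert rho <= bound + 1e-12
```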

The spectral radius is closely related to the convergence behaviour of the power sequence of a matrix; namely, the following theorem holds:

Theorem. Let ${\displaystyle A\in \mathbf {C} ^{n\times n}}$ with spectral radius ρ(A); then ρ(A) < 1 if and only if
${\displaystyle \lim _{k\to \infty }A^{k}=0.}$
Moreover, if ρ(A) > 1, then ${\displaystyle \|A^{k}\|}$ is not bounded for increasing values of k.

Proof. Assume that the limit in question is zero; we will show that ρ(A) < 1. Let (v, λ) be an eigenvector-eigenvalue pair for A. Since ${\displaystyle A^{k}\mathbf {v} =\lambda ^{k}\mathbf {v} }$, we have:

${\displaystyle {\begin{aligned}0&=\left(\lim _{k\to \infty }A^{k}\right)\mathbf {v} \\&=\lim _{k\to \infty }\left(A^{k}\mathbf {v} \right)\\&=\lim _{k\to \infty }\lambda ^{k}\mathbf {v} \\&=\mathbf {v} \lim _{k\to \infty }\lambda ^{k}\end{aligned}}}$

and, since by hypothesis v ≠ 0, we must have

${\displaystyle \lim _{k\to \infty }\lambda ^{k}=0}$

which implies |λ| < 1. Since this must be true for any eigenvalue λ, we can conclude ρ(A) < 1.

Now assume that the spectral radius of A is less than 1. By the Jordan normal form theorem, for every ${\displaystyle A\in \mathbf {C} ^{n\times n}}$ there exist ${\displaystyle V,J\in \mathbf {C} ^{n\times n}}$ with V non-singular and J block diagonal such that:

${\displaystyle A=VJV^{-1}}$

with

${\displaystyle J={\begin{bmatrix}J_{m_{1}}(\lambda _{1})&0&0&\cdots &0\\0&J_{m_{2}}(\lambda _{2})&0&\cdots &0\\\vdots &\cdots &\ddots &\cdots &\vdots \\0&\cdots &0&J_{m_{s-1}}(\lambda _{s-1})&0\\0&\cdots &\cdots &0&J_{m_{s}}(\lambda _{s})\end{bmatrix}}}$

where

${\displaystyle J_{m_{i}}(\lambda _{i})={\begin{bmatrix}\lambda _{i}&1&0&\cdots &0\\0&\lambda _{i}&1&\cdots &0\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &\lambda _{i}&1\\0&0&\cdots &0&\lambda _{i}\end{bmatrix}}\in \mathbf {C} ^{m_{i}\times m_{i}},1\leq i\leq s.}$

It is easy to see that

${\displaystyle A^{k}=VJ^{k}V^{-1}}$

and, since J is block-diagonal,

${\displaystyle J^{k}={\begin{bmatrix}J_{m_{1}}^{k}(\lambda _{1})&0&0&\cdots &0\\0&J_{m_{2}}^{k}(\lambda _{2})&0&\cdots &0\\\vdots &\cdots &\ddots &\cdots &\vdots \\0&\cdots &0&J_{m_{s-1}}^{k}(\lambda _{s-1})&0\\0&\cdots &\cdots &0&J_{m_{s}}^{k}(\lambda _{s})\end{bmatrix}}}$

Now, a standard result on the k-th power of an ${\displaystyle m_{i}\times m_{i}}$ Jordan block states that, for ${\displaystyle k\geq m_{i}-1}$:

${\displaystyle J_{m_{i}}^{k}(\lambda _{i})={\begin{bmatrix}\lambda _{i}^{k}&{k \choose 1}\lambda _{i}^{k-1}&{k \choose 2}\lambda _{i}^{k-2}&\cdots &{k \choose m_{i}-1}\lambda _{i}^{k-m_{i}+1}\\0&\lambda _{i}^{k}&{k \choose 1}\lambda _{i}^{k-1}&\cdots &{k \choose m_{i}-2}\lambda _{i}^{k-m_{i}+2}\\\vdots &\vdots &\ddots &\ddots &\vdots \\0&0&\cdots &\lambda _{i}^{k}&{k \choose 1}\lambda _{i}^{k-1}\\0&0&\cdots &0&\lambda _{i}^{k}\end{bmatrix}}}$
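This closed form can be checked numerically; a minimal sketch (assuming NumPy, with an arbitrarily chosen block size, eigenvalue, and power):

```python
import numpy as np
from math import comb

m, lam, k = 4, 0.5, 10   # block size, eigenvalue, power (k >= m - 1)

# Jordan block J_m(lam): lam on the diagonal, 1 on the superdiagonal.
J = np.diag(np.full(m, lam)) + np.diag(np.ones(m - 1), 1)

# Closed form: entry (i, j) of J^k is C(k, j-i) * lam^(k-(j-i)) for j >= i,
# and 0 below the diagonal.
Jk_closed = np.zeros((m, m))
for i in range(m):
    for j in range(i, m):
        d = j - i
        Jk_closed[i, j] = comb(k, d) * lam ** (k - d)

assert np.allclose(np.linalg.matrix_power(J, k), Jk_closed)
```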

Thus, if ${\displaystyle \rho (A)<1}$, then ${\displaystyle |\lambda _{i}|<1}$ for all i. Since each binomial coefficient ${\displaystyle {k \choose j}}$ grows only polynomially in k while ${\displaystyle |\lambda _{i}|^{k-j}}$ decays geometrically, every entry of ${\displaystyle J_{m_{i}}^{k}(\lambda _{i})}$ tends to zero, and hence for all i we have:

${\displaystyle \lim _{k\to \infty }J_{m_{i}}^{k}=0}$

which implies

${\displaystyle \lim _{k\to \infty }J^{k}=0.}$

Therefore,

${\displaystyle \lim _{k\to \infty }A^{k}=\lim _{k\to \infty }VJ^{k}V^{-1}=V\left(\lim _{k\to \infty }J^{k}\right)V^{-1}=0}$

On the other hand, if ${\displaystyle \rho (A)>1}$, then at least one eigenvalue satisfies ${\displaystyle |\lambda _{i}|>1}$, so the corresponding diagonal entry ${\displaystyle \lambda _{i}^{k}}$ of ${\displaystyle J^{k}}$ does not remain bounded as k increases, which proves the second part of the statement.
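The dichotomy of the theorem can be illustrated numerically; a minimal sketch, assuming NumPy and two arbitrarily chosen test matrices:

```python
import numpy as np

def power_norms(A, ks):
    # Frobenius norms of the successive powers A^k.
    return [np.linalg.norm(np.linalg.matrix_power(A, k)) for k in ks]

A = np.array([[0.5, 1.0],
              [0.0, 0.9]])   # rho(A) = 0.9 < 1: the powers vanish
B = np.array([[1.1, 0.0],
              [0.0, 0.2]])   # rho(B) = 1.1 > 1: the norms grow without bound

ks = [1, 10, 50, 100]
vanishing = power_norms(A, ks)
growing = power_norms(B, ks)

assert vanishing[-1] < 1e-3   # A^k -> 0
assert growing[-1] > 1e3      # ||B^k|| is unbounded
```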

## Gelfand's Formula

Theorem (Gelfand's Formula; 1941). For any matrix norm ||⋅||, we have
${\displaystyle \rho (A)=\lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}.}$[1]

### Proof

For any ε > 0 with 0 < ε < ρ(A) (the case ρ(A) = 0 can be checked separately), first we construct the following two matrices:

${\displaystyle A_{\pm }={\frac {1}{\rho (A)\pm \varepsilon }}A.}$

Then:

${\displaystyle \rho \left(A_{\pm }\right)={\frac {\rho (A)}{\rho (A)\pm \varepsilon }},\qquad \rho (A_{+})<1<\rho (A_{-}).}$

First we apply the previous theorem to A+:

${\displaystyle \lim _{k\to \infty }A_{+}^{k}=0.}$

That means, by the sequence limit definition, there exists ${\displaystyle N_{+}\in \mathbf {N} }$ such that

${\displaystyle {\begin{aligned}\forall k\geq N_{+}\quad \left\|A_{+}^{k}\right\|<1\qquad &\Rightarrow \qquad \forall k\geq N_{+}\quad \left\|A^{k}\right\|<(\rho (A)+\varepsilon )^{k}\\&\Rightarrow \qquad \forall k\geq N_{+}\quad \left\|A^{k}\right\|^{\frac {1}{k}}<\rho (A)+\varepsilon .\end{aligned}}}$

Applying the previous theorem to ${\displaystyle A_{-}}$ implies that ${\displaystyle \|A_{-}^{k}\|}$ is not bounded, and there exists ${\displaystyle N_{-}\in \mathbf {N} }$ such that

${\displaystyle {\begin{aligned}\forall k\geq N_{-}\quad \left\|A_{-}^{k}\right\|>1\qquad &\Rightarrow \qquad \forall k\geq N_{-}\quad \left\|A^{k}\right\|>(\rho (A)-\varepsilon )^{k}\\&\Rightarrow \qquad \forall k\geq N_{-}\quad \left\|A^{k}\right\|^{\frac {1}{k}}>\rho (A)-\varepsilon .\end{aligned}}}$

Let ${\displaystyle N=\max\{N_{+},N_{-}\}}$; then we have:

${\displaystyle \forall \varepsilon >0,\exists N\in \mathbf {N} ,\forall k\geq N\quad \rho (A)-\varepsilon <\left\|A^{k}\right\|^{\frac {1}{k}}<\rho (A)+\varepsilon }$

which, by definition, is

${\displaystyle \lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}=\rho (A).}$
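A minimal numerical check of Gelfand's formula, assuming NumPy and the spectral (2-)norm (the formula holds for any matrix norm; the test matrix is an arbitrary choice, a 2×2 Jordan block, which shows the slow convergence typical of defective matrices):

```python
import numpy as np

# Defective test matrix: a 2x2 Jordan block with eigenvalue 0.9.
A = np.array([[0.9, 1.0],
              [0.0, 0.9]])
rho = max(abs(np.linalg.eigvals(A)))   # rho(A) = 0.9

for k in (1, 10, 100, 1000):
    est = np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k)
    # est decreases toward rho(A) = 0.9 as k grows.

assert est >= rho - 1e-6       # consistent norm: lower bound from the lemma
assert abs(est - rho) < 1e-2   # Gelfand: est -> rho(A)
```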

### Corollaries

Gelfand's formula leads directly to a bound on the spectral radius of a product of finitely many matrices: if they all commute, then

${\displaystyle \rho (A_{1}\cdots A_{n})\leq \rho (A_{1})\cdots \rho (A_{n}).}$
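A quick numerical illustration (assuming NumPy; both pairs of matrices are arbitrary choices) of the bound, and of why commutativity matters:

```python
import numpy as np

def rho(M):
    # Spectral radius: largest absolute value among the eigenvalues.
    return max(abs(np.linalg.eigvals(M)))

# Commuting case: polynomials in the same matrix always commute.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
B = A + np.eye(2)                      # rho(A) = 2, rho(B) = 3
assert rho(A @ B) <= rho(A) * rho(B) + 1e-6

# Without commutativity the bound can fail:
N1 = np.array([[0.0, 1.0], [0.0, 0.0]])
N2 = np.array([[0.0, 0.0], [1.0, 0.0]])
# rho(N1) = rho(N2) = 0, yet rho(N1 @ N2) = 1.
assert rho(N1) < 1e-6 and rho(N2) < 1e-6
assert abs(rho(N1 @ N2) - 1.0) < 1e-6
```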

Actually, when the norm is consistent, the proof shows more than the statement: using the previous lemma, the lower bound ρ(A) − ε in the limit definition can be replaced by the spectral radius itself, giving the sharper statement:

${\displaystyle \forall \varepsilon >0,\exists N\in \mathbf {N} ,\forall k\geq N\quad \rho (A)\leq \|A^{k}\|^{\frac {1}{k}}<\rho (A)+\varepsilon }$

which, by definition, means that the sequence converges to the spectral radius from above:

${\displaystyle \lim _{k\to \infty }\left\|A^{k}\right\|^{\frac {1}{k}}=\rho (A)^{+}.}$

### Example

Consider the matrix

${\displaystyle A={\begin{bmatrix}9&-1&2\\-2&8&4\\1&1&8\end{bmatrix}}}$

whose eigenvalues are 5, 10, 10; by definition, ρ(A) = 10. The following table lists the values of ${\displaystyle \|A^{k}\|^{\frac {1}{k}}}$ for four commonly used norms against several increasing values of k (note that, due to the particular form of this matrix, ${\displaystyle \|.\|_{1}=\|.\|_{\infty }}$):

| k | ${\displaystyle \|.\|_{1}=\|.\|_{\infty }}$ | ${\displaystyle \|.\|_{F}}$ | ${\displaystyle \|.\|_{2}}$ |
|---|---|---|---|
| 1 | 14 | 15.362291496 | 10.681145748 |
| 2 | 12.649110641 | 12.328294348 | 10.595665162 |
| 3 | 11.934831919 | 11.532450664 | 10.500980846 |
| 4 | 11.501633169 | 11.151002986 | 10.418165779 |
| 5 | 11.216043151 | 10.921242235 | 10.351918183 |
| ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ |
| 10 | 10.604944422 | 10.455910430 | 10.183690042 |
| 11 | 10.548677680 | 10.413702213 | 10.166990229 |
| 12 | 10.501921835 | 10.378620930 | 10.153031596 |
| ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ |
| 20 | 10.298254399 | 10.225504447 | 10.091577411 |
| 30 | 10.197860892 | 10.149776921 | 10.060958900 |
| 40 | 10.148031640 | 10.112123681 | 10.045684426 |
| 50 | 10.118251035 | 10.089598820 | 10.036530875 |
| ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ |
| 100 | 10.058951752 | 10.044699508 | 10.018248786 |
| 200 | 10.029432562 | 10.022324834 | 10.009120234 |
| 300 | 10.019612095 | 10.014877690 | 10.006079232 |
| 400 | 10.014705469 | 10.011156194 | 10.004559078 |
| ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ |
| 1000 | 10.005879594 | 10.004460985 | 10.001823382 |
| 2000 | 10.002939365 | 10.002230244 | 10.000911649 |
| 3000 | 10.001959481 | 10.001486774 | 10.000607757 |
| ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ |
| 10000 | 10.000587804 | 10.000446009 | 10.000182323 |
| 20000 | 10.000293898 | 10.000223002 | 10.000091161 |
| 30000 | 10.000195931 | 10.000148667 | 10.000060774 |
| ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ | ${\displaystyle \vdots }$ |
| 100000 | 10.000058779 | 10.000044600 | 10.000018232 |
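The table entries can be reproduced with a few lines of NumPy (a sketch; the norm orders follow the `numpy.linalg.norm` conventions, so `1`, `np.inf`, `'fro'`, and `2` select the four norms above):

```python
import numpy as np

A = np.array([[9.0, -1.0, 2.0],
              [-2.0, 8.0, 4.0],
              [1.0, 1.0, 8.0]])

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius: 10

def gelfand(norm_ord, k):
    # ||A^k||^(1/k) for the chosen norm.
    return np.linalg.norm(np.linalg.matrix_power(A, k), norm_ord) ** (1.0 / k)

# First table row (k = 1): ||A||_1 = ||A||_inf = 14, ||A||_F = sqrt(236).
assert abs(gelfand(1, 1) - 14.0) < 1e-9
assert abs(gelfand(np.inf, 1) - 14.0) < 1e-9
assert abs(gelfand('fro', 1) - 15.362291496) < 1e-6

# By k = 100 the 2-norm estimate is within a few hundredths of rho(A) = 10.
assert abs(gelfand(2, 100) - 10.0) < 0.1
```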

## Bounded Linear Operators

For a bounded linear operator A and the operator norm ||·||, again we have

${\displaystyle \rho (A)=\lim _{k\to \infty }\|A^{k}\|^{\frac {1}{k}}.}$

A bounded operator (on a complex Hilbert space) is called a spectraloid operator if its spectral radius coincides with its numerical radius. An example of such an operator is a normal operator.

## Graphs

The spectral radius of a finite graph is defined to be the spectral radius of its adjacency matrix.

This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e. there exists some real number C such that the degree of every vertex of the graph is smaller than C). In this case, for the graph G define:

${\displaystyle \ell ^{2}(G)=\left\{f:V(G)\to \mathbf {R} \ :\ \sum \nolimits _{v\in V(G)}\left|f(v)\right|^{2}<\infty \right\}.}$

Let γ be the adjacency operator of G:

${\displaystyle {\begin{cases}\gamma :\ell ^{2}(G)\to \ell ^{2}(G)\\(\gamma f)(v)=\sum _{(u,v)\in E(G)}f(u)\end{cases}}}$

The spectral radius of G is defined to be the spectral radius of the bounded linear operator γ.
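For finite graphs the definition is straightforward to compute; a minimal sketch assuming NumPy (recall that a d-regular finite graph has spectral radius exactly d):

```python
import numpy as np

def graph_spectral_radius(adj):
    # Spectral radius of a finite graph = spectral radius of its adjacency matrix.
    return max(abs(np.linalg.eigvals(adj)))

# The 4-cycle C4 is 2-regular, so its spectral radius is 2.
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]], dtype=float)

# The complete graph K4 is 3-regular, so its spectral radius is 3.
K4 = np.ones((4, 4)) - np.eye(4)

assert abs(graph_spectral_radius(C4) - 2.0) < 1e-9
assert abs(graph_spectral_radius(K4) - 3.0) < 1e-9
```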

## Notes and References

1. ^ The formula holds for any Banach algebra; see Lax 2002, pp. 195–197.