In mathematics, the spectral radius of a square matrix or a bounded linear operator is the supremum of the absolute values of the elements of its spectrum. It is sometimes denoted by ρ(·).
Matrices
Let λ₁, ..., λₙ be the (real or complex) eigenvalues of a matrix A ∈ ℂ^{n×n}. Then its spectral radius ρ(A) is defined as:
$$\rho(A) \;\overset{\mathrm{def}}{=}\; \max_i |\lambda_i|$$
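Numerically, this definition can be evaluated directly from the eigenvalues. A minimal sketch in Python with NumPy (the helper name `spectral_radius` is an illustrative choice, not a standard API):

```python
import numpy as np

def spectral_radius(A):
    """Return the largest absolute value among the eigenvalues of A."""
    return max(abs(np.linalg.eigvals(A)))

# The eigenvalues of this matrix are 1 and 3, so its spectral radius is 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(spectral_radius(A))  # ≈ 3.0 (up to floating-point rounding)
```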
The following lemma shows a simple yet useful upper bound for the spectral radius of a matrix:
Lemma: Let A ∈ ℂ^{n×n} be a complex-valued matrix, ρ(A) its spectral radius and ‖·‖ a consistent matrix norm; then, for each k ∈ ℕ:
$$\rho(A) \leq \|A^k\|^{1/k}.$$
Proof: Let (v, λ) be an eigenvector-eigenvalue pair for a matrix A. By the sub-multiplicative property of the matrix norm, we get:
$$|\lambda|^k \|\mathbf{v}\| = \|\lambda^k \mathbf{v}\| = \|A^k \mathbf{v}\| \leq \|A^k\| \cdot \|\mathbf{v}\|$$
and since v ≠ 0 we have
$$|\lambda|^k \leq \|A^k\|$$
and therefore
$$\rho(A) \leq \|A^k\|^{1/k}. \qquad \square$$
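The lemma is easy to check numerically. A short sketch with NumPy (the sample matrix, which has spectral radius 10, is an illustrative choice; the spectral and Frobenius norms used here are both consistent):

```python
import numpy as np

A = np.array([[9.0, -1.0, 2.0],
              [-2.0, 8.0, 4.0],
              [1.0, 1.0, 8.0]])
rho = max(abs(np.linalg.eigvals(A)))  # eigenvalues are 5, 10, 10 -> rho = 10

# The lemma: rho(A) <= ||A^k||^(1/k) for every k and any consistent norm.
for k in range(1, 8):
    Ak = np.linalg.matrix_power(A, k)
    for ord_ in (2, 'fro'):
        assert rho <= np.linalg.norm(Ak, ord_) ** (1.0 / k) + 1e-9
```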
The spectral radius is closely related to the convergence behaviour of the power sequence of a matrix; namely, the following theorem holds:
Theorem: Let A ∈ ℂ^{n×n} be a complex-valued matrix and ρ(A) its spectral radius; then
$$\lim_{k\to\infty} A^k = 0$$
if and only if
$$\rho(A) < 1.$$
Moreover, if ρ(A) > 1, then ‖A^k‖ is not bounded for increasing values of k.
Proof:

$$\left(\lim_{k\to\infty} A^k = 0 \;\Rightarrow\; \rho(A) < 1\right)$$
Let (v, λ) be an eigenvector-eigenvalue pair for the matrix A. Since
$$A^k \mathbf{v} = \lambda^k \mathbf{v},$$
we have:
$$0 = \left(\lim_{k\to\infty} A^k\right)\mathbf{v} = \lim_{k\to\infty} A^k \mathbf{v} = \lim_{k\to\infty} \lambda^k \mathbf{v} = \mathbf{v} \lim_{k\to\infty} \lambda^k$$
and, since by hypothesis v ≠ 0, we must have
$$\lim_{k\to\infty} \lambda^k = 0,$$
which implies |λ| < 1. Since this must hold for every eigenvalue λ, we can conclude that ρ(A) < 1.
$$\left(\rho(A) < 1 \;\Rightarrow\; \lim_{k\to\infty} A^k = 0\right)$$
From the Jordan normal form theorem, we know that for any complex-valued matrix A ∈ ℂ^{n×n}, there exist a non-singular matrix V ∈ ℂ^{n×n} and a block-diagonal matrix J ∈ ℂ^{n×n} such that:
$$A = V J V^{-1}$$
with
$$J = \begin{bmatrix} J_{m_1}(\lambda_1) & 0 & 0 & \cdots & 0 \\ 0 & J_{m_2}(\lambda_2) & 0 & \cdots & 0 \\ \vdots & \cdots & \ddots & \cdots & \vdots \\ 0 & \cdots & 0 & J_{m_{s-1}}(\lambda_{s-1}) & 0 \\ 0 & \cdots & \cdots & 0 & J_{m_s}(\lambda_s) \end{bmatrix}$$
where
$$J_{m_i}(\lambda_i) = \begin{bmatrix} \lambda_i & 1 & 0 & \cdots & 0 \\ 0 & \lambda_i & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_i & 1 \\ 0 & 0 & \cdots & 0 & \lambda_i \end{bmatrix} \in \mathbb{C}^{m_i \times m_i}, \qquad 1 \leq i \leq s.$$
It is easy to see that
$$A^k = V J^k V^{-1}$$
and, since J is block-diagonal,
$$J^k = \begin{bmatrix} J_{m_1}^k(\lambda_1) & 0 & 0 & \cdots & 0 \\ 0 & J_{m_2}^k(\lambda_2) & 0 & \cdots & 0 \\ \vdots & \cdots & \ddots & \cdots & \vdots \\ 0 & \cdots & 0 & J_{m_{s-1}}^k(\lambda_{s-1}) & 0 \\ 0 & \cdots & \cdots & 0 & J_{m_s}^k(\lambda_s) \end{bmatrix}$$
Now, a standard result on the k-th power of an m_i × m_i Jordan block states that, for k ≥ m_i − 1:
$$J_{m_i}^k(\lambda_i) = \begin{bmatrix} \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} & \binom{k}{2}\lambda_i^{k-2} & \cdots & \binom{k}{m_i-1}\lambda_i^{k-m_i+1} \\ 0 & \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} & \cdots & \binom{k}{m_i-2}\lambda_i^{k-m_i+2} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_i^k & \binom{k}{1}\lambda_i^{k-1} \\ 0 & 0 & \cdots & 0 & \lambda_i^k \end{bmatrix}$$
Thus, if ρ(A) < 1 then |λ_i| < 1 for all i, so that

$$\lim_{k\to\infty} J_{m_i}^k = 0 \quad \forall i,$$
which implies
$$\lim_{k\to\infty} J^k = 0.$$
Therefore,
$$\lim_{k\to\infty} A^k = \lim_{k\to\infty} V J^k V^{-1} = V \left(\lim_{k\to\infty} J^k\right) V^{-1} = 0.$$
On the other hand, if ρ(A) > 1, then at least one entry of J^k does not remain bounded as k increases, which proves the second part of the statement. □
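Both directions of the dichotomy are easy to observe numerically. A small sketch with NumPy (the matrices are arbitrary illustrative choices):

```python
import numpy as np

# Spectral radius 0.9 < 1 (eigenvalues 0.9 and 0.2): the powers decay to zero.
B = np.array([[0.5, 0.4],
              [0.3, 0.6]])
decay = np.abs(np.linalg.matrix_power(B, 200)).max()
print(decay)  # tiny, on the order of 1e-10

# Spectral radius 1.1 > 1: the norm of the powers grows without bound.
C = np.array([[1.1, 0.0],
              [0.0, 1.1]])
growth = np.linalg.norm(np.linalg.matrix_power(C, 200))
print(growth)  # huge, on the order of 1.1**200
```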
Theorem (Gelfand's formula, 1941)
For any matrix norm ‖·‖, we have
$$\rho(A) = \lim_{k\to\infty} \|A^k\|^{1/k}.$$
In other words, Gelfand's formula shows how the spectral radius of A gives the asymptotic growth rate of the norm of A k :
$$\|A^k\| \sim \rho(A)^k \quad \text{for } k \to \infty.$$
Proof: For any ε > 0, consider the matrix

$$\tilde{A} = (\rho(A) + \epsilon)^{-1} A.$$
Then, obviously,
$$\rho(\tilde{A}) = \frac{\rho(A)}{\rho(A)+\epsilon} < 1$$
and, by the previous theorem,
$$\lim_{k\to\infty} \tilde{A}^k = 0.$$
That means, by the definition of the limit of a sequence, there exists a natural number N₁ ∈ ℕ such that
$$\forall k \geq N_1 \Rightarrow \|\tilde{A}^k\| < 1$$
which in turn means:
$$\forall k \geq N_1 \Rightarrow \|A^k\| < (\rho(A)+\epsilon)^k$$
or
$$\forall k \geq N_1 \Rightarrow \|A^k\|^{1/k} < \rho(A)+\epsilon.$$
Let's now consider the matrix

$$\check{A} = (\rho(A) - \epsilon)^{-1} A,$$

assuming without loss of generality that ε < ρ(A) (otherwise the lower bound derived below holds trivially, since norms are non-negative).
Then, obviously,
$$\rho(\check{A}) = \frac{\rho(A)}{\rho(A)-\epsilon} > 1$$
and so, by the previous theorem, $\|\check{A}^k\|$ is not bounded.
This means a natural number N2 ∈ N exists such that
$$\forall k \geq N_2 \Rightarrow \|\check{A}^k\| > 1$$
which in turn means:
$$\forall k \geq N_2 \Rightarrow \|A^k\| > (\rho(A)-\epsilon)^k$$
or
$$\forall k \geq N_2 \Rightarrow \|A^k\|^{1/k} > \rho(A)-\epsilon.$$
Taking N := max(N₁, N₂)
and putting it all together, we obtain:
$$\forall \epsilon > 0, \; \exists N \in \mathbb{N} : \; \forall k \geq N \Rightarrow \rho(A)-\epsilon < \|A^k\|^{1/k} < \rho(A)+\epsilon,$$
which, by definition, is
$$\lim_{k\to\infty} \|A^k\|^{1/k} = \rho(A). \qquad \square$$
Gelfand's formula leads directly to a bound on the spectral radius of a product of finitely many matrices: assuming that they all commute, we obtain
$$\rho(A_1 A_2 \ldots A_n) \leq \rho(A_1)\,\rho(A_2)\ldots\rho(A_n).$$
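The commutativity hypothesis matters here: without it the inequality can fail. A quick numerical sketch with NumPy (the matrices are illustrative choices; the second pair is a classic counterexample with nilpotent factors):

```python
import numpy as np

rho = lambda M: max(abs(np.linalg.eigvals(M)))

# Commuting matrices (here both diagonal): the bound holds.
A1, A2 = np.diag([2.0, 3.0]), np.diag([5.0, 1.0])
assert rho(A1 @ A2) <= rho(A1) * rho(A2)  # 10 <= 15

# Without commutativity the bound can fail: two nilpotent matrices
# (spectral radius 0 each) whose product has spectral radius 1.
N1 = np.array([[0.0, 1.0], [0.0, 0.0]])
N2 = np.array([[0.0, 0.0], [1.0, 0.0]])
print(rho(N1 @ N2), rho(N1) * rho(N2))  # 1.0 0.0
```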
Actually, in case the norm is consistent, the proof shows more than the thesis: using the previous lemma, we can replace the lower bound in the limit definition with the spectral radius itself and write more precisely:
$$\forall \epsilon > 0, \; \exists N \in \mathbb{N} : \; \forall k \geq N \Rightarrow \rho(A) \leq \|A^k\|^{1/k} < \rho(A)+\epsilon,$$
which, by definition, is

$$\lim_{k\to\infty} \|A^k\|^{1/k} = \rho(A)^+,$$

i.e. the limit is approached from above.
Example: Let's consider the matrix

$$A = \begin{bmatrix} 9 & -1 & 2 \\ -2 & 8 & 4 \\ 1 & 1 & 8 \end{bmatrix}$$
whose eigenvalues are 5, 10, 10; by definition, its spectral radius is ρ(A) = 10. In the following table, the values of ‖A^k‖^{1/k} for the four most used norms are listed against several increasing values of k (note that, due to the particular form of this matrix, ‖·‖₁ = ‖·‖∞):
| k | ‖·‖₁ = ‖·‖∞ | ‖·‖_F | ‖·‖₂ |
|-------:|-------------:|-------------:|-------------:|
| 1 | 14 | 15.362291496 | 10.681145748 |
| 2 | 12.649110641 | 12.328294348 | 10.595665162 |
| 3 | 11.934831919 | 11.532450664 | 10.500980846 |
| 4 | 11.501633169 | 11.151002986 | 10.418165779 |
| 5 | 11.216043151 | 10.921242235 | 10.351918183 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 10 | 10.604944422 | 10.455910430 | 10.183690042 |
| 11 | 10.548677680 | 10.413702213 | 10.166990229 |
| 12 | 10.501921835 | 10.378620930 | 10.153031596 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 20 | 10.298254399 | 10.225504447 | 10.091577411 |
| 30 | 10.197860892 | 10.149776921 | 10.060958900 |
| 40 | 10.148031640 | 10.112123681 | 10.045684426 |
| 50 | 10.118251035 | 10.089598820 | 10.036530875 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 100 | 10.058951752 | 10.044699508 | 10.018248786 |
| 200 | 10.029432562 | 10.022324834 | 10.009120234 |
| 300 | 10.019612095 | 10.014877690 | 10.006079232 |
| 400 | 10.014705469 | 10.011156194 | 10.004559078 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 1000 | 10.005879594 | 10.004460985 | 10.001823382 |
| 2000 | 10.002939365 | 10.002230244 | 10.000911649 |
| 3000 | 10.001959481 | 10.001486774 | 10.000607757 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 10000 | 10.000587804 | 10.000446009 | 10.000182323 |
| 20000 | 10.000293898 | 10.000223002 | 10.000091161 |
| 30000 | 10.000195931 | 10.000148667 | 10.000060774 |
| ⋮ | ⋮ | ⋮ | ⋮ |
| 100000 | 10.000058779 | 10.000044600 | 10.000018232 |
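A few rows of the table can be reproduced with a short NumPy script (a sketch: for this matrix the entries of A^k overflow double precision once k approaches 300, so larger values of k would require higher-precision arithmetic):

```python
import numpy as np

A = np.array([[9.0, -1.0, 2.0],
              [-2.0, 8.0, 4.0],
              [1.0, 1.0, 8.0]])

# ||A^k||^(1/k) for the 1-norm, Frobenius norm and 2-norm.
for k in (1, 5, 50, 200):
    Ak = np.linalg.matrix_power(A, k)
    row = [np.linalg.norm(Ak, ord_) ** (1.0 / k) for ord_ in (1, 'fro', 2)]
    print(k, ["%.9f" % x for x in row])
```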
Bounded linear operators
For a bounded linear operator A and the operator norm ‖·‖, we again have
$$\rho(A) = \lim_{k\to\infty} \|A^k\|^{1/k}.$$
A bounded operator (on a complex Hilbert space) is called a spectraloid operator if its spectral radius coincides with its numerical radius. An example of such an operator is a normal operator.
Graphs
The spectral radius of a finite graph is defined to be the spectral radius of its adjacency matrix.
This definition extends to the case of infinite graphs with bounded degrees of vertices (i.e. there exists some real number C such that the degree of every vertex of the graph is smaller than C). In this case, for the graph G, let ℓ²(G) denote the space of functions f: V(G) → ℝ with

$$\sum_{v \in V(G)} |f(v)|^2 < \infty.$$
Let γ: ℓ²(G) → ℓ²(G) be the adjacency operator of G, i.e.,
$$(\gamma f)(v) = \sum_{(u,v) \in E(G)} f(u).$$
The spectral radius of G is defined to be the spectral radius of the bounded linear operator γ.
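For a finite graph, the definition reduces to an eigenvalue computation on the adjacency matrix. A sketch with NumPy (the complete graph K₄ is an arbitrary example; recall that for a connected d-regular graph the spectral radius equals d):

```python
import numpy as np

# Adjacency matrix of the complete graph K4: each vertex is adjacent
# to the other three, so the diagonal is 0 and all other entries are 1.
A = np.ones((4, 4)) - np.eye(4)
rho = max(abs(np.linalg.eigvals(A)))
print(rho)  # ≈ 3.0: K4 is 3-regular, with eigenvalues 3, -1, -1, -1
```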