Eigenvalue perturbation

In mathematics, eigenvalue perturbation is a perturbation approach to finding eigenvalues and eigenvectors of systems perturbed from one with known eigenvectors and eigenvalues. It also allows one to determine the sensitivity of the eigenvalues and eigenvectors with respect to changes in the system. The following derivations are essentially self-contained and can be found in many texts on numerical linear algebra[1] or numerical functional analysis.

Example

Suppose we have solutions to the generalized eigenvalue problem,

$[K_0] \mathbf{x}_{0i} = \lambda_{0i} [M_0] \mathbf{x}_{0i}. \qquad (1)$

That is, we know $\lambda_{0i}$ and $\mathbf{x}_{0i}$ for $i=1,\dots,N$. Now suppose we want to change the matrices by a small amount. That is, we want to let

$[K] = [K_0]+[\delta K] \,$

and

$[M] = [M_0]+[\delta M] \,$

where the $\delta$ terms are much smaller in norm than the corresponding unperturbed matrices. We expect the answers to be of the form

$\lambda_i = \lambda_{0i}+\delta\lambda_i \,$

and

$\mathbf{x}_i = \mathbf{x}_{0i} + \delta\mathbf{x}_i, \,$

where $\delta\lambda_i$ and $\delta\mathbf{x}_i$ are small first-order corrections.

Steps

We assume that the matrices are symmetric and positive definite and assume we have scaled the eigenvectors such that

$\mathbf{x}_{0j}^\top[M_0]\mathbf{x}_{0i} = \delta_i^j \qquad(2)$

where $\delta_i^j$ is the Kronecker delta.
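As a quick numerical illustration (a sketch, assuming NumPy and SciPy are available; the matrices $[K_0]$ and $[M_0]$ below are arbitrary examples, not taken from the text), `scipy.linalg.eigh` solves the generalized problem (1) and returns eigenvectors already normalized as in (2):

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative symmetric positive-definite matrices (not from the text)
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
K0 = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n))
M0 = B @ B.T + n * np.eye(n)

# Solve K0 x = lam M0 x; eigh returns M0-orthonormal eigenvectors
lam0, X0 = eigh(K0, M0)

# Equation (1): K0 x_{0i} = lam_{0i} M0 x_{0i}, for every i at once
resid = K0 @ X0 - M0 @ X0 * lam0
# Equation (2): x_{0j}^T M0 x_{0i} = Kronecker delta
gram = X0.T @ M0 @ X0
```

Here `resid` should vanish to machine precision and `gram` should be the identity matrix.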

Now we want to solve the equation

$[K]\mathbf{x}_i = \lambda_i [M] \mathbf{x}_i.$

Substituting, we get

$([K_0]+[\delta K])(\mathbf{x}_{0i} + \delta \mathbf{x}_{i}) = (\lambda_{0i}+\delta\lambda_{i})([M_0]+[\delta M])(\mathbf{x}_{0i}+\delta\mathbf{x}_{i}),$

which expands to

\begin{align} \left[K_0\right]\mathbf{x}_{0i} & + [\delta K]\mathbf{x}_{0i} + [K_0]\delta \mathbf{x}_i + [\delta K]\delta \mathbf{x}_i \\[6pt] & = \lambda_{0i}[M_0]\mathbf{x}_{0i}+ \lambda_{0i}[M_0]\delta\mathbf{x}_i + \lambda_{0i}[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\mathbf{x}_{0i} \\[6pt] & {} + \lambda_{0i}[\delta M]\delta\mathbf{x}_i + \delta\lambda_i[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\delta\mathbf{x}_i + \delta\lambda_i[\delta M]\delta\mathbf{x}_i. \end{align}

Canceling the zeroth-order terms $[K_0]\mathbf{x}_{0i} = \lambda_{0i}[M_0]\mathbf{x}_{0i}$ using (1) leaves

\begin{align} \left[\delta K\right]\mathbf{x}_{0i} & + [K_0]\delta \mathbf{x}_i + [\delta K]\delta \mathbf{x}_i \\[6pt] & = \lambda_{0i}[M_0]\delta\mathbf{x}_i + \lambda_{0i}[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\mathbf{x}_{0i} \\[6pt] & {} + \lambda_{0i}[\delta M]\delta\mathbf{x}_i + \delta\lambda_i[\delta M]\mathbf{x}_{0i} + \delta\lambda_i[M_0]\delta\mathbf{x}_i + \delta\lambda_i[\delta M]\delta\mathbf{x}_i. \end{align}

Removing the higher-order terms, this simplifies to

$[K_0] \delta\mathbf{x}_i+[\delta K] \mathbf{x}_{0i} = \lambda_{0i}[M_0] \delta \mathbf{x}_i + \lambda_{0i}[\delta M]\mathbf{x}_{0i} + \delta \lambda_i [M_0]\mathbf{x}_{0i}. \qquad(3)$

Because the matrices are symmetric, the unperturbed eigenvectors are $M_0$-orthogonal, so we can use them as a basis for the perturbed eigenvectors. That is, we want to construct

$\delta \mathbf{x}_i = \sum_{j=1}^N \epsilon_{ij} \mathbf{x}_{0j} \qquad(4)$

where the $\epsilon_{ij}$ are small constants that are to be determined. Substituting (4) into (3) and rearranging gives

$[K_0]\sum_{j=1}^N \epsilon_{ij} \mathbf{x}_{0j} + [\delta K]\mathbf{x}_{0i} = \lambda_{0i} [M_0] \sum_{j=1}^N \epsilon_{ij} \mathbf{x}_{0j} + \lambda_{0i} [\delta M] \mathbf{x}_{0i} + \delta\lambda_i [M_0] \mathbf{x}_{0i}. \qquad (5)$

Or:

$\sum_{j=1}^N \epsilon_{ij} [K_0] \mathbf{x}_{0j} + [\delta K]\mathbf{x}_{0i} = \lambda_{0i} [M_0] \sum_{j=1}^N \epsilon_{ij} \mathbf{x}_{0j} + \lambda_{0i} [\delta M] \mathbf{x}_{0i} + \delta\lambda_i [M_0] \mathbf{x}_{0i}.$

By equation (1):

$\sum_{j=1}^N \epsilon_{ij} \lambda_{0j} [M_0] \mathbf{x}_{0j} + [\delta K]\mathbf{x}_{0i} = \lambda_{0i} [M_0] \sum_{j=1}^N \epsilon_{ij} \mathbf{x}_{0j} + \lambda_{0i} [\delta M] \mathbf{x}_{0i} + \delta\lambda_i [M_0] \mathbf{x}_{0i}.$

Because the eigenvectors are $M_0$-orthonormal (equation (2)), we can remove the summations by left-multiplying by $\mathbf{x}_{0i}^\top$:

$\mathbf{x}_{0i}^\top \epsilon_{ii} \lambda_{0i} [M_0] \mathbf{x}_{0i} + \mathbf{x}_{0i}^\top[\delta K]\mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^\top[M_0] \epsilon_{ii} \mathbf{x}_{0i} + \lambda_{0i}\mathbf{x}_{0i}^\top [\delta M] \mathbf{x}_{0i} + \delta\lambda_i\mathbf{x}_{0i}^\top [M_0] \mathbf{x}_{0i}.$

By use of equation (1) again:

$\mathbf{x}_{0i}^\top[K_0] \epsilon_{ii} \mathbf{x}_{0i} + \mathbf{x}_{0i}^\top[\delta K]\mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^\top[M_0] \epsilon_{ii} \mathbf{x}_{0i} + \lambda_{0i}\mathbf{x}_{0i}^\top [\delta M] \mathbf{x}_{0i} + \delta\lambda_i\mathbf{x}_{0i}^\top [M_0] \mathbf{x}_{0i}. ~~(6)$

The two terms containing $\epsilon_{ii}$ are equal because left-multiplying (1) by $\mathbf{x}_{0i} ^\top$ gives

$\mathbf{x}_{0i}^\top[K_0]\mathbf{x}_{0i} = \lambda_{0i}\mathbf{x}_{0i}^\top[M_0]\mathbf{x}_{0i}.$

Canceling those terms in (6) leaves

$\mathbf{x}_{0i}^\top[\delta K]\mathbf{x}_{0i} = \lambda_{0i} \mathbf{x}_{0i}^\top[\delta M] \mathbf{x}_{0i} + \delta\lambda_i \mathbf{x}_{0i}^\top [M_0] \mathbf{x}_{0i}.$

Rearranging gives

$\delta\lambda_i = \frac{\mathbf{x}^\top_{0i}([\delta K] - \lambda_{0i}[\delta M] )\mathbf{x}_{0i}}{\mathbf{x}_{0i}^\top[M_0] \mathbf{x}_{0i}}$

But by (2) the denominator equals 1, so

$\delta\lambda_i = \mathbf{x}^\top_{0i}([\delta K] - \lambda_{0i}[\delta M] )\mathbf{x}_{0i}$   ■
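This first-order formula is easy to check numerically. The sketch below (assuming SciPy; the matrices and perturbations are illustrative) compares $\lambda_{0i} + \delta\lambda_i$ with the exact eigenvalues of the perturbed pencil:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)); K0 = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M0 = B @ B.T + n * np.eye(n)
lam0, X0 = eigh(K0, M0)           # M0-orthonormal eigenvectors

eps = 1e-5                        # size of the perturbation
C = rng.standard_normal((n, n)); dK = eps * (C + C.T)   # symmetric dK
D = rng.standard_normal((n, n)); dM = eps * (D + D.T)   # symmetric dM

# First-order prediction: dlam_i = x_{0i}^T (dK - lam_{0i} dM) x_{0i}
dlam = np.array([X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i] for i in range(n)])

# Exact eigenvalues of the perturbed pencil, for comparison
lam_exact = eigh(K0 + dK, M0 + dM, eigvals_only=True)
err = np.abs(lam_exact - (lam0 + dlam))   # should be O(eps^2)
```

Since the neglected terms are second order in the perturbation, `err` should be several orders of magnitude smaller than `eps`.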

Then, assuming the unperturbed eigenvalues are distinct, left-multiplying equation (5) by $\mathbf{x}_{0k}^\top$ (for $i\neq k$) and proceeding as above gives

$\epsilon_{ik} = \frac{\mathbf{x}^\top_{0k}([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0k}}, \qquad i\neq k.$

Or by changing the name of the indices:

$\epsilon_{ij} = \frac{\mathbf{x}^\top_{0j}([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0j}}, \qquad i\neq j.$

To find $\epsilon_{ii}$, impose the normalization $\mathbf{x}^\top_i[M]\mathbf{x}_i = 1$ on the perturbed eigenvector. Expanding to first order using (2) and (4),

$\mathbf{x}^\top_i[M]\mathbf{x}_i \approx \mathbf{x}^\top_{0i}[M_0]\mathbf{x}_{0i} + 2\epsilon_{ii} + \mathbf{x}^\top_{0i}[\delta M]\mathbf{x}_{0i} = 1 + 2\epsilon_{ii} + \mathbf{x}^\top_{0i}[\delta M]\mathbf{x}_{0i} = 1,$

so

$\epsilon_{ii}=-\frac{1}{2}\mathbf{x}^\top_{0i}[\delta M]\mathbf{x}_{0i}.$

Summary

$\lambda_i = \lambda_{0i} + \mathbf{x}^\top_{0i} ([\delta K] - \lambda_{0i}[\delta M]) \mathbf{x}_{0i}$

and

$\mathbf{x}_i = \mathbf{x}_{0i}(1 - \frac{1}{2} \mathbf{x}^\top_{0i}[\delta M] \mathbf{x}_{0i}) + \sum_{j=1\atop j\neq i}^N \frac{\mathbf{x}^\top_{0j}([\delta K] - \lambda_{0i}[\delta M])\mathbf{x}_{0i}}{\lambda_{0i}-\lambda_{0j}}\mathbf{x}_{0j}$

for infinitesimal $[\delta K]$ and $[\delta M]$ (the higher-order terms being negligible).
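The summary formulas can likewise be verified numerically. A sketch (illustrative matrices, SciPy assumed; the exact eigenvector's sign is fixed by hand, since eigenvectors are determined only up to sign):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)); K0 = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M0 = B @ B.T + n * np.eye(n)
lam0, X0 = eigh(K0, M0)

eps = 1e-6
C = rng.standard_normal((n, n)); dK = eps * (C + C.T)
D = rng.standard_normal((n, n)); dM = eps * (D + D.T)

i = 2
x0 = X0[:, i]
# First-order eigenvector: normalization term plus off-diagonal expansion
xi = x0 * (1 - 0.5 * (x0 @ dM @ x0))
for j in range(n):
    if j != i:
        xi += (X0[:, j] @ (dK - lam0[i] * dM) @ x0) \
              / (lam0[i] - lam0[j]) * X0[:, j]

# Exact perturbed eigenvector, sign-aligned with the unperturbed one
_, X = eigh(K0 + dK, M0 + dM)
x_exact = X[:, i] * np.sign(X[:, i] @ M0 @ x0)
err = np.linalg.norm(x_exact - xi)        # should be O(eps^2)
```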

Results

This means it is possible to perform an efficient sensitivity analysis on $\lambda_i$ as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric, so changing $K_{(k\ell)}$ also changes $K_{(\ell k)}$; this is the origin of the $(2-\delta_k^\ell)$ factor below.)

$\frac{\partial \lambda_i}{\partial K_{(k\ell)}} = \frac{\partial}{\partial K_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}^\top_{0i} ([\delta K] - \lambda_{0i}[\delta M]) \mathbf{x}_{0i}\right) = x_{0i(k)} x_{0i(\ell)} (2 - \delta_k^\ell)$

and

$\frac{\partial \lambda_i}{\partial M_{(k\ell)}} = \frac{\partial}{\partial M_{(k\ell)}}\left(\lambda_{0i} + \mathbf{x}^\top_{0i} ([\delta K] - \lambda_{0i}[\delta M]) \mathbf{x}_{0i}\right) = -\lambda_{0i} x_{0i(k)} x_{0i(\ell)}(2-\delta_k^\ell).$
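These derivatives can be spot-checked by finite differences: perturb one entry of $[K]$ (and, by symmetry, its mirror entry) by a small step and recompute the eigenvalue. A sketch with illustrative matrices, SciPy assumed:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)); K0 = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M0 = B @ B.T + n * np.eye(n)
lam0, X0 = eigh(K0, M0)

i, k, l = 1, 0, 2                 # eigenvalue index and entry (k, l), k != l
h = 1e-6
dK = np.zeros((n, n))
dK[k, l] = h; dK[l, k] = h        # symmetric: the mirror entry moves too

# Finite-difference estimate vs. the closed-form sensitivity
fd = (eigh(K0 + dK, M0, eigvals_only=True)[i] - lam0[i]) / h
formula = X0[k, i] * X0[l, i] * (2 - (k == l))
err = abs(fd - formula)
```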

Similarly

$\frac{\partial\mathbf{x}_i}{\partial K_{(k\ell)}} = \sum_{j=1\atop j\neq i}^N \frac{x_{0j(k)} x_{0i(\ell)} + (1-\delta_k^\ell)\, x_{0j(\ell)} x_{0i(k)}}{\lambda_{0i}-\lambda_{0j}}\mathbf{x}_{0j}$

and

$\frac{\partial \mathbf{x}_i}{\partial M_{(k\ell)}} = -\mathbf{x}_{0i}\frac{x_{0i(k)}x_{0i(\ell)}}{2}(2-\delta_k^\ell) - \lambda_{0i}\sum_{j=1\atop j\neq i}^N \frac{x_{0j(k)} x_{0i(\ell)} + (1-\delta_k^\ell)\, x_{0j(\ell)} x_{0i(k)}}{\lambda_{0i}-\lambda_{0j}}\mathbf{x}_{0j}.$

(For the $j\neq i$ terms the two cross products $x_{0j(k)} x_{0i(\ell)}$ and $x_{0j(\ell)} x_{0i(k)}$ are distinct, so both appear; they coincide only when $j=i$.)
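The eigenvector derivative admits the same finite-difference check. Note that for $j\neq i$ the two cross terms $x_{0j(k)}x_{0i(\ell)}$ and $x_{0j(\ell)}x_{0i(k)}$ differ, so both enter the numerator for an off-diagonal entry. Illustrative sketch (SciPy assumed):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)
n = 4
A = rng.standard_normal((n, n)); K0 = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M0 = B @ B.T + n * np.eye(n)
lam0, X0 = eigh(K0, M0)

i, k, l = 0, 1, 3                 # eigenvector index and entry (k, l), k != l
h = 1e-6
dK = np.zeros((n, n))
dK[k, l] = h; dK[l, k] = h        # symmetric perturbation of one entry pair

_, X = eigh(K0 + dK, M0)
x0 = X0[:, i]
x_new = X[:, i] * np.sign(X[:, i] @ M0 @ x0)   # fix the sign ambiguity
fd = (x_new - x0) / h                          # finite-difference derivative

# First-order prediction with the symmetrized numerator for k != l
pred = np.zeros(n)
for j in range(n):
    if j != i:
        num = X0[k, j] * X0[l, i] + X0[l, j] * X0[k, i]
        pred += num / (lam0[i] - lam0[j]) * X0[:, j]
err = np.linalg.norm(fd - pred)
```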