# LOBPCG

Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) is a matrix-free method for finding the largest (or smallest) eigenvalues and the corresponding eigenvectors of a symmetric positive definite generalized eigenvalue problem

${\displaystyle Ax=\lambda Bx,}$

for a given pair ${\displaystyle (A,B)}$ of complex Hermitian or real symmetric matrices, where the matrix ${\displaystyle B}$ is also assumed positive-definite.

## Background

Kantorovich in 1948 proposed calculating the smallest eigenvalue ${\displaystyle \lambda _{1}}$ of a symmetric matrix ${\displaystyle A}$ by steepest descent using a direction ${\displaystyle r=Ax-\lambda (x)x}$ of a scaled gradient of a Rayleigh quotient ${\displaystyle \lambda (x)=(x,Ax)/(x,x)}$ in a scalar product ${\displaystyle (x,y)=x'y}$, with the step size computed by minimizing the Rayleigh quotient in the linear span of the vectors ${\displaystyle x}$ and ${\displaystyle r}$, i.e. in a locally optimal manner. Samokish[1] proposed applying a preconditioner ${\displaystyle T}$ to the residual vector ${\displaystyle r}$ to generate the preconditioned direction ${\displaystyle w=Tr}$ and derived convergence rate bounds that are asymptotic as ${\displaystyle x}$ approaches the eigenvector. D'yakonov suggested[2] spectrally equivalent preconditioning and derived non-asymptotic convergence rate bounds. Block locally optimal multi-step steepest descent for eigenvalue problems was described in [3]. Local minimization of the Rayleigh quotient on the subspace spanned by the current approximation, the current residual and the previous approximation, as well as its block version, appeared in [4]. The preconditioned version was analyzed in [5] and [6].

## Main features[7]

• Matrix-free, i.e. does not require storing the coefficient matrix explicitly, but can access the matrix by evaluating matrix-vector products.
• Factorization-free, i.e. does not require any matrix decomposition even for a generalized eigenvalue problem.
• The costs per iteration and the memory use are competitive with those of the Lanczos method, computing a single extreme eigenpair of a symmetric matrix.
• Linear convergence is theoretically guaranteed and practically observed.
• Convergence is accelerated by direct preconditioning, in contrast to the Lanczos method; the preconditioner may be variable and non-symmetric as well as fixed and positive definite.
• Allows trivial incorporation of efficient domain decomposition and multigrid techniques via preconditioning.
• Allows warm starts and computes an approximation to the eigenvector on every iteration.
• More numerically stable compared to the Lanczos method, and can operate in low-precision computer arithmetic.
• Easy to implement, with many versions already available.
• Blocking allows utilizing highly efficient matrix-matrix operations, e.g., BLAS 3.
• The block size can be tuned to balance convergence speed vs. computer costs of orthogonalizations and the Rayleigh-Ritz method on every iteration.

## Algorithm

### Single-vector version

#### Preliminaries: Gradient descent for eigenvalue problems

The method performs an iterative maximization (or minimization) of the generalized Rayleigh quotient

${\displaystyle \rho (x):=\rho (A,B;x):={\frac {x^{T}Ax}{x^{T}Bx}},}$

which results in finding the largest (or smallest) eigenpairs of ${\displaystyle Ax=\lambda Bx.}$

The direction of the steepest ascent, which is the gradient of the generalized Rayleigh quotient, is positively proportional to the vector

${\displaystyle r:=Ax-\rho (x)Bx,}$

called the eigenvector residual. If a preconditioner ${\displaystyle T}$ is available, it is applied to the residual and gives the vector

${\displaystyle w:=Tr,}$

called the preconditioned residual. Without preconditioning, we set ${\displaystyle T:=I}$ and so ${\displaystyle w:=r}$. An iterative method

${\displaystyle x^{i+1}:=x^{i}+\alpha ^{i}T(Ax^{i}-\rho (x^{i})Bx^{i}),}$

or, in short,

${\displaystyle x^{i+1}:=x^{i}+\alpha ^{i}w^{i},\,}$
${\displaystyle w^{i}:=Tr^{i},\,}$
${\displaystyle r^{i}:=Ax^{i}-\rho (x^{i})Bx^{i},}$

is known as preconditioned steepest ascent (or descent), where the scalar ${\displaystyle \alpha ^{i}}$ is called the step size. The optimal step size can be determined by maximizing the Rayleigh quotient, i.e.,

${\displaystyle x^{i+1}:=\arg \max _{y\in span\{x^{i},w^{i}\}}\rho (y)}$

(or ${\displaystyle \arg \min }$ in case of minimizing), in which case the method is called locally optimal.
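For illustration, below is a minimal NumPy/SciPy sketch of one locally optimal preconditioned step, assuming dense arrays `A`, `B`, and a preconditioner `T`; the function name is hypothetical and this is not the code of any particular implementation:

```python
import numpy as np
from scipy.linalg import eigh  # dense symmetric generalized eigensolver

def locally_optimal_step(A, B, T, x):
    """One locally optimal preconditioned steepest-ascent step:
    maximize the Rayleigh quotient over span{x, w}."""
    rho = (x @ (A @ x)) / (x @ (B @ x))  # Rayleigh quotient at x
    r = A @ x - rho * (B @ x)            # eigenvector residual
    w = T @ r                            # preconditioned residual
    S = np.column_stack([x, w])          # basis of the 2-dimensional trial subspace
    # Rayleigh-Ritz: 2x2 generalized symmetric eigenproblem on the subspace
    _, C = eigh(S.T @ (A @ S), S.T @ (B @ S))
    return S @ C[:, -1]                  # Ritz vector of the largest Ritz value
```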

#### Three-term recurrence

To dramatically accelerate the convergence of the locally optimal preconditioned steepest ascent (or descent), one extra vector can be added to the two-term recurrence relation to make it three-term:

${\displaystyle x^{i+1}:=\arg \max _{y\in span\{x^{i},w^{i},x^{i-1}\}}\rho (y)}$

(use ${\displaystyle \arg \min }$ in case of minimizing). The maximization/minimization of the Rayleigh quotient in a 3-dimensional subspace can be performed numerically by the Rayleigh–Ritz method. Adding more vectors (see, e.g., Richardson extrapolation) does not result in significant acceleration[8] but increases computation costs, so it is not generally recommended.
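The complete single-vector iteration can be sketched as follows, again a hedged illustration with hypothetical names and dense matrices for clarity only; practical codes replace ${\displaystyle x^{i-1}}$ with the vector ${\displaystyle p^{i}}$ described in the next subsection:

```python
import numpy as np
from scipy.linalg import eigh

def lobpcg_single(A, B, T, x0, maxiter=200, tol=1e-8):
    """Single-vector LOBPCG sketch for the largest eigenpair of A x = lambda B x,
    performing Rayleigh-Ritz on span{x, w, x_prev} at every iteration."""
    x = x0 / np.linalg.norm(x0)
    x_prev = None
    for _ in range(maxiter):
        rho = (x @ (A @ x)) / (x @ (B @ x))
        r = A @ x - rho * (B @ x)
        if np.linalg.norm(r) < tol:
            break
        w = T @ r
        S = np.column_stack([x, w] if x_prev is None else [x, w, x_prev])
        # Caveat: near convergence x and x_prev are nearly collinear, so this
        # naive basis becomes ill-conditioned (see the next subsection).
        _, C = eigh(S.T @ (A @ S), S.T @ (B @ S))
        x_new = S @ C[:, -1]
        x_prev, x = x, x_new / np.linalg.norm(x_new)
    return rho, x
```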

#### Numerical stability improvements

As the iterations converge, the vectors ${\displaystyle x^{i}}$ and ${\displaystyle x^{i-1}}$ become nearly linearly dependent, resulting in a precision loss and making the Rayleigh–Ritz method numerically unstable in the presence of round-off errors. The loss of precision may be avoided by substituting the vector ${\displaystyle x^{i-1}}$ with a vector ${\displaystyle p^{i}}$ that may be further away from ${\displaystyle x^{i}}$ in the basis of the three-dimensional subspace ${\displaystyle span\{x^{i},w^{i},x^{i-1}\}}$, while keeping the subspace unchanged and avoiding orthogonalization or any other extra operations.[8] Furthermore, orthogonalizing the basis of the three-dimensional subspace may be needed for ill-conditioned eigenvalue problems to improve stability and attainable accuracy.
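One common way to realize this substitution is sketched below under the same illustrative assumptions: if the Rayleigh–Ritz step returns ${\displaystyle x^{i+1}=c_{x}x^{i}+c_{w}w^{i}+c_{p}p^{i}}$, the new direction keeps only the non-${\displaystyle x}$ part, so that ${\displaystyle span\{x^{i+1},p^{i+1}\}=span\{x^{i+1},x^{i}\}}$ whenever ${\displaystyle c_{x}\neq 0}$, while the two basis vectors never become nearly collinear.

```python
def update_x_and_p(c, x, w, p):
    """Given Ritz coefficients c = (c_x, c_w, c_p) in the basis [x, w, p],
    form x_new = c_x*x + c_w*w + c_p*p and p_new = c_w*w + c_p*p.
    span{x_new, p_new} equals span{x_new, x}, yet p_new stays far from x_new,
    keeping the next three-vector basis well conditioned."""
    c_x, c_w, c_p = c
    p_new = c_w * w + c_p * p
    return c_x * x + p_new, p_new
```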

#### Krylov subspace analogs

This is a single-vector version of the LOBPCG method, one possible generalization of the preconditioned conjugate gradient linear solvers to the case of symmetric eigenvalue problems.[8] Even in the trivial case ${\displaystyle T=I}$ and ${\displaystyle B=I}$ the resulting approximation with ${\displaystyle i>3}$ will be different from that obtained by the Lanczos algorithm, although both approximations will belong to the same Krylov subspace.

#### Practical use scenarios

The extreme simplicity and high efficiency of the single-vector version of LOBPCG make it attractive for eigenvalue-related applications under severe hardware limitations, ranging from spectral-clustering-based real-time anomaly detection via graph partitioning on embedded ASICs or FPGAs to modelling physical phenomena of record computing complexity on exascale TOP500 supercomputers.

### Block version

#### Summary

Subsequent eigenpairs can be computed one-by-one via single-vector LOBPCG supplemented with an orthogonal deflation or simultaneously as a block. In the former approach, imprecisions in already computed approximate eigenvectors additively affect the accuracy of the subsequently computed eigenvectors, thus increasing the error with every new computation. Iterating several approximate eigenvectors together in a block in a locally optimal fashion in the block version of LOBPCG[8] allows fast, accurate, and robust computation of eigenvectors, including those corresponding to nearly-multiple eigenvalues, where the single-vector LOBPCG suffers from slow convergence. The block size can be tuned to balance numerical stability vs. convergence speed vs. computer costs of orthogonalizations and the Rayleigh-Ritz method on every iteration.

#### Core design

The block approach in LOBPCG replaces single-vectors ${\displaystyle x^{i},\,w^{i},}$ and ${\displaystyle p^{i}}$ with block-vectors, i.e. matrices ${\displaystyle X^{i},\,W^{i},}$ and ${\displaystyle P^{i}}$, where, e.g., every column of ${\displaystyle X^{i}}$ approximates one of the eigenvectors. All columns are iterated simultaneously, and the next matrix of approximate eigenvectors ${\displaystyle X^{i+1}}$ is determined by the Rayleigh–Ritz method on the subspace spanned by all columns of matrices ${\displaystyle X^{i},\,W^{i},}$ and ${\displaystyle P^{i}}$. Each column of ${\displaystyle W^{i}}$ is computed simply as the preconditioned residual for every column of ${\displaystyle X^{i}.}$ The matrix ${\displaystyle P^{i}}$ is determined such that the subspaces spanned by the columns of ${\displaystyle [X^{i},\,P^{i}]}$ and of ${\displaystyle [X^{i},\,X^{i-1}]}$ are the same.
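This core step may be sketched as follows, mirroring blockwise the single-vector update of ${\displaystyle p^{i}}$ above (a hedged dense-matrix illustration with hypothetical names, assuming the combined basis has full column rank):

```python
import numpy as np
from scipy.linalg import eigh

def block_step(A, B, X, W, P):
    """One block Rayleigh-Ritz step on the span of the columns of [X, W, P]:
    returns the k largest Ritz values, the new block X, and the new block P."""
    k = X.shape[1]
    S = np.column_stack([X, W] if P is None else [X, W, P])
    theta, C = eigh(S.T @ (A @ S), S.T @ (B @ S))
    C = C[:, ::-1][:, :k]          # coefficients of the k largest Ritz pairs
    X_new = S @ C                  # new approximate eigenvectors
    P_new = S[:, k:] @ C[k:, :]    # drop the X-block, as in the single-vector case
    return theta[::-1][:k], X_new, P_new
```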

#### Numerical stability vs. efficiency

The outcome of the Rayleigh–Ritz method is determined by the subspace spanned by all columns of matrices ${\displaystyle X^{i},\,W^{i},}$ and ${\displaystyle P^{i}}$, where a basis of the subspace can theoretically be arbitrary. However, in inexact computer arithmetic the Rayleigh–Ritz method becomes numerically unstable if some of the basis vectors are approximately linearly dependent. Numerical instabilities typically occur, e.g., if some of the eigenvectors in the iterative block already reach the attainable accuracy for a given computer precision, and are especially prominent in low precision, e.g., single precision.

The art of the multiple different implementations of LOBPCG is to ensure numerical stability of the Rayleigh–Ritz method at minimal computing costs by choosing a good basis of the subspace. The arguably most stable approach of making the basis vectors orthogonal, e.g., by the Gram–Schmidt process, is also the most computationally expensive. For example, LOBPCG implementations[9][10] utilize the unstable but efficient Cholesky decomposition of the normal matrix, which is performed only on the individual matrices ${\displaystyle W^{i}}$ and ${\displaystyle P^{i}}$, rather than on the whole subspace. The constantly increasing amount of computer memory allows typical block sizes nowadays in the ${\displaystyle 10^{3}-10^{4}}$ range, where the percentage of compute time spent on orthogonalizations and the Rayleigh-Ritz method starts dominating.
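A sketch of the Cholesky-based approach on a single block (an illustrative assumption, not the code of the cited implementations) shows both its low cost and its failure mode:

```python
import numpy as np

def b_orthonormalize(B, V):
    """B-orthonormalize the columns of V via Cholesky of the Gram matrix:
    with V^T B V = L L^T, return V_new = V L^{-T}, so V_new^T B V_new = I.
    np.linalg.cholesky raises LinAlgError if V is numerically rank deficient."""
    L = np.linalg.cholesky(V.T @ (B @ V))  # normal (Gram) matrix factorization
    return np.linalg.solve(L, V.T).T       # apply L^{-T} from the right
```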

#### Locking of previously converged eigenvectors

Block methods for eigenvalue problems that iterate subspaces commonly have some of the iterative eigenvectors converging faster than others, which motivates locking the already converged eigenvectors, i.e. removing them from the iterative loop, in order to eliminate unnecessary computations and improve numerical stability. A simple removal of an eigenvector may likely result in forming its duplicate among the still iterating vectors. The fact that the eigenvectors of symmetric eigenvalue problems are pair-wise orthogonal suggests keeping all iterative vectors orthogonal to the locked vectors.

Locking can be implemented in different ways, maintaining numerical accuracy and stability while minimizing the compute costs. For example, LOBPCG implementations[9][10] follow[8][11] in separating hard locking, i.e. a deflation by restriction, where the locked eigenvectors serve as a code input and do not change, from soft locking, where the locked vectors do not participate in the typically most expensive iterative step of computing the residuals, but fully participate in the Rayleigh–Ritz method and thus are allowed to be changed by the Rayleigh–Ritz method.
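A minimal sketch of the soft-locking bookkeeping (a hypothetical helper, not taken from the cited implementations): converged columns are excluded from the residual and preconditioner work, while the Rayleigh–Ritz basis still contains all columns of the iterative block.

```python
import numpy as np

def active_residuals(A, B, X, theta, tol):
    """Compute the block of residuals R = A X - B X diag(theta) and report
    which columns are still active, i.e. have residual norms above tol.
    Soft locking: only R[:, active] is preconditioned on the next iteration."""
    R = A @ X - (B @ X) * theta                 # theta broadcasts over columns
    active = np.linalg.norm(R, axis=0) > tol    # column-wise residual norms
    return R[:, active], active
```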

## Convergence theory and practice

LOBPCG by construction is guaranteed[8] to minimize the Rayleigh quotient not slower than the block steepest gradient descent, which has a comprehensive convergence theory. Every eigenvector is a stationary point of the Rayleigh quotient, where the gradient vanishes. Thus, the gradient descent may slow down in a vicinity of any eigenvector; however, it is guaranteed either to converge to the eigenvector with a linear convergence rate or, if this eigenvector is a saddle point, the iterative Rayleigh quotient is more likely to drop below the corresponding eigenvalue and start converging linearly to the next eigenvalue below. The worst value of the linear convergence rate has been determined[8] and depends on the relative gap between the eigenvalue and the rest of the matrix spectrum and on the quality of the preconditioner, if present.

For a general matrix, there is evidently no way to predict the eigenvectors and thus generate initial approximations that always work well. The iterative solution by LOBPCG may be sensitive to the initial eigenvector approximations, e.g., taking longer to converge and slowing down when passing intermediate eigenpairs. Moreover, in theory, convergence to the smallest eigenpair cannot be guaranteed, although the probability of such a miss is zero. A good quality random Gaussian vector with zero mean is commonly the default in LOBPCG to generate the initial approximations. To fix the initial approximations, one can select a fixed seed for the random number generator.

In contrast to the Lanczos method, LOBPCG rarely exhibits asymptotic superlinear convergence in practice.

## Partial principal component analysis (PCA) and singular value decomposition (SVD)

LOBPCG can be trivially adapted for computing several largest singular values and the corresponding singular vectors (partial SVD), e.g., for iterative computation of PCA, for a data matrix ${\displaystyle D}$ with zero mean, without explicitly computing the covariance matrix ${\displaystyle D^{T}D}$, i.e. in matrix-free fashion. The main calculation is evaluation of a function of the product ${\displaystyle D^{T}(DX)}$ of the covariance matrix ${\displaystyle D^{T}D}$ and the block-vector ${\displaystyle X}$ that iteratively approximates the desired singular vectors. PCA needs the largest eigenvalues of the covariance matrix, while LOBPCG is typically implemented to calculate the smallest ones. A simple work-around is to negate the function, substituting ${\displaystyle -D^{T}(DX)}$ for ${\displaystyle D^{T}(DX)}$ and thus reversing the order of the eigenvalues, since LOBPCG does not care if the matrix of the eigenvalue problem is positive definite or not.[9]

LOBPCG for PCA and SVD is implemented in SciPy since revision 1.4.0.[12]
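As a hedged illustration with SciPy (the array sizes and parameter choices below are arbitrary assumptions), the negation work-around looks as follows; note that SciPy's lobpcg can also target the largest eigenvalues directly via largest=True:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lobpcg

rng = np.random.default_rng(0)                 # fixed seed: reproducible start
D = rng.standard_normal((10000, 200))
D = D - D.mean(axis=0)                         # zero-mean data matrix
n = D.shape[1]

# Matrix-free action of the negated covariance matrix: v -> -D^T (D v).
op = LinearOperator((n, n), matvec=lambda v: -(D.T @ (D @ v)), dtype=D.dtype)

X0 = rng.standard_normal((n, 5))               # random block: 5 components
vals, vecs = lobpcg(op, X0, largest=False, tol=1e-6, maxiter=200)
print(-vals)                                   # 5 largest eigenvalues of D^T D
```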

## General software implementations

LOBPCG's inventor, Andrew Knyazev, published a reference implementation called Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX)[13][14] with interfaces to PETSc, hypre, and the Parallel Hierarchical Adaptive MultiLevel method (PHAML).[15] Other implementations are available in, e.g., GNU Octave,[16] MATLAB (including for distributed or tiling arrays),[9] Java,[17] Anasazi (Trilinos),[18] SLEPc,[19][20] SciPy,[10] Julia,[21] MAGMA,[22] PyTorch,[23] Rust,[24] OpenMP and OpenACC,[25] RAPIDS cuGraph[26] and NVIDIA AMGX.[27] LOBPCG is implemented,[28] but not included, in TensorFlow.

## Applications

### Materials science

LOBPCG is implemented in ABINIT[29] (including the CUDA version) and Octopus.[30] It has been used for multi-billion size matrices by Gordon Bell Prize finalists on the Earth Simulator supercomputer in Japan.[31][32] The Hubbard model for strongly correlated electron systems, studied to understand the mechanism behind superconductivity, uses LOBPCG to calculate the ground state of the Hamiltonian on the K computer.[33] There are MATLAB[34] and Julia[35][36][37] versions of LOBPCG for Kohn-Sham equations and density functional theory (DFT) using the plane-wave basis. Recent implementations include TTPY,[38] Platypus‐QM,[39] MFDn,[40] ACE-Molecule,[41] and LACONIC.[42]

### Mechanics and fluids

LOBPCG from BLOPEX is used for preconditioner setup in the Multilevel Balancing Domain Decomposition by Constraints (BDDC) solver library BDDCML, which is incorporated into OpenFTL (Open Finite element Template Library) and the Flow123d simulator of underground water flow, solute and heat transport in fractured porous media. LOBPCG has been implemented[43] in LS-DYNA.

### Maxwell's equations

LOBPCG is one of the core eigenvalue solvers in PYFEMax and the high performance multiphysics finite element software Netgen/NGSolve. LOBPCG from hypre is incorporated into MFEM, an open source lightweight scalable C++ library for finite element methods, which is used in many projects, including BLAST, XBraid, VisIt, xSDK, the FASTMath institute in SciDAC, and the co-design Center for Efficient Exascale Discretizations (CEED) in the Exascale Computing Project.

### Denoising

An iterative LOBPCG-based approximate low-pass filter can be used for denoising, e.g., to accelerate total variation denoising.[44]

### Image segmentation

Image segmentation via spectral clustering performs a low-dimension embedding using an affinity matrix between pixels, followed by clustering of the components of the eigenvectors in the low dimensional space. LOBPCG with multigrid preconditioning was first applied to image segmentation in [45] via spectral graph partitioning using the graph Laplacian for the bilateral filter. Scikit-learn uses LOBPCG from SciPy with algebraic multigrid preconditioning for solving the eigenvalue problem.[46]
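For example, a small illustrative call (toy data in place of real pixel affinities; the parameter choices are assumptions):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
# Two well-separated point clouds as a toy stand-in for image pixels.
points = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
                    rng.normal(3.0, 0.3, (100, 2))])

labels = SpectralClustering(
    n_clusters=2,
    eigen_solver="lobpcg",          # LOBPCG from SciPy; "amg" adds multigrid
    affinity="nearest_neighbors",   # sparse affinity (graph) matrix
    random_state=0,
).fit_predict(points)
```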

### Data mining

Software packages scikit-learn and Megaman[47] use LOBPCG to scale spectral clustering[48] and manifold learning[49] via Laplacian eigenmaps to large data sets. NVIDIA has implemented[50] LOBPCG in its nvGRAPH library introduced in CUDA 8.

## References

1. Samokish, B.A. (1958). "The steepest descent method for an eigenvalue problem with semi-bounded operators". Izvestiya Vuzov, Math. (5): 105–114.
2. D'yakonov, E. G. (1996). Optimization in solving elliptic problems. CRC-Press. p. 592. ISBN 978-0-8493-2872-5.
3. Cullum, Jane K.; Willoughby, Ralph A. (2002). Lanczos algorithms for large symmetric eigenvalue computations. Vol. 1 (Reprint of the 1985 original). Society for Industrial and Applied Mathematics.
4. Knyazev, Andrew V. (1987). "Convergence rate estimates for iterative methods for mesh symmetric eigenvalue problem". Soviet J. Numerical Analysis and Math. Modelling. 2 (5): 371–396.
5. Knyazev, Andrew V. (1991). "A preconditioned conjugate gradient method for eigenvalue problems and its implementation in a subspace". International Ser. Numerical Mathematics, V. 96, Eigenwertaufgaben in Natur- und Ingenieurwissenschaften und Ihre Numerische Behandlung, Oberwolfach 1990, Birkhauser: 143–154.
6. Knyazev, Andrew V. (1998). "Preconditioned eigensolvers - an oxymoron?". Electronic Transactions on Numerical Analysis. 7: 104–123.
7. Knyazev, Andrew (2017). "Recent implementations, applications, and extensions of the Locally Optimal Block Preconditioned Conjugate Gradient method (LOBPCG)". arXiv:1708.08354 [cs.NA].
8. Knyazev, Andrew V. (2001). "Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method". SIAM Journal on Scientific Computing. 23 (2): 517–541. doi:10.1137/S1064827500366124.
9.
10.
11. Knyazev, A. (2004). Hard and soft locking in iterative methods for symmetric eigenvalue problems. Eighth Copper Mountain Conference on Iterative Methods, March 28 - April 2, 2004. doi:10.13140/RG.2.2.11794.48327.
12.
13.
14. Knyazev, A. V.; Argentati, M. E.; Lashuk, I.; Ovtchinnikov, E. E. (2007). "Block Locally Optimal Preconditioned Eigenvalue Xolvers (BLOPEX) in Hypre and PETSc". SIAM Journal on Scientific Computing. 29 (5): 2224. arXiv:0705.2626. Bibcode:2007arXiv0705.2626K. doi:10.1137/060661624.
15.
16. Octave linear-algebra function lobpcg.
17.
18.
19. Native SLEPc LOBPCG.
20.
21.
22. Anzt, Hartwig; Tomov, Stanimir; Dongarra, Jack (2015). "Accelerating the LOBPCG method on GPUs using a blocked sparse matrix vector product". Proceedings of the Symposium on High Performance Computing (HPC '15). Society for Computer Simulation International, San Diego, CA, USA: 75–82.
23.
24.
25. Rabbi, Fazlay; Daley, Christopher S.; Aktulga, Hasan M.; Wright, Nicholas J. (2019). Evaluation of Directive-based GPU Programming Models on a Block Eigensolver with Consideration of Large Sparse Matrices (PDF). Seventh Workshop on Accelerator Programming Using Directives, SC19: The International Conference for High Performance Computing, Networking, Storage and Analysis.
26. RAPIDS cuGraph NVgraph LOBPCG at GitHub.
27. NVIDIA AMGX LOBPCG at GitHub.
28. Rakhuba, Maxim; Novikov, Alexander; Osedelets, Ivan (2019). "Low-rank Riemannian eigensolver for high-dimensional Hamiltonians". Journal of Computational Physics. 396: 718–737. arXiv:1811.11049. Bibcode:2019JCoPh.396..718R. doi:10.1016/j.jcp.2019.07.003.
29. ABINIT Docs: WaveFunction OPTimisation ALGorithm.
30. Octopus Developers Manual: LOBPCG.
31. Yamada, S.; Imamura, T.; Machida, M. (2005). 16.447 TFlops and 159-Billion-dimensional Exact-diagonalization for Trapped Fermion-Hubbard Model on the Earth Simulator. Proc. ACM/IEEE Conference on Supercomputing (SC'05). p. 44. doi:10.1109/SC.2005.1. ISBN 1-59593-061-2.
32. Yamada, S.; Imamura, T.; Kano, T.; Machida, M. (2006). Gordon Bell finalists I—High-performance computing for exact numerical approaches to quantum many-body problems on the earth simulator. Proc. ACM/IEEE Conference on Supercomputing (SC '06). p. 47. doi:10.1145/1188455.1188504. ISBN 0769527000.
33. Yamada, S.; Imamura, T.; Machida, M. (2018). High Performance LOBPCG Method for Solving Multiple Eigenvalues of Hubbard Model: Efficiency of Communication Avoiding Neumann Expansion Preconditioner. Asian Conference on Supercomputing Frontiers. Yokota R., Wu W. (eds) Supercomputing Frontiers. SCFA 2018. Lecture Notes in Computer Science, vol 10776. Springer, Cham. pp. 243–256. doi:10.1007/978-3-319-69953-0_14.
34. Yang, C.; Meza, J. C.; Lee, B.; Wang, L.-W. (2009). "KSSOLV - a MATLAB toolbox for solving the Kohn-Sham equations". ACM Trans. Math. Softw. 36: 1–35. doi:10.1145/1499096.1499099.
35. Fathurrahman, Fadjar; Agusta, Mohammad Kemal; Saputro, Adhitya Gandaryus; Dipojono, Hermawan Kresno (2020). "PWDFT.jl: A Julia package for electronic structure calculation using density functional theory and plane wave basis". Computer Physics Communications. doi:10.1016/j.cpc.2020.107372.
36.
37.
38. Rakhuba, Maxim; Oseledets, Ivan (2016). "Calculating vibrational spectra of molecules using tensor train decomposition". J. Chem. Phys. 145 (12): 124101. arXiv:1605.08422. Bibcode:2016JChPh.145l4101R. doi:10.1063/1.4962420. PMID 27782616.
39. Takano, Yu; Nakata, Kazuto; Yonezawa, Yasushige; Nakamura, Haruki (2016). "Development of massive multilevel molecular dynamics simulation program, platypus (PLATform for dYnamic protein unified simulation), for the elucidation of protein functions". J. Comput. Chem. 37 (12): 1125–1132. doi:10.1002/jcc.24318. PMC 4825406. PMID 26940542.
40. Shao, Meiyue; et al. (2018). "Accelerating Nuclear Configuration Interaction Calculations through a Preconditioned Block Iterative Eigensolver". Computer Physics Communications. 222 (1): 1–13. arXiv:1609.01689. Bibcode:2018CoPhC.222....1S. doi:10.1016/j.cpc.2017.09.004.
41. Kang, Sungwoo; et al. (2020). "ACE-Molecule: An open-source real-space quantum chemistry package". The Journal of Chemical Physics. 152 (12): 124110. doi:10.1063/5.0002959.
42. Baczewski, Andrew David; Brickson, Mitchell Ian; Campbell, Quinn; Jacobson, Noah Tobias; Maurer, Leon (2020-09-01). A Quantum Analog Coprocessor for Correlated Electron Systems Simulation (Report). United States: Sandia National Lab. (SNL-NM). doi:10.2172/1671166. OSTI 1671166.
43. A Survey of Eigen Solution Methods in LS-DYNA®. 15th International LS-DYNA Conference, Detroit. 2018.
44. Knyazev, A.; Malyshev, A. (2015). Accelerated graph-based spectral polynomial filters. 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), Boston, MA. pp. 1–6. arXiv:1509.02468. doi:10.1109/MLSP.2015.7324315.
45. Knyazev, Andrew V. (2003). Boley; Dhillon; Ghosh; Kogan (eds.). Modern preconditioned eigensolvers for spectral image segmentation and graph bisection. Clustering Large Data Sets; Third IEEE International Conference on Data Mining (ICDM 2003) Melbourne, Florida: IEEE Computer Society. pp. 59–62.
46. https://scikit-learn.org/stable/modules/clustering.html#spectral-clustering
47. McQueen, James; et al. (2016). "Megaman: Scalable Manifold Learning in Python". Journal of Machine Learning Research. 17 (148): 1–5. Bibcode:2016JMLR...17..148M.
48.
49.
50. Naumov, Maxim (2016). "Fast Spectral Graph Partitioning on GPUs". NVIDIA Developer Blog.