Biconjugate gradient method

From Wikipedia, the free encyclopedia

In mathematics, more specifically in numerical linear algebra, the biconjugate gradient method is an algorithm to solve systems of linear equations

    A x = b.

Unlike the conjugate gradient method, this algorithm does not require the matrix A to be self-adjoint, but instead one needs to perform multiplications by the conjugate transpose A^*.

The algorithm

  1. Choose an initial guess x_0, two other vectors x_0^* and b^*, and a preconditioner M, and set

       r_0 ← b − A x_0
       r_0^* ← b^* − x_0^* A
       p_0 ← M^-1 r_0
       p_0^* ← r_0^* M^-1

  2. for k = 0, 1, 2, … do

       α_k ← (r_k^* M^-1 r_k) / (p_k^* A p_k)
       x_{k+1} ← x_k + α_k p_k
       x_{k+1}^* ← x_k^* + conj(α_k) p_k^*
       r_{k+1} ← r_k − α_k A p_k
       r_{k+1}^* ← r_k^* − conj(α_k) p_k^* A
       β_k ← (r_{k+1}^* M^-1 r_{k+1}) / (r_k^* M^-1 r_k)
       p_{k+1} ← M^-1 r_{k+1} + β_k p_k
       p_{k+1}^* ← r_{k+1}^* M^-1 + conj(β_k) p_k^*

In the above formulation, the computed r_k and r_k^* satisfy

    r_k = b − A x_k,
    r_k^* = b^* − x_k^* A,

and thus are the respective residuals corresponding to x_k and x_k^*, as approximate solutions to the systems

    A x = b,
    x^* A = b^*;

x^* is the adjoint, and conj(·) is the complex conjugate.
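As a concrete illustration, the preconditioned iteration above can be sketched in NumPy for real matrices (so the complex conjugates drop out), using a Jacobi (diagonal) preconditioner M; the function name bicg, the preconditioner choice, and the small test system are illustrative assumptions, not part of the article.

```python
import numpy as np

def bicg(A, b, b_star, x0, x0_star, M_diag, tol=1e-10, max_iter=100):
    """Sketch of preconditioned BiCG in real arithmetic.

    Solves A x = b together with the dual row-vector system x^* A = b^*.
    M_diag is the diagonal of a Jacobi preconditioner M, so applying
    M^-1 is an elementwise division.
    """
    x, x_star = x0.astype(float), x0_star.astype(float)
    r = b - A @ x                  # r_k = b - A x_k
    r_star = b_star - x_star @ A   # r_k^* = b^* - x_k^* A
    p, p_star = r / M_diag, r_star / M_diag
    rho = r_star @ (r / M_diag)    # r_k^* M^-1 r_k
    for _ in range(max_iter):
        alpha = rho / (p_star @ (A @ p))
        x = x + alpha * p
        x_star = x_star + alpha * p_star
        r = r - alpha * (A @ p)
        r_star = r_star - alpha * (p_star @ A)
        if np.linalg.norm(r) < tol and np.linalg.norm(r_star) < tol:
            break
        rho_new = r_star @ (r / M_diag)
        beta = rho_new / rho
        rho = rho_new
        p = r / M_diag + beta * p
        p_star = r_star / M_diag + beta * p_star
    return x, x_star

# illustrative nonsymmetric, diagonally dominant test system
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x, x_star = bicg(A, b, b_star=b.copy(), x0=np.zeros(3),
                 x0_star=np.zeros(3), M_diag=np.diag(A))
```

Both residual norms are checked before stopping, since the method advances the primal and dual systems simultaneously.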

Unpreconditioned version of the algorithm

  1. Choose an initial guess x_0 and two other vectors x̂_0 and b̂, and set

       r_0 ← b − A x_0
       r̂_0 ← b̂ − x̂_0 A
       p_0 ← r_0
       p̂_0 ← r̂_0

  2. for k = 0, 1, 2, … do

       α_k ← (r̂_k r_k) / (p̂_k A p_k)
       x_{k+1} ← x_k + α_k p_k
       x̂_{k+1} ← x̂_k + α_k p̂_k
       r_{k+1} ← r_k − α_k A p_k
       r̂_{k+1} ← r̂_k − α_k p̂_k A
       β_k ← (r̂_{k+1} r_{k+1}) / (r̂_k r_k)
       p_{k+1} ← r_{k+1} + β_k p_k
       p̂_{k+1} ← r̂_{k+1} + β_k p̂_k
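In the same spirit, the unpreconditioned recurrences can be sketched in NumPy for real matrices (M = I); the hatted row-vector quantities are stored here as column vectors and multiplied by A.T, which is equivalent for real A. The function name, the shadow-vector choice r̂_0 = r_0, and the test data are assumptions for illustration.

```python
import numpy as np

def bicg_no_precond(A, b, x0=None, tol=1e-10, max_iter=None):
    """Sketch of unpreconditioned BiCG in real arithmetic."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x
    r_hat = r.copy()   # common illustrative choice of shadow residual
    p, p_hat = r.copy(), r_hat.copy()
    rho = r_hat @ r
    for _ in range(max_iter or 2 * n):
        alpha = rho / (p_hat @ (A @ p))
        x = x + alpha * p
        r = r - alpha * (A @ p)
        r_hat = r_hat - alpha * (A.T @ p_hat)  # the row update r_hat - alpha p_hat A, transposed
        if np.linalg.norm(r) < tol:
            break
        rho_new = r_hat @ r
        beta = rho_new / rho
        rho = rho_new
        p = r + beta * p
        p_hat = r_hat + beta * p_hat
    return x

# illustrative nonsymmetric, diagonally dominant test system
A = np.array([[3.0, 1.0, 0.0],
              [2.0, 4.0, 1.0],
              [0.0, 2.0, 5.0]])
b = np.array([1.0, 0.0, 2.0])
x = bicg_no_precond(A, b)
```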

Discussion

The biconjugate gradient method is numerically unstable[citation needed] (compare to the biconjugate gradient stabilized method), but very important from a theoretical point of view. Define the iteration steps by

    x_k := x_j + P_k A^-1 (b − A x_j),
    x_k^* := x_j^* + (b^* − x_j^* A) P_k A^-1,

where j < k, using the related projection

    P_k := U_k (V_k^* A U_k)^-1 V_k^* A,

with

    U_k = [u_0, u_1, …, u_{k−1}],
    V_k = [v_0, v_1, …, v_{k−1}].

These related projections may be iterated themselves as

    P_{k+1} = P_k + (1 − P_k) u_k ⊗ (v_k^* A (1 − P_k)) / (v_k^* A (1 − P_k) u_k).

A relation to Quasi-Newton methods is given by P_k = A_k^-1 A and x_{k+1} = x_k − A_{k+1}^-1 (A x_k − b), where

    A_{k+1}^-1 = A_k^-1 + (1 − A_k^-1 A) u_k ⊗ (v_k^* (1 − A A_k^-1)) / (v_k^* A (1 − A_k^-1 A) u_k).

The new directions

    p_k = (1 − P_k) u_k,
    p_k^* = (v_k^* A (1 − P_k) A^-1)^*

are then orthogonal to the residuals:

    v_i^* r_k = p_i^* r_k = 0,
    r_k^* u_j = r_k^* p_j = 0,

which themselves satisfy

    r_k = A (1 − P_k) A^-1 r_j,
    r_k^* = r_j^* A^-1 (1 − P_k) A,

where i, j < k.

The biconjugate gradient method now makes a special choice and uses the setting

    u_k = M^-1 r_k,
    v_k^* = r_k^* M^-1.

With this particular choice, explicit evaluations of P_k and A^-1 are avoided, and the algorithm takes the form stated above.
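The projection formulation can be checked numerically. The sketch below (real arithmetic, M = I, so u_j = r_j and v_j = r̂_j) runs two unpreconditioned BiCG steps, builds P_k = U_k (V_k^T A U_k)^-1 V_k^T A from the stored residuals, and verifies that P_k is a projection and that x_k = x_0 + P_k A^-1 (b − A x_0) reproduces the iterate; the matrix, seed, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned, nonsymmetric
b = rng.standard_normal(n)

# two unpreconditioned BiCG steps, keeping the residual pairs (u_j, v_j)
x = np.zeros(n)
r = b - A @ x
r_hat = r.copy()
p, p_hat = r.copy(), r_hat.copy()
U, V = [r.copy()], [r_hat.copy()]
for _ in range(2):
    rho = r_hat @ r
    alpha = rho / (p_hat @ (A @ p))
    x = x + alpha * p
    r = r - alpha * (A @ p)
    r_hat = r_hat - alpha * (A.T @ p_hat)
    beta = (r_hat @ r) / rho
    p = r + beta * p
    p_hat = r_hat + beta * p_hat
    U.append(r.copy())
    V.append(r_hat.copy())

Uk, Vk = np.column_stack(U[:2]), np.column_stack(V[:2])  # k = 2 columns
S = Vk.T @ A @ Uk
Pk = Uk @ np.linalg.solve(S, Vk.T @ A)    # P_k = U_k S^-1 V_k^T A
x_proj = Pk @ np.linalg.solve(A, b)       # x_0 = 0, so x_k = P_k A^-1 b

print(np.allclose(Pk @ Pk, Pk), np.allclose(x_proj, x))
```

Here A^-1 is applied explicitly via np.linalg.solve for the sake of the check; the point of the special choice in the text is precisely that the iteration avoids this.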

Properties

  • If A = A^* is self-adjoint, x_0^* = conj(x_0) and b^* = conj(b), then r_k = conj(r_k^*), p_k = conj(p_k^*), and the conjugate gradient method produces the same sequence x_k at half the computational cost.
  • The sequences produced by the algorithm are biorthogonal, i.e., p_i^* A p_j = 0 and r_i^* M^-1 r_j = 0 for i ≠ j.
  • If P_{j'} is a polynomial with deg(P_{j'}) + j < k, then r_k^* P_{j'}(M^-1 A) u_j = 0. The algorithm thus produces projections onto the Krylov subspace.
  • If P_{i'} is a polynomial with i + deg(P_{i'}) < k, then v_i^* P_{i'}(A M^-1) r_k = 0.
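The biorthogonality property can be verified numerically with a short script (real arithmetic, M = I, so the conditions read p̂_i A p_j = 0 and r̂_i r_j = 0 for i ≠ j); the matrix, seed, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned, nonsymmetric
b = rng.standard_normal(n)

# unpreconditioned BiCG, collecting both direction and residual sequences
x = np.zeros(n)
r = b - A @ x
r_hat = r.copy()
p, p_hat = r.copy(), r_hat.copy()
P, P_hat = [p.copy()], [p_hat.copy()]
R, R_hat = [r.copy()], [r_hat.copy()]
for _ in range(4):
    rho = r_hat @ r
    alpha = rho / (p_hat @ (A @ p))
    x = x + alpha * p
    r = r - alpha * (A @ p)
    r_hat = r_hat - alpha * (A.T @ p_hat)
    beta = (r_hat @ r) / rho
    p = r + beta * p
    p_hat = r_hat + beta * p_hat
    P.append(p.copy()); P_hat.append(p_hat.copy())
    R.append(r.copy()); R_hat.append(r_hat.copy())

# Gram-like matrices; off-diagonal entries should vanish up to roundoff
G = np.array([[ph @ (A @ q) for q in P] for ph in P_hat])
H = np.array([[rh @ q for q in R] for rh in R_hat])
max_off_G = np.max(np.abs(G - np.diag(np.diag(G))))
max_off_H = np.max(np.abs(H - np.diag(np.diag(H))))
print(max_off_G, max_off_H)
```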
