When solving a system of linear equations Ax = b, due to the compounded accumulation of rounding errors, the computed solution x̂ may sometimes deviate from the exact solution x*. Starting with x1 = x̂, iterative refinement computes a sequence {x1, x2, x3, …} which converges to x* when certain assumptions are met.
The mth iteration of iterative refinement consists of three steps (a minimal code sketch follows the list):
- (i) Compute the residual error rm = b − Axm
- (ii) Solve the system Acm = rm for the correction cm that removes the residual error
- (iii) Add the correction to get the revised next solution xm+1 = xm + cm
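A minimal sketch of one such refinement step, assuming A, b and the current iterate x are NumPy arrays (the function name and setup are illustrative, not part of the algorithm's definition):

```python
import numpy as np

# One step of iterative refinement (illustrative sketch).
def refine_step(A, x, b):
    r = b - A @ x              # step (i): residual of the current iterate
    c = np.linalg.solve(A, r)  # step (ii): correction that removes the residual
    return x + c               # step (iii): revised solution x_{m+1}
```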
The crucial reasoning for the refinement algorithm is that although the solution for cm in step (ii) may indeed be troubled by similar errors as the first solution x1, the calculation of the residual rm in step (i) is, in comparison, numerically nearly exact: you may not know the right answer very well, but you know quite accurately just how far the solution you have in hand is from producing the correct outcome b. If the residual is small in some sense, then the correction must also be small, and should at the very least steer the current estimate of the answer, xm, closer to the desired one, x*.
The iterations will stop on their own when the residual rm is zero, or close enough to zero that the corresponding correction cm is too small to change the solution xm which produced it; alternatively, the algorithm stops when rm is too small to convince the linear algebraist monitoring the progress that it is worth continuing with any further refinements.
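The two stopping tests described above might be written as follows (the tolerance and the function name are illustrative assumptions, not prescribed by the algorithm):

```python
import numpy as np

# Stop when the correction c no longer changes x appreciably, or when the
# residual r is already negligible relative to b (tolerance is illustrative).
def converged(x, c, r, b, tol=1e-12):
    return (np.linalg.norm(c, np.inf) <= tol * np.linalg.norm(x, np.inf)
            or np.linalg.norm(r, np.inf) <= tol * np.linalg.norm(b, np.inf))
```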
Note that the matrix equation solved in step (ii) uses the same matrix A for each iteration. If the matrix equation is solved using a direct method, such as Cholesky or LU decomposition, the numerically expensive factorization of A is done once and reused for the relatively inexpensive forward and back substitutions that solve for cm at each iteration.
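A sketch of this reuse with SciPy's lu_factor and lu_solve (the random test problem and the iteration cap are assumptions made for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n))        # illustrative test matrix
b = rng.standard_normal(n)

lu, piv = lu_factor(A)                 # expensive LU factorization, done once
x = lu_solve((lu, piv), b)             # initial solution x1

for m in range(10):
    r = b - A @ x                      # step (i): residual
    c = lu_solve((lu, piv), r)         # step (ii): cheap forward/back substitution
    x = x + c                          # step (iii): update
    if np.linalg.norm(c, np.inf) <= np.finfo(float).eps * np.linalg.norm(x, np.inf):
        break                          # correction no longer changes x appreciably
```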
As a rule of thumb, iterative refinement for Gaussian elimination produces a solution correct to working precision if double the working precision is used in the computation of r, e.g. by using quad or double extended precision IEEE 754 floating point, and if A is not too ill-conditioned (the number of iterations needed and the rate of convergence are determined by the condition number of A).
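As an illustration of this rule of thumb (a sketch with assumed data, not a prescription): here the working precision is single precision and the residual is accumulated in double precision, i.e. roughly twice the working precision, so the iterates approach the reference solution of the stored single-precision problem to about single-precision accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n)).astype(np.float32)   # working-precision data
b = rng.standard_normal(n).astype(np.float32)

# Reference solution of the stored problem, computed in higher precision.
x_ref = np.linalg.solve(A.astype(np.float64), b.astype(np.float64))

x = np.linalg.solve(A, b)              # initial solve in working precision
for m in range(5):
    # Step (i): residual accumulated in double the working precision.
    r = b.astype(np.float64) - A.astype(np.float64) @ x
    # Steps (ii) and (iii): correction solved and applied in working precision.
    c = np.linalg.solve(A, r.astype(np.float32))
    x = x + c
    print(m, np.linalg.norm(x - x_ref, np.inf) / np.linalg.norm(x_ref, np.inf))
```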
More formally, assuming that each step (ii) can be solved reasonably accurately, i.e., in mathematical terms, for every m we have

A(I + Fm)cm = rm

where ‖Fm‖∞ < 1, the relative error in the mth iterate of iterative refinement satisfies

‖xm − x*‖∞ / ‖x*‖∞ ≤ (σ κ(A) ε1)^m + μ1 ε1 + n μ2 κ(A) ε2

where
- ‖·‖∞ denotes the ∞-norm of a vector,
- κ(A) is the ∞-condition number of A,
- n is the order of A,
- ε1 and ε2 are unit round-offs of floating-point arithmetic operations,
- σ, μ1 and μ2 are constants that depend on A, ε1 and ε2
if A is "not too badly conditioned", which in this context means
and implies that μ1 and μ2 are of order unity.
The distinction between ε1 and ε2 is intended to allow mixed-precision evaluation of rm, where intermediate results are computed with unit round-off ε2 before the final result is rounded (or truncated) with unit round-off ε1. All other computations are assumed to be carried out with unit round-off ε1.
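As a rough numerical check of how the bound above behaves (all values below are illustrative assumptions: σ, μ1 and μ2 taken to be 1, κ(A) ≈ 10^6, n = 1000, double-precision working arithmetic, and a residual evaluated in roughly twice that precision):

```python
# Illustrative evaluation of the error bound with assumed constants.
eps1 = 2.0 ** -53      # unit round-off of the working precision (IEEE 754 double)
eps2 = 2.0 ** -106     # unit round-off for the residual (roughly twice the precision)
kappa, n = 1e6, 1000   # assumed condition number and order of A

for m in (1, 2, 3):
    bound = (kappa * eps1) ** m + eps1 + n * kappa * eps2
    print(m, bound)    # the bound falls to about eps1 within a couple of iterations
```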