Let $Ax = b$ be a system of $n$ linear equations, where $A$ is a square matrix of size $n$. Then $A$ can be decomposed into a diagonal component $D$, a lower triangular part $L$ and an upper triangular part $U$:

$$A = D + L + U, \qquad D = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}, \qquad L + U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ a_{21} & 0 & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}.$$
The solution is then obtained iteratively via

$$x^{(k+1)} = D^{-1}\left(b - (L + U)\,x^{(k)}\right),$$

where $x^{(k)}$ is the $k$th approximation or iteration of $x$ and $x^{(k+1)}$ is the next or $k+1$ iteration of $x$. The element-based formula is thus:

$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij}\, x_j^{(k)} \right), \qquad i = 1, 2, \ldots, n.$$

The computation of $x_i^{(k+1)}$ requires each element in $x^{(k)}$ except itself. Unlike the Gauss–Seidel method, we can't overwrite $x_i^{(k)}$ with $x_i^{(k+1)}$, as that value will be needed by the rest of the computation. The minimum amount of storage is two vectors of size $n$.
Algorithm
Input: initial guess $x^{(0)}$ to the solution, (diagonally dominant) matrix $A$, right-hand side vector $b$, convergence criterion
Output: solution when convergence is reached
Comments: pseudocode based on the element-based formula above

k = 0
while convergence not reached do
    for i := 1 step until n do
        σ = 0
        for j := 1 step until n do
            if j ≠ i then
                σ = σ + a_ij * x_j^(k)
            end
        end
        x_i^(k+1) = (b_i − σ) / a_ii
    end
    increment k
end
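The pseudocode translates almost line for line into Python. The following is a sketch of our own (the function name jacobi_elementwise and the infinity-norm stopping test are our choices, not part of the pseudocode); it also makes concrete why two vectors of size n must be kept:

import numpy as np

def jacobi_elementwise(A, b, x0, tol=1e-10, max_iterations=500):
    # Element-based Jacobi iteration; x and x_new are separate vectors,
    # since x_i^(k) may not be overwritten before the pass is complete.
    n = len(b)
    x = np.array(x0, dtype=float)
    x_new = np.empty_like(x)
    for _ in range(max_iterations):
        for i in range(n):
            # sum over all j != i, using only values from the previous iterate
            sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new.copy()
        x, x_new = x_new, x  # swap buffers; x_new is overwritten next pass
    return x

For the 2x2 system worked out in the Example section below, jacobi_elementwise(np.array([[2.0, 1.0], [5.0, 7.0]]), np.array([11.0, 13.0]), np.ones(2)) approaches [7.111, -3.222].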
Convergence
The standard convergence condition (for any iterative method) is when the spectral radius of the iteration matrix is less than 1:

$$\rho\left(D^{-1}(L+U)\right) < 1.$$
A sufficient (but not necessary) condition for the method to converge is that the matrix $A$ is strictly or irreducibly diagonally dominant. Strict row diagonal dominance means that for each row, the absolute value of the diagonal term is greater than the sum of absolute values of other terms:

$$\left| a_{ii} \right| > \sum_{j \neq i} \left| a_{ij} \right|.$$
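Both conditions are easy to test numerically. A minimal NumPy sketch, with helper names of our own choosing:

import numpy as np

def is_strictly_diagonally_dominant(A):
    # |a_ii| > sum of |a_ij| over j != i, for every row i
    diag = np.abs(np.diag(A))
    off_diag_sums = np.sum(np.abs(A), axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

def jacobi_spectral_radius(A):
    # spectral radius of the iteration matrix D^{-1}(L + U)
    D = np.diag(np.diag(A))
    LU = A - D
    M = np.linalg.solve(D, LU)  # D^{-1}(L + U)
    return np.max(np.abs(np.linalg.eigvals(M)))

A = np.array([[2.0, 1.0], [5.0, 7.0]])
print(is_strictly_diagonally_dominant(A))  # True
print(jacobi_spectral_radius(A))           # about 0.598, so the method converges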
The Jacobi method sometimes converges even if these conditions are not satisfied.
Note that the Jacobi method does not converge for every symmetric positive-definite matrix.
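For instance (this particular matrix is a stand-in of our own choosing; any matrix with these properties works): for

$$A = \begin{bmatrix} 1 & 0.8 & 0.8 \\ 0.8 & 1 & 0.8 \\ 0.8 & 0.8 & 1 \end{bmatrix},$$

the eigenvalues of $A$ are $2.6, 0.2, 0.2$, so $A$ is symmetric positive definite, yet the Jacobi iteration matrix $-D^{-1}(L+U)$ has eigenvalues $-1.6, 0.8, 0.8$, giving spectral radius $1.6 > 1$, so the iteration diverges.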
Example
A linear system of the form $Ax = b$ with initial estimate $x^{(0)}$ is given by

$$A = \begin{bmatrix} 2 & 1 \\ 5 & 7 \end{bmatrix}, \qquad b = \begin{bmatrix} 11 \\ 13 \end{bmatrix} \qquad \text{and} \qquad x^{(0)} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
We use the equation $x^{(k+1)} = D^{-1}\left(b - (L+U)x^{(k)}\right)$, described above, to estimate $x$. First, we rewrite the equation in a more convenient form $x^{(k+1)} = T x^{(k)} + C$, where $T = -D^{-1}(L+U)$ and $C = D^{-1}b$. From the known values

$$D^{-1} = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/7 \end{bmatrix}, \qquad L = \begin{bmatrix} 0 & 0 \\ 5 & 0 \end{bmatrix}, \qquad U = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},$$

we determine $T = -D^{-1}(L+U)$ as

$$T = \begin{bmatrix} 0 & -1/2 \\ -5/7 & 0 \end{bmatrix}.$$

Further, $C$ is found as

$$C = D^{-1} b = \begin{bmatrix} 11/2 \\ 13/7 \end{bmatrix}.$$

With $T$ and $C$ calculated, we estimate $x$ as $x^{(1)} = T x^{(0)} + C$:

$$x^{(1)} = \begin{bmatrix} 0 & -1/2 \\ -5/7 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 11/2 \\ 13/7 \end{bmatrix} = \begin{bmatrix} 5 \\ 8/7 \end{bmatrix} \approx \begin{bmatrix} 5 \\ 1.143 \end{bmatrix}.$$

The next iteration yields

$$x^{(2)} = \begin{bmatrix} 0 & -1/2 \\ -5/7 & 0 \end{bmatrix} \begin{bmatrix} 5 \\ 8/7 \end{bmatrix} + \begin{bmatrix} 11/2 \\ 13/7 \end{bmatrix} = \begin{bmatrix} 69/14 \\ -12/7 \end{bmatrix} \approx \begin{bmatrix} 4.929 \\ -1.714 \end{bmatrix}.$$

This process is repeated until convergence (i.e., until $\|Ax^{(k)} - b\|$ is small). The solution after 25 iterations is

$$x = \begin{bmatrix} 7.111 \\ -3.222 \end{bmatrix},$$

which approximates the exact solution $x = \begin{bmatrix} 64/9 \\ -29/9 \end{bmatrix}$.
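The hand computation above can be verified with a few lines of NumPy. This sketch simply replays the iteration $x^{(k+1)} = T x^{(k)} + C$; variable names follow the formulas, everything else is standard NumPy:

import numpy as np

A = np.array([[2.0, 1.0], [5.0, 7.0]])
b = np.array([11.0, 13.0])

D_inv = np.diag(1 / np.diag(A))          # D^{-1}
T = -D_inv @ (A - np.diag(np.diag(A)))   # T = -D^{-1}(L + U)
C = D_inv @ b                            # C = D^{-1} b

x = np.array([1.0, 1.0])                 # x^{(0)}
for _ in range(25):
    x = T @ x + C
print(x)  # approximately [ 7.111 -3.222 ]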
Another example
Suppose we are given the following linear system:

$$\begin{aligned}
10x_1 - x_2 + 2x_3 &= 6, \\
-x_1 + 11x_2 - x_3 + 3x_4 &= 25, \\
2x_1 - x_2 + 10x_3 - x_4 &= -11, \\
3x_2 - x_3 + 8x_4 &= 15.
\end{aligned}$$

If we choose $(0, 0, 0, 0)$ as the initial approximation, then the first approximate solution is given by

$$\begin{aligned}
x_1 &= (6 + x_2 - 2x_3)/10 = 0.6, \\
x_2 &= (25 + x_1 + x_3 - 3x_4)/11 = 25/11 \approx 2.2727, \\
x_3 &= (-11 - 2x_1 + x_2 + x_4)/10 = -1.1, \\
x_4 &= (15 - 3x_2 + x_3)/8 = 1.875.
\end{aligned}$$
Using the approximations obtained, the iterative procedure is repeated until the desired accuracy has been reached. The following are the approximated solutions after five iterations.
iteration    x1         x2         x3         x4
1            0.6        2.27272    -1.1       1.875
2            1.04727    1.7159     -0.80522   0.88522
3            0.93263    2.05330    -1.0493    1.13088
4            1.01519    1.95369    -0.9681    0.97384
5            0.98899    2.0114     -1.0102    1.02135
The exact solution of the system is (1, 2, −1, 1).
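The table can be reproduced with a short loop (a sketch of our own, using the matrix form of the iteration from above):

import numpy as np

A = np.array([[10.0, -1.0, 2.0, 0.0],
              [-1.0, 11.0, -1.0, 3.0],
              [2.0, -1.0, 10.0, -1.0],
              [0.0, 3.0, -1.0, 8.0]])
b = np.array([6.0, 25.0, -11.0, 15.0])

D_inv = np.diag(1 / np.diag(A))
LU = A - np.diag(np.diag(A))
x = np.zeros(4)
for k in range(1, 6):
    x = D_inv @ (b - LU @ x)
    print(k, x)  # matches the table row by row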
An example using Python and NumPy
The following numerical procedure simply iterates to produce the solution vector.
import numpy as np

def jacobi(A, b, x_init, epsilon=1e-10, max_iterations=500):
    D = np.diag(np.diag(A))
    LU = A - D
    x = x_init
    for i in range(max_iterations):
        D_inv = np.diag(1 / np.diag(D))
        x_new = np.dot(D_inv, b - np.dot(LU, x))
        if np.linalg.norm(x_new - x) < epsilon:
            return x_new
        x = x_new
    return x

# problem data
A = np.array([
    [5, 2, 1, 1],
    [2, 6, 2, 1],
    [1, 2, 7, 1],
    [1, 1, 2, 8]
])
b = np.array([29, 31, 26, 19])

# you can choose any starting vector
x_init = np.zeros(len(b))

x = jacobi(A, b, x_init)

print("x:", x)
print("computed b:", np.dot(A, x))
print("real b:", b)
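Since every row of this particular $A$ is strictly diagonally dominant (for instance, $5 > 2 + 1 + 1$ in the first row), convergence is guaranteed, and the loop terminates well before the 500-iteration cap.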