Jacobi method

== Introduction ==
We seek the solution to a set of linear equations: <br>
:<math> A \phi = b </math> <br>
In matrix terms, the definition of the Jacobi method can be expressed as: <br>
:<math>
\phi^{(k+1)}  = D^{-1} \left[ b - \left( L + U \right)\phi^{(k)} \right]
</math><br>
where <math>D</math>, <math>L</math>, and <math>U</math> represent the diagonal, strictly lower triangular, and strictly upper triangular parts of the coefficient matrix <math>A</math>, and <math>k</math> is the iteration count. This matrix expression is mainly of academic interest and is not used to program the method. Rather, an element-based approach is used:
:<math>
\phi^{(k+1)}_i  = \frac{1}{a_{ii}} \left(b_i - \sum_{j\ne i} a_{ij}\phi^{(k)}_j\right),\quad i=1,2,\ldots,n.
</math>
Note that the computation of <math>\phi^{(k+1)}_i</math> requires every element of <math>\phi^{(k)}</math> except <math>\phi^{(k)}_i</math>. Therefore, unlike in the [[Gauss-Seidel method]], we cannot overwrite <math>\phi^{(k)}_i</math> with <math>\phi^{(k+1)}_i</math>, as that value is still needed for the remainder of the sweep. This is the most meaningful difference between the Jacobi and Gauss-Seidel methods. The minimum amount of storage is two vectors of size <math>n</math>, and explicit copying between them is required.
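
As an illustration of this element-based update and of the two-vector storage it requires, here is a minimal sketch in Python with NumPy. The function and variable names (<code>jacobi_sweep</code>, <code>phi_old</code>, <code>phi_new</code>) are chosen for this example only, and the sketch assumes all diagonal entries <math>a_{ii}</math> are nonzero.

<pre>
import numpy as np

def jacobi_sweep(A, b, phi_old):
    """One Jacobi sweep: compute phi^(k+1) from phi^(k).

    phi_old is only read, never overwritten, so a second
    vector phi_new of size n must be allocated.
    """
    n = len(b)
    phi_new = np.empty(n)
    for i in range(n):
        # sigma = sum of a_ij * phi_j^(k) over all j != i
        sigma = A[i, :i] @ phi_old[:i] + A[i, i + 1:] @ phi_old[i + 1:]
        phi_new[i] = (b[i] - sigma) / A[i, i]
    return phi_new
</pre>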
== Algorithm ==
Choose an initial guess <math>\phi^{(0)}</math> to the solution <br>
:    for k := 1 step 1 until convergence do <br>
::      for i := 1 step 1 until n do <br>
:::        <math>\sigma = 0</math> <br>
:::        for j := 1 step 1 until n do <br>
::::          if j != i then <br>
:::::            <math>\sigma = \sigma + a_{ij} \phi_j^{(k-1)}</math> <br>
::::          end if <br>
:::        end (j-loop) <br>
:::        <math>\phi_i^{(k)} = \frac{b_i - \sigma}{a_{ii}}</math> <br>
::      end (i-loop) <br>
::      check if convergence is reached <br>
:    end (k-loop)
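
A direct translation of this algorithm into Python/NumPy might look like the sketch below. The convergence test, the tolerance <code>tol</code>, and the iteration cap <code>max_iter</code> are assumptions made for this example; the pseudocode above does not specify a convergence criterion.

<pre>
import numpy as np

def jacobi(A, b, phi0, tol=1e-10, max_iter=1000):
    """Solve A phi = b with the Jacobi method.

    Stops when the change between successive iterates falls
    below tol in the infinity norm, or after max_iter sweeps.
    """
    phi = np.array(phi0, dtype=float)
    b = np.asarray(b, dtype=float)
    D = np.diag(A)              # diagonal entries a_ii
    R = A - np.diagflat(D)      # off-diagonal part, L + U
    for k in range(max_iter):
        # vectorized form of the element-based update
        phi_new = (b - R @ phi) / D
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new
        phi = phi_new
    return phi
</pre>

For example, for the system <math>4\phi_1 + \phi_2 = 1</math>, <math>\phi_1 + 3\phi_2 = 2</math> with the initial guess <math>\phi^{(0)} = (0,0)</math>, the iteration converges to <math>\phi \approx (1/11,\ 7/11)</math>; convergence is guaranteed here because the coefficient matrix is diagonally dominant.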

'''Note''': The major difference between the Gauss-Seidel method and the Jacobi method is that the Jacobi method uses only the values of the solution from the previous iteration, whereas the Gauss-Seidel method uses the latest available values of the solution vector <math>\phi</math>. <br>
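
For contrast, a sketch of the corresponding Gauss-Seidel sweep is shown below (again with names chosen only for this example). Because the latest values are used as soon as they are computed, the vector can be updated in place and a single vector of storage suffices.

<pre>
import numpy as np

def gauss_seidel_sweep(A, b, phi):
    """One Gauss-Seidel sweep; phi must be a NumPy array of floats.

    phi is overwritten in place: for j < i the entries of phi
    already hold the new values when phi[i] is computed.
    """
    n = len(b)
    for i in range(n):
        sigma = A[i, :i] @ phi[:i] + A[i, i + 1:] @ phi[i + 1:]
        phi[i] = (b[i] - sigma) / A[i, i]
    return phi
</pre>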
----
 
<i> Return to [[Numerical methods | Numerical Methods]] </i>
 
