# Jacobi method


## Revision as of 20:47, 15 December 2005

We seek the solution to a set of linear equations:

$A \cdot \Phi = B$

In matrix terms, the Jacobi method can be expressed as:
$\phi^{(k)} = D^{ - 1} \left( {L + U} \right)\phi^{(k - 1)} + D^{ - 1} B$
where the coefficient matrix $A$ is split as $A = D - L - U$, with $D$ the diagonal of $A$, $-L$ its strictly lower triangular part, and $-U$ its strictly upper triangular part, and $k$ is the iteration counter.
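As an illustration (not part of the original article), the matrix form above can be sketched in NumPy. The function name `jacobi_matrix_form`, the iteration count, and the example system are assumptions chosen for the sketch; the splitting follows $A = D - L - U$:

```python
import numpy as np

def jacobi_matrix_form(A, b, x0, num_iters=50):
    # Jacobi iteration in matrix form:
    #   phi^(k) = D^-1 (L + U) phi^(k-1) + D^-1 B
    # using the splitting A = D - L - U, so L + U = D - A.
    D_diag = np.diag(A)
    LU = np.diag(D_diag) - A          # off-diagonal part, negated
    D_inv = np.diag(1.0 / D_diag)     # D is diagonal, so inversion is elementwise
    x = x0.astype(float)
    for _ in range(num_iters):
        x = D_inv @ (LU @ x) + D_inv @ b
    return x

# Hypothetical example: a small diagonally dominant system, for which
# the Jacobi iteration is guaranteed to converge.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([9.0, 12.0])
x = jacobi_matrix_form(A, b, np.zeros(2))
```

Forming $D^{-1}$ explicitly is cheap here because $D$ is diagonal; for large sparse systems one would divide elementwise instead of building dense matrices.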

### Algorithm

Choose an initial guess $\Phi^{(0)}$ to the solution
for k := 1 step 1 until convergence do
for i := 1 step 1 until n do
$\sigma = 0$
for j := 1 step 1 until n do
if j != i then
$\sigma = \sigma + a_{ij} \phi_j^{(k-1)}$
end if
end (j-loop)
$\phi_i^{(k)} = {{\left( {b_i - \sigma } \right)} \over {a_{ii} }}$
end (i-loop)
check if convergence is reached
end (k-loop)
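The pseudocode above can be sketched element by element in Python as follows. The function name `jacobi`, the tolerance, the iteration cap, and the example system are assumptions introduced for illustration:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-10, max_iters=500):
    # Element-wise Jacobi iteration: each component of the new iterate
    # is computed from the previous iterate only.
    n = len(b)
    x_old = x0.astype(float)
    for _ in range(max_iters):
        x_new = np.empty(n)
        for i in range(n):
            # sigma accumulates the off-diagonal terms a_ij * phi_j^(k-1)
            sigma = sum(A[i, j] * x_old[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - sigma) / A[i, i]
        # convergence check: infinity norm of the update
        if np.linalg.norm(x_new - x_old, ord=np.inf) < tol:
            return x_new
        x_old = x_new
    return x_old

# Hypothetical diagonally dominant test system; its exact solution
# is (1, 2, -1).
A = np.array([[10.0, -1.0,  2.0],
              [-1.0, 11.0, -1.0],
              [ 2.0, -1.0, 10.0]])
b = np.array([6.0, 22.0, -10.0])
x = jacobi(A, b, np.zeros(3))
```

Because every $\phi_i^{(k)}$ depends only on $\phi^{(k-1)}$, the inner `i`-loop is trivially parallelizable, which is one practical attraction of the Jacobi method.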

Note: The major difference between the Jacobi method and the Gauss-Seidel method is that the Jacobi method uses only the values of the solution from the previous iteration (here $k-1$), whereas the Gauss-Seidel method uses the latest available values of the solution vector $\Phi$.
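To make the contrast concrete, here is a sketch of a Gauss-Seidel sweep (not part of the original article; the function name and example system are assumptions). Note that the update is applied in place, so for $j < i$ the sum already uses the current iteration's values:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-10, max_iters=500):
    # Gauss-Seidel sweep: x is overwritten in place, so components with
    # index j < i already hold the new iteration's values when row i is
    # processed -- unlike Jacobi, which reads only the previous iterate.
    n = len(b)
    x = x0.astype(float)
    for _ in range(max_iters):
        x_prev = x.copy()
        for i in range(n):
            sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_prev, ord=np.inf) < tol:
            break
    return x

# Hypothetical diagonally dominant example system.
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([9.0, 12.0])
x = gauss_seidel(A, b, np.zeros(2))
```

For many diagonally dominant systems this in-place update converges in fewer sweeps than Jacobi, at the cost of making the inner loop inherently sequential.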