Iterative methods


We seek the solution to the linear system of equations

:<math> Ax = b </math>

Iterative methods, unlike direct methods, generate a sequence of approximate solutions to the system that (hopefully) converges to the exact solution. After ''k'' iterations, we obtain an approximation to the exact solution as

:<math> Ax^{(k)} = b - r^{(k)}, </math>

where <math> r^{(k)} </math> is the residual after ''k'' iterations.
Defining

:<math> \varepsilon^{(k)} = x - x^{(k)} </math>

as the difference between the exact and approximate solutions, we obtain

:<math> A\varepsilon^{(k)} = r^{(k)}. </math>

The purpose of the iterations is to drive this residual, and hence the error, to zero.
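To make the relation between the residual and the error concrete, here is a minimal Python/NumPy sketch that verifies <math> A\varepsilon^{(k)} = r^{(k)} </math> numerically; the 3&times;3 matrix, right-hand side, and approximate iterate are made-up illustrative values, not taken from this article.

 import numpy as np
 
 # Illustrative 3x3 system (values chosen arbitrarily for this sketch)
 A = np.array([[ 4.0, -1.0,  0.0],
               [-1.0,  4.0, -1.0],
               [ 0.0, -1.0,  4.0]])
 b = np.array([1.0, 2.0, 3.0])
 
 x_exact = np.linalg.solve(A, b)   # exact solution x
 x_k = np.array([0.2, 0.5, 0.7])   # some approximate solution x^(k)
 
 r_k = b - A @ x_k                 # residual r^(k) = b - A x^(k)
 eps_k = x_exact - x_k             # error eps^(k) = x - x^(k)
 
 # A eps^(k) = r^(k) holds (up to round-off)
 print(np.allclose(A @ eps_k, r_k))  # True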

===Stationary Iterative Methods===

Iterative methods that can be expressed in the simple form

:<math> x^{(k+1)} = Bx^{(k)} + c, </math>

where neither '''B''' nor '''c''' depends upon the iteration count ''k'', are called stationary iterative methods. Some of the stationary iterative methods are

#Jacobi method
#Gauss-Seidel method
#Successive Overrelaxation (SOR) method
#Symmetric Successive Overrelaxation (SSOR) method

The convergence of such iterative methods can be investigated using the [[Fixed point theorem]].
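As an illustration of the stationary form above, the following Python/NumPy sketch implements the Jacobi method, for which <math> B = -D^{-1}(L+U) </math> and <math> c = D^{-1}b </math>, where <math>D</math>, <math>L</math>, and <math>U</math> are the diagonal, strictly lower, and strictly upper triangular parts of <math>A</math>. The example matrix, tolerance, and iteration limit are illustrative assumptions, not part of the original article.

 import numpy as np
 
 def jacobi(A, b, x0, tol=1e-10, max_iter=500):
     """Jacobi iteration written in the stationary form x_{k+1} = B x_k + c."""
     D = np.diag(np.diag(A))            # diagonal part of A
     R = A - D                          # off-diagonal part (L + U)
     D_inv = np.diag(1.0 / np.diag(A))
     B = -D_inv @ R                     # iteration matrix B (fixed for all k)
     c = D_inv @ b                      # constant vector c (fixed for all k)
     x = x0.copy()
     for k in range(max_iter):
         x_new = B @ x + c
         if np.linalg.norm(b - A @ x_new) < tol:   # residual-based stopping test
             return x_new, k + 1
         x = x_new
     return x, max_iter
 
 # Example: strictly diagonally dominant system, so Jacobi converges
 A = np.array([[ 4.0, -1.0,  0.0],
               [-1.0,  4.0, -1.0],
               [ 0.0, -1.0,  4.0]])
 b = np.array([1.0, 2.0, 3.0])
 x, iters = jacobi(A, b, np.zeros(3))
 print(x, iters)

Because <math>B</math> and <math>c</math> are computed once before the loop and never change, the method is stationary; by the fixed point (contraction) argument it converges whenever the spectral radius of <math>B</math> is less than one, which holds for the strictly diagonally dominant matrix used above.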

===Nonstationary Iterative Methods===

When '''B''' and '''c''' change during the iterations, the method is called a nonstationary iterative method. Typically, '''B''' and '''c''' are computed by taking inner products of residuals or other vectors arising from the iterative method.

Some examples are:

#Conjugate Gradient Method (CG)
#MINRES and SYMMLQ
#Generalized Minimal Residual (GMRES)
#BiConjugate Gradient (BiCG)
#Quasi-Minimal Residual (QMR)
#Conjugate Gradient Squared Method (CGS)
#BiConjugate Gradient Stabilized (Bi-CGSTAB)
#Chebyshev Iteration
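As one example from this list, here is a minimal sketch of the unpreconditioned Conjugate Gradient method for a symmetric positive definite <math>A</math>; the step length and search-direction coefficient are recomputed from inner products of residuals at every iteration, which is what makes the method nonstationary. The small test matrix and tolerance below are illustrative assumptions, not part of the original article.

 import numpy as np
 
 def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=1000):
     """Unpreconditioned CG for symmetric positive definite A."""
     x = x0.copy()
     r = b - A @ x                 # initial residual
     p = r.copy()                  # initial search direction
     rs_old = r @ r
     for k in range(max_iter):
         Ap = A @ p
         alpha = rs_old / (p @ Ap)   # step length from inner products (changes each k)
         x += alpha * p
         r -= alpha * Ap
         rs_new = r @ r
         if np.sqrt(rs_new) < tol:
             return x, k + 1
         beta = rs_new / rs_old      # direction-update coefficient (changes each k)
         p = r + beta * p
         rs_old = rs_new
     return x, max_iter
 
 # Example with a small SPD matrix (illustrative values)
 A = np.array([[4.0, 1.0],
               [1.0, 3.0]])
 b = np.array([1.0, 2.0])
 x, iters = conjugate_gradient(A, b, np.zeros(2))
 print(x, iters)

In exact arithmetic CG terminates in at most ''n'' steps for an ''n''&times;''n'' symmetric positive definite matrix, so the 2&times;2 example above converges in at most two iterations.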