# Multigrid methods

### From CFD-Wiki


*Latest revision as of 08:46, 5 July 2013*


## Introduction

The convergence rate of standard iterative solvers (Gauss-Seidel, Jacobi, SOR) has a tendency to 'stall', i.e. to fail to reduce the error effectively after the first few iterations. The problem becomes more prominent as the mesh is refined. In fact, standard solvers behave much better on coarse grids! A close inspection of this behavior reveals that the convergence rate is a function of the frequency of the error field, i.e. how rapidly the error varies from node to node. If the error is dominated by high-frequency modes, convergence is fast. However, after the first few iterations, the error field is smoothed out (low frequency) and the convergence rate deteriorates.

For many iterative methods, the number of iterations required to reach a converged solution is linearly proportional to the number of nodes in one direction. This behavior can be traced to the fact that, during the iteration process, information travels only one grid spacing per iteration, whereas for proper convergence the information has to travel back and forth across the domain several times.
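This frequency-dependent behavior is easy to demonstrate numerically. The sketch below (a minimal illustration, assuming a 1D Poisson model problem with a weighted Jacobi smoother; the grid size, damping factor, and sweep count are illustrative choices, not taken from this article) applies ten smoothing sweeps to a smooth and an oscillatory error mode:

```python
import numpy as np

# 1D Poisson: -u'' = 0 with u(0) = u(1) = 0, so the exact solution is zero
# and the iterate itself is the error.
n = 63                                # interior nodes
x = np.arange(1, n + 1) / (n + 1)

def weighted_jacobi(u, sweeps, omega=2.0 / 3.0):
    """Damped Jacobi sweeps for the tridiagonal [-1, 2, -1] operator."""
    for _ in range(sweeps):
        unew = u.copy()
        unew[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:])
        # boundary-adjacent nodes see a zero Dirichlet neighbour
        unew[0] = (1 - omega) * u[0] + omega * 0.5 * u[1]
        unew[-1] = (1 - omega) * u[-1] + omega * 0.5 * u[-2]
        u = unew
    return u

low = np.sin(np.pi * x)               # smooth (low-frequency) error mode
high = np.sin(16 * np.pi * x)         # oscillatory (high-frequency) error mode

for name, e in [("low", low), ("high", high)]:
    reduction = np.max(np.abs(weighted_jacobi(e, 10))) / np.max(np.abs(e))
    # the oscillatory mode is damped far more strongly than the smooth one
    print(name, reduction)
```

After ten sweeps the high-frequency mode has shrunk by almost an order of magnitude, while the smooth mode is barely reduced, which is exactly the stalling behavior described above.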

Since direct matrix inversion is out of the question for realistic problems, and since solvers that rely on conjugate-gradient methods have robustness problems associated with them, the methods of choice are point-implicit solvers such as Gauss-Seidel, Jacobi, and SOR.

To take advantage of this fact, multigrid methods are based on the idea of defining coarser grids on which a low-frequency error is seen as a high-frequency one. This is illustrated in the figure below.

[Figure: Low-High frequency error as seen on fine and coarse grids]

Multigrid methods effectively reduce low-frequency errors, which makes them an ideal complement to standard solvers.

Note: Multigrid is **NOT** a solver. It is a technique used in conjunction with a linear solver to yield a better convergence rate.

## Definitions

In the context of multigrid techniques, some of the most commonly used terms are **Agglomeration**, **Restriction**, and **Prolongation**.

As it was shown in the previous section, all multigrid methods require a definition of a succession of coarse grids, based on the original "fine" grid. The process of defining the coarser grids involves what is called **Agglomeration**, i.e. the combination of several nodes or control volumes or coefficients from the original grid.

A **Restriction** operation is defined as the interpolation method used to transfer the residual from a fine grid to a coarser one.

A **Prolongation** operation is the inverse of restriction and is defined as the interpolation method used to transfer the coarse-grid correction back to the finer grid.
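As a concrete illustration, here is a sketch of the two transfer operators on a 1D grid. The full-weighting stencil [1/4, 1/2, 1/4] for restriction and linear interpolation for prolongation are common textbook choices, assumed here rather than taken from this article:

```python
import numpy as np

def restrict(fine):
    """Full-weighting restriction: 2m+1 fine nodes -> m coarse nodes."""
    return 0.25 * fine[:-2:2] + 0.5 * fine[1:-1:2] + 0.25 * fine[2::2]

def prolongate(coarse):
    """Linear-interpolation prolongation: m coarse nodes -> 2m+1 fine nodes."""
    m = len(coarse)
    fine = np.zeros(2 * m + 1)
    fine[1::2] = coarse                               # coincident nodes copy directly
    fine[2:-1:2] = 0.5 * (coarse[:-1] + coarse[1:])   # midpoints average neighbours
    # first and last fine nodes average against the zero Dirichlet boundary
    fine[0] = 0.5 * coarse[0]
    fine[-1] = 0.5 * coarse[-1]
    return fine
```

Note that `restrict(prolongate(v))` reproduces a smooth vector almost exactly in the interior; only nodes near the boundary differ, which is why these two stencils pair well in practice.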

A typical multigrid cycle starts at the finest level. The fine-level problem is transferred to the next coarser level (restriction). After some relaxation sweeps on that coarse level, it is restricted to the next coarser level, and so on until the coarsest level is reached. The solution obtained at the coarsest level is then interpolated back to the finer level (prolongation). After some relaxation iterations, called post-smoothing sweeps, the solution from this finer level is interpolated to the next finer level, and the prolongation continues until the finest level is reached. The whole process is repeated until satisfactory convergence is reached.
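The cycle described above can be sketched as a minimal two-grid correction scheme (the simplest multigrid cycle, with only one coarse level). The 1D Poisson model problem, the weighted Jacobi smoother, and the sweep counts are illustrative assumptions, not part of the original text:

```python
import numpy as np

def poisson_matrix(n, h):
    """Dense 1D Poisson operator (1/h^2)[-1, 2, -1] with Dirichlet boundaries."""
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, b, u, sweeps, omega=2.0 / 3.0):
    """Weighted Jacobi relaxation: u <- u + omega * D^{-1} (b - A u)."""
    D = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (b - A @ u) / D
    return u

def restrict(v):
    """Full-weighting restriction, 2m+1 -> m nodes."""
    return 0.25 * v[:-2:2] + 0.5 * v[1:-1:2] + 0.25 * v[2::2]

def prolongate(v):
    """Linear-interpolation prolongation, m -> 2m+1 nodes."""
    m = len(v)
    f = np.zeros(2 * m + 1)
    f[1::2] = v
    f[2:-1:2] = 0.5 * (v[:-1] + v[1:])
    f[0], f[-1] = 0.5 * v[0], 0.5 * v[-1]
    return f

def two_grid_cycle(A, Ac, b, u, pre=3, post=3):
    u = jacobi(A, b, u, pre)             # pre-smoothing on the fine grid
    r = b - A @ u                        # fine-grid residual
    rc = restrict(r)                     # restriction to the coarse grid
    ec = np.linalg.solve(Ac, rc)         # exact solve on the coarse grid
    u = u + prolongate(ec)               # prolongated coarse-grid correction
    return jacobi(A, b, u, post)         # post-smoothing sweeps

n = 63
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
A = poisson_matrix(n, h)
Ac = poisson_matrix((n - 1) // 2, 2 * h)
b = np.pi**2 * np.sin(np.pi * x)         # -u'' = f with exact solution sin(pi x)

u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(A, Ac, b, u)
```

A real multigrid solver recurses: instead of solving the coarse problem exactly, it applies the same cycle again on successively coarser levels, which is what produces the familiar V- and W-cycle shapes.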

Subsequent sections will explain how agglomeration, restriction, and prolongation work.

## Classification

Multigrid methods are classified into two branches:

1. **Geometric multigrid (FAS)**
2. **Algebraic multigrid (AMG)**

In the Geometric Multigrid, agglomeration of the nodes (cells, elements, or control volumes) takes place at the geometric level, and a set of new data structures representing the coarse grids has to be constructed for each level. This method is usually difficult to apply with the finite volume method, since the FVM is based on a cell-centered discretization, which makes it difficult to define irregularly shaped control volumes.

On the other hand, the Algebraic Multigrid performs the agglomeration using the coefficient matrix alone (hence the name algebraic). In the AMG method, specially chosen coefficients of the matrix are combined to form a new coefficient matrix representing the coarser grid, and no new discretization is required on the coarse grids.
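The defining algebraic step, forming the coarse operator directly from the matrix via the Galerkin product A_c = R A P, can be illustrated as follows. This is a minimal sketch assuming 1D linear interpolation for P and full weighting R = 0.5 * P^T; a real AMG method also selects the coarse points and interpolation weights from the matrix entries, which is considerably more involved than shown here:

```python
import numpy as np

def interpolation_matrix(m):
    """Linear interpolation P: m coarse nodes -> 2m+1 fine nodes (1D)."""
    P = np.zeros((2 * m + 1, m))
    for j in range(m):
        P[2 * j + 1, j] = 1.0          # coincident fine node
        P[2 * j, j] = 0.5              # fine midpoint below
        P[2 * j + 2, j] = 0.5          # fine midpoint above
    return P

m = 3
n = 2 * m + 1
h = 1.0 / (n + 1)
# fine-grid 1D Poisson operator (1/h^2)[-1, 2, -1]
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

P = interpolation_matrix(m)
R = 0.5 * P.T                          # full weighting as a scaled transpose
Ac = R @ A @ P                         # Galerkin coarse-grid operator
```

For this particular pair of transfer operators, the Galerkin product reproduces exactly the Poisson operator discretized on the coarse grid with spacing 2h, which is a classical sanity check for the construction.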

Agglomeration techniques could vary from algorithm to algorithm. For more details see Multigrid coarsening algorithms.

## External links

- MGnet: an excellent reference for multigrid methods; it contains tutorials, free codes, presentations, and papers.
- Multigrid method article on Wikipedia
