
 moo June 22, 2006 10:55

Is TDMA used for pressure solver?

Do you use TDMA for the pressure equation?

In my case, the immersed boundary method usually causes a divergence problem when TDMA is used as the pressure solver, so I use the conjugate gradient method instead (FVM, non-staggered grid).

But it is hard to parallelize (the preconditioning) and it is very slow compared to TDMA -- am I wrong??

What do you recommend for pressure solver?

 rt June 22, 2006 13:04

Re: Is TDMA used for pressure solver?

You can use a diagonal preconditioner, which is fully parallel. If the convergence rate is too slow, you can use the GMRES method, which is more robust and converges faster than CG (although it is memory intensive); it is also fully parallel (with a diagonal preconditioner).

Also, if you think your TDMA solver is parallel(!), you can use one or more iterations of it as a preconditioner inside a Krylov-subspace solver such as CG or BiCGSTAB.

But generally the best treatment for convergence problems in elliptic systems (such as the pressure equation) is the application of multigrid methods. Since you deal with a structured grid, implementing multigrid is very easy. I recommend the multigrid-preconditioned conjugate gradient method, MGCG (multigrid as the preconditioner of CG); it is parallel and has a high convergence rate compared with simple multigrid using SOR or Gauss-Seidel relaxation. For more information see the series of papers by O. Tatebe, which are available from: http://phase.hpcc.jp/people/tatebe/r...blication.html. Related source code is also freely available from: http://phase.hpcc.jp/people/tatebe/software/mgcg/
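For illustration only, here is a minimal sketch of a diagonally (Jacobi) preconditioned CG in Python/NumPy, assuming the pressure matrix is symmetric positive definite and, purely for simplicity, stored as a dense array. The function name pcg_diag and the small 1D Poisson-like test matrix are illustrative assumptions, not taken from any of the codes mentioned above.

import numpy as np

def pcg_diag(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner.
    A is assumed symmetric positive definite (e.g. a pressure Poisson matrix)."""
    x = np.zeros_like(b)
    r = b - A @ x                    # initial residual
    Minv = 1.0 / np.diag(A)          # diagonal preconditioner: M^{-1} = diag(A)^{-1}
    z = Minv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # update search direction
        rz = rz_new
    return x

# Usage on a small SPD test matrix (1D Poisson-like)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg_diag(A, b)
print(np.linalg.norm(A @ x - b))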

 Tom June 23, 2006 04:04

Re: Is TDMA used for pressure solver?

Why is it hard to parallelize?

Are you using MPI/MPP or OpenMP?

With OpenMP nearly all of the do loops should parallelize trivially, leaving only the inner products, which can be parallelized using the OpenMP reduction clause.

With MPI/MPP you need to do a little more work (but not much). Basically you need to ensure that each PE's halo is corrected in the subroutines that calculate the matrix-vector product and the preconditioning steps, as well as ensuring that all PEs communicate in the calculation of the inner products.
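As an illustration of the inner-product part only, here is a minimal mpi4py sketch, assuming each PE owns a contiguous slice of the distributed vectors and that ghost/halo entries are excluded from the local sum. The halo exchange needed for the matrix-vector product is problem-specific and not shown.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

def global_dot(u_local, v_local):
    """Inner product of distributed vectors: each PE sums its owned
    entries, then all PEs combine the partial sums with an allreduce."""
    local = float(np.dot(u_local, v_local))   # halo/ghost entries must be excluded here
    return comm.allreduce(local, op=MPI.SUM)

# Example: each rank owns a slice of a global vector of ones
n_local = 100
u = np.ones(n_local)
v = np.ones(n_local)
print(comm.rank, global_dot(u, v))            # = 100 * comm.size on every rank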

 moo June 25, 2006 19:37

Re: Is TDMA used for pressure solver?

Thanks Tom

I use MPI. But what kind of preconditioning do you mean? Please explain in detail. Any references?

 agg June 25, 2006 22:03

Re: Is TDMA used for pressure solver?

A parallel iterative solver library is available from Sandia National Labs. It is called Aztec. It provides several methods, including the conjugate gradient method. http://www.cs.sandia.gov/CRF/aztec1.html

 Tom June 26, 2006 04:32

Re: Is TDMA used for pressure solver?

All I mean by preconditioning is multiplying the problem Ax=b by another matrix M (which must be easy to calculate) to obtain MAx=Mb. If M were the inverse of A this would give you the solution directly. The simplest form of preconditioning is when M contains the reciprocals of the diagonals of A (so that MA has 1's on the diagonal). In practice any M that can be obtained via a fixed-point iterative method will do; e.g. Jacobi, Gauss-Seidel, SOR, ADI, etc.

The main point of preconditioning is that the "combined iterative" scheme converges faster than the component parts.
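As a sketch of that idea, the preconditioner can be applied by running a fixed, small number of Jacobi sweeps on A z = r starting from z = 0. The helper below (Python/NumPy, dense matrix for simplicity) is a hypothetical illustration of the fixed-point-iteration-as-M idea, not anyone's actual code.

import numpy as np

def jacobi_precond(A, r, sweeps=3):
    """Apply an approximate inverse of A to the residual r by running a
    few Jacobi sweeps on A z = r from z = 0; this plays the role of M
    in the combined (preconditioned) iteration."""
    D = np.diag(A)
    z = np.zeros_like(r)
    for _ in range(sweeps):
        z = z + (r - A @ z) / D      # z <- z + D^{-1} (r - A z)
    return z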

You should look at the book "Matrix Computations" by Golub and Van Loan (there is also a book by Yousef Saad, "Iterative Methods for Sparse Linear Systems", which I believe is available online).

The problem with the preconditioner step, if you require bit reproducibility across PEs, is that some preconditioners can be fairly difficult to program in parallel (e.g. ILU or SIP).

Since you already appear to have a TDMA solver (by which I assume you mean ADI), I would use this as your preconditioner.
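For reference, here is a minimal sketch of the Thomas algorithm (TDMA) for a single tridiagonal line in Python/NumPy; in an ADI-style preconditioner this kind of line solve would be applied line by line in each grid direction. The function name and array layout are illustrative assumptions.

import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system with sub-diagonal a,
    diagonal b, super-diagonal c and right-hand side d (all length n;
    a[0] and c[-1] are unused)."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):            # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x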

 niaz May 16, 2010 14:48

parallel

Dear friends,
It is not a hard problem. You can use a simple technique to parallelize all such solvers:
use a red-black (checkerboard) ordering, as in the sketch below.
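A minimal sketch of one red-black Gauss-Seidel sweep for a 2D Poisson equation on a uniform grid in Python/NumPy (illustrative only; boundary values are held fixed). Every point of one colour depends only on points of the other colour, so each half-sweep can be updated fully in parallel or, as here, as one vectorized array operation.

import numpy as np

def red_black_sweep(p, rhs, h):
    """One red-black Gauss-Seidel sweep for laplacian(p) = rhs on a
    uniform grid with spacing h; interior points are split by colour."""
    ny, nx = p.shape
    I, J = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    for colour in (0, 1):
        mask = ((I + J) % 2 == colour)
        mask[0, :] = mask[-1, :] = False      # keep boundary rows fixed
        mask[:, 0] = mask[:, -1] = False      # keep boundary columns fixed
        # Gauss-Seidel update of all points of this colour at once
        p_new = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                        + np.roll(p, 1, 1) + np.roll(p, -1, 1)
                        - h * h * rhs)
        p[mask] = p_new[mask]
    return p

# Usage: zero Dirichlet box, a few hundred sweeps
n = 65
p = np.zeros((n, n))
rhs = np.ones((n, n))
for _ in range(200):
    red_black_sweep(p, rhs, 1.0 / (n - 1))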
