CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
OpenFOAM Running, Solving & CFD: AMG vs ICCG
(https://www.cfd-online.com/Forums/openfoam-solving/60583-amg-vs-iccg.html)

eugene June 7, 2005 10:22

A quick question for those in the know.

I have been running a 1.5 million cell simpleFoam calc. Since the mesh is only so-so, I decided to try the AMG solvers for the pressure matrix in the hope that they would smooth out the results in problem cells. For some reason I don't quite understand, the parallel efficiency (as reported in top) for the AMG solver is absolutely atrocious. I mean really dismal (like 10-20% compared to the ICCG's 80+%, and sometimes it drops off the chart on the master processor). I used 15000 cells for the top-level AMG. Have I done something stupid, or might there be a communication/synchronisation problem with the AMG implementation?
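For context, the pressure solver selection in fvSolution is along these lines (a minimal sketch in the old single-line solver syntax, written from memory; the exact argument order is an assumption):

    solvers
    {
        // ICCG: tolerance, relative tolerance
        //p    ICCG    1e-06    0;

        // AMG: tolerance, relative tolerance, cells on coarsest level
        p    AMG    1e-06    0    15000;
    }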

Btw, the AMG solver does seem to improve convergence when running on a single cpu, so it would be good if it was a viable option for parallel calculations.

hjasak June 7, 2005 10:40

I know.

The AMG solver builds a hierarchy of matrices and they talk to each other at each level. If you want to know what it's doing, switch on some debugging and it will talk to you. :-)
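If memory serves, the debugging is switched on through DebugSwitches in the global controlDict; a sketch, with the switch name and level being my assumption:

    DebugSwitches
    {
        // a level of 2 or so makes the linear solvers report residuals
        lduMatrix    2;
    }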

This is where the comms issue comes in: most of the time is spent on coarse matrices (that's the point of AMG), which have fewer cells but still want to talk to their neighbours. The "worst offender", of course, is the very top level.

In order to do this properly, I need to gather all top-level matrices onto a single CPU, make a single system out of them, solve it (no communications) and distribute the solution. This has been planned for a while... The delay in implementation is due to the fact that the lduMatrix code is a bit past its best (if you know what I mean), and the rewrite includes a number of other very nice things: a block solver, templating on matrix coefficient types, elimination of most of the fvMatrix and tetFemMatrix functionality (it moves into lduMatrix), dynamic addressing capability, and the famed op+ for matrices with different lduAddressing.

Once this is done, gathering the matrix to a single CPU will be a doddle (whereas now it would be pretty nasty custom-written, never re-used code). Therefore, I would suggest putting up with the AMG performance pending the matrix rewrite (it is happening + you can have a preview if you're really keen) :-)

As a short-term measure, play around with the number of cells on the top level (see the sketch below). I think 15k per CPU is way too much, but I cannot offer a definitive answer.

One more thing: unlike the ICCG solver, AMG is a "smoother", i.e. it smoothly approaches the solution. As a consequence, you can relax the pressure solution tolerance a LOT (which makes the solver faster). Some people run with a relative tolerance of about 0.1-0.2 (I would go for, say, 0.05); when tuning, aim for 3-4 cycles per solution, and you don't need any more.
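Putting the two tuning suggestions together, the entry would look something like this (same hedged old-style syntax as above; the numbers are purely illustrative):

    solvers
    {
        // tolerance, relaxed relative tolerance, smaller coarsest level
        p    AMG    1e-06    0.05    1000;
    }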

Pls keep me posted, this is pretty interesting,

Hrv

eugene June 7, 2005 14:07

Ok, I ran with 1000 top-level cells per CPU and a relative tolerance of 0.05. It does 4 cycles per step and efficiency is as low as before. It uses 25 seconds or so per timestep, so in an absolute-time sense it is not bad at all, since this is less than twice as much as the ICCG solver, which has a much higher efficiency (80% compared to 20%). Granted, more than half the time is spent outside the pressure solver, so a more detailed comparison is probably warranted.

Unfortunately, I think my mesh is just too bad to get nice convergence (harpoon mesh with no surface layers), or the flow is intrinsically unsteady. The pressure residual stabilises around 0.2 and slowly oscillates around that level, pretty much the same as I got with the ICCG solver. The flow is dominated by many small jets issuing into an enclosed space, and the outlets (inletOutlet) cut through recirculation zones (nothing I can do about it).

