
AMG vs ICCG

#1 - June 7, 2005, 10:22
Eugene de Villiers (eugene), Senior Member
A quick question for those in the know.

I have been running a 1.5 million cell simpleFoam calculation. Since the mesh is only so-so, I decided to try the AMG solver for the pressure matrix in the hope that it would smooth out the results in problem cells. For some reason I don't quite understand, the parallel efficiency (as reported in top) for the AMG solver is absolutely atrocious. I mean really dismal (like 10-20% compared to the ICCG solver's 80+%, and sometimes it drops off the chart on the master processor). I used 15000 cells for the top-level AMG matrix. Have I done something stupid, or might there be a communication/synchronisation problem with the AMG implementation?
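
For reference, the solver selection lives in the case's system/fvSolution dictionary. A minimal sketch using the one-line solver syntax of this OpenFOAM version is below; the tolerances are placeholder values rather than the ones from my case, and the last AMG argument is the number of cells on the coarsest (top) level:

    solvers
    {
        // baseline: incomplete-Cholesky preconditioned CG for the pressure equation
        // p               ICCG 1e-06 0;

        // AMG for pressure, with roughly 15000 cells on the coarsest (top) level
        p               AMG 1e-06 0 15000;
    }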

Btw, the AMG solver does seem to improve convergence when running on a single CPU, so it would be good if it were a viable option for parallel calculations.

#2 - June 7, 2005, 10:40
Hrvoje Jasak (hjasak), Senior Member, London, England
I know.

The AMG solver builds a hierarchy of matrices and they talk to each other at each level. If you want to know what it's doing, switch on some debugging and it will talk to you. :-)
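
(Something along these lines in the DebugSwitches dictionary of the global controlDict should do it; the switch names below are indicative only and may differ between versions, so check them against the actual class names in your installation:)

    DebugSwitches
    {
        // assumed switch names - verify against the class names in your version
        lduMatrix       2;   // extra output from the matrix/solver layer
        AMG             1;   // coarsening and cycle diagnostics from the AMG solver
    }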

This is where the comms issue comes in: most of the time is spent on coarse matrices (that's the point of AMG), which have fewer cells but still want to talk to their neighbours. The "worst offender", of course, is the very top level.

In order to do this properly, I need to gather all top-level matrices onto a single CPU, make a single system out of them, solve it (no communications) and distribute the solution. This has been planned for a while... The delay in implementation is due to the fact that the lduMatrix code is a bit past its best (if you know what I mean) and the rewrite includes a number of other very nice things, e.g. a block solver, templating on matrix coefficient types, elimination of most of fvMatrix and tetFemMatrix functionality (it moves into lduMatrix), dynamic addressing capability and the famed op+ for matrices with different lduAddressing support.

Once this is done, gathering the matrix to a single CPU will be a doddle (whereas now it would be a pretty nasty piece of custom-written, never re-used code). Therefore, I would suggest putting up with the AMG performance pending the matrix rewrite (it is happening, and you can have a preview if you're really keen) :-)

As a short-term measure, play around with the number of cells on the top level. I think 15k per CPU is way too much, but I cannot offer a definitive answer.

One more thing: unlike the ICCG solver, AMG is a "smoother", i.e. it smoothly approaches the solution. As a consequence, you can relax the pressure solution tolerance a LOT (which makes the solver faster). Some people run with a relative tolerance of about 0.1-0.2 (I would go for, say, 0.05); when tuning, make sure you get 3-4 cycles per solve; you shouldn't need any more.
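
In fvSolution terms that suggestion looks roughly like this (a sketch only; the absolute tolerance is a placeholder, the second argument is the relative tolerance, and the last is the coarsest-level cell count, whose value here is purely illustrative):

    solvers
    {
        // relTol 0.05: stop the pressure solve once the residual has dropped
        // by a factor of 20, typically reached within 3-4 AMG cycles;
        // coarsest level much smaller than 15k cells (value illustrative)
        p               AMG 1e-06 0.05 1000;
    }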

Please keep me posted, this is pretty interesting,

Hrv
__________________
Hrvoje Jasak
Providing commercial FOAM/OpenFOAM and CFD Consulting: http://wikki.co.uk

#3 - June 7, 2005, 14:07
Eugene de Villiers (eugene), Senior Member
Ok, I ran with 1000 top-level cells per CPU and a relative tolerance of 0.05. It does 4 cycles per step and efficiency is as low as before. It uses 25 seconds or so per timestep, so in an absolute time sense it is not bad at all, since this is less than twice as much as the ICCG solver, which has a much higher efficiency (80% compared to 20%). Granted, more than half the time is spent outside the pressure solver, so a more detailed comparison is probably warranted.

Unfortunately, I think my mesh is just too bad to get nice convergence (harpoon mesh with no surface layers), or the flow is intrinsically unsteady. The pressure residual stabilises around 0.2 and slowly oscillates around that level, pretty much the same as I got with the ICCG solver. The flow is dominated by many small jets issuing into an enclosed space, and the outlets (inletOutlet) cut through recirculation zones (nothing I can do about it).
