
interFoam parallel


September 30, 2013, 07:25  #1
idefix, Member
Hello everyone,

I am using interFoam and it gets really slow when I run it in parallel.
My mesh has 8.1 million cells, and I have tried 100, 2000 and 4000 processors.
With 100 processors it takes 10 hours to reach the next write interval; the write interval is 10^(-4) seconds of simulated time. With 2000 or 4000 processors I need about 1 min per iteration.

Does anyone have experience with interFoam in parallel?

Thanks a lot

September 30, 2013, 08:57  #2
nimasam (Nima Samkhaniani), Senior Member, Tehran, Iran
It may be related to your matrix solver. Could you please post your fvSolution here?
__________________
My Personal Website (http://nimasamkhaniani.ir/)
Telegram channel (https://t.me/cfd_foam)

September 30, 2013, 09:01  #3
Bernhard, Senior Member, Delft
4000 processors seems a bit excessive for 8M cells. I would assume you did not gain anything by increasing from 2000 to 4000 processors.

How are you solving the pressure equation?

September 30, 2013, 09:45  #4
idefix, Member
Hello,

Thanks for your ideas.

Here is my fvSolution file:

solvers
{
    pcorr
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-10;
        relTol          0;
    }

    p_rgh
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-07;
        relTol          0.05;
    }

    p_rghFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-07;
        relTol          0;
    }

    "(U|k|epsilon)"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-06;
        relTol          0;
    }

    "(U|k|epsilon)Final"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-08;
        relTol          0;
    }
}

PIMPLE
{
    momentumPredictor   no;
    nCorrectors         3;
    nNonOrthogonalCorrectors 0;
    nAlphaCorr          1;
    nAlphaSubCycles     4;
    cAlpha              1; // value was cut off in the original post; 1 is the usual interFoam setting
}

Thanks a lot

October 4, 2013, 05:01  #5
michielm (Michiel), Member, Delft, Netherlands
I think using GAMG for the pressure can help speed up the solution process quite a bit.
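To make that concrete, a GAMG entry for p_rgh in fvSolution could look roughly like the sketch below. This is only an illustration, not taken from the thread: the tolerances copy the ones idefix posted, while the GAMG-specific entries (smoother, nCellsInCoarsestLevel, agglomerator, mergeLevels) are common starting values that would need tuning for the actual case.

    p_rgh
    {
        solver          GAMG;           // geometric-algebraic multigrid instead of PCG
        smoother        DIC;            // symmetric smoother; GaussSeidel is another common choice
        nCellsInCoarsestLevel 100;      // typical starting value, case dependent
        agglomerator    faceAreaPair;
        cacheAgglomeration true;
        mergeLevels     1;
        tolerance       1e-07;          // same tolerances as in the posted fvSolution
        relTol          0.05;
    }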

I also agree with Bernhard that the number of processors you are using is excessive. For an 8M-cell grid, 2000 processors means only about 4000 cells per processor. That might sound nice, but every processor has to communicate with its neighbours, so with that many processors the inter-processor communication is very likely to become the limiting factor, and it can even make the run slower.

You can test the speed-up more systematically by running relatively short simulations (e.g. only a few hundred time steps) with different numbers of processors and seeing how much you gain from adding more. A typical approach is to double the number of processors each time and look at the resulting speed-up: start at e.g. 50, then 100, 200, 400, and so on, as in the sketch below.
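As a rough sketch of such a test (assuming the scotch decomposition method and omitting the usual FoamFile header), the only thing that changes between runs is numberOfSubdomains in system/decomposeParDict, followed by re-decomposing and timing a short parallel run:

    // system/decomposeParDict -- minimal sketch for a scaling test
    numberOfSubdomains  100;        // change to 50, 100, 200, 400, ... between runs

    method              scotch;     // assumption: automatic decomposition, no manual splitting

    // After editing, re-decompose and time a short run, for example:
    //   decomposePar -force
    //   mpirun -np 100 interFoam -parallel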

October 4, 2013, 05:43  #6
Bernhard, Senior Member, Delft
Be careful: GAMG does not necessarily outperform PCG on large parallel cases, see: https://www.hpc.ntnu.no/display/hpc/...lywithpisoFoam

October 11, 2013, 14:32  #7
santiagomarquezd (Santiago Marquez Damian), Senior Member, Santa Fe, Argentina
Hi, it has been suggested in this forum to use ~50k cells per processor, which in your case gives ~160 processors. I think beyond this value the speed-up will start to decrease because of communication times.

Regards.
__________________
Santiago MÁRQUEZ DAMIÁN, Ph.D.
Research Scientist
Research Center for Computational Methods (CIMEC) - CONICET/UNL
Tel: 54-342-4511594 Int. 7032
Colectora Ruta Nac. 168 / Paraje El Pozo
(3000) Santa Fe - Argentina.
http://www.cimec.org.ar

October 23, 2013, 03:35  #8
idefix, Member
Hello,

I tried a lot of configurations, but in my case I need 320 processors to get a "fast" simulation.
With 160 processors the run takes more than twice as long as with 320 processors.
But I still need 5-6 s per complete time step (I measure this as the time between one "Courant Number mean..." line and the next in my output file).
Is it normal that interFoam needs so much time?

Thanks a lot

