
OpenFoam parallel scalability

January 10, 2010, 19:31  |  #1
Steve Colde (hpc_benchmark), New Member
Hi, I am testing OpenFOAM scalability on a cluster system (> 100 cores). I found that the test cases from the tutorials are very small and do not scale.

Can anyone provide test cases that scale, or a way to increase the test case size?

Thanks a lot.
Steve

January 11, 2010, 03:52  |  #2
Patricio Bohorquez (pbohorquez), Member, Jaén, Spain
You may try the lid-driven cavity flow tutorial and compare your speed-up results with, for instance, those reported for the Murska and Louhi (Cray XT) servers. You just need to increase the number of cells in the X, Y and Z directions to the desired resolution:

cd ~/OpenFOAM/OpenFOAM-1.5-dev/tutorials/icoFoam/cavity
vim constant/polyMesh/blockMeshDict
edit the resolution in the line "hex (0 1 2 3 4 5 6 7) (20 20 1) simpleGrading (1 1 1)", e.g. change (20 20 1) to (200 200 200) to attain 8M cells
blockMesh
decomposePar
run icoFoam in parallel mode, etc. (see the sketch below)
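
For reference, the edited blocks entry and the corresponding command sequence would look roughly like this (a sketch assuming the standard cavity tutorial layout; the process count is only an example):

// constant/polyMesh/blockMeshDict (only the blocks entry shown)
blocks
(
    hex (0 1 2 3 4 5 6 7) (200 200 200) simpleGrading (1 1 1)   // was (20 20 1)
);

and then, from inside the case directory:

blockMesh
decomposePar
mpirun -np 16 icoFoam -parallel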

See also the results in the ENEA-GRID environment: OpenFOAM parallel performances

January 11, 2010, 12:33  |  #3
Steve Colde (hpc_benchmark), New Member
Thanks Patricio.

But I can't locate cavity/system/decomposeParDict, which is required by decomposePar and by "mpirun -np 16 icoFoam -case cavity -parallel".

January 11, 2010, 13:04  |  #4
Patricio Bohorquez (pbohorquez), Member, Jaén, Spain
You can copy and edit the default file, which is located here:

~/OpenFOAM/OpenFOAM-1.5-dev/applications/utilities/parallelProcessing/decomposePar/decomposeParDict
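
For example (a sketch; adjust the paths to your installation and case):

cp ~/OpenFOAM/OpenFOAM-1.5-dev/applications/utilities/parallelProcessing/decomposePar/decomposeParDict cavity/system/decomposeParDict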

Please note that the decomposition method may strongly affect the speed-up, so you should test which approach is best for your platform. Furthermore, I would suggest playing with the OptimisationSwitches 'floatTransfer' and 'commsType', which are located in the global controlDict (a sketch follows):

~/OpenFOAM/OpenFOAM-1.5-dev/etc/controlDict
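
The relevant entries in that file look roughly like this (a sketch, not the shipped defaults; check your own version):

OptimisationSwitches
{
    // ...
    floatTransfer   0;            // 1 = transfer fields as floats to halve MPI message size
    commsType       nonBlocking;  // blocking | scheduled | nonBlocking
}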

Finally, if you employ a direct solver, you must run 'renumberMesh' in order to reduce the bandwidth of the discretized system of equations.

You will find more information by searching the forum.

Good luck, Patricio.

January 12, 2010, 02:38  |  #5
Steve Colde (hpc_benchmark), New Member
Patricio, how can I change the case to 3D? I think (20 20 1) is two-dimensional, but your (200 200 200) is 3D.

Thanks
Steve

January 12, 2010, 02:45  |  #6
Patricio Bohorquez (pbohorquez), Member, Jaén, Spain
Just replace (20 20 1) with (200 200 200) and edit the boundary conditions at t=0:

- For the pressure: change the frontAndBack patch from empty to zeroGradient.
- For the velocity: change the frontAndBack patch from empty to fixedValue with value uniform (0 0 0) (see the sketch below).
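
A sketch of the edited 0/p and 0/U entries (assuming the standard cavity patch names; the frontAndBack patch type in blockMeshDict must then also be changed from empty to, e.g., wall):

// 0/p
boundaryField
{
    movingWall   { type zeroGradient; }
    fixedWalls   { type zeroGradient; }
    frontAndBack { type zeroGradient; }                        // was: empty
}

// 0/U
boundaryField
{
    movingWall   { type fixedValue; value uniform (1 0 0); }
    fixedWalls   { type fixedValue; value uniform (0 0 0); }
    frontAndBack { type fixedValue; value uniform (0 0 0); }   // was: empty
}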


January 12, 2010, 03:16  |  #7
Steve Colde (hpc_benchmark), New Member
Also, to run the test case on more CPU cores, do I only need to modify the following two entries: numberOfSubdomains and n (4 4 1)?

It seems the execution time becomes even longer as the number of CPUs increases.

numberOfSubdomains 16;

//- Keep owner and neighbour on same processor for faces in zones:
// preserveFaceZones (heater solid1 solid3);

method          simple;
// method       scotch;
// method       hierarchical;
// method       metis;
// method       manual;

simpleCoeffs
{
    n       (4 4 1);
    delta   0.001;
}

January 12, 2010, 03:29  |  #8
Patricio Bohorquez (pbohorquez), Member, Jaén, Spain
Yes, that's all. Have you tried decreasing the number of cores/cells? Maybe the cache is a bottleneck.

Are you using InfiniBand or similar? Could you share the hardware characteristics?

cat /proc/cpuinfo
cat /proc/meminfo
numactl --hardware

January 12, 2010, 12:44  |  #9
Steve Colde (hpc_benchmark), New Member
cat /proc/meminfo
MemTotal: 16508772 kB
MemFree: 11369420 kB
Buffers: 676276 kB
Cached: 3823348 kB

cat /proc/cpuinfo
Quad-Core AMD Opteron(tm) Processor 2382
stepping : 2
cpu MHz : 2600.000
cache size : 512 KB

numactl --hardware
available: 2 nodes (0-1)
node 0 size: 8066 MB
node 0 free: 5250 MB
node 1 size: 8080 MB
node 1 free: 5852 MB
node distances:
node 0 1
0: 10 20
1: 20 10


I am using the cavity case from the tutorials, which is 2D: icoFoam/cavity

1. Changed cavity/constant/polyMesh/blockMeshDict to increase the number of cells:

blocks
(
    hex (0 1 2 3 4 5 6 7) (100 100 1) simpleGrading (1 1 1)
);

2. In cavity/system/decomposeParDict:

numberOfSubdomains 16;
method simple;

simpleCoeffs
{
    n       (4 4 1);
    delta   0.001;
}

mpirun -np 16 -hostfile host-dell icoFoam -case cavity -parallel

January 12, 2010, 14:53  |  #10
Steve Colde (hpc_benchmark), New Member
BTW, I am using the precompiled OpenFOAM-1.6 binaries from the OpenFOAM website.
I don't need to recompile to get reasonable performance, right?

January 13, 2010, 08:08  |  #11
Patricio Bohorquez (pbohorquez), Member, Jaén, Spain
Steve, to check the scaling it is not necessary to recompile OpenFOAM. However, you could compare the runtime with and without optimization options. To recompile:

echo $WM_OPTIONS    (I suppose you will get 'linux64GccDPOpt')
foam                (changes the working directory to ~/OpenFOAM/OpenFOAM-1.6)
cd wmake/rules
edit the files c++Opt and cOpt located in linux64Gcc with the tuning options you prefer (see the sketch below)
foam
./Allwmake
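
As an illustration, the optimised C++ rules might end up looking roughly like this (a sketch; the exact variable names and suitable flags depend on the OpenFOAM version and your compiler):

# wmake/rules/linux64Gcc/c++Opt
c++DBUG     =
c++OPT      = -O3 -march=native    # stock setting is typically plain -O3; extra flags are illustrative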

But you should first obtain the 2D scaling, then the 3D scaling, and finally repeat the process with the optimization flags. In my experience, 2D cases scale much better than 3D, so both benchmarks are useful.

Before running with 16 cores, try running a 1M-cell case on a single node with 1, 2, 4 and 8 processes and see what happens.

January 13, 2010, 15:12  |  #12
Steve Colde (hpc_benchmark), New Member
To use 1M cells in 2D, it should look like this: (1000 1000 1), right?

I also need to modify delta = 0.1/1000.

What else should I change? I am getting an error with this; it works with a smaller configuration (200 200 1):

[The output below is the interleaved stack trace printed by many MPI ranks at once; the common call chain, reconstructed, is:]

#0 Foam::error::printStack(Foam::Ostream&) in "/application/OpenFOAM/OpenFOAM-1.6/lib/linux64GccDPOpt/libOpenFOAM.so"
#1 Foam::sigFpe::sigFpeHandler(int) in "/application/OpenFOAM/OpenFOAM-1.6/lib/linux64GccDPOpt/libOpenFOAM.so"
#2 __restore_rt at sigaction.c:0
#3 Foam::PBiCG::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/application/OpenFOAM/OpenFOAM-1.6/lib/linux64GccDPOpt/libOpenFOAM.so"
#4 Foam::fvMatrix<Foam::Vector<double> >::solve(Foam::dictionary const&) in "/application/OpenFOAM/OpenFOAM-1.6/applications/bin/linux64GccDPOpt/icoFoam"
#5 main in "/application/OpenFOAM/OpenFOAM-1.6/applications/bin/linux64GccDPOpt/icoFoam"
#6 __libc_start_main in "/lib64/libc.so.6"
#7 _start at /usr/src/packages/BUILD/glibc-2.9/csu/../sysdeps/x86_64/elf/start.S:116

January 13, 2010, 15:53  |  #13
Patricio Bohorquez (pbohorquez), Member, Jaén, Spain
Are you getting that error at t=0 or later? It works for me during the first time steps (I stopped it afterwards) on similar hardware with OF-1.6.x. Strange ... theoretically the time step "delta=0.1/1000" is right, but in view of the current error please decrease it down to "delta=0.04/1000" and change the convective scheme in system/fvSchemes from "div(phi,U) Gauss linear;" to "div(phi,U) Gauss upwind;" (a sketch follows). Does it work?
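
In terms of the case files, that amounts to roughly the following (a sketch of the relevant entries only):

// system/controlDict
deltaT          4e-05;          // = 0.04/1000

// system/fvSchemes
divSchemes
{
    // ...
    div(phi,U)      Gauss upwind;   // was: Gauss linear
}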

January 13, 2010, 15:59  |  #14
Steve Colde (hpc_benchmark), New Member
How do I know if it is after t=0?

January 13, 2010, 16:07  |  #15
Steve Colde (hpc_benchmark), New Member
To add more detail: it actually runs for about 15 steps and then fails:

...
...
...
Courant Number mean: 1.22202e+63 max: 7.75973e+67
DILUPBiCG: Solving for Ux, Initial residual = 0.999995, Final residual = 1.16608, No Iterations 1001
DILUPBiCG: Solving for Uy, Initial residual = 0.999996, Final residual = 1.09995, No Iterations 1001
DICPCG: Solving for p, Initial residual = 1, Final residual = 66.4662, No Iterations 1001
time step continuity errors : sum local = 4.02726e+74, global = 3.83607e+58, cumulative = 3.83607e+58
DICPCG: Solving for p, Initial residual = 0.957693, Final residual = 58.9586, No Iterations 1001
time step continuity errors : sum local = 5.18098e+77, global = -5.68607e+60, cumulative = -5.64771e+60
ExecutionTime = 84.67 s ClockTime = 85 s
Time = 0.0014
Courant Number mean: 5.23134e+77 max: 5.91045e+82
DILUPBiCG: Solving for Ux, Initial residual = 0.999997, Final residual = 1.04889, No Iterations 1001
DILUPBiCG: Solving for Uy, Initial residual = 0.999998, Final residual = 1.03628, No Iterations 1001
DICPCG: Solving for p, Initial residual = 1, Final residual = 5.03803, No Iterations 1001
time step continuity errors : sum local = 4.55256e+87, global = -8.81927e+70, cumulative = -8.81927e+70
DICPCG: Solving for p, Initial residual = 0.945851, Final residual = 1.31256, No Iterations 1001
time step continuity errors : sum local = 4.95068e+89, global = -5.33324e+73, cumulative = -5.34206e+73
ExecutionTime = 92.46 s ClockTime = 92 s
Time = 0.0015

January 13, 2010, 16:12  |  #16
Patricio Bohorquez (pbohorquez), Member, Jaén, Spain
Thanks. Now it's clear:

Courant Number mean: 5.23134e+77 max: 5.91045e+82

The simulation crashes because the maximum Courant number blows up; it must lie below 1. Please change the time step and the convective scheme as described above.
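
As a back-of-the-envelope check (assuming the standard cavity set-up with lid velocity U = 1 m/s and domain size 0.1 m): Co = U*deltaT/deltaX. With 1000 cells, deltaX = 0.1/1000 m, so deltaT = 0.1/1000 s gives Co = 1 (borderline), while deltaT = 0.04/1000 s gives Co = 0.4.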

January 13, 2010, 16:18  |  #17
Steve Colde (hpc_benchmark), New Member
I changed deltaT to 0.04/1000 and fvSchemes accordingly, but it still fails at:

Courant Number mean: 1.01438e+28 max: 1.1376e+34
DILUPBiCG: Solving for Ux, Initial residual = 1, Final residual = 6.56472e-06, No Iterations 847
DILUPBiCG: Solving for Uy, Initial residual = 1, Final residual = 7.58524e-06, No Iterations 793
DICPCG: Solving for p, Initial residual = 1, Final residual = 9.52153e-07, No Iterations 36
time step continuity errors : sum local = 1.21105e+47, global = -5.76682e+30, cumulative = -5.76682e+30
DICPCG: Solving for p, Initial residual = 0.986946, Final residual = 2.25487e-10, No Iterations 2
time step continuity errors : sum local = 6.90662e+48, global = 1.29807e+32, cumulative = 1.24041e+32
ExecutionTime = 108.37 s ClockTime = 108 s
Time = 0.00084
Courant Number mean: 2.00307e+48 max: 2.56405e+54
DILUPBiCG: Solving for Ux, Initial residual = 0.999993, Final residual = 21424.5, No Iterations 1001
DILUPBiCG: Solving for Uy, Initial residual = 0.999993, Final residual = 4.22121e+06, No Iterations 1001
DICPCG: Solving for p, Initial residual = 1, Final residual = 5.8586e-09, No Iterations 8
time step continuity errors : sum local = 9.89617e+97, global = 4.42758e+87, cumulative = 4.42758e+87
DICPCG: Solving for p, Initial residual = 0.8704, Final residual = 1.84481e-07, No Iterations 8
time step continuity errors : sum local = 4.71685e+99, global = -2.7024e+88, cumulative = -2.25964e+88
ExecutionTime = 113.58 s ClockTime = 113 s
Time = 0.00088

Courant Number mean: 5.94215e+103 max: 2.46371e+109

January 13, 2010, 16:29  |  #18
Patricio Bohorquez (pbohorquez), Member, Jaén, Spain
OK. Try to converge the solution at every time step:

Increase "nCorrectors" in system/fvSolution up to the number required to obtain an initial residual for 'p' of order 1e-3 at the end of the PISO loop. For instance, start with nCorrectors = 20 (see the sketch below).

January 13, 2010, 16:59  |  #19
Steve Colde (hpc_benchmark), New Member
Looks like it is working with this change.
I am wondering whether this will dramatically impact the scalability/speed-up?

January 13, 2010, 17:02  |  #20
Steve Colde (hpc_benchmark), New Member
Also, can I specify a shorter end time? If I change endTime from 0.5 to 0.0026, is that representative of the complete run in terms of performance?
