
Get the most out of parallel simulations [mpi flags]

September 15, 2016, 05:37
Get the most out of parallel simulations [mpi flags]
  #1
Senior Member
 
Pablo Higuera
Join Date: Jan 2011
Location: Auckland
Posts: 627
Dear all,

lately I have been working with a new computer, and the parallel speedup has been much poorer than I expected. The machine is a dual Intel Xeon E5-2680 v3 (2.5 GHz; 12 physical cores / 24 threads per socket) with 64 GB of RAM (4x16 GB DDR4, 2133 MHz).
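For reference, the socket/core/thread layout of a machine like this can be checked with lstopo from the hwloc package (the same library Open MPI uses for binding) or with numactl; a quick sketch:

Code:
lstopo --no-io          # sockets, cores and hardware threads, without I/O devices
numactl --hardware      # NUMA nodes and the local memory attached to each socket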

To confirm my suspicion I decided to run a parallel performance test, like the one shown here:

https://www.pugetsystems.com/labs/hp...d-Opteron-587/

So basically I prepared a case based on the cavity tutorial, with a 1024x1024 mesh and reduced viscosity, that iterates 100 times (no output to disk); the relevant settings are sketched below. Then I set up a batch of cases with different mpirun flags, each running on its own overnight while the computer was otherwise completely idle.
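In case anyone wants to reproduce it, the settings were roughly along these lines (a sketch, not the verbatim dictionaries):

Code:
// system/controlDict (sketch)
application     simpleFoam;
startFrom       startTime;
startTime       0;
stopAt          endTime;
endTime         100;      // 100 iterations
deltaT          1;
writeControl    timeStep;
writeInterval   1000;     // larger than the number of steps, so essentially no disk output

// system/blockMeshDict, blocks entry for the 1024x1024 mesh:
blocks
(
    hex (0 1 2 3 4 5 6 7) (1024 1024 1) simpleGrading (1 1 1)
);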

See graph attached. The X axis is the number of processes and the Y axis is the speedup.



Legend (the full command lines are sketched after this list):
perfect -> the land of utopia
normal -> mpirun -np X simpleFoam -parallel
bc -> --bind-to core:overload-allowed
bcmc -> --bind-to core:overload-allowed --map-by core
bh -> --bind-to hwthread
bb -> --bind-to board
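A sketch of the overnight batch script (the directory names and process counts are illustrative, not my exact setup):

Code:
#!/bin/bash
# case_$np is a pre-decomposed copy of the cavity-based case
# (decomposePar already run with numberOfSubdomains = $np).
for np in 2 4 8 12 16 20 24; do
    cd case_$np
    mpirun -np $np simpleFoam -parallel > log.normal
    mpirun -np $np --bind-to core:overload-allowed simpleFoam -parallel > log.bc
    mpirun -np $np --bind-to core:overload-allowed --map-by core simpleFoam -parallel > log.bcmc
    mpirun -np $np --bind-to hwthread simpleFoam -parallel > log.bh
    mpirun -np $np --bind-to board simpleFoam -parallel > log.bb
    cd ..
done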

My findings confirm my fears: up to 12 processes the scaling looks excellent, but after that it plateaus. Since the case is not huge, I could accept some degradation in scaling, but not as severe as the graph shows. Furthermore, the author of that benchmark gets almost linear scaling all the way up to 40 cores (on a quad-socket machine...), so I was expecting something similar up to 24.

What is interesting is that adding the flag --bind-to core:overload-allowed increases performance dramatically for large numbers of processes.
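I suppose binding keeps each rank pinned to its core instead of letting the scheduler migrate it between the two sockets. One way to verify where Open MPI actually places the ranks is its --report-bindings flag, which prints each rank's core map at startup (shown here with my flags):

Code:
mpirun -np 24 --bind-to core:overload-allowed --report-bindings simpleFoam -parallel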

Does anyone have any clue about what could be happening, or any thoughts on how to boost parallel performance further, up to the full 24 processes?

Thanks!

Pablo
Attached Images
File Type: png scalability.png (58.2 KB, 64 views)
