Scaling of parallel computation? Solver/thread count combinations?

February 2, 2017, 11:01   #1
tdof
Member | Join Date: Jun 2016 | Posts: 31
Hi,

I'm currently looking into how OpenFOAM 4.0 and OpenFOAM Extend 3.1 scale when running cases in parallel on my local machine (i7-6800K, 6C/12T @ 4 GHz, 32 GB DDR4-2666 quad-channel, Windows 7 64-bit Ultimate, OpenFOAM running in Linux VMs under VirtualBox), using 4, 8 and 12 threads. I've read up a bit on parallel scaling, but I'm seeing behaviour that is strange, at least to me. After reading this

https://www.hpc.ntnu.no/display/hpc/...mance+on+Vilje

and this PDF

http://www.dtic.mil/get-tr-doc/pdf?AD=ADA612337

I was quite confident that I'd get an approximately linear speedup on my little processor, but that wasn't the case at all. I started with a laminar Hagen-Poiseuille pipe flow with about 144k cells and pisoFoam: 12 threads gave the slowest simulation speed, 8 threads were a little faster, and 4 threads were somewhere in the middle. I figured the case was too small to profit from 12 domains, so I then tested a lid-driven cavity flow at Re = 1000, again with pisoFoam and 1.0e6 cells, i.e. roughly 83.3e3 cells per thread. Interestingly, 12 threads were again the slowest, 8 threads were the fastest, and 4 threads were somewhere in the middle; in OF Extend, 4 threads were actually the fastest. I've read the following here in the forum:

Quote:
The multigrid solvers (GAMG) are quick (in terms of walltime), but do not scale well in parallel at all. They require around 100k cells/process for the parallel efficiency to be acceptable.
The conjugate gradient solvers (PCG) are slow in terms of walltime, but scale extremely well in parallel. As few as 10k cells/process can be effective.
I've tried GAMG as well as PCG/PBiCG for pressure and velocity, and also mixtures of both. The diagrams from the Vilje cluster even show superlinear speedup with up to 100 parallel processes, so why am I not getting at least an approximately linear speedup with only 12 threads? I've tested the simple and scotch decomposition methods as well as renumberMesh, but that made no difference. Could the virtual machines be the reason, or am I missing something? The scalability apparently also depends on the solver, but I can't imagine the parallel overhead being that bad at such a small level of parallelization.
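For reference, the two pressure/velocity solver setups I've been switching between in fvSolution look roughly like this (just a sketch from memory, not the exact files; tolerances and the smoother may differ slightly, and the pFinal/UFinal and PISO entries are omitted here):

Code:
solvers
{
    // variant A: multigrid for pressure
    p
    {
        solver                GAMG;
        smoother              GaussSeidel;
        nCellsInCoarsestLevel 100;
        agglomerator          faceAreaPair;
        cacheAgglomeration    true;
        mergeLevels           1;
        tolerance             1e-06;
        relTol                0.01;
    }

    // variant B: conjugate gradient for pressure, swapped in instead of the block above
    //p
    //{
    //    solver          PCG;
    //    preconditioner  DIC;
    //    tolerance       1e-06;
    //    relTol          0.01;
    //}

    U
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-05;
        relTol          0;
    }
}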



Cavity 1m cells with GAMG/GAMG solving for p/U:
12 threads: 726s walltime
8 threads: 576s
4 threads: 691s

Cavity 1m cells with GAMG/GAMG solving for p/U, OF Extend:
12 threads: 1044s walltime
8 threads: 613s
4 threads: 592s
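Just to put numbers on how far off linear this is, taking my own 4-thread run as the baseline, the OF 4.0 cavity case gives:

Code:
speedup(8 threads)  = 691 / 576 ≈ 1.20  ->  parallel efficiency ≈ 1.20 / 2 ≈ 60 %
speedup(12 threads) = 691 / 726 ≈ 0.95  ->  parallel efficiency ≈ 0.95 / 3 ≈ 32 %

So going from 4 to 12 threads actually makes the run slower, and OF Extend looks even worse.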

The laminar pipe flow case scales about equally badly. What is the cause? I'd appreciate any help. Oh, I forgot: I use OpenMPI and start the cases with "mpirun -np num_of_threads foamJob pisoFoam -parallel", which should be correct.
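For completeness, the full sequence I run for each case is essentially the following; the solver line is the one above, the rest are the standard decompose/reconstruct steps (thread count adjusted per run):

Code:
# decompose the case into the subdomains set in system/decomposeParDict
decomposePar -force

# optional step I also tried: renumber the mesh to reduce matrix bandwidth
mpirun -np 12 renumberMesh -overwrite -parallel

# run the solver in parallel
mpirun -np 12 foamJob pisoFoam -parallel

# reassemble the decomposed results afterwards
reconstructPar

The decomposeParDict behind that is basically this (sketch; only numberOfSubdomains and, for the simple method, n change between runs):

Code:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  12;      // 8 and 4 for the other runs

method              scotch;  // "simple" was the other method I tried

simpleCoeffs
{
    n               (3 2 2); // only used with method simple; product must match numberOfSubdomains
    delta           0.001;
}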