Scaling of parallel computation? Solver/thread count combinations? |
February 2, 2017, 11:01
Member
Join Date: Jun 2016
Posts: 31
Rep Power: 9
Hi,
I'm currently looking into the parallel scaling of OpenFOAM 4.0 and foam-extend 3.1, running cases on my local machine (i7-6800K, 6C/12T @ 4 GHz; 32 GB DDR4-2666 in quad channel; Windows 7 Ultimate 64-bit, with OpenFOAM in Linux VMs under VirtualBox) on 4, 8 and 12 threads respectively. I've searched a bit about parallel scaling, and I've noticed behaviour that seems strange, at least to me.

After reading https://www.hpc.ntnu.no/display/hpc/...mance+on+Vilje and this PDF http://www.dtic.mil/get-tr-doc/pdf?AD=ADA612337, I was quite confident I'd get a nice, approximately linear speedup on my little processor, but that wasn't the case at all.

I started with a laminar Hagen-Poiseuille pipe flow with about 144k cells and pisoFoam. Twelve threads gave the slowest simulation speed, 8 threads were a little faster, and 4 threads were somewhere in the middle. I figured the case was too small to profit from 12 subdomains, so I then tested a lid-driven cavity flow at Re = 1000, again with pisoFoam, on 1.0e6 cells, i.e. roughly 83.3e3 cells per thread at 12 threads. Interestingly, 12 threads was again the slowest, 8 threads were fastest and 4 threads were in the middle; in foam-extend, 4 threads were actually the fastest. I've also read the following here in the forum:

Quote:
My wall times for the cavity case:

Cavity, 1M cells, GAMG/GAMG solving for p/U (OpenFOAM 4.0):
12 threads: 726 s wall time
8 threads: 576 s
4 threads: 691 s

Cavity, 1M cells, GAMG/GAMG solving for p/U (foam-extend):
12 threads: 1044 s wall time
8 threads: 613 s
4 threads: 592 s

The laminar pipe flow case shows approximately the same bad scaling. What is the cause? I'd appreciate any help.

Oh, I forgot: I use Open MPI and start the cases with "mpirun -np <num_of_threads> foamJob pisoFoam -parallel", which should be correct.
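For completeness, here is a sketch of the decomposition setup behind that command. The scotch method, the 8 subdomains and the extra pre/post steps are assumptions for illustration, not my exact case files:

```shell
# Sketch of the usual OpenFOAM parallel workflow (8 cores and scotch
# decomposition assumed for illustration; adjust to your own case).

# system/decomposeParDict must match the core count passed to mpirun:
#   numberOfSubdomains 8;
#   method             scotch;   // scotch needs no manual direction counts

blockMesh                        # generate the mesh
decomposePar                     # split the case into 8 subdomains
mpirun -np 8 foamJob pisoFoam -parallel
reconstructPar                   # merge the per-processor results afterwards
```

If numberOfSubdomains and the -np value disagree, the run aborts at startup, so that at least should not be silently hurting the timings.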
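To put a number on the "bad scaling": since I have no single-thread baseline, the 4-thread run can serve as the reference. This is plain shell/awk arithmetic on the OpenFOAM 4.0 cavity times above, nothing OpenFOAM-specific:

```shell
#!/bin/sh
# Speedup/efficiency of the OF 4.0 cavity runs relative to the 4-thread run
# (no serial baseline was measured, so 4 threads serves as the reference).
ref_n=4; ref_t=691   # reference: 4 threads, 691 s wall time
for run in "8 576" "12 726"; do
    set -- $run      # $1 = thread count, $2 = wall time [s]
    awk -v rt="$ref_t" -v rn="$ref_n" -v n="$1" -v t="$2" 'BEGIN {
        s = rt / t                 # speedup vs. the 4-thread run
        e = s / (n / rn) * 100     # percentage of ideal linear speedup
        printf "%2d threads: speedup %.2f, efficiency %.0f%%\n", n, s, e
    }'
done
```

That works out to roughly 60% efficiency going from 4 to 8 threads and an outright slowdown at 12, so the runs appear limited by something other than core count; memory bandwidth, hyper-threading (only 6 physical cores) and the VirtualBox layer are the usual suspects.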