December 8, 2014, 20:03 |
parallel strategy
|
#1 |
Member
Join Date: Feb 2011
Posts: 41
Rep Power: 15 |
Hi all,
I'm thinking of writing a DNS code using a finite difference scheme, parallelized with OpenMPI. I am wondering what kind of parallel strategy is best?
|
December 9, 2014, 15:14 |
|
#2 |
Senior Member
|
What do you mean by parallel strategy? Once you select OpenMPI (which is actually an MPI implementation... or maybe you meant OpenMP?), there is not much left to choose in the parallel framework.
|
|
December 12, 2014, 07:08 |
|
#3 | |
Member
Join Date: Feb 2011
Posts: 41
Rep Power: 15 |
Quote:
This is what I learned from the code I currently have. It seems that the scalability of my code is not good (when the number of processors exceeds 5, the gain is not much). I'm wondering if there is any other way to do the same thing. Thanks.
December 12, 2014, 09:24 |
|
#4 |
Senior Member
Michael Prinkey
Join Date: Mar 2009
Location: Pittsburgh PA
Posts: 363
Rep Power: 25 |
If you are using classic finite differences, there is no need to transpose the data to calculate approximate derivatives. You need to maintain ghost layers of data on each PE sufficient to feed the finite difference stencil. This is very basic MPI coding. I recommend that you read and thoroughly comprehend the Jacobi iteration examples in the MPI Exercises here:
http://www.mcs.anl.gov/research/proj.../contents.html That example uses Jacobi iteration to solve the 2D difference equations approximating a 2D Laplace equation. The same strategy applies to any finite difference scheme with a 5-point stencil. And the Jacobi iteration code can be trivially turned into the RHS calculation/time-step update for a transient calculation. The extension to 3D is obvious. If you are using high-order finite differences or multi-point upwinding schemes, you may need to add a second or even third ghost layer, but the essential strategy is the same. |
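For concreteness, here is a minimal sketch of that strategy in C with MPI. It is not the ANL example itself, just the same pattern: a 1D slab decomposition of the rows of a 2D grid, one ghost row exchanged on each side per iteration, and a Jacobi update on the 5-point stencil. The grid size, boundary values, iteration count, and the assumption that NY divides evenly among the ranks are placeholders chosen for illustration.

Code:
/* Illustrative sketch only: 1D slab decomposition of a 2D Jacobi
 * iteration with one ghost row per side. Grid size, boundary values,
 * and iteration count are arbitrary assumptions for the example.    */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define NX 128                     /* interior points in x            */
#define NY 128                     /* interior points in y (global)   */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int nloc = NY / size;          /* rows owned by this rank (assumes NY % size == 0) */
    int lda  = NX + 2;             /* row length including x boundary columns          */
    double *u    = calloc((nloc + 2) * lda, sizeof(double));
    double *unew = calloc((nloc + 2) * lda, sizeof(double));
    #define U(i,j)    u[(i)*lda + (j)]
    #define UNEW(i,j) unew[(i)*lda + (j)]

    int up   = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;
    int down = (rank == 0)        ? MPI_PROC_NULL : rank - 1;

    /* Example Dirichlet condition: u = 1 on the bottom wall, 0 elsewhere. */
    if (rank == 0)
        for (int j = 0; j < lda; j++) U(0, j) = 1.0;

    for (int iter = 0; iter < 1000; iter++) {

        /* Ghost-layer exchange: send top interior row up while receiving
         * the bottom ghost row from below, then the reverse direction.   */
        MPI_Sendrecv(&U(nloc, 0),     lda, MPI_DOUBLE, up,   0,
                     &U(0, 0),        lda, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&U(1, 0),        lda, MPI_DOUBLE, down, 1,
                     &U(nloc + 1, 0), lda, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* Jacobi update on the 5-point stencil, interior points only. */
        for (int i = 1; i <= nloc; i++)
            for (int j = 1; j <= NX; j++)
                UNEW(i, j) = 0.25 * (U(i-1, j) + U(i+1, j)
                                   + U(i, j-1) + U(i, j+1));

        /* Copy the updated interior back; ghost rows are refreshed by the
         * exchange (or stay fixed on the physical walls).                 */
        for (int i = 1; i <= nloc; i++)
            memcpy(&U(i, 0), &UNEW(i, 0), lda * sizeof(double));
    }

    free(u); free(unew);
    MPI_Finalize();
    return 0;
}

For a DNS code the same exchange-then-update loop becomes the RHS evaluation of each time step, and a higher-order stencil simply means exchanging two or three rows per side instead of one, exactly as described above.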
|
December 13, 2014, 15:14 |
|
#5 | |
Member
Join Date: Feb 2011
Posts: 41
Rep Power: 15 |
Quote: