
parallel strategy


December 8, 2014, 20:03   #1
jollage
Member, Join Date: Feb 2011, Posts: 41
Hi all,

I'm thinking of writing a DNS code based on finite-difference schemes, parallelized with MPI (OpenMPI). I'm wondering what kind of parallel strategy is best?

December 9, 2014, 15:14   #2
sbaffini (Paolo Lampitella)
Senior Member, Join Date: Mar 2009, Location: Italy, Posts: 2,152
What do you mean by parallel strategy? Once you select OpenMPI (which is actually an MPI implementation... or maybe you meant OpenMP?), there is not much left to choose in the parallel framework.

December 12, 2014, 07:08   #3
jollage
Member, Join Date: Feb 2011, Posts: 41
Quote:
Originally Posted by sbaffini
What do you mean by parallel strategy? Once you select OpenMPI (which is actually an MPI implementation... or maybe you meant OpenMP?), there is not much left to choose in the parallel framework.
Hi sbaffini, thanks for your comments. By parallel strategy I mean the way people partition the computational domain. For example, I have an nx*ny rectangular domain and I use a finite-difference method. The domain is partitioned so that each of the N processors holds an (nx/N)*ny slab when I take derivatives in the y direction, or an nx*(ny/N) slab when I take derivatives in the x direction, so I have to transpose (redistribute) the data across the processors every time I switch direction.

This is what I learned from the code I currently have. The scalability of my code is not good: beyond about 5 processors there is little further gain. I'm wondering if there is another way to do the same thing.
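
For concreteness, here is a minimal sketch of the transpose step I'm describing. This is not my actual code; the grid sizes, the row-major storage, the second-order central difference, and the assumption that nx and ny divide evenly by the number of ranks are only illustrative.

Code:
/* Slab-transpose sketch: each rank owns ny/N full rows of the nx*ny grid
 * (x contiguous), repacks them into N blocks, and MPI_Alltoall turns that
 * into nx/N full columns (y contiguous), so y-derivatives become local.
 * Assumes nx % N == 0 and ny % N == 0.  Compile with mpicc.              */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, N;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &N);

    const int nx = 256, ny = 256;          /* global grid (example sizes)   */
    const int rows = ny / N;               /* rows owned in the x-slab view */
    const int cols = nx / N;               /* cols owned in the y-slab view */
    const double dy = 1.0 / (ny - 1);

    double *u    = calloc((size_t)rows * nx, sizeof *u);     /* x-slab layout */
    double *sbuf = malloc((size_t)rows * nx * sizeof *sbuf);
    double *rbuf = malloc((size_t)rows * nx * sizeof *rbuf);
    double *ut   = malloc((size_t)cols * ny * sizeof *ut);   /* y-slab layout */
    double *dudy = malloc((size_t)cols * ny * sizeof *dudy);

    /* ... fill u with field data here ... */

    /* Pack: block j holds my rows restricted to the x-range owned by rank j. */
    for (int j = 0; j < N; ++j)
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c)
                sbuf[((size_t)j * rows + r) * cols + c] = u[(size_t)r * nx + j * cols + c];

    /* Global transpose: every rank exchanges one block with every other rank. */
    MPI_Alltoall(sbuf, rows * cols, MPI_DOUBLE,
                 rbuf, rows * cols, MPI_DOUBLE, MPI_COMM_WORLD);

    /* Unpack into y-contiguous columns that now span the full height ny. */
    for (int p = 0; p < N; ++p)
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < cols; ++c)
                ut[(size_t)c * ny + p * rows + r] = rbuf[((size_t)p * rows + r) * cols + c];

    /* The y-derivative is now a purely local, contiguous 1-D operation
     * (2nd-order central difference on interior points as an example).  */
    for (int c = 0; c < cols; ++c)
        for (int y = 1; y < ny - 1; ++y)
            dudy[(size_t)c * ny + y] =
                (ut[(size_t)c * ny + y + 1] - ut[(size_t)c * ny + y - 1]) / (2.0 * dy);

    free(u); free(sbuf); free(rbuf); free(ut); free(dudy);
    MPI_Finalize();
    return 0;
}

I suspect the all-to-all nature of this transpose (the whole field crosses the network every time I switch direction) is part of why my scaling flattens out so quickly.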

Thanks.

December 12, 2014, 09:24   #4
mprinkey (Michael Prinkey)
Senior Member, Join Date: Mar 2009, Location: Pittsburgh PA, Posts: 363
If you are using classic finite differences, there is no need to transpose the data to calculate approximate derivatives. You only need to maintain ghost layers of data on each PE sufficient to feed the finite-difference stencil. This is very basic MPI coding. I recommend that you read and thoroughly comprehend the Jacobi iteration examples in the MPI exercises here:

http://www.mcs.anl.gov/research/proj.../contents.html

That example uses Jacobi iteration to solve the 2D difference equations approximating a 2D Laplace equation. The same strategy applies to any finite difference scheme with a 5-point stencil. And the Jacobi iteration code can be trivially turned into the RHS calculation/time-step update for a transient calculation. The extension to 3D is obvious. If you are using high-order finite differences or multi-point upwinding schemes, you may need to add a second or even third ghost layer, but the essential strategy is the same.
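
Something along these lines (a minimal sketch only, not the ANL code itself: it assumes a 1-D strip decomposition in y with one ghost row on each side, zero boundary values, and arbitrary example sizes and iteration count):

Code:
/* Ghost-layer exchange + Jacobi sweep for a 5-point stencil, in the spirit
 * of the ANL MPI exercises.  Each rank owns ny/nprocs interior rows plus
 * two ghost rows.  Assumes ny % nprocs == 0.                              */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int nx = 128, ny = 128;      /* global interior grid (example)    */
    const int nyl = ny / nprocs;       /* interior rows owned by this rank  */
    const int up   = (rank == nprocs - 1) ? MPI_PROC_NULL : rank + 1;
    const int down = (rank == 0)          ? MPI_PROC_NULL : rank - 1;

    /* nyl + 2 rows: row 0 and row nyl+1 are ghost rows (zero-initialized,
     * which here doubles as a homogeneous Dirichlet boundary condition).  */
    double *u    = calloc((size_t)(nyl + 2) * nx, sizeof *u);
    double *unew = calloc((size_t)(nyl + 2) * nx, sizeof *unew);

    for (int iter = 0; iter < 100; ++iter) {
        /* 1. Refresh ghost rows.  MPI_PROC_NULL at the physical top and
         *    bottom turns those exchanges into no-ops automatically.      */
        MPI_Sendrecv(&u[(size_t)nyl * nx],       nx, MPI_DOUBLE, up,   0,
                     &u[0],                      nx, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[(size_t)1 * nx],         nx, MPI_DOUBLE, down, 1,
                     &u[(size_t)(nyl + 1) * nx], nx, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* 2. Jacobi update of the interior using the 5-point stencil; the
         *    same loop structure doubles as an explicit RHS/update step
         *    in a transient finite-difference code.                       */
        for (int j = 1; j <= nyl; ++j)
            for (int i = 1; i < nx - 1; ++i)
                unew[(size_t)j * nx + i] = 0.25 *
                    (u[(size_t)(j - 1) * nx + i] + u[(size_t)(j + 1) * nx + i] +
                     u[(size_t)j * nx + i - 1]   + u[(size_t)j * nx + i + 1]);

        double *tmp = u; u = unew; unew = tmp;   /* swap old/new fields */
    }

    free(u); free(unew);
    MPI_Finalize();
    return 0;
}

Note that the communication per rank is just two rows per exchange, independent of ny, which is why this approach scales far better than transposing the whole field.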

December 13, 2014, 15:14   #5
jollage
Member, Join Date: Feb 2011, Posts: 41
Quote:
Originally Posted by mprinkey
If you are using classic finite differences, there is no need to transpose the data to calculate approximate derivatives. [...]
Thanks a lot, mprinkey, I'll look into it and come back to this thread if I get stuck.
