
Running OpenFOAM in a group of MPI_WORLD

August 1, 2016, 13:16   #1
Victor Koppejan (vkoppejan)
Member | Join Date: May 2015 | Posts: 40
Hi everyone,

I'm running simulations with CFDEM, which couples OpenFOAM to a DEM package called LIGGGHTS. In my system the particles (Npart = 5K-1M) need far more CPUs than the CFD mesh does (Ncells < 50K).

As you can imagine, spreading 50K cells over several nodes with 24 CPUs each wastes a lot of time on communication and becomes a bottleneck for any other optimisation. My idea to solve this is to modify the Pstream files and create a smaller MPI group from MPI_COMM_WORLD for OpenFOAM to work with (a rough sketch of what I mean is included below).

I'm going through the code, and the MPI implementation in the source code appears to be confined to the files in the Pstream/mpi folder.

Any thoughts on this, or any stories about how it didn't work? Both are welcome.

Thanks in advance for your feedback.

Cheers,

Victor
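A rough sketch of the plain-MPI side of this idea, independent of the OpenFOAM/CFDEM sources: split MPI_COMM_WORLD so that only a subset of ranks forms the communicator the flow solver would use, leaving the rest free for the DEM side. The names nCfdRanks and cfdComm and the colour values are purely illustrative, not CFDEM or OpenFOAM API; if the MPI calls really are confined to Pstream/mpi as described above, that is where such a communicator would have to stand in for MPI_COMM_WORLD.

Code:
// Minimal sketch, plain MPI only: the first nCfdRanks ranks form a smaller
// communicator intended for the flow solver, the remaining ranks are left
// to the DEM side. nCfdRanks, cfdComm and the colour values are illustrative.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int worldRank = 0, worldSize = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &worldRank);
    MPI_Comm_size(MPI_COMM_WORLD, &worldSize);

    const int nCfdRanks = 4;                              // hypothetical CFD rank count
    const int colour = (worldRank < nCfdRanks) ? 0 : 1;   // 0 = CFD group, 1 = DEM-only

    MPI_Comm cfdComm;
    MPI_Comm_split(MPI_COMM_WORLD, colour, worldRank, &cfdComm);

    int subRank = 0, subSize = 0;
    MPI_Comm_rank(cfdComm, &subRank);
    MPI_Comm_size(cfdComm, &subSize);
    std::printf("world rank %d/%d -> group %d, sub rank %d/%d\n",
                worldRank, worldSize, colour, subRank, subSize);

    MPI_Comm_free(&cfdComm);
    MPI_Finalize();
    return 0;
}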

August 4, 2016, 08:52   #2
Bruno Blais (blais.bruno)
Member | Join Date: Sep 2013 | Location: Canada | Posts: 64
Hey Victor,

I don't have an exact solution to your issue. However, note that all of the processors help compute the interphase coupling (which is sometimes actually more expensive than the CFD itself), so I don't think there is that much to gain here. I suggest using a smarter decomposition on the CFD side (METIS or SCOTCH), which will reduce the cost of the parallel part of the CFD; a minimal decomposeParDict excerpt is sketched below.

Obviously you are using too many processors for 50K cells, but it should not be THAT detrimental to your simulation speed.
Also, be careful: the information of ALL particles is spread to all processors via some sort of broadcast-to-all, so the coupling does not scale that well past 12 processors (from what I have observed).

Cheers!
BB
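A minimal system/decomposeParDict excerpt for the SCOTCH decomposition suggested above might look like the following; the subdomain count is only a placeholder, and the metis method needs an OpenFOAM build with METIS support.

Code:
// system/decomposeParDict (excerpt): switch the CFD decomposition to SCOTCH.
// numberOfSubdomains below is a placeholder value.
numberOfSubdomains  8;

method              scotch;

// or, if OpenFOAM was compiled with METIS support:
// method           metis;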



August 4, 2016, 09:52   #3
Victor Koppejan (vkoppejan)
Member | Join Date: May 2015 | Posts: 40
Hi Bruno,

Thanks for the reply. I hadn't thought of the coupling on the CFD side. I'm using a very simple cylindrical domain, so I don't think SCOTCH will do a lot, but I'll give it a try.

Regarding the scaling past twelve cores, do you mean the coupling? As far as I can see, LIGGGHTS itself scales quite nicely up to 48 cores (I haven't tried more because of the small CFD mesh).

Cheers,

Victor

August 4, 2016, 10:26   #4
Bruno Blais (blais.bruno)
Member | Join Date: Sep 2013 | Location: Canada | Posts: 64
LIGGGHTS scales very well up to hundreds of cores in my experience, but the coupling scales somewhat more poorly. The reason is that LIGGGHTS does not know which particles belong to which CFD processor, so it sends all the particle information to all the processors: if you have 1e6 particles, every processor knows the position and velocity of all 1e6 particles. Obviously this is not efficient. There is a "turbo" model, but I believe it is not public at the moment.

I think this is what is killing your efficiency, more than the resolution of the CFD equations in this case.
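To make that cost concrete, here is a toy sketch of the pattern being described (plain MPI, not the actual CFDEM exchange): every rank owns a slice of particle positions, and an all-gather leaves every rank holding the full list, so the data received per rank grows with the total particle count no matter how many processors are added. The particle count nLocal is made up for the example.

Code:
// Toy sketch, plain MPI: each rank owns a slice of particle positions and an
// all-gather leaves *every* rank with the full list. Bytes received per rank
// ~ 3*Npart*sizeof(double), independent of the number of ranks, which is why
// this step stops scaling.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nLocal = 1000;                              // particles owned by this rank (made up)
    std::vector<double> local(3*nLocal, double(rank));    // x, y, z per particle

    // counts/displacements (in doubles) for the variable-size gather
    std::vector<int> counts(size, 3*nLocal), displs(size, 0);
    for (int i = 1; i < size; ++i) displs[i] = displs[i-1] + counts[i-1];

    std::vector<double> all(displs[size-1] + counts[size-1]);
    MPI_Allgatherv(local.data(), 3*nLocal, MPI_DOUBLE,
                   all.data(), counts.data(), displs.data(), MPI_DOUBLE,
                   MPI_COMM_WORLD);
    // 'all' now holds the positions of every particle on every rank.

    MPI_Finalize();
    return 0;
}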








