
Cores decouple in parallel simulation


November 15, 2018, 15:10   #1
Cores decouple in parallel simulation

New Member
Raúl Urtubia (McClane)
Join Date: Nov 2018
Posts: 1
Hello,

I'm running a two-phase flow simulation through porous media with the impesFoam solver, which is part of a toolbox for OpenFOAM (here is the link in case you are curious or think it may be useful to you too: https://openfoamwiki.net/index.php/C..._3.0_and_newer). To speed things up I decided to run it in parallel with mpirun, and that is where the problem arises: some processors get ahead of others in simulated time and "decouple" (I don't know whether there is a specific term for this phenomenon, hence the quotation marks).

For example, for a simulation with an end time of 20000 s, the processors stopped at:
processor0: 14000 s
processor1: 16400 s
processor2: 19600 s
processor3: 17400 s
processor4: 14200 s
processor5: 20000 s
processor6: 15800 s
processor7: 14400 s

The weird part is that I don't get any error message at all. The simulation simply runs until one processor reaches the end time, and then all the others enter a limbo and stop updating their time folders. I've tried different decomposition methods (scotch, hierarchical and simple), but all of them show the same problem.
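In case it helps, this is how I check how far each rank has advanced, just by listing the newest time folder inside each processor directory (plain shell, run from the case directory):
Quote:
# Print the newest time folder of each processor directory
# to see how far apart the ranks have drifted.
for proc in processor*
do
    echo "$proc: $(ls "$proc" | grep -v constant | sort -g | tail -1)"
done
That is how I produced the numbers listed above.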

This is the run script I use:
Quote:
#!/bin/sh

# Source tutorial run functions
. $WM_PROJECT_DIR/bin/tools/RunFunctions

# case 2 - with gravity

runApplication blockMesh
cp 0/Sb.org 0/Sb
cp constant/K.org constant/K
runApplication setFields
cp constant/K 0/K
decomposePar > log.decomposePar
# Copy the decomposed permeability field into each processor's constant directory
for proc in processor*
do
    cp "$proc"/0/K "$proc"/constant/K
done
mpirun -np 8 impesFoam -parallel > log.impesFoam
reconstructPar > log.reconstructPar
runApplication postProcess -func sampleDict
And the decomposeParDict:
Quote:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  5                                     |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains 8;

method scotch;

simpleCoeffs
{
    n           (2 1 1);
    delta       0.001;
}

hierarchicalCoeffs
{
    n           (4 2 1);
    delta       0.001;
    order       xyz;
}

metisCoeffs
{
    processorWeights
    (
        1
        1
        1
        1
        1
        1
        1
        1
    );
}

manualCoeffs
{
    dataFile    "";
}

distributed no;

roots ( );


// ************************************************************************* //
I had read that mpirun makes each subdomain exchange information with its neighbours to satisfy the processor boundary conditions, so the processors should stay synchronized in time. It therefore makes no sense to me that some processors finish at completely different times. I know the toolbox itself works properly, because I have tested and validated it on several cases using a single core. So I really don't know what is happening. Has anyone encountered this problem before?
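The only mechanism I can think of that would let the ranks drift apart silently is the adjustable time step: if each processor computed its own deltaT from a local maximum instead of a globally reduced one, every rank would advance at its own rate without any MPI error, which matches what I see. For reference, this is the pattern the standard solvers use (a minimal sketch based on CourantNo.H/setDeltaT.H from the OpenFOAM 5 sources; maxCo and maxDeltaT are the usual controlDict entries, and I have not checked whether impesFoam's source does the same):
Quote:
// Inside the solver's time loop. gMax performs a global (all-processor)
// reduction, so every rank computes the same CoNum and hence the same
// deltaT. A plain local max() here would let each rank pick its own
// deltaT and drift apart in time, with no error message.
scalarField sumPhi
(
    fvc::surfaceSum(mag(phi))().primitiveField()
);

// Globally reduced maximum Courant number (identical on every rank)
scalar CoNum =
    0.5*gMax(sumPhi/mesh.V().field())*runTime.deltaTValue();

// Standard adjustment: grow deltaT by at most 20% per step
scalar maxDeltaTFact = maxCo/(CoNum + SMALL);
scalar deltaTFact = min(min(maxDeltaTFact, 1.0 + 0.1*maxDeltaTFact), 1.2);

runTime.setDeltaT
(
    min(deltaTFact*runTime.deltaTValue(), maxDeltaT)
);
If impesFoam derives its deltaT from some other local criterion (e.g. a saturation change), the same idea applies: that quantity would need a gMax/returnReduce before setDeltaT is called so that all ranks agree.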

If anyone thinks the problem lies in some other part of the process, I'm ready to provide any data you think may be relevant.

Thanks in advance


Tags
decouple, parallel





