Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Running, Solving & CFD

Problem running movingCylinders case in parallel with foam-extend-3.1


July 16, 2014, 16:06   #1
mhkenergy (New Member; Join Date: Jun 2014; Posts: 10)
Hi all,

I am trying to run the movingCylinders case in parallel on 2 processors. The case ships as a tutorial with foam-extend-3.1. As background for those who haven't looked at it: the solver is pimpleDyMFoam, and the case combines a GGI interface with topoChanges functionality. However, I could not get it running no matter what I tried. Below is the error message I get after running the following commands (which the ./Allrun script executes) in sequence. After that you can see my decomposeParDict, in case I have set something wrong. The case is attached in compressed form; all that needs to be done is to run ./Allrun inside the case folder:
Commands that are run in the case directory:
Code:
$ blockMesh
$ setSet -batch setBatchGgi
$ rm -f log.setSet
$ setSet -batch setBatchMotion
$ rm -rf constant/polyMesh/sets/*_old*
$ setsToZones 
$ rm -rf constant/polyMesh/sets/
$ decomposePar 
$ decomposeSets
$ mpirun -np 2 pimpleDyMFoam -parallel
Error message I get:
Code:
[1] --> FOAM FATAL ERROR: 
[1] Face 4420 contains no vertex labels
[1] 
[1]     From function polyMesh::polyMesh::resetPrimitives
(
    const Xfer<pointField>& points,
    const Xfer<faceList>& faces,
    const Xfer<labelList>& owner,
    const Xfer<labelList>& neighbour,
    const labelList& patchSizes,
    const labelList& patchStarts
)
[1] 
[1]     in file meshes/polyMesh/polyMesh.C at line 743.
[1] 
FOAM parallel run aborting
[1] 
[0] 
[0] 
[0] --> FOAM FATAL ERROR: 
[0] Face 4461 contains no vertex labels
[0] 
[0]     From function polyMesh::polyMesh::resetPrimitives
(
    const Xfer<pointField>& points,
    const Xfer<faceList>& faces,
    const Xfer<labelList>& owner,
    const Xfer<labelList>& neighbour,
    const labelList& patchSizes,
    const labelList& patchStarts
)
[0] 
[0]     in file meshes/polyMesh/polyMesh.C at line 743.
[0] 
FOAM parallel run aborting
[0] 
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD 
with errorcode 1
Here's what my decomposeParDict looks like:
Code:
numberOfSubdomains 2;

method          simple;

globalFaceZones
(
    frontInZone
    frontOutZone
    middleInZone
    middleOutZone
    backInZone
    backOutZone
);

simpleCoeffs
{
    n               ( 2 1 1 );
    delta           0.001;
}

hierarchicalCoeffs
{
    n               ( 2 1 1 );
    delta           0.001;
    order           xyz;
}

metisCoeffs
{
    processorWeights ( 2 1 1 );
}

manualCoeffs
{
    dataFile        "";
}

distributed     no;

roots           ( );
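As a side note (a general property of OpenFOAM's geometric decomposition, not a diagnosis of the error above): for the simple method, the product of the n = (nx ny nz) split counts must equal numberOfSubdomains, which the dictionary above does satisfy (2 × 1 × 1 = 2). A minimal illustrative check, with an assumed helper name:

```python
# Illustrative helper (not part of the case): check that the
# simpleCoeffs split counts multiply out to numberOfSubdomains,
# as the "simple" geometric decomposition method requires.
def splits_match(n, number_of_subdomains):
    product = 1
    for count in n:
        product *= count
    return product == number_of_subdomains

# Values from the decomposeParDict above: n = (2 1 1), 2 subdomains.
print(splits_match((2, 1, 1), 2))  # True
```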
I'd really appreciate help with this. Thank you in advance.
Attached Files
File Type: gz myMovingCylinders.tar.gz (29.1 KB, 11 views)
File Type: txt logRunParallel.pimpleDyMFoam.txt (65.3 KB, 7 views)

Last edited by mhkenergy; July 16, 2014 at 16:58. Reason: Attached the case files and the parallel run logfile

September 1, 2014, 16:02   #2
jimteb (Jim; New Member; Join Date: Feb 2014; Location: UK; Posts: 22)
Hi,

Did you manage to get any further with this?

I have set up my own model based closely on the movingCylinders case in foam-extend-3.1 (i.e. a GGI interface with topological changes thrown in). Like you, I am having a lot of trouble getting the model to solve with pimpleDyMFoam in parallel, even though it works fine on one processor.

Have you tried the patchConstrained method, which, as far as I can tell, forces all faces of a patch onto the same processor?

The only documentation I could really find on this was:

http://www.tfd.chalmers.se/~hani/kur...ainingOFW9.pdf

on page 35, which is based on the pimpleDyMFoam/axialTurbine case.

September 2, 2014, 13:43   #3
jimteb
Hi,

So I tried running the movingCylinders case from foam-extend-3.1 in parallel, and I modified the decomposeParDict to use the patchConstrained method rather than plain simple.

My decomposeParDict now looks like this:

Code:
numberOfSubdomains 4;

method          patchConstrained;

globalFaceZones // All GGI face zones go here
(
    frontInZone
    frontOutZone
    middleInZone
    middleOutZone
    backInZone
    backOutZone
);

patchConstrainedCoeffs
{
    method          simple;

    simpleCoeffs
    {
        n               ( 4 1 1 );
        delta           0.001;
    }

    numberOfSubdomains  4;
    patchConstraints
    (
        (frontIn 0)
        (frontOut 0)
        (middleIn 1)
        (middleOut 1)
        (backIn 2)
        (backOut 2)
    );
}

simpleCoeffs
{
    n               ( 4 1 1 );
    delta           0.001;
}

scotchCoeffs
{
    processorWeights ( 1 1 1 1 );
}

distributed     no;

roots           ( );
When I run this case now with pimpleDyMFoam I get the following error:

Code:
Create time

Create dynamic mesh for time = 0

Selecting dynamicFvMesh multiTopoBodyFvMesh
Initializing the GGI interpolator between master/shadow patches: frontIn/frontOut
Initializing the GGI interpolator between master/shadow patches: middleIn/middleOut
Initializing the GGI interpolator between master/shadow patches: backIn/backOut
Selecting solid-body motion function linearOscillation
Moving body frontCyl:
    moving cells: cyl1
    layer faces : 
2
(
topLayerCyl1
botLayerCyl1
)

    invert mask : false

Selecting solid-body motion function linearOscillation
Moving body backCyl:
    moving cells: cyl2
    layer faces : 
2
(
topLayerCyl2
botLayerCyl2
)

    invert mask : false

Time = 0
Adding zones and modifiers to the mesh.  2 bodies found
Copying existing point zones
Adding point, face and cell zones
Creating layering topology modifier topLayerCyl1 on object frontCyl
Creating layering topology modifier botLayerCyl1 on object frontCyl
Creating layering topology modifier topLayerCyl2 on object backCyl
Creating layering topology modifier botLayerCyl2 on object backCyl
Adding topology modifiers.  nModifiers = 4
Initializing the GGI interpolator between master/shadow patches: frontIn/frontOut
Initializing the GGI interpolator between master/shadow patches: middleIn/middleOut
Initializing the GGI interpolator between master/shadow patches: backIn/backOut
Reading field p

Reading field U

Reading/calculating face flux field phi

Selecting incompressible transport model Newtonian
Selecting turbulence model type laminar
Reading field rAU if present


Starting time loop

Courant Number mean: 0.2191859253 max: 1.400519294 velocity magnitude: 1
deltaT = 0.000495049505
Time = 0.000495049505

Executing mesh motion
[2] 
[3] 
[3] 
[3] --> FOAM FATAL ERROR: 
[3] [2] 
[2] --> FOAM FATAL ERROR: 
[2] face 69 area does not match neighbour by 0.972411% -- possible face ordering problem.
patch: procBoundary2to3 my area: 3e-06 neighbour area: 2.97097e-06 matching tolerance: 0.0001
Mesh face: 2291 vertices: 4((0.056 -0.023 -0.0005) (0.056 -0.023 0.0005) (0.056 -0.02 0.0005) (0.056 -0.02 -0.0005))
Rerun with processor debug flag set for more information.
[2] 
[2]     From function processorPolyPatch::calcGeometry()
[2]     in file meshes/polyMesh/polyPatches/constraint/processor/processorPolyPatch.C at line face 69 area does not match neighbour by 0.972411% -- possible face ordering problem.
patch: procBoundary3to2 my area: 2.97097e-06 neighbour area: 3e-06 matching tolerance: 0.0001
Mesh face: 2004 vertices: 4((0.056 -0.023 -0.0005) (0.056 -0.020029 -0.0005) (0.056 -0.020029 0.0005) (0.056 -0.023 0.0005))
Rerun with processor debug flag set for more information.
[3] 
[3]     From function processorPolyPatch::calcGeometry()
[3]     in file meshes/polyMesh/polyPatches/constraint/processor/processorPolyPatch.C at line 217.
[3] 
FOAM parallel run exiting
[3] 
217.
[2] 
FOAM parallel run exiting
[2] 
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD 
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[1] 
[1] 
[1] --> FOAM FATAL ERROR: 
[1] face 23 area does not match neighbour by 0.972411% -- possible face ordering problem.
patch: procBoundary1to2 my area: 3e-06 neighbour area: 2.97097e-06 matching tolerance: 0.0001
Mesh face: 2373 vertices: 4((0.024 -0.023 -0.0005) (0.024 -0.02 -0.0005) (0.024 -0.02 0.0005) (0.024 -0.023 0.0005))
Rerun with processor debug flag set for more information.
[1] 
[1]     From function processorPolyPatch::calcGeometry()
[1]     in file meshes/polyMesh/polyPatches/constraint/processor/processorPolyPatch.C at line 217.
[1] 
FOAM parallel run exiting
[1] 
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 12416 on
node james-pc exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[james-pc:12414] 2 more processes have sent help message help-mpi-api.txt / mpi-abort
[james-pc:12414] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
I have also tried the scotch method instead of simple, which gives a similar error. I'd really appreciate any suggestions on how this error could be resolved.

Thanks in advance.

September 3, 2014, 01:37   #4
mhkenergy
Hi again,

Unfortunately, I still could not find a solution. I e-mailed some people and checked similar tutorial cases, but could not get it running...

April 12, 2016, 04:43   #5
maalan (Member; Join Date: Jun 2011; Posts: 80)
Quote (mhkenergy):
    Unfortunately I still could not find a solution. I e-mailed some people and checked similar tutorial cases, however could not get it running...
Did you ever solve this error?

March 3, 2017, 06:20   #6
JulianG (Julian; New Member; Join Date: Jun 2016; Posts: 4)
Hey guys, I was facing the same problem.

Try deactivating the globalFaceZones entry in your decomposeParDict and listing the GGI patches under preservePatches instead, so it looks something like this:

Code:
numberOfSubdomains  4;

/*
globalFaceZones
(
    frontInZone
    frontOutZone
    middleInZone
    middleOutZone
    backInZone
    backOutZone
);
*/

preservePatches
(
    frontIn
    frontOut
    middleIn
    middleOut
    backIn
    backOut
);

method          simple;

simpleCoeffs
{
    n           (1 4 1);
    delta       0.001;
}


