Running AMI case in parallel


June 9, 2012, 09:47, post #1
Onno (Kaskade), Senior Member, Germany
Hello,
me again, with a new problem.
I've set up a case involving three AMIs, and it runs fine with MRFSimpleFoam (both on one core and on multiple cores). But when I use the converged solution of the steady-state case as the starting point for the transient simulation, it only runs on one core; in parallel it fails.
When I decompose the case (simple or scotch) and run it in parallel, I get the following error:
Quote:
[1]
[1]
[1] --> FOAM FATAL IO ERROR:
[1] Read 747304 undisplaced points from "/scratch/kaskade/OpenFOAM/kaskade-2.1.0/run/DA013-Netz3KEpsTransientParallel/processor1/constant/polyMesh/points" but the current mesh has 204035
[1]
[1] file: IOstream::solidBodyMotionFvMeshCoeffs from line 0 to line 0.
[1]
[1] From function solidBodyMotionFvMesh::solidBodyMotionFvMesh(const IOobject&)
[1] in file solidBodyMotionFvMesh/solidBodyMotionFvMesh.C at line 86.
[1]
FOAM parallel run exiting
[1]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[2]
[2]
[2] --> FOAM FATAL IO ERROR:
[2] Read 747304 undisplaced points from "/scratch/kaskade/OpenFOAM/kaskade-2.1.0/run/DA013-Netz3KEpsTransientParallel/processor2/constant/polyMesh/points" but the current mesh has 203734
[2]
[2] file: IOstream::solidBodyMotionFvMeshCoeffs from line 0 to line 0.
[2]
[2] From function solidBodyMotionFvMesh::solidBodyMotionFvMesh(const IOobject&)
[2] in file solidBodyMotionFvMesh/solidBodyMotionFvMesh.C at line 86.
[2]
FOAM parallel run exiting
[2]
[0]
[0]
[0] --> FOAM FATAL IO ERROR:
[0] Read 747304 undisplaced points from "/scratch/kaskade/OpenFOAM/kaskade-2.1.0/run/DA013-Netz3KEpsTransientParallel/processor0/constant/polyMesh/points" but the current mesh has 186025
[0]
[0] file: /scratch/kaskade/OpenFOAM/kaskade-2.1.0/run/DA013-Netz3KEpsTransientParallel/processor0/../constant/dynamicMeshDict::solidBodyMotionFvMeshCoeffs from line 24 to line 30.
[0]
[0] From function solidBodyMotionFvMesh::solidBodyMotionFvMesh(const IOobject&)
[0] in file solidBodyMotionFvMesh/solidBodyMotionFvMesh.C at line 86.
[0]
FOAM parallel run exiting
[0]
[3]
[3]
[3] --> FOAM FATAL IO ERROR:
[3] Read 747304 undisplaced points from "/scratch/kaskade/OpenFOAM/kaskade-2.1.0/run/DA013-Netz3KEpsTransientParallel/processor3/constant/polyMesh/points" but the current mesh has 185112
[3]
[3] file: IOstream::solidBodyMotionFvMeshCoeffs from line 0 to line 0.
[3]
[3] From function solidBodyMotionFvMesh::solidBodyMotionFvMesh(const IOobject&)
[3] in file solidBodyMotionFvMesh/solidBodyMotionFvMesh.C at line 86.
[3]
FOAM parallel run exiting
[3]
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 4799 on
node node119 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[node119:04797] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[node119:04797] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
I am using OpenFOAM 2.1.1; apart from the name of the rotating region, my dynamicMeshDict is identical to the one in the propeller tutorial.
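For reference, a sketch of what such a dynamicMeshDict looks like; the cellZone name and the rotation rate below are placeholders, not the actual values from my case:

Code:
dynamicFvMesh       solidBodyMotionFvMesh;

motionSolverLibs    ("libfvMotionSolvers.so");

solidBodyMotionFvMeshCoeffs
{
    // cellZone holding the rotating region (placeholder name)
    cellZone                propellerZone;

    solidBodyMotionFunction rotatingMotion;
    rotatingMotionCoeffs
    {
        CofG            (0 0 0);      // centre of rotation
        radialVelocity  (0 0 158);    // rotation rate in rad/s (placeholder)
    }
}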
Thanks in advance.

June 23, 2012, 06:30, post #2
Onno (Kaskade), Senior Member, Germany
Solution: someone messed up decomposePar in OF 2.1.1. If you decompose the case with the decomposePar from OF 2.1.0, it runs fine under OF 2.1.1.
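A sketch of this workaround as shell commands; the installation paths, core count, and solver name are assumptions that depend on your setup:

Code:
# decompose using the 2.1.0 utility
source $HOME/OpenFOAM/OpenFOAM-2.1.0/etc/bashrc
decomposePar

# switch back to 2.1.1 for the actual run
source $HOME/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc
mpirun -np 4 pimpleDyMFoam -parallel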

November 19, 2013, 03:24, post #3
Onno (Kaskade), Senior Member, Germany
Just for future reference: you encounter this error when you want to start the DyM run from a time != 0. In that case you need to run "decomposePar -constant -time <time>". I think the mesh for a specific time step is transformed from the original mesh, not from the mesh of the previous time step.
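A minimal sketch of that restart workflow; the time value 1000, core count, and solver name are placeholders for your case:

Code:
# clear processor directories from any earlier decomposition
rm -rf processor*

# decompose the mesh (constant) together with the restart time
decomposePar -constant -time 1000

# run the transient solver in parallel from that time
mpirun -np 4 pimpleDyMFoam -parallel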

March 14, 2016, 16:58, post #4
P.A. (blaise), Member, Germany
Hi,

I faced the same problem in OF 2.3 today, and the proposed solution helped me to some extent. The problem seems to be that the faceSet(s) that initially describe the AMI (in constant/polyMesh/sets) are no longer available at later time steps, so you have to recreate them with a suitable topoSet run for the time step you want to restart from. As I use
Code:
singleProcessorFaceSets ((AMIfaces -1));
in decomposeParDict (which was the only way for me to get things running; see the sketch at the end of this post), the faceSet "AMIfaces" must be present. To this end, I use the following topoSetDict:

Code:
actions
(
    // collect all AMI faces into a single faceSet
    {
        name        AMIfaces;
        type        faceSet;
        action      new;
        source      patchToFace;
        sourceInfo
        {
            name    "AMI.*";
        }
    }
);
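A possible invocation, recreating the set for the restart time before decomposing; the time value 1000 is a placeholder:

Code:
topoSet -time 1000
decomposePar -constant -time 1000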
Obviously, you have to name the AMI patches so that they start with the string "AMI". I created them with createPatch using this dictionary:

Code:
pointSync false;

// Patches to create.
patches
(
    {
        // Stator domain
        name AMISHIP;
        // Dictionary to construct new patch from
        patchInfo
        {
            type cyclicAMI;
            matchTolerance 1E-4;
            neighbourPatch AMIPROP;
            transform noOrdering;
        }
        constructFrom patches;
        patches ( PROPZYLSHIP PROPINSHIP PROPOUTSHIP );
    }

    {
        // Rotating domain
        name AMIPROP;
        // Dictionary to construct new patch from
        patchInfo
        {
            type cyclicAMI;
            matchTolerance 1E-4;
            neighbourPatch AMISHIP;
            transform noOrdering;
        }
        constructFrom patches;
        patches ( PROPZYLPROP PROPINPROP PROPOUTPROP );
    }
);
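For what it's worth, I run createPatch with the -overwrite option so that the modified boundary replaces the original mesh in place:

Code:
createPatch -overwrite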
The resulting section of my boundary file is then:

Code:
    AMISHIP
    {
        type            cyclicAMI;
        inGroups        1(cyclicAMI);
        nFaces          20800;
        startFace       3578872;
        matchTolerance  0.0001;
        transform       noOrdering;
        neighbourPatch  AMIPROP;
    }
    AMIPROP
    {
        type            cyclicAMI;
        inGroups        1(cyclicAMI);
        nFaces          5397;
        startFace       3599672;
        matchTolerance  0.0001;
        transform       noOrdering;
        neighbourPatch  AMISHIP;
    }

(It is a ship with a propeller.) I only got things running by creating a single AMI patch in each domain that holds all of that domain's AMI surfaces, as shown above. I don't know whether this is in fact essential.
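As referenced above, a minimal decomposeParDict sketch around the singleProcessorFaceSets entry; the subdomain count and method are assumptions, only the singleProcessorFaceSets line is from my actual setup:

Code:
numberOfSubdomains  4;

method              scotch;

// keep all faces of the faceSet "AMIfaces" on one processor;
// -1 lets the decomposer choose which processor
singleProcessorFaceSets ((AMIfaces -1));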

Hope this might help someone in the future.
