CFD Online Discussion Forums - OpenFOAM Bugs
interDyMFoam parallel bug? (https://www.cfd-online.com/Forums/openfoam-bugs/63128-interdymfoam-parallel-bug.html)

nikos_fb16 March 30, 2009 07:32

interDyMFoam parallel bug?
 
Hello Mattijs,

here is the uploaded test case (attached).

And here is a link to refresh what this thread is about:

http://www.cfd-online.com/Forums/ope...-parallel.html

In the test case I have put a README file summarising the cases in which the error occurs. I found that the error in parallel runs appears and disappears depending on the domain decomposition.

The test case is a Rayleigh-Taylor instability.

Thank you

Nikos

mattijs March 31, 2009 07:07

Hi Nikos,

Thanks for reporting. In the 1.5 series the pressure reference cell still has to be on the master. In interDyMFoam the reference is set by its location and in your case that location was on the second processor. It runs fine if you change the location to be on the master processor. I've pushed a change to 1.5.x interDyMFoam so at least it will tell you if the reference cell is not on the master.

sega March 31, 2009 07:21

Hi mattijs.

Thanks for your response. But how do you determine whether a cell is on the master processor?

As the decomposition in this case is done in the x-direction (splitting with a yz-plane), can one assume that cell number 0 lies in the first part of the split domain and hence on the first (master) processor?

nikos_fb16 March 31, 2009 07:56

Hi Mattijs,

thanks for the answer and the change in 1.5.x - I'm compiling.
What I did now is:

After the domain decomposition I looked at the cell labels in "processor0/constant/polyMesh/cellProcAddressing" and chose one of them as the pdRefCell entry in my fvSolution file.

The error still exists. Was my approach too naive?
(I have already tried different pdRefCell values at random, but the error always appears. That goes against a fifty-fifty chance of hitting a proper cell :))
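
(For reference: cellProcAddressing is the addressing file written by decomposePar; for every local cell on processor0 it stores the label of the corresponding cell in the undecomposed mesh, so any label listed there belongs to the master. Just as a sketch, assuming the case has already been decomposed, it can be inspected with:)

Code:

  # per-processor addressing written by decomposePar; every global cell
  # label listed here is owned by processor0 (the master)
  less processor0/constant/polyMesh/cellProcAddressing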

mattijs March 31, 2009 08:19

interDyMFoam does not use pRefCell - it uses pRefProbe, which is a location.
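
That is, the reference is given as a location rather than a cell label. Just as an illustration (the exact keyword name is version-dependent and assumed here; the 1.5 pd-based solver reads the probe location mentioned above, while later versions read pRefPoint/pRefValue from the fvSolution PISO/PIMPLE sub-dictionary), the entry looks roughly like:

Code:

  PISO
  {
      // sketch only: the keyword name varies between OpenFOAM versions;
      // pick a point that lies inside the master processor's sub-domain
      pdRefPoint      (0.01 0.01 0.005);
      pdRefValue      0;
  }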

nikos_fb16 March 31, 2009 08:24

:)
Ok, thanks a lot.

msbealo March 4, 2011 14:49

Hi guys,

Is there a solution to this interDyMFoam/parallel processor problem?

I had one case that worked fine, but when I replaced the STL file with the actual geometry I was interested in (a yacht hull), interDyMFoam no longer wanted to run as a parallel case.

I'm using OF 1.7.1 with Ubuntu 10.10.

Kind Regards,

Mark

Benji January 19, 2018 05:06

EDIT: It seems to work when I overwrite the baffles and start the simulation at t=0. I'm fine with this now (though I still don't understand why it worked with interFoam).

Hey everyone

Sorry to dig this thread up, but I'm stuck with a similar problem.
I'm running a dynamic case (rotating AMI). The simulation works fine with interFoam in parallel and with interDyMFoam in serial, but NOT with interDyMFoam in parallel.

It says it cannot find the points file in "/home/benji/OpenFOAM/benji-5.0/run/SuperPipe_2/processor1/constant/polyMesh/points". Since I start at 0.002, the files are actually in "/home/benji/OpenFOAM/benji-5.0/run/SuperPipe_2/processor1/0.002/polyMesh/points". Does anyone know why? As I said, it works with interFoam in parallel. Does interDyMFoam in parallel require a different way of decomposing or different path names?

Ben

This is the error:
Code:

  benji@ubuntu:~/OpenFOAM/benji-5.0/run/SuperPipe_2$ mpirun -np 4 interDyMFoam -parallel > log &
  [1] 14745
  benji@ubuntu:~/OpenFOAM/benji-5.0/run/SuperPipe_2$ [1]
  [1]
  [1] --> FOAM FATAL ERROR:
  [1] cannot find file "/home/benji/OpenFOAM/benji-5.0/run/SuperPipe_2/processor1/constant/polyMesh/points"
  [1]
  [1]    From function virtual Foam::autoPtr<Foam::ISstream> Foam::fileOperations::uncollatedFileOperation::readStream(Foam::regIOobject&, const Foam::fileName&, const Foam::word&, bool) const
  [1]    in file global/fileOperations/uncollatedFileOperation/uncollatedFileOperation.C at line 505.
  [1]
  FOAM parallel run exiting
  [1]
  [0] --------------------------------------------------------------------------
  MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
  with errorcode 1.
 
  NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
  You may or may not see output from other processes, depending on
  exactly when Open MPI kills them.
  --------------------------------------------------------------------------
  [2]
  [2]
  [2] --> FOAM FATAL ERROR:
  [2] cannot find file "/home/benji/OpenFOAM/benji-5.0/run/SuperPipe_2/processor2/constant/polyMesh/points"
  [2]
  [2]    From function virtual Foam::autoPtr<Foam::ISstream> Foam::fileOperations::uncollatedFileOperation::readStream(Foam::regIOobject&, const Foam::fileName&, const Foam::word&, bool) const
  [2]    in file global/fileOperations/uncollatedFileOperation/uncollatedFileOperation.C at line 505.
  [2]
  FOAM parallel run exiting
  [2]
  [3]
  [3]
  [3] --> FOAM FATAL ERROR:
  [3] cannot find file "/home/benji/OpenFOAM/benji-5.0/run/SuperPipe_2/processor3/constant/polyMesh/points"
  [3]
  [3]    From function virtual Foam::autoPtr<Foam::ISstream> Foam::fileOperations::uncollatedFileOperation::readStream(Foam::regIOobject&, const Foam::fileName&, const Foam::word&, bool) const
  [3]    in file global/fileOperations/uncollatedFileOperation/uncollatedFileOperation.C at line 505.
  [3]
  FOAM parallel run exiting
  [3]
 
  [0]
  [0] --> FOAM FATAL ERROR:
  [0] cannot find file "/home/benji/OpenFOAM/benji-5.0/run/SuperPipe_2/processor0/constant/polyMesh/points"
  [0]
  [0]    From function virtual Foam::autoPtr<Foam::ISstream> Foam::fileOperations::uncollatedFileOperation::readStream(Foam::regIOobject&, const Foam::fileName&, const Foam::word&, bool) const
  [0]    in file global/fileOperations/uncollatedFileOperation/uncollatedFileOperation.C at line 505.
  [0]
  FOAM parallel run exiting
  [0]
  --------------------------------------------------------------------------
  mpirun has exited due to process rank 3 with PID 14749 on
  node ubuntu exiting improperly. There are two reasons this could occur:
 
  1. this process did not call "init" before exiting, but others in
  the job did. This can cause a job to hang indefinitely while it waits
  for all processes to call "init". By rule, if one process calls "init",
  then ALL processes must call "init" prior to termination.
 
  2. this process called "init", but exited without calling "finalize".
  By rule, all processes that call "init" MUST call "finalize" prior to
  exiting or it will be considered an "abnormal termination"
 
  This may have caused other processes in the application to be
  terminated by signals sent by mpirun (as reported here).
  --------------------------------------------------------------------------
  [ubuntu:14745] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
  [ubuntu:14745] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
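
(In case it helps anyone finding this thread with the same message: the solver is looking for the decomposed mesh under processorN/constant/polyMesh, while here it only exists under processorN/0.002/polyMesh. One possible workaround - only a sketch, assuming a valid polyMesh really is present in every processorN/0.002 directory, and not the fix from the EDIT above, which was to restart from t=0 - is to copy that mesh to where the solver looks:)

Code:

  # copy the decomposed mesh from the 0.002 time directories to constant/
  for proc in processor*
  do
      mkdir -p "$proc/constant/polyMesh"
      cp -r "$proc/0.002/polyMesh/." "$proc/constant/polyMesh/"
  done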


