CFD Online Discussion Forums > OpenFOAM Bugs > Parallel Moving Mesh Bug for Multi-patch Case
(https://www.cfd-online.com/Forums/openfoam-bugs/64751-parallel-moving-mesh-bug-multi-patch-case.html)

albcem May 22, 2009 10:44

Parallel Moving Mesh Bug for Multi-patch Case
 
Hello all,

Sorry for re-posting this, but since I have received absolutely no responses to any of my postings, I thought I should target them at the appropriate sub-forums. If anyone knows how to delete previous postings, please let me know...

----

I am testing mesh motion for a simple case consisting of a rectangular block with 6 patches, one for each face. I use the point motion velocity (pointMotionU) to drive the mesh motion.

If I run a solver like interDyMFoam on a single processor, everything runs correctly. However, if I decompose the case onto 2 or more processors and continue the run in parallel, I get errors similar to those below.

It seems that mesh.update() does not correctly interpret the number of points to move on the patches once they are distributed between the processors. If I define the whole cube as a single patch and run in parallel, I do not run into any issues.

Some validation on whether this is a true bug and insight into resolving the issue would be helpful.

Thanks

Cem

Exec : rasInterDyMFoam -parallel
Date : May 20 2009
Time : 19:59:13
Host :
PID : 8833
Case :
nProcs : 2
Slaves :
1
(
goldenhorn.8834
)

Pstream initialized with:
floatTransfer : 1
nProcsSimpleSum : 0
commsType : nonBlocking

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0.01

Selecting dynamicFvMesh dynamicMotionSolverFvMesh
Selecting motion solver: velocityLaplacian
[1] [0]
[0]
[0] size 834 is not equal to the given value of 835
[0]
[0] file: /MultiPatchCube/processor0/0.01/pointMotionU::Block_Side2 from line 3800 to line 3801.
[0]
[0] From function Field<Type>::Field(const word& keyword, const dictionary& dict, const label s)
[0] in file /home/cae/OpenFOAM/OpenFOAM-1.5/src/OpenFOAM/lnInclude/Field.C at line 224.
[0]
FOAM parallel run exiting
[0]
[goldenhorn.caebridge.com:08833] MPI_ABORT invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 1

[1]
[1] size 889 is not equal to the given value of 890
[1]
[1] file: /MultiPatchCube/processor1/0.01/pointMotionU::Block_Bottom from line 1627 to line 1628.
[1]
[1] From function Field<Type>::Field(const word& keyword, const dictionary& dict, const label s)
[1] in file /home/cae/OpenFOAM/OpenFOAM-1.5/src/OpenFOAM/lnInclude/Field.C at line 224.
[1]
FOAM parallel run exiting
[1]
[goldenhorn:08834] MPI_ABORT invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 1
mpirun noticed that job rank 1 with PID 8834 on node goldenhorn exited on signal 123 (Unknown signal 123).

henry May 23, 2009 04:18

Are you running 1.5 or 1.5.x? If the former, please pull 1.5.x from our git repository and try again. If it still doesn't work, please post the case here and we will take a look.

Henry

albcem May 23, 2009 13:03

Hello Henry,

The problem persists in all versions - 1.5, 1.5.x and 1.5-dev. For 1.5.x, I pulled the latest version this morning and retried as you suggested. The problem is still there.

For your evaluation, I am including two cases - one with a multi-patch cube, the other with a single-patch cube. The single-patch case is derived from the multi-patch one by manually editing the polyMesh/boundary file to collapse the 6 cube patches into a single one, so the geometric aspects of the two meshes are exactly the same. The single-patch case works in both serial and parallel. The multi-patch case fails in parallel, as I mentioned before.

You will need to march the cases from the final time step at which the point and cell motion files have non-zero values. You can use interDyMFoam for the tests.

You can download the two cases from:

MultiPatchCube
SinglePatchCube

Thanks a lot for your support.

Cem

henry May 25, 2009 17:29

Unfortunately I am unable to download the cases; could you please check the links?

Thanks

H

albcem May 25, 2009 23:08

Hello Henry,

Sorry about that. Apparently the web server hosting the files was offline due to a power outage. Everything is back in order now - the links should work.

Thanks.

Cem

henry May 29, 2009 05:34

Your case runs fine if you decompose the 0 time and run in parallel from there.

Henry

albcem May 29, 2009 06:05

Hello Henry,

I'm afraid that is the wrong conclusion, reached for the wrong reason:

To keep things simple, no mesh motion is prescribed in a dictionary. So if you start from time step 0, where the point motion file prescribes no motion, you trivially skip the mesh motion part of the solver, which is where the trouble lies.

As I mentioned in my previous message, you need to start from the latest time step, in which non-zero point and cell motion values are prescribed, to get into the mesh motion portion of the solver.

I should have erased time step 0 to avoid this confusion...

Please let me know whether it is now clear to you that the bug is still there and needs to be fixed...

Thanks

Cem

henry May 29, 2009 06:19

When I run from that time-step, the mesh motion is solved for:

FDICPCG: Solving for cellMotionUx, Initial residual = 0, Final residual = 0, No Iterations 0
FDICPCG: Solving for cellMotionUy, Initial residual = 0, Final residual = 0, No Iterations 0
FDICPCG: Solving for cellMotionUz, Initial residual = 0, Final residual = 0, No Iterations 0
Execution time for mesh.update() = 0.28 s

which uses the pointMotionU file provided. The fact that no motion occurs is not important with respect to the problem you reported. What this means is that you cannot currently decompose your 0.01 directory: doing so generates pointMotionU files with incorrect numbers of points. What you need to do is decompose the 0 directory and then specify the motion in parallel, either in the solver or by writing a setup code which you run in parallel.

Henry

albcem May 29, 2009 07:28

Hello Henry,

The underlying solver is a customized 6-DOF solver that I have been developing and it does exactly what you suggest in your response:

I do start my simulations in parallel at time step 0, and the point motion is specified during the execution of this code in parallel. If the single-patch version works in both serial and parallel, and the multi-patch version works in serial, why would the multi-patch version fail in parallel? So this may be a solver-dependent issue.

Setting aside the details of the solvers, you have point and cell motion files that work fine in serial mode and fail in parallel after decomposition. Isn't this an issue of concern, i.e. a bug on the decomposePar side?

I still think there is a problem with mesh motion for a multi-patch case when one specifies non-uniform point motion values on the patches. If I could make a stronger case for this claim (how can I?), we could move forward by looking at both decomposePar and the point motion solvers to see where the issue arises.

Cem

henry May 29, 2009 07:45

There is a limitation in decomposePar with respect to pointFields which will be removed following a future reorganisation of the way pointFields and cellFields relate. For now you need to specify the pointMotionU field either in a pre-processing application that runs in parallel and operates on your correctly decomposed initial field, or set it in parallel within the solver itself. This is what we do for complex mesh-motion cases running in parallel.

Henry
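
A minimal sketch of such a pre-processing utility (not Henry's code, just an illustration of the approach; the patch name "Block_Side2", the prescribed velocity, and the exact pointMesh construction, which differs slightly between OpenFOAM versions, are all placeholder assumptions) might look like this:

#include "fvCFD.H"
#include "pointMesh.H"
#include "pointFields.H"

int main(int argc, char *argv[])
{
    #include "setRootCase.H"
    #include "createTime.H"
    #include "createMesh.H"

    // Read the decomposed pointMotionU field (uniform initial values)
    pointVectorField pointMotionU
    (
        IOobject
        (
            "pointMotionU",
            runTime.timeName(),
            mesh,
            IOobject::MUST_READ,
            IOobject::AUTO_WRITE
        ),
        pointMesh::New(mesh)
    );

    // Overwrite the motion on this processor's share of the chosen patch
    const label patchI = mesh.boundaryMesh().findPatchID("Block_Side2");

    if (patchI != -1)    // the patch may be absent on some processors
    {
        pointMotionU.boundaryField()[patchI] ==
            vectorField
            (
                mesh.boundaryMesh()[patchI].nPoints(),
                vector(0, 0, 0.01)    // placeholder velocity
            );
    }

    pointMotionU.write();

    Info<< "End" << endl;

    return 0;
}

Compiled as a separate application and run with mpirun and the -parallel flag on the already decomposed case, each processor then only sets and writes the patch points it owns, which is the workflow Henry describes.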

albcem May 29, 2009 11:07

Hello Henry,

Thank you for the insight. Before things cool off, I wanted to get your advice on how to assign pointMotionU in parallel. The code I work with uses the following lines to assign a solid-body rotation and translation to a multi-patch object:

// Loop over the patches that make up the moving body and prescribe a
// rigid-body (rotation + translation) point motion velocity on each
for (int pcnt = 0; pcnt < motionPatches_.size(); pcnt++)
{
    patchI = label(mesh_.boundaryMesh().findPatchID(motionPatches_[pcnt]));

    // Point positions relative to the centre of gravity
    pointCentres = mesh_.boundaryMesh()[patchI].localPoints() - CoG;

    // Rotational velocity: components of omega x r along each coordinate axis
    Xdot = (((pointCentres ^ unitVectX) & Omega_cog_global) * unitVectX);
    Ydot = (((pointCentres ^ unitVectY) & Omega_cog_global) * unitVectY);
    Zdot = (((pointCentres ^ unitVectZ) & Omega_cog_global) * unitVectZ);

    // Rotation plus translation of the centre of gravity
    pointMotionU.boundaryField()[patchI] ==
    (
        Xdot + Ydot + Zdot
      + U_cog_global
    );
}


If this needs to be modified and you have an example piece of code for the modification, it would help greatly. Otherwise I don't know exactly what I need to change in the way I do things...

Thanks for your help.

Cem

albcem June 25, 2009 00:57

Hello Henry,

I would appreciate it if you could post example code for the parallel assignment of the pointMotionU field as you suggest, or a way for me to work around the issue. I have been stuck on this problem for quite some time...

Thanks.

Cem

henry June 25, 2009 02:56

You run whatever code you have written to set pointMotionU in parallel on a case which has been decomposed with the initial pointMotionU field having uniform values. The only issue you have to be careful of is the presence of "global" points on the processors, which come from the points shared at edges and corners. To see an example, take a look at how the cellMotionU field is mapped to the pointMotionU field after the mesh-motion solution.

H
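
For the "global" points Henry mentions, one possible pattern is sketched below: gather the per-patch values into a field indexed by mesh point and synchronise the points shared between processors. This is only a hedged fragment; "patchValues" is assumed to be a vectorField ordered like localPoints() from the patch loop above, the exact syncTools::syncPointList signature differs between OpenFOAM versions (some take an extra separation/transform argument), and maxMagSqrEqOp is just one possible combine operator.

#include "syncTools.H"

vectorField pointMotion(mesh_.nPoints(), vector::zero);

const labelList& meshPts = mesh_.boundaryMesh()[patchI].meshPoints();

forAll(meshPts, i)
{
    pointMotion[meshPts[i]] = patchValues[i];
}

// Where two processors hold the same point, keep the value with the
// largest magnitude so that all processors end up agreeing
syncTools::syncPointList
(
    mesh_,
    pointMotion,
    maxMagSqrEqOp<vector>(),
    vector::zero
);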

albcem June 25, 2009 19:07

Hello Henry,

Thank you very much for the tip.

Before moving forward with any code modification, I played around with the decomposition parameters a little more and noticed that I could get my "hardest" case going with metis-based decomposition, and the less demanding ones by playing with the "delta" parameter in decomposeParDict.

This is a small relief, but I find it troubling that different decompositions give different results in terms of success.

Are there any best practices for the parallel decomposition parameters for moving-mesh problems? I cannot find any guideline explaining the role of "delta", or how one would choose between the "simple", "hierarchical", "metis" and "manual" methods. I also remember reading that one can force chosen patches to be assigned to a single processor. Is this documented anywhere?

Best regards,

Cem
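
For reference, the decomposeParDict entries being discussed look roughly like this (a sketch only; the subdomain count, coefficients and delta value are illustrative, and the availability of some entries depends on the OpenFOAM version):

numberOfSubdomains 2;

method          metis;          // alternatives: simple, hierarchical, manual

simpleCoeffs
{
    n               (2 1 1);    // subdomains in x, y, z
    delta           0.001;      // cell skew factor
}

hierarchicalCoeffs
{
    n               (2 1 1);
    delta           0.001;
    order           xyz;
}

metisCoeffs
{
    processorWeights (1 1);
}

// Some versions also accept an entry to keep the faces of named patches
// together on one processor, e.g.:
// preservePatches (<patchName>);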

henry June 26, 2009 03:22

Changing the decomposition changes how many "global" points are generated and how they relate between the patches. By changing it you might get lucky and not need to set the position of the "global" points in your code, but if you do handle them as I suggest then the behaviour will be independent of the decomposition.

H

podallaire August 17, 2009 10:23

Hi Henry,

I have the same problem here with version 1.6.x, even though my pointMotionU file at time 0 is very simple:

dimensions      [0 1 -1 0 0 0 0];

internalField   uniform (0 0 0);

boundaryField
{
    Inlet
    {
        //Slip will allow the mesh to move at the inlet
        //type            slip;
        type            fixedValue;
        value           uniform (0 0 0);
    }

    Outlet
    {
        //Slip will allow the mesh to move at the outlet
        //type            slip;
        type            fixedValue;
        value           uniform (0 0 0);
    }

    Walls
    {
        type            symmetryPlane;
    }

    section_solid
    {
        type            angularOscillatingDisplacement;
        axis            (0 0 1);
        origin          (2.1267 0 0);
        angle0          0;
        amplitude       0.035;
        omega           22.0;
        value           uniform (0 0 0);
    }
}


Using decomposePar with 2 CPUs and then trying to run with pimpleDyMFoam gave me this:

Create time

Create mesh for time = 0

Selecting dynamicFvMesh dynamicMotionSolverFvMesh
Selecting motion solver: velocityLaplacian
[1] [0]
[0]
[0] size 12540 is not equal to the given value of 6367
[0]
[0] file: /data-cfd01/projects/OpenFOAM/pod-1.6.x/run/BridgeSectionDy/processor0/0/pointMotionU::section_solid from line 40 to line 12591.
[0]
[0] From function Field<Type>::Field(const word& keyword, const dictionary&, const label)
[0] in file /data-cfd01/software/OpenFOAM/OpenFOAM-1.6.x/src/OpenFOAM/lnInclude/Field.C at line 237.
[0]
FOAM parallel run exiting
[0]

[1]
[1] size 12540 is not equal to the given value of 6240
[1]
[1] file: /data-cfd01/projects/OpenFOAM/pod-1.6.x/run/BridgeSectionDy/processor1/0/pointMotionU::section_solid from line 40 to line 12591.
[1]
[1] From function Field<Type>::Field(const word& keyword, const dictionary&, const label)
[1] in file /data-cfd01/software/OpenFOAM/OpenFOAM-1.6.x/src/OpenFOAM/lnInclude/Field.C at line 237.
[1]
FOAM parallel run exiting

I'm not doing anything fancy, and I'm not sure what the problem is. The same case runs fine on a single CPU.

Regards,

PO

bigphil April 19, 2011 18:00

Hi,

I realise this post is quite old but I have the same problem.
Is there now a solution?

Philip

flying April 28, 2013 23:44

Hey all foamers,

I have also met this problem. Have you solved it? If so, would you please give me some hints?

Thanks and best regards.

