CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   directMapped + regionCoupling + parallel problems (https://www.cfd-online.com/Forums/openfoam-solving/111825-directmapped-regioncoupling-parallel-problems.html)

elisabet January 16, 2013 06:34

directMapped + regionCoupling + parallel problems
 
Hi Foamers!

I'm playing with an MHD conjugate solver (similar to conjugateHeatFoam in OF-1.6-ext) which has been validated. My present case of interest needs periodic boundary conditions, and I'm trying to use the directMapped b.c. for them.

Everything runs fine in serial mode, but when I decompose the case and run it in parallel I run into MPI_Recv problems.

The directMapped b.c. works perfectly in parallel on its own, and so does the conjugate solver (with the regionCoupling b.c.), but the two together do not... Is it too much? Do you have any experience with this?

Any help is appreciated....

elisabet

ngj January 16, 2013 07:13

Hi Elisabet,

Have you tried using the
Code:

preservePatches ( <listOfPatchNames> );
in the decomposeParDict? This allows for both of the cyclic patches to be on the same processor. It works in 1.6-ext.
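For context, this entry sits in system/decomposeParDict alongside the method settings. A minimal sketch (the patch names, subdomain count, and simpleCoeffs values here are illustrative, not from this thread):

```
// system/decomposeParDict (sketch; names and counts are illustrative)
numberOfSubdomains 2;

method          simple;

simpleCoeffs
{
    n           (2 1 1);
    delta       0.001;
}

// keep the faces of these patches together on a single processor
preservePatches (inlet outlet);
```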

On the other hand, I do not quite understand how you are using the directMapped on cyclic boundaries. Could you elaborate? Are the boundary patches not actually of type "cyclic", but something else?

Kind regards,

Niels

elisabet January 16, 2013 07:27

Hi Niels,

Thanks for your quick reply.

I already tried the preservePatches option, but with no improvement... And it doesn't matter in which direction I split my mesh...

Regarding cyclic b.c.: I wanted to avoid that option and instead use the directMapped b.c. for the velocity field (with an average value) at the inlet. This gives me the option to fix a mean value for the pressure and the electric potential at the outlet and inlet b.c., respectively.
However, I haven't ruled out using cyclic b.c.

ngj January 16, 2013 07:49

Okay. What happens if you make sure that the cutting plane for the directMapped BC is on the same processor as the boundary? Does it still crash?

/ Niels

elisabet January 16, 2013 08:20

Ok, I'm a little bit confused right now:


I re-enabled the line
Code:

preservePatches (inlet outlet);
in the decomposeParDict for region0, and the corresponding line for the solid region. Then I decomposed the domain as usual (decomposePar and decomposePar -region solid, using OF-1.7.1 just for these two commands). I thought this would result in a two-domain decomposition where both the inlet and outlet b.c. stay in one of the subdomains, BUT THIS IS NOT THE CASE!! I've checked it with paraFoam.

My channel is 0.6 m long in the x direction, which is the main flow direction. The offset for the directMappedPatch is (0.5995 0 0). I'm using the simple decomposition method with a (2 1 1) distribution.
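For reference, the corresponding patch entry in constant/polyMesh/boundary looks roughly like this in OF-1.7.x. This is only a sketch: nFaces and startFace are mesh-dependent placeholders, and the sampleMode shown is an assumption, not taken from the thread:

```
// constant/polyMesh/boundary entry (sketch; counts are mesh-dependent)
inlet
{
    type            directMappedPatch;
    nFaces          40;          // illustrative
    startFace       1200;        // illustrative
    sampleMode      nearestCell; // assumed mapping mode
    sampleRegion    region0;
    samplePatch     none;
    offset          (0.5995 0 0); // map from just upstream of the outlet
}
```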

How can I force the inlet and outlet b.c. onto the same processor?

Thanks!

elisabet

ngj January 16, 2013 08:27

Aha, I begin to understand your problem. As I understand preservePatches, it preserves each individual patch on one of the subdomains, but it does not keep the entire list of patches together on a common subdomain.

You could, for instance, do a manual decomposition of the domains; the process is hinted at in this thread:

http://www.cfd-online.com/Forums/ope...computing.html

/ Niels

elisabet January 16, 2013 09:30

Thanks for the hint!

I succeeded in splitting the domain into 2 subdomains (sbd) while keeping the inlet and outlet patches on the same sbd. For those who would like some help with it, the steps are (for a channel along the x direction):
1.- Split the domain into 4 sbd along x using the simple method in decomposeParDict and run 'decomposePar -cellDist'. This creates a 'cellDecomposition' file in the constant directory.
2.- Modify the header (object entry) and the name of this file, e.g. to decompDict.
3.- Replace all '2' by '1' and all '3' by '0' in the decompDict file (make sure you do not alter the label of the list's length). You can easily do it with a vim command, e.g. :%s/2/1/g
4.- Change the decomposeParDict file to use the manual method, and specify the name of your file (decompDict in this example).
5.- Run 'decomposePar' again and check the result with paraFoam.
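The label remapping in step 3 can also be scripted. A small shell sketch (the file name and contents below are made up for illustration; a real cellDecomposition file also carries an OpenFOAM FoamFile header that must be kept intact):

```shell
# Illustrative remap of processor labels 2->1 and 3->0 in a
# cellDecomposition-style list, without touching the length label.
# The toy file below stands in for constant/cellDecomposition.
cat > decompDict.example <<'EOF'
8
(
0
1
2
3
2
3
0
1
)
EOF

# Only rewrite lines that are a bare label; the first line (the
# list length) is passed through untouched.
awk 'NR==1 { print; next }
     /^[0-9]+$/ { if ($1 == 2) $1 = 1; else if ($1 == 3) $1 = 0 }
     { print }' decompDict.example > decompDict.remapped

cat decompDict.remapped
```

This avoids the pitfall elisabet warns about: a blind `:%s/2/1/g` would also corrupt the list-length label if it happens to contain a 2.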
The bad news is that, once the system had been successfully decomposed, I ran the case in parallel and got the MPI_Recv error again:
*** An error occurred in MPI_Recv
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_TRUNCATE: message truncated
Any ideas?

elisabet

EDIT: P.S. setFields is also a fantastic tool for preparing the 'decompDict' file for the manual decomposition!

ngj January 17, 2013 03:52

Hi Elisabet,

Really nice description of how to decompose "manually".

The MPI errors are really nasty, but sometimes it helps to put

Code:

export FOAM_ABORT=1
on the command line before running the simulation. But only sometimes :(

Secondly, if you change your directMapped into, say, fixedValue, does the simulation run?

All the best,

Niels

elisabet January 17, 2013 12:42

Hi Niels,

I've tried your FOAM_ABORT suggestion, but the error persists and no extra info is obtained.

Just to sum up, with the conjugate MHD solver:

- The case works without the directMapped b.c. (i.e. with fixedValue b.c.) in both serial and parallel modes.
- The case works with the directMapped b.c. in serial mode.
- The case does NOT work with the directMapped b.c. in PARALLEL mode (despite the cutting plane being on the same processor as the directMapped b.c.).

I'm going to build a simple case with the original conjugateHeatFoam and the directMapped b.c.; let's see what happens.

Just in case: has anyone done this before? Any suggestions?

Regards,

elisabet

ngj January 18, 2013 02:47

Hi Elisabet,

I am sorry, but I cannot be of more help to you; I am out of ideas.

Good luck,

Niels

chegdan January 22, 2013 18:51

Elisabet,

I had a similar issue a while back, and the workaround I found was similar to your manual-decomposition approach, with some differences:

1. In your fields and boundary file, change your directMapped patches into cyclic patches.
2. Decompose with your favorite method (I think scotch will work), outputting the cellDist and making sure to preserve the cyclic patches.
3. Remove your processor* folders, since you are going to decompose again anyway.
4. In your boundary file and fields, change your BCs back to directMapped.
5. Manually decompose with the cell distribution you previously obtained using cyclic BCs instead of directMapped.

OF is good at decomposing cyclic BCs for parallel computation...
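A sketch of the two decomposeParDict stages this recipe implies (all patch names and counts are illustrative, not from the thread): first a scotch pass that records the distribution with the cyclic patches in place, then a manual pass that replays it once the BCs are back to directMapped:

```
// Stage 1: cyclic patches in place, record the distribution
// (run with: decomposePar -cellDist)
numberOfSubdomains 2;
method          scotch;
preservePatches (inlet outlet);

// Stage 2: BCs switched back to directMapped, replay the recorded
// distribution with the manual method
// numberOfSubdomains 2;
// method          manual;
// manualCoeffs
// {
//     dataFile    "cellDecomposition";
// }
```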

elisabet February 4, 2013 10:02

1 Attachment(s)
Dear all,

I could finally return to this problem.

I've attached a very simple example set up with conjugateHeatFoam, where the case fails to run in parallel but not in serial mode.

How to: (1) run blockMesh for the main and solid regions (then copy boundary_original onto the boundary file in the constant directory); (2) decompose (already set to manual); (3) check the run in serial mode; (4) run in parallel.

If anyone could give me some advice....

elisabet

ngj February 4, 2013 12:02

Hi Elisabet,

When I decompose it, I get a complaint about a missing region for the field T. It happens both times, with either

Code:

decomposePar
or

Code:

decomposePar -region solid
Apparently the T field wants the neighbouring region, so how do I give it that and still get a correct decomposition of the boundary files?

Kind regards,

Niels

elisabet February 4, 2013 16:04

Hi Niels,

Due to size limitations I have not been able to attach the meshes. Hence, calling the folder I sent you 'mainFolder', you should:

1. Create a folder for the solid region (solidFolder, hereafter) outside mainFolder.
2. Copy the solid sub-folders ('mainFolder/0/solid', 'mainFolder/constant/solid' and 'mainFolder/system/solid') into this new one. Note that mainFolder/system/controlDict should also be copied into solidFolder/system.
3. Run blockMesh for mainFolder, and copy boundary_original onto the boundary file.
4. Run blockMesh for solidFolder, and copy boundary_original onto the boundary file.
5. Copy solidFolder/constant/polyMesh into mainFolder/constant/solid.
6. Run 'decomposePar' and 'decomposePar -region solid': watch out! For these commands you need OF version 1.7 or newer.
7. Run in parallel mode (mpirun -np 2 conjugateHeatFoam -parallel > log &).

Perhaps you are not using the right OF version?

elisabet

ngj February 4, 2013 16:44

Oh, I see. I am still running all my simulations in 1.6-ext, as I need a lot of its special features, so unfortunately I will not be able to help you out.

Good luck,

Niels

Kombinator October 3, 2018 10:04

Quote:

Originally Posted by elisabet (Post 402159)
I'm playing with an MHD conjugate solver (like conjugateHeatFoam in OF-1.6-ext)... Do you have any experience with it?

Dear Elisabet,

Are you still working on the conjugate MHD solver?

Best regards,
Artem

