directMapped + regionCoupling + parallel problems
Hi Foamers!
I'm playing with an MHD conjugate solver (like conjugateHeatFoam in OF-1.6-ext) which has been validated. My present case of interest needs periodic boundary conditions, and I'm trying to use the directMapped b.c. for it. Everything runs fine in serial mode but, when I decompose the case and run it, I get MPI_Recv errors. The directMapped b.c. works perfectly in parallel on its own, and the same goes for the conjugate solver (with the regionCoupling b.c.), but both together do not... is it too much? Do you have any experience with this? Any help is appreciated... elisabet
Hi Elisabet,
Have you tried using
Code:
preservePatches ( <listOfPatchNames> );
in decomposeParDict? On the other hand, I do not quite understand how you are using directMapped on cyclic boundaries. Could you elaborate? Are the boundary patches not actually of type "cyclic", but something else? Kind regards, Niels
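For readers who have not used the option: it goes in system/decomposeParDict. A minimal sketch, with placeholder patch names, might look like this:

```
// system/decomposeParDict -- minimal sketch with placeholder names
numberOfSubdomains 2;

method          simple;

simpleCoeffs
{
    n           (2 1 1);    // number of subdomains in x, y, z
    delta       0.001;
}

// keep each listed patch on a single processor
preservePatches (inlet outlet);
```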
Hi Niels,
Thanks for your quick reply. I already checked the preservePatches option, but with no improvement... And it doesn't matter in which direction I split my mesh... Regarding the cyclic b.c.: I wanted to avoid that option and use, instead, the directMapped b.c. for the velocity field (with an average value) at the inlet. This gives me the option to fix a mean value for the pressure and the electric potential at the outlet and inlet b.c., respectively. However, I haven't ruled out using cyclic b.c.
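For context, a directMapped velocity inlet with a prescribed mean is set up in 0/U roughly like this in OF 1.6/1.7 (a sketch; the average value here is illustrative, not the actual case's):

```
// 0/U -- sketch of a directMapped inlet that rescales the mapped
// profile to a fixed mean velocity
inlet
{
    type            directMapped;       // maps from an interior plane
    fieldName       U;
    setAverage      true;               // rescale the mapped profile...
    average         (0.1 0 0);          // ...to this mean value
    value           uniform (0.1 0 0);  // initial/fallback value
}
```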
Okay. What happens if you make sure that the cutting plane for the directMapped BC is on the same processor as the boundary? Would this still crash? / Niels
Ok, I'm a little bit confused right now:
I reactivated the line
Code:
preservePatches (inlet outlet);
My channel length is 0.6 m in the x direction, which is the main flow direction. The offset for the directMapped patch is (0.5995 0 0). I'm using the simple decomposition method with a distribution of (2 1 1). How can I impose that the inlet and outlet b.c. end up on the same processor? Thanks! elisabet
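For completeness, the geometric side of the mapping sits in constant/polyMesh/boundary; with the 0.6 m channel and the offset from the post, the entry would look something like this (a sketch; face counts and sampleMode are assumptions):

```
// constant/polyMesh/boundary -- sketch of the directMapped patch entry
inlet
{
    type            directMappedPatch;
    nFaces          400;             // illustrative
    startFace       12000;           // illustrative
    sampleMode      nearestCell;     // assumption; depends on the setup
    offset          (0.5995 0 0);    // sample plane just upstream of the outlet
}
```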
Aha, I begin to understand your problem. As I understand preservePatches, it preserves each individual patch on a single subdomain, but it does not guarantee that the entire list of patches ends up on a common subdomain.
You could, for instance, do a manual decomposition of the domain; the process is hinted at in this thread: http://www.cfd-online.com/Forums/ope...computing.html / Niels
Thanks for the hint!
I succeeded in splitting the domain into 2 subdomains (sbd) while keeping the inlet and outlet patches on the same sbd. For those who would like some help with it, the steps are (for a channel along the x direction): 1.- split the domain into 4 sbd along x using the simple method in decomposeParDict and run 'decomposePar -cellDist'. This will create a 'cellDecomposition' file in the constant directory. The bad news is that, once the system had been successfully decomposed, I ran the case in parallel and got the MPI_Recv error again:
*** An error occurred in MPI_Recv
Any ideas? elisabet
EDIT: P.S. setFields is also a fantastic tool for preparing the 'decompDict' file for the manual decomposition!
Hi Elisabet,
Really nice description of how to decompose "manually". The MPI errors are really nasty, but sometimes it helps to put
Code:
export FOAM_ABORT=1
in your environment before running. Secondly, if you change your directMapped into, say, fixedValue, does the simulation run? All the best, Niels
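To be explicit, the variable has to be set in the shell that launches the solver, e.g.:

```shell
# make OpenFOAM call abort() on fatal errors, so the MPI runtime can
# print a usable trace instead of a bare MPI_Recv failure
export FOAM_ABORT=1
echo "FOAM_ABORT=$FOAM_ABORT"
# then launch as usual, e.g.:
#   mpirun -np 2 conjugateHeatFoam -parallel > log 2>&1
```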
Hi Niels,
I've tried what you suggested about FOAM_ABORT, but the error persists and no extra info is obtained. Just to sum up, with the conjugate MHD solver:
- The case works without the directMapped b.c. (i.e. with fixed value b.c.) in both serial and parallel modes.
- The case works with the directMapped b.c. in serial mode.
- The case does NOT work with the directMapped b.c. in PARALLEL mode (even though the cutting plane is on the same processor as the directMapped b.c.).
I'm going to build a simple case with the original conjugateHeatFoam and the directMapped b.c.; let's see what happens. Just in case: has anyone done this before? Any suggestions? Regards, elisabet
Hi Elisabet,
I am sorry, but I cannot be of more help to you; I am out of ideas. Good luck, Niels
Elisabet,
I had a similar issue a while back, and the workaround I used was similar to your approach with manual decomposition, with some differences:
1. In your fields and boundary file, change your directMapped patches into cyclic patches.
2. Decompose with your favorite method (I think scotch will work), outputting the cellDist and making sure to preserve the cyclic patches.
3. Remove your processor* folders since you are going to decompose again anyway.
4. In your boundary file and fields, change your BCs back to directMapped.
5. Manually decompose with the cell distribution you previously found using cyclic BCs instead of directMapped.
OF is good at decomposing cyclic BCs for parallel computation...
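For concreteness, the patch-type swap in constant/polyMesh/boundary looks roughly like this (a sketch; names and face counts are illustrative, and in these OF versions a cyclic patch holds both halves of the pair):

```
// steps 1-2: declare the periodic pair as cyclic and decompose,
// preserving the patch
inletOutletCyclic
{
    type            cyclic;
    nFaces          800;          // both halves of the pair (illustrative)
    startFace       12000;        // illustrative
}

// steps 4-5: switch back to directMappedPatch and re-use the
// cell distribution found with the cyclic setup
inlet
{
    type            directMappedPatch;
    nFaces          400;          // illustrative
    startFace       12000;        // illustrative
    sampleMode      nearestCell;  // assumption
    offset          (0.5995 0 0);
}
```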
Dear all,
I could finally return to this problem. I've attached a very simple example with a conjugateHeatFoam setup, where the case fails to run in parallel but not in serial mode. How to:
(1) blockMesh for the main and solid regions (afterwards, copy boundary_original onto the boundary file in the constant directory)
(2) decompose (already set to manual)
(3) check the run in serial mode
(4) run in parallel
If anyone could give me some advice... elisabet
Hi Elisabet,
When I am decomposing it, I get a complaint about a missing region for the field T. It happens both with
Code:
decomposePar
and with
Code:
decomposePar -region solid
Kind regards, Niels
Hi Niels,
Due to size limitations I have not been able to attach the meshes. Hence, naming the folder I'd sent you 'mainFolder', you should:
1. Create a folder for the solid region ('solidFolder', hereafter) outside mainFolder.
2. Copy the solid sub-folders ('mainFolder/0/solid', 'mainFolder/constant/solid' and 'mainFolder/system/solid') into this new one. Note that mainFolder/system/controlDict should also be copied into solidFolder/system.
3. blockMesh for mainFolder, and copy boundary_original onto the boundary file.
4. blockMesh for solidFolder, and copy boundary_original onto the boundary file.
5. Copy solidFolder/constant/polyMesh into the mainFolder/constant/solid one.
6. 'decomposePar' and 'decomposePar -region solid': watch out! for this utility you need OF version 1.7 or newer.
7. Run in parallel mode ('mpirun -np 2 conjugateHeatFoam -parallel > log &').
Probably you are not using the right OF version? elisabet
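The numbered steps above can be sketched as shell commands (a sketch only: it assumes boundary_original lives in constant/polyMesh, and 'mainFolder'/'solidFolder' follow the post's placeholder names):

```shell
# 1-2: build the standalone solid case
mkdir solidFolder
cp -r mainFolder/0/solid         solidFolder/0
cp -r mainFolder/constant/solid  solidFolder/constant
cp -r mainFolder/system/solid    solidFolder/system
cp mainFolder/system/controlDict solidFolder/system/

# 3-4: mesh both regions and restore the original boundary files
(cd mainFolder  && blockMesh && cp constant/polyMesh/boundary_original constant/polyMesh/boundary)
(cd solidFolder && blockMesh && cp constant/polyMesh/boundary_original constant/polyMesh/boundary)

# 5: bring the solid mesh back into the main case
cp -r solidFolder/constant/polyMesh/* mainFolder/constant/solid/polyMesh/

# 6-7: decompose both regions (needs OF 1.7 or newer) and run
cd mainFolder
decomposePar
decomposePar -region solid
mpirun -np 2 conjugateHeatFoam -parallel > log &
```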
Oh, I see. I am still running all my simulations in 1.6-ext, as I need a lot of those special things, so unfortunately I will not be able to help you out.
Good luck, Niels
Are you still working on the conjugate MHD solver? Best regards, Artem