Dear Forum
Good Morning

I have some doubts related to the parallelization of an OpenFOAM application or solver. A modification done to interFoam works fine on a single processor but crashes when run in parallel. A simple test such as mesh.nCells() or field.size() gives different results when run in serial and in parallel.

The changes to the solver also do some explicit operations. For example: look for a cell with a particular value of the volume fraction field gamma, take its neighbours and do some calculation. How does one make such operations parallel-aware? As far as my understanding goes, one has to take care of it explicitly. In code this might look like:

    forAll(gamma, cellI)
    {
        if (gamma[cellI] > 0.01 && gamma[cellI] < 0.99)
        {
            // perform calcs ...
        }
    }

I request the users with experience of this aspect to please share the information. It would be great if we could compile a list of guidelines one may follow to make sure that their solver is completely compatible with parallel runs.

Hope that I will find some contributors.

Kind Regards
Jaswi |
Some more searching shows that a reduce operation has to be done wherever something processor-dependent is being done. I tried the following:

    label cellCount = mesh.nCells();
    reduce(cellCount, sumOp<label>());
    Info << "Mesh cell count : " << cellCount << endl;

This returns the same value in serial and in parallel. But when I do the same for faces:

    label faceCount = mesh.nFaces();
    reduce(faceCount, sumOp<label>());
    Info << "Mesh face count : " << faceCount << endl;

the output is different, as faceCount now includes the faces on the processor patches as well, so be aware of that fact. I do not yet know how to get the correct number of faces using reduction.

It seems that the reduce operation can be used with various operators, such as reduce(foundLabel, orOp<bool>()).

Still digging... Anybody having a clear understanding, please contribute.

Kind Regards
Jaswi |
I have a suggestion, although I'm not too sure about the feasibility of implementation. MPI inherently numbers processes between 0 and (n-1), where n is the number of subdomains. I'm guessing that every processor patch can identify which MPI process / sub-domain it talks to. So, for every processor patch, check whether the neighbouring sub-domain is numbered higher than the current one. If yes, add the number of faces on that processor patch to the current sub-domain's face count, and discard them otherwise. Finally, you can do a conventional reduce with sumOp to sum up the face counts over all sub-domains.
|
OpenFOAM uses domain decomposition. Therefore each processor only knows about the cells and faces residing on it.
Your problem is difficult because you want to identify cells and then do calculations with their neighbours. This stretches the "domain decomposition" paradigm because a neighbour might reside on a different processor. My recipe for you:

1) Create an indicator field locally
2) Interpolate it to the faces (this updates the processor-processor boundaries)
3) Identify the neighbour cells: they have faces with non-zero values in the interpolated indicator field
4) Do the calculation involving the neighbours

There is probably a much more elegant solution, but I would have to dig a lot.

Henrik |
@Jaswinder
I'm interested in this kind of problem too, as I will create a solver that needs to know the neighbours of each cell, and I want to make it work in parallel. My question is: does any structure exist in Pstream that stores the local cell location? That is, something that tells us whether a cell in domain x is on the boundary with domain y. Thanks! |
processorPolyPatch contains a list of faces on the patch between different processors and functions for getting the processor number of each patch.
I believe what you want to do is loop over the faces of your cell, check to see if the faces are on a processorPolyPatch, and then use the functions in processorPolyPatch to return the two processor IDs. -Nat |
Does anyone have a follow-up on this issue? Or are the suggestions (recipes) of Nat and Henrik already the optimal ones?
Suggestions are appreciated! Heng |
After some research, it seems that the best solution I can propose now is to create a global mesh on each processor by reading the mesh data in the global dir (this object only needs to be a "primitiveMesh" object, according to the document written by Jaswi, "Creation processes in OpenFoam"). Then each processor can analyze the global mesh in terms of neighbouring relations with the built-in mesh analysis tools such as pointCells(), cellCells(), etc., to find neighbours in the global mesh. Then, with the processor-to-global mesh addressing / mapping (refer to the code in reconstructParMesh.C), you can find the neighbouring relations in terms of local cell IDs.
I admit this is an ugly solution, because the neighbouring relation is obtained by going back to the global mesh data, and any parallel communication based on this neighbouring relation still needs to be built up from scratch. But it is the best I can think of now, and I don't really need communication for the moment, just the neighbouring relation. For my case, the global mesh is not very big at all. (Then people would ask: why bother going parallel if your mesh is small? The reason is that I am doing Euler-MD coupling, with OpenFoam dealing with the Eulerian part based on icoLagrangianFoam, and I want the MD part to be parallel and able to handle collisions.) I tried the approach by Nat, but it seems that "processorPolyPatch" only gives the faceCenters and faceAreas of the neighbours (across processors). What I need is the cell centres associated with these faces. Hope this contribution will finally lead to an elegant solution of this problem. |
Heng
processorPolyPatch can provide the processor ID of the neighbouring patch. You can then use Pstream to send and receive messages between processors. You can assemble a list of cell centres or whatever you'd like and send it across the processor boundary. I.e., loop over the patch faces, create a list of owner cell centres, send it to the neighbour with Pstream, and process it on the opposite side. If anyone else can give their input as well, it would be much appreciated. -Nat |
Hi Nat,
(First, sorry for changing the name... this is for some very weird reason... I am still Heng.) Thanks for the clarification! So the procedure would be, on each processor:

    for each processor patch
    {
        for each face on this patch
        {
            find the cell which owns this face via faceCells()
            add this cell to a buffer list
        }
        create a Pstream
        send to / receive from the neighbour
    }

Correct? The procedure of creating a Pstream and exchanging (send/receive) with neighbours is pretty common throughout the OpenFoam code, but I am not sure the part I am doing for each face on the processorPolyPatch is right.

BTW, I feel that to solve the problem raised in this thread, something could be learned from the class "moleculeCloud":
http://foam.sourceforge.net/doc/Doxygen/html/dir_fb64a3fdd283c156088e34430113301a.html
Particularly the following files (build***InteractionList.H/C):
http://foam.sourceforge.net/doc/Doxygen/html/moleculeCloudBuildDirectInteractionList_8H-source.html
http://foam.sourceforge.net/doc/Doxygen/html/moleculeCloudBuildReferredInteractionList_8H-source.html
But I haven't fully understood the details yet. Still working on it. I will share my findings shortly, if any.

Best,
Heng |
Hi Nat,
could you post a piece of code in which you do this step? |
This is a bit of my code where I did something similar. Replace the bit where it says blobID with whatever field you're sending across. Good luck deciphering :)
    //Parallel
    if (Pstream::parRun())
    {
        //Loop over processor patches
        Info << "Sending stuff" << endl;
        forAll (mesh.boundaryMesh(), patchInd)
        {
            const polyPatch& patch = mesh.boundaryMesh()[patchInd];
            if (typeid(patch) == typeid(processorPolyPatch))
            {
                const processorPolyPatch& procpatch =
                    refCast<const processorPolyPatch>(patch);

                //only for master chunk -> cut work in half
                if (procpatch.myProcNo() > procpatch.neighbProcNo())
                {
                    //Make buffer
                    Field<scalar> mybuffer(patch.size());
                    const labelList& internalcells = patch.faceCells();
                    forAll(internalcells, ind)
                    {
                        label curcell = internalcells[ind];
                        mybuffer[ind] = blobID[curcell];
                        patchboundary[curcell] = procpatch.myProcNo(); //for debugging
                    }

                    //Send buffer to neighbor
                    OPstream tNP(procpatch.neighbProcNo(), 0, false);
                    tNP << mybuffer << endl;
                }
            }
        }
        Info << "Messages sent" << endl;

        //Receive messages
        forAll (mesh.boundaryMesh(), patchInd)
        {
            const polyPatch& patch = mesh.boundaryMesh()[patchInd];
            if (typeid(patch) == typeid(processorPolyPatch))
            {
                const processorPolyPatch& procpatch =
                    refCast<const processorPolyPatch>(patch);

                //only for slave chunk
                if (procpatch.myProcNo() < procpatch.neighbProcNo())
                {
                    //Make buffer
                    Field<scalar> yourbuffer(patch.size());
                    IPstream fNP(procpatch.neighbProcNo());
                    fNP >> yourbuffer;
                    // ... use yourbuffer here (the rest of the snippet was cut off)
                }
            }
        }
    } |
Wow!
Thanks Nat, this could be a great help for beginners in parallelization of codes!!! |
Thanks a lot !!!
One line of code is as good as a thousand words! Heng |
A question: is it true that on both sides of a processorPolyPatch the faces are numbered the same, or at least stored in the same order? This is necessary for the communication above to work.
Say proc 1 and proc 2 share a patch with 3 faces. Is face #3 on the patch of proc 1 the same as face #3 on the patch of proc 2, like this:

    Proc 1:    3 ----- 4 ----- 5    (face # on proc 1)
            ------|-------|------
    Proc 2:    3 ----- 4 ----- 5    (face # on proc 2)

Or is it like this:

    Proc 1:    2 ----- 5 ----- 6    (face # on proc 1)
            ------|-------|------
    Proc 2:    3 ----- 4 ----- 5    (face # on proc 2)

The former seems to be true according to this post:
http://openfoamwiki.net/index.php/Write_OpenFOAM_meshes
but I wonder how this can be achieved, since one processor can neighbour several other processors.

Regards,
Heng |
The faces are most likely not numbered the same, but the patches store the face list in the same order.
-Nat |
Hi Nat,
Thanks for the clarification! best, Heng |
Hi all!
I read this post and I'm also worried about the parallelization of OpenFOAM solvers. Based on my experience, the parallel solution is not the same as the serial one. This concerns both standard solvers and new ones I have created myself.

Attachment 874
Attachment 875

The first example is a transient solution of the flow simulated with turbFoam. You can see that the solutions from the serial run (first picture) and the parallel run (second picture) are different at the same time moment.

It is worse with a coupled solver for flow and energy calculations. My solver has a criterion to stop the internal loop when all initial residuals are less than a fixed value (say 1e-06), which means that the equations for liquid motion and heat transfer (which are connected through source terms) have converged together. For the serial solution I get smooth fields. For the parallel run there are strong gradients at the processor boundaries (see picture attached).

Attachment 876

Please, does anyone have an idea what causes such behaviour between processors? To me it looks like the solver checks the convergence criterion only within the local domain, not taking into account the global distribution of the modelled fields on the neighbouring processors. How can one "teach" the solver to apply a global convergence criterion for parallel runs?

Thank you, looking forward to your advice! |
Dear Dr. Vakhrushev, I realize that this thread is very old, but I've run into the same problem that you described here: large gradients forming at subdomain boundaries. Did you ever solve this problem? TOS |
Hi Erik!
It seems it was solved by an OF version update... I'm not sure, I would have to recheck that case file. But currently I do not observe similar behaviour any more. Cheers, Alexander |