CFD Online Discussion Forums


tgvosk July 11, 2012 13:24

Parallel interDyMFoam cellLevel problem
I am working with the interDyMFoam solver in parallel and have run into a problem with the adaptive mesh refinement. For my case, I run a few time steps serially to pre-adapt the mesh to the initial condition (creating time folders TIME1 and TIME2, for example), then run decomposePar to begin the main run in parallel (starting at time folder TIME2).

The 'startFrom' value in my controlDict is set to 'latestTime' and the pre-adaptation is done using a modified version of interDyMFoam with only the mesh adaptation routines.

The problem I am having is that even though decomposePar puts the cellLevel from TIME2 into the processorX/TIME2 folder, it does not put it into the processorX/TIME2/polyMesh folder, so the parallel run does not load it. Even though the mesh is already refined, the solver resets the refinement level to 0 everywhere on the first parallel time step, which creates excessive refinement.

If I do the main run serially, this problem does not occur. How can I make decomposePar also decompose the cellLevel into the polyMesh folder so the parallel run loads the correct cellLevel rather than resetting it to 0? Any help would be greatly appreciated!
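In shell terms, the whole sequence looks roughly like this (the pre-adaptation solver name is a placeholder for my modified interDyMFoam, and flag availability depends on the OpenFOAM version):

```shell
# Sketch of the serial-pre-adapt-then-parallel workflow; solver name and
# flags are illustrative, not exact.
pre_adapt_and_run() {
    # 1. pre-adapt the mesh serially with the refinement-only solver
    preAdaptInterDyMFoam            # placeholder for the modified solver

    # 2. decompose the case from the latest (pre-adapted) time folder
    decomposePar -latestTime        # -latestTime depends on the version

    # 3. continue in parallel (controlDict has startFrom latestTime;)
    mpirun -np 4 interDyMFoam -parallel
}
```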

kmooney October 2, 2012 17:13

I noticed the same problem. In response I wrote a quick executable that loads the cellLevel volScalarField and writes it into the correct directory as a labelList.
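The idea, sketched in shell for an ascii case (illustrative only, not my actual tool; it assumes a nonuniform internalField written one value per line, and the paths are examples):

```shell
# Illustrative only: rewrite an ascii volScalarField "cellLevel" (from a
# time directory) as the labelList that the refinement engine reads from
# polyMesh.  Real field files may be laid out differently.
convert_cell_level() {
    src="$1"    # e.g. processor0/2/cellLevel
    dst="$2"    # e.g. processor0/constant/polyMesh/cellLevel

    awk '
        /internalField/               { grab = 1; next }
        grab && n == "" && /^[0-9]+$/ { n = $1; next }    # element count
        grab && /^\(/                 { inlist = 1; next }
        grab && inlist && /^\)/       { exit }
        grab && inlist                { vals[++i] = int($1) }  # scalar -> label
        END {
            print "FoamFile"
            print "{"
            print "    version     2.0;"
            print "    format      ascii;"
            print "    class       labelList;"
            print "    location    \"constant/polyMesh\";"
            print "    object      cellLevel;"
            print "}"
            print ""
            print n
            print "("
            for (j = 1; j <= i; j++) print vals[j]
            print ")"
        }
    ' "$src" > "$dst"
}

# usage (paths illustrative):
# convert_cell_level processor0/2/cellLevel processor0/constant/polyMesh/cellLevel
```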

This alone does not fix the problem, though: after resuming the run post-load-balance, no further hex cell refinement operations occur, for reasons I cannot pin down. The hex refinement engine also appears to look for a pointScalarField, pointLevel. Unfortunately that is not mapped at all, so there lies another problem.

So far I have been unable to find a good way to load balance with interDyMFoam, which is confusing considering it may be the solver that demands dynamic load balancing more than any other.

kmooney October 2, 2012 17:34

There is a function that might be of use:

void Foam::hexRef8::distribute(const mapDistributePolyMesh& map)

from hexRef8.C

After adding some extra pointers I was able to send the mapDistributePolyMesh generated by fvMeshDistribute to this function, to no avail: I received all sorts of seg faults.

This is supposed to map the cellLevel, pointLevel, and refinementHistory fields. In theory this is what we need...

tikulju February 17, 2014 10:48

Have you been able to get this load balancing to work? It's puzzling me again... I tried to hack redistributePar and got the mesh distributed, but the fields are not. Haven't tried hexRef8::distribute yet...

tgvosk February 17, 2014 16:09

Load Balancing
I have a parallel load balancing library working with 2.1.x, but it requires fixing a number of bugs in various parts of the mesh libraries (particularly in the distribute functions).

I plan to make the library repo public sometime in the next couple months.

tikulju February 19, 2014 03:24

Maybe someone could explain this dilemma. By executing


autoPtr<mapDistributePolyMesh> map = distributor.distribute(finalDecomp);

only the p and U* fields get distributed. Some of the alpha* fields do, but some do not. The simulation ends in a "double free or corruption" message when writing the data. Any hints on how to get the other fields mapped?

For the refinement history stuff, I think they could be mapped as


    // Update celllevel
    map.distributeCellData(cellLevel_);

    // Update pointlevel
    map.distributePointData(pointLevel_);

    // Update refinement tree
    if (history_.active())
    {
        history_.distribute(map);
    }

like in hexRef8::distribute. I tried to declare (before distributing the mesh) cellLevel and pointLevel as labelIOLists and the history as a refinementHistory, and to distribute them as above, but without any success. Any ideas?
