CFD Online Discussion Forums


kaifu May 6, 2011 16:23

How to handle fields in parallel computation
I have a problem with creating fields in parallel OpenFOAM. For example, if I look up an fvPatch in createField.H, it seems that I only get the patch on part of the mesh. The code is like this:

    // find the patch:
    label patchWall = mesh.boundaryMesh().findPatchID("walls");
    const fvPatch& thePatchItselfWall = mesh.boundary()[patchWall];

    // loop over the patch:
    forAll(thePatchItselfWall, faceI)
    {
        label cellI = thePatchItselfWall.faceCells()[faceI];
        Info<< cellI << endl;
    }

The serial calculation prints the full list of cell labels, but the parallel run prints only 1/4 of the labels I get from the serial one (there are 4 procs).

Obviously I can only access part of the mesh. So how can I reach the fvPatch on the other 3 procs? And if I can, how do I check that I have actually accessed the other parts of the mesh? Does the output "Info<<cellI<<endl;" still work there? Thanks.

// Kai

marupio May 7, 2011 13:11

That's the way parallel works. Each processor uses only 1/4 of the mesh and solves everything as if that were the entire mesh, except at the interfaces between the sub-meshes. Information passes across the interfaces using Pstream.
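For instance, a minimal (untested) sketch of combining per-processor data with a Pstream reduction, assuming mesh is the usual fvMesh available in a solver:

    // each processor only sees its own piece of the decomposed mesh
    label nLocalCells = mesh.nCells();

    // sum the local counts over all processors
    label nGlobalCells = nLocalCells;
    reduce(nGlobalCells, sumOp<label>());

    Info<< "cells on the master processor: " << nLocalCells
        << ", cells in the whole mesh: " << nGlobalCells << endl;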

impecca August 18, 2011 15:30

I am raising the same question about field handling in a parallel computation. In my case, I haven't faced any problem with patch handling in parallel, so I do not know why Kai had that problem. What I have done is pretty much the same:

label inletPatchID = mesh.boundaryMesh().findPatchID("inlet");
const fvPatchVectorField& refField = U.boundaryField()[inletPatchID];

forAll(refField, celli)
{
    // any command
}

celli ran over the same number of faces as the 'inlet' patch has, in both serial and parallel runs. But when I try to manipulate the field inside the domain (not a patch on the boundary), I see that serial and parallel runs give a different number of cells in the field. The code I use:

volVectorField U_ = U;

label i = 0;
forAll(U_, celli)
{
    i++;
    Info<< "i= " << i << nl;
}

with a single processor: 'i' shows the same number of cells as the domain contains
i= 336000

with 4 processors: 'i' shows a smaller number than it should
i= 84332

To sum up, could anyone give a hint on field manipulation in a parallel computation?
Thanks in advance.


alberto August 21, 2011 04:14

It really depends on the manipulation you are trying to do. In general, as shown in many examples:

1. Try to avoid looping. In many cases loops can be avoided by re-thinking the operations you have to do. For example, if you have to define a field with a piece-wise function, you can use the "pos" and "neg" functions instead of if and forAll.


v = a   if x < b
v = c   if x > b

can be coded as

volScalarField v = neg(x - b)*a + pos(x-b)*c;


pos(x) = 1 if x is positive, and 0 if it's negative
neg(x) = 1 if x is negative, and 0 if it's positive

2. If you loop over cells and modify the cell values of a field directly, remember that you most likely have to correct the BCs for that field after the manipulation.


impecca August 21, 2011 09:21

Thanks Alberto, I've managed to manipulate the field. The code I used (it's a bit simplified):

forAll(cells, celli)
{
    if (fabs(0.19 - xc[celli]) < 0.02)
    {
        jj = (int)(yc[celli]/dy);
        kk = (int)(zc[celli]/dz);

        U_[celli] = Um_[jj][kk]*vector(1, 0, 0)
                  + vector(


This is based on pisoFoam; the application is a channel flow. The intention was to impose artificial fluctuations (u2_[jj][kk], v2_[jj][kk], ...) at a certain downstream distance from the inlet (here x = 0.19). xc[celli], yc[celli] and zc[celli] are pre-defined cell-centre positions in the domain. The mean velocity (Um_[jj][kk]) and the artificial fluctuations are generated on an N by M virtual grid, where 0 < jj < N and 0 < kk < M. dy and dz are the uniform grid spacings of the virtual grid.

The previous problem was that I pre-defined the plane where the fluctuations are imposed (i.e. saved the cell IDs on that specific plane) and tried to call these saved cell IDs at each time step. I suspect that these pre-saved cell IDs do not have the same order in serial and parallel computations; that's why it didn't work out in parallel.
I am not sure whether I have made myself clear, but thank you very much for your great comments.


impecca August 30, 2011 09:01

Hi, Alberto.

Could you tell me more about the 'correcting boundary conditions' point from your reply? Can you give a simple example of the required BC corrections following a field manipulation?
In my case, I change the cell-centre values on one plane inside the domain, and I do not see a reason for correcting the BCs there. Please correct me if I am wrong.


alberto August 30, 2011 10:52

The reason to update boundary conditions is that the domain decomposition in OpenFOAM considers the interface between different processors as a boundary condition (the processor patch).

If you directly access a field (e.g. in a forAll loop) to change its values, the processor patches won't be updated automatically. As a consequence, you might want to use correctBoundaryConditions().
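As a minimal sketch of that pattern (U here is just an example field):

    // directly modify the internal (cell-centre) values
    forAll(U, celli)
    {
        U[celli] *= 2.0;
    }

    // refresh the boundary values, including the processor patches,
    // so that neighbouring processors see the modified field
    U.correctBoundaryConditions();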


mikeP March 20, 2013 15:34

What if a volScalarField is scaled with the cmptMultiply function? Is correctBoundaryConditions() still necessary?

Also, does the cmptMultiply function update both internalField() and boundaryField()?

niklas March 20, 2013 16:47

Just a note...

If you are running a parallel case and using the Info stream to output information, remember that it only writes on the master processor.

Instead of Info, try using Sout if you want the information printed for every processor.
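For example, Pout also writes on every processor and prefixes each line with the processor number, while Sout writes without the prefix (a small sketch):

    Info<< "written by the master processor only" << endl;
    Pout<< "written by every processor, with a [procN] prefix" << endl;
    Sout<< "written by every processor, without the prefix" << endl;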

rribeiro June 26, 2013 12:29


Originally Posted by marupio (Post 306627)
That's the way parallel works. Each processor uses only 1/4 of the mesh and solves everything as if that were the entire mesh, except at the interfaces between the sub-meshes. Information passes across the interfaces using Pstream.


Could you explain this in a little more detail? For instance, what is the impact of parallel execution on the chosen equation solver?


marupio June 26, 2013 12:37

chegdan describes the dissection of the matrix fairly well here:
