CFD Online Discussion Forums
OpenFOAM Programming & Development > Gather/scatter in boundary code

ngj October 22, 2013 08:40

Gather/scatter in boundary code
Dear all,

I have an implementation issue that is giving me a lot of problems. I need to perform some manual exchange of parallel data in my code, because two boundaries are physically connected but not necessarily placed on the same processor. For that I am using gatherList and scatterList, and the following is a dummy example:


List<scalarField> test(Pstream::nProcs());
test[Pstream::myProcNo()].setSize(100, Pstream::myProcNo());

// Exchange so that every processor ends up with the full list
Pstream::gatherList(test);
Pstream::scatterList(test);

Pout << test << endl;

If I place this piece of code in a solver, then I get the expected output, namely:


[1] 2
[1] (
[1] 100{0}
[1] 100{1}
[1] )
[0] 2
[0] (
[0] 100{0}
[0] 100{1}
[0] )

But if I place the exact same code inside a boundary condition, where I need to exchange the information between the processors, I get the following error:


[0] error in IOstream "IOstream" for operation operator>>(Istream&, List<T>&) : reading first token
[0] file: IOstream at line 0.
[0]    From function IOstream::fatalCheck(const char*) const
[0]    in file db/IOstreams/IOstreams/IOstream.C at line 109.

I have tried leaving either gatherList or scatterList out of the code, but the result is the same.

It should be said that the code compiles irrespective of where I place it.

Any help on this problem is greatly appreciated.

Kind regards,


ngj October 23, 2013 02:37

Good morning,

I just wanted to post the solution; however, as it is a work-around, I would still very much like an explanation of the behaviour of gatherList/scatterList inside a boundary condition.

The solution:
1. Apply gatherList/scatterList on a List<scalarField> in the solver.
2. Concatenate the List<scalarField> into a single scalarField, where the contributions from the processors are placed one after another.
3. Put this information into an IOField<scalar>.
4. Since the IOField<scalar> is registered in the database, it can be looked up with mesh.thisDb().lookupObject<scalarField>("name").

Kind regards


bigphil October 24, 2013 07:02

Hi Niels,

Just a thought:

When you do a gather/scatter, all the processors must call it at the same time; if not, you will get an error.
So maybe some of the processors get stuck at earlier boundary conditions waiting for another global call, or maybe boundaries with zero faces skip the gather/scatter operations.
I am not sure, but those are my thoughts.

Best regards,

ngj October 24, 2013 14:14

Good evening Philip,

Thanks for the thoughts, which made me go back for a bit of testing. I downscaled to a less complex problem and took one of my wave boundary conditions, which is about as simple as it gets.

I added the gather/scatter part and it fails.
I tried decomposing, such that all processors had faces on the particular boundary, and it fails.
I substituted the gather/scatter with a simple

label a = 1;
reduce(a, sumOp<label>());

and it fails.

It should be said that the last tests were done on another computer, so the problem seems to be portable. The problem only occurs when I place the code snippet inside updateCoeffs(): the boundary condition makes it through construction and then fails at the first call in the first time step.

Any ideas for better understanding this behaviour?

Kind regards


flames February 3, 2014 19:03

Hello Foamers,

Can I ask a question about gatherList and scatterList? Are they similar in function to the MPI routines MPI_Gather and MPI_Scatter? Specifically:

MPI_Gather: collects the data from all the processors and stores the collected data in one array on the root.
MPI_Scatter: sends data from the root to the other processors.

Is my understanding correct?

