CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   Reduce operation for an array (https://www.cfd-online.com/Forums/openfoam-solving/58174-reduce-operation-array.html)

xiao December 14, 2008 00:01

Hi all,

I am writing a parallel code and need to do a reduce operation on a vector array, i.e.

someScalar = some number in local cpu;
someArray = new vector[100];

For the scalar it should be:
reduce(someScalar, sumOp<scalar>());

What would be the corresponding operation for the array? Would this work?

reduce(someArray, sumOp<vector[]>());

Doing this element-wise would be slow, I imagine, because there is too much communication involved.

Thanks very much!

Best,
Heng

xiao December 15, 2008 12:04

Does anyone have experience to share? Or are the experts all on vacation now?

MPI has MPI_Allreduce, MPI_Reduce, and many other functions. Does OpenFOAM have equivalent wrappers for them? Where should I look for them? Pstream only has a few of them.

Heng

xiao December 22, 2008 15:48

After some (actually, a lot of) research, I finally figured it out.

The functions I was looking for were:
listCombineGather() and listCombineScatter().

But these only work for lists, not for arrays... though I guess it is not hard to change arrays to lists. Changing the definition is enough; List supports index operations.

For the reference of those used to programming in raw MPI style: scatter(), gather(), combineScatter(), and combineGather() are different from MPI_Scatter and MPI_Gather; scatter() and combineScatter() correspond to MPI_Bcast, while gather() and combineGather() correspond to MPI_Reduce.

MPI_Gather and MPI_Scatter correspond to gatherList() and scatterList() in Foam.
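
For readers landing here later, a minimal sketch of this pattern (not from the original post), assuming the Pstream API where listCombineGather()/listCombineScatter() are static member functions and the combine operator modifies its first argument in place; myLocalCount is a hypothetical per-processor value:

Code:

// Minimal sketch: element-wise combination of a labelList across processors.
// "myLocalCount" is a hypothetical value computed on each processor.

labelList counts(Pstream::nProcs(), 0);
counts[Pstream::myProcNo()] = myLocalCount;

// Combine the lists element by element onto the master (roughly MPI_Reduce)
Pstream::listCombineGather(counts, plusEqOp<label>());

// Broadcast the combined list back to all processors (roughly MPI_Bcast)
Pstream::listCombineScatter(counts);

// After this, counts[i] holds the value contributed by processor i, on every processor.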

ntrask December 23, 2008 10:57

To use all of the gather/scatter operations, the object that you're gathering needs to have the << and >> operators overloaded and certain constructors defined. You should use a list instead of an array (they're pretty much the same anyway) because it has the interface set up.

-Nat
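
A minimal sketch of this suggestion (not from the original posts): replace the raw new vector[100] array from the first post with a vectorField (the name someField is just illustrative), which already defines the stream operators and constructors that Pstream needs, and then reduce it directly with the sumOp discussed later in the thread:

Code:

// Minimal sketch: use a List-based type (here a vectorField, i.e. Field<vector>)
// instead of a raw C array, so the Pstream << and >> machinery is available.

vectorField someField(100, vector::zero);   // instead of: vector* someArray = new vector[100];

// ... fill someField with local values on each processor ...

// Element-wise sum over all processors
reduce(someField, sumOp<vectorField>());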

xiao December 23, 2008 17:33

Do you have some example code doing gather/scatter/reduce operations on data structures other than scalars and lists? The following is from the function cloud::move(). This is an operation on a labelListList, but it requires a custom-defined combine operation.

// List of the numbers of particles to be transfered across the
// processor patches
labelList nsTransPs(transferList.size());

forAll(transferList, i)
{
    nsTransPs[i] = transferList[i].size();
}

// List of the numbers of particles to be transfered across the
// processor patches for all the processors
labelListList allNTrans(Pstream::nProcs());
allNTrans[Pstream::myProcNo()] = nsTransPs;
combineReduce(allNTrans, combineNsTransPs());

*********************************************************

class combineNsTransPs
{

public:

    void operator()(labelListList& x, const labelListList& y) const
    {
        forAll(y, i)
        {
            if (y[i].size())
            {
                x[i] = y[i];
            }
        }
    }
};
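
For comparison, a minimal sketch (not taken from cloud::move()) of how the same per-processor list could be assembled with gatherList()/scatterList(), the Foam counterparts of MPI_Gather/MPI_Scatter mentioned above:

Code:

// Hypothetical alternative to the combineReduce() call above, using
// gatherList()/scatterList().

labelListList allNTrans(Pstream::nProcs());
allNTrans[Pstream::myProcNo()] = nsTransPs;

Pstream::gatherList(allNTrans);    // master collects every processor's list
Pstream::scatterList(allNTrans);   // send the complete list back to everyone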

Fransje July 20, 2011 08:08

A few years late, but it might help other people looking for the same information.

As far as I know, it is possible to reduce all the variable types defined in OpenFOAM.

So for a vector, one would do:

Code:

vector myVector(vector::zero);

//do something with myVector

reduce( myVector, sumOp<vector>() );

Of course, the same can also be done using other variable types. For example:

Code:

scalarField myScalarField( theSizeYouWant, 0.0);

//do something with myScalarField

reduce( myScalarField, sumOp<scalarField>() );

Of course, other reduceOps can be used, like minOp<>(), maxOp<>() etc.

Looping over a field and reducing it element by element is not a good idea, because it is prohibitively expensive in inter-processor communication and can slow your simulation to the point where it would be faster to run it on one processor.
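
To make that concrete, a small sketch (not from the original post): do the local reduction first and then a single global reduce of the result, here with the minOp/maxOp mentioned above and the myScalarField from the example; localMax and localMin are just illustrative names.

Code:

// Local (per-processor) extrema of the field from the example above ...
scalar localMax = max(myScalarField);
scalar localMin = min(myScalarField);

// ... followed by a single global reduce, so only one value per processor
// is communicated.
reduce(localMax, maxOp<scalar>());
reduce(localMin, minOp<scalar>());

OpenFOAM also provides convenience functions such as gSum(), gMin() and gMax(), which combine the local and global steps for fields in one call.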

DoQuocVu August 21, 2017 00:35

Hi everyone,

Sorry for bothering you on such an old thread, but I'm having a problem using OpenFOAM's reduce().

I have Ut, which is a vectorField, and I want to reduce it, so I've tried both of the ways below. First, I just wanted to check whether OpenFOAM can reduce a vectorField, so I used:
Code:

reduce (Ut, sumOp<vectorField>() );
The compilation is fine, but when I run my test case with 2 processors, it returns this output:
Code:

Courant Number mean: 0.34375 max: 0.34375
smoothSolver:  Solving for Ux, Initial residual = 1, Final residual = 2.40442e-06, No Iterations 8
smoothSolver:  Solving for Uy, Initial residual = 0, Final residual = 0, No Iterations 0
smoothSolver:  Solving for Uz, Initial residual = 0, Final residual = 0, No Iterations 0
[0] #0  Foam::error::printStack(Foam::Ostream&) at ??:?
[0] #1  Foam::sigFpe::sigHandler(int) at ??:?
[0] #2  ? in "/lib/x86_64-linux-gnu/libpthread.so.0"
[0] #3  ? at ??:?
[0] #4  ? at ??:?
[0] #5  ? at ??:?
[0] #6  ? at ??:?
[0] #7  __libc_start_main in "/lib/x86_64-linux-gnu/libc.so.6"
[0] #8  ? at ??:?
[vu:10719] *** Process received signal ***
[vu:10719] Signal: Floating point exception (8)
[vu:10719] Signal code:  (-6)
[vu:10719] Failing at address: 0x3e8000029df
[vu:10719] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f15fa203390]
[vu:10719] [ 1] /lib/x86_64-linux-gnu/libpthread.so.0(raise+0x29)[0x7f15fa203269]
[vu:10719] [ 2] /lib/x86_64-linux-gnu/libpthread.so.0(+0x11390)[0x7f15fa203390]
[vu:10719] [ 3] inertialShortingIbmFoam[0x463e68]
[vu:10719] [ 4] inertialShortingIbmFoam[0x463f98]
[vu:10719] [ 5] inertialShortingIbmFoam[0x468e7b]
[vu:10719] [ 6] inertialShortingIbmFoam[0x41d5a8]
[vu:10719] [ 7] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0)[0x7f15f9e48830]
[vu:10719] [ 8] inertialShortingIbmFoam[0x421379]
[vu:10719] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 10719 on node vu exited on signal 8 (Floating point exception).
--------------------------------------------------------------------------

Then I tested reduce with scalarFields:
Code:

    scalarField send_data_Ux(Ut.size());
    scalarField send_data_Uy(Ut.size());
    scalarField send_data_Uz(Ut.size());
    for(int i=0;i<Ut.size();i++)
    {
        send_data_Ux[i] = Ut[i].x();
        send_data_Uy[i] = Ut[i].y();
        send_data_Uz[i] = Ut[i].z();
    }
    reduce(send_data_Ux, sumOp<scalarField>() );
    reduce(send_data_Uy, sumOp<scalarField>() );
    reduce(send_data_Uz, sumOp<scalarField>() );

The error message remains the same, although the compilation is OK. I've printed out some of the send_data_Ux values, and apparently only processor 1 is working, because I don't see the printed values from processor 0.

So do you have any idea what the problem actually is here? I would very much appreciate it if you could give me some hints to solve this.

Thanks in advance,
Vu

DoQuocVu August 21, 2017 03:20

Sorry for the inconvenience,

I've figured out that the problem is in another part of my library. The OpenFOAM reduce is working just fine.

ndtrong May 1, 2020 09:47

Quote:

Originally Posted by DoQuocVu (Post 661352)
Sorry for the inconvenience,

I've figured out that the problem is in another part of my library. The OpenFOAM reduce is working just fine.

Hi Vu,

Could you please share with me how you solved this problem? I am stuck with this problem too.

Thanks

sadsid June 20, 2022 13:34

Quote:

Originally Posted by ndtrong (Post 768195)
Hi Vu,

Could you please share with me how you solved this problem? I am stuck with this problem too.

Thanks

Hi

Have you found any solution? I'm facing the same issue!

DoQuocVu June 21, 2022 10:01

Hi Trong,
It's been too long, and I can't recall off the top of my head what the problem was or how I solved it. I didn't document what I did back then either. Sorry!
Vu

