Home > Forums > OpenFOAM Programming & Development

How to do communication across processors


July 19, 2014, 09:10
How to do communication across processors
New Member
Salman Arshad
Join Date: May 2014
Posts: 4
Hello everybody!:)
I hope you all are doing well with OpenFOAM.

I am working on parallelizing a code in OpenFOAM. The code does large eddy simulation (LES) in OpenFOAM and uses the linear eddy model (LEM) as a sub-grid combustion model. In simple words, the linear eddy model does a direct numerical simulation (DNS) on a 1-D line. If you are interested, you can read the details in this PhD thesis:

In the code, a new LEM line is initialized for every LES cell in OpenFOAM using 'boost::ptr_vector'. See the figure below just to get an idea. In this figure we have 16 LES cells (a 2-D case with one cell in the Z-direction), and in each of these LES cells a new LEM line is initialized.

If these LEM lines did not communicate, then parallelizing this code in OpenFOAM using domain decomposition would be really easy, because each LEM line belongs to one LES cell only. See the following figure to get an idea of the domain decomposition in this case.

The real problem arises because these LEM lines communicate with each other. The communication between LEM lines is required for the large-scale advection process, in which large-scale (resolved) flow moves across LES cells. In the code this process is modelled by exchanging parts (LEM cells) of a LEM line with the neighbouring LEM lines (to ensure conservation of mass). Interested readers can find the details about large-scale advection (also known as splicing) in the above-mentioned PhD thesis. The figure below is taken from the thesis to give a general idea of splicing.

In the code we just loop over all internal (LES) faces of the domain (using mesh::owner()) and do the splicing (exchange of LEM cells). In serial mode this is not a problem because splicing is done over the whole domain (on all internal faces). However, when the domain is decomposed using 'decomposePar' and the same code is run in parallel, the faces on processor boundaries are left out: splicing still happens on the internal faces of each processor, but not across processor boundaries.
One idea I am working on now is to make, at each time step, local copies of the neighbouring LEM lines across each processor boundary and do the splicing with those. To understand the idea, see the figure below.

In the figure above only processor 1 is considered (the concept is the same for all processors). To do splicing on the processor-boundary faces of processor 1, the LEM lines of (LES) cells 5, 7, 9 and 10 will be copied, and the splicing (for the processor-boundary faces) will be done using these local copies.

This is just one idea, and for it I am having difficulties finding out how to identify all the LES cells neighbouring my current processor and, more importantly, how to copy LEM lines across processor boundaries. How do I use Pstream for this copying operation? Pstream may only handle certain types of data, and the data I need to copy may not be among them. Also, the LEM lines are held in a 'boost::ptr_vector', and I do not want to copy the pointers to the LEM lines but the whole LEM lines (the objects the 'boost::ptr_vector' points to).

This is just an idea, and because I am new to parallelization in OpenFOAM I have little knowledge of the implementation details. Any better idea for implementing parallel 'splicing' in OpenFOAM, and if possible any help with the implementation of my current idea, will be highly appreciated. :)

Many thanks in advance :)
saloo is offline   Reply With Quote

August 11, 2014, 01:46
Super Moderator
Niklas Nordin
Join Date: Mar 2009
Location: Stockholm, Sweden
Posts: 693
Maybe this piece of code will help you understand.
It just sends a random-valued vector from each processor to the main processor.

    Random rnd(0);

    // get the number of processors
    label n = Pstream::nProcs();

    // generate a random vector; the seed is the same on all processors,
    // so advance the generator once per processor rank to get a
    // different vector on each processor
    vector localV = rnd.vector01();
    for (label i = 0; i < Pstream::myProcNo(); i++)
    {
        localV = rnd.vector01();
    }

    // print the vector on each processor (so we can compare it later
    // when we print the gathered list on the main proc)
    Sout << "[" << Pstream::myProcNo() << "] V = " << localV << endl;

    if (Pstream::myProcNo() == 0)
    {
        List<vector> allV(n, vector::zero);
        allV[0] = localV;

        for (label i = 1; i < n; i++)
        {
            // create the input stream from processor i
            IPstream vStream(Pstream::blocking, i);
            vStream >> allV[i];
        }

        // print the list of all vectors on the main proc
        Info << allV << endl;
    }
    else
    {
        // create the stream to send to the main proc
        OPstream vectorStream(Pstream::blocking, 0);
        vectorStream << localV;
    }

August 11, 2014, 12:49
Senior Member
David Gaden
Join Date: Apr 2009
Location: Winnipeg, Canada
Posts: 436
This looks like a good candidate for Pstream::exchangeList.

Put the data you need to send to the other processors in a list with size Pstream::nProcs() (one sub-list per destination processor), initialize another list of the same datatype for receiving (also of size nProcs()), then call Pstream::exchangeList, and the data intended for each processor will be distributed efficiently.

Like this (pseudo code):

    scalarListList sendData(Pstream::nProcs());
    scalarListList recvData(Pstream::nProcs());
    // fill in sendData with what you want to send...

    Pstream::exchangeList(sendData, recvData);
    // recvData now has all the data you need, including sendData
    // intended for itself
I'm probably missing something... but this should give you an idea. You can look through the code for other instances where exchangeList is used.
Follow me on twitter @DavidGaden

December 1, 2015, 11:01
Senior Member
Andrea Ferrari
Join Date: Dec 2010
Posts: 305
Hello Niklas,

I have just one question regarding your example. Is the list you "reconstructed" on the main processor (allV in your example) automatically available on all the other processors? If not, how can I send the global list to the other processors?

I am trying to do a similar thing in my code (a modified version of interFoam). I have a list of vectors on each processor. What I want to do is send each list of vectors to the main processor, reconstruct the "global" list and then make it available to all processors. The vector list contains points on a surface, and I want to calculate the minimum distance between each point of the mesh and the surface (that is why I need the global list on each processor).



December 4, 2015, 10:47
Senior Member
Andrea Ferrari
Join Date: Dec 2010
Posts: 305
After a quick search on the forum, it seems Pstream::gatherList and ListListOps::combine might do what I need. Let me explain my problem better. I have a pointField named "ppSurf" which contains points on a surface. In a serial calculation ppSurf contains all the points of the surface, while in a parallel calculation ppSurf contains only the surface points that lie on the current processor.
This is what I did to gather the points on the master:

 // List with size equal to number of processors
 List<pointField> gatheredData(Pstream::nProcs());

 // Populate and gather the list onto the master processor
 gatheredData[Pstream::myProcNo()] = ppSurf;
 Pstream::gatherList(gatheredData);
Now "gatheredData" contains n sublists with all the points, where n is the number of processors. Fine so far.

Then I tried to use ListListOps::combine (src/OpenFOAM/containers/Lists/ListListOps/ListListOps.H) to combine the elements of the n sublists into one big list:

   pointField ppSurfGlobal
   (
       ListListOps::combine<pointField>
       (
           gatheredData,
           ppSurf()      //<--- not sure what i have to put here
       )
   );

However, this does not compile. The error is at the line "ppSurf()": OF complains that a pointField does not have an "operator()". The example in the doxygen says this argument should be the access operator used to access the individual elements of the sublists, but I am not sure what that means in my case.


