www.cfd-online.com
Home > Forums > Software User Forums > OpenFOAM > OpenFOAM Programming & Development

The global index for cells and faces in parallel computation



Old   October 22, 2015, 11:24
The global index for cells and faces in parallel computation
  #1
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Hello,

When parallel computations are conducted with OpenFOAM, the domain is decomposed into several parts based on the number of processors. So the following loop, and the index "celli", will only cover the cells (or faces) on the local processor:

Code:
const volVectorField& cellcentre = mesh.C();

forAll(cellcentre,celli)
{
  X[celli] = cellcentre[celli][0];
  Y[celli] = cellcentre[celli][1];
  Z[celli] = cellcentre[celli][2];
}
I am interested to know if there is a way to obtain the indices that the cells and faces had before the domain was decomposed (as listed in constant/polyMesh/faces) in the lines of code above, when parallel computations are conducted. Thank you in advance.

Old   October 31, 2015, 10:25
  #2
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,963
Blog Entries: 45
wyldckat
Hi OFFO,

This is one of those situations that really depends on what your final objective is.

Because from your original description, the solution is pretty simple:
  1. Create a utility that creates a field with the ID of each cell. Pretty much simply copy the source code folder for "writeCellCentres":
    Code:
    run
    cd ..
    cp -r $FOAM_UTILITIES/postProcessing/miscellaneous/writeCellCentres writeCellIDs
  2. Adapt the source code to create the field with the cell IDs.
  3. Then in your custom solver, load the field that has the cell IDs.
You can do something similar for the faces.


But like I wrote above, it might depend on what is your actual objective.


Best regards,
Bruno

Old   October 31, 2015, 10:44
  #3
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Thank you so much, Bruno.

My objective is:

When I run parallel computations, all the cells and faces are looped over with their local processor indices, not the global indices (from before the mesh was decomposed). I would now like to obtain the global indices inside the loops during a parallel computation. Is there any method to do that? Thank you.
Quote:
Originally Posted by wyldckat View Post
This is one of those situations that really depends on what your final objective is. [...]

Old   October 31, 2015, 11:00
  #4
Retired Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,963
Blog Entries: 45
wyldckat
Quote:
Originally Posted by openfoammaofnepo View Post
Now I would like to get the global indices within the looping in the parallel computations. Is there any method to do that? Thank you.
Quick answer: like I wrote above, the idea is that you need to create a field with the cell IDs with another utility, before decomposing.
When you run decomposePar, the field will be decomposed automatically.
Then in your solver, you load the field just as you do any other field, such as "U" and "p".
And then, instead of looking up "cellcentre[celli]", you look up:
Code:
originalID = cellID[celli];
This will give you the original cell ID.

Old   October 31, 2015, 11:03
  #5
Senior Member
 
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 341
mkraposhin
You can read the cellProcAddressing arrays from the processor0 ... processorN folders.

See the example below:


Code:
List<List<label> > processCellToGlobalAddr_;
List<label> globalCellToProcessAddr_;

    if (Pstream::parRun())
    {
	processCellToGlobalAddr_.resize
	(
	    Pstream::nProcs()
	);
        
	//read local cell addressing
	labelIOList localCellProcAddr
	(
	    IOobject
	    (
		"cellProcAddressing",
		localMesh.facesInstance(),
		localMesh.meshSubDir,
		localMesh,
		IOobject::MUST_READ,
		IOobject::NO_WRITE
	    )
	);
	
	processCellToGlobalAddr_[Pstream::myProcNo()] = localCellProcAddr;
	
	//send local cell addressing to master process
	if (Pstream::master())
	{
	    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
	    {
		IPstream fromSlave(Pstream::scheduled, jSlave);
		label nSlaveCells = 0;
		fromSlave >> nSlaveCells;
		processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
		labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
		forAll(slaveCellProcAddr, iCell)
		{
		    fromSlave >> slaveCellProcAddr[iCell];
		}
	    }
	}
	else
	{
	    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
	    toMaster << localCellProcAddr.size();
	    
	    forAll(localCellProcAddr, iCell)
	    {
		toMaster << localCellProcAddr[iCell];
	    }
	}
	
	//redistribute cell addressing to slave processes
	if (Pstream::master())
	{
	    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
	    {
		OPstream toSlave (Pstream::scheduled, jSlave);
		forAll(processCellToGlobalAddr_, iProcess)
		{
		    const labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
		    const label nCells = thisProcessAddr.size();
		    toSlave << nCells;
		    forAll(thisProcessAddr, jCell)
		    {
			toSlave << thisProcessAddr[jCell];
		    }
		}
	    }
	}
	else
	{
	    IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());
	    forAll(processCellToGlobalAddr_, iProcess)
	    {
		labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
		label nCells = 0;
		fromMaster >> nCells;
		thisProcessAddr.resize(nCells);
		forAll(thisProcessAddr, jCell)
		{
		    fromMaster >> thisProcessAddr[jCell];
		}
	    }
	}

	//size the reverse map to the total (global) number of cells
	label nGlobalCells = 0;
	forAll(processCellToGlobalAddr_, jProc)
	{
	    nGlobalCells += processCellToGlobalAddr_[jProc].size();
	}
	globalCellToProcessAddr_.resize(nGlobalCells);

	forAll(processCellToGlobalAddr_, jProc)
	{
	    const labelList& jProcessAddr = processCellToGlobalAddr_[jProc];
	    forAll(jProcessAddr, iCell)
	    {
		label iGlobalCell = jProcessAddr[iCell];
		globalCellToProcessAddr_[iGlobalCell] = iCell;
	    }
	}
    }

Old   October 31, 2015, 16:18
  #6
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Bruno, thank you for your reply. I think your idea will work. This is a very clever method!

Quote:
Originally Posted by wyldckat View Post
Quick answer: like I wrote above, the idea is that you need to create a field with the cell IDs with another utility, before decomposing. [...]

Old   October 31, 2015, 16:19
  #7
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Hello mkraposhin,

Thank you so much for your help. I will try your method for my case. This is also a very clever approach!

Quote:
Originally Posted by mkraposhin View Post
You can read the cellProcAddressing arrays from the processor0 ... processorN folders. [...]

Old   October 31, 2015, 17:23
  #8
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Dear mkraposhin,

In the following lines, if I need to build the relation to the global indices for both the local cells and faces, how can I add the face-related communication to the following code:
Code:
    if (Pstream::master()) 
    {
        for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
        {
            IPstream fromSlave(Pstream::scheduled, jSlave);
            label nSlaveCells = 0;
            fromSlave >> nSlaveCells;
            processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
            labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
            forAll(slaveCellProcAddr, iCell)
            {
                fromSlave >> slaveCellProcAddr[iCell];
            }
        }
    }
    else 
    {
        OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
        toMaster << localCellProcAddr.size();

        forAll(localCellProcAddr, iCell)
        {
            toMaster << localCellProcAddr[iCell];
        }
    }
In plain MPI, we can use different tags for the MPI send and receive calls for cells and faces. But here I do not know how to use a tag to distinguish the cell and face communications. I would be grateful for any hints. Thank you.

Quote:
Originally Posted by mkraposhin View Post
You can read the cellProcAddressing arrays from the processor0 ... processorN folders. [...]

Old   November 1, 2015, 06:34
  #9
Senior Member
 
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 341
mkraposhin
Hi, I'm not sure that I understood your question correctly, but I will try to give a more detailed explanation of the code that I posted above.

For each MPI process (or processor) you can read the addressing arrays, which map the local indices of the mesh primitives to their global indices. These arrays are located in the processorN/constant/polyMesh folders, in the following files:
Code:
  boundaryProcAddressing 
  cellProcAddressing
  faceProcAddressing
  pointProcAddressing
  • boundaryProcAddressing - each element contains the global index of a patch that is present on the current process; for "processor" boundaries this index is -1. The size of this array equals the number of patches in the global mesh plus the number of "processor" patches in the current processor folder.
  • cellProcAddressing - each element contains the global index of the given local cell. The size of this array equals the number of cells on the current processor.
  • faceProcAddressing - each element contains the global index of the given local face. The size of this array equals the number of faces on the current processor.
  • pointProcAddressing - each element contains the global index of the given local point. The size of this array equals the number of points on the current processor.

You can read these arrays in each MPI process with code similar to the following:
Code:
	labelIOList localCellProcAddr
	(
	    IOobject
	    (
		"cellProcAddressing",
		mesh.facesInstance(),
		mesh.meshSubDir,
		mesh,
		IOobject::MUST_READ,
		IOobject::NO_WRITE
	    )
	);
or for faces

Code:
	labelIOList localFaceProcAddr
	(
	    IOobject
	    (
		"faceProcAddressing",
		mesh.facesInstance(),
		mesh.meshSubDir,
		mesh,
		IOobject::MUST_READ,
		IOobject::NO_WRITE
	    )
	);
Then you can store each local addressing array in the combined per-process list:

Code:
processCellToGlobalAddr_[Pstream::myProcNo()] = localCellProcAddr;
The variable Pstream::myProcNo() contains the id of the process: in the master process its value is 0, in process 1 its value is 1, and so on.
At this point the array processCellToGlobalAddr_ only contains the addressing of the current process; the addressing of the other processes is invisible. That is why, at the next step, you need to redistribute this information across the processes. The idea is simple:
1) send the addressing information from all processes to the master process (with id 0)
2) send the gathered information from the master process back to the other processes
Code:
	//send local cell addressing to master process
	if (Pstream::master())
	{
	    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
	    {
		IPstream fromSlave(Pstream::scheduled, jSlave);
		label nSlaveCells = 0;
		fromSlave >> nSlaveCells;
		processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
		labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
		forAll(slaveCellProcAddr, iCell)
		{
		    fromSlave >> slaveCellProcAddr[iCell];
		}
	    }
	}
	else
	{
	    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
	    toMaster << localCellProcAddr.size();
	    
	    forAll(localCellProcAddr, iCell)
	    {
		toMaster << localCellProcAddr[iCell];
	    }
	}
	
	//redistribute cell addressing to slave processes
	if (Pstream::master())
	{
	    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
	    {
		OPstream toSlave (Pstream::scheduled, jSlave);
		forAll(processCellToGlobalAddr_, iProcess)
		{
		    const labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
		    const label nCells = thisProcessAddr.size();
		    toSlave << nCells;
		    forAll(thisProcessAddr, jCell)
		    {
			toSlave << thisProcessAddr[jCell];
		    }
		}
	    }
	}
	else
	{
	    IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());
	    forAll(processCellToGlobalAddr_, iProcess)
	    {
		labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
		label nCells = 0;
		fromMaster >> nCells;
		thisProcessAddr.resize(nCells);
		forAll(thisProcessAddr, jCell)
		{
		    fromMaster >> thisProcessAddr[jCell];
		}
	    }
	}
At the last step, we may need to create the reverse addressing, from global cell id to local process cell id (or face id, or point id). Note that the reverse array must first be sized to the total number of global cells:
Code:
	label nGlobalCells = 0;
	forAll(processCellToGlobalAddr_, jProc)
	{
	    nGlobalCells += processCellToGlobalAddr_[jProc].size();
	}
	globalCellToProcessAddr_.resize(nGlobalCells);

	forAll(processCellToGlobalAddr_, jProc)
	{
	    const labelList& jProcessAddr = processCellToGlobalAddr_[jProc];
	    forAll(jProcessAddr, iCell)
	    {
		label iGlobalCell = jProcessAddr[iCell];
		globalCellToProcessAddr_[iGlobalCell] = iCell;
	    }
	}


Old   November 1, 2015, 10:07
  #10
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Dear Matvey,

Thank you so much for your detailed explanation. This is very helpful. If I would like to collect and send the data for both cells and faces in the following code:
Code:
	//send local cell addressing to master process
	if (Pstream::master())
	{
	    ... // gather from each slave, as in post #5
	}
	else
	{
	    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
	    toMaster << localCellProcAddr.size();
	    forAll(localCellProcAddr, iCell)
	    {
		toMaster << localCellProcAddr[iCell];
	    }
	}
The above is only the cell-related information. How do I add the sending and gathering of the face information based on the above code? I do not know how to distinguish the different pieces of information in the MPI calls. In MPI we always use an integer TAG to label the sent and gathered information, but I do not know how to achieve that here. Any hints?

Or, in other words, when we have multiple packets of data, how do we do the sending and gathering with OpenFOAM's parallelization facilities?

cheer,
OFFO

Old   November 1, 2015, 13:10
  #11
Senior Member
 
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 341
mkraposhin
In OpenFOAM, synchronization between processes can be done in several ways:

1) With the usage of MPI standard API - just include "mpi.h" and do what you want to do
2) With IPstream and OPstream class:
2a) with overloaded stream operators "<<" and ">>"
2b) with static functions IPstream::read and OPstream::write
3) reduce and scatter operations - Foam::reduce(...) and Foam::scatter(...) static functions

In the example posted above, I used method 2a): I don't need to care how the data is tagged, I only need to remember which data comes first. For example, if I want to send the cellProcAddressing and faceProcAddressing arrays to the master process, I must create an OPstream object on each slave process. Then I use it to send the data to the master process with the "<<" operator:

Code:
if ( ! Pstream::master() )
{
    OPstream toMaster (Pstream::scheduled, Pstream::masterNo());
    
    //send size of cellProcAddr to master
    toMaster << cellProcAddr.size();
    
    //send size of faceProcAddr to master
    toMaster << faceProcAddr.size();

    //send arrays to master
    toMaster << cellProcAddr;
    toMaster << faceProcAddr;
}
Now that the data has been sent to the master process, I need to read it in the master process in the same order in which the 'send' operations were performed:
Code:
if ( Pstream::master() )
{
    //read data from each slaves
    for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
    {
        IPstream fromSlave(Pstream::scheduled, jSlave);
        //read number of cells in slave process
        label nSlaveCells = 0;
        fromSlave >> nSlaveCells;

        //read number of faces in slave process
        label nSlaveFaces = 0;
        fromSlave >> nSlaveFaces;
 
        //create storage and read cellProcAddr from slave
        labelList slaveCellProcAddr (nSlaveCells);
        fromSlave >> slaveCellProcAddr;

        //create storage and read faceProcAddr from slave
        labelList slaveFaceProcAddr (nSlaveFaces);
        fromSlave >> slaveFaceProcAddr;
    }
}
That's all. The only rule: you must read the data in the receiving process in the same order as it was sent by the sending process.


Old   November 1, 2015, 13:19
  #12
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Dear Matvey,

Thank you so much! This is clear to me now. Your explanations are very good!

OFFO

Old   November 1, 2015, 14:09
  #13
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Dear Matvey,

Please allow me to ask one more question:

if I do the following:
Code:
1) With the usage of MPI standard API - just include "mpi.h" and do what you want to do
Then I can use all the MPI functions, independently of the OpenFOAM classes? Is that right?

Old   November 1, 2015, 14:20
  #14
Senior Member
 
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 341
mkraposhin
Quote:
Originally Posted by openfoammaofnepo View Post
Dear Matvey,

Please allow me to ask one more question:

if I do the following:
Code:
1) With the usage of MPI standard API - just include "mpi.h" and do what you want to do
Then I can use all the MPI functions which is not related to the OpenFOAM class? Is that right?
Yes, but I think it would be better to use OpenFOAM's API.

Old   November 2, 2015, 04:31
  #15
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Dear Matvey,

I have modified your code to send and receive the information for both faces and cells. If you could take a look, I would really appreciate it. Thank you.

Code:
List<List<label> > processCellToGlobalAddr_;
List<label> globalCellToProcessAddr_;

List<List<label> > processFaceToGlobalAddr_;
List<label> globalFaceToProcessAddr_;

if (Pstream::parRun())
{
    processCellToGlobalAddr_.resize
    (
        Pstream::nProcs()
    );

    processFaceToGlobalAddr_.resize
    (
        Pstream::nProcs()
    );
        
    //read local cell addressing
    labelIOList localCellProcAddr
    (
        IOobject
        (
        "cellProcAddressing",
        localMesh.facesInstance(),
        localMesh.meshSubDir,
        localMesh,
        IOobject::MUST_READ,
        IOobject::NO_WRITE
         )
    );

    //read local face addressing
    labelIOList localFaceProcAddr
    (
        IOobject
        (
            "faceProcAddressing",
            localMesh.facesInstance(),
            localMesh.meshSubDir,
            localMesh,
            IOobject::MUST_READ,
            IOobject::NO_WRITE
         )
    );

    processCellToGlobalAddr_[Pstream::myProcNo()] = localCellProcAddr;
    processFaceToGlobalAddr_[Pstream::myProcNo()] = localFaceProcAddr;

    //send local cell and face addressing to master process
    if (Pstream::master()) // if this is rank=0 processor
    {
        for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
        {
        IPstream fromSlave(Pstream::scheduled, jSlave);

            // the cell information
        label nSlaveCells = 0;
        fromSlave >> nSlaveCells;
        processCellToGlobalAddr_[jSlave].resize(nSlaveCells);
        labelList& slaveCellProcAddr = processCellToGlobalAddr_[jSlave];
        forAll(slaveCellProcAddr, iCell)
        {
            fromSlave >> slaveCellProcAddr[iCell];
        }

            // the face information
            label nSlaveFaces = 0;
            fromSlave >> nSlaveFaces;
            processFaceToGlobalAddr_[jSlave].resize(nSlaveFaces);
            labelList& slaveFaceProcAddr = processFaceToGlobalAddr_[jSlave];
            forAll(slaveFaceProcAddr, iFace)
            {
                fromSlave >> slaveFaceProcAddr[iFace];
            }

        }
    } 
    else // if this is a slave (non-master) processor
    {
        OPstream toMaster (Pstream::scheduled, Pstream::masterNo());

        // cell information
        toMaster << localCellProcAddr.size();
    
        forAll(localCellProcAddr, iCell)
        {
        toMaster << localCellProcAddr[iCell];
        }

        // face information
        toMaster << localFaceProcAddr.size();

        forAll(localFaceProcAddr, iFace)
        {
            toMaster << localFaceProcAddr[iFace];
        }

    }
    
    //redistribute cell and face addressing to slave processes
    if (Pstream::master())
    {
        for (label jSlave=Pstream::firstSlave(); jSlave<=Pstream::lastSlave(); jSlave++)
        {
        OPstream toSlave (Pstream::scheduled, jSlave);

            // cell information
        forAll(processCellToGlobalAddr_, iProcess)
        {
            const labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
            const label nCells = thisProcessAddr.size();
            toSlave << nCells;
            forAll(thisProcessAddr, jCell)
            {
            toSlave << thisProcessAddr[jCell];
            }
        }

            // face information
            forAll(processFaceToGlobalAddr_, iProcess)
            {
                const labelList& thisProcessAddr = processFaceToGlobalAddr_[iProcess];
                const label nFaces = thisProcessAddr.size();
                toSlave << nFaces;
                forAll(thisProcessAddr, jFace)
                {
                    toSlave << thisProcessAddr[jFace];
                }
            }

        }
    }
    else
    {
        IPstream fromMaster(Pstream::scheduled, Pstream::masterNo());

        // cell information
        forAll(processCellToGlobalAddr_, iProcess)
        {
        labelList& thisProcessAddr = processCellToGlobalAddr_[iProcess];
        label nCells = 0;
        fromMaster >> nCells;
        thisProcessAddr.resize(nCells);
        forAll(thisProcessAddr, jCell)
        {
            fromMaster >> thisProcessAddr[jCell];
        }
        }

        // face information
        forAll(processFaceToGlobalAddr_, iProcess)
        {
            labelList& thisProcessAddr = processFaceToGlobalAddr_[iProcess];
            label nFaces = 0;
            fromMaster >> nFaces;
            thisProcessAddr.resize(nFaces);
            forAll(thisProcessAddr, jFace)
            {
                fromMaster >> thisProcessAddr[jFace];
            }
        }

    }

    // Set the relation between local and global cell indices
    // (size the reverse map to the total number of global cells first)
    label nGlobalCells = 0;
    forAll(processCellToGlobalAddr_, jProc)
    {
        nGlobalCells += processCellToGlobalAddr_[jProc].size();
    }
    globalCellToProcessAddr_.resize(nGlobalCells);

    forAll(processCellToGlobalAddr_, jProc)
    {
        const labelList& jProcessAddr = processCellToGlobalAddr_[jProc];
        forAll(jProcessAddr, iCell)
        {
            label iGlobalCell = jProcessAddr[iCell];
            globalCellToProcessAddr_[iGlobalCell] = iCell;
        }
    }

    // Set the relation between local and global face indices
    label nGlobalFaces = 0;
    forAll(processFaceToGlobalAddr_, jProc)
    {
        nGlobalFaces += processFaceToGlobalAddr_[jProc].size();
    }
    globalFaceToProcessAddr_.resize(nGlobalFaces);

    forAll(processFaceToGlobalAddr_, jProc)
    {
        const labelList& jProcessAddr = processFaceToGlobalAddr_[jProc];
        forAll(jProcessAddr, iFace)
        {
            label iGlobalFace = jProcessAddr[iFace];
            globalFaceToProcessAddr_[iGlobalFace] = iFace;
        }
    }

}

Old   November 2, 2015, 04:39
  #16
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Dear Matvey,

How did you define "localMesh" in the following:

Code:
    //read local cell addressing
    labelIOList localCellProcAddr
    (
        IOobject
        (
            "cellProcAddressing",
            localMesh.facesInstance(),
            localMesh.meshSubDir,
            localMesh,
            IOobject::MUST_READ,
            IOobject::NO_WRITE
         )
    );
Thank you. When I compiled, I got the following error:

Code:
CFD_Cell_Local2Glob.H(27): error: identifier "localMesh" is undefined
            localMesh.facesInstance(),

Old   November 2, 2015, 04:45
  #17
Senior Member
 
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 341
mkraposhin
localMesh is simply mesh in a standard OpenFOAM solver.
Also, according to the OpenFOAM coding style, you should remove the trailing underscores "_" from these variable names:

List<List<label> > processCellToGlobalAddr_;
List<label> globalCellToProcessAddr_;
List<List<label> > processFaceToGlobalAddr_;
List<label> globalFaceToProcessAddr_;

In my example these variables were protected members of a class, but in your case (if I understand it correctly) they are used as global variables in the main(...) procedure.

Old   November 2, 2015, 05:00
  #18
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Hi Matvey,

The compilation is successful now. Thank you.

Old   November 2, 2015, 05:11
  #19
Senior Member
 
Matvey Kraposhin
Join Date: Mar 2009
Location: Moscow, Russian Federation
Posts: 341
mkraposhin
If you want to check that everything works, you can dump the arrays to files (with the OFstream class). If the files from all processors are identical, then your program works as you expected.

Old   November 2, 2015, 05:17
  #20
Senior Member
 
Join Date: Jan 2013
Posts: 371
openfoammaofnepo
Thank you so much. In OpenFOAM, how can I write an array to a file? I have always printed it to the screen, but that does not work well when the data set is very large. Thank you.
