MPI (Allreduce) over 1D and 2D arrays

#1 | June 12, 2012, 11:40
valerio (Alucard) | Member | Join Date: Jun 2009 | Location: Nancy | Posts: 30
Good morning. I'm trying to parallelise a part of the code I'm developing.
A friend who works with MPI (in a Fortran environment) helped me and wrote me some "pseudo-code" instructions that I have to translate into C++/OpenFOAM language.

1. I define an array of integers which contains the number of "interesting" points for each subdomain.
I defined it as:
int nmiss[nProc];
where nProc is the number of processes involved.

After some operations I need to "reduce" it, and (following what my colleague wrote) I try to convert

MPI_Allreduce(nmiss,1,nproc,MPI_INTEGER,MPI_SUM,Communicator)

into something like: reduce(nmiss, sumOp<int>());


Then I repeat the same operation for a 2D array:
double Pext_miss[nProc][nguess];
..
..
..
MPI_Allreduce(Pext_miss,1,(nproc*nmissmax*3),MPI_REAL8,MPI_SUM,Communicator)
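
For reference, I think the plain-MPI form my friend means is something like this (my own sketch, not tested; I'm using MPI_COMM_WORLD in place of "Communicator"):

Code:
// Sketch of the plain-MPI calls as I understand them: sum the arrays
// element-wise across all ranks, result available on every rank.
// MPI_IN_PLACE means the result overwrites the input buffer.
MPI_Allreduce(MPI_IN_PLACE, nmiss, nProc, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
MPI_Allreduce(MPI_IN_PLACE, Pext_miss, nProc*nguess, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);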


Can someone please help me?
Thank you
Valerio

#2 | June 12, 2012, 17:17
Bernhard Gschaider (gschaider) | Assistant Moderator | Join Date: Mar 2009 | Posts: 4,225
Quote:
Originally Posted by Alucard
(the full question quoted above)
Not quite sure what you're trying to achieve, but maybe one of these functions already does what you want: http://foam.sourceforge.net/docs/cpp/a01604.html

#3 | June 13, 2012, 17:20
Hannes Kröger (hannes) | Senior Member | Join Date: Mar 2009 | Location: Rostock, Germany | Posts: 123
Hello Valerio,

just some comments.
To my understanding, "reduce" combines the data of a single variable from each processor. Like this:

int nmiss;

/* do something that results in a different value on each processor, e.g.: */
nmiss = Pstream::myProcNo() + 1;

reduce(nmiss, sumOp<int>());

Pout << nmiss << endl; /* nmiss is now 1+2+...+np = np*(np+1)/2 on all procs */

I'm not sure if "reduce" works for lists. Maybe it does if you create a proper xxxOp class (or do such classes exist already and I overlooked them?).

Anyway, to sync lists among processors, I use constructs like the following:

labelList offsets(Pstream::nProcs(), 0);
offsets[Pstream::myProcNo()]=nFaces; /* nFaces has different value on each processor */

Pstream::listCombineGather(offsets, maxEqOp<label>());
Pstream::listCombineScatter(offsets);


Afterwards, the list "offsets" is identical on every processor and contains the "nFaces" value from each processor.
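
By the way, for your second (two-dimensional) array of doubles you could flatten it into a single list and use the same gather/scatter pattern with plusEqOp, which gives the element-wise sum that your MPI_Allreduce performs. A sketch only, untested (nsd and nmissmax are taken from your pseudo-code; the value here is just an example):

Code:
const label nsd = 3;         // space dimensions
const label nmissmax = 10;   // example value; in your code it comes from max(nmiss)

// One flat list instead of nProc x (nmissmax*nsd). Each processor fills
// only its own slice and leaves the rest zero, so the element-wise sum
// reassembles the complete array on every processor.
List<scalar> PextMissFlat(Pstream::nProcs()*nmissmax*nsd, 0.0);

// ... fill PextMissFlat[Pstream::myProcNo()*nmissmax*nsd + k] locally ...

Pstream::listCombineGather(PextMissFlat, plusEqOp<scalar>());
Pstream::listCombineScatter(PextMissFlat);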

Hope that helps you somehow.

Regards, Hannes

#4 | June 13, 2012, 18:05
valerio (Alucard) | Member | Join Date: Jun 2009 | Location: Nancy | Posts: 30
Thank you for your answers. To be honest, I have to study a little to understand what you mean, but that's my fault of course!
I'll try to be clearer.
A part of my code (copied below) searches for points P that correspond to the centres of volume cells in the domain, and moving from each point P it computes a new point called Pext that is not necessarily at the centre of a cell. I compute the difference deltaCl of a variable called C_liq between the points P and Pext via interpolation, and I store this value at the location of P (I need this variable to compute a local velocity at the points P).
The problem arises in parallel: if P lies in one subdomain, there are cases (close to the subdomain boundaries) where Pext belongs to another subdomain. So I somehow have to collect these "missed" points and compute deltaCl for them.

Below are the modifications I have made to my code since yesterday (that's why I hadn't replied yet; gschaider, thank you!!).

In the following I first show the "serial" version of the code part I'm talking about, and after that I'll copy what my friend told me to do.
I've almost translated all of his pseudo-code except the reduce commands.

Code:
forAll(mesh.cellCentres(), cellI)
{
    if (alpha1[cellI] <= 0.99 && alpha1[cellI] >= 0.01)
    {
        vector P = mesh.cellCentres()[cellI];

        vector N = -nHat[cellI];   // local normal vector

        double coordinate = W.value()*Foam::log((1. - alpha1[cellI])/alpha1[cellI]);

        vector Pext = P - coordinate*N;

        label cellK = mesh.findCell(Pext);
        scalar Clext = ClInterpolator->interpolate(Pext, cellK);
        scalar Clint = 21.92;
        scalar omega = (Clint - Clext)/(Clint*(1. - kp0.value()));
        tip_velocity[cellI] = Iv(omega, kp0.value(), 2.4e-07, liqSlope.value(), 20., -W.value()*Foam::log((1. - 0.95)/0.95), D_liq.value());   // vtip function
    }
}


Below is the equivalent "parallel" version in the pseudo-code formulation. Tomorrow morning, when I've finished modifying it, I'll post the parallel version in proper C++/OpenFOAM format. What you have to take from the code below (if it's possible to understand; I know it's a little confusing) is just the Allreduce commands, which are what I have to translate into OpenFOAM-compatible language.

Code:
!nsd='Number of cartesian space dimensions'
nguess =nsd*nboundarycells !guess maximum number of missed Pext*number of space dimensions. (Number of cartesian space dimensions) * (number of cells at boundary of partition) 

nmiss(1:nProc)=0                    !Initialize number of misses
Pext_miss(1:nProc,1:nguess)=0.0                !Zero the coordinates. must be initialized to zero for MPI_Allreduce to work correctly
nguess2=nboundarycells
CellImiss(1:nguess2)=0                    !Local matrix, not for communication

forAll (mesh.cellCentres(),cellI)
{
    if(alpha1[cellI]<=0.99 && alpha1[cellI]>=0.01) 
    {
          
          vector P = mesh.cellCentres()[cellI];
          vector N = -nHat[cellI];  // local normal vector
          
          double coordinate=W.value()*Foam::log((1.-alpha1[cellI])/alpha1[cellI]);
          
          vector Pext = P - coordinate*N;
          
          label cellK=mesh.findCell(Pext);
          if (cellK .NE. -1)then
            scalar Clext = ClInterpolator->interpolate(Pext, cellK);
            scalar Clint=21.92;  
            scalar omega=(Clint-Clext)/(Clint*(1.-kp0.value()));                
            tip_velocity[cellI]=Iv(omega, kp0.value(), 2.4e-07, liqSlope.value(),20.,-W.value()*Foam::log((1.-0.95)/0.95), D_liq.value());   // vtip function
          else
            nmiss(myProc)=nmiss(myProc)+1                                                !Number of missed Pext
Pext_miss(myProc,(nmiss(myProc)-1)*nsd+1:nmiss(myProc)*nsd)=Pext(1:nsd)       !Stored vector of missed Pext
            CellImiss(nmiss(myProc))=cellI
          end if
      }   
} 

call MPI_Allreduce(nmiss,1,nproc,MPI_INTEGER,MPI_SUM,Communicator)        !Communicate nmiss and do a parallel SUM of nmiss. i.e. fill in all entries other than (proc==myProc)
nmissmax=max(nmiss)                                                        !Maximum number of misses for all procs

!Resize the array of missed vectors. Important for communication overhead in cpu-time
define Pext_miss_resized(1:nProc,1:nmissmax*nsd)                          !define new array with correct size based on "nmissmax"  
Pext_miss_resized(1:nProc,1:nmissmax*nsd)=Pext_miss(1:nProc,1:nmissmax*nsd)                    !Transfer to correctly sized arrays

!Communicate Pext_miss_resized to find all missed vectors of all processors
call MPI_Allreduce(Pext_miss_resized,1,nproc*nmissmax*nsd,MPI_REAL8,MPI_SUM,Communicator)    !Collect all "missed" points on all processors. Send array Pext_miss_resized with given number of entries nproc*nmissmax*nsd, sum over all procs, and send back to all procs

Omega_miss(1:nproc,1:nmissmax)=0.0
do iproc=1,nproc                                                        !Sweep over proc/=myproc to collect Pext from other procs
  if (iproc.NE.myproc)
    do imiss=1,nmiss(iproc)
      Pext=Pext_miss_resized(iproc,(imiss-1)*nsd+1:imiss*nsd)  !Pext coord
      label cellK=mesh.findCell(Pext);
      if (cellK .NE. -1)then
        scalar Clext     = ClInterpolator->interpolate(Pext, cellK);
        
        scalar Clint    =21.92;   // I work with constant temperature so I know the concentration inside the solid phase
        
        scalar omega    =(Clint-Clext)/(Clint*(1.-kp0.value()));    //undercooling
        Omega_miss(iproc,imiss) = omega
        !!!!!tip_velocity[cellI]=Iv(omega, kp0.value(), 2.4e-07, liqSlope.value(),20.,-W.value()*Foam::log((1.-0.95)/0.95), D_liq.value());   // vtip function
      end if
    end do
  end if
end do
call MPI_Allreduce(Omega_miss,1,nproc*nmissmax,MPI_REAL8,MPI_SUM,Communicator)    !Collect all "missed" Omega

iproc=myproc
do imiss=1,nmiss(iproc)
    cellI            =CellImiss(imiss)
    scalar omega    =Omega_miss(iproc,imiss)
    tip_velocity[cellI]=Iv(omega, kp0.value(), 2.4e-07, liqSlope.value(),20.,-W.value()*Foam::log((1.-0.95)/0.95), D_liq.value());   // vtip function
end do
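
Besides the Allreduce calls, the only other parallel piece is nmissmax = max(nmiss). If nmiss ends up in a labelList, I suppose something like this sketch works (untested):

Code:
// Untested sketch: maximum entry of the (already summed) miss-count list.
labelList nmiss(Pstream::nProcs(), 0);   // assume it was filled and summed

label nmissmax = 0;
forAll(nmiss, procI)
{
    nmissmax = Foam::max(nmissmax, nmiss[procI]);
}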
Alucard is offline   Reply With Quote

#5 | June 13, 2012, 18:28
valerio (Alucard) | Member | Join Date: Jun 2009 | Location: Nancy | Posts: 30
Here is the OpenFOAM version. I still have to check some parts, and the Allreduce parts need to be changed. For sure there are some errors besides the reduce part (I finished it without compiling), but it's just to give an idea.
Thank you!!

Valerio

Code:
//if (Pstream::myProcNo() == 0)
//{

int Npoints = 200;
int nProc = 8;
int nsd = 3;                         // number of spatial dimensions
int nguess = nsd*Npoints*Npoints;    // guessed maximum number of misses
int nguess2 = nguess;

int nmiss[nProc];                    // per-processor miss counter
double Pext_miss[nProc][nguess];     // coordinates of missed points
int CellImiss[nguess2];              // local cell labels of missed points

for (int i = 0; i < nProc; i++)
{
    nmiss[i] = 0;
}

for (int i = 0; i < nguess2; i++)
{
    CellImiss[i] = 0;
}

for (int i = 0; i < nProc; i++)
{
    for (int j = 0; j < nguess; j++)
    {
        Pext_miss[i][j] = 0.;
    }
}


forAll(mesh.cellCentres(), cellI)
{
    if (alpha1[cellI] <= 0.99 && alpha1[cellI] >= 0.01)
    {
        vector p = mesh.cellCentres()[cellI];
        vector N = -nHat[cellI];

        double coordinate = W.value()*Foam::log((1. - alpha1[cellI])/alpha1[cellI]);

        double delta_tot = -W.value()*Foam::log((1. - 0.99)/0.99);
        double delta_minus = delta_tot + coordinate;
        double delta_plus = -coordinate;

        vector Pext = p - coordinate*N;
        label cellK = mesh.findCell(Pext);

        if (cellK > -1)
        {
            scalar Clext = ClInterpolator->interpolate(Pext, cellK);
            scalar Clint = 21.92;
            scalar deltaCl = (Clint - Clext)/(Clint*(1. - kp0.value()));
            tip_velocity[cellI] = Iv(deltaCl, kp0.value(), 6.525e-08, liqSlope.value(), C0.value(), 20.e-06, D_liq.value());
        }
        else
        {
            // Pext lies outside my subdomain: remember it for later
            int myProc = Pstream::myProcNo();

            double P_aux[3];
            P_aux[0] = Pext.x();
            P_aux[1] = Pext.y();
            P_aux[2] = Pext.z();

            for (int l = 0; l < nsd; l++)
            {
                Pext_miss[myProc][nmiss[myProc]*nsd + l] = P_aux[l];
            }

            CellImiss[nmiss[myProc]] = cellI;
            nmiss[myProc] = nmiss[myProc] + 1;
        }
    }
}


// TODO: Allreduce (MPI_SUM) of nmiss over all processors -- this is the
// first part I have to translate to OpenFOAM:
// call Allreduce(nmiss, 1, nProc, MPI_INTEGER, MPI_SUM, Communicator);

int nmissmax = 0;
for (int i = 0; i < nProc; i++)
{
    if (nmiss[i] > nmissmax) { nmissmax = nmiss[i]; }
}

// transfer to a correctly sized array (important for communication overhead)
double Pext_miss_resized[nProc][nmissmax*nsd];

for (int i = 0; i < nProc; i++)
{
    for (int j = 0; j < nmissmax*nsd; j++)
    {
        Pext_miss_resized[i][j] = Pext_miss[i][j];
    }
}

// TODO: Allreduce (MPI_SUM) of Pext_miss_resized to collect all missed
// points on all processors:
// call MPI_Allreduce(Pext_miss_resized, 1, nproc*nmissmax*nsd, MPI_REAL8, MPI_SUM, Communicator);


// PHASE: compute missed Omega ************************************************

double Omega_miss[nProc][nmissmax];

for (int i = 0; i < nProc; i++)
{
    for (int j = 0; j < nmissmax; j++)
    {
        Omega_miss[i][j] = 0.;
    }
}

for (int iproc = 0; iproc < nProc; iproc++)
{
    if (iproc != Pstream::myProcNo())
    {
        for (int imiss = 0; imiss < nmiss[iproc]; imiss++)
        {
            vector Pext;
            double P_aux[3];

            for (int l = 0; l < nsd; l++)
            {
                P_aux[l] = Pext_miss_resized[iproc][imiss*nsd + l];
            }

            Pext.x() = P_aux[0];
            Pext.y() = P_aux[1];
            Pext.z() = P_aux[2];

            label cellK = mesh.findCell(Pext);

            if (cellK != -1)
            {
                scalar Clext = ClInterpolator->interpolate(Pext, cellK);
                scalar Clint = 21.92;
                scalar deltaCl = (Clint - Clext)/(Clint*(1. - kp0.value()));
                Omega_miss[iproc][imiss] = deltaCl;
            }
        }
    }
}

// TODO: Allreduce (MPI_SUM) of Omega_miss to collect all missed Omega
// values on all processors:
// call MPI_Allreduce(Omega_miss, 1, nproc*nmissmax, MPI_REAL8, MPI_SUM, Communicator);


int iproc = Pstream::myProcNo();

for (int imiss = 0; imiss < nmiss[iproc]; imiss++)
{
    int cellI = CellImiss[imiss];   // cell label stored during the first sweep
    scalar omega = Omega_miss[iproc][imiss];
    tip_velocity[cellI] = Iv(omega, kp0.value(), 2.4e-07, liqSlope.value(), 20., -W.value()*Foam::log((1. - 0.95)/0.95), D_liq.value());   // vtip function
}
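
In place of the three commented-out Allreduce calls above, my current idea, following Hannes' gather/scatter pattern, is the untested sketch below (PextFlat, omegaFlat and nLocalMisses are my own placeholder names):

Code:
// Untested sketch of the three reductions with Pstream instead of MPI.
// Each processor writes only its own slice and leaves the rest zero, so
// an element-wise sum reassembles the complete arrays everywhere.

// (1) per-processor miss counts
labelList nmissAll(Pstream::nProcs(), 0);
nmissAll[Pstream::myProcNo()] = nLocalMisses;   // placeholder for my local count

Pstream::listCombineGather(nmissAll, plusEqOp<label>());
Pstream::listCombineScatter(nmissAll);

label nmissmax = 0;
forAll(nmissAll, procI)
{
    nmissmax = Foam::max(nmissmax, nmissAll[procI]);
}

// (2) flattened nProc x (nmissmax*nsd) coordinates of the missed points
List<scalar> PextFlat(Pstream::nProcs()*nmissmax*nsd, 0.0);
// ... copy my slice into PextFlat[Pstream::myProcNo()*nmissmax*nsd + k] ...

Pstream::listCombineGather(PextFlat, plusEqOp<scalar>());
Pstream::listCombineScatter(PextFlat);

// (3) flattened nProc x nmissmax Omega values, same pattern
List<scalar> omegaFlat(Pstream::nProcs()*nmissmax, 0.0);
// ... fill my entries, zero elsewhere ...

Pstream::listCombineGather(omegaFlat, plusEqOp<scalar>());
Pstream::listCombineScatter(omegaFlat);

Would this be the right way to do it?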
