CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
OpenFOAM Running, Solving & CFD: codedFixedValue: accessing other patch causes crash in parallel (https://www.cfd-online.com/Forums/openfoam-solving/193543-codedfixedvalue-accessing-other-patch-causes-crash-parallel.html)

RL-S September 27, 2017 09:09

codedFixedValue: accessing other patch causes crash in parallel
 
Hey foamers,

First of all: since this is about a BC, it technically belongs in the "Solving" portion of OF, but codedFixedValue is still code, so should this thread be moved to the OF/Programming sub-forum instead?

Now, to my problem:

I'm working with a solver under OF2.4.0, which only uses p, not p_rgh. I want to define a pressure gradient between my inlet (left side of the mesh) and my outlet (right side of the mesh). On the outlet, I'm using a zeroGradient BC, and on the inlet I'm using codedFixedValue, which does the following:
  1. Read pressure values of outlet patch
  2. Read two values from a dictionary called caseDict, which are a pressure gradient and a ramp time for a linear ramp
  3. Calculate pressure gradient for current time
  4. Write pressure values of outlet patch + current pressure gradient to inlet patch
This works fine when the solver runs in serial. But when run in parallel, it crashes at the first time step.
My strong suspicion is that the codedFixedValue BC on the inlet simply cannot access the outlet patch any more, because the mesh has been decomposed and the outlet patch now lives on another processor core, in another processor directory.


Does that sound correct? And if so, does anyone know how to solve it?
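
To check whether that is really what happens, one could print the local size of the outlet patch with Pout (which, unlike Info, writes one line per processor). This is only an untested sketch along the lines of my code below; it should show 0 faces on every processor that does not own the patch:
Code:

label oPatchID = this->patch().boundaryMesh().findPatchID("inandouthalf12");

// Pout prints from every processor, Info only from the master
Pout<< "local outlet faces on this processor: "
    << this->patch().boundaryMesh()[oPatchID].size() << endl;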


Here's my code btw:
Code:

type            codedFixedValue;
value           uniform 0;
redirectType    BcPressureLB;
code
#{
    // get patch ID on opposite side
    word oPatchName = "inandouthalf12";
    label oPatchID = this->patch().boundaryMesh().
                        findPatchID(oPatchName);
    // get pressure on other side
    const volScalarField& p = this->db().
                objectRegistry::lookupObject<volScalarField>("p");
    const scalarField& pFieldOpp = p.boundaryField()[oPatchID];

    // get pressure gradient from caseDict
    #include "readgradp.H"
           
    // write pressure from opposite side to this patch, add gradP
    scalarField& pField = *this;
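
    // NOTE: in parallel, pFieldOpp only holds the outlet faces that ended up
    // on this processor (possibly none), so it can be shorter than pField;
    // pFieldOpp[faceI] below then goes out of range, which is the likely
    // cause of the crash.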
    forAll(pField, faceI)
    {
        pField[faceI] = pFieldOpp[faceI] + gradP;
        Info << "p outlet: " << pFieldOpp[faceI] <<
                ", p inlet: " << pField[faceI] << endl;
    }
#};

codeInclude
#{
    #include "fvCFD.H"
#};

codeOptions
#{
    -I$(FOAM_CASE)/incl                \
    -I$(LIB_SRC)/finiteVolume/lnInclude \
    -I$(LIB_SRC)/meshTools/lnInclude
#};
   
codeLibs
#{
    -lmeshTools \
    -lfiniteVolume
#};

with readgradp.H being

Code:

// read the caseDict file
const IOdictionary dict
(
    IOobject
    (
        "caseDict",
        ".",
        this->db(),
        IOobject::MUST_READ,
        IOobject::NO_WRITE,
        false
    )
);

scalar gradP = readScalar(dict.lookup("pressureGradient"));
if (gradP != 0)
{
    scalar rampTime = readScalar(dict.lookup("rampTimePG"));
    scalar curTime = this->db().time().value();
    if (rampTime != 0 && (curTime <= rampTime))
    {
        gradP = gradP * min(1, curTime / rampTime);
//        Info << "\nCurrent pressure gradient: " << gradP << "\n\n";
    }
}
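
For reference, caseDict just sits in the case directory (instance ".") and only needs the two entries looked up above; it looks roughly like this (the numbers are only example values):
Code:

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      caseDict;
}

pressureGradient    100;    // example value: pressure difference added at the inlet
rampTimePG          0.5;    // example value: duration of the linear ramp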

Thanks in advance to anyone who has any ideas how to deal with this.
Lennart

RL-S September 29, 2017 07:17

Ok, I have found a way, thanks to this post, which describes using Pstream and the reduce() function to make data available on all processors.

The relevant part of my code now looks like this:
Code:

// get patch ID on opposite side (same patch name as before)
word oPatchName = "inandouthalf12";
label oPatchID = this->patch().boundaryMesh().
                    findPatchID(oPatchName);
// get pressure on other side
const volScalarField& p = this->db().
        objectRegistry::lookupObject<volScalarField>("p");

// patch field on opposite side for single processor
const scalarField& fieldOSP = p.boundaryField()[oPatchID];

// get patch field size from each processor
labelList fSizePerProc( Pstream::nProcs(), 0 );
label thisProcNo = Pstream::myProcNo();
fSizePerProc[thisProcNo] = fieldOSP.size();

// combine field sizes from all processors
reduce( fSizePerProc, sumOp<labelList>() );

// total patch size across all processors
label totalFSize = sum( fSizePerProc );

// field to hold the patch values from all processors
scalarField fieldOAP( totalFSize, 0.0 );

// if patch is on current proc
if( fSizePerProc[thisProcNo] > 0 )
{
    // get field size on previous processors
    label prevFSize = 0;
    for (label i = 0; i < thisProcNo; ++i)
    {
        prevFSize += fSizePerProc[i];
    }
    // write field from patch on current proc
    for (label i = 0; i < fSizePerProc[thisProcNo]; ++i)
    {
        fieldOAP[i + prevFSize] = fieldOSP[i];
    }
}

// combine fields from all processors
reduce( fieldOAP, sumOp<scalarField>() );

// get pressure gradient from caseDict
#include "readgradp.H"

// write pressure from opposite side to this patch, add gradP
scalarField& pField = *this;
forAll(pField, faceI)
{
    pField[faceI] = fieldOAP[faceI] + gradP;
}
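
A possibly cleaner way to do the same gather (untested sketch, same variable names as above) would be Pstream::gatherList/scatterList instead of the zero-padded sum:
Code:

// each processor puts its local outlet values into its own slot
List<scalarField> procFields( Pstream::nProcs() );
procFields[Pstream::myProcNo()] = fieldOSP;

// collect on the master and redistribute to all processors
Pstream::gatherList(procFields);
Pstream::scatterList(procFields);

// flatten into one field, ordered by processor number
scalarField fieldOAP;
forAll(procFields, procI)
{
    fieldOAP.append(procFields[procI]);
}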

The workaround I used before this was simply to keep the relevant patches on the same processor, via manual decomposition with the setFields utility; I described that process here.
Given the additional overhead my code introduces, the workaround may still give better performance. I'll edit my experience into this post later, if I don't forget.
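
For anyone trying the workaround instead: the manual decomposition goes through decomposeParDict, whose manual method reads a file with one processor index per cell (how I generated that file with setFields is in the post linked above). A minimal sketch, with the subdomain count and file name only as examples:
Code:

numberOfSubdomains  4;
method              manual;

manualCoeffs
{
    // file containing one processor index per cell
    dataFile        "cellDecomposition";
}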

Ehsanrah December 24, 2019 21:20

this pointers!
 
Hi Lennart, can you explain to me which object each "this" pointer points to? I want to use this method, but I am a little confused!

