CFD Online Discussion Forums

How OF implement processor boundary condition (https://www.cfd-online.com/Forums/openfoam-programming-development/241066-how-implement-processor-boundary-condition.html)

fxzf February 7, 2022 04:33

How OF implement processor boundary condition
 
Hello OF experts.

I want to ask a silly question: I am trying to understand how OF implements the processor boundary condition.

Let's use the Laplacian as an example. In the code that builds the Laplacian matrix (gaussLaplacianScheme), there is
Code:

fvmLaplacianUncorrected
{
    ....

    const fvsPatchScalarField& pDeltaCoeffs =
        deltaCoeffs.boundaryField()[patchi];

    if (pvf.coupled())
    {
        fvm.internalCoeffs()[patchi] =
            pGamma*pvf.gradientInternalCoeffs(pDeltaCoeffs);
        fvm.boundaryCoeffs()[patchi] =
            -pGamma*pvf.gradientBoundaryCoeffs(pDeltaCoeffs);
    }
}

First question: I believe that when the loop reaches a processor-type boundary patch, pvf.coupled() is true? Correct me if I am wrong.

Second question: what is pDeltaCoeffs here? I know deltaCoeffs is tsnGradScheme_().deltaCoeffs(vf), so I believe that on the internal field it is 1/distance(cell centre to neighbour cell centre). But on a processor boundary patch, is it 1/distance(cell centre to patch face centre)?

However, for a processor boundary patch, to build the matrix I believe it should use the distance from the cell centre to the neighbour cell centre on the other processor?
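
For reference, I believe the answer is in processorFvPatch::delta(), from which deltaCoeffs is built (deltaCoeffs = 1/mag(delta)). A sketch of that function, paraphrased from processorFvPatch.C (the transform branch for rotated interfaces is omitted, and details vary between versions):

Code:

// Paraphrased sketch, not the verbatim source.
// coupledFvPatch::delta() is Cf() - Cn(): face centre minus owner cell
// centre. neighbFaceCentres() and neighbFaceCellCentres() are received
// from the neighbouring processor, so for a conformal interface
//   delta = (Cf - C_own) - (Cf - C_nbr) = C_nbr - C_own,
// the full owner-to-neighbour cell-centre vector.
tmp<vectorField> processorFvPatch::delta() const
{
    if (Pstream::parRun())
    {
        return
            coupledFvPatch::delta()
          - (
                procPolyPatch_.neighbFaceCentres()
              - procPolyPatch_.neighbFaceCellCentres()
            );
    }

    return coupledFvPatch::delta();
}

So on a processor patch pDeltaCoeffs is 1/|C_nbr - C_own|, the cell-centre-to-cell-centre distance, which is what the matrix needs.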

Third question: I checked processorFvPatchField.C but didn't find gradientBoundaryCoeffs and gradientInternalCoeffs. So where and how are they implemented for the processor B.C.?
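
As far as I can tell, they are not in processorFvPatchField.C because processorFvPatchField inherits them from coupledFvPatchField. A sketch of the base-class implementations, paraphrased from coupledFvPatchField.C:

Code:

// Paraphrased sketch. For a coupled patch,
//   snGrad() = deltaCoeffs*(patchNeighbourField() - patchInternalField()),
// so the implicit weight of the owner-cell value is -deltaCoeffs and the
// weight of the neighbour value is +deltaCoeffs.
template<class Type>
tmp<Field<Type>> coupledFvPatchField<Type>::gradientInternalCoeffs
(
    const scalarField& deltaCoeffs
) const
{
    return -Type(pTraits<Type>::one)*deltaCoeffs;
}

template<class Type>
tmp<Field<Type>> coupledFvPatchField<Type>::gradientBoundaryCoeffs
(
    const scalarField& deltaCoeffs
) const
{
    return -this->gradientInternalCoeffs(deltaCoeffs);
}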

Finally, for pGamma, I believe it should be the value interpolated from the cell on this processor to the neighbour cell on the other processor? But I cannot find where this interpolation is implemented.
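
Regarding pGamma: I believe gammaMagSf is assembled before fvmLaplacianUncorrected is called, by surface-interpolating gamma (and multiplying by the face areas), and it is the interpolation that talks to the neighbour processor. The coupled-patch branch of surfaceInterpolationScheme<Type>::interpolate looks roughly like this (paraphrased sketch; sfbf is the boundary field of the resulting surface field and lambdas are the interpolation weights):

Code:

// Paraphrased sketch of the boundary loop in
// surfaceInterpolationScheme<Type>::interpolate.
forAll(vf.boundaryField(), patchi)
{
    const fvPatchField<Type>& pvf = vf.boundaryField()[patchi];
    const fvsPatchScalarField& pLambda = lambdas.boundaryField()[patchi];

    if (pvf.coupled())
    {
        // Weighted average of the local cell values and the values
        // received from the neighbouring processor: on a processor patch,
        // patchNeighbourField() returns the remote cell values.
        sfbf[patchi] =
            pLambda*pvf.patchInternalField()
          + (1.0 - pLambda)*pvf.patchNeighbourField();
    }
    else
    {
        sfbf[patchi] = pvf;
    }
}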

Thanks very much.

yueyun March 15, 2023 08:26

Hi,
I am also very interested in this topic. Did you get any clue?

fxzf March 15, 2023 09:16

Quote:

Originally Posted by yueyun (Post 846266)
Hi,
I am also very interested in this topic. Did you get any clue?

Got some progress, but not the full picture yet.

dlahaye March 15, 2023 16:07

Greetings,

OpenFoam implements a non-overlapping domain-decomposition method by storing the linear system in a block-decomposed format. To solve the linear systems, a zero-halo approach is implemented in which the processors exchange information through the interfaces. The approach is classical, but possibly poorly documented in OpenFoam. An attempt to provide more information is at https://www.linkedin.com/pulse/openf...menico-lahaye/.

What exactly are you looking for?

yueyun March 16, 2023 11:01

Quote:

Originally Posted by dlahaye (Post 846310)
Greetings,

OpenFoam implements a non-overlapping domain-decomposition method by storing the linear system in a block-decomposed format. To solve the linear systems, a zero-halo approach is implemented in which the processors exchange information through the interfaces. The approach is classical, but possibly poorly documented in OpenFoam. An attempt to provide more information is at https://www.linkedin.com/pulse/openf...menico-lahaye/.

What exactly are you looking for?

Hi Domenico,

Thank you for your reply. I have read through parts 1/15 to 9/15 of your post, and it's really a wonderful piece of work! I wish I had found it earlier.

However, I'm still confused by 5/15, Parallel Assembly of Matrix and Right-Hand Side Vector. Below I list the two questions that I could not work out:

1. In OF, how does fvMatrix / lduMatrix hold the off-diagonal coefficients of the cells on a neighbour processor?

From my understanding, in a parallel case the matrix A in the linear system Ax = b is decomposed into local submatrices A_{ij} and assigned to the processors in this manner (please correct me if I am wrong):
(a) Each processor i owns the submatrices A_{ij} for 1 <= j <= N (where N is the number of processors).
(b) The local submatrix A_{ii} can be stored as a local LDU matrix, just as in the serial case.
(c) The coupled submatrices A_{ij}, i != j, can be stored as separate LU coefficients.

This seems to be the case in the LduMatrix class:

Code:

template<class Type, class DType, class LUType>
class LduMatrix
{
    // private data

        //- LDU mesh reference
        const lduMesh& lduMesh_;

        //- Diagonal coefficients
        Field<DType> *diagPtr_;

        //- Off-diagonal coefficients
        Field<LUType> *upperPtr_, *lowerPtr_;

        //- Source
        Field<Type> *sourcePtr_;

        //- Field interfaces (processor patches etc.)
        LduInterfaceFieldPtrsList<Type> interfaces_;

        //- Off-diagonal coefficients for interfaces
        FieldField<Field, LUType> interfacesUpper_, interfacesLower_;

In the local submatrix A_{ii}, the diagonal coefficients are stored in the array pointed to by diagPtr_, and the off-diagonal coefficients are stored in the arrays pointed to by upperPtr_ and lowerPtr_ (I guess the face-cell addressing is managed by the lduMesh reference lduMesh_?).

In the coupled submatrices A_{ij}, i != j, the coefficients are stored in the arrays interfacesUpper_[j] and interfacesLower_[j] (I guess the face-cell addressing is managed by the interface list interfaces_?).

However, in the lduMatrix class,

Code:

class lduMatrix
{
    // private data

        //- LDU mesh reference
        const lduMesh& lduMesh_;

        //- Coefficients (not including interfaces)
        scalarField *lowerPtr_, *diagPtr_, *upperPtr_;

Only the coefficients of the local submatrix A_{ii} are stored (via lowerPtr_, diagPtr_ and upperPtr_). Then how are the coefficients of the coupled submatrices A_{ij} stored in lduMatrix and in fvMatrix (which derives from lduMatrix)?
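
As far as I can tell, the answer is that fvMatrix (rather than lduMatrix itself) carries the coupled coefficients, one field per patch, and passes them to the linear solver as the interface coefficients. A paraphrased sketch of the relevant members of fvMatrix.H:

Code:

// Paraphrased sketch of fvMatrix.H.
template<class Type>
class fvMatrix
:
    public lduMatrix
{
    // ...

        //- Boundary scalar field containing pseudo-matrix coeffs
        //  for internal cells
        FieldField<Field, Type> internalCoeffs_;

        //- Boundary scalar field containing pseudo-matrix coeffs
        //  for boundary cells; for coupled (e.g. processor) patches these
        //  are the A_{ij} coefficients, handed to the solver as the
        //  interface coefficients at solve time
        FieldField<Field, Type> boundaryCoeffs_;

    // ...
};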

2. In OF, how are the coefficients of the cells on a neighbour processor treated in the parallel assembly of the matrix?

Take the Laplacian for example:

Code:

template<class Type, class GType>
tmp<fvMatrix<Type>>
gaussLaplacianScheme<Type, GType>::fvmLaplacianUncorrected
(
    const surfaceScalarField& gammaMagSf,
    const surfaceScalarField& deltaCoeffs,
    const GeometricField<Type, fvPatchField, volMesh>& vf
)
{
    /* Construction of the matrix (tfvm; fvm is tfvm.ref()) and treatment
       of the diffusive flux across the internal faces */
    ....

    /* Treatment of the diffusive flux across the boundary faces */
    forAll(vf.boundaryField(), patchi)
    {
        const fvPatchField<Type>& pvf = vf.boundaryField()[patchi];
        const fvsPatchScalarField& pGamma = gammaMagSf.boundaryField()[patchi];
        const fvsPatchScalarField& pDeltaCoeffs =
            deltaCoeffs.boundaryField()[patchi];

        if (pvf.coupled())
        {
            // I think the inter-processor boundary falls into this branch

            // This line seems reasonable to me
            fvm.internalCoeffs()[patchi] =
                pGamma*pvf.gradientInternalCoeffs(pDeltaCoeffs);
            // I am confused by this line...
            fvm.boundaryCoeffs()[patchi] =
                -pGamma*pvf.gradientBoundaryCoeffs(pDeltaCoeffs);
        }
        else
        {
            fvm.internalCoeffs()[patchi] = pGamma*pvf.gradientInternalCoeffs();
            fvm.boundaryCoeffs()[patchi] = -pGamma*pvf.gradientBoundaryCoeffs();
        }
    }

    return tfvm;
}

For the diffusive flux across an inter-processor boundary, the coefficient of the cell residing on the local processor is added to the diagonal coefficient array (which seems reasonable to me), while the contribution from the neighbour cell residing on the neighbour processor is added to the boundaryCoeffs array, which would eventually be added to the source term. I am totally confused... It would seem reasonable to me if it were instead added to the off-diagonal coefficient connecting to that neighbour cell.

Does this mean that in a parallel simulation some of the terms for cells adjacent to an inter-processor boundary are treated explicitly, whereas they would be treated implicitly if there were no inter-processor boundary?
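
I do not think the coupled boundaryCoeffs end up in the source term. From what I can see in the sources, addBoundarySource only lumps boundaryCoeffs into the right-hand side for uncoupled patches; for coupled patches they are passed to the solver as interface coefficients and applied at every matrix-vector product. A sketch of the mechanism, paraphrased from lduMatrix::Amul and the processor patch field (version-dependent):

Code:

// Paraphrased sketch of lduMatrix::Amul (see lduMatrixATmul.C).
void lduMatrix::Amul
(
    scalarField& Apsi,
    const tmp<scalarField>& tpsi,
    const FieldField<Field, scalar>& interfaceBouCoeffs, // boundaryCoeffs_
    const lduInterfaceFieldPtrsList& interfaces,
    const direction cmpt
) const
{
    const scalarField& psi = tpsi();

    // Start exchanging psi across the processor interfaces
    initMatrixInterfaces(interfaceBouCoeffs, interfaces, psi, Apsi, cmpt);

    // Local part: diag, upper and lower coefficients of A_{ii}
    ....

    // Finish: fold in the A_{ij}*psi_j contributions from the neighbours
    updateMatrixInterfaces(interfaceBouCoeffs, interfaces, psi, Apsi, cmpt);
}

// ... which lands in processorFvPatchField::updateInterfaceMatrix, roughly:
forAll(faceCells, facei)
{
    // pnf holds the neighbour-cell values just received; coeffs are the
    // interface (boundary) coefficients for this patch
    result[faceCells[facei]] -= coeffs[facei]*pnf[facei];
}

So, at least for the matrix-vector products used by the Krylov solvers, the coupling is not moved to the right-hand side: the off-processor coefficient simply lives in the interface storage and multiplies the current iterate of the neighbour's solution inside the solver.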

Thank you in advance!

Best regards,
Yueyun

dlahaye March 20, 2023 04:51

Dear Yueyun,

Thank you for your input.

Concerning the two questions that you raise:

1/ lduMatrix

Line 104 of the header file https://github.com/OpenFOAM/OpenFOAM...ix/lduMatrix.H shows that the interprocessor interface list interfaces_ is stored as member data of the solver class nested inside the lduMatrix class;

2/ assembly across interprocessor boundaries

Assume a face f, with owner cell o on processor i and neighbour cell n on processor j, that is part of an interprocessor patch separating processors i and j. Then the matrix element A_{on} (respectively A_{no}) is part of the A_{ij} (respectively A_{ji}) interface.

Remark-1: Please note that pvf.coupled() has (at least in my limited understanding) no (none, zip, zero) relation to interprocessor interfaces. Instead, pvf.coupled() deals with the implementation of physics on coupled subdomains (e.g. heat transfer on fluid and solid subdomains).

Remark-2: I suggest elaborating a model problem, e.g. a small mesh on a one-dimensional domain subdivided into e.g. three subdomains. The linear system A u = f is then partitioned as (matlab/octave/julia notation)

A = [A_{11} A_{12} A_{13};
     A_{21} A_{22} A_{23};
     A_{31} A_{32} A_{33}]

and

f = [f_1; f_2; f_3]

The goal of the exercise is to fill in the details of the assembly, the linear system solve, and the implementation.
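
To make Remark-2 concrete, here is one hypothetical instance of that exercise (my own numbers: a 1-D Laplacian on six cells with unit spacing and Dirichlet ends, two cells per processor, stencil [-1 2 -1]):

A = [ 2 -1  0  0  0  0;
     -1  2 -1  0  0  0;
      0 -1  2 -1  0  0;
      0  0 -1  2 -1  0;
      0  0  0 -1  2 -1;
      0  0  0  0 -1  2]

so that, for processor 1,

A_{11} = [2 -1; -1 2],  A_{12} = [0 0; -1 0],  A_{13} = [0 0; 0 0]

Each nonzero coupling block holds a single coefficient, -1, one per face on the processor interface; these are exactly the coefficients that end up in the interface (boundaryCoeffs) storage rather than in the local LDU arrays.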

