How does OpenFOAM implement the processor boundary condition?

Post #1 by fxzf, February 7, 2022, 04:33
Hello OpenFOAM experts,

I have what may be a silly question: I am trying to understand how OpenFOAM implements the processor boundary condition.

Let's take the Laplacian as an example. In the Gauss Laplacian scheme, fvmLaplacianUncorrected contains:
Code:
fvmLaplacianUncorrected
{
    ....

    const fvsPatchScalarField& pDeltaCoeffs = deltaCoeffs.boundaryField()[patchi];

    if (pvf.coupled())
    {
        fvm.internalCoeffs()[patchi] = pGamma*pvf.gradientInternalCoeffs(pDeltaCoeffs);
        fvm.boundaryCoeffs()[patchi] = -pGamma*pvf.gradientBoundaryCoeffs(pDeltaCoeffs);
    }
}
First question: I believe that when the loop reaches a processor-type boundary patch, pvf.coupled() returns true? Correct me if I am wrong.

Second question: what is pDeltaCoeffs here? I know deltaCoeffs is tsnGradScheme_().deltaCoeffs(vf), so I believe it should be 1/distance(cell centre to neighbour cell centre) on internal faces, and on a processor boundary patch 1/distance(cell centre to patch face centre)?

However, for a processor boundary patch, to build the matrix I believe it should use the distance from the cell centre to the neighbour cell centre on the other processor?

Third question: I checked processorFvPatchField.C but did not find gradientBoundaryCoeffs and gradientInternalCoeffs. Where and how are they implemented for the processor B.C.?

Finally, for pGamma I believe it should be the value interpolated between the cell on this processor and the neighbour cell on the other processor, but I cannot find where this interpolation is implemented.
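
To make the third question concrete, here is a minimal, self-contained sketch of what I expect these two functions to compute per coupled face (plain C++, with std::vector standing in for scalarField; the names and signs are my assumption, not the actual OpenFOAM source):

Code:
// Sketch only: my assumption of what the coupled-patch coefficient functions
// return, not the OpenFOAM implementation.
#include <vector>
#include <cstdio>

using scalarField = std::vector<double>;

// Per coupled face: the "internal" coefficient acts implicitly on the owner
// cell, the "boundary" coefficient multiplies the neighbour value held on the
// other processor.
scalarField gradientInternalCoeffs(const scalarField& deltaCoeffs)
{
    scalarField c(deltaCoeffs.size());
    for (std::size_t i = 0; i < c.size(); ++i) c[i] = -deltaCoeffs[i];
    return c;
}

scalarField gradientBoundaryCoeffs(const scalarField& deltaCoeffs)
{
    scalarField c = gradientInternalCoeffs(deltaCoeffs);
    for (double& v : c) v = -v;   // = +deltaCoeffs
    return c;
}

int main()
{
    scalarField deltaCoeffs{10.0, 12.5};   // 1/|d| for two coupled faces
    double gamma = 2.0;                    // face value of gamma*magSf

    scalarField ic = gradientInternalCoeffs(deltaCoeffs);
    scalarField bc = gradientBoundaryCoeffs(deltaCoeffs);

    for (std::size_t f = 0; f < deltaCoeffs.size(); ++f)
    {
        // internalCoeffs would go to the diagonal of the owner cell,
        // boundaryCoeffs would become the coupling to the cell on the other processor.
        std::printf("face %zu: internal %g, boundary %g\n",
                    f, gamma*ic[f], -gamma*bc[f]);
    }
    return 0;
}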

Thanks very much.

Post #2 by yueyun (Yueyun Xi), March 15, 2023, 08:26
Hi,
I am also very interested in this topic. Did you get any clue?

Post #3 by fxzf, March 15, 2023, 09:16
Quote:
Originally Posted by yueyun View Post
Hi,
I am also very interested in this topic. Did you get any clue?
I made some progress, but I do not have the full picture yet.

Post #4 by dlahaye (Domenico Lahaye), March 15, 2023, 16:07
Greetings,

OpenFOAM implements a non-overlapping domain-decomposition method by storing the linear system in a block-decomposed format. To solve the linear systems, it uses a zero-halo approach in which processors exchange information through the interfaces. The approach is classical, but possibly poorly documented in OpenFOAM. An attempt to provide more information is at https://www.linkedin.com/pulse/openf...menico-lahaye/.
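
To make the zero-halo idea concrete, here is a minimal single-process sketch (all names are hypothetical; in OpenFOAM the same roles are played by lduMatrix, the lduInterfaceField list and MPI exchanges). Two "processors" each hold their diagonal block in LDU-like arrays plus the coefficients of their processor interface, and the matrix-vector product adds interfaceCoeff * neighbour value received from the other side:

Code:
// Single-process sketch of the zero-halo matrix-vector product (hypothetical
// data structures, not OpenFOAM code).
#include <vector>
#include <cstdio>

struct LocalBlock
{
    std::vector<double> diag;                  // diagonal of A_ii
    std::vector<int>    lowerAddr, upperAddr;  // local face -> owner/neighbour cell
    std::vector<double> lower, upper;          // off-diagonals of A_ii
    std::vector<int>    faceCells;             // local cells on the processor patch
    std::vector<double> interfaceCoeffs;       // coefficients of A_ij on that patch
};

// y = A_ii * x + interface contribution using values received from the neighbour
std::vector<double> Amul
(
    const LocalBlock& A,
    const std::vector<double>& x,
    const std::vector<double>& xNbr
)
{
    std::vector<double> y(x.size(), 0.0);

    for (std::size_t c = 0; c < x.size(); ++c) y[c] = A.diag[c]*x[c];

    for (std::size_t f = 0; f < A.lower.size(); ++f)     // local off-diagonal part
    {
        y[A.upperAddr[f]] += A.lower[f]*x[A.lowerAddr[f]];
        y[A.lowerAddr[f]] += A.upper[f]*x[A.upperAddr[f]];
    }

    for (std::size_t f = 0; f < A.faceCells.size(); ++f) // zero-halo interface update
    {
        y[A.faceCells[f]] += A.interfaceCoeffs[f]*xNbr[f];
    }
    return y;
}

int main()
{
    // 1D Laplacian with the [-1 2 -1] stencil on 4 cells, split over two "processors".
    LocalBlock A0{{2, 2}, {0}, {1}, {-1}, {-1}, {1}, {-1}};  // cells 0,1; coupled via local cell 1
    LocalBlock A1{{2, 2}, {0}, {1}, {-1}, {-1}, {0}, {-1}};  // cells 2,3; coupled via local cell 0

    std::vector<double> x0{1, 2}, x1{3, 4};

    // "Halo exchange": each side receives the patch-adjacent values of the other side.
    std::vector<double> recv0{x1[0]}, recv1{x0[1]};

    std::vector<double> y0 = Amul(A0, x0, recv0);
    std::vector<double> y1 = Amul(A1, x1, recv1);

    std::printf("y = %g %g %g %g\n", y0[0], y0[1], y1[0], y1[1]);
    return 0;
}

Running this reproduces the product of the full 4x4 matrix with the full vector, even though neither "processor" ever stores the other's rows.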

What exactly are you looking for?

Post #5 by yueyun (Yueyun Xi), March 16, 2023, 11:01
Quote:
Originally Posted by dlahaye View Post
[...]
What exactly are you looking for?
Hi Domenico,

Thank you for your reply. I have read parts 1/15 to 9/15 of your post, and it is a really wonderful piece of work! I wish I had found it earlier.

However, I'm still confused by 5/15, Parallel Assembly of Matrix and Right-Hand Side Vector. Below I list the two questions I could not resolve:

1. In OF, how do fvMatrix / lduMatrix hold the off-diagonal coefficients coupling to cells on a neighbouring processor?

From my understanding, in a parallel case the matrix A of the linear system Ax = b is decomposed into local submatrices A_{ij} and assigned to the processors in the following manner (please correct me if I am wrong):
(a) Each processor i owns the submatrices A_{ij} for 1 <= j <= N (where N is the number of processors).
(b) The local submatrix A_{ii} can be stored as a local LDU matrix, just as in the serial case.
(c) The coupled submatrices A_{ij}, i != j, can be stored as another LU matrix.

This seems to be the case in the LduMatrix class:

Code:
template<class Type, class DType, class LUType>
class LduMatrix
{
    // private data

        //- LDU mesh reference
        const lduMesh& lduMesh_;

        //- Diagonal coefficients
        Field<DType> *diagPtr_;

        //- Off-diagonal coefficients
        Field<LUType> *upperPtr_, *lowerPtr_;

        //- Source
        Field<Type> *sourcePtr_;

        //- Field interfaces (processor patches etc.)
        LduInterfaceFieldPtrsList<Type> interfaces_;

        //- Off-diagonal coefficients for interfaces
        FieldField<Field, LUType> interfacesUpper_, interfacesLower_;
In the local submatrix A_{ii}, the diagonal coefficients are stored in the array pointed to by diagPtr_, and the off-diagonal coefficients of A_{ii} are stored in the arrays pointed to by upperPtr_ and lowerPtr_ (I guess the face-cell addressing is managed by the lduMesh reference lduMesh_?).

In the coupled submatrices A_{ij}, i != j, the coefficients are stored in the arrays interfacesUpper_[j] and interfacesLower_[j] (I guess the face-cell addressing is managed by the interface list interfaces_?).
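
To illustrate the layout I have in mind, here is a tiny concrete example (my own illustration with made-up numbers, not OpenFOAM source, and ignoring whatever sign convention the code uses internally): a 1D chain of 4 cells with the [-1 2 -1] stencil, decomposed as cells {0,1} on processor 0 and cells {2,3} on processor 1, so the face between global cells 1 and 2 becomes a processor interface:

Code:
// Hypothetical illustration of the per-processor storage described above.
#include <vector>

int main()
{
    // processor 0: local block A_{00} in LDU form
    std::vector<double> diag  {2.0, 2.0};   // what diagPtr_ would point to
    std::vector<double> upper {-1.0};       // upperPtr_: one internal face (local cells 0-1)
    std::vector<double> lower {-1.0};       // lowerPtr_

    // one processor interface to processor 1: the face between global cells 1 and 2
    std::vector<double> interfaceUpper {-1.0};  // the entry of the coupled block A_{01}
    std::vector<double> interfaceLower {-1.0};  // the entry of A_{10}, held as interface data

    (void)diag; (void)upper; (void)lower;
    (void)interfaceUpper; (void)interfaceLower;
    return 0;
}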

However, in the lduMatrix class,

Code:
class lduMatrix
{
    // private data

        //- LDU mesh reference
        const lduMesh& lduMesh_;

        //- Coefficients (not including interfaces)
        scalarField *lowerPtr_, *diagPtr_, *upperPtr_;
only the coefficients of the local submatrix A_{ii} are stored by lowerPtr_, diagPtr_ and upperPtr_. How then are the coefficients of the coupled submatrices A_{ij} stored in lduMatrix and fvMatrix (which derives from lduMatrix)?

2. In OF, how are the coefficients coupling to cells on a neighbouring processor treated during the parallel assembly of the matrix?

Take the Laplacian, for example:

Code:
template<class Type, class GType>
 tmp<fvMatrix<Type>>
 gaussLaplacianScheme<Type, GType>::fvmLaplacianUncorrected
 (
     const surfaceScalarField& gammaMagSf,
     const surfaceScalarField& deltaCoeffs,
     const GeometricField<Type, fvPatchField, volMesh>& vf
 )
 {
     /* Treatment to the diffusive flux across the internal faces */
     ....
 
     /* Treatment to the diffusive flux across the boundary faces */
     forAll(vf.boundaryField(), patchi)
     {
         const fvPatchField<Type>& pvf = vf.boundaryField()[patchi];
         const fvsPatchScalarField& pGamma = gammaMagSf.boundaryField()[patchi];
         const fvsPatchScalarField& pDeltaCoeffs =
             deltaCoeffs.boundaryField()[patchi];
 
         if (pvf.coupled())
         {
             // I think inter-processor boundary would fall into this part

             // This line seems reasonable to me
             fvm.internalCoeffs()[patchi] =
                 pGamma*pvf.gradientInternalCoeffs(pDeltaCoeffs);
             // I am confused of this line...
             fvm.boundaryCoeffs()[patchi] =
                -pGamma*pvf.gradientBoundaryCoeffs(pDeltaCoeffs);
         }
         else
         {
             fvm.internalCoeffs()[patchi] = pGamma*pvf.gradientInternalCoeffs();
             fvm.boundaryCoeffs()[patchi] = -pGamma*pvf.gradientBoundaryCoeffs();
         }
     }
 
     return tfvm;
 }
For the diffusive flux across the inter-processor boundary, the coefficient of the cell residing on the local processor is added to the diagonal coefficient array (this seems reasonable to me), while the contribution from the neighbour cell residing on the neighbouring processor is added to the boundaryCoeffs array, which would eventually be added to the source term. I am totally confused... It would seem more reasonable to me if it were added to the off-diagonal coefficient of that neighbour cell instead.

Does this mean that in a parallel simulation some of the terms for cells next to an inter-processor boundary are treated explicitly, whereas they would be treated implicitly if there were no inter-processor boundary?

Thank you in advance!

Best regards,
Yueyun

Post #6 by dlahaye (Domenico Lahaye), March 20, 2023, 04:51
Dear Yueyun,

Thank you for your input.

Concerning the two questions that you raise:

1/ lduMatrix

Line 104 of the header file https://github.com/OpenFOAM/OpenFOAM...ix/lduMatrix.H shows that the inter-processor interface list interfaces_ is stored among the member data of the solver class nested in lduMatrix.

2/ Assembly across inter-processor boundaries

Assume a face f, with owner cell o on processor i and neighbour cell n on processor j, belongs to an inter-processor patch separating processors i and j. Then the matrix element A_{on} (A_{no}) is part of the A_{ij} (A_{ji}) interface.

Remark-1: Please note that pvf.coupled() has (at least in my limited understanding) no relation whatsoever to inter-processor interfaces. Instead, pvf.coupled() deals with the implementation of physics on coupled subdomains (e.g. heat transfer on fluid and solid subdomains).

Remark-2: I suggest elaborating a model problem, e.g. a small mesh on a one-dimensional domain subdivided into, say, three subdomains. The linear system A u = f is then partitioned as (Matlab/Octave/Julia notation)

A = [A_{11} A_{12} A_{13};
     A_{21} A_{22} A_{23};
     A_{31} A_{32} A_{33}]

and

f = [f_1; f_2; f_3]

The goal of the exercise is to fill in the details of the assembly, the linear-system solve and the implementation.
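
As a starting point, here is a minimal sketch of such a model problem (my own illustration, not OpenFOAM code): a 1D Laplacian on 9 cells with the [-1 2 -1] stencil, split into three subdomains of 3 cells each. It assembles the global matrix and prints the 3x3 block partition:

Code:
// Model problem sketch: global assembly and block partition of a 1D Laplacian
// over three subdomains. Only blocks of neighbouring subdomains are non-zero,
// and each non-zero off-diagonal block contains exactly the single coefficient
// carried by the corresponding processor interface.
#include <cstdio>

int main()
{
    constexpr int n = 9, nSub = 3, sz = n/nSub;
    double A[n][n] = {};

    for (int i = 0; i < n; ++i)
    {
        A[i][i] = 2.0;
        if (i > 0)     A[i][i-1] = -1.0;
        if (i < n - 1) A[i][i+1] = -1.0;
    }

    for (int I = 0; I < nSub; ++I)
    {
        for (int J = 0; J < nSub; ++J)
        {
            std::printf("A_{%d%d} =\n", I + 1, J + 1);
            for (int r = 0; r < sz; ++r)
            {
                for (int c = 0; c < sz; ++c)
                {
                    std::printf(" %5.1f", A[I*sz + r][J*sz + c]);
                }
                std::printf("\n");
            }
        }
    }
    return 0;
}

In the zero-halo approach described above, each processor then keeps its diagonal block A_{II} in LDU form and the non-zero entries of the neighbouring A_{IJ} as interface coefficients; the exchange during the solve supplies the neighbouring values those coefficients multiply.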
