How does OpenFOAM implement the processor boundary condition?
Hello OpenFOAM experts.
I want to ask a silly question: I am trying to understand how OpenFOAM implements the processor boundary condition. Let's use the Laplacian as an example. In the build of the Laplacian scheme there is

Code:
fvmLaplacianUncorrected

Second question: what is pDeltaCoeffs here? I know deltaCoeffs is tsnGradScheme_().deltaCoeffs(vf), so I believe it should be 1/distance(cell centre to neighbour cell centre) on internal faces, and on a processor boundary patch 1/distance(cell centre to patch face centre)??? However, for a processor boundary patch, to build the matrix, I believe it should use the distance from the cell centre to the neighbour cell centre on the other processor???

Third question: I checked processorFvPatchField.C and did not find gradientBoundaryCoeffs and gradientInternalCoeffs. So where and how are they implemented for the processor B.C.?

Finally, for pGamma, I believe it should be the value interpolated from this processor's cell to the neighbour processor's cell? But I cannot find where this interpolation is implemented.

Thanks very much. |
Hi,
I am also very interested in this topic. Did you get any clue? |
Greetings,
OpenFOAM implements a non-overlapping domain-decomposition method by storing the linear system in a block-decomposed format. To solve the linear systems, a zero-halo approach is implemented in which processors exchange information through the interfaces. The approach is classical, but possibly poorly documented in OpenFOAM. An attempt to provide more information is at https://www.linkedin.com/pulse/openf...menico-lahaye/. What exactly are you looking for? |
Thank you for your reply. I have read through parts 1/15 to 9/15 of your post, and it's really a wonderful job! I wish I had found it earlier. However, I am still confused by 5/15, Parallel Assembly of Matrix and Right-Hand Side Vector. Below are two questions I could not understand:

1. In OF, how does fvMatrix / lduMatrix hold the off-diagonal coefficients of the cells on a neighbour processor? From my understanding, in a parallel case the matrix A in a linear system Ax = b is decomposed into local submatrices A_{ij} and assigned to the processors in this manner (please correct me if I am wrong): (a) each processor i owns the submatrices A_{ij} for 1 <= j <= N (where N is the number of processors); (b) the local submatrix A_{ii} can be stored as a local ldu matrix, just as in the serial case; (c) the coupled submatrices A_{ij}, i <> j, can be stored separately. As is the case in the LduMatrix class,

Code:
template<class Type, class DType, class LUType>

in the coupled submatrices A_{ij}, i <> j, the coefficients are stored in the arrays pointed to by interfacesUpper_[j] and interfacesLower_[j] (I guess the face-cell addressing is managed by the ldu interfaces interfaces_?). However, in the lduMatrix class,

Code:
class lduMatrix

2. In OF, how are the coefficients of the cells on a neighbour processor treated in the parallel assembly of the matrix? Take the Laplacian for example:

Code:
template<class Type, class GType>

Does this mean that in a parallel simulation some of the terms for cells residing on the inter-processor boundary are treated explicitly, while they would be treated implicitly if there were no inter-processor boundary?

Thank you in advance!

Best regards,
Yueyun |
Dear Yueyun,
Thank you for your input. Concerning the two questions that you raise:

1/ lduMatrix: line 104 of the header file https://github.com/OpenFOAM/OpenFOAM...ix/lduMatrix.H shows that the interprocessor interface list interfaces_ is stored in the member data of the solver class nested inside the lduMatrix class.

2/ Assembly across interprocessor boundaries: assume face f, with owner cell o on processor i and neighbour cell n on processor j, is part of an interprocessor patch separating processors i and j. Then the matrix element A_{on} (A_{no}) is part of the A_{ij} (A_{ji}) interface.

Remark 1: Please note that pvf.coupled() has (at least in my limited understanding) no relation at all with interprocessor interfaces. Instead, pvf.coupled() deals with the implementation of physics on coupled subdomains (e.g. heat transfer on fluid and solid subdomains).

Remark 2: I suggest elaborating a model problem, e.g. a small mesh on a one-dimensional domain subdivided into, say, three subdomains. The linear system A u = f is then partitioned as (Matlab/Octave/Julia notation)

A = [A_{11} A_{12} A_{13}; A_{21} A_{22} A_{23}; A_{31} A_{32} A_{33}] and f = [f_1; f_2; f_3]

The goal of the exercise is to fill in the details of the assembly, the linear system solve, and the implementation. |