- **OpenFOAM Running, Solving & CFD**
(*http://www.cfd-online.com/Forums/openfoam-solving/*)

  - **Question about the fvMatrix and Laplacian operator**
(*http://www.cfd-online.com/Forums/openfoam-solving/59264-question-about-fvmatrix-laplacian-operator.html*)

I have a question about the LDUMatrix.
The LDUMatrix contains the coefficients (lowerPtr, upperPtr, diagPtr) and the addressing arrays. My questions are as follows:

1) Does lowerAddr hold the owner of each face and upperAddr the neighbour, as in polyMesh? In the mesh classes in OpenFOAM, the owner's index (P) is lower than the neighbour's index (N).

2) Does upperPtr store the contribution of the neighbour cell (N) to the owner cell (P), and lowerPtr the contribution of the owner (P) to the neighbour (N)? In the H() operator:

```cpp
// H operator
template<class Type>
tmp<Field<Type> > lduMatrix::H(const Field<Type>& sf) const
{
    ...
    const scalarField& Lower = lower();
    const scalarField& Upper = upper();

    // Take references to addressing
    const unallocLabelList& l = lduAddr_.lowerAddr();
    const unallocLabelList& u = lduAddr_.upperAddr();

    for (register label face=0; face<l.size(); face++)
    {
        Hphi[u[face]] -= Lower[face]*sf[l[face]];
        Hphi[l[face]] -= Upper[face]*sf[u[face]];
    }
}
```

From this code I guess that lowerAddr is the owner list and upperAddr the neighbour list, that Lower holds the contribution of the owner (P) to the neighbour (N), and Upper the contribution of the neighbour (N) to the owner (P), so the matrix would look like:

```
AD1  AU1  AU2  AU3  AU4
AL1  AD2  AU5  AU6  AU7
AL2  AL5  AD3  AU8  AU9
AL3  AL6  AL8  AD4  AU10
AL4  AL7  AL9  AL10 AD5
```

But in fvMatrix, the function negSumDiag() is implemented as:

```cpp
for (register label face=0; face<l.size(); face++)
{
    Diag[l[face]] -= Lower[face];
    Diag[u[face]] -= Upper[face];
}
```

Since Diag[i] should subtract the contributions from all of cell i's neighbouring cells, why is it not the following?

```cpp
for (register label face=0; face<l.size(); face++)
{
    Diag[l[face]] -= Upper[face];
    Diag[u[face]] -= Lower[face];
}
```

3) In the implementation of the FVM Laplacian operator, only fvm.upper() is set; why not fvm.lower()?

```cpp
template<class Type>
tmp<fvMatrix<Type> > gaussLaplacianScheme<Type>::fvmLaplacianUncorrected
(
    const surfaceScalarField& gammaMagSf,
    GeometricField<Type, fvPatchField, volMesh>& vf
)
{
    ...
    fvMatrix<Type>& fvm = tfvm();

    fvm.upper() = deltaCoeffs.internalField()*gammaMagSf.internalField();
    // why not fvm.lower()?
    fvm.negSumDiag();
}
```

Thanks in advance! Liu Huafei |

Liu,
I could take a guess at question #3. Wouldn't the Laplacian produce a symmetric matrix? Until you have created asymmetric terms in the matrix, you can get away with storing and calculating only half of it. I haven't dug through the code, so this is just a guess. David |

Hello,
I am trying to solve an equation with the term "gamma*laplacian(rAUf, pd)", and I don't understand how to apply fvm to this kind of term. Maybe someone can help me. Thanks. |

Quote:
I tried another way to solve it, but now I have to come back to this equation. In the Programmer's Guide it's mentioned that fvMatrix supports the * operator. So I tried to write the term above as fvm::Sp(gamma, pd) * fvm::laplacian(rAUf, pd), which would be what I want, but it doesn't work. I also tried the inner product & without success. Hoping someone can guide me. Thanks, |

Quote:
I have tried to understand the matrix produced by fvmLaplacianUncorrected for two days now, but I don't get it. Every discretisation of a diffusion (Laplacian) term produces a sum over the faces. I looked into Versteeg and into Jasak's PhD thesis. If I reconstruct the matrix that way, I get terms like -u_P + u_W + u_E + u_N + u_S = 0 (as in Versteeg), which produces a matrix with entries in the upper, diagonal and lower parts. But if I look into the OpenFOAM code, the Laplacian term only produces an upper-diagonal matrix; the lower part is 0. To me it looks as if the code only uses the faces that are not owned by the cells. All my attempts to construct or reverse-engineer this matrix were useless. Can anyone tell me why you can just look at the upper part (some of the faces)? thx, Joern |

Easy: if you only set the upper, the code assumes the matrix is SYMMETRIC and that you (clever trick) chose to store only one half of the coefficients to save memory and speed up the solver.
For asymmetric matrices you have no choice: you have to store all the coefficients. Otherwise a_ij = a_ji, and the pressure solver (the most expensive part of a segregated solver) runs much faster. So there, the secret is out :) Hope this helps, Hrv |

thx a lot.
This helps. Now I can work on... |
