
Too diffusive result on Delaunay type mesh for pipe

October 23, 2014, 06:27   #1
Robert Sawko (AlmostSurelyRob), Senior Member
Join Date: Mar 2009 | Posts: 117
Dear OpenFOAM users and developers,

I am looking for some advice on a good setup for a laminar pipe flow on a Delaunay-type mesh. I am not sure if I am using the nomenclature properly, so I am attaching a picture of the mesh together with our best steady-state result so far.

This is a simple laminar, steady-state calculation with an analytical laminar profile prescribed at the inlet. The results look all right at first glance, but on closer inspection the centerline velocity decreases along the pipe.

We're* using simpleFoam, or rather our own custom version of simpleFoam with heat transfer. We don't think it's a boundary condition problem, as we have run this case with the same properties and boundary conditions on two other meshes (a pure hexahedral mesh and an "octree" tetrahedral mesh) and the results were perfect, i.e. constant centerline velocity. The only difference, as far as we can see, is that the cells in the Delaunay mesh are not as well aligned with the flow. However, we managed to get a good result with StarCCM+ on the same mesh, so we think it must be something about the numerics.
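
For completeness, here is roughly how such an analytical (Hagen-Poiseuille) inlet can be prescribed on the velocity field with a codedFixedValue patch. This is only a sketch, not our actual boundary condition: Umax and R are placeholders, and the patch name INLET and the x flow direction are taken from the mesh described below.
Code:
INLET
{
    type            codedFixedValue;
    value           uniform (0 0 0);
    redirectType    parabolicInlet;

    code
    #{
        const vectorField& Cf = patch().Cf();   // face centres on the inlet patch
        vectorField& Uf = *this;                // patch values to be set

        const scalar Umax = 0.02;               // placeholder centerline velocity [m/s]
        const scalar R    = 0.0381;             // pipe radius [m]

        forAll(Cf, i)
        {
            const scalar r = sqrt(sqr(Cf[i].y()) + sqr(Cf[i].z()));
            Uf[i] = vector(Umax*(1.0 - sqr(r/R)), 0, 0);
        }
    #};
}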

Now for some setup. The mesh passes all checks. It contains a boundary layer of prismatic elements. This is the output of checkMesh:
Code:
Mesh stats
    points:           123633
    faces:            1099545
    internal faces:   1067800
    cells:            520492
    faces per cell:   4.16403
    boundary patches: 3
    point zones:      0
    face zones:       0
    cell zones:       0

Overall number of cells of each type:
    hexahedra:     0
    prisms:        85377
    wedges:        0
    pyramids:      0
    tet wedges:    0
    tetrahedra:    435115
    polyhedra:     0

Checking topology...
    Boundary definition OK.
    Cell to face addressing OK.
    Point usage OK.
    Upper triangular ordering OK.
    Face vertices OK.
    Number of regions: 1 (OK).

Checking patch topology for multiply connected surfaces...
    Patch               Faces    Points   Surface topology                  
    INLET               1660     1009     ok (non-closed singly connected)  
    PIPE                28459    14318    ok (non-closed singly connected)  
    OUTLET              1626     990      ok (non-closed singly connected)  

Checking geometry...
    Overall domain bounding box (0 -0.0381 -0.0381) (0.381 0.0381 0.0381)
    Mesh (non-empty, non-wedge) directions (1 1 1)
    Mesh (non-empty) directions (1 1 1)
    Boundary openness (2.40967e-16 1.74365e-17 -3.90365e-16) OK.
    Max cell openness = 6.93409e-16 OK.
    Max aspect ratio = 19.6623 OK.
    Minimum face area = 1.85994e-07. Maximum face area = 2.10504e-05.  Face area magnitudes OK.
    Min volume = 1.42745e-10. Max volume = 3.06659e-08.  Total volume = 0.00173649.  Cell volumes OK.
    Mesh non-orthogonality Max: 49.2663 average: 16.2478
    Non-orthogonality check OK.
    Face pyramids OK.
    Max skewness = 0.46614 OK.
    Coupled point location match (average 0) OK.

Mesh OK.

End
This is our best fvSchemes setup so far. The choice of schemes seems to have a serious effect on the rate of decrease.
Code:
ddtSchemes
{
    default         steadyState;
}

gradSchemes
{
     default        cellLimited pointCellsLeastSquares 0.8;
}

divSchemes
{
    default         none;
    div(phi,U)      bounded Gauss linearUpwind default;
    div(phi,T)      bounded Gauss linearUpwind default;
    div(phi,k)      bounded Gauss linearUpwind default;
    div(phi,epsilon)  bounded Gauss linearUpwind default;
    div((nuEff*dev(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default         Gauss linear limited corrected 0.333;
}
interpolationSchemes
{
    default         linear;
}

snGradSchemes
{
    default         limited 0.333;
}

fluxRequired
{
    default         no;
    p_rgh           ;
}
and fvSolution. We set nNonOrthogonalCorrectors to zero as this is a steady-state run and the average non-orthogonality is below 40. I am not sure if there's a valid reasoning behind it though.
Code:
solvers
{
    p_rgh
    {
        solver           GAMG;
        smoother         GaussSeidel;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 100;
        agglomerator     faceAreaPair;
        mergeLevels      1;
        tolerance        1e-8;
        relTol           1e-3;
    }

    "(U|T|k|epsilon|omega)"
    {
        solver          PBiCG;
        preconditioner  DILU;
        nSweeps         5;
        tolerance       1e-7;
        relTol          1e-4;
    }
    
}
    
SIMPLE
{
    nNonOrthogonalCorrectors 0;
    pRefCell        0;
    pRefValue       0;

    residualControl
    {
        p_rgh           1e-6;
        U               1e-6;
        T               1e-6;

        // possibly check turbulence fields
        "(k|epsilon|omega)" 5e-9;
    }
}


relaxationFactors
{
    fields
    {
        p               0.7;
    }
    equations
    {
        U               0.3;
        "(k|epsilon)"   0.5;
        T               0.5;
    }
}
The case I am presenting converged in 2403 steps with residuals of 1e-6 on p and U. I can provide more debugging information if you wish. At the moment we're running a comprehensive study of different schemes and parameters, but the study is a bit blind. We have a script that allows us to generate cases and plot centerline graphs for the various schemes. Please let us know if you have any ideas on how to make the solution less diffusive, or which schemes/solution strategies we should focus on.
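
For reference, the centerline data in the attached plot can be extracted with the standard sample utility. A minimal system/sampleDict sketch for a line along the pipe axis (the point count and output format are illustrative; the axis and length are taken from the bounding box reported by checkMesh above) would be:
Code:
interpolationScheme cellPoint;
setFormat           raw;

sets
(
    centerline
    {
        type    uniform;
        axis    x;
        start   (0 0 0);
        end     (0.381 0 0);
        nPoints 200;
    }
);

fields ( U );
Running sample then writes the centerline U values to a sets directory (or postProcessing/sets, depending on the version), which the plotting script can read directly.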

*Sorry for the sudden plural. It's not "royal we" - I am working on this with one of my colleagues.
Attached Images
File Type: jpg mesh_and_velocity.jpg (81.6 KB, 222 views)
Attached Files
File Type: pdf centerlineU.pdf (13.6 KB, 57 views)

Last edited by AlmostSurelyRob; October 24, 2014 at 03:59.

October 26, 2014, 07:15   #2
pointLinear scheme and parallel issues
Robert Sawko (AlmostSurelyRob), Senior Member
Join Date: Mar 2009 | Posts: 117
We've made some progress. After a brute-force search through various limiting parameters and grad/div scheme settings, we found that we get a good result with

Code:
ddtSchemes
{
    default         steadyState;
}

gradSchemes
{
     default        cellLimited pointCellsLeastSquares 0.8;
}

divSchemes
{
    default         none;
    div(phi,U)      bounded Gauss pointLinear default;
    div(phi,T)      bounded Gauss upwind default;
    div(phi,k)      bounded Gauss pointLinear default;
    div(phi,epsilon)  bounded Gauss pointLinear default;
    div((nuEff*dev(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default         Gauss pointLinear limited corrected 0.333;
}

interpolationSchemes
{
    default         pointLinear;
}

snGradSchemes
{
    default         limited corrected 0.333;
}

fluxRequired
{
    default         no;
    p_rgh           ;
}
We are not quite sure what pointLinear does, but we can probably investigate that in the code. The problem is that using this scheme causes an awful lot of mess in parallel runs. I am attaching a picture with a converged pointLinear run and a diverging run on a 4x simple decomposition. You can see that an instability is developing at three different points, which happen to correspond to the processor domain boundaries. Is this an issue or a known property of pointLinear?
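
For completeness, the 4x simple decomposition referred to here is just the pipe split into four axial slabs, which gives the three internal processor boundaries visible in the plot; roughly the following decomposeParDict (the exact coefficients are only illustrative):
Code:
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (4 1 1);   // four slabs along the pipe axis -> three internal boundaries
    delta       0.001;
}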

Meanwhile I've also found this post
http://www.cfd-online.com/Forums/ope...r-schemes.html
which was an interesting read. I am in the process of trying some of the hints there, but it doesn't look promising. For instance, linearUpwind still shows diffusion, i.e. a decreasing centerline velocity profile, which is probably due to the high Pe number and the scheme's preference for upwind there.

One more thing worth mentioning is that our Peclet number reaches values of about 80 at several points in the middle of the pipe. I was hoping, though, that this introduces only dispersive error rather than dissipative error. At the moment we're more concerned about the latter.
Attached Files
File Type: pdf centerlineU.pdf (16.5 KB, 93 views)

October 26, 2014, 09:43   #3
Bruno Santos (wyldckat), Retired Super Moderator
Join Date: Mar 2009 | Location: Lisbon, Portugal | Posts: 10,978
Greetings Robert,

Sorry, I'm not an expert on this topic, but this does remind me of one of the presentations at this year's (community) workshop in Zagreb. Search for the presentation "OFW09.0005 How grid quality affects solution accuracy" on this page: http://openfoam-extend.sourceforge.n.../download.html

Best regards,
Bruno

October 26, 2014, 11:04   #4
Robert Sawko (AlmostSurelyRob), Senior Member
Join Date: Mar 2009 | Posts: 117
Thanks Bruno. The presentation is really good. I would like to hear what they have to say, but the message from the slides is pretty clear.

I am really interested in this pointLinear scheme. It improved the results when we used it for the div scheme. The serial results still show some oscillation, but we are fine with that; we don't like the dissipation we get with all the other schemes we have tried so far. Also, this commit message

https://github.com/OpenCFD/OpenFOAM-...2918f2e1bb7904

convinces me it is the right lead, as the intention there was to improve the quality on tet meshes. It's just a shame that it misbehaves so much in parallel runs. I wonder if this is worth a bug report.

October 26, 2014, 12:19   #5
Bruno Santos (wyldckat), Retired Super Moderator
Join Date: Mar 2009 | Location: Lisbon, Portugal | Posts: 10,978
Hi Robert,

This is a great find! After reading the code in that commit, it makes perfect sense. The premise is that since the mesh is essentially the kind usually used in FEM, it is natural to process the data in FVM with an FEM-style approach, even if only as an intermediate step.

I don't have much time to look deeper into this myself, but from the looks of it, and from the history I've seen in OpenFOAM's development around point fields, I'm guessing that this class was introduced back then as a proof of concept...

Wait, I just remembered something that Henry wrote the other day. It was here: http://www.openfoam.org/mantisbt/view.php?id=1410#c3246
Quote:
Also when running the case I improved the schemes, in particular it is a VERY bad idea to limit all gradients: the pressure gradient should NEVER be limited.

I suggest
Code:
 gradSchemes
{
    default     Gauss linear;
    grad(U)     cellLimited Gauss linear 1;
}
You should first check this same detail... namely not using this as default:
Code:
gradSchemes
{
     default        cellLimited pointCellsLeastSquares 0.8;
}
Preferably, use "none" as the default, so that the solver will complain about every single gradient it needs to compute.
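
For example, something along these lines (the entries here are only illustrative; the solver's error messages will tell you exactly which gradients it asks for):
Code:
gradSchemes
{
    default         none;
    grad(p_rgh)     Gauss linear;               // pressure gradient left unlimited
    grad(U)         cellLimited Gauss linear 1; // limit only the velocity gradient
}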


But returning to my previous line of thinking: if Henry's comment doesn't solve your issue in parallel, then it's very likely that some parallel mapping between processors is missing for this point class to work properly. This is way beyond my level of expertise, so if the above doesn't solve it, I strongly suggest that you report this as a bug, along with a good and quick test case so that they can test and fix it quickly.

Best regards,
Bruno

October 27, 2014, 04:50   #6
Robert Sawko (AlmostSurelyRob), Senior Member
Join Date: Mar 2009 | Posts: 117
Thanks for the feedback Bruno. Yes, that makes for a consistent story. Following your advice I changed fvSchemes to

Code:
ddtSchemes
{
    default         steadyState;
}

gradSchemes
{
     default        none;
     grad(p)        pointCellsLeastSquares;
     grad(U)        pointCellsLeastSquares;
}

divSchemes
{
    default         none;
    div(phi,U)      bounded Gauss pointLinear grad(U);
    div((nuEff*dev(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default         Gauss linear limited 0.333;
}

interpolationSchemes
{
    default         pointLinear;
}

snGradSchemes
{
    default         limited 0.333;
}

Let me now share some news in mafia-like style.

Bad news: parallel still breaks. I reported the bug here
http://www.openfoam.org/mantisbt/view.php?id=1424

Good news: serial still works even though I removed all the limiting from grad and div schemes. I will also try to remove it from laplacian schemes and see if that helps with parallel or breaks serial.
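
For reference, removing the limiting there would mean something along these lines (not tried yet):
Code:
laplacianSchemes
{
    default         Gauss linear corrected;
}

snGradSchemes
{
    default         corrected;
}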

October 28, 2014, 09:02   #7
Robert Sawko (AlmostSurelyRob), Senior Member
Join Date: Mar 2009 | Posts: 117
Rejoice! The bug has now been resolved and I can confirm that pointLinear works in parallel and gives the best results in terms of centerline velocity among all the schemes we have tested so far. We're planning to write up a little document with comparisons for different meshes, so hopefully you will hear from us soon.

October 8, 2018, 13:34   #8
Guilherme (gu1), Senior Member
Join Date: Apr 2017 | Posts: 232
Quote:
Originally Posted by AlmostSurelyRob
Rejoice! The bug has now been resolved and I can confirm that pointLinear works in parallel and gives the best results in terms of centerline velocity among all the schemes we have tested so far. We're planning to write up a little document with comparisons for different meshes, so hopefully you will hear from us soon.
Did you write the paper?

Best.
