CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   Lag while running DES with pimpleFoam (https://www.cfd-online.com/Forums/openfoam-solving/112468-lag-while-running-des-pimplefoam.html)

dabe January 29, 2013 10:47

Lag while running DES with pimpleFoam
 
Hello fellow Foamers!

I was wondering if anyone has experienced a sort of "lag" upon reaching the nuTilda calculation step while running a DDES simulation with pimpleFoam?

I'm running my case in parallel on 128 CPUs with approximately 50k cells on each. All other variables seem to be calculated almost instantly, but when the solver reaches nuTilda it seems to freeze or lag for a second or two and then continue. And since it only does one iteration on nuTilda, it shouldn't take that long, right? This lag pretty much doubles my simulation time, which is really not good since the run is already quite long.

Has anyone experienced this or something similar? Are there any known solutions for preventing it?

I suspect there could be something wrong with my fvSolution or fvSchemes setup, so I'm posting them below.

Code:

/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version    2.0;
    format      ascii;
    class      dictionary;
    location    "system";
    object      fvSchemes;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

ddtSchemes
{
    default        CrankNicholson 0.5;
}

gradSchemes
{
    default        Gauss linear;
    grad(p)        Gauss linear;
    grad(U)        Gauss linear;
}

divSchemes  // filteredLinear(2) is best according to Eugene de Villiers
{
    default        none;
    div(phi,U)      Gauss limitedLinearV 1;
    div(phi,nuTilda) Gauss upwind;
    div((nuEff*dev(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default        none;
    laplacian(nuEff,U) Gauss linear corrected; //limited 0.5;
    laplacian((1|A(U)),p) Gauss linear corrected; // limited 0.5;
    laplacian(DnuTildaEff,nuTilda) Gauss linear corrected; // limited 0.5;
    laplacian(1,p)  Gauss linear corrected; // limited 0.5;
}

interpolationSchemes
{
    default        linear;
    interpolate(U)  linear;
}

snGradSchemes
{
    default        corrected;
}

fluxRequired
{
    default        no;
    p              ;
}


// ************************************************************************* //

Code:

/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version    2.0;
    format      ascii;
    class      dictionary;
    location    "system";
    object      fvSolution;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

solvers
{
    p
    {
        solver          GAMG;
        tolerance      1e-04;
        //relTol          0.01;
        smoother        GaussSeidel;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels    1;

//        solver          PCG;
//        preconditioner  FDIC;
//        tolerance      1e-04;
//        //relTol          0.1;
//        maxIter                500;
    }

    pFinal
    {
        solver          GAMG;
        tolerance      1e-05;
        //relTol          0.01;
        smoother        GaussSeidel;
        cacheAgglomeration true;
        nCellsInCoarsestLevel 10;
        agglomerator    faceAreaPair;
        mergeLevels    1;

//        solver          PCG;
//        preconditioner  FDIC;
//        tolerance      1e-05;
//        //relTol          0.1;
//        maxIter                500;
    }

    "(U|nuTilda)"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance      1e-06;
        //relTol          0.1;
    }

    "(U|nuTilda)Final"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance      1e-06;
        //relTol          0.1;
    }
}

// PISO
// {
//    nCorrectors    2;
//    nNonOrthogonalCorrectors 0;
//    pRefCell        0;
//    pRefValue      0;
// }

PIMPLE
{
    nOuterCorrectors 2;
    nCorrectors    2;
    nNonOrthogonalCorrectors 0;
    pRefCell        0;
    pRefValue      0;
}

relaxationFactors
{
    "p.*"              0.3;
    "U.*"              0.5;
    "nuTilda.*"        0.5;
}


// ************************************************************************* //

Thanks!

dabe January 30, 2013 05:06

Edit: I just tried running the case with the turbulence model turned off. The problem was still present and seems to occur between the time steps.

sail January 30, 2013 14:34

Maybe you set the controlDict to save every iteration, and the lag you are seeing is the time taken to write the files to disk (which is slow compared to RAM)? Or it could be an issue due to latencies in your network fabric. Can you try running the case on 32 or 64 cores and see if the speedup is actually good? In my experience, 50k cells per core is starting to be on the low side.
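The output settings worth checking live in system/controlDict; a minimal sketch (the values here are just an illustration, not taken from your case):

Code:

writeControl    timeStep;   // write every N time steps instead of every step
writeInterval   100;        // example value only; choose something suitably large
writeFormat     binary;     // binary output is faster to write than ascii
purgeWrite      0;          // keep all written time directories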

dabe January 31, 2013 03:09

Hello Vieri and thanks for your reply,

The controlDict is unfortunately not set to save every iteration but uses a fairly large write interval. The cluster I'm running on is equipped with InfiniBand, so there shouldn't be any latencies due to the network, I think.

I've tried running the case decomposed onto 64, 72, 100 and also 144 cores. They all behave pretty much like the 128-core run, except for the 64- and 72-core configurations, which improve the time per time step by roughly 1 s, down to ~3 seconds compared to ~4 seconds with the other configurations. The problem is that, out of those 4 seconds, approximately 3 seconds are still due to the lag, while the U and p calculations are carried out really smoothly. For the 64- and 72-core cases the lag seems to be smaller, but it still makes up the major part of the time taken for one time step.
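For reference, changing the core count for such tests comes down to the numberOfSubdomains entry in system/decomposeParDict before re-running decomposePar; a rough sketch, where the decomposition method is only an example and not necessarily the one used here:

Code:

numberOfSubdomains  128;    // set to 64, 72, 100 or 144 for the scaling tests

method              scotch; // example method; scotch needs no per-processor weights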

Now I thought this could be coupled either to the SA turbulence model or to the DDES model, so I tried running the case as URANS with both kOmegaSST and SA as turbulence models. Both of them run really smoothly and scale very well, which means there is no problem with the SA model itself. This brings me to the thought that there could be a scaling problem with DDES or LES in OpenFOAM?
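Switching between the DDES and URANS runs only involves the turbulence selection files; roughly, for OpenFOAM 2.0.x (a sketch from memory, not copied from the case):

Code:

// constant/turbulenceProperties
simulationType  RASModel;           // LESModel for the DDES runs

// constant/RASProperties
RASModel        SpalartAllmaras;    // or kOmegaSST for the second comparison
turbulence      on;
printCoeffs     on;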

Any thoughts?

dabe January 31, 2013 10:22

Problem solved!

The lag was caused by using vanDriest as the delta. By switching to cubeRootVol the lag completely disappeared.

However, should using vanDriest really be that computationally expensive?
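For anyone who hits the same problem: the change is just the delta entry in constant/LESProperties, roughly like this (the coefficient shown is the usual default, not necessarily what this case used):

Code:

LESModel        SpalartAllmarasDDES;
delta           cubeRootVol;        // was vanDriest, which caused the per-step lag

cubeRootVolCoeffs
{
    deltaCoeff      1;
}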

