CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
incredible large computation time with interFoam (https://www.cfd-online.com/Forums/openfoam-solving/109605-incredible-large-computation-time-interfoam.html)

simpomann November 21, 2012 12:48

incredible large computation time with interFoam
 
Hi there,

Finally things start to work out with my thesis (at least everything seems stable now), but computation time is truly killing it.

It's related to the filling of a tank (complex geometry from an industrial product) with gasoline while pushing out air through a ventilation tube.
My mesh seems to be OK (checkMesh thinks so, at least) and has around 230,000 cells. The complete filling takes 120 s of real time.
Flow velocities do not exceed 60 m/s.

Unfortunately my time step is around 8*10^(-7) s, which means I will need about 150 million time steps for the job. I have already switched to first-order upwind schemes and gone down to 2 pressure solving loops (stability and residuals still look great) to get to this point.

Unfortunately I still need ~4 s per time step, which brings me to a computation time of approximately 19 years. I didn't trust this estimate and started a simulation one week ago: I have simulated 0.15 seconds so far, so it will actually come out at around 13.4 years!

Crazy. My machine is a Dell Precision workstation with the following specs:

Xeon W3680, 6 cores @ 3.3 GHz, with hyperthreading
24 GiB RAM

The case is decomposed and running on 12 processes. Shouldn't it be faster? I didn't expect this to be done in a day, but 13 years seems far too long.

The time step is automatically adjusted to keep the Courant number below 1; I didn't dare to use a higher limit.
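
For reference, the adaptive time-step control in controlDict is of this form (maxAlphaCo and maxDeltaT below are placeholder values, not necessarily my exact settings):

Code:

adjustTimeStep  yes;       // let interFoam adapt deltaT
maxCo           1;         // Courant number limit
maxAlphaCo      1;         // interface Courant limit (placeholder)
maxDeltaT       1;         // upper bound on deltaT (placeholder)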

Any suggestions?
Does anybody have comparable simulation times?

I will attach fvSchemes and fvSolution from the other computer in a few minutes.

Best regards and thanks,

Simon

simpomann November 21, 2012 12:53

Code:

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

ddtSchemes
{
    default        Euler;
}

gradSchemes
{
    default        Gauss linear;
}

divSchemes
{
    div(rho*phi,U)  Gauss upwind;
    div(phi,alpha)  Gauss vanLeer;
    div(phirb,alpha) Gauss interfaceCompression 1;
    div(phi,p_rgh)  Gauss upwind;
    div(phi,k)      Gauss vanLeer;
    div((nuEff*dev(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default        Gauss linear corrected;
}

interpolationSchemes
{
    default        linear;
}

snGradSchemes
{
    default        corrected;
}

fluxRequired
{
    default        no;
    p;
    pcorr;
    alpha1;
    p_rgh;
}


// ************************************************************************* //

Code:

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

solvers
{
    p_rgh
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance      1e-07;
        relTol          0.05;
    }

    p_rghFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance      1e-07;
        relTol          0.01;
    }

    pcorr
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance      1e-10;
        relTol          0.01;
    }

    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance      1e-07;
        relTol          0.05;
    }

    pFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance      1e-07;
        relTol          0.01;
    }

    U
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance      1e-06;
        relTol          0.01;
    }
}

PIMPLE
{
    momentumPredictor no;
    nCorrectors    2;
    nNonOrthogonalCorrectors 0;
    nAlphaCorr      1;
    nAlphaSubCycles 2;
    cAlpha          1;
}


// ************************************************************************* //


akidess November 21, 2012 13:22

Hyperthreading might actually slow you down, so check whether you really benefit from using 12 processes instead of 6. Also try using GAMG instead of PCG to improve the speed a bit. Overall, what's really killing you is the time step. It's going to be tough to reduce the cost per time step by enough that you can afford a deltaT of 1e-6 s when aiming to simulate a process of 120 s...
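
For p_rgh that would look something like this (a sketch only; the smoother and agglomeration settings below are common defaults, tune them for your case). The same change applies to p_rghFinal, pcorr, p and pFinal:

Code:

p_rgh
{
    solver          GAMG;
    smoother        DIC;                  // GaussSeidel also works
    nPreSweeps      0;
    nPostSweeps     2;
    cacheAgglomeration on;
    nCellsInCoarsestLevel 100;
    agglomerator    faceAreaPair;
    mergeLevels     1;
    tolerance       1e-07;
    relTol          0.05;
}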

Lieven November 21, 2012 13:41

That is indeed a very small time step, but that is simply due to the huge velocity: 60 m/s = 216 km/h.

There are a few things you can try that could speed up the simulation.
1) Don't use all the processors you have for the simulation, because you need one for the OS processes as well (I'm quite sure that using all 12 will be noticeably slower than using only 11). The number of processes is set in decomposeParDict; see the sketch below this list.

2) Complementary to (1), hyperthreading has only a limited influence on speed. You won't gain a factor of 2 in calculation time when you go from 6 to 12 processes; if you reduce the calculation time by 20% you should already be happy. So you might as well run on 6 cores and leave the other six threads for the OS processes.

3) You could always coarsen your mesh where you expect this maximum velocity. I'm certainly not in favour of this option, but it will lower the Courant number.

To be honest, though, even with all these "tricks" I don't think you will end up with reasonable calculation times as long as the maximum velocity remains on the order of 60 m/s.
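
For reference, the process count is what you set as numberOfSubdomains in system/decomposeParDict before running decomposePar; a minimal sketch (the scotch method is just one convenient choice, adjust the count to whatever turns out fastest):

Code:

numberOfSubdomains 6;       // compare e.g. 6, 11 and 12

method          scotch;     // no manual direction splitting needed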

ngj November 21, 2012 13:44

Hi Simon,

Besides trying GAMG, I would highly recommend that you set momentumPredictor in fvSolution::PIMPLE to "on". I believe this would improve your stability a lot.
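
With the fvSolution you posted, that means changing only the momentumPredictor line in the PIMPLE block (note that once the predictor is active, a UFinal solver entry, e.g. a copy of your U entry, may also be needed):

Code:

PIMPLE
{
    momentumPredictor on;       // was "no"
    nCorrectors    2;
    nNonOrthogonalCorrectors 0;
    nAlphaCorr      1;
    nAlphaSubCycles 2;
    cAlpha          1;
}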

Secondly, I have a couple of suggestions for your mesh:

1. The filling of a tank probably does not depend much on the boundary layers, so you could use a coarse mesh along the walls and apply slip conditions (see the boundary-condition sketch after this list).

2. Identify the places where you have large velocities. I guess the limiting areas are the inlet and outlet tubes, so if possible make the mesh coarser (in the direction of the flow) in those areas. On top of this, I have previously had problems with a jet of water entering a domain filled with air. The problems arose at the kink between the wall and the jet, where I experienced large velocities in the air phase. If I recall correctly, this can be partly solved by adding a small tube section extending into the chamber.

3. Have you tried running a simplified case in 2D?
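
Regarding the slip walls in point 1, the velocity boundary condition in 0/U would simply be something like this (tankWalls is a placeholder patch name):

Code:

tankWalls               // placeholder name for the coarse wall patch
{
    type            slip;
}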

Kind regards,

Niels

simpomann November 23, 2012 08:02

Hey there,

This is a very helpful place, big thanks to all of you!

Changing from PCG to GAMG was a very good suggestion; it brought the computation time per time step down by ~50% while the residuals are nearly as good.

The decisive trick was enlarging the diameter of my outlet tube (the real choke point here), which reduced the maximum occurring velocity from 60 m/s to 10 m/s and lowered the computation time by a factor of 10. It is now slightly wrong physically (as mentioned, my geometry comes from industry and I was originally not supposed to change it, but what options did I have?).

In addition, I can now also switch from my 6-core workstation to a 72-core cluster and hope it's going to be fine.

So my summary of how to bring computation time in interFoam down looks like this:

- using upwind schemes
- using GAMG instead of PCG for pressure (it also needs far fewer iterations within each time step)
- reducing extreme velocities by physically changing the model :eek:

Additional ideas:
- being careful with free-stream jets (there are 2 in my system, but I really can't do anything about them)
- coarsening the mesh at walls with low velocities, possibly even making it very coarse and applying slip boundaries (interesting idea, thanks! I never thought about it, but it might turn out useful; unfortunately, for my current geometry I see no way of implementing this)


Thanks and regards to all previous posters!

Lieven November 23, 2012 09:03

Hey Simon,

Just two remarks.

1. In general, GAMG indeed decreases the number of iterations required but, especially for smaller meshes, the calculations can take more time than with the default PCG/PBiCG solvers. So don't focus only on the number of iterations per time step.

2. It's nice that you have a bigger machine available for the calculations, but be aware that this does not necessarily mean the simulation will run 12x faster (= 72/6).

The more you decompose the case, the lower the ratio (computation time on one CPU)/(communication time between the CPUs). At a certain point you will hit the bottleneck where the CPUs have to wait for the data packets to arrive; beyond this point, increasing the number of CPUs might even slow the calculation down. The best approach is to gradually increase the number of CPUs and monitor the computation time (or the number of resolved time steps per unit of computation time). With the relatively small mesh you're using, I expect you to reach this limit quite quickly (to put it in perspective, for my calculations I try to assign about 200k cells to one CPU).

Regards,


Lieven

