
Incredibly large computation time with interFoam

#1 - November 21, 2012, 13:48
Simon Arne (simpomann) | Member | Join Date: May 2012 | Posts: 42
Hi there,

Finally things are starting to work out with my thesis (at least everything seems stable now), but the computation time is truly killing it.

It's the filling of a tank (complex geometry from an industrial product) with gasoline while air is pushed out through a ventilation tube.
My mesh seems to be OK (checkMesh thinks so, at least) and has around 230,000 cells. The complete filling takes 120 s of real time.
Flow velocities do not exceed 60 m/s.

Unfortunately my time step is around 8*10^(-7) s, which means I will need 150 million iterations for the job. I have already switched to first-order upwind schemes and gone down to 2 pressure correction loops (stability and residuals still look great) to get this far.

Unfortunately I still need ~4 s per iteration, which puts the total computation time at approximately 19 years. I didn't trust this estimate and started a simulation one week ago: I have simulated 0.15 seconds so far, so it will come out at around 13.4 years!

Crazy. My machine is a Dell Precision workstation with the following specs:

Xeon W3680, 6 cores @ 3.3 GHz, with hyperthreading
24 GiB RAM

The case is running on 12 processes. Shouldn't it be faster? I didn't expect this to be done in a day, but 13 years seems highly unlikely too.

The time step is automatically controlled with Courant number < 1; I didn't dare try a higher value.
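For reference, a minimal sketch of the relevant system/controlDict entries for this Courant-limited automatic time stepping (the values below just mirror the limits described above, not necessarily the exact case files):

Code:
adjustTimeStep  yes;    // let interFoam adapt deltaT to the Courant limits
maxCo           1;      // flow Courant number limit
maxAlphaCo      1;      // interface Courant number limit
maxDeltaT       1;      // upper bound on deltaT in seconds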

Any suggestions?
Does anybody have comparable simulation times?

I will attach fvSchemes and fvSolution from the other computer in a few minutes.

Best regards and thanks,

Simon

#2 - November 21, 2012, 13:53
Simon Arne (simpomann) | Member | Join Date: May 2012 | Posts: 42
fvSchemes:
Code:
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

ddtSchemes
{
    default         Euler;
}

gradSchemes
{
    default         Gauss linear;
}

divSchemes
{
    div(rho*phi,U)  Gauss upwind;
    div(phi,alpha)  Gauss vanLeer;
    div(phirb,alpha) Gauss interfaceCompression 1;
    div(phi,p_rgh)  Gauss upwind;
    div(phi,k)      Gauss vanLeer;
    div((nuEff*dev(T(grad(U))))) Gauss linear;
}

laplacianSchemes
{
    default         Gauss linear corrected;
}

interpolationSchemes
{
    default         linear;
}

snGradSchemes
{
    default         corrected;
}

fluxRequired
{
    default         no;
    p;
    pcorr;
    alpha1;
    p_rgh;
}


// ************************************************************************* //
fvSolution:
Code:
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

solvers
{
    p_rgh
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-07;
        relTol          0.05;
    }

    p_rghFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-07;
        relTol          0.01;
    }

    pcorr
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-10;
        relTol          0.01;
    }

    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-07;
        relTol          0.05;
    }

    pFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-07;
        relTol          0.01;
    }

    U
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-06;
        relTol          0.01;
    }
}

PIMPLE
{
    momentumPredictor no;
    nCorrectors     2;
    nNonOrthogonalCorrectors 0;
    nAlphaCorr      1;
    nAlphaSubCycles 2;
    cAlpha          1;
}


// ************************************************************************* //

#3 - November 21, 2012, 14:22
Anton Kidess (akidess) | Senior Member | Join Date: May 2009 | Location: Germany | Posts: 1,377
Hyperthreading might actually slow you down, so check whether you really benefit from using 12 processes instead of 6. Also try GAMG instead of PCG to improve the speed a bit. Overall, what's really killing you is the time step: it's going to be tough to reduce the cost per time step by enough that you can afford a deltaT of 1e-6 s when the aim is to simulate a 120 s process...
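A minimal sketch of what that GAMG swap could look like for the p_rgh entry in fvSolution, keeping the tolerances from post #2; the smoother and agglomeration settings below are common defaults, not tuned values:

Code:
p_rgh
{
    solver          GAMG;           // geometric-agglomerated algebraic multigrid
    smoother        GaussSeidel;    // a common default smoother
    tolerance       1e-07;          // kept from the original PCG setup
    relTol          0.05;
    nPreSweeps      0;
    nPostSweeps     2;
    cacheAgglomeration on;
    agglomerator    faceAreaPair;
    nCellsInCoarsestLevel 100;
    mergeLevels     1;
}

The p_rghFinal entry can be changed the same way, with its tighter relTol kept as is.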

#4 - November 21, 2012, 14:41
Lieven | Senior Member | Join Date: Dec 2011 | Location: Leuven, Belgium | Posts: 299
That is indeed a very small time step, but that's simply due to the huge velocity: 60 m/s = 216 km/h.

There are a few things you can try that could speed up the simulation.
1) Don't use all the processors you have for the simulation, because you need one for OS processes as well (I'm quite sure that using all 12 will be noticeably slower than using only 11).

2) Complementary to (1): hyperthreading has only a limited influence on speed. You won't gain a factor of 2 in calculation time when you go from 6 to 12 processes; if you reduce the calculation time by 20% you should already be happy. So you might as well run it on 6 cores and keep the other six logical cores for OS processes.

3) You could always coarsen your mesh where you expect the maximum velocity. I'm certainly not in favour of this option, but it will lower the Courant number.

To be honest, though, even with all these "tricks" I don't think you will end up with reasonable calculation times as long as the maximum velocity remains on the order of 60 m/s.
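A quick back-of-the-envelope check of why the velocity dominates, using the time step and velocity reported in post #1 (my arithmetic, not part of the original case):

Code:
Co = U * dt / dx              =>  dt = Co * dx / U
dt ≈ 8e-7 s at Co < 1 and U ≈ 60 m/s
=>  dx ≈ U * dt ≈ 5e-5 m      (the limiting cells are roughly 0.05 mm across)
At a fixed mesh the admissible dt scales with 1/U, so every reduction in peak velocity pays off directly.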

#5 - November 21, 2012, 14:44
Niels Gjoel Jacobsen (ngj) | Senior Member | Join Date: Mar 2009 | Location: Copenhagen, Denmark | Posts: 1,900
Hi Simon,

Besides trying GAMG, I would highly recommend that you set momentumPredictor in fvSolution::PIMPLE to "on". I believe this would improve your stability a lot.
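For clarity, that is a one-line change in the PIMPLE sub-dictionary shown in post #2 (a sketch; the other entries are just carried over):

Code:
PIMPLE
{
    momentumPredictor yes;    // solve a momentum predictor before the pressure corrector loop
    nCorrectors     2;
    nNonOrthogonalCorrectors 0;
    nAlphaCorr      1;
    nAlphaSubCycles 2;
    cAlpha          1;
}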

Secondly, I have a couple of suggestions to your mesh:

1. The filling of a tank probably does not depend much on the boundary layers, so you could use a coarse mesh along the walls and apply slip conditions (a minimal example follows after this list).

2. Identify the places where you have large velocities. I guess the limiting areas are the inlet and outlet tubes, so if possible make the mesh coarser (in the direction of the flow) in those areas. On top of this, I have previously had problems with a jet of water entering a domain filled with air: the problems arose at the kink between the wall and the jet, where I saw large velocities in the air phase. If I recall correctly, this can be partly solved by adding a small tube section extending into the chamber.

3. Have you tried running a simplified case in 2D?
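Regarding point 1, a slip wall is just a boundary-condition change in 0/U; a minimal sketch, with tankWalls as a hypothetical patch name to be replaced by the actual wall patches of the case:

Code:
// 0/U, boundaryField entry (tankWalls is a placeholder patch name)
tankWalls
{
    type            slip;
}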

Kind regards,

Niels

#6 - November 23, 2012, 09:02
Simon Arne (simpomann) | Member | Join Date: May 2012 | Posts: 42
Hey there,

This is a very helpful place, big thanks to all of you!

Changing from PCG to GAMG was a very good suggestion; it brought the computation time per iteration down by ~50% while the residuals are nearly as good.

The decisive trick was enlarging the diameter of my outlet tube (the real choke point), which reduced the maximum occurring velocity from 60 m/s to 10 m/s and lowered the computation time by a factor of 10. The model is now slightly wrong physically (as mentioned, my geometry comes from industry and I was originally not supposed to change it, but what options did I have?).

In addition, I can now switch from my 6-core workstation to a 72-core cluster and hope it's going to be fine.

So my summary of how to bring computation time in interFoam down looks like this:

- use upwind schemes
- use GAMG instead of PCG for the pressure (it also needs far fewer iterations per time step)
- reduce extreme velocities by physically changing the model

Additional ideas:
- be careful with free-stream jets (there are 2 in my system, but I really can't do anything about them)
- coarsen the mesh at walls with low velocities, possibly even make them very coarse and apply slip boundaries (interesting idea, thanks! I never thought about it, but it might turn out useful; unfortunately for my current geometry I see no way to implement it)


Thanks and regards to all previous posters!

#7 - November 23, 2012, 10:03
Lieven | Senior Member | Join Date: Dec 2011 | Location: Leuven, Belgium | Posts: 299
Hey Simon,

Just two remarks.

1. In general, GAMG indeed decreases the number of iterations required, but, especially for smaller meshes, the calculation can take more time than with the default PCG/PBiCG solvers. So don't focus only on the number of iterations per time step.

2. It's nice that you have a bigger machine available for the calculations, but be aware that this does not necessarily mean the simulation will run 12x faster (= 72/6).

The more you decompose the case, the lower the ratio of (computation time on one CPU)/(communication time between the CPUs). At a certain point you reach the bottleneck where the CPUs have to wait for the data packets to arrive; beyond that point, increasing the number of CPUs can even slow the calculation down. The best approach is to gradually increase the number of CPUs while monitoring the computation time (or the number of resolved time steps per unit of computation time). With the relatively small mesh you're using, I expect you to reach this limit quite quickly (to put it in perspective, for my calculations I try to assign about 200k cells to one CPU).
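As a concrete starting point for that kind of scaling test, the decomposition is set in system/decomposeParDict; a minimal sketch, where the subdomain count of 6 simply matches the physical core count discussed above:

Code:
numberOfSubdomains  6;        // start near the physical core count, then scale while timing
method              scotch;   // automatic graph partitioning, no manual direction splitting

After running decomposePar, the case is launched with e.g. 'mpirun -np 6 interFoam -parallel', and the number of subdomains can then be varied to find the sweet spot.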

Regards,


Lieven
