CFD Online Discussion Forums

pisoFoam pressure issue (https://www.cfd-online.com/Forums/openfoam-solving/96329-pisofoam-pressure-issue.html)

jmart7 January 19, 2012 12:46

pisoFoam pressure issue
 
Hello everyone! I have been trying to simulate a scalar using pisoFoam, which is an incompressible transient solver. I modified the solver to also solve a scalar transport equation, and it runs fine.

My problem is that the pressure solution takes 500 to 600 iterations, which slows the run tremendously.

I am using a delta t of 1e-6, which at the moment keeps a stable Co of about 2.5.
My mesh is about 1.3 million cells and I am running in parallel on 128 processors, but even so it takes 10 seconds per time step.

In the initial conditions for pressure I am using 0 for the internal field, just like in the tutorials. I changed this to atmospheric pressure and nothing really changed. I am open to any suggestions and I appreciate your help in advance.

akidess January 20, 2012 02:34

A Co of 2.5 is too large for a PISO solver - it should be smaller than 1. And something important you forgot to mention: which linear solver are you using?
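(For reference, for a single cell Co = |U|·Δt/Δx, so with the velocity and the mesh fixed the only practical handle on it is Δt.)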

jmart7 January 20, 2012 11:08

For pressure I am using PCG and for velocity PBiCG. Also, I am using the backward implicit scheme for the time derivative. I had originally set the Co to 2.5 because it gave me the largest attainable time step (1e-6), since I was trying to get this to run a bit faster. So I guess I have to bring the Co back below 1 and take the 1e-7 hit?
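For reference, the relevant entries in my fvSchemes/fvSolution look roughly like this (trimmed down, and the preconditioners and tolerances shown here are just the usual defaults rather than my exact values):

Code:

// fvSchemes
ddtSchemes
{
    default         backward;
}

// fvSolution
solvers
{
    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          0.05;
    }

    U
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-08;
        relTol          0;
    }
}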

Thanks for your reply by the way

akidess January 20, 2012 12:02

Either you keep the Courant number below 1, or you switch to pimpleFoam. Keep in mind that even though your time step will get smaller, you'll end up doing less work per time step. You can probably also speed things up by using GAMG to solve the pressure correction.
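A typical GAMG entry for p looks something like this (values are illustrative, take them from a tutorial and adapt to your case):

Code:

p
{
    solver          GAMG;
    smoother        GaussSeidel;
    agglomerator    faceAreaPair;
    nCellsInCoarsestLevel 100;
    cacheAgglomeration on;
    mergeLevels     1;
    tolerance       1e-06;
    relTol          0.05;
}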

jmart7 January 23, 2012 02:00

Thanks for your suggestion! In pisoFoam I decreased my delta t to get a Co of about 0.6 and switched the linear solver for p to GAMG. This shaved about 2 seconds off the time per step. I also tried pimpleFoam with a max Co of 2 and GAMG, and I get the same time per step as pisoFoam, but my delta t is an order of magnitude larger, which should make the overall run faster. I guess pimpleFoam at 8 seconds per time step is the fastest I can do.
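For the pimpleFoam run I let the time step be adjusted from controlDict, roughly like this (the maxDeltaT value here is just a placeholder, not my actual setting):

Code:

adjustTimeStep  yes;
maxCo           2;
maxDeltaT       1e-05;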

My second question: if I refine my mesh, do you think the time performance can improve? It seems a bit counter-intuitive, since even though adding more points might make each calculation converge more easily, there are also more calculations to be done.

My third question: since this is a transient LES problem, I've read in various threads that I should not start from a zero initial condition, but from a steady-state one computed with simplesrfFoam. Is this true?

Sorry for all the questions, but you've been a lot of help. Thanks once again.

akidess January 23, 2012 08:46

2nd question - No, I can't imagine refining the mesh doing anything but increasing the computational time.

3rd question - I don't know anything about LES. Hopefully someone else will give an answer.

vkrastev January 24, 2012 05:39

With LES turbulence modeling you have to respect the maxCo < 1 condition (sometimes even lower), otherwise you will not adequately resolve the turbulence time scales. In simple words: if you manage to get a stable simulation with Co > 1, which is not so obvious, you will miss a lot of the high-frequency turbulent structures in your flow, and with them much of the point of running an LES. In general, I don't know any efficient way to speed up an LES other than massive parallelization, because for turbulence resolution reasons (yes, you have to account for both temporal and spatial resolution) LES grids are generally quite "heavy". In that sense, my experience tells me that the PCG (or PBiCG) solver for pressure scales better in parallel at high numbers of threads/cores than GAMG, which however is generally faster in serial or on a low number of threads/cores.
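As a side note on the parallel part, the domain decomposition itself is set in system/decomposeParDict; for 128 cores it would be something like this (the method shown is just a common choice, not a specific recommendation for your case):

Code:

numberOfSubdomains 128;
method          scotch;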

Regards

V.

jmart7 January 24, 2012 12:59

Thank you Anton for your help!

Vesselin: First of all, thanks for your very helpful input. I'm kind of new to OpenFOAM and RANS/LES simulations, but I'm learning a lot. So I guess obsessing over a faster run-time is not the best way to go at it if I want to do a good LES simulation. What I got from your response is that I need to go back to PBiCG or PCG, because I did notice that even though my simulation run-time decreased using pimpleFoam and GAMG, my scalar solution completely diverged. I will go back to pisoFoam with the PCG linear solver and try to figure out why my final pressure solve takes about 500 iterations when everything else takes about 2 or 3. Do you think the pressure solver in pisoFoam was built more for RANS simulations rather than LES?

Thanks once again V.

vkrastev January 24, 2012 13:37

Quote:

Originally Posted by jmart7 (Post 340920)
So I guess obsessing over a faster run-time is not the best way to go at it if I want to do a good LES simulation.

Yes, but this doesn't mean that you cannot reach a reasonable run time for a single time step (the real problem with LES is that you have to run for a huge number of time steps): 128 cores for a 1.3 million cell mesh is quite a lot!

Quote:

Originally Posted by jmart7 (Post 340920)
What I got from your response is that I need to go back to PBiCG or PCG, because I did notice that even though my simulation run-time decreased using pimpleFoam and GAMG, my scalar solution completely diverged. I will go back to pisoFoam with the PCG linear solver and try to figure out why my final pressure solve takes about 500 iterations when everything else takes about 2 or 3. Do you think the pressure solver in pisoFoam was built more for RANS simulations rather than LES?

The pressure equation (independently of the LES or RANS approach) is always the bottleneck in the numerical solution of the discretized fluid flow equations, and this is true for all the most commonly used pressure-based methods (such as the PISO or SIMPLE ones implemented in OpenFOAM). However, with 128 cores your run time per time step is quite high, and this could be due to a lot of factors (not only the type of linear solver). Assuming that from now on you will set maxCo < 1 (I suspect your solution diverges not because of the linear solver type, but because of other instability factors, of which the too-large time step can be one), can you please post the following info?

1) the checkMesh log file

2) your fvSolution file

3) the type of physical connection between the computing nodes

Regards

V.

jmart7 January 25, 2012 03:18

2 Attachment(s)
Here are the files. Regarding your last question, I use a supercomputer that is available to various universities for parallel computing, so I don't know exactly what physical interconnect they use; I assume it is pretty fast though.

vkrastev January 25, 2012 05:13

Ok, let's start with the mesh: if you were simulating with a RANS approach I would say that the checkMesh output is quite ok (I usually don't like to have pyramids in my domain, but that's only my opinion), but I cannot say the same for LES. Theoretically speaking an LES can run on any kind of mesh, but my experience tells me that it is quite hard to obtain a good quality LES on anything that is not a purely hexahedral mesh (again, this is related to the higher spatial/temporal resolution and accuracy required by LES modeling). If it's feasible, remeshing your domain with hexas would be a good starting point on the road to a good quality, converging LES.

About the fvSolution: it seems to confirm my previous statement, because you have quite a high relative tolerance on the pressure solver (relTol set to 0.05 means that at each pressure iteration the linear solver only has to push the relative residual down to 0.05, which is a fairly loose value). So, if the number of iterations is still so high, it is more likely that the solution is diverging for some reason (too big a deltaT plus a poor mesh, for instance) rather than there being a problem with the linear solver itself.
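In other words, an entry like this (only the relevant lines are sketched here):

Code:

p
{
    solver          PCG;    // or GAMG
    preconditioner  DIC;
    tolerance       1e-06;
    relTol          0.05;   // stop once the residual has dropped to 5% of its initial value
}

only asks each intermediate corrector for a factor-of-20 residual drop; it is the final corrector (pFinal, usually run with relTol 0) that has to go all the way down to the absolute tolerance.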

Finally, sorry for not asking for this in my previous post, but can you also post your fvSchemes and (if you can) an example of some iterations from a simulation log file?

Regards

V.

PS - In general, you have to keep in mind that with LES you'll need much more patience than with RANS approaches, but that's the price to pay for simulating "more of the turbulence" rather than simply modeling it...

PPS - I agree with you about the HPC facility: generally speaking they have robust and fast interconnects between the computing nodes, so let's assume this is true for your case as well.

jmart7 January 25, 2012 17:19

2 Attachment(s)
I have been noticing from some LES cases how dense LES meshes are, especially near inlets, so I will start refining my mesh. Another option I have been thinking about is implementing a different pressure solver: modifying the pisoFoam source code so that instead of the implicit PISO method it uses an explicit Runge-Kutta scheme, which is inherently faster.

Here are the extra files and part of the log file.

Thanks once again for the help

vkrastev January 30, 2012 06:26

Quote:

Originally Posted by jmart7 (Post 341182)
I have been noticing from some LES cases how dense LES meshes are, especially near inlets, so I will start refining my mesh. Another option I have been thinking about is implementing a different pressure solver: modifying the pisoFoam source code so that instead of the implicit PISO method it uses an explicit Runge-Kutta scheme, which is inherently faster.

Here are the extra files and part of the log file.

Thanks once again for the help

Ok, here are my suggestions:

1) Lower all your absolute tolerances (the tolerance entry in the fvSolution file), except the pFinal one, to, let's say, 1e-12 (from your log file I see that you are no longer actually solving for Zmix, because its residual has already fallen below the absolute tolerance cut-off: in general it is not good to stop solving for something while the solution still has to evolve).

2) If Zmix is a concentration, which has to be bounded between 0 and 1, use a strictly bounded convection scheme for div(phi,Zmix), such as Gauss limitedVanLeer 0 1 (this will bound the Zmix value strictly between 0 and 1 during the cell-to-face interpolation).

3) For div(phi,U) use a "V" scheme (e.g. Gauss limitedLinearV 1).

4) At least in the initial part of your simulation, use a slightly less restrictive pFinal tolerance (e.g. 1e-05 instead of 1e-06): the huge number of iterations in your case is due to the second (and last) PISO pressure correction, where the solver tries to push the residuals below the pFinal tolerance. Also, if you keep using meshes with significant non-orthogonality, add 1 or 2 non-orthogonality correctors: this will increase the number of pressure loops, but each of them should not be very expensive because they follow the relative tolerance (0.05 in your case). In addition, the final loop will start from a lower initial residual, which should be beneficial in terms of the number of iterations in the final loop. (See the sketch below for how these entries fit together.)
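Put together, the relevant entries would look roughly like this (a sketch only, keep your own settings where they differ):

Code:

// fvSolution
solvers
{
    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-12;
        relTol          0.05;
    }

    pFinal
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-05;   // tighten to 1e-06 once the run has settled
        relTol          0;
    }

    "(U|Zmix)"
    {
        solver          PBiCG;
        preconditioner  DILU;
        tolerance       1e-12;
        relTol          0;
    }
}

PISO
{
    nCorrectors     2;
    nNonOrthogonalCorrectors 2;
}

// fvSchemes
divSchemes
{
    div(phi,U)      Gauss limitedLinearV 1;
    div(phi,Zmix)   Gauss limitedVanLeer 0 1;
}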

That's it. I'm absolutely not a big LES expert, but I hope these little pieces of advice will improve your simulations.

Regards

V.

jmart7 January 30, 2012 22:54

Thank you so much for these helpful hints. I will apply them and let you know how it goes. Thanks once again for your help V.

jmart7 February 9, 2012 16:56

Hello again V, and sorry to bother you once more,

So I applied your suggestions, but I kept getting the high number of iterations. My next step, without redoing the mesh, was to modify the solver to solve for pressure explicitly with a second-order Runge-Kutta scheme. I tested it on a simple geometry, a channel, and it worked great. Now I am trying to apply it to my more complicated geometry, and my pressure increases very rapidly until it explodes and the solution completely diverges very early. My geometry includes some inlets and an outlet. I used the same boundary conditions for p and U:

P:
wall - zeroGradient
inlets - zeroGradient
outlet - fixedValue 0

U:
wall - fixedValue 0
inlets - fixedValue 11
outlet - fixedValue 0
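In the 0/ files this corresponds to something like the following (patch names and the inlet velocity direction are just illustrative):

Code:

// 0/p
boundaryField
{
    walls   { type zeroGradient; }
    inlets  { type zeroGradient; }
    outlet  { type fixedValue; value uniform 0; }
}

// 0/U
boundaryField
{
    walls   { type fixedValue; value uniform (0 0 0); }
    inlets  { type fixedValue; value uniform (11 0 0); }
    outlet  { type fixedValue; value uniform (0 0 0); }
}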

At first I thought it was the boundary conditions, but I have messed around with them with no luck. I have run out of ideas other than redoing the mesh.

vkrastev February 9, 2012 17:21

Quote:

Originally Posted by jmart7 (Post 343670)
U:
wall - fixedValue 0
inlets - fixedValue 11
outlet - fixedValue 0

The fixed value for U at the outlet is an error (in an incompressible solver you cannot fix both pressure and velocity at an outlet): use zeroGradient or inletOutlet for U. Although it should be less efficient than the PCG linear solver for a large number of cores, you can also try the GAMG solver for pressure (see some OF tutorials for how to set it up properly). Finally, you talked about the number of iterations, but is the simulation diverging, or is it running fine apart from the computational efficiency?
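For example, the outlet entry in 0/U could become something like this (sketch only, keep your own patch name):

Code:

outlet
{
    type            inletOutlet;
    inletValue      uniform (0 0 0);
    value           uniform (0 0 0);
}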

Regards

V.

jmart7 February 9, 2012 18:03

It is diverging as well. The pressure gets up to 10^9

vkrastev February 10, 2012 04:40

Ok, if changing the BC for U at the outlet doesn't make any difference, then it is probably time to rebuild the mesh...

V.

maalan September 7, 2012 14:11

Hi all!!

I am writing because I also have a problem with the PISO algorithm, and I have been stuck for a long time...

The point is that I am trying to simulate the flow past a cube (Re = 3000, 30000, 300000) using URANS (e.g. RNG k-epsilon). My problem is that I do not get unsteady behaviour in the wake - I put several probes in the wake, reconstruct the solution, and the case settles into a steady state... I use the fvSchemes and fvSolution files from the motorbike tutorial, but nothing transient appears... About my mesh, I use a fully structured mesh with a refinement zone around the cube. I tried this case in 2D and I got the transient behaviour properly...

Also, I have tested a similar case: flow past a square cylinder with the same computational domain size and numerical schemes and voila, I got it!!

Finally, if I use the simpleFoam algorithm, even though it is said to be a steady-state algorithm, I do get the transient behaviour!!

What is happening??? I hope you can help me...

Thank you so much!!
Best,

