CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (http://www.cfd-online.com/Forums/openfoam-solving/)
-   -   An alarming issue? (http://www.cfd-online.com/Forums/openfoam-solving/110119-alarming-issue.html)

diego_angeli December 4, 2012 06:45

An alarming issue?
 
Dear FOAMers,

yesterday I experienced a very weird situation.
I was in the middle of a lab session and I was teaching a custom tutorial to the students.
The case is the famous "mixing elbow", in its isothermal variant, to be solved with simpleFoam. I set it up successfully this time last year, and I was feeling very comfortable doing it all over again.

You can download the zipped case from this link:
http://www.mimesis.eu/pool/elbow.zip

As you may see, the mesh is quite coarse but not horrible; I built it specifically to achieve a fast and safe execution in the lab sessions. The model, numerics and solver settings are entirely inherited from the pitzDaily case. Questionable, but easy for a "first cup of OpenFOAM" besides the usual cavity stuff. I imposed very standard BCs.

I consciously retained the corrected laplacianSchemes (sketched below), well aware that these may cause problems with unstructured grids. I wish to stress that I made such a choice not because I am stupid or masochistic, but only to be able to finish the case from start to end, focusing on modifying the files in the 0 folder, within the time span of one session. I have 120 students, and 99% of them had never seen any Linux and/or text UI before. The next lesson is about fvSchemes and its use.
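For reference, the corrected entries inherited from the pitzDaily fvSchemes look roughly like this (a sketch based on the stock 2.x tutorial; the exact entry names in the elbow case may differ):

Code:

laplacianSchemes
{
    default                         none;
    laplacian(nuEff,U)              Gauss linear corrected;
    laplacian((1|A(U)),p)           Gauss linear corrected;
    laplacian(DkEff,k)              Gauss linear corrected;
    laplacian(DepsilonEff,epsilon)  Gauss linear corrected;
}

snGradSchemes
{
    default         corrected;
}

The "corrected" variant adds an explicit non-orthogonal correction to the surface-normal gradient, which is exactly what can misbehave on a non-orthogonal unstructured mesh.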

And there it went:
- last year it was OF-2.0.x, compiled DP with gcc 4.6 on an Ubuntu 11.10 32-bit virtual machine (I distributed it to the students to let them work independently). Everything worked fine (and still does: I re-tried today), "convergence" (with coarse thresholds) is achieved within a hundred iterations or so, and the result is meaningful. So far, so good.
- this year, OF-2.1.x, compiled DP with gcc 4.4 on a Debian Etch machine (Xeon) and cloned onto all the PCs of the lab: the case stops after 10 iterations, claiming to be converged, but the result is completely screwed up. Then it was utter panic in the lab.

Luckily, we changed the initial condition for epsilon on the fly (from 1e-4 to 1, as sketched below) and it worked.
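A minimal sketch of the kind of edit meant here, assuming the change was made to the internalField in 0/epsilon (the patch name below is hypothetical, since it depends on the mesh):

Code:

dimensions      [0 2 -3 0 0 0 0];

internalField   uniform 1;          // was: uniform 1e-4

boundaryField
{
    inlet                           // hypothetical patch name
    {
        type            fixedValue;
        value           uniform 1;
    }
    // outlet, walls etc. unchanged
}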

This morning I did some more tests. I will sum them up here:
- OF-2.0.x - October 2011 - OpenSUSE 11.4 64-bit - gcc 4.4 DP: FPE after 3 iterations
- OF-2.0.x - October 2012 - Ubuntu 12.04 64-bit - gcc 4.4 SP: FPE after 3 iterations
- OF-2.1.x - January 2012 - Ubuntu 11.10 64-bit - gcc 4.6 DP: FPE after 10 iterations; works after changing the IC for epsilon

In addition, some students with the precompiled 2.1.1 release for Ubuntu got it working instead, with no changes at all!

Hence, roughly speaking, it seems that the stability of the computation depends on whether the build is 32- or 64-bit, and that the sensitivity to the initial condition on epsilon depends on the version.

That said, I think I will revert to uncorrected laplacianSchemes next year (which, by the way, always fixes the issue; see the sketch below), in order to avoid such a mess. But, indeed, in my opinion this version-dependence is rather alarming.
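For the record, the "safe" variant amounts to switching the schemes to their uncorrected counterparts, e.g.:

Code:

laplacianSchemes
{
    default         Gauss linear uncorrected;
}

snGradSchemes
{
    default         uncorrected;
}

This drops the non-orthogonal correction altogether, trading some accuracy on skewed cells for robustness.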

Please tell me if I got it wrong somewhere.

Thanks in advance,

Regards

diego

atmcfd December 4, 2012 21:25

This looks interesting. Let me see if my experience is of any use to you:
I have faced some strange issues like this in the past (though I did not delve much into them). The most recent one: I am running an LES simulation in parallel on a supercomputer. My case runs normally when I use 12 processors in parallel. However, when I use 20 or 24 processors, it gives me a sigFpe after a few time steps (the number of steps is the same for the different fvSchemes/fvSolution settings I have tried).

The supercomputer I use has Intel Xeon X5650 CPUs. I wonder what's going on here: the same case with the same mesh (the mesh is of good quality) runs normally in serial, or in parallel with 12 procs.
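For context, the processor count and decomposition method are set in system/decomposeParDict; a minimal sketch of the relevant entries (the method shown is only an example, not necessarily what I run):

Code:

numberOfSubdomains  20;     // 12 runs fine; 20 and 24 trigger the sigFpe

method              scotch; // alternatives include simple and hierarchical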

diego_angeli December 5, 2012 06:21

Thank you atm for the feedback. Your experience looks even worse...

olivierG December 5, 2012 11:19

hello,

I'm not an expert, but I would just add my 2 cents:

1) I use 2.1.x (and before that, 2.0.x and 1.7.x) and update with git approximately once per month: you can usually see some changes, even in the k-epsilon model (yesterday I saw some updates to kOmegaSST). I don't follow what was changed (sometimes it is just a typo), but you should investigate.

2) If I remember correctly, there was a change in the calculation of residuals in version 2.0 or 2.1: this may affect your settings, especially for a case with a bad mesh.

3) About atm's case: this is curious, because it is an "it works for me" kind of situation, but I would say: you should not leave this in a grey area; check why some decompositions don't work. Do you use GAMG for pressure? Try PCG (a sketch follows), or another decomposition method, ... and share your results.
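For instance, switching the pressure solver is a one-entry change in system/fvSolution (the tolerances below are just typical tutorial values):

Code:

p
{
    solver          PCG;    // instead of GAMG
    preconditioner  DIC;
    tolerance       1e-06;
    relTol          0.01;
}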

regards,

olivier

