- **OpenFOAM Running, Solving & CFD**
(*http://www.cfd-online.com/Forums/openfoam-solving/*)

- - **Solution convergence problem - poor mesh?**
(*http://www.cfd-online.com/Forums/openfoam-solving/94276-solution-convergence-problem-poor-mesh.html*)

I'm trying to figure out why I can't converge my solution - yes, I know, lots of possibilities here. I'm wondering if it has to do with my mesh. My simulation is a vertical axis turbine, using the GGI interface and pimpleDyMFoam. I previously used CFX to do the simulations, and found appropriate grid size, dimensions, boundary layers, and timestep so that incremental changes in the simulation parameters changed the solution by about ~1%. To do 3D cases, I need to move to OpenFOAM so that I can run on more processors. In order to compare the inputs (timestep, grid, etc.), I need to compare a 2D OpenFOAM solution to a 2D CFX solution.

I got a small test case (smaller, simpler geometry with a quite coarse mesh) running in OpenFOAM, and have now moved to my full-sized mesh, where I am running into convergence/timestep problems. I would like to run my cases with timesteps equal to about 1 degree of rotation; this is based on CFX results giving about a 1.5%-2.0% change in both power coefficient and maximum thrust coefficient between steps equivalent to 2.5, 1.0, and 0.5 degrees. For my first test, this corresponds to a timestep of 2.78e-3 s. The problem is that using adjustTimeStep in controlDict to keep maxCo low gives me timesteps on the order of 1e-7. On top of that, I have a lot of numerical instability, so my velocity (the inlet condition is about 10 m/s) climbs past 10e6 but seems to come back down, slowly, eventually.

I set up my grid so that I would have 360 equal divisions on my GGI interface, one cell increment per "ideal" timestep. I was in contact with someone else who did a similar simulation, linnemann, and he used a much different grid - structured and hex. I'm wondering if my unstructured grid and 5-sided cells have anything to do with the problem, or possibly my boundary layer, which has 50 inflated layers. The turbulence model is k-omega SST. I ran his case and it seemed to converge well, even with a fairly high limit (50) on maxCo.
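As a sanity check on the numbers above (illustrative arithmetic only, not output from the case): a 1-degree step at roughly one revolution per second gives dt = 2.78e-3 s, and the cell Courant number Co = U*dt/dx shows why the adjustable-timestep controller collapses dt once the local velocity blows up. The cell sizes used here are placeholders, not the actual mesh spacing.

```python
# Timestep implied by 1 degree of rotation per step at ~1 rev/s
# (inferred from the quoted dt = 2.78e-3 s):
deg_per_step = 1.0
rev_per_s = 1.0
dt = deg_per_step / (rev_per_s * 360.0)   # ~2.78e-3 s

def courant(U, dt, dx):
    """Cell Courant number Co = U * dt / dx."""
    return U * dt / dx

def dt_for_co(co_max, U, dx):
    """Largest dt that keeps Co <= co_max for a given U and cell size dx."""
    return co_max * dx / U

# At the nominal 10 m/s inlet speed, the desired dt is only compatible
# with Co ~ 1 for fairly coarse cells (~28 mm here):
co_nominal = courant(10.0, dt, 0.028)     # ~ 0.99

# If the local velocity spikes toward 1e6 m/s in a ~1 mm cell,
# keeping Co = 1 forces dt down to the nanosecond range:
dt_spike = dt_for_co(1.0, 1e6, 1e-3)      # ~ 1e-9 s
```

This is consistent with the observed behaviour: the 1e-7 s timesteps are the controller reacting to the velocity spikes, not to the nominal flow.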
I've tried increasing the number of corrector steps (nCorrectors, nOuterCorrectors, nNonOrthogonalCorrectors), adjusting relaxation, and changing the solver method (GAMG, PCG, ...). Any tips are welcome. Thanks :)
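For anyone following along, the knobs mentioned above live in the PIMPLE sub-dictionary of fvSolution. A minimal sketch in OF-1.6-ext-era syntax; the counts and relaxation factors below are illustrative placeholders, not the values from this case:

```
PIMPLE
{
    nOuterCorrectors         3;   // outer pressure-momentum loops per timestep
    nCorrectors              2;   // PISO pressure corrections per outer loop
    nNonOrthogonalCorrectors 2;   // extra corrections for skewed/non-orthogonal cells
}

relaxationFactors
{
    p   0.3;
    U   0.7;
}
```

More non-orthogonal correctors are often suggested for unstructured meshes with poor-quality cells, at the cost of extra work per timestep.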

I should add that the linnemann mesh was much coarser around the blades (300 nodes vs 1000 nodes) with fewer boundary layers (5 vs 50).
I will post my controlDict/fvSolution/fvSchemes files a little later; I had to leave for other work. I was also wondering whether I should start with a steady-state solver (e.g. simpleFoam) and get a steady solution at my initial blade position to use as initial conditions, since the very first iteration of my transient solution seems to generate ridiculously high velocity pockets/pressure near the blade surfaces. I have read that there can sometimes be serious sensitivity to the initial conditions. I suppose it's worth a try. Shawn
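If the steady-state initialization is tried, one common arrangement (a sketch, assuming the steady and transient runs share the same case directory) is to run the steady solver first and then point the transient run at its last written time via controlDict:

```
// controlDict fragment for the transient run -- illustrative only.
// After the steady solver has written a converged field (say at t = 500),
// start the transient solver from that directory instead of from 0:
startFrom       latestTime;
```

Alternatively, the converged U and p fields can simply be copied back into the 0 directory so the transient run still starts at t = 0 with physical initial fields.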

It turns out that I just get a high velocity (and gradient) at the leading edge of the blade. Given the high velocity, the segregated PISO solver, and the small cell size, the Courant number rises quickly unless the timesteps are very small.
Increasing nOuterCorrectors seemed to help, as did changing my scheme to upwind. I'm going to experiment with the numbers of the various correctors and with the schemes a little more.
- In lieu of that, are there any coupled/fully implicit solvers available that allow for a rotor/stator interface (similar to my use of pimpleDyMFoam with the GGI in OF-1.6-ext)?
- Does anyone have experience with the new rotatingMotion in OF 2.0? Perhaps I should try that approach.
Thanks, Shawn
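For reference, switching to upwind is done in the divSchemes block of fvSchemes. A sketch (entry names assume the k-omega SST turbulence fields are k and omega, as is usual):

```
divSchemes
{
    default         none;
    div(phi,U)      Gauss upwind;   // first-order, very diffusive but robust
    div(phi,k)      Gauss upwind;
    div(phi,omega)  Gauss upwind;
}
```

First-order upwind trades accuracy for stability; once the case runs stably, bounded higher-order schemes (e.g. linearUpwind or limitedLinear 1) are worth trying so the blade loads are not smeared out.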
