Why can switching the time step length help achieve convergence?
When my simulation doesn't converge, I usually change my time step to a smaller (or larger) one. After many iterations it still doesn't converge, so I switch the time step back to what I used at the beginning, and then the simulation converges.
Could you tell me why switching the time step (and then switching back) can help convergence? How do you explain this theoretically?
I am assuming you are referring to steady-state simulations and that you are using timestepping for stability. Correct me if I'm wrong. Smaller timesteps make the solution change more slowly, making it less likely to diverge or get stuck away from convergence. However, a smaller timestep also means it takes longer to reach the steady state. Thus it can be helpful to start with a small timestep to get things going in the right direction and then change to a larger one so the solver can actually reach steady state in a reasonable length of time.
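To make this concrete, here is a minimal sketch of pseudo-time marching on a toy scalar problem (my own illustrative example, not from any particular solver): we seek the steady state of du/dt = R(u) with R(u) = 1 - u², whose root is u = 1. Explicit marching u ← u + Δt·R(u) is only stable when Δt is small relative to the local Jacobian, so a large Δt started far from the solution blows up, a small Δt converges but slowly, and a small-then-large schedule converges fastest. The function names, the 1e6 divergence guard, and the step counts are all assumptions chosen for the demonstration.

```python
def residual(u):
    """Toy steady-state residual R(u) = 1 - u^2; steady state at u = 1."""
    return 1.0 - u**2

def pseudo_time_march(u, dt, n_steps, tol=1e-8):
    """Explicit pseudo-time stepping: u <- u + dt * R(u).

    Returns (u, iterations, converged). 'converged' is False both when the
    iterate blows up and when the step budget runs out.
    """
    for i in range(1, n_steps + 1):
        r = residual(u)
        if abs(r) < tol:
            return u, i, True          # residual below tolerance
        u = u + dt * r
        if abs(u) > 1e6:               # crude divergence guard (assumed cutoff)
            return u, i, False
    return u, n_steps, False

# Large timestep from a poor initial guess: unstable, diverges.
u_div, n_div, ok_div = pseudo_time_march(5.0, dt=0.5, n_steps=100)

# Small timestep throughout: stable but needs many iterations.
u_slow, n_slow, ok_slow = pseudo_time_march(5.0, dt=0.05, n_steps=5000)

# Switched strategy: small dt through the stiff transient, then large dt.
u_mid, n1, _ = pseudo_time_march(5.0, dt=0.05, n_steps=30)
u_fast, n2, ok_fast = pseudo_time_march(u_mid, dt=0.5, n_steps=1000)
```

With these numbers, the large-Δt run diverges, the small-Δt run converges in roughly two hundred iterations, and the switched run converges in a few dozen, which is exactly the behaviour described above: the small timestep buys stability early on, and the larger one buys speed once the solution is near steady state.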