How can deltaT exceed the Courant number limit?
Hi, All foamers,
In the OpenFOAM User Guide, the choice of deltaT depends on the Courant number. I follow this rule and set my deltaT below 0.00005, which works fine with turbDyMFoam. But there seems to be no such limit in Fluent: with the same mesh I can set the "time step size" to 0.005 and still get a reasonable solution, which saves a lot of computational effort. Is there any possible way to reduce the number of time steps needed in OpenFOAM? Thanks!
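To make the gap concrete, here is a rough back-of-the-envelope check of the convective Courant number Co = U·deltaT/dx. The velocity scale and cell size below are hypothetical placeholders, not values from this case:

```python
# Rough Courant-number estimate: Co = U * dt / dx.
# U and dx are ASSUMED illustrative values -- substitute your own
# flow velocity and smallest cell size.

def courant(U, dt, dx):
    """Convective Courant number for one cell."""
    return U * dt / dx

U = 10.0    # characteristic velocity, m/s (assumed)
dx = 1e-3   # smallest cell size, m (assumed)

for dt in (5e-5, 5e-3):
    print(f"dt = {dt:g}  ->  Co = {courant(U, dt, dx):g}")
```

With these numbers, going from deltaT = 5e-5 to 5e-3 multiplies Co by 100, which matches the 50-100x gap discussed below.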
Quote:
Keep Courant number below 1, or even 0.1. 
Quote:
Time Step Size: The time step size is the magnitude of Δt. Since the FLUENT formulation is fully implicit, there is no stability criterion that needs to be met in determining Δt. However, to model transient phenomena properly, it is necessary to set Δt at least one order of magnitude smaller than the smallest time constant in the system being modeled. A good way to judge the choice of Δt is to observe the number of iterations FLUENT needs to converge at each time step. The ideal number of iterations per time step is 10-20. If FLUENT needs substantially more, the time step is too large. If FLUENT needs only a few iterations per time step, Δt may be increased. Frequently a time-dependent problem has a very fast "start-up" transient that decays rapidly. It is thus often wise to choose a conservatively small Δt for the first 5-10 time steps. Δt may then be gradually increased as the calculation proceeds. For time-periodic calculations, you should choose the time step based on the time scale of the periodicity. For a rotor/stator model, for example, you might want 20 time steps between each blade passing. For vortex shedding, you might want 20 steps per period.
There are two reasons for choosing a small time step:
1. stability; 2. accuracy. Do use a time step that is small enough to get a "good" result. A good test is to compare the results obtained with deltaT1 against those with deltaT2 and check that the differences are small enough. This matters especially for transient, turbulent problems, where we care not only about the vortex-shedding cycle but also about ACCURACY. A famous example is the flow around a cylinder, where St is easy to get right but Cd is not.
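The "compare deltaT1 with deltaT2" test can be sketched on a toy problem. This is only an illustration of the idea, using explicit Euler on the ODE dy/dt = -y rather than a CFD run; the equation and step sizes are stand-ins:

```python
# Sketch of a time-step refinement check: integrate dy/dt = -y
# from y(0) = 1 to t = 1 with two step sizes and compare.
import math

def euler(dt, t_end=1.0):
    """Forward-Euler integration of dy/dt = -y."""
    n = round(t_end / dt)   # number of steps
    y = 1.0
    for _ in range(n):
        y += dt * (-y)      # one Euler step
    return y

exact = math.exp(-1.0)
y1 = euler(0.01)            # coarse time step
y2 = euler(0.005)           # refined time step
# If |y1 - y2| is much larger than the accuracy you need,
# the coarser time step is not fine enough.
print(abs(y1 - y2), abs(y2 - exact))
```

Halving the step should roughly halve the error for a first-order scheme; when the two results agree to within the required tolerance, the larger step is acceptable.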
As I have said, Fluent with deltaT = 0.005 already gives a good fit with the experimental results.
Well, for OpenFOAM, stability requires 0.0001, and a good result requires 0.00005. So my question should be put like this: what could be the reason that makes the two solvers perform so differently? A 50 to 100 times gap in deltaT is far more than a question of accuracy.
Thanks for the hints, Alberto.
fvSchemes:

ddtSchemes
{
    default         Euler;
}

gradSchemes
{
    default         Gauss linear;
    grad(p)         Gauss linear;
    grad(U)         Gauss linear;
}

divSchemes
{
    default         none;
    div(phi,U)      Gauss limitedLinearV 1;
    div(phi,k)      Gauss limitedLinear 1;
    div(phi,epsilon) Gauss limitedLinear 1;
    div(phi,omega)  Gauss limitedLinear 1;
    div(phi,R)      Gauss limitedLinear 1;
    div(R)          Gauss linear;
    div(phi,nuTilda) Gauss limitedLinear 1;
    div((nuEff*dev(grad(U).T()))) Gauss linear;
}

laplacianSchemes
{
    default         none;
    laplacian(nuEff,U) Gauss linear corrected;
    laplacian((1|A(U)),p) Gauss linear corrected;
    laplacian(DkEff,k) Gauss linear corrected;
    laplacian(DepsilonEff,epsilon) Gauss linear corrected;
    laplacian(DREff,R) Gauss linear corrected;
    laplacian(DnuTildaEff,nuTilda) Gauss linear corrected;
    laplacian(DomegaEff,omega) Gauss linear corrected;
    laplacian((rho*(1|A(U))),p) Gauss linear corrected;
    laplacian(alphaEff,h) Gauss linear corrected;
    laplacian(rAU,p) Gauss linear corrected;
}

interpolationSchemes
{
    default         linear;
    interpolate(U)  linear;
}

snGradSchemes
{
    default         corrected;
}

fluxRequired
{
    default         no;
    p;
}

fvSolution:

solvers
{
    p PCG
    {
        preconditioner { type DILU; }
        minIter         0;
        maxIter         10000;
        tolerance       1e-10;
        relTol          0;
    };
    pFinal PCG
    {
        preconditioner { type DILU; }
        minIter         0;
        maxIter         10000;
        tolerance       1e-10;
        relTol          0;
    };
    U BiCGStab
    {
        preconditioner { type DILU; }
        minIter         0;
        maxIter         10000;
        tolerance       1e-10;
        relTol          0;
    };
    k BiCGStab
    {
        preconditioner { type DILU; }
        tolerance       1e-10;
        relTol          0;
        minIter         0;
        maxIter         10000;
    };
    epsilon BiCGStab
    {
        preconditioner { type DILU; }
        tolerance       1e-10;
        relTol          0;
        minIter         0;
        maxIter         10000;
    };
    omega BiCGStab
    {
        preconditioner { type DILU; }
        tolerance       1e-10;
        relTol          0;
        minIter         0;
        maxIter         10000;
    };
    R BiCGStab
    {
        preconditioner { type DILU; }
        tolerance       1e-10;
        relTol          0;
        minIter         0;
        maxIter         10000;
    };
    nuTilda BiCGStab
    {
        preconditioner { type DILU; }
        tolerance       1e-10;
        relTol          0;
        minIter         0;
        maxIter         10000;
    };
}

PISO
{
    nCorrectors     4;
    nNonOrthogonalCorrectors 0;
    pRefCell        0;
    pRefValue       0;
}

In OpenFOAM turbDyMFoam, the reported max Co is 0.15 to 0.3 (depending on the rotation). Fluent: unsteady formulation: 1st-order implicit; discretization: second-order upwind. Is there anything else I should post? greet
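As a side note on controlDict: some transient OpenFOAM solvers can adapt deltaT at run time to hold a target Courant number, which removes the need to fix deltaT up front. Whether the turbDyMFoam version used here supports this is an assumption to verify against the solver source; the fragment would look like:

```
// controlDict fragment -- automatic time-step control (if the solver supports it)
adjustTimeStep  yes;      // rescale deltaT each time step
maxCo           0.5;      // target maximum Courant number
maxDeltaT       0.001;    // upper bound on deltaT
```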
Hi,
the schemes are pretty much standard. You might try switching the divergence schemes to "upwind" to see what happens, but this will lower the order of accuracy. Might the difference lie in the algorithm or approach used to represent the rotation (what do you use in FLUENT?)? Best,
I have tried upwind, but I don't think that lets OpenFOAM exceed maxCo = 1. In my case, 0.00005 corresponds to maxCo of about 0.15-0.3, while the 0.005 setting in Fluent would correspond to maxCo above 15, if I may calculate it that way. In Fluent it is the Sliding Mesh Model. greets
Hi,
I think the condition you notice on your time step in OpenFOAM is due to some reason other than the time-integration scheme, to which, strictly speaking, the Courant number refers. Using Euler's scheme you should not have that limitation. This, however, does not mean that you have no limitations on the time step due to other factors in the algorithm (think, for example, of a non-linear source term whose value might become very large: it can compromise the stability of an iterative solver if the time step becomes too big). I would try to understand the differences between the algorithms used in the two codes to explain why you see such a difference. Best,
Thanks Alberto. Please let me know if you have any new ideas.
