Strange high velocity in centrifugal pump simulation
Hi, all:
I'm simulating a single channel of a centrifugal pump's impeller, so I use SRFSimpleFoam to do a steady-state simulation. I referred to the fvSolution and fvSchemes settings in http://www.cfd-online.com/Forums/ope...mparation.html . The operating condition is n = 725 rpm with a flow rate of 3.06 L/s, which corresponds to 0.914 m/s at the inlet. Here is an image of the pump: http://www.cfd-online.com/Forums/mem...5-10-27-09.png
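As a quick sanity check of the numbers quoted above (this is my own arithmetic, not something computed in the thread), the flow rate and inlet velocity together imply the inlet area, and, assuming a circular inlet, an equivalent diameter:

```python
import math

# Cross-check of the quoted operating point: Q = 3.06 L/s with
# U = 0.914 m/s at the inlet implies an inlet area A = Q / U.
Q = 3.06e-3   # flow rate [m^3/s]
U = 0.914     # inlet velocity [m/s]

A = Q / U                         # implied inlet area [m^2]
D = math.sqrt(4.0 * A / math.pi)  # equivalent diameter, assuming a circular inlet
print(A, D)
```

The implied inlet diameter of roughly 65 mm is plausible for a pump of this size, so the two quoted figures are consistent with each other.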
Here are the initial conditions (only the dimensions line of each field file survives in this post; the fields appear to be k, nut, omega, p and U):
k Code:
dimensions [ 0 2 -2 0 0 0 0 ];
nut Code:
dimensions [ 0 2 -1 0 0 0 0 ];
omega Code:
dimensions [ 0 0 -1 0 0 0 0 ];
p Code:
dimensions [ 0 2 -2 0 0 0 0 ];
U Code:
dimensions [ 0 1 -1 0 0 0 0 ];
I use kOmegaSST, but I'm not sure how to set omega correctly. Is there any formula for estimating it?
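A common inlet estimate (standard turbulence-modelling practice, not something given in the thread) is k = 1.5 (U I)² and omega = √k / (Cμ^0.25 l); the turbulence intensity and length scale below are assumed values, only the inlet velocity comes from the post:

```python
import math

# Common inlet estimates for k and omega (standard practice, not from the
# thread); intensity and length scale are assumed values for illustration.
U   = 0.914   # inlet velocity [m/s], from the post
I   = 0.05    # turbulence intensity (assumed 5%)
l   = 0.005   # turbulence length scale [m] (assumed)
Cmu = 0.09    # model constant

k     = 1.5 * (U * I) ** 2                # turbulent kinetic energy [m^2/s^2]
omega = math.sqrt(k) / (Cmu ** 0.25 * l)  # specific dissipation rate [1/s]
print(k, omega)
```

With these assumptions k comes out around 3e-3 m²/s² and omega around 20 1/s; the result is sensitive to the assumed length scale, so it should be adapted to the actual inlet geometry.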
Now, to continue with my simulation: I set a monitor point close to the suction side of the blade outlet to check whether the calculation is steady. However, I found a rather strange phenomenon, an unexpectedly high velocity at the monitor point. Code:
# x -0.09319
(the rest of the probe output is truncated)
fvSchemes
Code:
ddtSchemes
(listing truncated)
fvSolution
Code:
solvers
(listing truncated)
Can anyone give me some advice? I've been struggling for several days :mad:
By the way, the residuals also always stay between 0.3 and 0.5.
Hello, everyone. I tried another way; still, the problem stays the same. I thought it was due to the periodic boundary, so I generated a full impeller mesh (unstructured) with only 3 boundaries: inlet, wall, outlet.
All the settings are the same as before, just without the periodic boundaries; however, the velocity and pressure keep growing and the run finally crashed. I have already checked the mesh and it is fine, so there is no doubt about the geometry (for example, a mesh transformed with a wrong scale). The only remaining suspect may be the boundary conditions; I don't think the schemes used would cause this unphysical result.
Hi,
I believe it is the schemes that cause your divergence. You use linear for the convection terms in the turbulence equations, but these have very high gradients near solid walls, which leads to an unstable system. The SFCD scheme is also not very stable. You would be better off using limitedLinear, or perhaps even upwind, for k and omega. Regards, Tom
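Tom's suggestion would translate to divSchemes entries along these lines (a sketch only; the poster's actual fvSchemes is not shown, and the exact entry names depend on the OpenFOAM version, e.g. SRFSimpleFoam solves the relative velocity, typically named Urel):

```
divSchemes
{
    default         none;
    div(phi,Urel)   Gauss linearUpwind grad(Urel);  // momentum convection
    div(phi,k)      Gauss limitedLinear 1;          // bounded scheme for k
    div(phi,omega)  Gauss limitedLinear 1;          // bounded scheme for omega
}
```

The limitedLinear scheme blends toward upwind where the solution is non-smooth, which keeps k and omega bounded near walls at a modest cost in accuracy.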
Thank you for your reply. I changed the schemes and solution settings to match the mixer tutorial and used k-epsilon instead; now the result is converged. Xianbei
I'm facing another problem :( I want to do an LES calculation with the same mesh. In fact, after computing with RANS, y+ turns out to be < 2.5 around the blade, so the mesh is fine enough for LES. However, when I use SRFPimpleFoam, the Courant number grows very large very quickly at a fixed time-step size (deltaT), and it also diverges if I use Code:
adjustTimeStep yes;
All the boundary conditions are the same as in RANS except the new nuSgs, which is Code:
dimensions [ 0 2 -1 0 0 0 0 ];
PIMPLE
(the rest of the nuSgs file and the PIMPLE sub-dictionary listing are truncated)
Any advice is highly appreciated. Xianbei
Hello Xianbei,
I think using maxCo = 50 for LES may not be accurate enough; you will probably not resolve the smallest time scales. That being said, using only two outer correctors with maxCo = 50 is probably the cause of the divergence. I would suggest reducing your maxCo and/or increasing the number of outer correctors for the PIMPLE solver. You may also want to use residualControl for the convergence within a time step. Regards, Tom
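Put together, Tom's three suggestions touch controlDict and the PIMPLE sub-dictionary of fvSolution; a sketch with purely illustrative values (none of these numbers come from the thread) might look like:

```
// controlDict (illustrative values)
adjustTimeStep  yes;
maxCo           2;        // far below 50, to resolve the small time scales

// fvSolution (illustrative values)
PIMPLE
{
    nOuterCorrectors          5;   // more than the two that diverged
    nCorrectors               2;
    nNonOrthogonalCorrectors  1;

    residualControl            // stop the outer loop early once converged
    {
        p
        {
            tolerance  1e-4;
            relTol     0;
        }
        U
        {
            tolerance  1e-5;
            relTol     0;
        }
    }
}
```

With residualControl set, the solver exits the outer loop as soon as the initial residuals drop below the tolerances, so a generous nOuterCorrectors costs little once the solution is converging within each step.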
Thank you very much. I searched the forum a lot about divergence and found that the usual advice for PIMPLE is indeed more outer correctors. I'll try it and report the result soon. Xianbei
I tried both your methods: 20 outer correctors, residual control of p at 1e-2 and U at 1e-5, and maxCo limited to 5. The calculation is now proceeding well. Thank you very much :) A quick question about the p residual: is 1e-2 acceptable for the residual of p? With this setting, the calculation can converge in 20 steps. Xianbei
Hi Xianbei,
Good to hear it is running. 1e-2 sounds like it is not enough, but you can only tell by comparing the results to experiment or literature. I would use at least 1e-4, but if 1e-2 is accurate enough then it is OK. I think that is up to you to decide. Kind regards, Tom
Yes, it does sound insufficient. I ran another case with 40 outer correctors and p at 1e-3; however, most steps use all 40 correctors, which indicates a tendency not to converge to 1e-3. So I'll keep the 1e-2 case and see whether the result is acceptable (comparing with experiment). Thank you. Xianbei
Looking for help again :) I found an unexpected fact: if I decompose the domain into more parts in order to calculate in parallel, the residual rises. For example, with 4 parts the p residual can drop below 1e-5 (with the GAMG solver and 1 non-orthogonal corrector), while with 16 parts the solution never reaches a residual below even 1e-4. This is quite unacceptable for me: if I want to improve the accuracy, the speed slows down because I can use only 4 processors :(. Is there anything I can do to avoid this? Xianbei
Sorry for bothering you again. I tried many times and found that the residual is indeed influenced by the number of processors used. I use the scotch method to decompose. The conclusion I can draw is that for my case the maximum number of processors is 8 to keep the p residual below 1e-5. However, I would still expect OF to run in parallel on as many processors as possible without accuracy loss. Xianbei
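For reference, the decomposition being described would correspond to a decomposeParDict along these lines (a sketch; only the scotch method and the subdomain count of 8 come from the post):

```
numberOfSubdomains  8;       // the largest count that kept p below 1e-5 here
method              scotch;  // graph-based decomposition, no user geometry input
```

The scotch method needs no further coefficients, which is why the dictionary can be this short.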
Hi Xianbei,
No problem, I will only answer when I have time available, and if someone else can join the discussion and help you, that would be great as well. About the decomposition: I remember a discussion during an OpenFOAM workshop where something similar was shown. That is to say, depending on the number of CPUs used, the instantaneous results for a particular case (LES as well) differed, but the statistical parameters (average forces, standard deviation, fluctuation scales etc.) were all in agreement. I believe the comment was that the solver tolerance (fvSolution) should have been reduced by 1 or 2 orders of magnitude to have a more consistent development of the flow between different decompositions. The mathematical reason is that you solve a slightly different matrix-vector system, and the non-linearity of the equations can amplify this effect. So I can only suggest that you tighten the tolerance on your final sub-iteration to have less difference between runs. Otherwise, you might get lower residuals with 15 CPUs instead of 16. Kind regards, Tom
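Tightening the tolerance on the final sub-iteration, as suggested above, is done in the solvers block of fvSolution; a sketch with illustrative values (the poster's actual settings are not shown):

```
solvers
{
    pFinal
    {
        $p;                // inherit the settings of the p solver entry
        tolerance   1e-8;  // 1-2 orders tighter than the intermediate p solves
        relTol      0;     // solve to the absolute tolerance on the final sweep
    }
}
```

Only the final pressure solve of each time step pays the extra cost, so this is a cheap way to reduce decomposition-dependent drift between runs.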
Thank you for the details. You have helped me a lot; I really appreciate it. Yes, maybe the result won't be affected much; however, only a low enough residual can make one believe that the calculation is accurate enough. Thank you for your suggestion, I'll try it; if no further improvement is seen, I'll use 8 processors (thank God the mesh is not so big :), 0.3 million cells). Xianbei
I found another strange thing, about yPlus. When I use a RANS model, yPlusRAS returns y+ < 5 on the walls, for which I had specified a small first-layer height in the grid. I ran the LES case for another day until the flow was steady (the monitored velocity varies much less), then ran yPlusLES and found that the returned yPlus is much bigger than in the RANS case, almost twice as large!! Have you ever experienced this? Xianbei
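When comparing the two utilities it helps to recall the standard definition y+ = u_tau y / nu with u_tau = sqrt(nu (dU/dy)|wall) (textbook scaling, not something stated in the thread). The numbers below are assumptions for illustration; the point is that if the LES resolves a steeper near-wall velocity gradient than the RANS run, y+ grows on the very same mesh:

```python
import math

# Standard near-wall scaling (textbook definition, not from the thread):
#   u_tau = sqrt(nu * dU/dy |_wall),   y+ = u_tau * y / nu
nu = 1.0e-6   # kinematic viscosity of water [m^2/s] (assumed)
y  = 2.0e-5   # first cell-centre distance from the wall [m] (assumed)

def y_plus(dudy_wall):
    """y+ for a given wall-normal velocity gradient at the wall."""
    u_tau = math.sqrt(nu * dudy_wall)
    return u_tau * y / nu

# A 4x steeper wall gradient (assumed values) doubles u_tau and hence y+:
print(y_plus(2500.0), y_plus(10000.0))
```

So a doubled y+ between yPlusRAS and yPlusLES does not mean the mesh changed; it means the two runs see different wall shear, which is plausible when the LES resolves fluctuations the RANS model averaged away.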
Hi,
I have not performed any real LES cases, just played around a bit, so I cannot say I have seen something similar. Tom