Same case run twice gives different results; what's going wrong?
I ran a cavity case with icoFoam in parallel on 32 cores (i.e. two nodes) twice, but I got different results. By "different" I mean that the residuals in the two log files differ.
log-1: [quoted residual log not preserved]
[second quoted residual log not preserved]
Is there some random behavior in the CPU? Or can we blame the AMG solver for this? (The scotch decomposition method is used.) Any ideas? |
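(For anyone following along: the usual workflow for such a parallel run is decomposePar followed by mpirun with the -parallel flag. This is only a sketch mirroring the post, not the poster's actual job script; the core count and log names are taken from the description above.)
Code:
decomposePar
mpirun -np 32 icoFoam -parallel > log-1 2>&1
# repeat the identical run into a second log to compare the residuals
mpirun -np 32 icoFoam -parallel > log-2 2>&1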
Hi,
it seems you've got rather strange settings in fvSolution. 1000 is the default maximum number of iterations for the linear solvers, and since GAMG can't reach your tolerance within 1000 iterations, the final residual can differ between runs. Wrong GAMG settings can also lead to this situation; try switching to PCG and check whether the behavior is the same. |
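(For reference, a pressure entry in fvSolution switched to PCG might look like the sketch below; the keywords are standard OpenFOAM ones, but the tolerance values are illustrative rather than the poster's actual settings.)
Code:
p
{
    solver          PCG;
    preconditioner  DIC;    // diagonal incomplete-Cholesky, the usual choice for the symmetric p matrix
    tolerance       1e-06;
    relTol          0;
}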
I've never had any success using the PCG solver in parallel; it always blows up.
Using a tighter tolerance (p 1.0e-6, U 1.0e-5), as you can see below, the number of iterations does not exceed the maximum of 1000, yet the difference is still there. Why? One run takes 120 iterations, the other 122. I don't know why there is such random behavior in the solver.
[quoted residual logs from the two runs not preserved]
Edit: Here are the results of using the PCG solver for the pressure (with the same tolerance and relTol: 1.0e-6 and 0.0 for the pressure, 1.0e-5 and 0.0 for the velocity); there is still a difference in the final residual. Note that it blows up at the 14th time step; what you see here is only a copy of the first-step residual log:
[the two quoted first-step residual logs for p are not preserved] |
Well,
In the first part of your post you used the GAMG solver; can you show your settings for it? It interpolates between coarse and fine meshes, and maybe the difference in final residuals comes from this operation. A blow-up of PCG usually means that your initial or boundary conditions do not make sense. Can you describe your case a little and show your fvSchemes, fvSolution, and boundary conditions? For example, with the cavity case from the tutorials (which uses PCG and PBiCG), the residuals between runs are identical. |
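(For context, a typical GAMG pressure-solver entry in fvSolution looks something like the sketch below; the keywords are standard, but the values are placeholders rather than the settings being asked about.)
Code:
p
{
    solver                 GAMG;
    smoother               GaussSeidel;
    nCellsInCoarsestLevel  20;              // stop agglomerating at roughly this many cells
    agglomerator           faceAreaPair;
    cacheAgglomeration     true;
    mergeLevels            1;
    tolerance              1e-06;
    relTol                 0;
}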
It is exactly the same case, copied from the icoFoam cavity tutorial. The only difference is that the original mesh is 20*20 and I refined it to 1000*1000. The fvSchemes settings are also the same. The boundary and initial conditions have all been double-checked with vimdiff; they show no difference.
I also want to add that I have not seen this with my 4-core and 16-core decompositions. It only starts to happen when I try to use 32+ cores. It seems to have nothing to do with the maximum iteration number of 1000, because in a 4-core simulation, if I run twice with tolerance 1e-16 and relTol 0.0, the two log files show the same final residuals. |
OK.
- What decomposition method do you use?
- Does 4 and 16 core decomposition mean that you're running the case on a single node? |
(2) The 4-core run is on my personal workstation; the 16-core run is on a high-performance cluster, and 16 cores on that cluster means it is running on one node. Why not try it yourself on your cluster: you just need to refine the cavity case grid to 1000*1000*1, set the time step to 2e-5 and endTime to 0.001. With 32 cores it will take less than a minute. |
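(A minimal system/decomposeParDict for the 32-core scotch decomposition mentioned earlier would look roughly like this; scotch needs no per-direction coefficients, so nothing else is strictly required.)
Code:
numberOfSubdomains  32;
method              scotch;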
And the case blows up only when run on more than one node? (i.e. the 4- and 16-core variants converge) |
OOPS, can anyone verify this by running a small test on your computer?
Copy the cavity case from the tutorials, refine the mesh to 1000*1000*1, change deltaT to 2e-5 and endTime to 1e-3, keep everything else unchanged, and run icoFoam. I found that even with one core the PCG solver is not able to converge. What have I done wrong? |
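(To reproduce the setup described above, the only edits to the tutorial cavity case should be the cell counts in blockMeshDict and the time settings in controlDict; a sketch follows, with the blockMeshDict location possibly differing between OpenFOAM versions.)
Code:
// blockMeshDict: change the cell counts in the blocks entry from (20 20 1)
blocks
(
    hex (0 1 2 3 4 5 6 7) (1000 1000 1) simpleGrading (1 1 1)
);

// system/controlDict: time settings from the post
deltaT          2e-05;
endTime         1e-03;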
Just a couple of quick notes:
|
[quoted note not preserved]
I don't think I have used that option; I just use "-O2 -no-prec-div", and of course I tried "-O3" as well.
[quoted note not preserved]
PS: post #9 has been edited. Thanks |
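(On the compiler-flag point: -no-prec-div tells the Intel compiler it may use a faster, less precise division, which can change the last bits of floating-point results, so a value-safe build is worth trying if reproducibility matters. Below is a hedged sketch of how that might be expressed in the wmake optimisation rules; the exact file path and variable names depend on the OpenFOAM version and compiler rules in use.)
Code:
# e.g. wmake/rules/linux64Icc/c++Opt   (path and names may differ per version)
c++DBUG     =
c++OPT      = -O2 -fp-model precise    # value-safe floating point instead of -no-prec-div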