Same case run twice I got different results, what's going wrong? |
April 7, 2014, 17:51 | #1
Senior Member
Daniel WEI (老魏)
Join Date: Mar 2009
Location: Beijing, China
Posts: 689
Blog Entries: 9
Rep Power: 21
I ran a cavity case with icoFoam in parallel on 32 cores (i.e. two nodes) twice, but I got different results. By "different" I mean the log files show different residuals.
[log-1 and log-2 residual excerpts omitted]
Is there some random behavior in the CPU? Or can we blame the AMG solver for this? (The scotch partitioning method is used.) Any ideas?
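For reference, a minimal sketch of the decomposition settings in use; only the method and core count were stated above, so everything else in the dictionary is assumed to be the usual defaults:
Code:
    // system/decomposeParDict (sketch; only method and core count were stated)
    numberOfSubdomains  32;
    method              scotch;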
__________________
~ Daniel WEI ------------- Boeing Research & Technology - China Beijing, China Last edited by lakeat; April 8, 2014 at 15:13.
April 8, 2014, 02:03 | #2
Senior Member
Hi,
it seems you've got rather strange settings in fvSolution. 1000 is the default maximum number of iterations for the linear system solvers, and since GAMG can't satisfy your tolerance settings within 1000 iterations, the final residual can differ between runs. Wrong GAMG settings can also lead to this situation; try switching to PCG and check whether the behavior is the same.
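A minimal sketch of what such a change to fvSolution could look like; maxIter is the optional keyword that overrides the 1000-iteration default:
Code:
    p
    {
        solver          PCG;     // instead of GAMG, for comparison
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          0;
        maxIter         5000;    // optional; the default is 1000
    }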
April 8, 2014, 10:05 | #3
Senior Member
Daniel WEI (老魏)
Join Date: Mar 2009
Location: Beijing, China
Posts: 689
Blog Entries: 9
Rep Power: 21
I've never had success using the PCG solver in parallel; it always blows up.
Using smaller tolerances (p 1.0e-6, U 1.0e-5), as you can see below, the number of iterations does not exceed the maximum of 1000, yet the difference is still there. Why? One run iterates 120 times, the other 122. I don't know why there is such random behavior in the solver. [residual logs omitted]
Edit: Here are the results of using the PCG solver for the pressure (with the same tolerance and relTol: 1.0e-6 and 0.0 for the pressure, 1.0e-5 and 0.0 for the velocity); there is still a difference in the final residual. Note that it blows up at the 14th time step; what you see here is only a copy of the first-step residual log: [residual logs omitted]
GAMG settings:
Code:
    p
    {
        solver                GAMG;
        tolerance             1e-6;
        relTol                0.0;
        smoother              GaussSeidel;
        nPreSweeps            0;
        nPostSweeps           2;
        cacheAgglomeration    on;
        agglomerator          faceAreaPair;
        nCellsInCoarsestLevel 20;
        mergeLevels           1;
    }

    pFinal
    {
        $p;
        smoother    DICGaussSeidel;
        tolerance   1e-6;
        relTol      0;
    }

    U
    {
        solver      smoothSolver;
        smoother    GaussSeidel;
        tolerance   1e-5;
        relTol      0;
    }
PCG settings:
Code:
    p
    {
        solver          PCG;
        preconditioner  DIC;
        tolerance       1e-06;
        relTol          0.0;
    }

    pFinal
    {
        $p;
        relTol      0;
    }

    "(U|k|B|nuTilda)"
    {
        solver      smoothSolver;
        smoother    symGaussSeidel;
        tolerance   1e-05;
        relTol      0;
    }
__________________
~ Daniel WEI ------------- Boeing Research & Technology - China Beijing, China
April 8, 2014, 10:40 | #4
Senior Member
Well,
In the first part of your answer you used the GAMG solver; can you show your settings for it? It interpolates between coarse and fine meshes, and maybe the difference in final residuals comes from this operation. Blow-up of PCG usually means that your initial or boundary conditions do not make sense. Can you describe your case a little: show your fvSchemes, fvSolution, and boundary conditions? For example, with the cavity case from the tutorials (which uses PCG and PBiCG), the residuals between runs are identical.
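A sketch of the kind of check being suggested, assuming a prepared cavity case and icoFoam's standard log format:
Code:
    # run the identical case twice and compare the pressure residual lines
    icoFoam > log.run1 2>&1
    icoFoam > log.run2 2>&1
    grep "Solving for p" log.run1 > residuals.run1
    grep "Solving for p" log.run2 > residuals.run2
    diff residuals.run1 residuals.run2 && echo "runs are identical"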
April 8, 2014, 10:54 | #5
Senior Member
Daniel WEI (老魏)
Join Date: Mar 2009
Location: Beijing, China
Posts: 689
Blog Entries: 9
Rep Power: 21
It is exactly the same case, copied from the icoFoam cavity tutorial. The only difference is that the original mesh is 20*20 and I refined it to 1000*1000. The fvSchemes settings are also the same. Boundary conditions and initial conditions have all been double-checked with vimdiff; they show no difference.
I also want to add that I have not seen this with my 4-core and 16-core decompositions. It only starts to happen when I use 32+ cores. It seems to have nothing to do with the maximum iteration number of 1000, because in a 4-core simulation, if I run twice with tolerance 1e-16 and relTol 0.0, the two log files show the same final residuals.
__________________
~ Daniel WEI ------------- Boeing Research & Technology - China Beijing, China
April 8, 2014, 11:02 | #6
Senior Member
OK.
- What decomposition method do you use?
- Do the 4- and 16-core decompositions mean that you're running the case on a single node?
April 8, 2014, 11:06 | #7
Senior Member
Daniel WEI (老魏)
Join Date: Mar 2009
Location: Beijing, China
Posts: 689
Blog Entries: 9
Rep Power: 21
(1) The decomposition method is scotch, as mentioned in my first post.
(2) 4 cores: that is on my personal workstation. The 16-core run is on a high-performance cluster, and using 16 cores there means it runs on one node. Why not try it yourself on your cluster? You just need to refine the cavity grid to 1000*1000*1, set the time step to 2e-5, and set endTime to 0.001. With 32 cores it takes less than a minute.
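A sketch of the two edits for anyone reproducing this; the cavity tutorial's geometry and boundary setup are assumed unchanged (in OpenFOAM versions of that era, blockMeshDict lives under constant/polyMesh):
Code:
    // constant/polyMesh/blockMeshDict: raise the cell counts from (20 20 1)
    blocks
    (
        hex (0 1 2 3 4 5 6 7) (1000 1000 1) simpleGrading (1 1 1)
    );

    // system/controlDict
    deltaT          2e-05;
    endTime         0.001;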
__________________
~ Daniel WEI ------------- Boeing Research & Technology - China Beijing, China
April 8, 2014, 11:11 | #8
Senior Member
And the case blows up only when run on more than one node? (i.e. the 4- and 16-core variants converge).
April 8, 2014, 11:14 | #9
Senior Member
Daniel WEI (老魏)
Join Date: Mar 2009
Location: Beijing, China
Posts: 689
Blog Entries: 9
Rep Power: 21
Oops. Can anyone verify this by running a small test on your computer?
Copy the cavity case from the tutorials, refine the mesh to 1000*1000*1, change deltaT to 2e-5 and endTime to 1e-3, and keep everything else unchanged. Then run icoFoam. I found that even with one core, the PCG solver is not able to converge. What have I done wrong?
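A sketch of the test sequence; the tutorial path follows the usual $FOAM_TUTORIALS layout and may differ between versions:
Code:
    # copy the tutorial case and rebuild the finer mesh (single core)
    cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity testCavity
    cd testCavity
    # edit blockMeshDict to (1000 1000 1) and controlDict to
    # deltaT 2e-5, endTime 1e-3 as described above, then:
    blockMesh
    icoFoam > log.icoFoam 2>&1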
__________________
~ Daniel WEI ------------- Boeing Research & Technology - China Beijing, China Last edited by lakeat; April 8, 2014 at 15:09.
April 8, 2014, 14:52 | #10
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Rep Power: 128
Just a couple of quick notes: [the notes themselves were not preserved in this copy of the thread]
April 8, 2014, 15:02 | #11
Senior Member
Daniel WEI (老魏)
Join Date: Mar 2009
Location: Beijing, China
Posts: 689
Blog Entries: 9
Rep Power: 21
I don't think I have used that option; I just use "-O2 -no-prec-div", and of course I tried "-O3" as well.
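For anyone checking which optimization flags their own Intel-compiler build uses, a sketch of where to look; the exact rules directory name depends on the platform and compiler:
Code:
    # wmake compile rules for an Icc build (path assumed for 64-bit Linux)
    cat $WM_PROJECT_DIR/wmake/rules/linux64Icc/c++Opt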
PS: post #9 has been edited. Thanks
__________________
~ Daniel WEI ------------- Boeing Research & Technology - China Beijing, China