
January 13, 2010, 18:25 

#21 
Member
Patricio Bohorquez
Join Date: Mar 2009
Location: Jaén, Spain
Posts: 95
Rep Power: 10 
First, in almost all of our simulations the computational time of each time step is dominated by the solution of the pressure equation; it is the most expensive operation performed at every time step. Therefore, I feel that its speed-up fixes the scaling of the global simulation.
Try the parallel simulation with the same value of nCorrectors as in the serial one, and check whether the number of pressure iterations is the same at each time step in the serial and parallel cases. If it is, you can plot the speed-up curve with confidence. Finally, 0.0026 seconds is more than enough: I usually run just 3 time steps and take the ClockTime of the 2nd time step, so that I/O time is not included.
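A minimal sketch of this measurement procedure, assuming the runs were logged to log.serial and log.parallel (file names chosen here only for illustration) and the usual OpenFOAM log lines ("Time = ...", "Solving for p, ... No Iterations N", "ClockTime = N s"):

```python
import re

def per_step_stats(log_path):
    """Collect, for each time step, the total pressure iterations and the cumulative ClockTime."""
    steps = []
    for line in open(log_path):
        if line.startswith("Time = "):
            steps.append({"p_iters": 0, "clock": None})
        m = re.search(r"Solving for p,.*No Iterations (\d+)", line)
        if m and steps:
            steps[-1]["p_iters"] += int(m.group(1))
        m = re.search(r"ClockTime = ([\d.]+) s", line)
        if m and steps:
            steps[-1]["clock"] = float(m.group(1))
    return steps

serial = per_step_stats("log.serial")      # assumed file names
parallel = per_step_stats("log.parallel")

# The pressure iteration counts should match step by step for a fair speed-up comparison.
for i, (s, p) in enumerate(zip(serial, parallel)):
    print(f"step {i}: p iterations  serial={s['p_iters']}  parallel={p['p_iters']}")

# Speed-up from the 2nd time step only; ClockTime is cumulative, so take the increment.
dt_serial = serial[1]["clock"] - serial[0]["clock"]
dt_parallel = parallel[1]["clock"] - parallel[0]["clock"]
print("speed-up (2nd time step):", dt_serial / dt_parallel)
```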

January 14, 2010, 03:36 

#22 
New Member
Steve Colde
Join Date: Jan 2010
Posts: 15
Rep Power: 9 
I still have doubts about the speed-up.
Current setting: 1000x1000 mesh, deltaT 0.00005, decomposition (16 8 1) for 128 cores with the simple method, and nCorrectors = 10 in fvSolution. With this 128-core setting InfiniBand should show a big difference, as others have reported, but I cannot achieve the big difference stated in your paper. I believe it is a test-case setup issue. BTW, what endTime did you set?
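For anyone reproducing this, a quick sanity check of the numbers quoted above (the figures are taken from this post; the snippet itself is only illustrative):

```python
# Simple decomposition (16 8 1) on 128 cores, 1000x1000 mesh (values quoted above).
nx, ny, nz = 16, 8, 1
n_cores = 128
cells = 1000 * 1000

assert nx * ny * nz == n_cores, "simpleCoeffs n must multiply out to numberOfSubdomains"
print("cells per core:", cells // n_cores)   # 7812
```

At well under 10k cells per core the run is likely communication-bound, which by itself can flatten the speed-up curve regardless of the interconnect.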

January 14, 2010, 07:32 

#23 
Member
Patricio Bohorquez
Join Date: Mar 2009
Location: Jaén, Spain
Posts: 95
Rep Power: 10 
Are you using a shared-memory or a distributed-memory machine?
I have reported successful results on an HP Superdome Itanium II (<= 32 cores) using Jasak's case, the DROPLET SPLASH SCALING TEST, which is set up to help validate the parallel performance of OpenFOAM (see its README file). Hrv's results, up to 4 cores, are reported in "Preconditioned Linear Solvers for Large Eddy Simulation" (Jasak, 2007).
On the other hand, I was unable to get a speed-up greater than 16 for the 3D cavity with 4M cells on 8 quad-core AMD Opteron 8382 processors with shared memory (TYAN S4985G3NRE Thunder n4250QE; 64 GB RAM). I did not check Jasak's case there, for the following reason: to isolate the cause of the problem I ran a very simple simulation with a smaller mesh (100K cells). Decompose the case into 4 domains (2 2 1) and run the simulation on a single quad-core (you can select the socket with "numactl --cpubind=... --membind=... script.sh", where "script.sh" contains "mpirun -np 4 -hostfile hostdell icoFoam -case cavity -parallel"); then run the same simulation without CPU affinity. As far as I understood, your nodes have 2 quad-cores, so in the latter case you must verify that 2 processes are running on each quad-core. Do you get the same ClockTime for both runs?
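A minimal sketch of that comparison, assuming the pinned and unpinned runs were logged to log.pinned and log.nopin (names chosen here for illustration), using the same "ClockTime =" lines as before:

```python
import re

def final_clock_time(log_path):
    """Return the last ClockTime (wall-clock seconds) reported in an OpenFOAM solver log."""
    clock = None
    for line in open(log_path):
        m = re.search(r"ClockTime = ([\d.]+) s", line)
        if m:
            clock = float(m.group(1))
    return clock

pinned = final_clock_time("log.pinned")   # run launched under numactl
nopin = final_clock_time("log.nopin")     # same run without CPU affinity
print(f"pinned: {pinned} s, unpinned: {nopin} s, ratio: {nopin / pinned:.2f}")
```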

January 14, 2010, 21:17 

#24 
New Member
Steve Colde
Join Date: Jan 2010
Posts: 15
Rep Power: 9 
They are standard rack servers: 2 CPUs per node, 4 cores per CPU.
I have both InfiniBand and GigE. When I run OpenFOAM over InfiniBand, OpenFOAM always generates heavy traffic over GigE at the same time, even more than over IB, and this prevents InfiniBand from reaching good performance. Why does OpenFOAM work like this? How can I make OpenFOAM run over IB only?

January 15, 2010, 05:29 

#25 
Member
Patricio Bohorquez
Join Date: Mar 2009
Location: Jaén, Spain
Posts: 95
Rep Power: 10 
The communication libraries are selected by $WM_MPLIB. Its value is set in OpenFOAM/OpenFOAM-1.6.x/etc/bashrc and can be tuned in OpenFOAM/OpenFOAM-1.6.x/etc/settings.sh.
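A trivially small check (illustrative only) of which MPI layer the currently sourced OpenFOAM environment is configured for:

```python
import os

# WM_MPLIB is exported by OpenFOAM's etc/bashrc once the environment is sourced,
# e.g. OPENMPI for the bundled Open MPI build in OpenFOAM-1.6.x.
print(os.environ.get("WM_MPLIB", "WM_MPLIB not set -- OpenFOAM environment not sourced?"))
```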


October 27, 2012, 08:21 

#26 
New Member
Ozgur Kirlangic
Join Date: May 2009
Location: Istanbul
Posts: 16
Rep Power: 10 
Hello friends,
I have a question about the scaling tests and I hope someone can give me a hint. I made a simple test using icoFoam on the cavity tutorial: 1) serial, 2) parallel with 2 procs, 3) parallel with 4 procs. The problem is that the number of iterations reported in the output is not the same in the three cases. In a large-scale problem with a long simulation duration, this difference in iteration counts may lead to a large difference in the total floating point operations per core, and it may then be difficult to compare different decompositions in a scaling study. The question is: is this a bug, or is there a way to fix the number of iterations while keeping the same level of accuracy?
Ozgur

Serial iterations:
DILUPBiCG: Solving for Ux, Initial residual = 1, Final residual = 2.96338e-06, No Iterations 8
DILUPBiCG: Solving for Uy, Initial residual = 0, Final residual = 0, No Iterations 0
DICPCG: Solving for p, Initial residual = 1, Final residual = 7.55402e-07, No Iterations 35
DICPCG: Solving for p, Initial residual = 0.523591, Final residual = 9.72352e-07, No Iterations 34
DILUPBiCG: Solving for Ux, Initial residual = 0.148584, Final residual = 7.15711e-06, No Iterations 6
DILUPBiCG: Solving for Uy, Initial residual = 0.256618, Final residual = 8.94127e-06, No Iterations 6
DICPCG: Solving for p, Initial residual = 0.379232, Final residual = 3.38648e-07, No Iterations 34
DICPCG: Solving for p, Initial residual = 0.286937, Final residual = 5.99637e-07, No Iterations 33
DILUPBiCG: Solving for Ux, Initial residual = 0.0448669, Final residual = 2.39894e-06, No Iterations 6
DILUPBiCG: Solving for Uy, Initial residual = 0.0782408, Final residual = 1.45948e-06, No Iterations 7
DICPCG: Solving for p, Initial residual = 0.109591, Final residual = 5.81093e-07, No Iterations 32

Parallel iterations (two procs):
DILUPBiCG: Solving for Ux, Initial residual = 1, Final residual = 4.73711e-06, No Iterations 10
DILUPBiCG: Solving for Uy, Initial residual = 0, Final residual = 0, No Iterations 0
DICPCG: Solving for p, Initial residual = 1, Final residual = 5.66841e-07, No Iterations 40
DICPCG: Solving for p, Initial residual = 0.523592, Final residual = 8.51439e-07, No Iterations 39
DILUPBiCG: Solving for Ux, Initial residual = 0.148584, Final residual = 6.18044e-06, No Iterations 8
DILUPBiCG: Solving for Uy, Initial residual = 0.256618, Final residual = 5.05961e-06, No Iterations 8
DICPCG: Solving for p, Initial residual = 0.379233, Final residual = 4.79055e-07, No Iterations 39
DICPCG: Solving for p, Initial residual = 0.286932, Final residual = 8.27536e-07, No Iterations 38
DILUPBiCG: Solving for Ux, Initial residual = 0.0448669, Final residual = 2.78802e-06, No Iterations 8
DILUPBiCG: Solving for Uy, Initial residual = 0.078239, Final residual = 5.2207e-06, No Iterations 7
DICPCG: Solving for p, Initial residual = 0.109579, Final residual = 7.5002e-07, No Iterations 37

Parallel iterations (four procs):
DILUPBiCG: Solving for Ux, Initial residual = 1, Final residual = 8.58781e-06, No Iterations 10
DILUPBiCG: Solving for Uy, Initial residual = 0, Final residual = 0, No Iterations 0
DICPCG: Solving for p, Initial residual = 1, Final residual = 6.80001e-07, No Iterations 42
DICPCG: Solving for p, Initial residual = 0.523592, Final residual = 7.57882e-07, No Iterations 41
DILUPBiCG: Solving for Ux, Initial residual = 0.148584, Final residual = 5.92338e-06, No Iterations 9
DILUPBiCG: Solving for Uy, Initial residual = 0.256618, Final residual = 2.9563e-06, No Iterations 10
DICPCG: Solving for p, Initial residual = 0.379253, Final residual = 6.35855e-07, No Iterations 41
DICPCG: Solving for p, Initial residual = 0.28695, Final residual = 3.74664e-07, No Iterations 41
DILUPBiCG: Solving for Ux, Initial residual = 0.0448671, Final residual = 9.45275e-06, No Iterations 8
DILUPBiCG: Solving for Uy, Initial residual = 0.0782388, Final residual = 9.94066e-06, No Iterations 8
DICPCG: Solving for p, Initial residual = 0.109576, Final residual = 4.91478e-07, No Iterations 40
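A small companion sketch (again only illustrative, assuming log file names log.serial, log.np2 and log.np4 and the log format above) that totals the pressure-solver iterations per run, to quantify how much extra linear-solver work each decomposition does:

```python
import re

def total_p_iterations(log_path):
    """Sum the 'No Iterations' counts of all pressure solves in an OpenFOAM log."""
    total = 0
    for line in open(log_path):
        m = re.search(r"Solving for p,.*No Iterations (\d+)", line)
        if m:
            total += int(m.group(1))
    return total

# Assumed file names: the same case run serially and with 2 and 4 ranks.
for name in ("log.serial", "log.np2", "log.np4"):
    print(name, total_p_iterations(name))
```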

October 30, 2012, 06:29 

#27 
New Member
Zoltan Hernadi
Join Date: Jul 2010
Posts: 12
Rep Power: 8 
