CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM (https://www.cfd-online.com/Forums/openfoam/)
-   -   How to test openfoam benchmark ? (https://www.cfd-online.com/Forums/openfoam/187528-how-test-openfoam-benchmark.html)

jbmvictory May 9, 2017 06:12

How to test openfoam benchmark ?
 
Could someone tell me how to run an OpenFOAM parallel benchmark on a Linux cluster? Which case is suitable for it: the cavity case with icoFoam or the damBreak case with interFoam? When I benchmark OpenFOAM on Linux with the interFoam solver, I change the cells to this:

blocks
(
hex (0 1 5 4 12 13 17 16) (92 32 2) simpleGrading (1 1 1)
hex (2 3 7 6 14 15 19 18) (76 32 2) simpleGrading (1 1 1)
hex (4 5 9 8 16 17 21 20) (92 168 2) simpleGrading (1 1 1)
hex (5 6 10 9 17 18 22 21) (16 168 2) simpleGrading (1 1 1)
hex (6 7 11 10 18 19 23 22) (76 168 2) simpleGrading (1 1 1)
);
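
As a quick sanity check, the total cell count of the block definitions above can be summed up with a short Python sketch (a hypothetical helper for illustration, not part of any OpenFOAM tooling; each hex block contributes nx * ny * nz cells):

```python
# Cell counts (nx, ny, nz) copied from the blockMeshDict blocks above.
blocks = [
    (92, 32, 2),
    (76, 32, 2),
    (92, 168, 2),
    (16, 168, 2),
    (76, 168, 2),
]

# Each hex block contributes nx * ny * nz cells.
total_cells = sum(nx * ny * nz for nx, ny, nz in blocks)
print(total_cells)        # 72576 cells in total
print(total_cells // 56)  # 1296 cells per core when run on 56 cores
```

At roughly 1,300 cells per core on 56 cores, inter-process communication can easily dominate the actual computation, which fits the slowdown described below.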

But when I test with 56 cores, the run time is shortest.
When I keep increasing the number of CPU cores, the run time gets longer!
I want to know the right way to benchmark OpenFOAM so that I can obtain results like these:
http://www.hpcadvisorycouncil.com/pd...e_Analysis.pdf

Taataa May 16, 2017 00:36

I have found a paper that explains the process in detail:
http://www.dtic.mil/get-tr-doc/pdf?AD=ADA612337

You can try to set up similar cases.

Tobi May 16, 2017 19:13

A scaling benchmark should be done with a large case. Imagine you have a test case with 20,000 cells and an ordinary single-phase solver. There is simply not much to compute, and using 10 cores would slow the whole simulation down because each processor would spend its time waiting for the others' data. As a rule of thumb, you should have roughly 10,000 to 30,000 cells per CPU core. However, it also depends strongly on the problem you want to solve and the corresponding model. Other factors that affect speed:
  • Matrix renumbering
  • Decomposition method (number of shared faces)
  • Complexity of the model being solved
  • Speed of data transfer between the cores; two computers (nodes) that have to communicate with each other will always be slower than having everything on one machine
  • Linear solver used (PCG, GAMG, ...)
  • ...
Keep in mind that decomposition will influence your numerical results if your flow is driven by natural convection. The processor boundary conditions behave somewhat like walls, and the flux calculation depends on that. Personally, I would run a benchmark with a case of more than 2,000,000 cells.
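
The rule of thumb above can be turned into a small helper. This is a minimal Python sketch for illustration only (the function names are invented here; the 10,000-30,000 cells-per-core range and the 2,000,000-cell case size come from the post):

```python
def suggested_core_range(n_cells, min_cells_per_core=10_000, max_cells_per_core=30_000):
    """Return (min_cores, max_cores) for a strong-scaling benchmark,
    using the rule of thumb of 10,000-30,000 cells per core."""
    min_cores = max(1, n_cells // max_cells_per_core)
    max_cores = max(1, n_cells // min_cells_per_core)
    return min_cores, max_cores

def parallel_efficiency(t_serial, t_parallel, n_cores):
    """Strong-scaling efficiency: speedup t_serial / t_parallel
    divided by the number of cores (1.0 means ideal scaling)."""
    return t_serial / (n_cores * t_parallel)

# For the > 2,000,000-cell case suggested in the post:
print(suggested_core_range(2_000_000))  # (66, 200)
```

With timings measured at each core count, parallel_efficiency shows where adding cores stops paying off: once efficiency drops well below 1.0, communication overhead is eating the speedup.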

jbmvictory August 11, 2017 00:32

Thanks a lot, I will read it carefully to find a solution.

jbmvictory August 11, 2017 00:43

Quote:

Originally Posted by Tobi (Post 649148)
A scaling benchmark should be done with a large case. Imagine you have a test case with 20,000 cells and an ordinary single-phase solver. There is simply not much to compute, and using 10 cores would slow the whole simulation down because each processor would spend its time waiting for the others' data. As a rule of thumb, you should have roughly 10,000 to 30,000 cells per CPU core. However, it also depends strongly on the problem you want to solve and the corresponding model. Other factors that affect speed:
  • Matrix renumbering
  • Decomposition method (number of shared faces)
  • Complexity of the model being solved
  • Speed of data transfer between the cores; two computers (nodes) that have to communicate with each other will always be slower than having everything on one machine
  • Linear solver used (PCG, GAMG, ...)
  • ...
Keep in mind that decomposition will influence your numerical results if your flow is driven by natural convection. The processor boundary conditions behave somewhat like walls, and the flux calculation depends on that. Personally, I would run a benchmark with a case of more than 2,000,000 cells.


Thanks a lot, I will find a proper case for benchmarking OpenFOAM.

jbmvictory August 11, 2017 00:45

Quote:

Originally Posted by Taataa (Post 649008)
I have found a paper that explains the process in detail:
http://www.dtic.mil/get-tr-doc/pdf?AD=ADA612337

You can try to set up similar cases.

Thanks a lot, I will read it carefully to find a solution.


All times are GMT -4. The time now is 00:23.