
CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (http://www.cfd-online.com/Forums/openfoam-solving/)
-   -   OpenFoam and OpenMPI (http://www.cfd-online.com/Forums/openfoam-solving/100121-openfoam-openmpi.html)

pablodecastillo April 20, 2012 14:04

OpenFoam and OpenMPI
 
Hello,

I am testing a machine with 64 (2.2 GHz) cores (16x4); my case is set up to run on 8 cores.

If I send 1 job (partitioned into 8 cores), the job needs over 1000 seconds to compute 1 second of simulation.

If I send 4 jobs (each with 8 cores), every job needs 2500 seconds per second of simulation. RAM usage is only 10%.

If I send 6 jobs (each with 8 cores), every job needs 4000 seconds per second of simulation. RAM usage is only 15%.

If I send 8 jobs (64 cores in total), it is really, really slow (20% RAM).

Can anybody give me a clue about what is happening?
This is Ubuntu 11.10 with the binary version of OpenFOAM 2.1.

Basically, 1 job in parallel works perfectly, but performance drops as soon as I run more than 1 job.

Pablo

pablodecastillo April 22, 2012 12:52

It seems that updating to OpenMPI 1.5.3 improves things, but not enough. Any ideas?

olivierG April 23, 2012 05:31

Hello,

I am not sure I understand correctly:
- you have one machine (not a cluster) with 64 cores: 4 CPUs of 16 cores each?
If that is the case, you may be memory-bandwidth limited: try decomposing your case into fewer than 8 cores (4 or 2) or more (16), and try again with 4/6/8 jobs.
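For reference, the subdomain count is set in system/decomposeParDict before running decomposePar; a minimal sketch for a 4-core decomposition (the method and values here are illustrative, not from this thread):

Code:

// system/decomposeParDict (illustrative 4-core decomposition)
numberOfSubdomains 4;

method          scotch;   // lets Scotch balance the partitions automatically

After editing, re-run decomposePar and launch with mpirun -np 4.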

regards,
olivier

wyldckat April 23, 2012 05:44

Greetings to all!

I was going to write about processor affinity, but this seems to have already been discussed in Pablo's other thread: http://www.cfd-online.com/Forums/har...arameters.html

Best regards,
Bruno

pablodecastillo April 23, 2012 07:28

Thanks Bruno and Olivier,

In the end it is a problem with the machine's architecture: it is a 64-core AMD Opteron (4 sockets with 16 cores each) that shares one floating-point unit between every 2 cores, so it has only 32 FPUs.

This means that for numerical calculations it behaves like 32 cores; beyond that, performance goes down.
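On this kind of shared-FPU architecture, one workaround is to pin MPI ranks so that no two ranks share a module. A hypothetical sketch using Open MPI's binding options (check your mpirun man page; the solver name and the assumption that each 2-core module maps to consecutive core IDs are illustrative):

Code:

# give each of 8 ranks a whole 2-core module to itself,
# so no two ranks compete for the same FPU
mpirun -np 8 --bind-to-core -cpus-per-proc 2 simpleFoam -parallel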

wyldckat April 23, 2012 07:48

Quote:

Originally Posted by pablodecastillo (Post 356373)
It means that for numerical calculations it is like 32 cores, if more the performance is going down.

With proper optimization options when building OpenFOAM and OpenMPI, it might be possible to overcome or at least minimize that issue! The simplest way is to use GCC 4.6, preferably one of the latest releases, 4.6.2 or 4.6.3.

pablodecastillo April 23, 2012 08:30

Hi Bruno,

Can you share those optimization options for AMD?
My OpenFOAM was compiled with GCC 4.6.1 with default options.

Pablo

wyldckat April 23, 2012 16:00

Hi Pablo,

For the 6200 series (I saw this post of yours), see the GCC table here: http://developer.amd.com/Assets/Comp...f-62004200.pdf
Caution: do not use "-ffast-math".

From what I can see, it would be best to use GCC 4.7.0.
(edit: I was probably thinking of the previous generation of AMD when I wrote 4.6.3...)

The files that need modifications are:
Code:

wmake/rules/linux64Gcc/cOpt
wmake/rules/linux64Gcc/c++Opt

Add flags to the lines that start with "cOPT" and "c++OPT", respectively.
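As an illustration only (verify the exact flags against AMD's PDF for your model), the modified line might look something like this; -march=bdver1 is GCC 4.6+'s target for the Bulldozer family:

Code:

# wmake/rules/linux64Gcc/c++Opt (illustrative sketch, not a tested recommendation)
c++OPT = -O3 -march=bdver1 -mprefer-avx128 -ftree-vectorize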

As for installing GCC 4.7.0... it depends on the Linux distribution you have, because some already provide it; others will require a custom build.

If your gcc and g++ binaries then have different names (e.g. gcc47), see post #2 here for how to tweak OpenFOAM to use your version: http://www.cfd-online.com/Forums/ope...tml#post278809
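For example (a sketch based on that approach, assuming the hypothetical binary names gcc47/g++47), the compiler names can be changed directly in the wmake rules:

Code:

# wmake/rules/linux64Gcc/c    ->  cc = gcc47 -m64
# wmake/rules/linux64Gcc/c++  ->  CC = g++47 -m64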

Best regards,
Bruno

pablodecastillo April 23, 2012 17:02

Hi Bruno,

This afternoon I added c++OPT = -O3 -mprefer-avx128 -ftree-vectorize -ffast-math (and the same for cOpt).
I got 20 to 25% better speed; this was with GCC 4.6.

Tomorrow I will try with 4.7, as you suggest.

Why is -ffast-math not a good idea, if the main trouble with these machines is that there is only one FPU for every 2 cores?

Pablo

wyldckat April 23, 2012 17:24

Quote:

Originally Posted by pablodecastillo (Post 356500)
Why ffast-math is not a good idea, if the main trouble with this machines is that there is only one FPU for 2 cores?

Like the document states:
Quote:

Enable faster, less precise math operations
And quoting from gcc's online manual: http://gcc.gnu.org/onlinedocs/gcc-4....e-Options.html
Quote:

[...] it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.

pablodecastillo April 23, 2012 17:31

Hi Bruno,

It seems that -mprefer-avx128 is the main factor in the speed improvement. Did you get improved speed with 4.7?

Thanks

wyldckat April 24, 2012 04:17

Hi Pablo,

I don't have access to one of the latest AMD CPUs, so I can't test this particular speedup. All I know is that GCC 4.7 has improved support for this (new) generation of AMD CPUs, and the only other compiler that should support them is Open64.

Best regards,
Bruno

pablodecastillo April 24, 2012 04:27

Any idea how to compile OpenFOAM with Open64?

wyldckat April 24, 2012 04:33

Quote:

Originally Posted by pablodecastillo (Post 356610)
Any idea how to compile OpenFOAM with Open64?

I've never tested it: Open64 is somewhat of an experimental compiler, and OpenFOAM is highly demanding in terms of C++ standards, so I've never tried the combination.

For comparison, the Intel C++ Compiler (ICC) requires OpenFOAM to carry some modified templates, adjusted just for ICC, because ICC is unable to do everything that GCC does. By that measure, it's best to stay with GCC.

akidess June 14, 2012 12:15

Testing on an AMD Opteron(tm) 6134 processor with interFoam and the damBreakFine case, I only get a negligible speedup using GCC 4.7 compared to GCC 4.4.6! This is with the stock build options (except for the compiler executable), but I do want to test the compiler flags suggested above.

pablodecastillo June 14, 2012 13:28

Hello,

There is a document from AMD on HPC computing: to really improve performance, you must compile with the recommended flags and modify the BIOS settings.

akidess June 14, 2012 14:05

Can you elaborate?

pablodecastillo June 14, 2012 14:22

If you send me an email, I can send you the AMD paper.

wyldckat June 14, 2012 15:54

Greetings to all!

I googled for "amd hpc gcc flags" (without the quotes) and the first hit was a very interesting tutorial: http://developer.amd.com/documentati...anceGains.aspx

As for the 6100 Opteron series, looks like this is the proposed compiler spec cheat-sheet: http://developer.amd.com/Assets/Comp...f-61004100.pdf
And don't forget: do NOT use ICC for AMD... ;)

Official (shortcut) page for the series: http://developer.amd.com/Magny-Cours ;)

Best regards,
Bruno

akidess June 15, 2012 03:50

Bruno, I followed this guide: http://developer.amd.com/assets/AMDGCCQuickRef.pdf

I think it's basically the same as the one you posted, but older. On a single-core damBreakFine run, using the flags suggested there (-march=amdfam10 -mabm -msse4a), ExecutionTime was reduced by 10% (compared to GCC 4.7 without the extra flags).
