CFD Online Discussion Forums


kpsl May 21, 2011 10:00

OpenFOAM 1.7.x using intel compiler and MVAPICH2
Dear Foamers,

I've been searching the forum for days, but I can't seem to find a thread with the answer to my problem.

I am trying to compile the latest OpenFOAM 1.7.x (git repository) using the Intel compiler and MVAPICH2.

I have modified my bashrc to read:
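
(The exact lines didn't make it into the post; presumably they select the compiler and the MPI library, along these lines, with the WM_MPLIB name matching the mplibMPI-MVAPICH2 rules file created below:)

export WM_COMPILER=Icc
export WM_MPLIB=MPI-MVAPICH2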

Also, I have added the following:

export MPI_HOME=/sw/comm/mvapich2/1.5.0-intel
_foamAddPath $MPI_ARCH_PATH/bin
_foamAddLib    $MPI_ARCH_PATH/lib
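
(Note that the snippet above uses $MPI_ARCH_PATH but only exports MPI_HOME; presumably MPI_ARCH_PATH is defined alongside it, e.g. in a WM_MPLIB branch of etc/settings.sh, roughly like this:)

MPI-MVAPICH2)
    export MPI_HOME=/sw/comm/mvapich2/1.5.0-intel
    export MPI_ARCH_PATH=$MPI_HOME

    _foamAddPath $MPI_ARCH_PATH/bin
    _foamAddLib  $MPI_ARCH_PATH/lib
    ;;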

Furthermore, I created a file in wmake/rules/linux64Icc named mplibMPI-MVAPICH2 that contains:


PINC      = -I$(MPI_ARCH_PATH)/include
PLIBS      = -L$(MPI_ARCH_PATH)/lib -lmpich -lrt

I am not sure if the contents of this file are correct; however, it seems to work.
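
(If I understand the wmake layout correctly, the file name is what matters most: the MPI-aware parts of the build, e.g. Pstream, pull it in through something like

sinclude $(RULES)/mplib$(WM_MPLIB)

so it has to be called mplib plus whatever WM_MPLIB is set to, and typically only needs to define PINC and PLIBS, plus PFLAGS if any extra defines are required.)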

Note that I am using intel.compiler (icc and ifort) version 12.0.0 and gcc 4.5.2.

I get many warnings during compilation, all of them similar to this one:


/home/kris/OpenFOAM/OpenFOAM-1.7.x/src/OpenFOAM/lnInclude/pTraits.H(71): warning #597: "Foam::pTraits<PrimitiveType>::operator Foam::symmTensor() const [with PrimitiveType=Foam::symmTensor]" will not be called for implicit or explicit conversions
          operator PrimitiveType() const
          detected during:
            instantiation of class "Foam::pTraits<PrimitiveType> [with PrimitiveType=Foam::symmTensor]" at line 82 of "/home/kris/OpenFOAM/OpenFOAM-1.7.x/src/OpenFOAM/lnInclude/dimensionedType.H"
            instantiation of class "Foam::dimensioned<Type> [with Type=Foam::symmTensor]" at line 182 of "/home/kris/OpenFOAM/OpenFOAM-1.7.x/src/OpenFOAM/lnInclude/transform.H"

Upon completion I notice that barely half the solvers have managed to compile.

Note that I have also compiled using GCC and MVAPICH2 and this worked perfectly. Thus, the problem must lie with the Intel compiler. Am I using the wrong version? Or is there something else I have missed?

Any help would be greatly appreciated.

wyldckat May 22, 2011 02:59

Greetings Kris,

The error you are getting is very similar to this one:
A couple of solutions are listed there, so you might want to give them a try!

Best regards,

kpsl May 22, 2011 04:45

Hi Bruno,

thank you for your reply :)

I saw that post; in fact, I copied and pasted the error from there, since I don't have access to my log files from home.

I noticed that the -xT warning no longer occurs in the latest 1.7.x release (it does occur a lot in 1.7.1). However, the compilation still gives the warning I originally posted many times.

I am not sure where to apply -std=c++0x. Can I do it globally or do I need to go through all of the wmake files? Sorry, my compiling skills are not that developed.
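
(For the record, assuming the standard 1.7.x wmake layout, one place to apply such a flag globally would be the compiler line in wmake/rules/linux64Icc/c++, e.g.

CC          = icpc -std=c++0x

which then affects everything built with wmake, without touching the individual Make files. The thread Bruno linked may of course suggest a different spot.)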

wyldckat May 22, 2011 04:57

Hi Kris,

Mmm, I just remembered a Japanese blog that has a few more instructions:
You can try to use Google's hammer translator to sort out a few more details:
The interesting detail on that blog is that Icc 12 got beaten by gcc 4.5...

Best regards,

kpsl May 22, 2011 05:08

Hi Bruno,

thank you for the great hint!! I will try this first thing tomorrow.

Interesting that gcc beats icc. I guess I will compare the two myself and post my results ASAP.

Kind regards,

kpsl May 24, 2011 04:14

So, I managed to compile OpenFOAM without problems using the Intel compiler and MVAPICH2 thanks to Bruno's hint.

It runs fine in serial mode but it doesn't seem to work in parallel on the cluster I am using.

Whenever I submit a job I get the message:


mpiexec: Warning: tasks 0-63 exited before completing MPI startup

No further explanation is given as to what caused the error.
I will ask my network admin to see if he knows a solution.
The strange thing is that when compiling with gcc and MVAPICH2, using the exact same MPI settings, everything works fine.
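
(A couple of quick sanity checks that might narrow this down, assuming the job somehow mixes the Intel and gcc MVAPICH2 installations; the solver name here is only an example:)

which mpiexec
# should point into /sw/comm/mvapich2/1.5.0-intel/bin for the Intel build

ldd $FOAM_APPBIN/interFoam | grep -i mpi
# libmpich should resolve to /sw/comm/mvapich2/1.5.0-intel/lib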

kpsl May 24, 2011 05:13

OK, the results are in!

Benchmark done using the following setup:

16 nodes using 4 cores each = 64 CPUs
Mesh containing ~3.5 million cells

Criterion: real (clock) time taken to reach 0.0002 s of simulated time.

Intel.Compiler/12.0.0 & MVAPICH2/1.5.0-intel
Run1: ExecutionTime = 88.49 s ClockTime = 90 s
Run2: ExecutionTime = 87.57 s ClockTime = 88 s
Run3: ExecutionTime = 87.9 s ClockTime = 89 s
Mean: ExecutionTime = 87.98 s ClockTime = 89 s

Gcc/4.5.2 & MVAPICH2/1.5.0-gcc
Run1: ExecutionTime = 83.33 s ClockTime = 87 s
Run2: ExecutionTime = 83.14 s ClockTime = 85 s
Run3: ExecutionTime = 82.78 s ClockTime = 84 s
Mean: ExecutionTime = 83.08 s ClockTime = 85.33 s

GCC clearly beats Intel's compiler!

wyldckat May 27, 2011 15:05

Hi Kris,

This is very nice indeed! But from both benchmarks, I still get the feeling that anything less than 10 min isn't much of a benchmark with OpenFOAM, since what we save on computational power might otherwise get wasted on some other task, like file access and MPI and so on...

But still, extrapolating a 24h00 run on icc gives a whopping 23h12 on gcc in the worst cases! That's easily a meal or two!
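
(Presumably taking the worst ClockTimes from each set: 24 h x 87/90 ≈ 23.2 h ≈ 23h12.)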

Now the big question is: are both end results as correct as when built with the officially advised gcc 4.4 series? And how would a gcc 4.4 build stack up against the other two?

Oh well... some things are best left unknown :rolleyes: ;)

Best regards,

kpsl August 12, 2011 13:44

I took the liberty of benchmarking the Intel compiler against Gcc once again, this time using the new OpenFOAM-2.0.x.

Mesh containing ~1 million cells.

First a rather short run on one node. Data is written once at the end of the computation:

Intel.Compiler/12.0.4 & MVAPICH2/1.5.0-intel
1 Node, 8 CPUs, Intel Xeon Harpertown E5472
Run1: ExecutionTime = 3438.8 s ClockTime = 3449 s
Run2: ExecutionTime = 3440.75 s ClockTime = 3450 s
Run3: ExecutionTime = 3448.05 s ClockTime = 3458 s

Gcc/4.5.2 & MVAPICH2/1.5.0-gcc
1 Node, 8 CPUs, Intel Xeon Harpertown E5472
Run1: ExecutionTime = 3286.97 s ClockTime = 3297 s
Run2: ExecutionTime = 3288.28 s ClockTime = 3297 s
Run3: ExecutionTime = 3285.53 s ClockTime = 3294 s

And the same run on 8 nodes:

Intel.Compiler/12.0.4 & MVAPICH2/1.5.0-intel
8 Nodes, 4 CPUs each, Intel Xeon Gainestown X5570
Run1: ExecutionTime = 209.24 s ClockTime = 211 s
Run2: ExecutionTime = 208.58 s ClockTime = 209 s
Run3: ExecutionTime = 209.44 s ClockTime = 212 s

Gcc/4.5.2 & MVAPICH2/1.5.0-gcc
8 Nodes, 4 CPUs each, Intel Xeon Gainestown X5570
Run1: ExecutionTime = 177.08 s ClockTime = 178 s
Run2: ExecutionTime = 175.57 s ClockTime = 176 s
Run3: ExecutionTime = 177.66 s ClockTime = 181 s

Then I took Bruno's advice and let the computation run 5 times longer. Data is now written 5 times in total:

Intel.Compiler/12.0.4 & MVAPICH2/1.5.0-intel
8 Nodes, 4 CPUs each, Intel Xeon Gainestown X5570
Run1: ExecutionTime = 1099.89 s ClockTime = 1105 s
Run2: ExecutionTime = 1099.45 s ClockTime = 1103 s
Run3: ExecutionTime = 1096.52 s ClockTime = 1100 s

Gcc/4.5.2 & MVAPICH2/1.5.0-gcc
8 Nodes, 4 CPUs each, Intel Xeon Gainestown X5570
Run1: ExecutionTime = 931.13 s ClockTime = 936 s
Run2: ExecutionTime = 932.93 s ClockTime = 938 s
Run3: ExecutionTime = 930.48 s ClockTime = 936 s

As you can see, Gcc beats Intel by roughly 5% on a single node and by about 15% in the parallel runs. That is quite a bit more than with 1.7.x, although I must admit the "smaller" mesh may have something to do with it.
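
(For the long parallel run, for example: mean ExecutionTime of 1098.6 s with icc versus 931.5 s with gcc, i.e. (1098.6 - 931.5) / 1098.6 ≈ 15.2% faster.)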
