CFD Online Discussion Forums

-   OpenFOAM Installation
-   -   PBS queuing with intel MPI

hartinger August 14, 2006 06:22

Hi, we've got a new cluster

we've got a new cluster available at our college (Imperial) with 160 Xeon nodes, a PBS queuing system, and a Red Hat installation. So I am more than eager to get OpenFOAM running. But they have their communication geared up for Intel MPI 2.0.1. LAM is not supported, and MPICH throws
[0] MPI Abort by user Aborting program !
[0] Aborting program!
which is not too helpful.

So, I would like to use their intel-mpi. As far as I understand, I need to recompile the Pstream library and then compile the executable (trying interFoam for a start).
In order to do that I want to introduce 'export WM_MPLIB=MPI_INTEL' in 'OpenFOAM/OpenFOAM-1.3/.OpenFOAM-1.3/bashrc' (export rather than setenv, since this is a bash file) and then have a matching section 'elif [ .$WM_MPLIB = .MPI_INTEL ]; then ...' in 'OpenFOAM/OpenFOAM-1.3/bashrc' accordingly.
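The branch described above might look roughly like this (a sketch only: the variable names mirror the existing WM_MPLIB branches, and the Intel MPI install path is taken from the admin's example below; FOAM_MPI_LIBBIN naming is an assumption):

```shell
#!/bin/sh
# Sketch of a proposed MPI_INTEL branch for the OpenFOAM-1.3 settings file.
# The Intel MPI path comes from the cluster admin's example; everything
# else follows the pattern of the existing WM_MPLIB branches.
WM_MPLIB=MPI_INTEL
export WM_MPLIB

if [ .$WM_MPLIB = .MPI_INTEL ]; then
    MPI_HOME=/apps/intel/ict/mpi/2.0.1      # Intel MPI install root
    MPI_ARCH_PATH=$MPI_HOME                 # used by the wmake rules
    export MPI_HOME MPI_ARCH_PATH
fi

echo "WM_MPLIB=$WM_MPLIB MPI_ARCH_PATH=$MPI_ARCH_PATH"
```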

From the cluster admin I got the following example on how to link the mpi library:
"g++ -o foo foo.c ${MPI_LIBS} -I ${MPI_HOME}/include"
$MPI_LIBS = -L/apps/intel/ict/mpi/2.0.1/lib -lmpi -lmpigf -lmpigi -lrt -lpthread -ldl
$MPI_HOME = /apps/intel/ict/mpi/2.0.1/include
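If the build goes through wmake's per-architecture MPI rules files (a guess at the layout; check wmake/rules for your compiler/architecture), the admin's flags would translate into a rules file along these lines, with the file name and location being assumptions:

```make
# wmake/rules/linux64Gcc/mplibMPI_INTEL  (assumed name and location)
# PINC/PLIBS carry exactly the include path and link flags from the
# cluster admin's example.
PFLAGS =
PINC   = -I/apps/intel/ict/mpi/2.0.1/include
PLIBS  = -L/apps/intel/ict/mpi/2.0.1/lib -lmpi -lmpigf -lmpigi -lrt -lpthread -ldl
```

Pstream would then pick these up when rebuilt with WM_MPLIB set to MPI_INTEL.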

It seems to me that all the necessary information is there to put it together, but my knowledge of the FOAM build system is not sufficient. I tried a couple of things, but none of them worked out.

So, how do I do it? Please help! 160 bored nodes are waiting for me.
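For completeness, once a parallel-capable interFoam exists, a minimal PBS submission script for a cluster like this might look as follows. This is a sketch only: the node counts, sourced environment path, case name, and mpirun invocation are all assumptions (OpenFOAM-1.3 solvers take root and case arguments plus -parallel, and the case must be decomposed with decomposePar first):

```shell
#!/bin/sh
#PBS -N interFoam
#PBS -l nodes=4:ppn=2

cd $PBS_O_WORKDIR

# Source the OpenFOAM environment (path is an assumption)
. $HOME/OpenFOAM/OpenFOAM-1.3/.OpenFOAM-1.3/bashrc

# Run on 8 processors; decomposePar must have been run on the case first
mpirun -np 8 interFoam $FOAM_RUN myCase -parallel
```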

Thank you
