CFD Online Discussion Forums

Thread: Grid Engine OpenFOAM15dev and OpenMPI124 (https://www.cfd-online.com/Forums/openfoam-installation/57220-grid-engine-openfoam15dev-openmpi124.html)

tian February 17, 2009 10:41

Hi,

I have a cluster with OpenMPI-1.2.4 and Grid Engine. Does somebody know how I can tell OpenFOAM to use the native OpenMPI and not the OpenMPI-1.2.6 in ThirdParty? Thanks

Bye
Thomas

olesen February 17, 2009 10:59

I don't know what the sourceforge "dev" version has, but in the normal distribution you should take a look at the etc/settings.sh file to find the OPENMPI settings. You could try adding your own definitions. E.g., you could define WM_MPLIB=SYSTEM_OPENMPI in the etc/bashrc and handle this case in etc/settings.sh.
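
A minimal sketch of how such a case could look (the SYSTEM_OPENMPI branch and the install path below are illustrative assumptions, not copied from the actual 1.5-dev settings.sh):

# in etc/bashrc: select the system MPI flavour
export WM_MPLIB=SYSTEM_OPENMPI

# in etc/settings.sh, as an extra branch of the existing "case $WM_MPLIB in":
SYSTEM_OPENMPI)
    export MPI_HOME=/apps/openmpi/openmpi-1.2.4-gcc   # hypothetical cluster path
    export MPI_ARCH_PATH=$MPI_HOME

    _foamAddPath $MPI_ARCH_PATH/bin
    _foamAddLib $MPI_ARCH_PATH/lib

    export FOAM_MPI_LIBBIN=$FOAM_LIBBIN/openmpi-system
    ;;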

Keep in mind that you should also examine src/Pstream/Allwmake to make sure the Pstreams get re-made correctly. With any luck, you won't run into any linker errors.

However, there shouldn't be any particular reason that you can't just use the version in ThirdParty; it should coexist fine with any number of other versions of OpenMPI (provided that your LD_LIBRARY_PATH etc. aren't cross-contaminated).
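
A quick way to check for such cross-contamination (plain shell, nothing OpenFOAM-specific):

# which mpirun wins on the PATH?
which mpirun

# list all MPI-related entries on the library path, one per line
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -i mpi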

bastil February 17, 2009 11:54

I agree with Mark. We do it the same way. Building against other MPIs is hard. I tried it once but finally went back to "default" MPI.

Regards

tian February 17, 2009 15:24

Hi,

thanks for the answer. To check my understanding:

in the cluster there is OpenMPI-1.2.4:
/apps/openmpi/openmpi-1.2.4-gcc/

In the etc/bashrc:
export WM_MPLIB=OPENMPI

In the settings.sh:
OPENMPI)
# mpi_version=openmpi-1.2.6
mpi_version=openmpi-1.2.4-gcc

# export MPI_HOME=$WM_THIRD_PARTY_DIR/$mpi_version
export MPI_HOME=/apps/openmpi/$mpi_version

# export MPI_ARCH_PATH=$MPI_HOME/platforms/$WM_OPTIONS
export MPI_ARCH_PATH=/apps/openmpi/$mpi_version

# Tell OpenMPI where to find its install directory
export OPAL_PREFIX=$MPI_ARCH_PATH

_foamAddPath $MPI_ARCH_PATH/bin
_foamAddLib $MPI_ARCH_PATH/lib

export FOAM_MPI_LIBBIN=$FOAM_LIBBIN/$mpi_version
unset mpi_version
;;

Then I have to re-make Pstream, right? Thanks for the help

Bye
Thomas

tian February 18, 2009 13:05

Hi,

I tried it without success. When I try to re-make Pstream, this happens:

tian@poseidon051:~/OpenFOAM/OpenFOAM-1.5-dev/src/Pstream> ./Allwmake
+ wmake libso dummy
/home/tian/OpenFOAM/ThirdParty/gcc-4.3.1/platforms/linux64/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.3.1/../../../../lib64/libstdc++.so: file not recognized: File format not recognized
collect2: ld returned 1 exit status
make: *** [/home/tian/OpenFOAM/OpenFOAM-1.5-dev/lib/linux64GccDPOpt/dummy/libPstream.so] Error 1
+ case "$WM_MPLIB" in
+ export WM_OPTIONS=linux64GccDPOptOPENMPI
+ WM_OPTIONS=linux64GccDPOptOPENMPI
+ set +x

Note: ignore spurious warnings about missing mpicxx.h headers
+ wmake libso mpi
/home/tian/OpenFOAM/ThirdParty/gcc-4.3.1/platforms/linux64/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.3.1/../../../../lib64/libstdc++.so: file not recognized: File format not recognized
collect2: ld returned 1 exit status
make: *** [/home/tian/OpenFOAM/OpenFOAM-1.5-dev/lib/linux64GccDPOpt//libPstream.so] Error 1
tian@poseidon051:~/OpenFOAM/OpenFOAM-1.5-dev/src/Pstream>
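
As a first check, one could ask what that libstdc++.so actually is, since "File format not recognized" from ld typically means the file is not a valid ELF object for the target (wrong architecture, a plain-text linker script, or a truncated/corrupt file). The path below is copied straight from the log above:

# report the file type (ELF class, architecture, or "ASCII text")
file /home/tian/OpenFOAM/ThirdParty/gcc-4.3.1/platforms/linux64/bin/../lib/gcc/x86_64-unknown-linux-gnu/4.3.1/../../../../lib64/libstdc++.so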

Any idea about this?

Thanks

Bye
Thomas

asaha February 19, 2009 01:30

Is there a problem running Grid Engine on the cluster with the OpenFOAM version of OpenMPI?

I think that if the OpenMPI version in the cluster supports Grid Engine, then you may be able to use the OpenMPI provided by OpenFOAM without going through the above.

In fact, I had a problem on my cluster where the cluster OpenMPI did not have support for Grid Engine.

You can check the Grid Engine support in the cluster version of OpenMPI as below:

ompi_info | grep gridengine

The result should be as below:

MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.4)
MCA pls: gridengine (MCA v1.0, API v1.3, Component v1.2.4)

tian February 25, 2009 07:58

Hi,

I tried all these things and it is not working. It seems that something is wrong with OpenMPI and Grid Engine together.

ompi_info gave me the same info:
MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.4)
MCA pls: gridengine (MCA v1.0, API v1.3, Component v1.2.4)


My startscript.sh: "qsub startscript.sh"

#!/bin/bash
#$ -S /bin/bash
#$ -pe mpi 12
#$ -cwd

export PATH="/apps/openmpi/1.2.4-gcc/bin:$PATH"
export LD_LIBRARY_PATH="/apps/openmpi/1.2.4-gcc/lib:$LD_LIBRARY_PATH"

mpirun -mca pls_gridengine_verbose 1 -np $NSLOTS -machinefile ~/mpichhosts.$JOB_ID echo "hallo"

Every time I got this error:
error: executing task of job 1464184 failed:
[poseidon058:07894] ERROR: A daemon on node poseidon024 failed to start as expected.
[poseidon058:07894] ERROR: There may be more information available from
[poseidon058:07894] ERROR: the 'qstat -t' command on the Grid Engine tasks.
[poseidon058:07894] ERROR: If the problem persists, please restart the
[poseidon058:07894] ERROR: Grid Engine PE job
[poseidon058:07894] ERROR: The daemon exited unexpectedly with status 1.

qstat -t gave no information about this error. Has somebody experience with this? Thanks!

Bye
Thomas

tian February 25, 2009 09:08

Hi,

I found it out:

I had to set "control_slaves TRUE" in the Grid Engine PE.
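
For reference, a sketch of how to inspect and change that with qconf (the PE name mpi matches the job script above; everything else is site-specific):

# show the current parallel environment definition
qconf -sp mpi

# open the definition in $EDITOR and set: control_slaves TRUE
qconf -mp mpi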

Thanks
Bye
Thomas

tian February 25, 2009 09:59

Hi,

now I have a question about decomposeParDict.

How can I set up this file automatically from Grid Engine? I mean, I changed:

numberOfSubdomains 4;
to
numberOfSubdomains slots;

If I use the method metis, do I then have to write the number "1" into this file for each slot that I get, every time?
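
For context, with the metis method the dictionary carries one weight entry per subdomain, so a minimal system/decomposeParDict looks roughly like this (the values are purely illustrative):

numberOfSubdomains 4;

method metis;

metisCoeffs
{
    // one weight per subdomain; equal weights give an even split
    processorWeights ( 1 1 1 1 );
}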

Can somebody show me a good solution? Thanks!

Bye
Thomas

olesen February 25, 2009 10:21

In general, I would turn the problem around the other way: extract the numberOfSubdomains and use it to form your -pe request. At least this corresponds better to our usage pattern, e.g. restarting an existing parallel calculation without decomposing yet again. If the domain is already decomposed, you can just pick up the number of processor* directories.
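
A sketch of both variants (the sed pattern and the script name myScript.sh are illustrative assumptions about your setup):

# read the subdomain count from the dictionary ...
nProcs=$(sed -ne 's/^numberOfSubdomains *\([0-9][0-9]*\) *;.*/\1/p' system/decomposeParDict)

# ... or count the processor* directories of an already-decomposed case
# nProcs=$(ls -d processor[0-9]* 2>/dev/null | wc -l)

# and use the count to form the -pe request
qsub -pe mpi "$nProcs" myScript.sh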


However, if you want it the other way around, you'll need to add the corresponding logic to your GridEngine job script:

if [ "$NSLOTS" -gt 1 ]
then
    # create/edit the dictionary, do the decompose
    # run the application with -parallel
else
    # run the application without -parallel
fi

Use your favourite tool (sed, perl, shell) to create/edit the dictionary.
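
E.g. a one-line sed edit before decomposing (an in-place edit, so treat this as a sketch and keep a copy of your dictionary):

# set numberOfSubdomains to the slot count granted by GridEngine
sed -i "s/^numberOfSubdomains.*/numberOfSubdomains $NSLOTS;/" system/decomposeParDict

# redo the decomposition if one already exists
decomposePar -force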

You should note that decomposePar in OpenFOAM-1.5 also has -force and -ifRequired options that can prove useful in this type of scripting.

gschaider February 26, 2009 10:43

Hi Alex!

I know I'm a bit annoying with this PyFoam stuff, but if good old Metis is alright for decomposing, you might consider this:

http://openfoamwiki.net/index.php/Co...luster_support

But be warned: it works nicely at our place, but I don't know if anybody else uses that part of PyFoam.

Bernhard

