#1
Senior Member
Hi,

I recently installed OpenFOAM 1.4.1 on a cluster that already has MPICH-1.2.5 installed in /home/software/mpich-1.2.5-pgirsh. So I set up the .bashrc in ~/OpenFOAM/OpenFOAM-1.4.1 with:

export MPICH_PATH=/home/software/mpich-1.2.5-pgirsh
export MPICH_ARCH_PATH=$MPICH_PATH
export MPICH_ROOT=$MPICH_ARCH_PATH
AddLib $MPICH_ARCH_PATH/lib
AddPath $MPICH_ARCH_PATH/bin
export FOAM_MPI_LIBBIN=$FOAM_LIBBIN/mpich-1.2.5-pgirsh

and in the bashrc in ~/OpenFOAM/OpenFOAM-1.4.1/.OpenFOAM-1.4.1 (on Linux):

export WM_MPLIB=MPICH

Then I rebuilt Pstream in ~/OpenFOAM/OpenFOAM-1.4.1/src/Pstream. But when I run

mpirun -machinefile interFoam/damBreak/system/machines `which interFoam` interFoam/ damBreak

the case fails with this error:

--> FOAM FATAL ERROR : bool Pstream::init(int& argc, char**& argv) : attempt to run parallel on 1 processor
#0 Foam::error::printStack(Foam::Ostream&)
#1 Foam::error::abort() in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/lib/linuxGccDPOpt/libOpenFOAM.so"
#2 Foam::Pstream::init(int&, char**&) in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/lib/linuxGccDPOpt/mpich-1.2.5-pgirsh/libPstream.so"
#3 Foam::argList::argList(int&, char**&, bool, bool) in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/lib/linuxGccDPOpt/libOpenFOAM.so"
#4 main in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/applications/bin/linuxGccDPOpt/interFoam"
#5 __libc_start_main in "/lib/tls/libc.so.6"
#6 Foam::regIOobject::readIfModified() in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/applications/bin/linuxGccDPOpt/interFoam"

From function Pstream::init(int& argc, char**& argv) in file Pstream.C at line 72.
FOAM aborting

/home/software/mpich-1.2.5-pgirsh/bin/mpirun.ch_p4: line 243: 22644 Aborted /home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/applications/bin/linuxGccDPOpt/interFoam "interFoam/" "damBreak" -p4pg /home/yjs10023k/OpenFOAM/yjs10023k-1.4.1/run/tutorials/PI22590 -p4wd /home/yjs10023k/OpenFOAM/yjs10023k-1.4.1/run/tutorials

How can I resolve this?

Also, when I run a case with the OpenMPI that ships with the OpenFOAM package, it only runs on two particular machines. How can I set up OpenMPI to run on the others?

Thanks!

yours, wayne
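For reference, here is a minimal sketch of how such a run is usually launched under MPICH-1.2. The node names aa/bb and the process count are placeholders, and the launch command is echoed rather than executed; OpenFOAM 1.4 solvers take root and case arguments plus a -parallel flag, and -np must match the number of subdomains created by decomposePar.

```shell
# MPICH-1.2 machine files list one "host:slots" entry per line.
# aa and bb are placeholder node names - substitute your cluster's hosts.
cat > machines <<'EOF'
aa:2
bb:2
EOF

# The launch command (echoed here rather than executed): -np must match
# the number of subdomains from decomposePar, and the solver needs -parallel.
cmd='mpirun -np 4 -machinefile machines interFoam <root> damBreak -parallel'
echo "$cmd"
```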
#2
Member
Marco Kupiainen
Join Date: Mar 2009
Posts: 31
Rep Power: 16
Hi,

I hope you're aware that, in order to run a parallel computation with OpenFOAM, you first have to preprocess the case on a machine with enough internal memory to hold the whole case, i.e. run decomposePar on your case for a predetermined number of processors. After that you can run your parallel case using runPar. Once the simulation has finished, you need to postprocess it, again on a machine with sufficient internal memory, using reconstructPar. I don't think your MPI needs any special setup.

marco
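For context, decomposePar reads the case's system/decomposeParDict. A minimal sketch for four subdomains is shown below; the simple method and its coefficient values are illustrative, not taken from the thread.

```
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n               (2 2 1);
    delta           0.001;
}
```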
#3
Senior Member
Hi Marco,

Thanks! I have run decomposePar, and I could run these cases with OpenMPI before. The problem is that OpenMPI only works between two particular nodes of the cluster (I can run on node A together with node B, but not node A with node C). I think this is down to OpenMPI's network setup: it connects over ssh rather than rsh, and it asks for a password for each node. I don't know how to configure OpenMPI so a case can run across all the nodes of the cluster, so I turned to the MPICH that is already installed on the cluster under /home/software.

wayne
#4
Member
Martin Aunskjaer
Join Date: Mar 2009
Location: Denmark
Posts: 53
Rep Power: 16
Hi,

Both MPICH and OpenMPI use ssh to connect to remote hosts by default, so if it doesn't work with OpenMPI I wouldn't expect it to work with MPICH either, assuming communication is your only problem. Your second post suggests you can run in parallel using OpenMPI on two nodes, but cannot run at all using MPICH. That could indicate some problem other than a failed ssh session, or it may just mean that OpenMPI and MPICH react differently to the same failure. If ssh is your problem, I recommend setting up your remaining nodes to use DSA (key-based) authentication for ssh remote connections. There are lots of guides on how to do that; one of them is in the OpenMPI FAQ: http://www.open-mpi.org/faq/?category=rsh
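The key-based ssh setup suggested above boils down to a few commands. A sketch follows; nodeB is a placeholder host name, and the commands are written to a checklist file here rather than executed, since they would contact real hosts.

```shell
# Passwordless-ssh checklist for each compute node; nodeB is a placeholder.
cat > ssh-setup.txt <<'EOF'
ssh-keygen -t dsa -N '' -f ~/.ssh/id_dsa    # one key pair, empty passphrase
ssh-copy-id -i ~/.ssh/id_dsa.pub nodeB      # append the key on each node
ssh nodeB true                              # should now log in with no password
EOF
cat ssh-setup.txt
```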
#5
Senior Member
Hi,

Thanks! I can run it with MPICH now. I don't know about OpenMPI yet; I will try it later, since I have run into some other problems. I guess there was nothing wrong with my earlier MPICH settings for OpenFOAM. The difference is that the machine-file formats for MPICH and OpenMPI are different.

For MPICH the machine file should be:

aa:2
bb:2
cc:2
...

For OpenMPI:

aa cpu=2
bb cpu=2
cc cpu=2
...

After I changed the machine file it worked well. Thank you all.

wayne
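The two formats described above can be converted mechanically. A small sketch follows; the host names aa/bb/cc are the placeholders from the post, and note that newer OpenMPI releases use "slots=2" rather than "cpu=2".

```shell
# OpenMPI-style hostfile, in the form quoted in the post ("cpu=2").
cat > hostfile.openmpi <<'EOF'
aa cpu=2
bb cpu=2
cc cpu=2
EOF

# Rewrite it into the MPICH-1.2 "host:nprocs" form.
awk '{ n = $2; sub(/^cpu=/, "", n); print $1 ":" n }' hostfile.openmpi > machines.mpich
cat machines.mpich   # one "host:2" line per node
```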
#6
Senior Member
Daniel WEI (老魏)
Join Date: Mar 2009
Location: Beijing, China
Posts: 689
Blog Entries: 9
Rep Power: 20
Hi wayne,

Which supercomputer are you using? SSC? I remember they use MVAPICH.
__________________
~ Daniel WEI
Boeing Research & Technology - China
Beijing, China