Hi,
I recently installed OpenFOAM 1.4.1 on a cluster that already has MPICH-1.2.5 installed in /home/software/mpich-1.2.5-pgirsh. So I set up the .bashrc in ~/OpenFOAM/OpenFOAM-1.4.1
and the bashrc in ~/OpenFOAM/OpenFOAM-1.4.1/.OpenFOAM-1.4.1 with:
export WM_MPLIB=MPICH    (on a Linux system)
and rebuilt Pstream in ~/OpenFOAM/OpenFOAM-1.4.1/src/Pstream.
But when I run:
mpirun -machinefile interFoam/damBreak/system/machines `which interFoam` interFoam/ damBreak
the case fails with this error message:
--> FOAM FATAL ERROR : bool Pstream::init(int& argc, char**& argv) : attempt to run parallel on 1 processor
#0 Foam::error::printStack(Foam::Ostream&) in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/lib/linuxGccDPOpt/libOpenFOAM.so"
#1 Foam::error::abort() in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/lib/linuxGccDPOpt/libOpenFOAM.so"
#2 Foam::Pstream::init(int&, char**&) in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/lib/linuxGccDPOpt/mpich-1.2.5-pgirsh/libPstream.so"
#3 Foam::argList::argList(int&, char**&, bool, bool) in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/lib/linuxGccDPOpt/libOpenFOAM.so"
#4 main in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/applications/bin/linuxGccDPOpt/interFoam"
#5 __libc_start_main in "/lib/tls/libc.so.6"
#6 Foam::regIOobject::readIfModified() in "/home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/applications/bin/linuxGccDPOpt/interFoam"
From function Pstream::init(int& argc, char**& argv)
in file Pstream.C at line 72.
/home/software/mpich-1.2.5-pgirsh/bin/mpirun.ch_p4: line 243: 22644 Aborted /home/yjs10023k/OpenFOAM/OpenFOAM-1.4.1/applications/bin/linuxGccDPOpt/interFoam "interFoam/" "damBreak" -p4pg /home/yjs10023k/OpenFOAM/yjs10023k-1.4.1/run/tutorials/PI22590 -p4wd /home/yjs10023k/OpenFOAM/yjs10023k-1.4.1/run/tutorials
How can I resolve this?
Also, if I run the case with the OpenMPI included in the OpenFOAM package, it only runs with two specific machines. How can I set up OpenMPI to run with the other ones?
Hi,
I hope you're aware that, in order to run a parallel computation with OF, you have to preprocess the case on a machine with enough internal memory to hold the whole case, i.e. run decomposePar on your case for a predetermined number of processors. After that you can run your parallel case using runPar. Once the simulation has finished, you need to postprocess it, again on a machine with sufficient internal memory, using reconstructPar.
I don't think your MPI needs any special setup.
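As a minimal sketch of the decomposePar step mentioned above, a system/decomposeParDict for four subdomains might look like this (the subdomain count, method, and coefficients are illustrative assumptions, not values from this thread):

```
// system/decomposeParDict -- illustrative values only
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n               (2 2 1);    // split the domain 2 x 2 x 1
    delta           0.001;
}

distributed     no;

roots           ();
```

Running decomposePar with such a dictionary creates processor0 through processor3 directories, which the parallel solver then works on and reconstructPar later merges back together.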
Hi Marco, thanks!
I have already run decomposePar, and I could run these cases with OpenMPI before. The problem is that OpenMPI only runs on two specific nodes of the cluster (I can run on node A together with node B, but not node A with node C). I think that is caused by the OpenMPI network setup: when I run with OpenMPI, it connects over ssh rather than rsh, and it asks for a password for each node. I don't know how to set up OpenMPI so the case runs across all the nodes of the cluster, so I turned to the MPICH that is already installed on the cluster under /home/software.
Hi,
Both MPICH and OpenMPI use ssh to connect to remote hosts by default. So if it doesn't work with OpenMPI, I wouldn't expect it to work with MPICH either, assuming communication is your only problem.
Your second post suggests you can run in parallel using OpenMPI on two nodes, but not run at all using MPICH. This could indicate some problem other than a failed ssh session. Or it may just mean that OpenMPI and MPICH react differently to the same failure.
If ssh is your problem, I recommend setting up your remaining nodes to use DSA authentication for ssh remote connections. There are lots of guides for doing that; one of them can be found at OpenMPI:
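A minimal sketch of the key-based ssh setup described above might look like the following (the key path and node name are placeholders, not taken from this thread; the post mentions DSA, but recent OpenSSH releases disable DSA keys, so an RSA key is generated here instead):

```shell
# Generate a passphrase-less key pair in a fresh directory.
# KEYDIR and nodeC are placeholder names for this sketch.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$KEYDIR/id_rsa"

# List what was created: the private key and its .pub counterpart.
ls "$KEYDIR"

# On a real cluster you would then install the public key on every node:
#   ssh-copy-id -i "$KEYDIR/id_rsa.pub" user@nodeC
# after which mpirun can reach that node without a password prompt.
```

Once the public key is in each node's ~/.ssh/authorized_keys, both OpenMPI and MPICH can launch remote processes without interactive password prompts.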
Hi, thanks!
I can run it with MPICH now. I don't know about OpenMPI yet; I will try it later, as I have run into some other problems.
I guess there was nothing wrong with my earlier MPICH setup for OpenFOAM. The difference is that the machine file formats for MPICH and OpenMPI are different. For MPICH the machine file should be:
I changed the machine file, and now it works well.
thank you all
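For reference, the two machine-file formats discussed above might look like this (node names and CPU counts are placeholders; MPICH-1's ch_p4 device takes one host per line with an optional CPU count after a colon, while OpenMPI's hostfile uses a slots keyword):

```
# MPICH-1 (ch_p4) machines file
nodeA:2
nodeB:2

# OpenMPI hostfile for the same layout
nodeA slots=2
nodeB slots=2
```

Passing an OpenMPI-style hostfile to MPICH (or vice versa) can leave mpirun starting only a single process, which matches the "attempt to run parallel on 1 processor" error earlier in this thread.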
Which supercomputer are you using? SSC? I remember they are using MVAPICH.