CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (http://www.cfd-online.com/Forums/openfoam-solving/)
-   -   OF 2.0.1 parallel running problems (http://www.cfd-online.com/Forums/openfoam-solving/95794-2-0-1-parallel-running-problems.html)

moser_r January 3, 2012 09:56

OF 2.0.1 parallel running problems
 
I have been running some cases on a single processor with no problem under OF 2.0.1. With the case being quite large, I wanted to start running it in parallel on my multi-core machine, as I had been doing under OF 1.7.1. I started using the command:

mpirun -np 4 simpleFoam -parallel > log &

which is exactly what I had been using under OF 1.7.1. My log file contained the following message:

Warning: Command line arguments for program should be given
after the program name. Assuming that -parallel is a
command line argument for the program.
Missing: program name
Program simpleFoam either does not exist, is not
executable, or is an erroneous argument to mpirun.

I also tried running using foamJob (foamJob -parallel simpleFoam), but then I get the following output in the log file:

--> FOAM FATAL ERROR:
bool IPstream::init(int& argc, char**& argv) : attempt to run parallel on 1 processor

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 80.

FOAM aborting

#0 Foam::error::printStack(Foam::Ostream&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#1 Foam::error::abort() in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#2 Foam::UPstream::init(int&, char**&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/openmpi-system/libPstream.so"
#3 Foam::ParRunControl::runPar(int&, char**&) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#4 Foam::argList::argList(int&, char**&, bool, bool) in "/opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so"
#5   in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/simpleFoam"
#6  __libc_start_main in "/lib/libc.so.6"
#7   in "/opt/openfoam201/platforms/linux64GccDPOpt/bin/simpleFoam"
[linux1:08080] *** Process received signal ***
[linux1:08080] Signal: Aborted (6)
[linux1:08080] Signal code: (-6)
[linux1:08080] [ 0] /lib/libc.so.6(+0x33af0) [0x7fa3421cfaf0]
[linux1:08080] [ 1] /lib/libc.so.6(gsignal+0x35) [0x7fa3421cfa75]
[linux1:08080] [ 2] /lib/libc.so.6(abort+0x180) [0x7fa3421d35c0]
[linux1:08080] [ 3] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam5error5abortEv+0x241) [0x7fa34301c031]
[linux1:08080] [ 4] /opt/openfoam201/platforms/linux64GccDPOpt/lib/openmpi-system/libPstream.so(_ZN4Foam8UPstream4initERiRPPc+0x450) [0x7fa341f97e30]
[linux1:08080] [ 5] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam13ParRunControl6runParERiRPPc+0x15) [0x7fa343039f95]
[linux1:08080] [ 6] /opt/openfoam201/platforms/linux64GccDPOpt/lib/libOpenFOAM.so(_ZN4Foam7argListC1ERiRPPcbb+0x29d3) [0x7fa343037f33]
[linux1:08080] [ 7] simpleFoam() [0x417b93]
[linux1:08080] [ 8] /lib/libc.so.6(__libc_start_main+0xfd) [0x7fa3421bac4d]
[linux1:08080] [ 9] simpleFoam() [0x4160e9]
[linux1:08080] *** End of error message ***
Aborted


What do I need to do differently under OF 2.0.1 to get the case to run in parallel?

Many thanks

Richard

moser_r January 9, 2012 05:48

In case it is useful to others, I have found the solution to my problem, which actually had nothing to do with OpenFOAM. I had installed a PGI Fortran compiler on the same machine, which included its own MPI module, set up to use different locations on my machine, so the wrong mpirun was being found. The solution was to move the line relating to OpenFOAM in my bashrc file to the end of the file (after the path statements for the PGI compiler), so that the correct version of MPI was run.
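For anyone hitting the same thing, a quick way to check which MPI is being found first on the PATH (what it should resolve to depends on your own installation):

which mpirun       # should point at the MPI that OpenFOAM was built against, not the PGI one
mpirun --version   # the openmpi-system directory in the trace above suggests the system Open MPI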

Jedoch February 28, 2012 08:45

Hi moser_r,

Fortunately I found your post. The error you described is the same which is driving me insane. Could you please tell me, what exactly you wrote in your bashrc to solve the problem?

Thank you!

moser_r March 6, 2012 03:18

Sorry for not replying sooner - I have been travelling for work. My solution was to take the line relating to OpenFOAM in my bashrc file (source /opt/openfoam201/etc/bashrc) and move it to the end of the file. This sorted out the issues I had - hope it does the same for you!
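For completeness, the end of the bashrc then looks roughly like this (the PGI line is a placeholder for wherever your compiler setup lives):

# PGI compiler paths first (placeholder install location)
export PATH=/opt/pgi/linux86-64/bin:$PATH
# OpenFOAM last: its bashrc prepends to PATH, so its MPI is found first
source /opt/openfoam201/etc/bashrc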

hawkeye321 January 17, 2013 12:08

MPI Issue on OpenFOAM 2.1
 
Hi Foamers

I am having the same issue discussed above! The error I get is
------------------------------------------------------------
bool IPstream::init(int& argc, char**& argv) : attempt to run parallel on 1 processor

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 81.

FOAM aborting
-------------------------------------------------------------
I have moved the line relating to OpenFOAM to the end of my bashrc file, but I still have the problem! Any comments?

buffi January 20, 2013 17:11

If you want to run your case in parallel, you have to decompose it first using decomposePar. You need to write a system/decomposeParDict that tells decomposePar how many subdomains you want and how the whole domain is to be split; see the sketch below.
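As a minimal sketch, a system/decomposeParDict for a four-way split could look like this (the simple method and its coefficients are just one common choice):

FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);    // pieces in x, y, z; the product must equal numberOfSubdomains
    delta       0.001;
}

Then run decomposePar once, and after that mpirun -np 4 simpleFoam -parallel, with -np matching numberOfSubdomains.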

The error message you have seen doesn't have anything to do with the OP's problem.

MangoMango March 4, 2014 06:17

Hi,

I'm facing the same problem. What is meant by "the solution was to move the line relating to OpenFOAM in my bashrc file to the end of the file (after the path statements for the PGI compiler), so that the correct version of MPI was run"?

cheers
Alex

MangoMango March 18, 2014 09:49

..buffi is right. In my case the solution was simple: I had to increase the core count in my submit script for the HPC cluster. The number of subdomains (decomposeParDict) must of course be exactly equal to the number of processes given to mpirun, but the cluster I'm using assigns cores semi-automatically through its own thread manager. ;)
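As a sketch, for a SLURM-type cluster that correspondence looks roughly like this (the directives differ between schedulers, so treat them as placeholders):

#!/bin/bash
#SBATCH --ntasks=4                    # must equal numberOfSubdomains in system/decomposeParDict

source /opt/openfoam201/etc/bashrc    # OpenFOAM environment
decomposePar                          # split the case into 4 subdomains
mpirun -np 4 simpleFoam -parallel > log 2>&1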

