CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM (https://www.cfd-online.com/Forums/openfoam/)
-   -   OpenFoam 2.0.1 interFoam (https://www.cfd-online.com/Forums/openfoam/92987-openfoam-2-0-1-interfoam.html)

wnowak1 September 30, 2011 16:27

OpenFoam 2.0.1 interFoam
 
I managed to compile OpenFOAM 2.0.1 on Linux (RHEL 4.8), and after running decomposePar on the damBreak tutorial case, I'm trying to run interFoam -parallel with mpirun as described in the documentation.

I'm getting the following error:

--> FOAM FATAL ERROR:
bool IPstream::init(int& argc, char**& argv) : attempt to run parallel on 1 processor

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 80.

FOAM aborting


decomposePar generated four processor directories (processor0 through processor3).

We use Intel MPI on our HPC cluster, so mpirun is the Intel MPI launcher.

Any help is greatly appreciated.
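For reference, the documented workflow being attempted here is serial pre-processing followed by a parallel solver launch. A minimal sketch for the damBreak tutorial, assuming the mpirun on the PATH is the one OpenFOAM was built against:
Code:

# serial pre-processing for the damBreak tutorial case
blockMesh
setFields

# split the case into subdomains; decomposePar is a serial utility
# and reads the subdomain count from system/decomposeParDict
decomposePar

# launch the parallel run; -np must match numberOfSubdomains
mpirun -np 4 interFoam -parallel > log.interFoam 2>&1 &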

Bernhard September 30, 2011 16:51

Can you post the exact command you used for mpirun? How did you define the distribution among the CPUs?

wnowak1 September 30, 2011 17:07

We use qsub to submit jobs, which is the standard way on our cluster. It allocates free nodes based on availability and resources required.

However, I did try running the command without qsub:

mpirun mpd.hosts -np 4 interFoam -parallel > test.log &

The line "nodes=1:ppn=8" indicates one compute node using eight processors.

I'm not quite sure how to define the distribution among CPUs. I ran decomposePar and it generated four processor directories.

#!/bin/bash

## PBS job submission settings:

##PBS -N CS5
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1:00:00
#PBS -W x=NACCESSPOLICY:SINGLEJOB
#PBS -m ae
#PBS -M email
#PBS -j oe
#PBS -e exec.err
#PBS -o exec.log

mpirun interFoam -parallel

wyldckat October 1, 2011 17:06

Greetings to both!

@wnowak1: Might I suggest searching in Google:
Code:

site:cfd-online.com/Forums qsub openfoam
I can only suggest this because I'm not familiar with qsub :(

Best regards,
Bruno

Bernhard October 3, 2011 02:02

In my qsub script to run in parallel, I use

-pe mpi_shm 4
Here 4 is the number of CPUs, and mpi_shm forces shared-memory usage, basically restricting the job to a single node.
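(For context, a request like that would sit in a Grid Engine submission script roughly as sketched below; "mpi_shm" is a site-specific parallel environment name and is only an assumption here.)
Code:

#!/bin/bash
#$ -N damBreak
#$ -pe mpi_shm 4     # PE name is site-specific; 4 = number of slots requested
#$ -cwd
#$ -j y

# launch one rank per requested slot
mpirun -np 4 interFoam -parallel > log.interFoam 2>&1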

Do you use the system mpirun or the ThirdParty mpirun? In the latter case, as far as I remember, you have to compile it with Grid Engine support. Good luck!

akidess October 3, 2011 02:37

Did mpirun without the queuing system work or not? It's not clear from your post. You tell OpenFOAM how many CPUs to use in system/decomposeParDict (which is clearly set to 4 in your case; a sketch of that file follows below). Then you ask for 8 cores (ppn=8), which is a waste, but not the issue at hand right now. I believe what you are missing in the PBS script is the "-np" option you used when you tried without PBS.

- Anton

Quote:

Originally Posted by wnowak1 (Post 326305)
We use qsub to submit jobs, which is the standard way on our cluster. It allocates free nodes based on availability and resources required.

However, I did try running the command without qsub:

mpirun mpd.hosts -np 4 interFoam -parallel > test.log &

The line "nodes=1:ppn=8" indicates one compute node using eight processors.

I'm not quite sure how to define the distribution among CPUs. I ran decomposePar and it generated four processor directories.

#!/bin/bash

## PBS job submission settings:

##PBS -N CS5
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1:00:00
#PBS -W x=NACCESSPOLICY:SINGLEJOB
#PBS -m ae
#PBS -M email
#PBS -j oe
#PBS -e exec.err
#PBS -o exec.log

mpirun interFoam -parallel

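For reference, the subdomain count mentioned above lives in system/decomposeParDict. A minimal excerpt along the lines of the damBreak tutorial (the FoamFile header is omitted, and the simpleCoeffs split is just an example; it must multiply out to numberOfSubdomains):
Code:

// system/decomposeParDict (excerpt)
numberOfSubdomains 4;          // must match the -np passed to mpirun

method          simple;

simpleCoeffs
{
    n               (2 2 1);   // 2 x 2 x 1 = 4 subdomains
    delta           0.001;
}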

akidess October 3, 2011 02:39

Quote:

Originally Posted by Bernhard (Post 326451)
Do you use the system mpirun or the ThirdParty mpirun? In the latter case, as far as I remember, you have to compile it with Grid Engine support. Good luck!

The original poster is using PBS/Torque, which is unrelated to Grid Engine (SGE).
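One thing worth checking at this point is whether the mpirun being invoked belongs to the same MPI library this OpenFOAM build was compiled against; a mismatch (for example, launching an Open MPI-linked build with Intel MPI's mpirun) typically leaves every rank believing it is the only process, which is exactly the "attempt to run parallel on 1 processor" error. A quick sketch, assuming a standard OpenFOAM 2.0.1 environment where WM_MPLIB, FOAM_MPI and FOAM_LIBBIN are set by etc/bashrc:
Code:

# which launcher is actually on the PATH?
which mpirun
mpirun --version        # some launchers use -V instead

# which MPI was OpenFOAM configured with at build time?
echo $WM_MPLIB          # e.g. OPENMPI, SYSTEMOPENMPI, ...
echo $FOAM_MPI

# the MPI-specific libraries (e.g. libPstream.so) should sit under $FOAM_MPI
ls $FOAM_LIBBIN/$FOAM_MPI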

kwardle October 5, 2011 10:47

Hi,
Depending on which MPI implementation you are using, you need to add a few things to your mpirun call. First, the qsub flag "-l nodes=1:ppn=8" tells qsub how to schedule the job, but mpirun does not get this information unless you pass it along. For example, here is a qsub script I use:
Code:

#!/bin/sh

#PBS -l nodes=25:ppn=8
#PBS -l walltime=72:00:00
#PBS -j oe
#PBS -N jobname
##PBS -W depend=afterany:699146

cd ${PBS_O_WORKDIR}

NN=`cat ${PBS_NODEFILE} | wc -l`
echo $NN
cat ${PBS_NODEFILE} > nodes

mpirun -machinefile ${PBS_NODEFILE} -np $NN interFoam -parallel > jobname-$NN.out

exit 0

This is perhaps a bit more complicated than you need. I hate having to remember to change the number of processors in both the qsub flag and the mpirun command, so I set it to the variable NN and use that instead. Also, you will need the -machinefile flag to tell mpirun which nodes to use.
Hope this helps!
-Kent
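Pulling the pieces together for the original one-node, eight-core request, here is a sketch of a PBS script in the spirit of Kent's. The OpenFOAM install path is a placeholder; sourcing etc/bashrc inside the job is worth doing because batch shells do not always inherit the interactive environment, and the subdomain count in decomposeParDict would need to be 8 to match ppn=8 (or ppn lowered to 4):
Code:

#!/bin/bash
#PBS -N damBreak
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1:00:00
#PBS -j oe

# load the OpenFOAM 2.0.1 environment (install path is site-specific)
source /path/to/OpenFOAM-2.0.1/etc/bashrc

cd ${PBS_O_WORKDIR}

# one MPI rank per entry in the PBS node file (8 here)
NN=$(wc -l < ${PBS_NODEFILE})

mpirun -machinefile ${PBS_NODEFILE} -np ${NN} interFoam -parallel > log.interFoam 2>&1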

wnowak1 October 5, 2011 16:09

Quote:

Originally Posted by akidess (Post 326453)
Did mpirun without the queuing system work or not? It's not clear from your post. You tell OpenFOAM how many CPUs to use in system/decomposeParDict (which is clearly set to 4 in your case). Then you ask for 8 cores (ppn=8), which is a waste, but not the issue at hand right now. I believe what you are missing in the PBS script is the "-np" option you used when you tried without PBS.

- Anton

mpirun did not work without the queuing system either; I received the same error message: "attempt to run parallel on 1 processor".

wnowak1 October 5, 2011 16:12

Quote:

Originally Posted by kwardle (Post 326784)
Hi,
Depending on which MPI implementation you are using, you need to add a few things to your mpirun call. First, the qsub flag "-l nodes=1:ppn=8" tells qsub how to schedule the job, but mpirun does not get this information unless you pass it along. For example, here is a qsub script I use:
Code:

#!/bin/sh

#PBS -l nodes=25:ppn=8
#PBS -l walltime=72:00:00
#PBS -j oe
#PBS -N jobname
##PBS -W depend=afterany:699146

cd ${PBS_O_WORKDIR}

NN=`cat ${PBS_NODEFILE} | wc -l`
echo $NN
cat ${PBS_NODEFILE} > nodes

mpirun -machinefile ${PBS_NODEFILE} -np $NN interFoam -parallel > jobname-$NN.out

exit 0

This is perhaps a bit more complicated than you need. I hate having to remember to change the number of processors in both the qsub flag and the mpirun command, so I set it to the variable NN and use that instead. Also, you will need the -machinefile flag to tell mpirun which nodes to use.
Hope this helps!
-Kent

Thanks Kent. I've tried this script as well, but I still receive the same error message, "attempt to run parallel on 1 processor".

Perhaps this has something to do with it: when I run decomposePar, it creates the eight processor directories, but the header of its output reports "nProcs : 1":

$ decomposePar
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build : 2.0.1
Exec : decomposePar
Date : Oct 05 2011
Time : 15:11:59
Host : host1
PID : 15665
Case : 2.0.1/damBreak
nProcs : 1


Does that "nProcs : 1" have anything to do with this?

kwardle October 5, 2011 16:19

Wait, is that the complete output of decomposePar? You should see it break up the mesh, showing the number of cells and face patches for each processor, and the last lines should be the field transfers to each of processors 0 through 7.

Note that decomposePar itself is NOT a parallel application; run it in serial.

Scratch that, I see you did say it creates the 8 processor directories, so you must have run it correctly. So did you have the "-np 8" flag in your mpirun command?

wnowak1 October 5, 2011 16:50

Quote:

Originally Posted by kwardle (Post 326828)
Wait, is that the complete output of decomposePar? You should see it break up the mesh, showing the number of cells and face patches for each processor, and the last lines should be the field transfers to each of processors 0 through 7.

Note that decomposePar itself is NOT a parallel application; run it in serial.

Scratch that, I see you did say it creates the 8 processor directories, so you must have run it correctly. So did you have the "-np 8" flag in your mpirun command?

Yes, I truncated the output from decomposePar. I did use -np 8 in my mpirun command.

