OpenFoam 2.0.1 interFoam

September 30, 2011, 16:27   #1
wnowak1 (New Member)
Join Date: Sep 2011
Posts: 5
I managed to compile OpenFOAM 2.0.1 on Linux (RHEL 4.8). After running decomposePar on the damBreak case, I'm trying to run interFoam -parallel with mpirun, as described in the documentation.

I'm getting the following error:

--> FOAM FATAL ERROR:
bool IPstream::init(int& argc, char**& argv) : attempt to run parallel on 1 processor

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 80.

FOAM aborting


decomposePar generated the four processor directories (processor0 through processor3).

We have Intel MPI on an HPC cluster, which provides mpirun.

Any help is greatly appreciated.
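For reference, this is the sequence being attempted, as a minimal sketch (it assumes the standard damBreak tutorial layout; the subdomain count of 4 and the log file name are only illustrative):
Code:
# sketch of the intended workflow; assumes the standard damBreak tutorial layout,
# the 4 subdomains and the log name are illustrative
cd damBreak
grep numberOfSubdomains system/decomposeParDict   # should report the subdomain count, e.g. 4
decomposePar                                      # serial pre-processing; writes processor0..processor3
mpirun -np 4 interFoam -parallel > log.interFoam 2>&1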

September 30, 2011, 16:51   #2
Bernhard (Senior Member)
Join Date: Sep 2009
Location: Delft
Posts: 790
Can you post the exact command you used for mpirun? How did you define the distribution among the CPUs?

September 30, 2011, 17:07   #3
wnowak1 (New Member)
Join Date: Sep 2011
Posts: 5
We use qsub to submit jobs, which is the standard way on our cluster. It allocates free nodes based on availability and resources required.

However, I did try running the command without using qsub:

mpirun mpd.hosts -np 4 interFoam -parallel > test.log &

The line "nodes=1pn=8" indicates one compute node using eight processors.

I'm not quite sure how to define the distribution among CPUs. I ran decomposePar and it generated four processor directories.

#!/bin/bash

## PBS job submission settings:

##PBS -N CS5
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1:00:00
#PBS -W x=NACCESSPOLICY:SINGLEJOB
#PBS -m ae
#PBS -M email
#PBS -j oe
#PBS -e exec.err
#PBS -o exec.log

mpirun interFoam -parallel
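For the interactive test above, the host file normally has to be passed to mpirun through a flag rather than as a bare argument. A sketch of what that might look like with Intel MPI (the exact flag, -f or -machinefile, depends on the Intel MPI version):
Code:
# sketch: pass the host file via a flag; -f vs. -machinefile depends on the Intel MPI version
mpirun -f mpd.hosts -np 4 interFoam -parallel > test.log 2>&1 &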

October 1, 2011, 17:06   #4
wyldckat (Bruno Santos, Retired Super Moderator)
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,974
Greetings to both!

@wnowak1: Might I suggest searching in Google:
Code:
site:cfd-online.com/Forums qsub openfoam
I'm only able to suggest this because I'm not familiar with qsub.

Best regards,
Bruno

October 3, 2011, 02:02   #5
Bernhard (Senior Member)
Join Date: Sep 2009
Location: Delft
Posts: 790
In my qsub script to run in parallel, I use

-pe mpi_shm 4
Here 4 is the number of CPUs, and mpi_shm forces shared-memory usage, which effectively restricts the job to a single node.
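A minimal Grid Engine submission script built around that option might look like this (a sketch; mpi_shm is a site-specific parallel environment name, and $NSLOTS is filled in by Grid Engine with the granted slot count):
Code:
#!/bin/bash
# sketch of a Grid Engine (SGE) job script; "mpi_shm" is a site-specific
# parallel environment name, $NSLOTS is set by SGE to the granted slot count
#$ -cwd
#$ -pe mpi_shm 4
mpirun -np $NSLOTS interFoam -parallel > log.interFoam 2>&1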

Do you use the system mpirun or the ThirdParty mpirun? In the latter case, as far as I remember, you have to compile it with Grid Engine support. Good luck!
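If the two might not match, a quick check is something like this (a sketch; the output will differ per system):
Code:
# sketch: check which MPI OpenFOAM was built against vs. which mpirun is on the PATH
echo $WM_MPLIB                        # e.g. OPENMPI or SYSTEMOPENMPI, set in the OpenFOAM etc/bashrc
which mpirun                          # the launcher that will actually be used
ldd $(which interFoam) | grep -i mpi  # the MPI library interFoam is linked against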

October 3, 2011, 02:37   #6
akidess (Anton Kidess, Senior Member)
Join Date: May 2009
Location: Germany
Posts: 1,377
Did mpirun without the queuing system work or not? It's not clear from your post. You tell OpenFOAM how many CPUs to use in system/decomposeParDict (which is clearly set to 4 in your case). Asking the scheduler for 8 cores is then a waste, but not the issue at hand right now. I believe what you are missing in the PBS script is the "-np" option you used when you tried without PBS.
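As a sketch, the missing piece is just the process count on the mpirun line of the PBS script (the 4 has to match numberOfSubdomains in system/decomposeParDict; the log name is illustrative):
Code:
# sketch: last line of the PBS script with the processor count passed explicitly;
# the 4 must match numberOfSubdomains in system/decomposeParDict
mpirun -np 4 interFoam -parallel > log.interFoam 2>&1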

- Anton

Quote:
Originally Posted by wnowak1
We use qsub to submit jobs, which is the standard way on our cluster. It allocates free nodes based on availability and resources required.

However, I did try running the command without using qsub:

mpirun mpd.hosts -np 4 interFoam -parallel > test.log &

The line "nodes=1pn=8" indicates one compute node using eight processors.

I'm not quite sure how to define the distribution among CPUs. I ran decomposePar and it generated four processor directories.

#!/bin/bash

## PBS job submission settings:

##PBS -N CS5
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1:00:00
#PBS -W x=NACCESSPOLICY:SINGLEJOB
#PBS -m ae
#PBS -M email
#PBS -j oe
#PBS -e exec.err
#PBS -o exec.log

mpirun interFoam -parallel

October 3, 2011, 02:39   #7
akidess (Anton Kidess, Senior Member)
Join Date: May 2009
Location: Germany
Posts: 1,377
Quote:
Originally Posted by Bernhard
Do you use the system mpirun or the ThirdParty mpirun? In the latter case, as far as I remember, you have to compile it with Grid Engine support. Good luck!
The original poster is using PBS/Torque, which is unrelated to Grid Engine (SGE).

October 5, 2011, 10:47   #8
kwardle (Kent Wardle, Senior Member)
Join Date: Mar 2009
Location: Illinois, USA
Posts: 219
Hi,
Depending on which MPI implementation you are using, you need to add a few things to your mpirun call. First, the qsub resource request "-l nodes=1:ppn=8" tells qsub how to schedule the job, but mpirun does not get this information unless you pass it along yourself. For example, here is a qsub script I use:
Code:
#!/bin/sh

#PBS -l nodes=25:ppn=8
#PBS -l walltime=72:00:00
#PBS -j oe
#PBS -N jobname
##PBS -W depend=afterany:699146

cd ${PBS_O_WORKDIR}

NN=`cat ${PBS_NODEFILE} | wc -l`
echo $NN
cat ${PBS_NODEFILE} > nodes

mpirun -machinefile ${PBS_NODEFILE} -np $NN interFoam -parallel > jobname-$NN.out

exit 0
This is perhaps a bit more complicated than you are asking for. I hate having to remember to change the number of processors in both the qsub resource request and the mpirun command, so I set it to the variable NN and use that instead. Also, you will need the -machinefile flag to tell mpirun which nodes to use.
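Reduced to a single-node, four-subdomain case like the damBreak setup discussed above, the same pattern would look roughly like this (a sketch; the walltime and log name are only illustrative):
Code:
#!/bin/sh
# sketch: the same pattern on one node with 4 cores, matching a decomposition
# with numberOfSubdomains 4; walltime and log name are illustrative
#PBS -l nodes=1:ppn=4
#PBS -l walltime=1:00:00
#PBS -j oe

cd ${PBS_O_WORKDIR}
NN=`cat ${PBS_NODEFILE} | wc -l`      # one line per allocated core, so NN=4 here
mpirun -machinefile ${PBS_NODEFILE} -np $NN interFoam -parallel > log.interFoam 2>&1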
Hope this helps!
-Kent

October 5, 2011, 16:09   #9
wnowak1 (New Member)
Join Date: Sep 2011
Posts: 5
Quote:
Originally Posted by akidess
Did mpirun without the queuing system work or not? It's not clear from your post. You tell OpenFOAM how many CPUs to use in system/decomposeParDict (which is clearly set to 4 in your case). Asking the scheduler for 8 cores is then a waste, but not the issue at hand right now. I believe what you are missing in the PBS script is the "-np" option you used when you tried without PBS.

- Anton
mpirun did not work without the queuing system either; I receive the same error message: "attempt to run parallel on 1 processor".

October 5, 2011, 16:12   #10
wnowak1 (New Member)
Join Date: Sep 2011
Posts: 5
Quote:
Originally Posted by kwardle
Hi,
Depending on which MPI implementation you are using, you need to add a few things to your mpirun call. First, the qsub resource request "-l nodes=1:ppn=8" tells qsub how to schedule the job, but mpirun does not get this information unless you pass it along yourself. For example, here is a qsub script I use:
Code:
#!/bin/sh

#PBS -l nodes=25:ppn=8
#PBS -l walltime=72:00:00
#PBS -j oe
#PBS -N jobname
##PBS -W depend=afterany:699146

cd ${PBS_O_WORKDIR}

NN=`cat ${PBS_NODEFILE} | wc -l`
echo $NN
cat ${PBS_NODEFILE} > nodes

mpirun -machinefile ${PBS_NODEFILE} -np $NN interFoam -parallel > jobname-$NN.out

exit 0
This is perhaps a bit more complicated than you are asking for--I hate having to remember to change the number of processors in the qsub flag and also in the mpirun command so that is why I set it to the variable NN and use that. Also, you will need the -machinefile flag to tell it which nodes to use.
Hope this helps!
-Kent
Thanks, Kent. I've tried this script as well, but I still receive the same error message: "attempt to run parallel on 1 processor".

Perhaps this has something to do with it: when I run decomposePar, it creates the eight processor directories, but the header of the decomposePar output reports nProcs : 1:

$ decomposePar
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.0.1                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build : 2.0.1
Exec : decomposePar
Date : Oct 05 2011
Time : 15:11:59
Host : host1
PID : 15665
Case : 2.0.1/damBreak
nProcs : 1


Does that nProcs value have anything to do with this?

October 5, 2011, 16:19   #11
kwardle (Kent Wardle, Senior Member)
Join Date: Mar 2009
Location: Illinois, USA
Posts: 219
Wait, is that the complete output of decomposePar? You should see it break up the mesh, reporting the number of cells and face patches for each processor, and the last lines should be the field transfers to each of processors 0 through 7.

Note that decomposePar itself is NOT a parallel application; run it in serial.

Scratch that; I see you did say it creates the 8 processor directories, so you must have run it correctly. So you did have the "-np 8" flag in your mpirun command?
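One way to keep that flag in sync with the decomposition automatically (a sketch; the log name is illustrative):
Code:
# sketch: derive -np from the number of processor directories decomposePar wrote,
# so the value passed to mpirun cannot drift out of sync with the decomposition
NPROC=`ls -d processor* | wc -l`
mpirun -np $NPROC interFoam -parallel > log.interFoam 2>&1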

October 5, 2011, 16:50   #12
wnowak1 (New Member)
Join Date: Sep 2011
Posts: 5
Quote:
Originally Posted by kwardle
Wait, is that the complete output of decomposePar? You should see it break up the mesh, reporting the number of cells and face patches for each processor, and the last lines should be the field transfers to each of processors 0 through 7.

Note that decomposePar itself is NOT a parallel application; run it in serial.

Scratch that; I see you did say it creates the 8 processor directories, so you must have run it correctly. So you did have the "-np 8" flag in your mpirun command?
Yes, I truncated the output from decomposePar. I did use -np 8 in my mpirun command.