CFD Online Discussion Forums

CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Installation (http://www.cfd-online.com/Forums/openfoam-installation/)
-   -   Problems running in parallel - Pstream not available (http://www.cfd-online.com/Forums/openfoam-installation/124166-problems-running-parallel-pstream-not-available.html)

dark lancer September 28, 2013 09:27

Problems running in parallel - Pstream not available
 
Hi,
I can run my case on 1 core without any error, but when I run it with mpirun I get this error:
Quote:

./Allrun
Quote:

--> FOAM FATAL ERROR:
Trying to use the dummy Pstream library.
This dummy library cannot be used in parallel mode

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 37.

FOAM exiting



--> FOAM FATAL ERROR:
Trying to use the dummy Pstream library.
This dummy library cannot be used in parallel mode

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 37.

FOAM exiting

-----------------------------------------------------------------------------
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was on node n0).

mpirun can *only* be used with MPI programs (i.e., programs that
invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
to run non-MPI programs over the lambooted nodes.
-----------------------------------------------------------------------------
I tried this, but it did not work:
Quote:

cd $FOAM_LIBBIN
mv dummy/ dummy__
and
cd $FOAM_SRC
cd Pstream
./Allwmake
Quote:

hadi@172-15-2-53:~/OpenFOAM/OpenFOAM-2.1.1/src/Pstream> ./Allwmake
+ wmake libso dummy
'/home/hadi/OpenFOAM/OpenFOAM-2.1.1/platforms/linux64GccDPOpt/lib/dummy/libPstream.so' is up to date.
+ case "$WM_MPLIB" in
+ set +x

Note: ignore spurious warnings about missing mpicxx.h headers

wmake libso mpi
g++: error: libtool:: No such file or directory
g++: error: link:: No such file or directory
make: *** [/home/hadi/OpenFOAM/OpenFOAM-2.1.1/platforms/linux64GccDPOpt/lib/openmpi-system/libPstream.so] Error 1

I also tried this, but it did not work:
Quote:

Code:
echo "$WM_PROJECT_DIR/etc/bashrc"
Find the line that says:
Code:
export WM_MPLIB=OPENMPI
Change it to:
Code:
export WM_MPLIB=SYSTEMOPENMPI
Start a new terminal and try again running the application in parallel.
If it still has problems, go to OpenFOAM's folder and run Allwmake once again:
Code:
cd $WM_PROJECT_DIR
./Allwmake > make.log 2>&1

wyldckat September 28, 2013 10:40

Hi Hadi,

It looks like you have an incomplete system installation.
What Linux Distribution are you using and which installation instructions did you follow?

Best regards,
Bruno

dark lancer September 29, 2013 07:55

OpenFOAM 2.1.1
openSUSE 12.3
I used these installation instructions


wyldckat September 29, 2013 08:07

Hi Hadi,

What do the following commands give you:
Code:

echo $WM_MPLIB
echo $FOAM_MPI_LIBBIN
mpirun --version

Edit the file "$HOME/.bashrc" and tell me the last lines you have that relate to OpenFOAM.

Best regards,
Bruno

dark lancer September 30, 2013 08:15

I get this result:
Code:

hadi@172-15-2-53:~> echo $WM_MPLIB
SYSTEMOPENMPI
hadi@172-15-2-53:~> echo $FOAM_MPI_LIBBIN

hadi@172-15-2-53:~> mpirun --version
-----------------------------------------------------------------------------
Synopsis:      mpirun [options] <app>
                mpirun [options] <where> <program> [<prog args>]

Description:    Start an MPI application in LAM/MPI.

Notes:
                [options]      Zero or more of the options listed below
                <app>          LAM/MPI appschema
                <where>        List of LAM nodes and/or CPUs (examples
                                below)
                <program>      Must be a LAM/MPI program that either
                                invokes MPI_INIT or has exactly one of
                                its children invoke MPI_INIT
                <prog args>    Optional list of command line arguments
                                to <program>

Options:
                -c <num>        Run <num> copies of <program> (same as -np)
                -client <rank>  <host>:<port>
                                Run IMPI job; connect to the IMPI server <host>
                                at port <port> as IMPI client number <rank>
                -D              Change current working directory of new
                                processes to the directory where the
                                executable resides
                -f              Do not open stdio descriptors
                -ger            Turn on GER mode
                -h              Print this help message
                -l              Force line-buffered output
                -lamd          Use LAM daemon (LAMD) mode (opposite of -c2c)
                -nger          Turn off GER mode
                -np <num>      Run <num> copies of <program> (same as -c)
                -nx            Don't export LAM_MPI_* environment variables
                -O              Universe is homogeneous
                -pty / -npty    Use/don't use pseudo terminals when stdout is
                                a tty
                -s <nodeid>    Load <program> from node <nodeid>
                -sigs / -nsigs  Catch/don't catch signals in MPI application
                -ssi <n> <arg>  Set environment variable LAM_MPI_SSI_<n>=<arg>
                -toff          Enable tracing with generation initially off
                -ton, -t        Enable tracing with generation initially on
                -tv        Launch processes under TotalView Debugger
        -v              Be verbose
                -w / -nw        Wait/don't wait for application to complete
                -wd <dir>      Change current working directory of new
                                processes to <dir>
                -x <envlist>    Export environment vars in <envlist>

Nodes:          n<list>, e.g., n0-3,5
CPUS:          c<list>, e.g., c0-3,5
Extras:        h (local node), o (origin node), N (all nodes), C (all CPUs)

Examples:      mpirun n0-7 prog1
                Executes "prog1" on nodes 0 through 7.

                mpirun -lamd -x FOO=bar,DISPLAY N prog2
                Executes "prog2" on all nodes using the LAMD RPI. 
                In the environment of each process, set FOO to the value
                "bar", and set DISPLAY to the current value.

                mpirun n0 N prog3
                Run "prog3" on node 0, *and* all nodes.  This executes *2*
                copies on n0.

                mpirun C prog4 arg1 arg2
                Run "prog4" on each available CPU with command line
                arguments of "arg1" and "arg2".  If each node has a
                CPU count of 1, the "C" is equivalent to "N".  If at
                least one node has a CPU count greater than 1, LAM
                will run neighboring ranks of MPI_COMM_WORLD on that
                node.  For example, if node 0 has a CPU count of 4 and
                node 1 has a CPU count of 2, "prog4" will have
                MPI_COMM_WORLD ranks 0 through 3 on n0, and ranks 4
                and 5 on n1.

                mpirun c0 C prog5
                Similar to the "prog3" example above, this runs "prog5"
                on CPU 0 *and* on each available CPU.  This executes
                *2* copies on the node where CPU 0 is (i.e., n0).
                This is probably not a useful use of the "C" notation;
                it is only shown here for an example.

Defaults:      -c2c -w -pty -nger -nsigs
-----------------------------------------------------------------------------

.bashrc
Code:

# Sample .bashrc for SuSE Linux
# Copyright (c) SuSE GmbH Nuernberg

# There are 3 different types of shells in bash: the login shell, normal shell
# and interactive shell. Login shells read ~/.profile and interactive shells
# read ~/.bashrc; in our setup, /etc/profile sources ~/.bashrc - thus all
# settings made here will also take effect in a login shell.
#
# NOTE: It is recommended to make language settings in ~/.profile rather than
# here, since multilingual X sessions would not work properly if LANG is over-
# ridden in every subshell.

# Some applications read the EDITOR variable to determine your favourite text
# editor. So uncomment the line below and enter the editor of your choice :-)
#export EDITOR=/usr/bin/vim
#export EDITOR=/usr/bin/mcedit

# For some news readers it makes sense to specify the NEWSSERVER variable here
#export NEWSSERVER=your.news.server

# If you want to use a Palm device with Linux, uncomment the two lines below.

# For some (older) Palm Pilots, you might need to set a lower baud rate
# e.g. 57600 or 38400; lowest is 9600 (very slow!)
#
#export PILOTPORT=/dev/pilot
#export PILOTRATE=115200

test -s ~/.alias && . ~/.alias || true
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI

export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc

export LD_LIBRARY_PATH=/usr/lib/mpi/gcc/openmpi/lib

source /opt/intel/bin/compilervars.sh intel64

source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI


wyldckat October 2, 2013 08:06

Quote:

Originally Posted by dark lancer (Post 454261)
.bashrc
Code:

/opt/mpich2-install/bin

Quick answer: The problem is that your system's MPI is actually MPICH2. For that, follow the instructions given here: http://www.cfd-online.com/Forums/ope...tml#post383090 post #9

dark lancer October 6, 2013 07:59

I don't know how to uninstall MPICH2, and I checked this:
Code:

hadi@:~> ls -l 'which mpich'
ls: cannot access which mpich: No such file or directory
hadi@:~> ls -l `which mpich`
which: no mpich in (/home/hadi/OpenFOAM/ThirdParty-2.1.1/platforms/linux64Gcc/paraview-3.12.0/bin:/home/hadi/OpenFOAM/hadi-2.1.1/platforms/linux64GccDPOpt/bin:/home/hadi/OpenFOAM/site/2.1.1/platforms/linux64GccDPOpt/bin:/home/hadi/OpenFOAM/OpenFOAM-2.1.1/platforms/linux64GccDPOpt/bin:/home/hadi/OpenFOAM/OpenFOAM-2.1.1/bin:/home/hadi/OpenFOAM/OpenFOAM-2.1.1/wmake:/opt/intel/composer_xe_2011_sp1.11.339/bin/intel64:/home/hadi/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/X11R6/bin:/usr/games:/opt/kde3/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc:/opt/intel/composer_xe_2011_sp1.11.339/mpirt/bin/intel64)
total 83448
drwxr-xr-x 2 hadi users    4096 Sep 14 14:22 bin
drwxr-xr-x 2 hadi users    4096 Aug 17 07:36 Desktop
drwxr-xr-x 2 hadi users    4096 Aug 17 07:36 Documents
drwxr-xr-x 2 hadi users    4096 Sep 20 12:12 Downloads
drwxr-xr-x 3 hadi users    4096 Aug 31 12:14 mpich-install
drwxr-xr-x 2 hadi users    4096 Aug 17 07:36 Music
drwxr-xr-x 4 hadi users    4096 Sep  4 07:29 OpenFOAM
-rw-r--r-- 1 hadi root  30709473 Aug 17 07:36 OpenFOAM-2.1.1.tgz
drwxr-xr-x 2 hadi users    4096 Aug 17 07:36 Pictures
drwxr-xr-x 2 hadi users    4096 Aug 17 07:36 Public
drwxr-xr-x 2 hadi users    4096 Aug 17 07:35 public_html
drwxr-xr-x 2 hadi users    4096 Aug 17 07:36 Templates
-rw-r--r-- 1 hadi users      256 Aug 30 08:38 test1.f90
-rw-r--r-- 1 hadi users      97 Aug 30 08:41 test.cpp
-rw-r--r-- 1 hadi users      36 Aug 25 14:05 test.f90
-rw-r--r-- 1 hadi root  54677441 Aug 17 07:36 ThirdParty-2.1.1.tgz
drwxr-xr-x 2 hadi users    4096 Aug 17 07:36 Videos

And for mpicc I get this:

Code:

hadi@:~> ls -l `which mpicc`
-rwxr-xr-x 1 root root 31376 Jan 27  2013 /usr/bin/mpicc

I also removed the MPICH path from .bashrc, but it still does not work:
Code:

#export PILOTPORT=/dev/pilot
#export PILOTRATE=115200

test -s ~/.alias && . ~/.alias || true
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI

#export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc

export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc


export LD_LIBRARY_PATH=/usr/lib/mpi/gcc/openmpi/lib

source /opt/intel/bin/compilervars.sh intel64

source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI


wyldckat October 6, 2013 11:21

Hi Hadi,

Now I'm the one confused on what exactly you want to do here :(

OK, first let's do some clean up:
  1. Change this:
    Quote:

    Originally Posted by dark lancer (Post 455286)
    Code:

    #export PILOTPORT=/dev/pilot
    #export PILOTRATE=115200

    test -s ~/.alias && . ~/.alias || true
    source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
    source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI

    #export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc

    export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc


    export LD_LIBRARY_PATH=/usr/lib/mpi/gcc/openmpi/lib

    source /opt/intel/bin/compilervars.sh intel64

    source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
    source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI


  2. To this:
    Code:

    #export PILOTPORT=/dev/pilot
    #export PILOTRATE=115200

    test -s ~/.alias && . ~/.alias || true

    #export  PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/

    export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/

    export LD_LIBRARY_PATH=/usr/lib/mpi/gcc/openmpi/lib

    source /opt/intel/bin/compilervars.sh intel64

    source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI

    Notice that I removed the extra source command lines you had for OpenFOAM. One at the end is enough.
    In addition, I removed the reference to "/usr/bin/mpicc", because that's an executable file, not a folder.
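Sourcing the same environment file repeatedly is also why trimming the duplicates matters: each pass appends the same directories to PATH again. A toy illustration (the function and directory below are made-up stand-ins, not the real OpenFOAM script):

```shell
# Toy stand-in for an environment file that appends a directory to PATH,
# the way OpenFOAM's etc/bashrc does (simplified; not the real script).
toy_path=/usr/bin
toy_bashrc() { toy_path="$toy_path:/opt/toy-mpi/bin"; }

toy_bashrc
toy_bashrc   # sourcing a second time appends the same directory again

echo "$toy_path"   # /usr/bin:/opt/toy-mpi/bin:/opt/toy-mpi/bin
```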


Now, let's try and get something clear here: what MPI toolboxes do you have installed in your system and which one do you want to use?


Best regards,
Bruno

dark lancer October 7, 2013 08:32

I changed it to this:
Quote:

# If you want to use a Palm device with Linux, uncomment the two lines below.
# For some (older) Palm Pilots, you might need to set a lower baud rate
# e.g. 57600 or 38400; lowest is 9600 (very slow!)
#
#export PILOTPORT=/dev/pilot
#export PILOTRATE=115200

test -s ~/.alias && . ~/.alias || true

#export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc

export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/

export LD_LIBRARY_PATH=/usr/lib/mpi/gcc/openmpi/lib

source /opt/intel/bin/compilervars.sh intel64

source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
A long time ago I installed MPICH2 on SUSE 12.2 and later updated to 12.3. Now I have installed OpenFOAM 2.1.1 on SUSE 12.3, and I run my case with this command:
Quote:

./Allrun
and I get the error explained in the previous posts.

I use mpirun, but I don't understand what you mean by ''what MPI toolboxes do you have installed in your system and which one do you want to use?''

wyldckat October 7, 2013 16:50

Hi Hadi,

Quote:

Originally Posted by dark lancer (Post 455487)
I use mpirun, but I don't understand what you mean by ''what MPI toolboxes do you have installed in your system and which one do you want to use?''

OK, I'll try to explain this simply. From the information you are providing, it looks like you have 3 MPI toolboxes in your machine:
  1. MPICH2
  2. Open-MPI
  3. Intel MPI
The executable "mpirun" exists in all 3 toolboxes.
My question is which MPI toolbox you want to use, because you cannot have all 3 versions working at the same time in the same shell environment.
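A quick way to see which of the three a bare "mpirun" resolves to is to remember that the shell takes the first match on PATH. A small sketch with throwaway stub installs (the directories and stub scripts are made up for the demo, not real install paths):

```shell
# Fake two MPI installs in a scratch directory and see which "mpirun"
# the shell picks. The paths and stubs here are made up for the demo.
tmp=$(mktemp -d)
mkdir -p "$tmp/mpich2/bin" "$tmp/openmpi/bin"
printf '#!/bin/sh\necho MPICH2\n'   > "$tmp/mpich2/bin/mpirun"
printf '#!/bin/sh\necho Open-MPI\n' > "$tmp/openmpi/bin/mpirun"
chmod +x "$tmp/mpich2/bin/mpirun" "$tmp/openmpi/bin/mpirun"

# The directory listed FIRST in PATH provides the mpirun that actually runs:
winner=$(PATH="$tmp/mpich2/bin:$tmp/openmpi/bin"; export PATH; mpirun)
echo "$winner"   # MPICH2
rm -r "$tmp"
```

On a real machine, `type -a mpirun` (in bash) lists every candidate in PATH order and makes the same point without the stubs.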


I suggest that you study the following blog post, as well as the links it provides: Advanced tips for working with the OpenFOAM shell environment

Best regards,
Bruno

dark lancer October 8, 2013 03:39

Hi Bruno, and thanks for everything.

I want to use Open-MPI for OpenFOAM, because I think Open-MPI is the one set up to work with OpenFOAM; Intel MPI, I think (but I'm not sure), is used by the Intel Fortran that exists on the system.
We have a Fortran code that we run in parallel with MPICH2.
So now I would have to uninstall MPICH2 so that Open-MPI works, and I don't know how to uninstall MPICH2.

One question: has this been tested?

Quote:

Searching a little in the internet, I've found out that "--showme:compile" and "--showme:link" are Open-MPI options only. To change that to work with MPICH2, I've had to edit "/opt/openfoam211/etc/config/settings.sh" file and exchange "mpicc --showme:compile" and "mpicc --showme:link" for "mpicc -compile-info" and "mpicc -link-info".
If I can set up OpenFOAM with MPICH2, that would be very good, because then we could run the Fortran code and OpenFOAM together.

wyldckat October 13, 2013 05:09

Hi Hadi,

I haven't tested the following steps, but they should work:
  1. Edit the "~/.bashrc" file and change this line:
    Code:

    source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=SYSTEMOPENMPI
    to this:
    Code:

    source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=MPICH
  2. Edit the file "etc/config/settings.sh" and search for this block of code:
    Code:

    MPICH)
        export FOAM_MPI=mpich2-1.1.1p1
        export MPI_HOME=$WM_THIRD_PARTY_DIR/$FOAM_MPI
        export MPI_ARCH_PATH=$WM_THIRD_PARTY_DIR/platforms/$WM_ARCH$WM_COMPILER/$FOAM_MPI

        _foamAddPath    $MPI_ARCH_PATH/bin

        # 64-bit on OpenSuSE 12.1 uses lib64 others use lib
        _foamAddLib    $MPI_ARCH_PATH/lib$WM_COMPILER_LIB_ARCH
        _foamAddLib    $MPI_ARCH_PATH/lib

        _foamAddMan    $MPI_ARCH_PATH/share/man
        ;;

  3. Change that block to this:
    Code:

    MPICH)
        # Use the system installed MPICH2, get library directory via mpicc
        export FOAM_MPI=mpich2

        libDir=`mpicc -link-info | sed -e 's/.*-L\([^ ]*\).*/\1/'`

        # Bit of a hack: strip off 'lib' and hope this is the path to MPICH2
        # include files and libraries.
        export MPI_ARCH_PATH="${libDir%/*}"

        _foamAddLib    $libDir
        unset libDir
        ;;

  4. Start a new terminal window.
  5. Build OpenFOAM:
    Code:

    foam
    ./Allwmake > make.log 2>&1

If everything goes well, at the end of the build, it should have everything working as intended.
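Since the steps above are untested, the sed extraction in step 3 can at least be sanity-checked in isolation. The link line below is made up, but has the shape `mpicc -link-info` typically prints:

```shell
# A made-up link line of the shape MPICH2's "mpicc -link-info" prints:
linkInfo='gcc -L/opt/mpich2-install/lib -lmpich -lopa -lpthread -lrt'

# The same extraction used in the MPICH) block: grab the first -L directory.
libDir=$(echo "$linkInfo" | sed -e 's/.*-L\([^ ]*\).*/\1/')
echo "$libDir"        # /opt/mpich2-install/lib

# ${libDir%/*} then strips "/lib" to get the install root for MPI_ARCH_PATH:
echo "${libDir%/*}"   # /opt/mpich2-install
```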

Best regards,
Bruno

dark lancer October 13, 2013 07:40

This is my .bashrc. Don't I need to change the red line, or add a line for MPICH?
Quote:

#export PILOTPORT=/dev/pilot
#export PILOTRATE=115200

test -s ~/.alias && . ~/.alias || true

#export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/opt/mpich2-install/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/:/usr/bin/mpicc

export PATH=${PATH}:${HOME}/bin:/opt/intel/bin:/home/moosaie/programs/ParaView-3.14.1-Linux-64bit/bin/

export LD_LIBRARY_PATH=/usr/lib/mpi/gcc/openmpi/lib

source /opt/intel/bin/compilervars.sh intel64

source /home/hadi/OpenFOAM/OpenFOAM-2.1.1/etc/bashrc WM_NCOMPPROCS=7 WM_MPLIB=MPICH
and this is my "etc/config/settings.sh":
Quote:

OPENMPI)
export FOAM_MPI=openmpi-1.5.3
# optional configuration tweaks:
_foamSource `$WM_PROJECT_DIR/bin/foamEtcFile config/openmpi.sh`

export MPI_ARCH_PATH=$WM_THIRD_PARTY_DIR/platforms/$WM_ARCH$WM_COMPILER/$FOAM_MPI

# Tell OpenMPI where to find its install directory
export OPAL_PREFIX=$MPI_ARCH_PATH

_foamAddPath $MPI_ARCH_PATH/bin

# 64-bit on OpenSuSE 12.1 uses lib64 others use lib
_foamAddLib $MPI_ARCH_PATH/lib$WM_COMPILER_LIB_ARCH
_foamAddLib $MPI_ARCH_PATH/lib

_foamAddMan $MPI_ARCH_PATH/man
;;

MPICH)
# Use the system installed MPICH2, get library directory via mpicc
export FOAM_MPI=mpich2

libDir=`mpicc -link-info | sed -e 's/.*-L\([^ ]*\).*/\1/'`

# Bit of a hack: strip off 'lib' and hope this is the path to MPICH2
# include files and libraries.
export MPI_ARCH_PATH="${libDir%/*}"

_foamAddLib $libDir
unset libDir
;;

MPICH-GM)
export FOAM_MPI=mpich-gm
export MPI_ARCH_PATH=/opt/mpi
export MPICH_PATH=$MPI_ARCH_PATH
export GM_LIB_PATH=/opt/gm/lib64

_foamAddPath $MPI_ARCH_PATH/bin

# 64-bit on OpenSuSE 12.1 uses lib64 others use lib
_foamAddLib $MPI_ARCH_PATH/lib$WM_COMPILER_LIB_ARCH
_foamAddLib $MPI_ARCH_PATH/lib

_foamAddLib $GM_LIB_PATH
;;

HPMPI)
export FOAM_MPI=hpmpi
export MPI_HOME=/opt/hpmpi
export MPI_ARCH_PATH=$MPI_HOME

_foamAddPath $MPI_ARCH_PATH/bin

case `uname -m` in
i686)
_foamAddLib $MPI_ARCH_PATH/lib/linux_ia32
;;

x86_64)
_foamAddLib $MPI_ARCH_PATH/lib/linux_amd64
;;
ia64)
_foamAddLib $MPI_ARCH_PATH/lib/linux_ia64
;;
*)
echo Unknown processor type `uname -m` for Linux
;;
After that, in a new terminal, I went to this directory:
Quote:

cd OpenFOAM/OpenFOAM2.1.1
then

Quote:

./Allwmake > make.log 2>&1
When it finished, I ran:
Quote:

./Allrun
but I got:
Quote:

It seems that there is no lamd running on the host 172-15-2-53.

This indicates that the LAM/MPI runtime environment is not operating.
The LAM/MPI runtime environment is necessary for the "mpirun" command.

Please run the "lamboot" command to start the LAM/MPI runtime
environment. See the LAM/MPI documentation for how to invoke
"lamboot" across multiple machines.
-----------------------------------------------------------------------------
Quote:

hadi@172-15-2-53:~/OpenFOAM/OpenFOAM-2.1.1/tutorials/discreteMethods/dsmcFoam/ulthemstest1> lamboot -v

LAM 7.1.4/MPI 2 C++/ROMIO - Indiana University

n-1<24035> ssi:boot:base:linear: booting n0 (localhost)
n-1<24035> ssi:boot:base:linear: finished
Then I used this command:
Quote:

mpirun -np 4 dsmcFoam -parallel >& log &
This is my log file:
Quote:

--> FOAM FATAL ERROR:
Trying to use the dummy Pstream library.
This dummy library cannot be used in parallel mode

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 37.

FOAM exiting



--> FOAM FATAL ERROR:
Trying to use the dummy Pstream library.
This dummy library cannot be used in parallel mode

From function UPstream::init(int& argc, char**& argv)
in file UPstream.C at line 37.

FOAM exiting

-----------------------------------------------------------------------------
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was on node n0).

mpirun can *only* be used with MPI programs (i.e., programs that
invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
to run non-MPI programs over the lambooted nodes.
-----------------------------------------------------------------------------

wyldckat October 13, 2013 07:48

Hi Hadi,

Well, at least it looks like you're getting closer to solving this. The problem here is that the MPICH2 service - which is responsible for coordinating the communications between the parallel processes - is not running.

How do you launch the Fortran programs that were built with MPICH2 in parallel?

Best regards,
Bruno

PS: Please follow the instructions given in the second link in my signature, namely on how to post code and application output here on the forum: the correct way is to use [CODE], not [QUOTE].

wyldckat October 13, 2013 14:13

Hi Hadi,

OK, I noticed that you updated your post.

Try this:
Code:

echo export WM_MPLIB=MPICH > $WM_PROJECT_DIR/etc/prefs.sh
Now try running mpirun again.
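For reference: in OpenFOAM 2.x, `etc/bashrc` sources `etc/prefs.sh` (if present) before the MPI settings are resolved, so this one-line file overrides `WM_MPLIB` without editing `etc/bashrc` itself. A dry run against a scratch directory instead of the real `$WM_PROJECT_DIR`:

```shell
# Use a scratch directory as a stand-in for $WM_PROJECT_DIR,
# so the real installation is not touched.
WM_PROJECT_DIR=$(mktemp -d)
mkdir -p "$WM_PROJECT_DIR/etc"

# ">" creates etc/prefs.sh (or overwrites a previous one) with one line:
echo export WM_MPLIB=MPICH > "$WM_PROJECT_DIR/etc/prefs.sh"
cat "$WM_PROJECT_DIR/etc/prefs.sh"   # export WM_MPLIB=MPICH

# Sourcing it sets the variable that OpenFOAM's etc/bashrc then honours:
. "$WM_PROJECT_DIR/etc/prefs.sh"
echo "$WM_MPLIB"   # MPICH
```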

Best regards,
Bruno

