CFD Online Discussion Forums


sjlouie91 June 5, 2018 10:03

Problems with installing openfoam 5.0 on HPC Cluster
 
Hi Foamers,


I am new to OpenFOAM.


I want to install OpenFOAM 5.0 on an HPC cluster. I know that if I want to implement a new turbulence model, I have to install OF in my local user directory on the cluster instead of using the default version.


I have found some tutorials about installing the OF 2.x version on a cluster. These are the main steps (GCC, MPI, etc. are already installed on the cluster):

1)
cp /cineca/prod/applications/openfoam/2.3.0-gnu-4.7.2/openmpi--1.6.3--gnu--4.7.2/OF_2.3.0_LOCAL_INSTALL_CINECA_PLX.tar .
tar -xf OF_2.3.0_LOCAL_INSTALL_CINECA_PLX.tar
rm OF_2.3.0_LOCAL_INSTALL_CINECA_PLX.tar
cd OF_2.3.0_LOCAL_INSTALL_CINECA_PLX
ls
OF_2.3.0_LOCAL_INSTALL_CINECA_PLX.sh PATCHES

2)

Execute the script to download, configure and install your local version of OpenFOAM:
./OF_2.3.0_LOCAL_INSTALL_CINECA_PLX.sh
The installation directory, in this case, is located in your $HOME space:
FOAM_INST_DIR=$HOME/OpenFOAM
The OpenFOAM environment of the local installation is set in
$HOME/OpenFOAM/OpenFOAM-2.3.x/etc/bashrc
3) Load the environment of your local installation
module load gnu/4.7.2
module load openmpi/1.6.3--gnu--4.7.2
source $HOME/OpenFOAM/OpenFOAM-2.3.x/etc/bashrc
and you will have
$WM_PROJECT_VERSION=2.3.x
$FOAM_INST_DIR=/plx/userinternal/ispisso0/OpenFOAM
$WM_PROJECT_USER_DIR=/plx/userinternal/ispisso0/OpenFOAM/ispisso0-2.3.x
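
After sourcing, a quick sanity check along these lines confirms that the local installation is the one being picked up (a sketch; icoFoam is just an example solver):
Code:

echo $WM_PROJECT_VERSION    # expected: 2.3.x
echo $FOAM_INST_DIR         # expected: $HOME/OpenFOAM
which icoFoam               # should resolve under $FOAM_INST_DIR, not the system install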



Now, I can find these tar files on the cluster, among which OpenFOAM-5.0 is the default opt version that I can use directly:

5-0.tar.gz backup2.zig backup.zip bashrc OpenFOAM-5.0

In addition, these modules are now available on the cluster:
module load comp/gcc/6.3.0
module load mpi/openmpi/2.1.0/gcc



I have some questions:

1) Can I install OF simply by copying this tar file to my user directory?

2) There is a separate bashrc file in the directory. I looked inside it and found that the only change is WM_COMPILER=Icc. Can I just use this bashrc to configure the build?
3) Apart from the above procedures, what else should I consider? Or could I just run ./Allwmake to compile? (The sequence I have in mind is sketched below.)
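
Here is roughly what I have in mind (a sketch on my part; whether the tar file unpacks to OpenFOAM-5.0 is a guess):
Code:

mkdir -p $HOME/OpenFOAM
tar -xzf 5-0.tar.gz -C $HOME/OpenFOAM      # assuming this unpacks to OpenFOAM-5.0
module load comp/gcc/6.3.0
module load mpi/openmpi/2.1.0/gcc
source $HOME/OpenFOAM/OpenFOAM-5.0/etc/bashrc
cd $HOME/OpenFOAM/OpenFOAM-5.0
./Allwmake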


Looking forward to reading your answers!


Jin

sjlouie91 January 11, 2019 14:56

Hello Bruno,

I know this is an old thread, but I have come across the same problem with OpenFOAM 5.0 installed on the HPC cluster with Intel MPI. Listed below is the job script that I use.
module load mpi/intelmpi/2018.1.163
module load intel-studio-2018
module load comp/gcc/6.3.0
module load comp/cmake/3.7.2
module load lib/scotch/6.3.0
source /home/OpenFOAM/OpenFOAM-5.0/etc/bashrc
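
The solver itself is then launched in parallel along these lines (the solver name and core count here are just placeholders):
Code:

mpirun -np 16 simpleFoam -parallel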

I could run the simulations previously without any error. However, when I reran the cases with everything unchanged, it no longer works and the error is:
--> FOAM FATAL ERROR:
Trying to use the dummy Pstream library.
This dummy library cannot be used in parallel mode
From function static bool Foam::UPstream::init(int&, char**&)
in file UPstream.C at line 37.
FOAM exiting.

I did some checks as you suggested.
1) echo $FOAM_MPI
When I typed this command, it showed nothing. I then sourced the bashrc file again and it showed:
Warning in /home/OpenFOAM/OpenFOAM-5.0/etc/config.sh/settings:
MPI_ROOT not a valid mpt installation directory or ending in a '/'.
Please set MPI_ROOT to the mpt installation directory.
MPI_ROOT currently set to ' '.
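
For completeness, the variables can be inspected directly like this (MPI_ROOT comes back empty in my shell):
Code:

echo "MPI_ROOT='$MPI_ROOT'"
echo "I_MPI_ROOT='$I_MPI_ROOT'"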

So what can I do now to solve this problem?

Thanks in advance!

Kind regards,
Jin

[Moderator note: Moved from https://www.cfd-online.com/Forums/op...llel-mode.html ]

wyldckat January 12, 2019 08:45

Greetings Jin,

I've moved your old post into the "OpenFOAM Installation" sub-forum and then moved your more recent post into the same thread as the old one, so that we can stay on the same topic.

From the old post, it seems that you had tried to compile with Icc and Open-MPI. But in the more recent post, you seem to be trying to use Icc and Intel MPI.

Furthermore, it is possible that someone changed the Intel software stack in the cluster, so it's possible that you built OpenFOAM with an older version of Intel software than the one you are trying to use now.

Either way, first let's find out what you used in the old build. Run the following command:
Code:

find $FOAM_LIBBIN/ -type d
and it will tell us what directories exist within the main OpenFOAM library folder. It should give you something similar to this:
Code:

/home/ofuser/OpenFOAM/OpenFOAM-5/platforms/linux64GccDPInt32Opt/lib/
/home/ofuser/OpenFOAM/OpenFOAM-5/platforms/linux64GccDPInt32Opt/lib/dummy
/home/ofuser/OpenFOAM/OpenFOAM-5/platforms/linux64GccDPInt32Opt/lib/openmpi-system
/home/ofuser/OpenFOAM/OpenFOAM-5/platforms/linux64GccDPInt32Opt/lib/paraview-5.4

What I'm looking for is the folder name related to the MPI that was used.

From there, we can work out whether you should be trying to use Intel MPI or Open-MPI.


If you did compile with Intel MPI: Intel changed its installation paths some time ago and the convention has become a bit weird... I haven't double-checked yet what should be done about it. So once we figure out which one you used, we can decide from there.
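
In the meantime, you can check where the loaded Intel MPI actually lives with something like this (assuming your module defines I_MPI_ROOT, which Intel's module files usually do):
Code:

echo $I_MPI_ROOT
which mpirun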

Best regards,
Bruno

sjlouie91 January 15, 2019 07:27

Hello Bruno,

Thanks so much for your reply. I have done what you suggested and it shows:

/home/OpenFOAM/OpenFOAM-5.0/platforms/linux64IccDPInt32Opt/lib
/home/OpenFOAM/OpenFOAM-5.0/platforms/linux64IccDPInt32Opt/lib/dummy
/home/OpenFOAM/OpenFOAM-5.0/platforms/linux64IccDPInt32Opt/lib/2018.1.163

So I think I actually used Intel MPI 2018.1.163, which is one of the modules that I load.

Best regards,
Jin

wyldckat January 20, 2019 15:35

Quick answer:
  1. Edit the file "/home/OpenFOAM/OpenFOAM-5.0/etc/bashrc"
  2. Look for this block or similar:
    Code:

    #- MPI implementation:
    #    WM_MPLIB = SYSTEMOPENMPI | OPENMPI | SYSTEMMPI | MPICH | MPICH-GM | HPMPI
    #              | MPI | FJMPI | QSMPI | SGIMPI | INTELMPI
    export WM_MPLIB=SYSTEMOPENMPI

  3. I'm guessing that you have this:
    Code:

    export WM_MPLIB=INTELMPI
    If not, then perhaps it's best to change it to that.
  4. Then edit the file "/home/OpenFOAM/OpenFOAM-5.0/etc/config.sh/mpi"
  5. Look for this line:
    Code:

    INTELMPI)
  6. You should then see a fairly large block, something like this:
    Code:

    INTELMPI)
        # No trailing slash
        [ "${MPI_ROOT%/}" = "${MPI_ROOT}" ] || MPI_ROOT="${MPI_ROOT%/}"

        export FOAM_MPI="${MPI_ROOT##*/}"
        export MPI_ARCH_PATH=$MPI_ROOT

        if [ ! -d "$MPI_ROOT" -o -z "$MPI_ARCH_PATH" ]
        then
            echo "Warning in $WM_PROJECT_DIR/etc/config.sh/settings:" 1>&2
            echo "    MPI_ROOT not a valid mpt installation directory or ending" \
                " in a '/'." 1>&2
            echo "    Please set MPI_ROOT to the mpt installation directory." 1>&2
            echo "    MPI_ROOT currently set to '$MPI_ROOT'" 1>&2
        fi

        if [ "$FOAM_VERBOSE" -a "$PS1" ]
        then
            echo "Using INTEL MPI:" 1>&2
            echo "    MPI_ROOT : $MPI_ROOT" 1>&2
            echo "    FOAM_MPI : $FOAM_MPI" 1>&2
        fi

        _foamAddPath    $MPI_ARCH_PATH/bin64
        _foamAddLib    $MPI_ARCH_PATH/lib64
        ;;

  7. So you now have two alternatives:
    1. You can ignore the warning about "MPI_ROOT" not being a valid directory and modify this line (see the sketch after this list for how the ${MPI_ROOT##*/} expansion behaves):
      Code:

          export FOAM_MPI="${MPI_ROOT##*/}"
      to this:
      Code:

          export FOAM_MPI=2018.1.163
    2. Or run this command before you source the bashrc file:
      Code:

      export MPI_ROOT=$I_MPI_ROOT
      Although you may want to first check what it says:
      Code:

      echo $I_MPI_ROOT
      This variable should be defined when you use module load for Intel MPI.
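
To make the ${MPI_ROOT##*/} expansion concrete, here is a small sketch (the example path is made up; check what your module actually sets):
Code:

# ${VAR##*/} strips the longest match of "*/" from the front,
# i.e. it keeps only the last path component:
MPI_ROOT=/opt/intel/impi/2018.1.163
echo "${MPI_ROOT##*/}"     # prints: 2018.1.163

# So for FOAM_MPI to match your built library folder (lib/2018.1.163),
# MPI_ROOT must end in a directory named after the version:
export MPI_ROOT=$I_MPI_ROOT
source /home/OpenFOAM/OpenFOAM-5.0/etc/bashrc
echo $FOAM_MPI             # should print: 2018.1.163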

