CFD Online Discussion Forums


Madeinspace July 2, 2020 18:16

Install OpenFOAM 3.0.1 on cluster without root access; mpi error
 
Greetings. My apologies for starting this new thread; there are several similar ones, but none of their solutions worked for me.


I am trying to install OF-3.0.1 on a CentOS 7.8.2003 university cluster without root access.


I followed instructions #3 for CentOS 7.1 from the link below.
https://openfoamwiki.net/index.php/I...CentOS_SL_RHEL


As with other similar instructions for installing OpenFOAM without root access, I skipped steps 1, 2, 3, and 4. Steps 5, 6, and 7 worked as intended. For step 8, I first needed to load OpenMPI on the cluster, so I checked which OpenMPI versions were available. The results were as below:


Quote:

$ module spider openmpi

------------------------------------------------------------------------------------------------
OpenMPI:
------------------------------------------------------------------------------------------------
Description:
The Open MPI Project is an open source MPI-3 implementation.

Versions:
OpenMPI/2.1.1
OpenMPI/3.1.1
OpenMPI/3.1.3
OpenMPI/3.1.4
OpenMPI/4.0.3

------------------------------------------------------------------------------------------------
For detailed information about a specific "OpenMPI" package (including how to load the modules) use the module's full name.
Note that names that have a trailing (E) are extensions provided by other modules.
For example: $ module spider OpenMPI/4.0.3
In order to load OpenMPI/4.0.3, I first needed to load GCC/9.3.0.



Quote:

module spider OpenMPI/4.0.3

------------------------------------------------------------------------------------------------
OpenMPI: OpenMPI/4.0.3
------------------------------------------------------------------------------------------------
Description:
The Open MPI Project is an open source MPI-3 implementation.


You will need to load all module(s) on any one of the lines below before the "OpenMPI/4.0.3" module is available to load.

GCC/9.3.0
So I loaded both of them:
Quote:

module load GCC/9.3.0
module load OpenMPI/4.0.3
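As a sanity check (this is not part of the wiki instructions, just my own verification step), the loaded toolchain can be confirmed like this:

Quote:

module list
which gcc mpicc
mpirun --version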
Then I executed step 8, but with a small tweak as mentioned in another thread. However, I am not sure if this is how it should be done. This is what step 8 looks like originally:
Quote:

module load mpi/openmpi-x86_64 || export PATH=$PATH:/usr/lib64/openmpi/bin
This is how I changed it:
Quote:

module load OpenMPI/4.0.3 || export PATH=$PATH:/usr/lib64/openmpi/bin
Accordingly, I also changed the alias in step 8:
Quote:

echo "alias of301='module load OpenMPI/4.0.3; source \$HOME/OpenFOAM/OpenFOAM-3.0.1/etc/bashrc $FOAM_SETTINGS'" >> $HOME/.bashrc
Then I executed steps 9 and 10; there were no warnings or errors. However, when executing step 11.2, the build stopped earlier than it should have and failed.

When I checked the log file, the first error says:

Quote:

In file included from common.h:106,
from dgraph_halo.c:64:
dgraph_halo.c: In function ‘dgraphHaloSync2’:
/apps/Hebbe7/software/Compiler/GCC/9.3.0/OpenMPI/4.0.3/include/mpi.h:322:57: error: static assertion failed: "MPI_Type_extent was removed in MPI-3.0. Use MPI_Type_get_extent instead."
322 | #define THIS_SYMBOL_WAS_REMOVED_IN_MPI30(func, newfunc) _Static_assert(0, #func " was removed in MPI-3.0. Use " #newfunc " instead.")
| ^~~~~~~~~~~~~~
I don't know what is causing this error, but I suspect it comes from the OpenMPI module I loaded. I tried other versions of OpenMPI, but they all failed with the same error.
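For reference, the assertion can be reproduced outside OpenFOAM with a few lines (my own sketch, not from the wiki): OpenMPI 4.x removed the old MPI-2 call MPI_Type_extent, which the bundled scotch source still uses, so any code calling it fails the same way:

Quote:

cat > /tmp/extent.c <<'EOF'
#include <mpi.h>
int main(void)
{
    MPI_Aint extent;
    MPI_Type_extent(MPI_INT, &extent);  /* removed in MPI-3.0 */
    return 0;
}
EOF
mpicc /tmp/extent.c -o /tmp/extent  # hits the same static assertion under OpenMPI 4.x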

I have also attached the log output of the compilation.

Could anyone please help me out here?

Best regards,



fertinaz July 4, 2020 14:16

I think your choice of MPI and GCC is wrong.

OF-3.0.1 was released in December 2015, whereas OpenMPI-4.x is quite new. When a new major MPI version is released, it usually doesn't maintain backward compatibility. I checked ThirdParty-3.0.1, and it looks like it ships with OpenMPI-1.10; it is better if you use that version.

The same goes for GCC; I'd use GCC-4.8.

These versions are quite old and might not be installed on your cluster. In that case, you can contact your cluster admins, but they might complain for the very same reason :) Then, of course, it's always possible to compile every tool you need from scratch, but that introduces other issues: you might have to configure your MPI to take advantage of InfiniBand, a GCC build takes up a lot of disk space, etc. Still, it's good practice.
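Roughly, the bundled route looks like this (an untested sketch; paths assume the wiki's $HOME/OpenFOAM layout, and overriding WM_MPLIB tells the ThirdParty build to compile its own OpenMPI instead of using a cluster module):

Quote:

source $HOME/OpenFOAM/OpenFOAM-3.0.1/etc/bashrc WM_MPLIB=OPENMPI
cd $HOME/OpenFOAM/ThirdParty-3.0.1
./Allwmake   # builds the bundled OpenMPI before the other libraries
# then rebuild OpenFOAM itself against the new MPI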

Also, a few steps caught my attention which I believe might not be required:
  • Update PATH and LD_LIBRARY_PATH: This is not needed if the module system in your university cluster is working properly. When you load a module, your environment is updated accordingly.
  • Creating an alias in .bashrc: Since the idea is to run on a cluster, I believe you'll eventually try multi-node executions through a scheduler. If you create an alias in your .bashrc as if you were running in your local environment, the process manager won't be able to source the OF environment when your job lands on a compute node. A better approach is to leave your .bashrc alone and add the sourcing to your job script instead (a rough sketch follows below).
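For example, assuming Slurm (adjust for your scheduler; the solver name and resource numbers are placeholders):

Quote:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=01:00:00

# load the same toolchain OpenFOAM was compiled with
module load GCC/9.3.0 OpenMPI/4.0.3

# source the OF environment in the job itself, not via a .bashrc alias
source $HOME/OpenFOAM/OpenFOAM-3.0.1/etc/bashrc

cd $SLURM_SUBMIT_DIR
mpirun -np $SLURM_NTASKS simpleFoam -parallel > log.simpleFoam 2>&1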

Hope this helps

