CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM (https://www.cfd-online.com/Forums/openfoam/)
-   -   Open MPI error (https://www.cfd-online.com/Forums/openfoam/229289-open-mpi-error.html)

GrivalszkiP August 3, 2020 08:33

Open MPI error
 
Hi!

I get this error message if I try to run parallel stuff:

Code:

vituki@VHDT08:/mnt/d/griva_modellek/wavebreak$ mpirun -np 8 snappyHexMesh -parallel
--------------------------------------------------------------------------
There are not enough slots available in the system to satisfy the 8
slots that were requested by the application:

  snappyHexMesh

Either request fewer slots for your application, or make more slots
available for use.

A "slot" is the Open MPI term for an allocatable unit where we can
launch a process.  The number of slots available are defined by the
environment in which Open MPI processes are run:

  1. Hostfile, via "slots=N" clauses (N defaults to number of
    processor cores if not provided)
  2. The --host command line parameter, via a ":N" suffix on the
    hostname (N defaults to 1 if not provided)
  3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)
  4. If none of a hostfile, the --host command line parameter, or an
    RM is present, Open MPI defaults to the number of processor cores

In all the above cases, if you want Open MPI to default to the number
of hardware threads instead of the number of processor cores, use the
--use-hwthread-cpus option.

Alternatively, you can use the --oversubscribe option to ignore the
number of available slots when deciding the number of processes to
launch.
--------------------------------------------------------------------------

I use OpenFOAM v2006 and Open MPI v4.0.3. My computer has 8 threads (4 cores), and decomposePar ran without problems. There were no problems like this before. Thank you in advance!
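The mismatch the error message describes can be seen by comparing physical cores against hardware threads. A small Linux sketch (using the standard `lscpu` and `nproc` utilities; Open MPI 4.x counts cores as slots by default, not threads):

```shell
# Count unique (core, socket) pairs = physical cores.
cores=$(lscpu -p=Core,Socket 2>/dev/null | grep -v '^#' | sort -u | wc -l)
# nproc reports online hardware threads (what OpenFOAM "sees").
threads=$(nproc)
echo "physical cores:   $cores"
echo "hardware threads: $threads"
```

On a machine like the one in the question this would report 4 cores and 8 threads, which is exactly why `-np 8` exceeds the default slot count.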

hokhay August 17, 2020 02:30

Have you checked whether the multi-threading function is on?
Check how many CPUs you have with the command 'htop'.

GrivalszkiP August 18, 2020 11:01

Thank you, I have fixed it:

Newer versions of MPI flag it as wrong input when users request 8 processes while only 4 physical cores are available.
You have 2 options:
- decompose for 4 processes and gain more performance per process
- add the option --use-hwthread-cpus and run your simulation with 8 processes at lower per-process performance
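Concretely, the two options might look like this (case setup illustrative; `numberOfSubdomains` is the standard entry in system/decomposeParDict):

```shell
# Option 1: decompose for the 4 physical cores
#   in system/decomposeParDict set:  numberOfSubdomains 4;
decomposePar
mpirun -np 4 snappyHexMesh -parallel

# Option 2: keep 8 ranks and let Open MPI count hardware threads as slots
mpirun --use-hwthread-cpus -np 8 snappyHexMesh -parallel
```

Note that the decomposition count and the `-np` value must match, so option 1 requires re-running decomposePar.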

Turin Turambar April 27, 2021 08:41

Hello

I had the same error before. My computer has 8 cores and I was not able to run the simulation in parallel on more than 4 cores. I used the "-oversubscribe" flag, which allows more processes on a node than there are processing elements (from the mpirun man page), and it solved my problem. Here is the example:

Code:

mpirun -oversubscribe -np 8 interFoam -parallel | tee log.interFoam

I hope it is also a convenient way of solving this issue.

Best regards

shamantic October 24, 2022 08:35

multithreading with runParallel
 
In OpenFOAM-v2206/bin/tools/RunFunctions,

I added --use-hwthread-cpus to the lines starting with $mpirun, like this:


$mpirun --use-hwthread-cpus -n $nProcs $appRun $appArgs "$@" </dev/null >> $logFile 2>&1
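If I recall the Open MPI MCA parameter naming correctly, the same effect can be had without editing the installed RunFunctions file (which an upgrade would overwrite), since Open MPI also reads MCA parameters from the environment. The parameter name below is my assumption for Open MPI 4.x and should be checked with `ompi_info`:

```shell
# Assumed MCA equivalent of --use-hwthread-cpus; set once per session
# so every mpirun launched by runParallel picks it up.
export OMPI_MCA_hwloc_base_use_hwthreads_as_cpus=1
```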

thiagopl December 9, 2022 07:07

Quote:

Originally Posted by Turin Turambar (Post 802608)
Hello
I had the same error before. My computer has 8 cores and I was not able to run the simulation in parallel on more than 4 cores. I used the "-oversubscribe" flag, which allows more processes on a node than there are processing elements (from the mpirun man page), and it solved my problem. Here is the example:
Code:

mpirun -oversubscribe -np 8 interFoam -parallel | tee log.interFoam
I hope it is also a convenient way of solving this issue.
Best regards

I had the same problem and your solution worked for me as well. Thank you.

shamantic January 13, 2023 15:38

I guess that --oversubscribe does not throw an error if you use more threads than your CPUs provide. --use-hwthread-cpus will still limit you to the threads the CPU provides. I prefer the latter because I want to avoid overprovisioning beyond the hardware capabilities, as this reduces efficiency.
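That preference (stay within hardware threads rather than oversubscribe) can be enforced with a small pre-flight check before launching; the `np` value here is illustrative:

```shell
np=8                 # ranks you plan to request with mpirun -np
threads=$(nproc)     # hardware threads available on this node
# Warn instead of silently oversubscribing beyond the hardware.
if [ "$np" -gt "$threads" ]; then
  echo "warning: $np ranks > $threads hardware threads; expect a slowdown"
fi
```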

