August 3, 2020, 09:33 |
Open MPI error
|
#1 |
Member
Grivalszki Péter
Join Date: Mar 2019
Location: Budapest, Hungary
Posts: 39
Rep Power: 7 |
Hi!
I get this error message if I try to run parallel stuff: Code:
vituki@VHDT08:/mnt/d/griva_modellek/wavebreak$ mpirun -np 8 snappyHexMesh -parallel
--------------------------------------------------------------------------
There are not enough slots available in the system to satisfy the 8
slots that were requested by the application:

  snappyHexMesh

Either request fewer slots for your application, or make more slots
available for use.

A "slot" is the Open MPI term for an allocatable unit where we can
launch a process. The number of slots available are defined by the
environment in which Open MPI processes are run:

  1. Hostfile, via "slots=N" clauses (N defaults to number of
     processor cores if not provided)
  2. The --host command line parameter, via a ":N" suffix on the
     hostname (N defaults to 1 if not provided)
  3. Resource manager (e.g., SLURM, PBS/Torque, LSF, etc.)
  4. If none of a hostfile, the --host command line parameter, or an
     RM is present, Open MPI defaults to the number of processor cores

In all the above cases, if you want Open MPI to default to the number
of hardware threads instead of the number of processor cores, use the
--use-hwthread-cpus option.

Alternatively, you can use the --oversubscribe option to ignore the
number of available slots when deciding the number of processes to
launch.
--------------------------------------------------------------------------
|
August 17, 2020, 03:30 |
|
#2 |
Member
Join Date: Nov 2014
Posts: 92
Rep Power: 12 |
Have you checked whether multi-threading (hyper-threading) is enabled?
Check how many CPUs you have with the command 'htop'.
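For example, on a standard Linux install the nproc and lscpu tools report the same counts non-interactively: Code:
nproc                                     # number of processing units available
lscpu | grep -E '^CPU\(s\)|Thread|Core'   # total CPUs, threads per core, cores per socket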
|
August 18, 2020, 12:01 |
|
#3 |
Member
Grivalszki Péter
Join Date: Mar 2019
Location: Budapest, Hungary
Posts: 39
Rep Power: 7 |
Thank you, I have fixed it:
Newer versions of Open MPI reject the request when a user asks for more slots than there are physical cores, e.g. declaring 8 processes when only 4 cores are available. You have two options:
- switch to 4 processes (one per physical core) and gain more performance, or
- add the option --use-hwthread-cpus and run your simulation on 8 hardware threads at lower per-thread performance.
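For example, on a 4-core / 8-thread machine (snappyHexMesh stands for whatever parallel application you run; remember that numberOfSubdomains in system/decomposeParDict must match the -np value): Code:
# option 1: one process per physical core (needs numberOfSubdomains 4)
mpirun -np 4 snappyHexMesh -parallel

# option 2: keep 8 processes by letting Open MPI count hardware threads as slots
mpirun --use-hwthread-cpus -np 8 snappyHexMesh -parallel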
|
April 27, 2021, 09:41 |
|
#4 |
New Member
Emre
Join Date: Oct 2016
Posts: 5
Rep Power: 10 |
Hello
I had the same error before. My computer has 8 cores, and I was not able to run the simulation in parallel on more than 4 cores. I used the "-oversubscribe" flag, which allows more processes on a node than processing elements (from the mpirun man page), and it solved my problem. Here is the example: Code:
mpirun -oversubscribe -np 8 interFoam -parallel | tee log.interFoam
I hope it is also a convenient way of solving this issue. Best regards |
|
October 24, 2022, 09:35 |
multithreading with runParallel
|
#5 |
New Member
Klaus Rädecke
Join Date: Jun 2009
Location: Rüsselsheim, Germany
Posts: 9
Rep Power: 17 |
In OpenFOAM-v2206/bin/tools/RunFunctions,
I added --use-hwthread-cpus to the lines starting with $mpirun, like this: Code:
$mpirun --use-hwthread-cpus -n $nProcs $appRun $appArgs "$@" </dev/null >> $logFile 2>&1
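With that change, the runParallel helper that the tutorial Allrun scripts source from RunFunctions launches every solver with hardware-thread slots. A minimal sketch of a typical Allrun using it (the case set-up steps here are only placeholders): Code:
#!/bin/sh
cd "${0%/*}" || exit                            # run from this directory
. ${WM_PROJECT_DIR:?}/bin/tools/RunFunctions    # tutorial run functions

runApplication blockMesh
runApplication decomposePar
runParallel snappyHexMesh -overwrite            # mpirun is now called with --use-hwthread-cpus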
|
December 9, 2022, 08:07 |
|
#6 |
Member
Thiago Parente Lima
Join Date: Sep 2011
Location: Diamantina, Brazil.
Posts: 65
Rep Power: 15 |
Quote:
__________________
Fields of interest: buoyantFoam, chtMultiRegionFoam. |
|
January 13, 2023, 16:38 |
|
#7 |
New Member
Klaus Rädecke
Join Date: Jun 2009
Location: Rüsselsheim, Germany
Posts: 9
Rep Power: 17 |
I guess that --oversubscribe does not throw an error even if you request more processes than your CPUs provide threads for, whereas --use-hwthread-cpus will still limit you to the threads the CPU provides. I prefer the latter, because I want to avoid overprovisioning beyond the hardware capabilities, as this reduces efficiency.
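To illustrate the difference (assuming a machine with 4 physical cores and 8 hardware threads): Code:
mpirun --use-hwthread-cpus -np 8  interFoam -parallel   # accepted: 8 slots = 8 hardware threads
mpirun --use-hwthread-cpus -np 16 interFoam -parallel   # still refused: more than the hardware threads
mpirun --oversubscribe -np 16 interFoam -parallel       # accepted: the slot limit is ignored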
|
Tags: error, mpirun, openmpi, parallel, slots