--cpus-per-task using SU2 on HPC
https://www.cfd-online.com/Forums/su2/255228-cpus-per-task-using-su2-op-hpc.html

CFDinTwente March 26, 2024 15:37

--cpus-per-task using SU2 on HPC
 
Dear all,

I can't seem to use the full capacity of my node with Slurm. I am using the batch file below, but when I check the node with htop I see only 4 cores busy, while I was under the impression that I would see 4*9 = 36 cores doing work. When I go above 36 CPUs in total I get the misconfiguration error, so 36 does seem to be the actual limit, but I don't understand why most of the cores are idle. I hope you can give me some advice.

Code:

#!/bin/bash

#SBATCH -J mu_m16_hdef_v1                          # job name, don't use spaces
#SBATCH --nodes=1                        # number of nodes
#SBATCH --ntasks=4                      # number of MPI ranks
#SBATCH --cpus-per-task=9              # number of threads per rank
#SBATCH --mem-per-cpu=1GB                # amount of memory per core
#SBATCH --threads-per-core=2            # number of threads per core
#SBATCH --time=48:00:00                # time limit hh:mm:ss
#SBATCH -p 50_procent_max_7_days        # Partition
#SBATCH -o out.log                      # output log file
#SBATCH -e err.log

# module to use
module load gcc/9.4.0
module load intel/oneapi/2023.1.0
source setvars.sh

# log hostname
pwd; hostname; date

# executable
SU2=/home/SU2_software/bin/SU2_CFD

# config file
cfgFile=run.cfg

# set environment variables
NTHREADS=$SLURM_CPUS_PER_TASK
export OMP_NUM_THREADS=$NTHREADS
export UCX_TLS=ud,sm,self

# display configuration
echo "Number of nodes used        : "$SLURM_NNODES
echo "Number of MPI ranks        : "$SLURM_NTASKS
echo "Number of threads          : "$SLURM_CPUS_PER_TASK
echo "Number of MPI ranks per node: "$SLURM_TASKS_PER_NODE
echo "Number of threads per core  : "$SLURM_THREADS_PER_CORE
echo "Name of nodes used          : "$SLURM_JOB_NODELIST

srun $SU2 $cfgFile

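As a side note, a quick way to see which CPUs each rank is actually allowed to use (just a generic /proc check from inside the allocation, nothing SU2-specific) would be:

Code:

# quick sanity check: which CPU set does Slurm actually give each rank?
srun bash -c 'echo "$(hostname) rank=$SLURM_PROCID $(grep Cpus_allowed_list /proc/self/status)"'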

bigfootedrockmidget March 27, 2024 08:09

Hybrid OpenMP/MPI can be a bit trickier to set up than pure MPI. There was a thread on this before:
https://www.cfd-online.com/Forums/su...ance-gain.html
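One thing worth checking on newer Slurm installations (22.05 and later): srun no longer inherits --cpus-per-task from the sbatch allocation, so unless you pass it again (or set SRUN_CPUS_PER_TASK) each rank is started on a single CPU and all its OpenMP threads pile up there, which would match the 4 busy cores you see. A minimal sketch of the launch part, assuming your SU2_CFD was built with OpenMP enabled:

Code:

# make the thread count and binding explicit for the hybrid run
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PROC_BIND=true      # pin threads to their rank's CPUs
export OMP_PLACES=cores

# pass --cpus-per-task to srun itself; on Slurm >= 22.05 it is no
# longer inherited from the #SBATCH header
srun --cpus-per-task=$SLURM_CPUS_PER_TASK $SU2 $cfgFile

If SU2 was compiled without OpenMP support, the extra threads will not do anything no matter how the job is launched, so that is worth confirming as well.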

CFDinTwente March 27, 2024 10:26

Thank you, I will give it a look.

