--cpus-per-task using SU2 on HPC

March 26, 2024, 15:37   #1
--cpus-per-task using SU2 on HPC
New Member
 
Marcel
Join Date: Jul 2023
Location: Netherlands
Posts: 3
Dear all,

I can't seem to use the full capacity of my node with Slurm. I am using the batch file below, but when I check the node with htop I only see 4 cores busy, while I expected to see 4*9 = 36 cores doing work. When I request more than 36 CPUs I get a misconfiguration error, so 36 does seem to be the real limit, but I don't understand why most of the cores sit idle. I hope you can give me some advice.

Code:
#!/bin/bash

#SBATCH -J mu_m16_hdef_v1                           # job name, don't use spaces
#SBATCH --nodes=1                        # number of nodes
#SBATCH --ntasks=4                       # number of MPI ranks
#SBATCH --cpus-per-task=9               # number of threads per rank 
#SBATCH --mem-per-cpu=1GB                 # amount of memory per core
#SBATCH --threads-per-core=2             # number of threads per core
#SBATCH --time=48:00:00                 # time limit hh:mm:ss 
#SBATCH -p 50_procent_max_7_days         # Partition
#SBATCH -o out.log                       # output log file
#SBATCH -e err.log 

# module to use
module load gcc/9.4.0
module load intel/oneapi/2023.1.0
source setvars.sh

# log hostname
pwd; hostname; date

# executable
SU2=/home/SU2_software/bin/SU2_CFD

# config file
cfgFile=run.cfg

# set environment variables
NTHREADS=$SLURM_CPUS_PER_TASK
export OMP_NUM_THREADS=$NTHREADS
export UCX_TLS=ud,sm,self

# display configuration
echo "Number of nodes used        : "$SLURM_NNODES
echo "Number of MPI ranks         : "$SLURM_NTASKS 
echo "Number of threads           : "$SLURM_CPUS_PER_TASK
echo "Number of MPI ranks per node: "$SLURM_TASKS_PER_NODE
echo "Number of threads per core  : "$SLURM_THREADS_PER_CORE
echo "Name of nodes used          : "$SLURM_JOB_NODELIST

srun $SU2 $cfgFile
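
For completeness, this is roughly how I could check which CPUs each rank actually gets on the node (just a sketch; it assumes taskset from util-linux is available on the compute node):

Code:
# quick check of the CPU affinity of each rank, using the same srun setup as the real job
srun bash -c 'echo "rank $SLURM_PROCID on $(hostname): $(taskset -cp $$)"'

If every rank only reports a single CPU in its affinity list, that would match what I see in htop: the OpenMP threads have nowhere else to run.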

March 27, 2024, 08:09   #2
Senior Member
 
bigfoot
Join Date: Dec 2011
Location: Netherlands
Posts: 504
Hybrid OpenMP/MPI can be a bit trickier to set up than pure MPI. There was a thread on this before:
No OpenMP Performance Gain
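
One thing that can cause exactly this symptom: on recent Slurm versions (22.05 and newer), srun no longer inherits --cpus-per-task from the sbatch allocation, so each rank gets bound to a single CPU and all of its OpenMP threads pile onto that one core. A minimal sketch of the launch, assuming your SU2_CFD was built with OpenMP (hybrid) support and reusing the $SU2 and $cfgFile variables from your script:

Code:
# pass the cpus-per-task value to srun explicitly and pin the threads
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PLACES=cores        # treat each physical core as a place for thread pinning
export OMP_PROC_BIND=close     # keep a rank's threads close together
srun --cpus-per-task=$SLURM_CPUS_PER_TASK $SU2 $cfgFile

Setting SRUN_CPUS_PER_TASK in the batch script should have the same effect if you prefer not to change the srun line.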

March 27, 2024, 10:26   #3
New Member
 
Marcel
Join Date: Jul 2023
Location: Netherlands
Posts: 3
Thank you, I will give it a look.


Tags
hpc, hpc cluster, su2

