Fluent job with Slurm
With the Slurm job manager, I use the following script to run a multi-node Fluent simulation.
Code:
#!/bin/bash

By submitting that script, I see:

Code:
/state/partition1/ansys_inc/v190/fluent/fluent19.0.0/bin/fluent -r19.0.0 2ddp -g -slurm -t48 -mpi=openmpi -i fluent.journal

Code:
Starting fixfiledes /state/partition1/ansys_inc/v190/fluent/fluent19.0.0/multiport/mpi/lnamd64/openmpi/bin/mpirun --mca btl self,sm,tcp --mca btl_sm_use_knem 0 --prefix /state/partition1/ansys_inc/v190/fluent/fluent19.0.0/multiport/mpi/lnamd64/openmpi --x LD_LIBRARY_PATH --np 48 --host compute-0-4.local /state/partition1/ansys_inc/v190/fluent/fluent19.0.0/lnamd64/2ddp_node/fluent_mpi.19.0.0 node -mpiw openmpi -pic shmem -mport 10.1.1.250:10.1.1.250:42514:0
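For reference, a job script along these lines is a common way to launch Fluent under Slurm. This is only a sketch, not the original poster's script: the node/task counts, walltime, partition name, and journal file name are assumptions you would adjust for your own cluster, and the Fluent path is copied from the invocation shown above.

Code:
#!/bin/bash
#SBATCH --job-name=fluent2d
#SBATCH --nodes=2              # assumed: two compute nodes
#SBATCH --ntasks-per-node=24   # 2 x 24 = 48 ranks, matching -t48 above
#SBATCH --time=24:00:00

# Path taken from the command line printed earlier in this thread.
FLUENT=/state/partition1/ansys_inc/v190/fluent/fluent19.0.0/bin/fluent

# Note the command is "fluent", not "#FLUENT": a leading "#" makes the
# line a comment, so nothing would run at all.
"$FLUENT" 2ddp -g -slurm -t"$SLURM_NTASKS" -mpi=openmpi -i fluent.journal

Using $SLURM_NTASKS for the -t flag keeps the Fluent rank count consistent with whatever node/task allocation Slurm actually grants, instead of hard-coding 48.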
Did you ever find a solution to this problem? I am encountering something similar.
Try fluent instead of #FLUENT.
It may work. (I know the question is from a few years back, but I am posting this for anyone who may have a similar problem.)