Parallel run of SU2 version 3.2.7 (https://www.cfd-online.com/Forums/su2/146905-parallel-run-su2-version-3-2-7-a.html)

Arad January 9, 2015 08:22

Parallel run of SU2 version 3.2.7
 
Hi
First, I must say that the parallel run using version 3.2.7 looks much better.
Merging the multiple domain files and creating a single flow.dat file is
more elegant and efficient.

However, I think something is still missing:

In many cases the run command requires special parameters (such as the RDMA protocol or a machinefile listing the compute nodes). I do not see how these parameters can be passed when using parallel_computation.py.

Yes, one can run mpiexec -n N SU2_PRT to partition the mesh into N files as before and then call mpiexec -n N SU2_CFD directly with any parameters. However, this way we lose the nice merging at the end of the run that the 3.2.7 parallel_computation.py provides.
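For illustration, the manual workflow I mean would look roughly like the following; the config file name, host file name, and rank count are just placeholders, and the exact machinefile/RDMA options depend on the MPI implementation (e.g. -machinefile for MPICH-style launchers, --hostfile for Open MPI):

    # Partition the mesh into 16 domains (one mesh file per rank), as before.
    mpiexec -n 16 SU2_PRT my_case.cfg

    # Run the solver directly, adding whatever cluster-specific options are
    # needed (machinefile, interconnect/RDMA settings, ...).
    mpiexec -n 16 -machinefile hosts.txt SU2_CFD my_case.cfg

But with this approach the per-rank solution files are left unmerged, which is exactly the step parallel_computation.py now automates.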

Is there a way around this that gives the best of both worlds?

Thanks,
Eran Arad

hlk January 11, 2015 02:04

Quote:

Originally Posted by Arad (Post 526754)

Thank you for your question.
Since these types of parameters are specific to the cluster being used, they are not set within the Python script. Workload managers such as SLURM (which one depends on the cluster you are using) can set up these parameters automatically; the system administrator for your cluster should be able to help you with this.
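For example, with SLURM a minimal batch script might look something like the sketch below; the node/task counts and config file name are placeholders, and your cluster may require extra #SBATCH options or module loads, so check with your administrator:

    #!/bin/bash
    #SBATCH --job-name=su2_run
    #SBATCH --nodes=2
    #SBATCH --ntasks=32

    # Under a workload manager the MPI launcher normally picks up the
    # allocated nodes automatically, so no explicit machinefile is needed.
    # This assumes the SU2 Python scripts are on your PATH.
    parallel_computation.py -f my_case.cfg -n 32

You would then submit this with sbatch and let the scheduler handle node placement and interconnect settings.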

For running on a PC rather than a cluster this won't be necessary; most of the time, either the Python script by itself, or mpirun -n N parallel_computation.py, should work.
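For instance, on a quad-core desktop something like this is usually enough (the config file name is a placeholder):

    # Partition, solve in parallel on 4 cores, and merge the results in one go.
    parallel_computation.py -f my_case.cfg -n 4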

