CFD Online Discussion Forums

CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   SU2 (http://www.cfd-online.com/Forums/su2/)
-   -   How to run SU2 on cluster of computers (How to specify nodes ?) (http://www.cfd-online.com/Forums/su2/117448-how-run-su2-cluster-computers-how-specify-nodes.html)

aero_amit May 8, 2013 15:03

How to run SU2 on cluster of computers (How to specify nodes ?)
 
Dear SU2 Developers,

Details of running SU2 on a multicore machine are given in the manual. If I want to run it on a cluster of workstations (each having multiple cores), what is the command, and how do I specify the node numbers etc.?

GPU computing is becoming very popular (cost-effective, fast computing). Is there any plan to release a GPU version of SU2 in the near future?

Thanks

Santiago Padron May 9, 2013 13:26

Hi,

In order to run SU2 in parallel, first you will need to compile the code with MPI support, and then you can use the python script in the following manner:
$ parallel_computation.py -f your_config_file.cfg -p 4
(-f for the config file, -p for the number of processors).

You should note that different clusters have their own protocols for submitting jobs, so you will have to look into that with your cluster administrator.
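For reference, a minimal session might look like the sketch below. The install path is a placeholder, and the SU2_RUN / PYTHONPATH setup follows the usual SU2 installation instructions:

```shell
# Placeholder install path -- point SU2_RUN at the directory that holds
# the compiled SU2 binaries and the python scripts on your system.
export SU2_RUN=/path/to/SU2/bin
export PATH=$PATH:$SU2_RUN
export PYTHONPATH=$PYTHONPATH:$SU2_RUN

# Then, from the directory containing the config file and mesh:
#   parallel_computation.py -f your_config_file.cfg -p 4
echo "SU2_RUN=$SU2_RUN"
```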

Santiago

copeland May 9, 2013 13:27

Hi Harry,

It sounds like you are able to run parallel cases on a multi-core workstation using the parallel run tools included in the SU2 distribution. If this is not the case, then the manual provides details on installing with MPI and executing parallel cases using the parallel_computation.py python script.

The syntax for running SU2 on clusters depends on the job-submission environment installed on the cluster. There are several standards (Slurm, PBS, etc.), and depending on the unique environment of your cluster, you may be able to use the existing python tools directly or make small modifications to make them work. If you can provide me with some more details, I may be able to help more, but most computing groups have some introductory documentation on how to submit jobs to the cluster. I recommend seeking this information out, and I think the path forward will become clearer.

GPU computing has the potential to be very powerful. The development team has discussed this, but, at the current time, we don't have anyone working on it. If members of the community are interested, we encourage folks to take on the challenge!


-Sean

aero_amit May 30, 2013 05:38

Thanks Santiago/Cope,

With small modifications to the python script, I am able to run the problem on a distributed cluster.

:)

Abhii May 30, 2013 11:12

Hi Aero_Amit

Could you please specify what modifications you made in order to submit your job? I am facing similar problems when trying to run a parallel computation on a cluster.

aero_amit June 1, 2013 02:26

Hi Abhii,

As the developers already said, it depends on the job-submission environment.
For me, adding a hostfile to the mpirun call worked.
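For anyone landing here later, a sketch of what that looks like. The host names, core counts, and file names below are made up, and the exact flag spelling depends on your MPI distribution:

```shell
# Hypothetical hostfile: one machine per line. MPICH-style syntax is
# "hostname:processes"; Open MPI uses "hostname slots=N" instead.
cat > hostfile <<'EOF'
node01:8
node02:8
EOF

# The launch then points mpirun at the hostfile, e.g.
#   MPICH2:   mpirun -f hostfile -np 16 SU2_CFD config_CFD.cfg
#   Open MPI: mpirun -hostfile hostfile -np 16 SU2_CFD config_CFD.cfg
# (parallel_computation.py builds a similar command internally.)
cat hostfile
```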

CrashLaker March 31, 2014 21:16

Hello.

I've just added the hostfile, but now all the nodes are doing the same job :(

Anyone?

hejiandong April 25, 2014 01:20

Quote:

Originally Posted by aero_amit (Post 431307)
Hi Abhii,

As the developers already said, it depends on the job-submission environment.
For me, adding a hostfile to the mpirun call worked.

Hi aero_amit,

Can you tell me which script to add the hostfile in? I can't find mpirun in parallel_computation.py or shape_optimization.py.

For Fluent, we can use -cnf to specify nodes.

hejiandong April 25, 2014 03:25

Quote:

Originally Posted by copeland (Post 426398)
Hi Harry,

It sounds like you are able to run parallel cases on a multi-core workstation using the parallel run tools included in the SU2 distribution. If this is not the case, then the manual provides details on installing with MPI and executing parallel cases using the parallel_computation.py python script.

The syntax for running SU2 on clusters depends on the job-submission environment installed on the cluster. There are several standards (Slurm, PBS, etc.), and depending on the unique environment of your cluster, you may be able to use the existing python tools directly or make small modifications to make them work. If you can provide me with some more details, I may be able to help more, but most computing groups have some introductory documentation on how to submit jobs to the cluster. I recommend seeking this information out, and I think the path forward will become clearer.

GPU computing has the potential to be very powerful. The development team has discussed this, but, at the current time, we don't have anyone working on it. If members of the community are interested, we encourage folks to take on the challenge!


-Sean

I have the same problem; the environment on our cluster is PBS.

For Fluent, we can use fluent -tx -ssh -cnf="hostfile" to specify the nodes to use.

However, for SU2, how do we solve this problem?

Thanks a lot!

CrashLaker April 25, 2014 09:14

Quote:

Originally Posted by hejiandong (Post 488111)
I have the same problem; the environment on our cluster is PBS.

For Fluent, we can use fluent -tx -ssh -cnf="hostfile" to specify the nodes to use.

However, for SU2, how do we solve this problem?

Thanks a lot!

You could either manually edit /SU2_RUN/SU2/run/interface.py or modify parallel_computation.py to accept another input (the longer way).

If you choose the first option, you should work from the hostfile PBS creates for you. (I've never used PBS, but some schedulers create a randomly named nodefile, so you could just do something like this:
cp $NODEFILE hosts)
Or you could extend parallel_computation.py to receive the hostfile name.
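To make that concrete, here is a hypothetical PBS job script along those lines. Queue options, core counts, and file names are placeholders; under PBS the scheduler-generated nodefile is normally available as $PBS_NODEFILE:

```shell
# Write a sketch of a PBS job script. Everything here is illustrative.
cat > run_su2.pbs <<'EOF'
#!/bin/bash
#PBS -N su2_job
#PBS -l nodes=2:ppn=8
cd $PBS_O_WORKDIR
# Copy the scheduler-generated nodefile to the fixed name
# that the (edited) SU2 scripts expect:
cp $PBS_NODEFILE hosts
parallel_computation.py -f your_config_file.cfg -p 16
EOF
cat run_su2.pbs
```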

hejiandong April 26, 2014 02:45

Quote:

Originally Posted by CrashLaker (Post 488182)
You could either manually edit /SU2_RUN/SU2/run/interface.py or modify parallel_computation.py to accept another input (the longer way).

If you choose the first option, you should work from the hostfile PBS creates for you. (I've never used PBS, but some schedulers create a randomly named nodefile, so you could just do something like this:
cp $NODEFILE hosts)
Or you could extend parallel_computation.py to receive the hostfile name.

Thanks, that helps me a lot!

I have edited the interface.py file with mpirun -hostfile myhostfile -np ..........
and now parallel_computation.py works on the nodes specified in my hostfile.

However, shape_optimization.py does not work. Can anyone give me some suggestions?

CrashLaker April 27, 2014 22:18

Quote:

Originally Posted by hejiandong (Post 488295)
Thanks, that helps me a lot!

I have edited the interface.py file with mpirun -hostfile myhostfile -np ..........
and now parallel_computation.py works on the nodes specified in my hostfile.

However, shape_optimization.py does not work. Can anyone give me some suggestions?

I think we need more experts to hop in here.
From what I've seen searching through shape_optimization.py, it calls "from scipy.optimize import fmin_slsqp", which I'm afraid isn't parallelized yet.

http://docs.scipy.org/doc/scipy-0.13...min_slsqp.html

hejiandong April 28, 2014 01:55

Quote:

Originally Posted by CrashLaker (Post 488555)
I think we need more experts to hop in here.
From what I've seen searching through shape_optimization.py, it calls "from scipy.optimize import fmin_slsqp", which I'm afraid isn't parallelized yet.

http://docs.scipy.org/doc/scipy-0.13...min_slsqp.html

Thanks a lot, I have solved this problem by specifying the full path of the hostfile in mpirun rather than just the name of the hostfile.
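The underlying issue is worth spelling out: mpirun resolves a bare hostfile name against the current working directory, and shape_optimization.py typically runs each design evaluation in its own subdirectory, so a relative name stops resolving there. A sketch, with placeholder paths and host names:

```shell
# Build the hostfile once and remember its absolute location.
HOSTFILE="$PWD/myhostfile"
printf 'node01\nnode02\n' > "$HOSTFILE"

# Relative name: only works while the launcher's cwd holds the file.
#   mpirun -hostfile myhostfile -np 16 ...
# Absolute path: keeps working even if the script changes directory.
#   mpirun -hostfile "$HOSTFILE" -np 16 ...
echo "hostfile at: $HOSTFILE"
```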

nilesh May 20, 2014 02:34

I have the same problem
 
Quote:

Originally Posted by CrashLaker (Post 483106)
Hello.

I've just added the hostfile, but now all the nodes are doing the same job :(

Anyone?

All my nodes are doing the same job, as you mentioned. I do not understand anything about the hostfile. Please help me with it.
I have an i7 machine with 4 cores (multithreaded to 8).
Thanks.

CrashLaker May 20, 2014 08:30

Quote:

Originally Posted by nilesh (Post 492987)
All my nodes are doing the same job, as you mentioned. I do not understand anything about the hostfile. Please help me with it.
I have an i7 machine with 4 cores (multithreaded to 8).
Thanks.

Seems like there's a problem with your MPI or with the way you're setting the LD_LIBRARY_PATH environment variable.
For example:
When using MPICH2, you have to add its lib directory to LD_LIBRARY_PATH.

Using a hostfile is needed when you want to run on more than one computer. But since you have only one, using -np alone will be just fine.

Are you using the 3.0 or the 3.1 version?
First of all, I recommend you recheck your MPI installation.

nilesh May 20, 2014 09:20

Quote:

Originally Posted by CrashLaker (Post 493095)
Seems like there's a problem with your MPI or with the way you're setting the LD_LIBRARY_PATH environment variable.
For example:
When using MPICH2, you have to add its lib directory to LD_LIBRARY_PATH.

Using a hostfile is needed when you want to run on more than one computer. But since you have only one, using -np alone will be just fine.

Are you using the 3.0 or the 3.1 version?
First of all, I recommend you recheck your MPI installation.

I am using version 3.1. Where and how do I add this LD_LIBRARY_PATH?

CrashLaker May 20, 2014 09:37

Quote:

Originally Posted by nilesh (Post 493108)
I am using version 3.1. Where and how do I add this LD_LIBRARY_PATH?

You can add this to your .bashrc file to make it permanent, or you can set it in your current session.

Open .bashrc; find "export LD_LIBRARY_PATH"; create a new line below that and write "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/pathtompich2/lib".

Or run this in your terminal:
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/pathtompich2/lib

I recommend the latter so that it doesn't interfere with other applications.
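Putting those two options side by side (the MPICH2 install prefix below is a placeholder; substitute wherever MPICH2 actually lives on your machine):

```shell
# Placeholder prefix; adjust to your actual MPICH2 install location.
MPICH2_DIR=/pathtompich2

# Option 1: current session only.
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$MPICH2_DIR/lib"

# Option 2: permanent -- append to ~/.bashrc instead, e.g.
#   echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/pathtompich2/lib' >> ~/.bashrc

echo "$LD_LIBRARY_PATH"
```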

nilesh May 21, 2014 09:32

Quote:

Originally Posted by CrashLaker (Post 493110)
You can add this to your .bashrc file to make it permanent or you can add this to your current session.

Open .bashrc; Find "export LD_LIBRARY_PATH"; Create a new line below that and write "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/pathtompich2/lib"

or add this on your terminal:
root$ LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/pathtompich2/lib

I recommend the latter one so that it doesn't interfere on another applications

I tried your suggestions, but the problem still persists. I also tried installing mpich2 again. DDC is unable to divide the mesh. Even if a divided mesh created on another machine is provided, SU2_CFD runs the whole problem in parallel on each node.
Mysteriously, when I tried installing it on another machine, a CentOS machine, with the same config procedure, it works.

CrashLaker May 21, 2014 11:09

Quote:

Originally Posted by nilesh (Post 493343)
I tried your suggestions, but the problem still persists. I also tried installing mpich2 again. DDC is unable to divide the mesh. Even if a divided mesh created on another machine is provided, SU2_CFD runs the whole problem in parallel on each node.
Mysteriously, when I tried installing it on another machine, a CentOS machine, with the same config procedure, it works.

Did you check whether the mpirun you're using (which mpirun) is the MPICH2 one?
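A quick way to check, safe to run even on a machine without MPI on the PATH:

```shell
# Find out which MPI launcher a bare "mpirun" would invoke.
if command -v mpirun >/dev/null 2>&1; then
    MPI_LAUNCHER=$(command -v mpirun)
    # "mpirun --version" then reports whether it is MPICH or Open MPI.
else
    MPI_LAUNCHER="none found on PATH"
fi
echo "mpirun: $MPI_LAUNCHER"
```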

nilesh May 22, 2014 05:21

Solved!!!
 
Quote:

Originally Posted by CrashLaker (Post 493385)
Did you check whether the mpirun you're using (which mpirun) is the MPICH2 one?

Thanks a lot, Mr. Carlos, I highly appreciate your help in this matter.
The problem probably has to do with the way Ubuntu installs MPI from the package manager. All MPIs somehow get installed in the same directory if done automatically, and then it becomes really difficult to ensure which one is being run.

The solution:
I removed all MPIs and then installed mpich (required for other purposes) from the package manager. Then I manually downloaded and installed mpich2 from source into another directory, and finally added "export PATH=/usr/lib/mpich2/bin:$PATH" to my .bashrc. It's finally up and running!!!! :) :) :)
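In other words, the fix is to make sure the source-built MPICH2 wins the PATH lookup. A minimal check, reusing the path from the post above:

```shell
# Prepend the source-built MPICH2 bin directory so it shadows any
# package-manager copies further down the PATH.
export PATH=/usr/lib/mpich2/bin:$PATH

# Confirm it is now first on the PATH:
echo "$PATH" | grep -q '^/usr/lib/mpich2/bin:' && echo "mpich2 is first on PATH"
```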

Is there a way to mark this post so that it could be easier for other users facing a similar problem?

