How to run SU2 on a cluster of computers (how to specify nodes?)
#1
Member
Amit
Join Date: May 2013
Posts: 85
Rep Power: 13
Dear SU2 Developers,
The manual explains how to run SU2 on a multicore machine. If I want to run it on a cluster of workstations (each with multiple cores), what is the command, and how do I specify the number of nodes? Also, GPU computing is becoming very popular (cost-effective, fast computing). Is there any plan to release a GPU version of SU2 in the near future? Thanks
#2
New Member
Santiago Padron
Join Date: May 2013
Posts: 17
Rep Power: 13
Hi,
In order to run SU2 in parallel, you first need to compile the code with parallel (MPI) support; you can then use the Python wrapper like so: $ parallel_computation.py -f your_config_file.cfg -p 4 (-f names the config file, -p the number of processors). Note that different clusters have their own protocols for submitting jobs, so you will have to look into that with your cluster administrator.
Santiago
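For a concrete example, a four-process run on a single workstation might look like this (the config file name is a placeholder for your own):

```shell
# Run SU2 in parallel on 4 processors via the bundled Python wrapper
parallel_computation.py -f your_config_file.cfg -p 4
```

On a cluster, this same command usually goes inside whatever job script your scheduler expects, rather than being typed at an interactive prompt.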
#3
Member
Sean R. Copeland
Join Date: Jan 2013
Posts: 40
Rep Power: 13
Hi Harry,
It sounds like you are able to run parallel cases on a multi-core workstation using the parallel run tools included in the SU2 distribution. If not, the manual provides details on installing with MPI and executing parallel cases with the parallel_computation.py script.

The syntax for running SU2 on clusters depends on the job-submission environment installed on the cluster. There are several standards (SLURM, PBS, etc.), and depending on the particular environment of your cluster, you may be able to use the existing Python tools directly or with small modifications. If you can provide some more details, I may be able to help more, but most computing groups have introductory documentation on how to submit jobs to their cluster. I recommend seeking that information out; the path forward should then become clearer.

GPU computing has the potential to be very powerful. The development team has discussed it, but at the current time we don't have anyone working on it. If members of the community are interested, we encourage folks to take on the challenge!
-Sean
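To illustrate what a scheduler submission can look like, here is a minimal sketch assuming a SLURM cluster; the job name, node counts, and config file name are all hypothetical and would need to match your site's setup:

```shell
#!/bin/bash
#SBATCH --job-name=su2_run
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16

# SLURM tells MPI which nodes were allocated, so the SU2 wrapper
# only needs the total number of ranks (2 nodes x 16 cores = 32)
parallel_computation.py -f your_config_file.cfg -p 32
```

This would be submitted with sbatch; a PBS cluster would use #PBS directives and qsub instead.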
#4
Member
Amit
Join Date: May 2013
Posts: 85
Rep Power: 13
Thanks Santiago/Cope,
With small modifications to the Python script, I am able to run the problem on a distributed cluster.
#5
New Member
Join Date: Feb 2013
Posts: 12
Rep Power: 13
Hi Aero_Amit,
Could you please specify what modifications you made in order to submit your job? I am facing similar problems when trying to run a parallel computation on a cluster.
#6
Member
Amit
Join Date: May 2013
Posts: 85
Rep Power: 13
Hi Abhii,
As the developers said, it depends on the job-submission environment. For me, adding a hostfile to the mpirun command worked.
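As a sketch of that approach, assuming Open MPI (the node names, slot counts, and rank count are examples; MPICH expects a slightly different hostfile syntax):

```shell
# Write a hostfile: one line per machine with its MPI slot count
cat > hosts <<'EOF'
node01 slots=8
node02 slots=8
EOF

# Launch SU2 across both machines, 16 ranks in total
mpirun -hostfile hosts -np 16 SU2_CFD your_config_file.cfg
```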
#7
Member
Carlos Alexandre Tomigawa Aguni
Join Date: Mar 2014
Posts: 40
Rep Power: 12
Hello.
I've just added the hostfile, but now all the nodes are each running the same complete job rather than sharing the work. Anyone?
#8
New Member
何建东
Join Date: Jun 2013
Posts: 22
Rep Power: 13
Quote:
Can you tell me which script to add the hostfile in? I can't find mpirun in either parallel_computation.py or shape_optimization.py. For Fluent, we can use -cnf to specify nodes.
#9
New Member
何建东
Join Date: Jun 2013
Posts: 22
Rep Power: 13
Quote:
For Fluent, we can use fluent -tx -ssh -cnf="hostfile" to specify the nodes to use. How do we do the equivalent for SU2? Thanks a lot!
#10
Member
Carlos Alexandre Tomigawa Aguni
Join Date: Mar 2014
Posts: 40
Rep Power: 12
Quote:
If you choose the first option, you should modify the hostfile PBS creates for you. (I've never used PBS, but some installations create a nodefile with a random name, so you could just do something like cp $NODEFILE hosts.) Or you could modify parallel_computation.py to accept the hostfile name.
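Spelled out, that PBS idea might look like this inside a job script; $PBS_NODEFILE is the standard PBS variable, while the fixed name "hosts" and the process count are examples:

```shell
# PBS writes the list of allocated nodes to $PBS_NODEFILE,
# whose name/path varies per job, so copy it to a fixed name
cp "$PBS_NODEFILE" hosts

# A wrapper modified to read "hosts" can then launch across the allocation
parallel_computation.py -f your_config_file.cfg -p 16
```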
#11
New Member
何建东
Join Date: Jun 2013
Posts: 22
Rep Power: 13
Quote:
I have edited the interface.py file with mpirun -hostfile myhostfile -np ..., and now parallel_computation.py works on the nodes specified in my hostfile. However, shape_optimization.py does not work. Can anyone give me some suggestions?
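For reference, the launch line that such an edit produces inside interface.py ends up equivalent to running something like this by hand (the hostfile name, rank count, and config name here are examples):

```shell
mpirun -hostfile myhostfile -np 16 SU2_CFD config_CFD.cfg
```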
#12
Member
Carlos Alexandre Tomigawa Aguni
Join Date: Mar 2014
Posts: 40
Rep Power: 12
Quote:
From what I've seen searching through shape_optimization.py, it calls "from scipy.optimize import fmin_slsqp", which I'm afraid isn't parallelized yet. http://docs.scipy.org/doc/scipy-0.13...min_slsqp.html
#13
New Member
何建东
Join Date: Jun 2013
Posts: 22
Rep Power: 13
Quote:
#14
New Member
nilesh
Join Date: Mar 2014
Location: Kanpur / Mumbai, India
Posts: 27
Rep Power: 12
Quote:
I have an i7 machine with 4 cores (hyper-threaded to 8 logical cores). Thanks.
#15
Member
Carlos Alexandre Tomigawa Aguni
Join Date: Mar 2014
Posts: 40
Rep Power: 12
Quote:
Make sure you're adding the MPI libraries to your LD_LIBRARY_PATH environment variable. For example, when using MPICH2 you have to add its lib directory to LD_LIBRARY_PATH. A hostfile is only needed when you want to run on more than one computer; since you have only one machine, using -np by itself is fine. Are you using version 3.0 or 3.1? First of all, I recommend you recheck your MPI installation.
#16
New Member
nilesh
Join Date: Mar 2014
Location: Kanpur / Mumbai, India
Posts: 27
Rep Power: 12
Quote:
#17
Member
Carlos Alexandre Tomigawa Aguni
Join Date: Mar 2014
Posts: 40
Rep Power: 12
Quote:
Open .bashrc, find the line with "export LD_LIBRARY_PATH", and add a new line below it: "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/pathtompich2/lib". Alternatively, run the same export command in your terminal. I recommend the terminal option so that it doesn't interfere with other applications.
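Written out, the terminal variant looks like this; /pathtompich2 is a placeholder for your actual MPICH2 install prefix:

```shell
# Append the MPICH2 library directory to the dynamic-loader search path
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/pathtompich2/lib"

# Verify the new entry was picked up
echo "$LD_LIBRARY_PATH"
```

Run in a terminal, the change only lasts for that shell session, which is why it is less likely to disturb other applications than editing .bashrc.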
#18
New Member
nilesh
Join Date: Mar 2014
Location: Kanpur / Mumbai, India
Posts: 27
Rep Power: 12
Quote:
Mysteriously, when I tried installing it on another machine running CentOS, it worked with the same configuration procedure.
#19
Member
Carlos Alexandre Tomigawa Aguni
Join Date: Mar 2014
Posts: 40
Rep Power: 12
Quote:
#20
New Member
nilesh
Join Date: Mar 2014
Location: Kanpur / Mumbai, India
Posts: 27
Rep Power: 12
Quote:
The problem probably has to do with the way Ubuntu installs MPI from the package manager: all MPI implementations end up in the same directory when installed automatically, and then it becomes really difficult to tell which one is actually being run. The solution: I removed all the MPI packages, reinstalled MPICH (required for other purposes) from the package manager, then manually downloaded and built MPICH2 from source into a separate directory, and finally added "export PATH=/usr/lib/mpich2/bin:$PATH" to my .bashrc. It's finally up and running! Is there a way to mark this post so that it's easier to find for other users facing a similar problem?
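That PATH fix, as shell lines; the /usr/lib/mpich2 prefix is the one from the post above, and yours may differ:

```shell
# Put the manually built MPICH2 first on PATH so its mpirun is found first
export PATH=/usr/lib/mpich2/bin:$PATH

# Inspect the search order to confirm
echo "$PATH"
```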