CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   SU2 (https://www.cfd-online.com/Forums/su2/)
-   -   SU2 Tutorial 2 Parallel Computation (https://www.cfd-online.com/Forums/su2/132437-su2-tutorial-2-parallel-computation.html)

CrashLaker March 31, 2014 22:19

Issues running the ONERA M6 case in parallel
 
Hello guys.
I've run the ONERA M6 case in serial successfully. Now I'm facing some issues trying to run it in parallel.

What should I do in order to parallelize SU2's 2nd tutorial?
What is parallel_computation.py actually doing?

I'm running on a cluster with 6 nodes, each with 2 Xeons.

I've run these tests:
Code:

/opt/su2/bin/parallel_computation.py -f inv_ONERAM6.cfg -p 12
/opt/su2mpich2/bin/parallel_computation.py -f inv_ONERAM6.cfg -p 12

These are my ./configure settings; I tried two MPI implementations (MPICH 3 and Intel MPI):
MPICH 3.1
Intel MPI 4.1
icc/icpc 14.0.1
Code:

./configure --prefix="/opt/su2mpich2" --with-Metis-lib="/opt/metis1/lib"
--with-Metis-include="/opt/metis1/include" --with-Metis-version=5
--with-MPI="/opt/mpich2/bin/mpicxx"

Code:

./configure --prefix="/opt/su2" --with-Metis-lib="/opt/metis1/lib"
--with-Metis-include="/opt/metis1/include" --with-Metis-version=5
--with-MPI="/opt/intel/impi/4.1.3.048/intel64/bin/mpicxx"

Here is part of the resulting output (iterations 41-43):
Code:

41 53.673333 -6.208682 -5.711006 0.286449 0.011889
41 53.693333 -6.208682 -5.711006 0.286449 0.011889
41 53.682143 -6.208682 -5.711006 0.286449 0.011889
41 53.572857 -6.208682 -5.711006 0.286449 0.011889
41 53.863571 -6.208682 -5.711006 0.286449 0.011889
41 53.754524 -6.208682 -5.711006 0.286449 0.011889
41 53.650000 -6.208682 -5.711006 0.286449 0.011889
41 53.890476 -6.208682 -5.711006 0.286449 0.011889
41 53.882857 -6.208682 -5.711006 0.286449 0.011889
41 53.901667 -6.208682 -5.711006 0.286449 0.011889
41 53.967381 -6.208682 -5.711006 0.286449 0.011889
41 53.824048 -6.208682 -5.711006 0.286449 0.011889
42 53.672791 -6.255845 -5.757245 0.286496 0.011876
42 53.678605 -6.255845 -5.757245 0.286496 0.011876
42 53.692791 -6.255845 -5.757245 0.286496 0.011876
42 53.572093 -6.255845 -5.757245 0.286496 0.011876
42 53.856279 -6.255845 -5.757245 0.286496 0.011876
42 53.745814 -6.255845 -5.757245 0.286496 0.011876
42 53.651628 -6.255845 -5.757245 0.286496 0.011876
42 53.879767 -6.255845 -5.757245 0.286496 0.011876
42 53.877209 -6.255845 -5.757245 0.286496 0.011876
42 53.894419 -6.255845 -5.757245 0.286496 0.011876
42 53.961628 -6.255845 -5.757245 0.286496 0.011876
42 53.823721 -6.255845 -5.757245 0.286496 0.011876
43 53.672955 -6.302153 -5.803464 0.286533 0.011862
43 53.675909 -6.302153 -5.803464 0.286533 0.011862
43 53.692273 -6.302153 -5.803464 0.286533 0.011862
43 53.571818 -6.302153 -5.803464 0.286533 0.011862
43 53.737727 -6.302153 -5.803464 0.286533 0.011862
43 53.847955 -6.302153 -5.803464 0.286533 0.011862
43 53.652500 -6.302153 -5.803464 0.286533 0.011862
43 53.876136 -6.302153 -5.803464 0.286533 0.011862
43 53.886364 -6.302153 -5.803464 0.286533 0.011862
43 53.876818 -6.302153 -5.803464 0.286533 0.011862
43 53.957045 -6.302153 -5.803464 0.286533 0.011862
43 53.822045 -6.302153 -5.803464 0.286533 0.011862

Note that each iteration is printed 12 times.

The same happens for any value passed to -p.

Note: I'm using MPI + Metis, not CGNS.

Thanks in advance!

hlk April 4, 2014 16:17

I have gotten this type of behavior when SU2 was not compiled with parallel tools. Go back to your configuration and check that the paths to the appropriate libraries are correct, and look through the configure output to make sure that SU2 is being compiled with MPI.
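
For example (the paths below are just the prefixes from your configure lines, so adjust them to your actual install), a quick way to check whether the installed binary is really an MPI build, and whether your launcher actually spreads processes across the nodes, is something like:
Code:

# An MPI build of SU2_CFD will usually show an MPI library here (e.g. libmpich);
# if nothing shows up, the binary is most likely a serial build
ldd /opt/su2/bin/SU2_CFD | grep -i mpi

# Launcher check: with your usual hostfile/scheduler settings you should see
# several different hostnames, not 12 copies of the same one
/opt/mpich2/bin/mpirun -np 12 hostname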

CrashLaker April 4, 2014 16:35

Quote:

Originally Posted by hlk (Post 483940)
I have gotten this type of behavior when SU2 was not compiled with parallel tools. Go back to your configuration and check that the paths to the appropriate libraries are correct, and look through the configure output to make sure that SU2 is being compiled with MPI.

I've compiled it with many different configurations. Even the configure script says it has support for MPI and METIS.

Do you strongly suspect this is an MPI problem rather than anything else?

hlk April 4, 2014 16:50

When you say that the configure script says it has MPI support, I assume you mean that there is a line in config.log like:
MPI support: yes
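
If you still have the build tree around, something along these lines (the grep patterns are only suggestions) will show what configure actually picked up:
Code:

# Run from the SU2 source directory where ./configure was executed
grep -i "mpi support" config.log          # should report yes for a parallel build
grep -i "metis" config.log | head          # paths should match your --with-Metis-* flags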

I see that you also commented on http://www.cfd-online.com/Forums/su2...ify-nodes.html

Since you are running on a cluster, as mentioned in the post linked above, you should also look into the cluster-specific requirements. Unfortunately, that is beyond my expertise, and you will need to talk to the cluster administrator or another user familiar with the specifics of your cluster.

Santiago Padron April 4, 2014 17:27

Your original post said that you compiled without Metis. Metis is needed to run a parallel computation; that is probably why you are seeing the repeated output. I would recommend you run make clean and then compile again with Metis support.

As for your question "What's parallel_computation.py actually doing?":
The parallel_computation.py script automatically handles the domain decomposition with SU2_DDC, the execution of SU2_CFD, and the merging of the decomposed files using SU2_SOL. This is described in more detail in Tutorial 6 - Turbulent ONERA M6.
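
Roughly speaking, the script just chains those three tools for you. The manual equivalent looks something like the sketch below; treat it as a sketch only, since the exact arguments (and how the partition count is passed to SU2_DDC) depend on your SU2 version, so check the tutorial for the details:
Code:

# 1. Decompose the mesh into one partition per MPI rank
SU2_DDC inv_ONERAM6.cfg

# 2. Run the flow solver in parallel, one rank per partition
mpirun -np 12 SU2_CFD inv_ONERAM6.cfg

# 3. Merge the per-rank solution files back into a single solution file
SU2_SOL inv_ONERAM6.cfg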

hexchain April 5, 2014 06:14

Hi,

Could you please explain how you compiled SU2 with Intel MPI? My attempt failed with several "mpi.h must be included before stdio.h" errors, and manually adding #undef SEEK_* makes it fail later on some MPI functions.

hlk April 5, 2014 13:39

The general directions for compiling with MPI and Metis can be found near the bottom of the following page:
http://adl-public.stanford.edu/docs/...on+from+Source

CrashLaker April 5, 2014 16:14

Quote:

Originally Posted by Santiago Padron (Post 483947)
Your original post said that you compiled without Metis. Metis is needed to run a parallel computation; that is probably why you are seeing the repeated output. I would recommend you run make clean and then compile again with Metis support.

As for your question "What's parallel_computation.py actually doing?":
The parallel_computation.py script automatically handles the domain decomposition with SU2_DDC, the execution of SU2_CFD, and the merging of the decomposed files using SU2_SOL. This is described in more detail in Tutorial 6 - Turbulent ONERA M6.

Hello Santiago. Thanks for your reply. The very first attempt was without Metis, but now Metis and MPI are installed.
Thanks for pointing me to the Turbulent ONERA M6 tutorial; I'm going to check it out for the in-depth details.

What should happen after running SU2_DDC? Should it create new mesh files depending on the number of partitions specified?
I'm asking because after I run SU2_DDC it doesn't create anything. Do you think there's a problem there?

Hexchain:
I was able to compile with Intel MPI easily, but I've read some threads in which you had to add an mpi.h include in 3 files (it's a thread in this forum, but I don't remember exactly which one).
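
If it helps, the SEEK_* clash is a known quirk of MPICH-derived MPIs (Intel MPI included); I believe defining MPICH_IGNORE_CXX_SEEK at configure time avoids it without touching the sources. Something like this (untested on my setup, same paths as my configure above):
Code:

# MPICH_IGNORE_CXX_SEEK skips the SEEK_SET/SEEK_CUR/SEEK_END check in the MPI C++ headers
CXXFLAGS="-DMPICH_IGNORE_CXX_SEEK" ./configure --prefix="/opt/su2" \
    --with-Metis-lib="/opt/metis1/lib" --with-Metis-include="/opt/metis1/include" \
    --with-Metis-version=5 \
    --with-MPI="/opt/intel/impi/4.1.3.048/intel64/bin/mpicxx"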

