|
June 22, 2014, 14:14 |
Which mpi to use
|
#1 |
Member
Hedley
Join Date: May 2014
Posts: 52
Rep Power: 11 |
I have SU2 3.2 running on Linux. All works fine except when trying to use more than one core. When running parallel_computation.py -f ...cfg -n 4, all four cores run the same solve instead of sharing the workload.
Any help will be most appreciated. Do I have to use MPICH?
|
June 22, 2014, 15:06 |
|
#2 |
Member
Jianming Liu
Join Date: Mar 2009
Location: China
Posts: 71
Rep Power: 17 |
You should install the parallel version of SU2 correctly. It is very simple to install on Linux. For me, I just do the following:

./configure --prefix=$HOME/SU2_v3p2/SU2 --with-CGNS-lib=$HOME/cgnslib_3.1.3/src/lib --with-CGNS-include=$HOME/cgnslib_3.1.3/src --with-MPI=/usr/lib64/mpi/gcc/openmpi/bin/mpicxx CXXFLAGS="-O3"

If you do not want to use CGNS grids, you can omit the CGNS options. If you do want to use CGNS grids, you should install the CGNS library first; on my machine the CGNS directory is $HOME/cgnslib_3.1.3. And if you want to run the code in parallel, you must first have OpenMPI or MPICH installed correctly. Then type make and make install. The process is very simple. Good luck,
Jianming
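The build steps above can be sketched as a short script. This is a sketch only: the $HOME/cgnslib_3.1.3 and install-prefix paths come from this post and will differ per machine, and it assumes mpicxx from your MPI installation is already on the PATH.

```shell
# Build a parallel SU2 v3.2 from source (sketch; adjust paths to your system).
# --with-MPI must point at an MPI C++ compiler wrapper (mpicxx). Without it,
# SU2 builds serial-only, and mpirun will launch N identical serial solves.
./configure --prefix=$HOME/SU2_v3p2/SU2 \
    --with-MPI="$(command -v mpicxx)" \
    --with-CGNS-lib=$HOME/cgnslib_3.1.3/src/lib \
    --with-CGNS-include=$HOME/cgnslib_3.1.3/src \
    CXXFLAGS="-O3"
make
make install
```

The CGNS options can be dropped if you only use SU2's native mesh format.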
|
June 22, 2014, 16:26 |
|
#3 |
Member
Hedley
Join Date: May 2014
Posts: 52
Rep Power: 11 |
Thanks. I have been doing this for days and days. I thought the CXXFLAGS would correct the issue, BUT perhaps my command is incorrect, as each core runs every iteration, so the solve is in fact slower: it is doing the solve four times.
My command is: parallel_computation.py -f inv_NACA0012.cfg -n4 |
|
June 22, 2014, 16:31 |
Each core runs the same thread?
|
#4 |
Member
Hedley
Join Date: May 2014
Posts: 52
Rep Power: 11 |
Here is the output showing each core running the same thing after entering:
parallel_computation.py -f inv_NACA0012.cfg -n 4 |
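One way to check whether this symptom is an MPI problem rather than an SU2 problem (a diagnostic sketch; it assumes SU2_CFD, mpirun, and mpicxx are on your PATH): if SU2 was configured without --with-MPI, or if mpirun belongs to a different MPI installation than the one SU2 was linked against, each process runs as rank 0 of a one-process world, so you get N identical serial solves instead of one partitioned solve.

```shell
# Diagnose "every core runs the same solve" (sketch; names are assumptions).

# 1. Was SU2_CFD actually linked against an MPI library?
ldd "$(command -v SU2_CFD)" | grep -i mpi \
    || echo "SU2_CFD looks like a serial build: reconfigure with --with-MPI"

# 2. Does mpirun itself distribute ranks? This should print four lines.
mpirun -n 4 hostname

# 3. Do mpirun and mpicxx come from the same MPI install (OpenMPI vs MPICH)?
command -v mpirun mpicxx
```

If step 1 reports a serial build, rebuilding with --with-MPI pointed at mpicxx is the fix; if step 2 prints only one line per invocation, the launcher itself is misconfigured.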
|
June 24, 2014, 15:32 |
|
#5 |
New Member
Michael Colonno
Join Date: Jan 2013
Location: Stanford, CA
Posts: 28
Rep Power: 13 |
Any MPI implementation will work: we commonly use both MPICH2 and OpenMPI on a variety of platforms.
|
|
June 24, 2014, 15:48 |
|
#6 |
Member
Hedley
Join Date: May 2014
Posts: 52
Rep Power: 11 |
Thanks for the response. After many days and nights I gave up on Windows and got my Linux machine working by purchasing Intel MPI ($499), and everything worked. I had endless issues with OpenMPI and MPICH, which cost way more than that in unproductive time.
The next step for me is understanding mesh generation, which seems to be the real key to good results in the toolchain; after that, ParaView... the journey continues. |
|
June 24, 2014, 16:00 |
|
#7 |
New Member
Michael Colonno
Join Date: Jan 2013
Location: Stanford, CA
Posts: 28
Rep Power: 13 |
Sorry to hear about the difficulties; we've never had to do much of anything with either MPICH2 or OpenMPI besides the usual ./configure, make, make install, even on our high-speed network fabric. Glad you found a solution that works for you.
|
|