Which MPI to use
I have SU2 3.2 running on Linux. Everything works fine except when I try to use more than one core: when running parallel_computation.py -f ...cfg -n 4, all four cores run the same solve instead of sharing the workload.
Any help would be much appreciated. Do I have to use MPICH? |
You should install the parallel version of SU2 correctly. It is very simple to install on Linux. For me, the following works:

./configure --prefix=$HOME/SU2_v3p2/SU2 --with-CGNS-lib=$HOME/cgnslib_3.1.3/src/lib --with-CGNS-include=$HOME/cgnslib_3.1.3/src --with-MPI=/usr/lib64/mpi/gcc/openmpi/bin/mpicxx CXXFLAGS="-O3"

If you do not want to use CGNS grids, you can omit the CGNS options; if you do want them, you should install the CGNS library first. On my machine the CGNS directory is $HOME/cgnslib_3.1.3. And if you want to run the code in parallel, you must first have OpenMPI or MPICH installed correctly. Then type make and make install. The process is very simple. Good luck, Jianming |
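To make the steps above concrete, here is a sketch of the full build sequence. The paths are the examples from the post above, not required locations, and the environment variables are the ones SU2's Python wrapper scripts conventionally look for; adjust everything to your own install.

```shell
# Configure a parallel build (CGNS options omitted here; add them back if
# you need CGNS grids). Paths below are examples from the post above.
./configure --prefix=$HOME/SU2_v3p2/SU2 \
            --with-MPI=/usr/lib64/mpi/gcc/openmpi/bin/mpicxx \
            CXXFLAGS="-O3"
make
make install

# SU2's wrapper scripts (e.g. parallel_computation.py) find the solver
# binaries through these variables, so export them (e.g. in ~/.bashrc):
export SU2_RUN=$HOME/SU2_v3p2/SU2/bin
export SU2_HOME=$HOME/SU2_v3p2/SU2
export PATH=$PATH:$SU2_RUN
export PYTHONPATH=$PYTHONPATH:$SU2_RUN
```

If configure does not report that MPI was found, the resulting SU2_CFD will be serial, and launching it under mpirun reproduces exactly the "four identical solves" symptom from the first post.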
Thanks. I have been trying this for days and days. I thought the CXXFLAGS would correct the issue, but perhaps my command is incorrect, because each core runs every iteration, so the solve is in fact slower: it is doing the solve four times.
My command is: parallel_computation.py -f inv_NACA0012.cfg -n4 |
Each core runs the same thread?
Here is the output (attached) showing each core running the same computation after entering:
parallel_computation.py -f inv_NACA0012.cfg -n 4 |
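Duplicated per-core output like this is the classic sign that each process is an independent serial run: either SU2 was configured without --with-MPI, or the mpirun used to launch it belongs to a different MPI installation than the one SU2 was compiled against. A minimal sanity check, assuming OpenMPI- or MPICH-style tools on PATH:

```shell
# One mpirun invocation should produce four lines of output if the
# launcher is really spawning four ranks:
mpirun -n 4 hostname

# Confirm the launcher and the compiler wrapper come from the SAME MPI
# installation. Mixing OpenMPI and MPICH gives exactly this symptom:
# every rank believes it is rank 0 of a world of size 1.
which mpirun mpicxx
mpirun --version
```

If `which mpicxx` points at the OpenMPI wrapper you configured with, but `mpirun` resolves to a different MPI's launcher, fixing PATH so both come from one installation should make the ranks cooperate.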
Any MPI implementation should work: we commonly use both MPICH2 and OpenMPI on a variety of platforms. |
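One way to see which MPI (if any) an installed SU2_CFD binary was actually linked against is to inspect its shared-library dependencies. This is a hedged sketch, assuming a dynamically linked build and that SU2_CFD is on your PATH; exact library names vary between MPI implementations.

```shell
# A dynamically linked parallel build should list an MPI library here
# (e.g. libmpi.so for OpenMPI or libmpich.so for MPICH). No match
# usually means the binary was built serial.
ldd $(which SU2_CFD) | grep -i mpi
```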
Thanks for the response. After many days and nights I gave up on Windows and got my Linux machine working by purchasing Intel MPI for $499, and everything worked. I had endless issues with OpenMPI and MPICH, which cost far more than that in unproductive time.
The next step for me is understanding mesh generation, which seems to be the real key to good results in the toolchain, after which ParaView... the journey continues. |
Sorry to hear about the difficulties; we've never had to do much of anything with either MPICH2 or OpenMPI besides the usual ./configure, make, make install, even on our high-speed network fabrics. Glad you found a solution that works for you. |