www.cfd-online.com
Home > Forums > SU2 Installation

Which mpi to use


June 22, 2014, 14:14   #1
Which mpi to use
Member
 
Hedley
Join Date: May 2014
Posts: 48
I have SU2 3.2 running on Linux. Everything works fine except when trying to use more than one core: when running parallel_computation.py -f ...cfg -n 4, all four cores run the same solve instead of sharing the workload.

Any help will be most appreciated. Do I have to use MPICH?
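For reference, this is a sketch of what I am running (the config file is from the NACA 0012 tutorial; the comments describe the behavior I expect versus what I see):

```shell
# Sketch of the parallel run. With a correct MPI build, the mesh is
# partitioned once and each of the 4 ranks solves only its own piece.
parallel_computation.py -f inv_NACA0012.cfg -n 4

# What I actually see: four identical residual histories, one per
# process, as if four independent serial solves were launched.
```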

June 22, 2014, 15:06   #2
Member
 
Jianming Liu
Join Date: Mar 2009
Location: China
Posts: 66
You need to install the parallel version of SU2 correctly; it is very simple on Linux. For me, the configure step looks like the following:

./configure --prefix=$HOME/SU2_v3p2/SU2 --with-CGNS-lib=$HOME/cgnslib_3.1.3/src/lib --with-CGNS-include=$HOME/cgnslib_3.1.3/src --with-MPI=/usr/lib64/mpi/gcc/openmpi/bin/mpicxx CXXFLAGS="-O3"


If you do not want to use CGNS grids, you can omit the CGNS options. If you do want to use them, you should install the CGNS library first; on my machine it lives in $HOME/cgnslib_3.1.3. And to run the code in parallel, you must first have OpenMPI or MPICH installed correctly.
Then type make and make install. The process is very simple.
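Put together, the whole build looks like this (the CGNS and MPI paths here are from my machine and are assumptions; adjust them for your system):

```shell
# Build sketch: point --with-MPI at the MPI C++ compiler wrapper so
# configure enables the parallel build, then compile and install.
./configure --prefix=$HOME/SU2_v3p2/SU2 \
    --with-CGNS-lib=$HOME/cgnslib_3.1.3/src/lib \
    --with-CGNS-include=$HOME/cgnslib_3.1.3/src \
    --with-MPI=/usr/lib64/mpi/gcc/openmpi/bin/mpicxx \
    CXXFLAGS="-O3"
make
make install
```

If configure cannot find the wrapper at that path, `which mpicxx` will tell you where your MPI installation put it.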

Good luck

Jianming

June 22, 2014, 16:26   #3
Member
 
Hedley
Join Date: May 2014
Posts: 48
Thanks. I have been doing this for days and days. I thought the CXXFLAGS would correct the issue, but perhaps my command is incorrect, because each core runs every iteration, so the solve is in fact slower: it is doing the solve four times.

My command is:

parallel_computation.py -f inv_NACA0012.cfg -n4

June 22, 2014, 16:31   #4
Each core runs the same thread?
Member
 
Hedley
Join Date: May 2014
Posts: 48
Here is the output showing each core running the same thing after entering:

parallel_computation.py -f inv_NACA0012.cfg -n 4
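One way to narrow this down (a sketch; it assumes SU2_CFD and mpirun are on your PATH) is to check whether the binary was actually linked against an MPI library, since a serial binary launched under mpirun simply runs n independent copies:

```shell
# If SU2_CFD was built with MPI support, an MPI shared library should
# appear among its dependencies; a serial build shows no such line.
ldd $(which SU2_CFD) | grep -i mpi

# Sanity-check the launcher itself: this should print one hostname
# line per rank, confirming mpirun really starts 4 processes.
mpirun -n 4 hostname
```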
Attached image: multi iterations.PNG (56.4 KB), a screenshot of the repeated iteration output.

June 24, 2014, 15:32   #5
New Member
 
Michael Colonno
Join Date: Jan 2013
Location: Stanford, CA
Posts: 28
Any MPI implementation will work; we commonly use both MPICH2 and OpenMPI on a variety of platforms.

June 24, 2014, 15:48   #6
Member
 
Hedley
Join Date: May 2014
Posts: 48
Thanks for the response. After many days and nights I gave up on Windows and got my Linux machine working by purchasing Intel MPI for $499, and everything worked. I had endless issues with OpenMPI and MPICH, which cost far more than that in unproductive time.

The next step for me is understanding mesh generation, which seems to be the real key to good results in the toolchain; after that, ParaView... the journey continues.

June 24, 2014, 16:00   #7
New Member
 
Michael Colonno
Join Date: Jan 2013
Location: Stanford, CA
Posts: 28
Sorry to hear about the difficulties; we've never had to do much of anything with either MPICH2 or OpenMPI besides the usual ./configure, make, make install, even on our high-speed network fabric. Glad you found a solution that works for you.
