Infiniband

Old   February 24, 2006, 04:35
  #1
New Member
Alexander Rudert
Join Date: Mar 2009
Location: Freiberg, Saxony, Germany
Posts: 1
Hello!

I could use some help starting OpenFOAM/LAM on a Linux cluster with an InfiniBand interconnect. So far I can only use the Ethernet between the nodes. I tried lamboot -v -ssi rpi ib hostfile, but I only get the message "ib module not found". Does anybody know this problem?
And another question: how much better is InfiniBand compared to Gigabit Ethernet? Does anybody have experience with InfiniBand?
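
Regarding the lamboot error above: a quick sketch (assuming a standard LAM/MPI installation; solver, root and case are placeholders) is to first check with laminfo which modules were actually compiled into LAM. If no ib entry appears among the rpi modules, the LAM build has no InfiniBand support and has to be rebuilt before the module can be selected:

# list the SSI modules built into this LAM installation; look for 'ib' among the rpi entries
laminfo

# only if the ib rpi module is listed can it be selected at run time, e.g.:
mpirun -ssi rpi ib -np 4 <solver> <root> <case> -parallel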

Old   February 24, 2006, 04:56
  #2
Senior Member
Francesco Del Citto
Join Date: Mar 2009
Location: Zürich Area, Switzerland
Posts: 237
I guess you shouldn't use LAM as the MPI library; you should use MVAPICH, the MPI implementation for InfiniBand.
I'll try it as soon as I can, and I'll let you know.
Regarding InfiniBand, it's really much better than Gigabit. The transfer rate of InfiniBand, measured from the application, is about 960 MB/s (megabytes per second), and the latency is very, very low.
You can therefore use a high number of processors in a parallel run.
As an example, my application saturates Gigabit with 12 processes, while with InfiniBand you can scale up to 48 processes and more, reducing wall-clock computational time.
It is really worth spending some time understanding how to use InfiniBand.

Old   February 24, 2006, 05:51
  #3
Senior Member
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
I have a similar issue. The lam compilation shipped with OpenFOAM does not include support for Myrinet, but I know that lam does have a native Myrinet module. Does anyone know which options you need to invoke to compile these modules into lam?

Old   February 24, 2006, 06:22
  #4
Member
stefan
Join Date: Mar 2009
Posts: 96
You have to build LAM from source for native support of Myrinet.

You can do this with the gm option:
./configure --prefix=/your/lam/dir --with-gm=/path/to/gm ...

Old   February 24, 2006, 06:28
  #5
Senior Member
Håkan Nilsson
Join Date: Mar 2009
Location: Gothenburg, Sweden
Posts: 203
We have recently made a test of Gigabit vs. InfiniBand. I will post a short report on it on the discussion page soon. The test does not show a significant improvement when using InfiniBand, at least for the case that was used in the test. The test was a simpleFoam computation with 10^6 cells distributed over 1-16 CPUs. In the higher range of CPU counts there might be some improvement with InfiniBand, and with even more CPUs InfiniBand might do better relative to Gigabit. For 16 CPUs, however, the parallel efficiency is only 0.45 for Gigabit and 0.51 for InfiniBand, which is not really great.

My collaboration partner at Gridcore (www.gridcore.se) did the technical work. He just re-compiled Pstream and linked it to the correct InfiniBand MPI library.
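
Roughly, that rebuild looks like this (a sketch only, assuming the standard OpenFOAM-1.3 tree and that the environment settings, e.g. WM_MPLIB and the MPI include/library paths, have already been pointed at the InfiniBand MPI):

# rebuild the parallel communication layer against the selected MPI
cd $WM_PROJECT_DIR/src/Pstream
./Allwmake
# the parallel solvers then pick up the newly built Pstream library at run time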

Håkan.

Old   February 24, 2006, 06:35
  #6
Member
stefan
Join Date: Mar 2009
Posts: 96
The flag mentioned above is still usable but deprecated; please use the --with-rpi-gm=PATH flag instead!

For further details, have a look at the LAM/MPI Installation Guide.
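
A minimal build sketch with the newer flag (all paths are placeholders; the shared/static options mirror the ones used later in this thread):

./configure --prefix=/your/lam/dir \
    --with-rpi-gm=/path/to/gm \
    --enable-shared --disable-static
gmake
gmake install

# afterwards, laminfo should list gm among the available rpi modules
laminfo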

Old   February 24, 2006, 06:51
  #7
Senior Member
Francesco Del Citto
Join Date: Mar 2009
Location: Zürich Area, Switzerland
Posts: 237
We have recently made a test of Gigabit vs. InfiniBand on an AMD Opteron cluster with another application (a Navier-Stokes code for engine applications with structured grids and moving meshes), written in Fortran 77/Fortran 95 on top of MPI, and the differences between the networks are really significant.
Depending on the case, we saturated Gigabit with 12 processors (speedup < 1), while with InfiniBand the speedup keeps growing up to 24 processors, which was the maximum number of processes we tested.
With a bigger test case, on 16 processors the run took 113 min with Gigabit and 75 min with InfiniBand. We couldn't test more than 16 processors with Gigabit, but with InfiniBand the speedup keeps growing up to 48 processors, which is the total number of CPUs of the cluster.
With combustion, moreover, the speedup obviously grows further, with a parallel efficiency of up to 63% on 30 processors.
I have no experience with OpenFOAM over InfiniBand, but for some kinds of applications, this kind of network is what really makes the difference.

Old   February 27, 2006, 06:00
  #8
Senior Member
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
Dear Håkan,

Are you comparing versus a sequential run? Or is this the %cpu utilization? Is the difference due to the number of sweeps being much larger?

Did you try various decompositions? Hierarchical with almost equal number in x,y,z is a good starting point.
Did you try scheduledTransfer on/off?
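
For reference, a hierarchical decomposition is set up in system/decomposeParDict roughly like this (a sketch; the counts below are just an example for 16 CPUs, and the product of n must equal numberOfSubdomains):

numberOfSubdomains 16;

method          hierarchical;

hierarchicalCoeffs
{
    n               (4 2 2);    // subdomains in x, y and z
    delta           0.001;
    order           xyz;
}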

Old   February 27, 2006, 07:23
  #9
Senior Member
Håkan Nilsson
Join Date: Mar 2009
Location: Gothenburg, Sweden
Posts: 203
Dear 'Infiniband Interest Group',

You can find a short report on my test of Infiniband here: Infiniband_vs_Gigabit.pdf.gz

I would be very happy to get some comments on how the test was made. I will soon make similar comparisons with InfiniPath, and I would like to make as good a test as possible.

I hope that the document answers the questions from Mattijs. Some additional answers follow:

The number of iterations for p changes slightly with the decomposition: typically 229, 251, 258, 261 and 271 for 1, 2, 4, 8 and 16 CPUs. The other equations take 1 iteration for all decompositions. Is this what you are referring to by 'number of sweeps'? I don't know if the actual linear solver (ICCG) does more sweeps per iteration.

I did not try various decompositions, only load-balanced Metis. This test is only a small side project next to what I should really be doing, and I don't have the time to try all the options. There are many ways of decomposing, but I think the Metis decomposition should be a good starting point.

I did not try scheduledTransfer on/off. Can you please tell me what this affects, and how to set it? Thank you in advance, Mattijs!

I have asked before for some hints on how best to make the comparisons, and that question is still open.

Håkan.

Old   February 27, 2006, 07:53
  #10
Senior Member
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
scheduledTransfer makes sure one side (process) of a processor patch is in receive mode while the other one is sending. This means you can use mpi_send instead of mpi_bsend. Sometimes beneficial (no buffer, no thread?), sometimes detrimental (more chance of everyone waiting for the slowest).
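
For reference, a sketch of where such a switch would typically be toggled, assuming (like the other parallel tuning flags) that it is read from the OptimisationSwitches section of the installation's global controlDict; the file location and the exact entry name should be checked against the installed version:

OptimisationSwitches
{
    // 1 = paired (scheduled) send/receive on processor patches, 0 = buffered sends
    scheduledTransfer 1;
}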

Old   February 27, 2006, 08:49
  #11
Assistant Moderator
Bernhard Gschaider
Join Date: Mar 2009
Posts: 4,225
Hi Håkan!

I recently got a benchmark paper by Cisco in which they compared Fluent on Gigabit Ethernet with InfiniBand. For the case that was similar to yours (850k cells), their results over the range of processor counts you had available are very similar to yours: the speedups with Ethernet and IB are almost the same. For larger numbers of processors (they had 64 nodes) IB performs better.

The other results (for smaller runs) suggest, in my opinion, that for InfiniBand to be of use you need either small runs and/or a large number (>>10) of nodes. I'd say that the major advantage of IB is not the bandwidth but the low latency.

(If you're interested I can send you the PDF)

Old   May 24, 2006, 02:33
  #12
Senior Member
Jens Klostermann
Join Date: Mar 2009
Posts: 117
Hi Infiniband users!

I have problems compiling LAM 7.1.1 with InfiniBand support. What I did (everything that is in the Allwmake file in the src directory):

gmake distclean
rm -rf $LAM_ARCH_PATH

./configure \
--prefix=$LAM_ARCH_PATH \
--with-rpi-ib=/usr/ibgd/driver/infinihost \
--enable-shared \
--disable-static \
--without-romio \
--without-mpi2cpp \
--without-profiling \
--without-fc

gmake


When I compile it, I get the following error:

gmake[6]: Entering directory `/home/klosterm/OpenFOAM/OpenFOAM-1.3/src/lam-7.1.1/share/ssi/rpi/ib/src'
if /bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I. -DLAM_SSI_RPI_IB_TINYMSGLEN=32768 -DLAM_SSI_RPI_IB_PORT=-1 -DLAM_SSI_RPI_IB_HCA_ID="" -DLAM_SSI_RPI_IB_NUM_ENVELOPES=64 -I../../../../../share/include -I../../../../../share/include -I../../../../../share/include -DLAM_BUILDING=1 -DLAM_BUILDING=1 -I/usr/ibgd/driver/infinihost/include -O3 -m64 -fPIC -MT ssi_rpi_ib_ack.lo -MD -MP -MF ".deps/ssi_rpi_ib_ack.Tpo" -c -o ssi_rpi_ib_ack.lo ssi_rpi_ib_ack.c; \
then mv -f ".deps/ssi_rpi_ib_ack.Tpo" ".deps/ssi_rpi_ib_ack.Plo"; else rm -f ".deps/ssi_rpi_ib_ack.Tpo"; exit 1; fi
mkdir .libs
gcc -DHAVE_CONFIG_H -I. -I. -I. -DLAM_SSI_RPI_IB_TINYMSGLEN=32768 -DLAM_SSI_RPI_IB_PORT=-1 -DLAM_SSI_RPI_IB_HCA_ID= -DLAM_SSI_RPI_IB_NUM_ENVELOPES=64 -I../../../../../share/include -I../../../../../share/include -I../../../../../share/include -DLAM_BUILDING=1 -DLAM_BUILDING=1 -I/usr/ibgd/driver/infinihost/include -O3 -m64 -fPIC -MT ssi_rpi_ib_ack.lo -MD -MP -MF .deps/ssi_rpi_ib_ack.Tpo -c ssi_rpi_ib_ack.c -fPIC -DPIC -o .libs/ssi_rpi_ib_ack.o
if /bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I. -DLAM_SSI_RPI_IB_TINYMSGLEN=32768 -DLAM_SSI_RPI_IB_PORT=-1 -DLAM_SSI_RPI_IB_HCA_ID="" -DLAM_SSI_RPI_IB_NUM_ENVELOPES=64 -I../../../../../share/include -I../../../../../share/include -I../../../../../share/include -DLAM_BUILDING=1 -DLAM_BUILDING=1 -I/usr/ibgd/driver/infinihost/include -O3 -m64 -fPIC -MT ssi_rpi_ib_actions.lo -MD -MP -MF ".deps/ssi_rpi_ib_actions.Tpo" -c -o ssi_rpi_ib_actions.lo ssi_rpi_ib_actions.c; \
then mv -f ".deps/ssi_rpi_ib_actions.Tpo" ".deps/ssi_rpi_ib_actions.Plo"; else rm -f ".deps/ssi_rpi_ib_actions.Tpo"; exit 1; fi
gcc -DHAVE_CONFIG_H -I. -I. -I. -DLAM_SSI_RPI_IB_TINYMSGLEN=32768 -DLAM_SSI_RPI_IB_PORT=-1 -DLAM_SSI_RPI_IB_HCA_ID= -DLAM_SSI_RPI_IB_NUM_ENVELOPES=64 -I../../../../../share/include -I../../../../../share/include -I../../../../../share/include -DLAM_BUILDING=1 -DLAM_BUILDING=1 -I/usr/ibgd/driver/infinihost/include -O3 -m64 -fPIC -MT ssi_rpi_ib_actions.lo -MD -MP -MF .deps/ssi_rpi_ib_actions.Tpo -c ssi_rpi_ib_actions.c -fPIC -DPIC -o .libs/ssi_rpi_ib_actions.o
ssi_rpi_ib_actions.c: In function 'send_peer_fc_info':
ssi_rpi_ib_actions.c:1202: warning: right shift count >= width of type
ssi_rpi_ib_actions.c:1244: warning: left shift count >= width of type
if /bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I. -DLAM_SSI_RPI_IB_TINYMSGLEN=32768 -DLAM_SSI_RPI_IB_PORT=-1 -DLAM_SSI_RPI_IB_HCA_ID="" -DLAM_SSI_RPI_IB_NUM_ENVELOPES=64 -I../../../../../share/include -I../../../../../share/include -I../../../../../share/include -DLAM_BUILDING=1 -DLAM_BUILDING=1 -I/usr/ibgd/driver/infinihost/include -O3 -m64 -fPIC -MT ssi_rpi_ib_bitmap.lo -MD -MP -MF ".deps/ssi_rpi_ib_bitmap.Tpo" -c -o ssi_rpi_ib_bitmap.lo ssi_rpi_ib_bitmap.c; \
then mv -f ".deps/ssi_rpi_ib_bitmap.Tpo" ".deps/ssi_rpi_ib_bitmap.Plo"; else rm -f ".deps/ssi_rpi_ib_bitmap.Tpo"; exit 1; fi
gcc -DHAVE_CONFIG_H -I. -I. -I. -DLAM_SSI_RPI_IB_TINYMSGLEN=32768 -DLAM_SSI_RPI_IB_PORT=-1 -DLAM_SSI_RPI_IB_HCA_ID= -DLAM_SSI_RPI_IB_NUM_ENVELOPES=64 -I../../../../../share/include -I../../../../../share/include -I../../../../../share/include -DLAM_BUILDING=1 -DLAM_BUILDING=1 -I/usr/ibgd/driver/infinihost/include -O3 -m64 -fPIC -MT ssi_rpi_ib_bitmap.lo -MD -MP -MF .deps/ssi_rpi_ib_bitmap.Tpo -c ssi_rpi_ib_bitmap.c -fPIC -DPIC -o .libs/ssi_rpi_ib_bitmap.o
if /bin/sh ../libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I. -DLAM_SSI_RPI_IB_TINYMSGLEN=32768 -DLAM_SSI_RPI_IB_PORT=-1 -DLAM_SSI_RPI_IB_HCA_ID="" -DLAM_SSI_RPI_IB_NUM_ENVELOPES=64 -I../../../../../share/include -I../../../../../share/include -I../../../../../share/include -DLAM_BUILDING=1 -DLAM_BUILDING=1 -I/usr/ibgd/driver/infinihost/include -O3 -m64 -fPIC -MT ssi_rpi_ib_dreg.lo -MD -MP -MF ".deps/ssi_rpi_ib_dreg.Tpo" -c -o ssi_rpi_ib_dreg.lo ssi_rpi_ib_dreg.c; \
then mv -f ".deps/ssi_rpi_ib_dreg.Tpo" ".deps/ssi_rpi_ib_dreg.Plo"; else rm -f ".deps/ssi_rpi_ib_dreg.Tpo"; exit 1; fi
gcc -DHAVE_CONFIG_H -I. -I. -I. -DLAM_SSI_RPI_IB_TINYMSGLEN=32768 -DLAM_SSI_RPI_IB_PORT=-1 -DLAM_SSI_RPI_IB_HCA_ID= -DLAM_SSI_RPI_IB_NUM_ENVELOPES=64 -I../../../../../share/include -I../../../../../share/include -I../../../../../share/include -DLAM_BUILDING=1 -DLAM_BUILDING=1 -I/usr/ibgd/driver/infinihost/include -O3 -m64 -fPIC -MT ssi_rpi_ib_dreg.lo -MD -MP -MF .deps/ssi_rpi_ib_dreg.Tpo -c ssi_rpi_ib_dreg.c -fPIC -DPIC -o .libs/ssi_rpi_ib_dreg.o
ssi_rpi_ib_dreg.c:40: error: static declaration of 'lam_ssi_rpi_ib_env_mempool' follows non-static declaration
./rpi_ib_dreg.h:24: error: previous declaration of 'lam_ssi_rpi_ib_env_mempool' was here
gmake[6]: *** [ssi_rpi_ib_dreg.lo] Error 1
gmake[6]: Leaving directory `/home/klosterm/OpenFOAM/OpenFOAM-1.3/src/lam-7.1.1/share/ssi/rpi/ib/src'
gmake[5]: *** [all] Error 2
gmake[5]: Leaving directory `/home/klosterm/OpenFOAM/OpenFOAM-1.3/src/lam-7.1.1/share/ssi/rpi/ib/src'
gmake[4]: *** [all-recursive] Error 1
gmake[4]: Leaving directory `/home/klosterm/OpenFOAM/OpenFOAM-1.3/src/lam-7.1.1/share/ssi/rpi/ib'
gmake[3]: *** [all-recursive] Error 1
gmake[3]: Leaving directory `/home/klosterm/OpenFOAM/OpenFOAM-1.3/src/lam-7.1.1/share/ssi/rpi'
gmake[2]: *** [all-recursive] Error 1
gmake[2]: Leaving directory `/home/klosterm/OpenFOAM/OpenFOAM-1.3/src/lam-7.1.1/share/ssi'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/home/klosterm/OpenFOAM/OpenFOAM-1.3/src/lam-7.1.1/share'
gmake: *** [all-recursive] Error 1



Any help is appreciated. Thanks

Jens

Old   May 24, 2006, 04:46
  #14
Senior Member
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
How about using Open MPI? It might have more up-to-date InfiniBand support. We include 1.0.2a7.
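
A cautious way to approach the build (a sketch; the configure flag names for InfiniBand changed between Open MPI releases, so it is safer to discover them than to assume them):

# list the InfiniBand-related configure options of the Open MPI tarball at hand
./configure --help | grep -i -E 'mvapi|openib|infiniband'

# after configure / gmake / gmake install, check which point-to-point (btl)
# components were actually compiled in
ompi_info | grep -i btl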

Old   May 24, 2006, 07:08
  #15
Senior Member
Jens Klostermann
Join Date: Mar 2009
Posts: 117
When I compile openmpi everything works fine.

When I recompile the Pstream libraries I get no errors, but the following warnings:

OPwrite.C: In static member function 'static bool Foam::Pstream::write(int, const char*, std::streamsize, bool)':
OPwrite.C:77: warning: use of old-style cast
OPwrite.C:89: warning: use of old-style cast


1. Are these warnings problematic?

However, I ignored them and ran:

mpirun -v --mca btl mvapi,self -np 4 --hostfile ompimachinefile -ssh "/home/klosterm/OpenFOAM/OpenFOAM-1.3/applications/bin/linuxAMD64Gcc4DPOpt/interFoam . dambreak -parallel"
with the following error as a result:


[stokes:25010] [0,0,0] ORTE_ERROR_LOG: Not implemented in file rmgr_urm.c at line 177
[stokes:25010] [0,0,0] ORTE_ERROR_LOG: Not implemented in file rmgr_urm.c at line 365
[stokes:25010] mpirun: spawn failed with errno=-7


Thank you for helping!

Jens

Old   May 25, 2006, 05:40
  #16
Senior Member
Mattijs Janssens
Join Date: Mar 2009
Posts: 1,419
Those warnings are not problematic.

I have never run Open MPI with InfiniBand, so I cannot help you there. Can you post the solution if you find it?

Old   June 12, 2006, 05:00
  #17
Senior Member
Jens Klostermann
Join Date: Mar 2009
Posts: 117
So the problem is solved. OpenFOAM 1.3 is running with InfiniBand and Open MPI. I use Open MPI version 1.2a1r10111, but I think the one that is shipped with OpenFOAM 1.3 also works.

The problem was that it was not possible to run different versions of mpirun (LAM, MPICH or Open MPI) for different users at the same time. We had a LAM job running over Ethernet which was somehow blocking the mpirun of Open MPI.
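
A sketch of the kind of check that catches this (only standard shell tools and Open MPI's ompi_info are assumed):

# make sure the Open MPI binaries and libraries come first in the environment,
# not those of a LAM or MPICH installation that is also present on the system
which mpirun
echo $LD_LIBRARY_PATH

# confirm the version that is actually picked up
ompi_info | grep -i 'open mpi:'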

Old   March 7, 2007, 12:32
  #18
Member
anne dejoan
Join Date: Mar 2009
Location: madrid, spain
Posts: 66
Hello Jens,

I am very interested in getting an idea of how you compiled Open MPI with InfiniBand.

I am trying several MPI implementations on my InfiniBand cluster (I have already left a message about MVAPICH), but after reading this thread I tried to compile the Open MPI included in the OpenFOAM distribution and did not succeed. The errors are at the end of this message; they are independent of the InfiniBand option in my configure command.


How did you get your Open MPI version to work?


Thanks

Anne

------------------------------------
gmake[5]: *** No rule to make target `distclean'. Stop.
gmake[5]: Leaving directory `/afs/ciemat.es/users/u5303/OpenFOAM/OpenFOAM-1.3/src/openmpi-1.0.2a7/ompi/mca/io/romio/romio'
gmake[4]: *** [distclean-recursive] Error 1
gmake[4]: Leaving directory `/afs/ciemat.es/users/u5303/OpenFOAM/OpenFOAM-1.3/src/openmpi-1.0.2a7/ompi/mca/io/romio'
gmake[3]: *** [distclean-recursive] Error 1
gmake[3]: Leaving directory `/afs/ciemat.es/users/u5303/OpenFOAM/OpenFOAM-1.3/src/openmpi-1.0.2a7/ompi/mca/io'
gmake[2]: *** [distclean-recursive] Error 1
gmake[2]: Leaving directory `/afs/ciemat.es/users/u5303/OpenFOAM/OpenFOAM-1.3/src/openmpi-1.0.2a7/ompi/mca'
gmake[1]: *** [distclean-recursive] Error 1
gmake[1]: Leaving directory `/afs/ciemat.es/users/u5303/OpenFOAM/OpenFOAM-1.3/src/openmpi-1.0.2a7/ompi'
gmake: *** [distclean-recursive] Error 1
--------------------------------------------------

Old   March 11, 2007, 16:40
  #19
Senior Member
Jens Klostermann
Join Date: Mar 2009
Posts: 117
Hi Anne,

I had a lot of trouble setting up OpenFOAM for InfiniBand communication. It has now been working with Open MPI for almost a year. From my experience, I suggest always using the most recent version of Open MPI.

Regards Jens

Old   February 8, 2008, 13:43
  #20
mellanoxuser
Guest
Posts: n/a
Hi - I'm new to OpenFOAM.

I ran damBreak3d using Open MPI over InfiniBand with the "runnproc" scripts (1-8).

Now I want to try 16 processes and more.

My questions are:
1) Should I clone the nproc8 case and create an "nproc16" case with FoamX (FoamX won't start on my system)?

2) Is it best to create a new case? Is there a way to create a new case from the command line?

3) I copied the "nproc8" case and changed "8" to 16, but Foam IO had trouble opening the controlDict file.
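
For what it's worth, a sketch of doing it from the command line without FoamX (case and solver names follow the ones used above, the hostfile name is a placeholder, and the root/case argument style matches the commands used earlier in this thread):

# clone the case and discard the old 8-way decomposition
cp -r nproc8 nproc16
rm -rf nproc16/processor*

# in nproc16/system/decomposeParDict, set numberOfSubdomains to 16 and adjust
# the coefficients of the chosen decomposition method to match, then:
decomposePar . nproc16

# launch on 16 processes
mpirun -np 16 --hostfile machines interFoam . nproc16 -parallel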
