Hi all,
I need to get OpenFOAM running on our InfiniBand systems, and can't quite figure out what to do next. I've successfully rebuilt the OpenMPI version in the ThirdParty directory against our OFED libraries so that it can use the InfiniBand connections. What do I need to do next? There is no separate make structure in the Pstream directory in the sources, and I can't quite figure out what else I need to do...
There is a bit in http://op
Hi again all,
I did a make distclean in the OpenMPI directory of the ThirdParty area, then reconfigured using
./configure --with-openib=/usr/local/ofed to point to our OFED installation. The configure and compile worked just fine, as far as I could tell. I then recompiled the Pstream directory with Allwmake. interFoam now runs on my InfiniBand nodes, but I can't tell whether it's using native InfiniBand transport or IP over InfiniBand. It runs slower than on our Gigabit-connected machines (I select the machines just by changing the nodes file), so I think something is still wrong. Will Pstream still work on both Ethernet and InfiniBand?
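One way to tell which transport OpenMPI is actually using is to force InfiniBand-only operation, so that the job aborts at startup if the InfiniBand BTL is unavailable. This is a sketch, not from the thread: the hostfile name, process count, and solver arguments are placeholders you would adapt to your own case and OpenFOAM version.

```shell
# Check whether the openib BTL component was built into this OpenMPI at all;
# it only shows up here if configure actually found the OFED libraries:
ompi_info | grep -i btl

# Force InfiniBand-only transport ("self" is needed for a rank to talk to
# itself). If this fails to start, the openib BTL is not usable and earlier
# runs were almost certainly falling back to TCP (possibly IP-over-IB):
mpirun --mca btl openib,self -np 4 -hostfile machines interFoam . case -parallel
```

If the forced run starts and the timings improve, the slow runs were using TCP; if it refuses to start, the OpenMPI build or the OFED setup still needs attention.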
Hi Michael,
did you resolve your problem?
I'm struggling with the same issue: I'm not sure whether OpenFOAM takes the InfiniBand or the Ethernet connection. I also compiled OpenMPI with InfiniBand support.
Hi,
Some hopefully helpful comments from my side:
1. Make sure OpenMPI is using the InfiniBand interconnect, e.g. with a ping-pong test (search for: osu-benchmark). InfiniBand latency should be about 3-5 µs for message sizes up to 1024 bytes (Ethernet is usually about ten times slower).
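The ping-pong test above can be run directly with mpirun once the OSU micro-benchmarks are compiled against the same OpenMPI. The hostnames below are placeholders for two of your InfiniBand nodes, not names from the thread.

```shell
# Latency ping-pong between two nodes; expect roughly 3-5 us on InfiniBand
# for small messages, versus tens of microseconds on Gigabit Ethernet:
mpirun -np 2 --host node1,node2 ./osu_latency

# Bandwidth test as a cross-check; native IB should be far above the
# ~100 MB/s ceiling of Gigabit Ethernet:
mpirun -np 2 --host node1,node2 ./osu_bw
```

If the measured latency is in the Ethernet range, the MPI layer is not using native InfiniBand, regardless of what OpenFOAM does on top of it.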
If OpenMPI is not running with InfiniBand, see the OpenMPI FAQ for help.
2. OpenMPI uses the fastest interconnect by default (at least up to v1.2.6), so if everything is set up correctly it should use InfiniBand automatically.
3. I think since Pstream is compiled and linked against OpenMPI, it does not have to be recompiled as long as you still use OpenMPI as your MPI.
Jens & Thomas,
Still no luck with the InfiniBand. I get exactly the same timings on InfiniBand-connected nodes as on Gigabit-connected ones. Fluent seems to run fine on the InfiniBand nodes, so it looks like IB itself is working. Also, when I start the job, there is InfiniBand traffic; it might just be IP over InfiniBand, I can't tell. I've been trying to recompile (again) and have run into different problems: it doesn't seem to find the mpi.h headers any more.
I also tried running mpirun with -mca btl ib,self as a parameter, but it wouldn't start up. With -mca btl ib,tcp,self it did start, presumably because it fell back to the TCP connections.
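A likely explanation for that startup failure, offered as a suggestion rather than something stated in the thread: in OpenMPI 1.x the InfiniBand BTL component is named "openib", not "ib". With `-mca btl ib,self` there is no usable inter-node transport, so the job cannot start; with `-mca btl ib,tcp,self` it silently falls back to TCP, which would also explain Gigabit-like timings.

```shell
# Use the actual component name "openib" for the InfiniBand BTL:
mpirun --mca btl openib,self -np 4 -hostfile machines interFoam . case -parallel

# To see which BTLs get selected at startup, raise the verbosity:
mpirun --mca btl openib,self --mca btl_base_verbose 30 -np 4 -hostfile machines interFoam . case -parallel
```

The solver arguments above follow the old `interFoam <root> <case> -parallel` form and are placeholders; adjust them to your setup.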
I haven't run the tests in the openmpi directory, I will this morning.