CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   CFX (http://www.cfd-online.com/Forums/cfx/)
    -   Clusters on linux: PVM vs. HP MPI (http://www.cfd-online.com/Forums/cfx/22122-clusters-linux-pvm-vs-hp-mpi.html)

Alexey February 2, 2006 11:09

Clusters on linux: PVM vs. HP MPI
 
In the latest issue of "Ansys Solutions" there is an article about the advantages of HP MPI. Ansys CFX-10.0 (for Linux) includes HP MPI support. Are there any test results or recommendations?

Stevie Wonder February 2, 2006 11:23

Re: Clusters on linux: PVM vs. HP MPI
 
You can use HP-MPI with CFX-10. It works pretty well, but I'm not sure whether it is actually faster than PVM.
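
If you want to compare the two yourself, here is a minimal sketch of how a distributed run could be launched with either start method. The host list, partition layout, and the exact start-method strings are assumptions on my part; check what "cfx5solve -help" lists on your own installation before using them.

```python
# Hedged sketch: launch a CFX-10 distributed parallel run and choose the
# communication layer via the solver's -start-method option.
# The host names, partition layout, and start-method strings below are
# assumptions; verify them against `cfx5solve -help` on your install.
import subprocess

DEF_FILE = "model.def"          # hypothetical solver definition file
HOSTS = "node1*2,node2*2"       # hypothetical: 2 partitions on each of 2 nodes

# Assumed start-method names; your CFX install may spell them differently.
START_METHODS = {
    "pvm": "PVM Distributed Parallel",
    "hpmpi": "HP MPI Distributed Parallel",
}

def launch(method_key):
    """Run cfx5solve with the requested distributed-parallel start method."""
    cmd = [
        "cfx5solve",
        "-def", DEF_FILE,
        "-par-dist", HOSTS,
        "-start-method", START_METHODS[method_key],
    ]
    return subprocess.call(cmd)

if __name__ == "__main__":
    launch("hpmpi")   # switch to "pvm" to compare wall-clock times
```

Running the same definition file with each start method and comparing wall-clock times is the simplest way to answer the "is it faster" question for your own cluster.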

Regards, S. W.

Jeff February 3, 2006 10:26

Re: Clusters on linux: PVM vs. HP MPI
 
The advantage of HP-MPI is that it uses the Myrinet connection between nodes (1 Gb) rather than standard TCP (100 Mb). Parallel runs have to communicate between the partitions, and using Myrinet speeds up that communication.

That being said, I have found that if the problem is not already communication limited, HP-MPI using Myrinet is slightly slower than standard MPI over TCP. I believe the benefit will come when you subdivide beyond 1 partition per 500K nodes. Normally MPI will start to "tail off" in efficiency beyond this point, and HP-MPI should delay that tail-off until a much higher partition count. I haven't verified this, however.
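
To put that tail-off in rough numbers: communication scales with a partition's surface while compute scales with its volume, so the communication fraction grows as partitions shrink. The sketch below assumes idealised cubic partitions; it is only a back-of-envelope model, not a CFX measurement.

```python
# Rough illustration of why parallel efficiency "tails off" as partitions
# get smaller: halo (communication) data scales like a partition's surface
# area while compute scales like its volume.
# The cubic-partition shape and the factor of 6 are simplifications,
# not CFX measurements.

def comm_to_compute_ratio(nodes_per_partition):
    """Surface-to-volume ratio of an idealised cubic partition."""
    edge = nodes_per_partition ** (1.0 / 3.0)   # nodes along one edge
    surface = 6.0 * edge ** 2                   # halo nodes exchanged
    return surface / nodes_per_partition        # relative communication cost

if __name__ == "__main__":
    for n in (2_000_000, 1_000_000, 500_000, 250_000, 100_000, 50_000):
        ratio = comm_to_compute_ratio(n)
        print(f"{n:>9} nodes/partition -> comm/compute ~ {ratio:.3f}")
```

A faster interconnect doesn't change that ratio, it just makes each exchanged byte cheaper, which is why it mainly pays off once the communication fraction has become significant.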

Jeff

Pete February 3, 2006 11:11

Re: Clusters on linux: PVM vs. HP MPI
 
Hi Jeff... Is your comment valid for SGI Altix MPI as well?

Steve February 8, 2006 11:33

Re: Clusters on linux: PVM vs. HP MPI
 
Jeff - First, use of Myrinet requires Myrinet hardware. HP-MPI isn't magically going to use Myrinet if all you have is an ethernet connection between the nodes in your cluster. Whether you have 100 Mb "fast" ethernet or 1 Gb gigabit ethernet, you still have to use TCP because it's ethernet. You are mixing up hardware (Myrinet vs. ethernet) with software (HP-MPI vs. PVM).
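
A quick way to see what hardware you actually have: Linux reports the negotiated link speed of each ethernet interface in sysfs, so something like the snippet below (interface names will vary from node to node) tells you whether you are on 100 Mb or 1 Gb ethernet. Myrinet cards don't show up this way; they come with their own drivers.

```python
# Quick hardware check (Linux): report the negotiated link speed of each
# network interface from sysfs. 100 or 1000 here means plain ethernet/TCP,
# no matter which MPI library is layered on top. Myrinet hardware is not
# listed this way; it appears under its own driver.
import glob
import os

for speed_file in sorted(glob.glob("/sys/class/net/*/speed")):
    iface = os.path.basename(os.path.dirname(speed_file))
    try:
        with open(speed_file) as f:
            speed = f.read().strip()
        print(f"{iface}: {speed} Mb/s")
    except OSError:
        # interfaces that are down may not report a speed
        print(f"{iface}: link speed unavailable")
```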

