Connect two workstations: 10Gb Ethernet or Infiniband?
|
April 29, 2019, 03:22 |
Connect two workstations: 10Gb Ethernet or Infiniband?
|
#1 |
New Member
Anonymous
Join Date: Apr 2019
Posts: 4
Rep Power: 7 |
Hello,

I am running CFD simulations that take a few days and use pretty much all the resources in my workstation (6 cores, 32 GB RAM). Since I have a similar workstation that is not in use, I thought I could connect the two and run the simulations in parallel, which raises the question of the connection type.

I assume 1Gb Ethernet would be too slow, so I have considered buying two 10Gb Ethernet cards, plugging them into PCIe x4 slots and linking them directly with a cable, without a switch or anything like that. Is that a sensible option? Does the type of cable (for instance, crossover vs. normal) affect performance? Can I expect the simulations to run noticeably faster over 10Gb Ethernet than over 1Gb Ethernet?

I have read in older posts on these forums that some people advise using Infiniband instead, but my experience with it is literally zero. Is that good advice even when connecting only two machines? From what I have seen, Infiniband cards go into PCIe x8 or x16 slots. Is that right? Can I then link them directly with a cable, or do I need a switch?

Generally speaking, I am looking for a sensible setup to run the simulations in parallel across both workstations. Suggestions of particular models or products are more than welcome; I understand there may be other alternatives, and a suggestion would simply illustrate the argument. Finally, I am assuming that all three options (1Gb Ethernet, 10Gb Ethernet and Infiniband) are equally difficult to set up, particularly from a software point of view (both machines run Linux, with kernels 4.4.0-146-generic and 3.10.0-957.10.1.el7.x86_64). Am I mistaken?

Thank you very much in advance!
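One way to ground the "fast enough?" question before buying anything is to measure what the link actually delivers. A sketch using iperf3 (a common open-source bandwidth tester; the address 192.168.10.1 is an example, substitute workstation A's actual IP). This is a two-machine procedure, not a standalone script:

```shell
# On workstation A (server side):
iperf3 -s

# On workstation B, pointing at A's address:
iperf3 -c 192.168.10.1 -t 10
```

The client-side report shows the achievable throughput; a healthy direct 1Gb/s link typically reports on the order of 940 Mbit/s.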
|
April 30, 2019, 13:25 |
|
#2 |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,675
Rep Power: 66 |
Yes, Infiniband and Ethernet are about equally annoying to set up in Linux.

Infiniband is a lot faster than 1Gb Ethernet. But 10Gb Ethernet can be quite competitive with 10Gb Infiniband; you would have to do some serious research to decide between those two. There's also 40Gb Infiniband...

On 1Gb Ethernet, you should not expect linear scaling (i.e. 2x faster because you now have 2x machines, or anywhere close). Expect something like 20-40% scaling.

If you connect your Ethernet cards directly without a switch, this can only be done with a crossover cable. There's no difference between a normal and a crossover cable other than that the in/out pairs are crossed on the crossover cable. (You can make your own crossover cable by taking apart the connector on one end and crossing the pairs.) Still, getting a switch and avoiding the crossover cable is not a bad idea, unless you absolutely don't want to spend any more money or you don't have a spare electrical socket for the switch. Infiniband, just like Ethernet, can also be directly connected without a switch, and it does not need a special crossover cable.
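The 20-40% figure translates into wall time like this (a back-of-the-envelope sketch; the 72-hour runtime and the 30% efficiency are assumed example numbers, not from the thread):

```shell
t1=72      # wall time on one machine, in hours (assumed example)
eff=0.30   # assumed scaling gain from the second machine over 1Gb Ethernet

# With a 30% gain, the two-machine run takes t1 / (1 + eff).
awk -v t1="$t1" -v eff="$eff" 'BEGIN {
  t2 = t1 / (1 + eff)
  printf "1 machine: %.1f h, 2 machines: ~%.1f h\n", t1, t2
}'
# -> 1 machine: 72.0 h, 2 machines: ~55.4 h
```

So a multi-day run still gets meaningfully shorter over 1Gb Ethernet, just nowhere near halved.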
|
April 30, 2019, 14:48 |
|
#3 |
Senior Member
Joern Beilke
Join Date: Mar 2009
Location: Dresden
Posts: 498
Rep Power: 20 |
For only two machines it should be OK to start with normal 1Gb Ethernet and check the speedup. I do this with two six-core machines running CCM+ and it works very well.
|
|
May 2, 2019, 02:45 |
|
#4 |
New Member
Anonymous
Join Date: Apr 2019
Posts: 4
Rep Power: 7 |
Thank you very much for your answers.

@LuckyTran, I have tested it with a normal cable and it does work, at least with my current 1Gb Ethernet cards. Apparently it works [wikihow.com] because modern devices support Auto MDI-X [wikipedia.org]. However, I wonder if performance would be better with a crossover cable. On another note, I can afford a switch, but I would rather not install one unless performance can be expected to improve. Why do you suggest it? By the way, if I do buy 10Gb Ethernet cards, would I need new cables, or are the cables the same as for 1Gb Ethernet?

@JBeilke, my setup would be very similar to yours: one 6-core machine and one 4-core machine running CCM+. I will do some tests with 1Gb Ethernet. Do you know a way of monitoring the Ethernet load, or any other procedure to determine whether the connection speed is the bottleneck? I'd definitely rather spend some money on the cards than settle for significantly lower performance.
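On the monitoring question: Linux keeps per-interface byte counters in /proc/net/dev, so a rough throughput check needs no extra packages. A minimal sketch (IFACE defaults to the loopback device here only so it runs anywhere; substitute your real NIC name, which `ip link` will list):

```shell
#!/bin/sh
# Sample the rx/tx byte counters of one interface twice, one second
# apart, and print the resulting rates in MB/s.
IFACE=${IFACE:-lo}   # assumed default; replace with your actual NIC name

sample() {
  # /proc/net/dev fields: $1 = "name:", $2 = rx bytes, $10 = tx bytes
  awk -v ifc="$IFACE:" '$1 == ifc { print $2, $10 }' /proc/net/dev
}

before=$(sample)
sleep 1
after=$(sample)

echo "$before $after" | awk '{
  printf "rx %.2f MB/s  tx %.2f MB/s\n", ($3 - $1) / 1e6, ($4 - $2) / 1e6
}'
```

If the rate hovers near the link limit (roughly 110-118 MB/s for 1Gb Ethernet) while the solver is exchanging data, the network is a plausible bottleneck. `sar -n DEV 1` from the sysstat package reports the same counters continuously.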
|
May 2, 2019, 04:40 |
|
#5 |
Senior Member
Joern Beilke
Join Date: Mar 2009
Location: Dresden
Posts: 498
Rep Power: 20 |
The network utilisation changes over time, so interpreting those values is not straightforward.
My guess would be that the 4-core machine is older and might be the limiting factor.
|
May 2, 2019, 04:53 |
|
#6 |
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,399
Rep Power: 46 |
Checking network traffic might be overkill for what you actually want to know: does the interconnect seriously bottleneck your simulations?
An easier approach: run the simulation on all cores of the slower machine, then add the same number of cores from the faster machine. The scaling you measure lets you judge whether the interconnect is good enough, and also how much performance there is to be gained (if any) by investing in a faster interconnect. Gigabit Ethernet is usually good enough for connecting two rather slow nodes; we do just that for CCM+.
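That two-run comparison could be scripted roughly as follows (a sketch only: the host names and core counts are assumed examples, and the commented-out solver line is a placeholder for your actual CCM+ batch invocation, not a verified command):

```shell
#!/bin/sh
# Time the same case twice: all cores of the slower machine alone,
# then both machines together. Hosts and core counts are assumed examples.
for hosts in "node-slow:4" "node-slow:4,node-fast:4"; do
  start=$(date +%s)
  # starccm+ -batch run.java -on "$hosts" case.sim   # placeholder solver call
  sleep 1                                            # stand-in so the loop runs
  end=$(date +%s)
  echo "$hosts finished in $((end - start)) s"
done
```

Dividing the first wall time by the second gives the speedup from adding the second machine; if it is far below the ideal factor of two, the interconnect (or the slower machine itself) is eating the gain.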
|
May 2, 2019, 05:45 |
|
#7 |
New Member
Anonymous
Join Date: Apr 2019
Posts: 4
Rep Power: 7 |
Yes, that sounds like a sensible thing to do. I will test it that way, thank you!
|
|
May 10, 2019, 08:13 |
|
#8 |
New Member
Anonymous
Join Date: Apr 2019
Posts: 4
Rep Power: 7 |
I have tested it as suggested, and it turns out that the limiting factor is indeed the slower computer, not the network.
However, the speedup I am getting is still not satisfactory, and I am considering buying new hardware. That is a different story, though, and I will start a new thread in the hardware forum if I end up needing help with it. Thanks again to everyone!
|
May 10, 2019, 10:19 |
|
#9 | |
Senior Member
Lucky
Join Date: Apr 2011
Location: Orlando, FL USA
Posts: 5,675
Rep Power: 66 |
I also forgot that for gigabit Ethernet a crossover cable isn't needed anyway, so you have nothing to worry about. The cables for 10Gb Ethernet are Cat 5e/6/6a; the only physical difference between these is the shielding, and spec-wise the level of guaranteed signal quality (10GBASE-T is specified for Cat 6a over a full 100 m and Cat 6 up to about 55 m). Cat 5e can and probably will work, but you have to do testing to make sure. At short distances, all of them will perform the same.