2016 Xeon processors, Intel Omni-Path interconnect and InfiniBand direct connect
#1
Member
Hi,

1/ Does anyone know when the 2016 Xeon 2U processors will become available?
2/ Does anyone know when the Intel Omni-Path Fabric will become available?
3/ I read an article a while ago in which Mellanox said they had a clear technological lead over Intel. I'm wondering whether it is technically possible to have an advantage over Intel, given that Intel already handles the interconnect between processors. Is there really such a huge gap between the technology needed to get 8 processors working together on one motherboard and 5000 working together in a cluster?
4/ With the latest InfiniBand technology, is it still possible to connect two InfiniBand cards directly, i.e. with no switch?
5/ Is direct connect possible with Omni-Path?

Thanks,
Guillaume with trampoCFD
#2
Member
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
Rep Power: 14
My cluster is 2 nodes of dual Xeon E5-2667 v2, connected directly with InfiniBand and no switch. I don't know whether any changes to InfiniBand have occurred during the last two years.

Best regards,
Kim Bindesbøll
#3
New Member
Join Date: Feb 2010
Posts: 17
Rep Power: 15
And does your cluster scale well on small simulations, around the 2M-cell mark?
I noticed at my previous company that direct connect was probably nowhere near as fast as it should have been. I wonder if direct connect somehow increases the latency.
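Small cases like this tend to be latency-sensitive: as cores are added, the compute per core shrinks in proportion to cells/N, while every halo exchange still pays the full message latency. A toy model of this effect (all parameter values are illustrative assumptions, not measurements of any real cluster):

```python
# Back-of-envelope strong-scaling model for a small CFD case.
# Every default below is an assumed, illustrative number.

def parallel_efficiency(total_cells, n_cores,
                        flops_per_cell=2_000,   # assumed work per cell per step
                        core_gflops=10.0,       # assumed per-core throughput
                        latency_s=1.5e-6,       # assumed MPI latency (IB-class link)
                        bandwidth_gbs=5.0,      # assumed link bandwidth
                        msgs_per_step=20):      # assumed halo exchanges per step
    """Ratio of compute time to total (compute + communication) time per step."""
    cells_per_core = total_cells / n_cores
    compute_s = cells_per_core * flops_per_cell / (core_gflops * 1e9)
    # Halo size scales like the surface of a roughly cubic partition: cells^(2/3).
    halo_bytes = 8 * 5 * cells_per_core ** (2 / 3)  # ~5 doubles per face cell
    comm_s = msgs_per_step * (latency_s + halo_bytes / (bandwidth_gbs * 1e9))
    return compute_s / (compute_s + comm_s)

for cores in (8, 16, 32, 64, 128):
    eff = parallel_efficiency(2_000_000, cores)
    print(f"{cores:4d} cores: modeled efficiency {eff:.0%}")
```

The point of the model is the trend, not the absolute numbers: efficiency falls monotonically with core count because the fixed per-message latency term grows relative to the shrinking compute time, which is why a 2M-cell case stops scaling long before a large one does.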
#4
Member
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
Rep Power: 14
See this post:
http://www.cfd-online.com/Forums/har...l#post452000#8

For later scaling tests I have done, see the attached picture.
#5
New Member
Join Date: Feb 2010
Posts: 17
Rep Power: 15
Hi Kim, thanks for that. So the InfiniBand runs were done with direct connect: just a card in each node, connected with an InfiniBand cable and no switch, correct?
#6
Member
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
Rep Power: 14
Yes, correct: only 2 nodes, an InfiniBand card in each, and no switch. Of course, you can only do this with 2 nodes.
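For anyone setting this up: with two HCAs cabled back to back there is no switch with an embedded subnet manager, so one of the two nodes has to run `opensm` itself or the ports never leave the Initializing state. A rough sketch of the steps on Linux (device name `ib0` and the IP addresses are assumptions for illustration; adjust to your distribution and hardware):

```sh
# On BOTH nodes: load IPoIB so the HCA appears as a network interface
modprobe ib_ipoib

# On ONE node only: run a subnet manager (normally provided by a switch;
# with a back-to-back cable a host must provide it)
opensm --daemon

# Check that the port state is Active, not Initializing
ibstat

# Give each node an IPoIB address (example subnet; pick your own)
ip addr add 10.0.0.1/24 dev ib0   # on node 1
ip addr add 10.0.0.2/24 dev ib0   # on node 2

# Optional: measure RDMA bandwidth with the perftest tools
ib_write_bw            # on node 1 (server)
ib_write_bw 10.0.0.1   # on node 2 (client)
```

This is a sketch of the usual OFED-style workflow, not a tested recipe; check your vendor's documentation for the exact package and module names.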
Tags |
infiniband, intel, mellanox, omni-path, xeon |