
2016 Xeon processors, Intel Omni-Path interconnect and InfiniBand direct connect



Old   December 8, 2015, 12:09
Default 2016 Xeon processors, Intel Omni-Path interconnect and InfiniBand direct connect
  #1
Member
 
Guillaume Jolly
Join Date: Dec 2015
Posts: 64
Hi
1/ Does anyone know when the 2016 Xeon 2U processors will become available?
2/ Does anyone know when the Intel Omni-Path Fabric will become available?
3/ I read an article a while ago in which Mellanox said they had a clear technological lead over Intel. I'm wondering whether it is technically possible to have an advantage over Intel, since Intel already does the interconnect between processors. Is there really such a huge gap between the technology required to get 8 processors working together on the same motherboard and the technology required to get 5000 working together in a cluster?
4/ Is it still possible with the latest InfiniBand technology to have a direct connection between two InfiniBand cards, i.e. with no switch?
5/ Is direct connect possible with Omni-Path?

Thanks
Guillaume with trampoCFD

Old   December 22, 2015, 08:52
Default
  #2
Member
 
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
Quote:
Originally Posted by trampoCFD
4/ Is it still possible with the latest InfiniBand technology to have a direct connection between two InfiniBand cards, i.e. with no switch?
Two years ago it was.
My cluster is 2 nodes of dual Xeon E5-2667 v2, connected directly using InfiniBand and no switch.
I don't know if any changes to InfiniBand have occurred during the last two years.

Best regards
Kim Bindesbøll

Old   December 23, 2015, 06:31
Default
  #3
New Member
 
Join Date: Feb 2010
Posts: 17
And does your cluster scale well on small simulations, around the 2M cell mark?

At my previous company I noticed that direct connect performance was probably nowhere near as good as it should have been. I wonder if direct connect somehow increases the latency.
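
One way to check whether the direct link itself adds latency is a plain MPI ping-pong test between the two nodes. Below is a minimal sketch assuming mpi4py and an MPI library that routes traffic over the InfiniBand interface; the hostnames in the launch command are placeholders, not a confirmed setup.

Code:
# Minimal MPI ping-pong latency sketch.
# Hypothetical launch: mpirun -np 2 -host node1,node2 python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

iters = 1000
msg = bytearray(8)   # small 8-byte message: probes latency, not bandwidth
buf = bytearray(8)

comm.Barrier()
t0 = time.perf_counter()
for _ in range(iters):
    if rank == 0:
        comm.Send(msg, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    else:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(msg, dest=0, tag=0)
t1 = time.perf_counter()

if rank == 0:
    # Each iteration is one round trip, so halve it for the one-way latency.
    print("approx one-way latency: %.1f us" % ((t1 - t0) / iters / 2 * 1e6))

Running the same test through a switch and back-to-back would show whether the direct connection changes the latency at all.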

Old   January 4, 2016, 03:17
Default
  #4
Member
 
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
See this post:
http://www.cfd-online.com/Forums/har...l#post452000#8

See the attached picture for scaling tests I have done since then.
Attached image: Scaling2.PNG (scaling test results, 61.1 KB)
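
For reference, results like the attached plot are usually summarized as speedup and parallel efficiency relative to a single-node run; here is a minimal sketch with hypothetical timings (placeholder numbers, not the measured values from the plot).

Code:
# Hypothetical wall-clock times (seconds) for the same CFD case on 1 and 2
# nodes; placeholder numbers only, not the data behind the attached plot.
timings = {1: 1000.0, 2: 540.0}   # nodes -> elapsed time

t_single = timings[1]
for nodes, t in sorted(timings.items()):
    speedup = t_single / t            # how much faster than the single-node run
    efficiency = speedup / nodes      # 1.0 means perfect linear scaling
    print(f"{nodes} node(s): speedup {speedup:.2f}, efficiency {efficiency:.0%}")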

Old   January 8, 2016, 09:18
Default
  #5
New Member
 
Join Date: Feb 2010
Posts: 17
Hi Kim, thanks for that. So the InfiniBand models were run with direct connect: just a card in each node, with an InfiniBand cable and no switch, correct?

Old   January 11, 2016, 02:29
Default
  #6
Member
 
Kim Bindesbøll Andersen
Join Date: Oct 2010
Location: Aalborg, Denmark
Posts: 39
Yes, correct: only 2 nodes, an InfiniBand card in each, and no switch. Of course, you can only do this with 2 nodes.

Tags
infiniband, intel, mellanox, omni-path, xeon
