InfiniBand vs Omni-Path

September 10, 2021, 10:27 | #1
dygu (New Member, joined Feb 2020, 5 posts)

Hello,

I'm curious whether anyone has experience using Cornelis Omni-Path interconnects (OPA100) instead of InfiniBand (for example EDR/HDR100), in particular for OpenFOAM.

Our supplier shared some benchmarks from Cornelis showing very similar, if not better, performance with dual-rail OPA100 versus HDR200, although these were not OpenFOAM-specific. NASA also benchmarked its OPA100-based Discovery cluster against EDR/HDR100 across hundreds of nodes and found very similar performance for their applications.

Here's one presentation I found with OpenFOAM-specific benchmarking on the OPA-based Galileo cluster: https://wiki.openfoam.com/images/0/00/HPC_Bench.pdf
It found super-linear scaling in some cases, which the authors attribute to memory bandwidth.

In short, are there any disadvantages of OPA100 compared to InfiniBand for OpenFOAM? And how big a factor is the 100 vs 200 Gbps link rate for something like a ~3000-core cluster?
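
For concreteness, by "100 vs 200 Gbps" I mean raw link rate, which a two-node MPI ping-pong measures directly. Here is a minimal sketch (my own illustration, not from any of the benchmarks above; assumes an MPI installation such as Open MPI, launched with one rank on each of two nodes):

Code:
/* pingpong.c: rough one-way bandwidth between two ranks on different nodes.
 * Illustrative sketch only. Build/run (Open MPI assumed):
 *   mpicc -O2 pingpong.c -o pingpong
 *   mpirun -np 2 --map-by node ./pingpong */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int reps = 100;
    const int nbytes = 8 << 20;              /* 8 MiB message */
    char *buf = malloc(nbytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {                     /* send, then wait for the echo */
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {              /* echo the message back */
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;

    if (rank == 0)                           /* 2 transfers per repetition */
        printf("~%.1f Gbit/s one-way\n", 2.0 * reps * nbytes * 8 / dt / 1e9);

    free(buf);
    MPI_Finalize();
    return 0;
}

Keep in mind the link rate only caps large-message throughput; at typical CFD message sizes, latency matters at least as much.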

Thanks

September 21, 2021, 19:56 | #2
trampoCFD (Guillaume Jolly, Member, joined Dec 2015, 63 posts)

Hi dygu,
The last time I checked, Omni-Path was significantly slower than InfiniBand for CFD clusters; I believe that is why Intel dropped it.

I would recommend sticking with InfiniBand on medium to large clusters until there is clear evidence that OPA works for your application (both the solver software and the cell count per core).
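
On the "works for your application" point: at low cells per core, OpenFOAM's linear solvers spend much of their time in small global reductions (dot products in the pressure solve), so small-message allreduce latency usually matters more than peak bandwidth. A minimal sketch to compare that across fabrics (illustrative MPI test code, not our production benchmark):

Code:
/* allreduce_lat.c: average time of an 8-byte MPI_Allreduce.
 * Run with one rank per core across the nodes under test. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double in = 1.0, out;
    const int reps = 10000;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++)
        MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double dt = MPI_Wtime() - t0;

    if (rank == 0)
        printf("%d ranks: %.2f us per 8-byte allreduce\n",
               size, dt / reps * 1e6);

    MPI_Finalize();
    return 0;
}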

We run on a 150,000-core supercomputer with an InfiniBand HDR 200 interconnect. We could run some tests for you if you want: we mainly work with STAR-CCM+ but can easily run an OpenFOAM case if you need. Please PM me if you think this could help your decision making.

Best regards
gui@trampoCFD


October 8, 2021, 02:43 | #3
digitalmg (M-G, New Member, joined Apr 2016, 28 posts)

Quote: Originally Posted by trampoCFD (see post #2 above)
Hi,
Would you please let us know the hardware specification of a single node in your supercomputer? How many cores does each node have, and how many of them are used in calculations?

October 10, 2021, 18:30 | #4
trampoCFD (Guillaume Jolly)

Hi M-G,
The hardware details are:

150,360 cores (2x 24-core Intel Xeon Cascade Lake Platinum 8274, 3.2 GHz, per node)
HDR 4x (200 Gbps) InfiniBand
192 GB DDR4-2933 RAM per node

22,792 cores (2x 14-core Intel Xeon Broadwell E5-2690 v4, 2.6 GHz, per node)
HDR 4x (200 Gbps) InfiniBand
125 GB DDR4-2400 RAM per node

They are on our website: https://trampocfd.com/pages/new-pricing

The cores per processor were optimised for memory bandwidth: all cores are in use.
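
If you want to see what that optimisation buys, a STREAM-style triad loop is the usual per-node check. A minimal sketch (illustrative only; assumes GCC with OpenMP, compiled with gcc -O3 -fopenmp):

Code:
/* triad.c: rough per-node memory bandwidth via a STREAM-style triad. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 80000000L   /* 3 arrays x 8 B x 80M elements = ~1.9 GB, well past cache */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);

    #pragma omp parallel for                 /* first-touch pages on all cores */
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];            /* 2 loads + 1 store per element */
    double dt = omp_get_wtime() - t0;

    /* counts 24 B moved per element; ignores write-allocate traffic */
    printf("triad: ~%.0f GB/s\n", 24.0 * N / dt / 1e9);

    free(a); free(b); free(c);
    return 0;
}

For bandwidth-bound CFD, that number is usually a better predictor of solver throughput than the core count.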

October 11, 2021, 01:37 | #5
digitalmg (M-G)

Quote: Originally Posted by trampoCFD (see post #4 above)
Hi,
Would you consider a system with dual AMD EPYC 75F3 CPUs (32 cores and 8 memory channels per CPU) optimised for memory bandwidth in CFD when all cores are occupied? And if you were purchasing nodes for your cloud today, what would your choice be?

October 11, 2021, 02:00 | #6
trampoCFD (Guillaume Jolly)

Hi M-G,
Yes, a dual AMD EPYC 75F3 is the highest-memory-bandwidth, highest-frequency 64-core system available. But it's not the best value for money, and I can't tell whether it's the right system for you without a bit more information.
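
The back-of-envelope numbers behind that claim (published platform specs, not measurements; the 75F3 runs 8 channels of DDR4-3200 per socket, versus 6 channels of DDR4-2933 on the Platinum 8274 nodes above):

Code:
/* Peak DRAM bandwidth per socket = channels x transfer rate x 8 bytes. */
#include <stdio.h>

int main(void) {
    double epyc_75f3 = 8 * 3200e6 * 8 / 1e9;  /* ~204.8 GB/s per socket */
    double xeon_8274 = 6 * 2933e6 * 8 / 1e9;  /* ~140.8 GB/s per socket */
    printf("EPYC 75F3: %5.1f GB/s (%.1f GB/s per core, 32 cores)\n",
           epyc_75f3, epyc_75f3 / 32);
    printf("Xeon 8274: %5.1f GB/s (%.1f GB/s per core, 24 cores)\n",
           xeon_8274, xeon_8274 / 24);
    return 0;
}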

Let's continue this conversation privately, please; we're on a public forum here. I'll send you a private message.
