June 10, 2021, 08:39
Server on 2 CPU - AMD EPYC 7313
#1
New Member
Sergey
Join Date: Jan 2018
Posts: 18
Rep Power: 8
I have selected the following configuration for a compute cluster; I plan to purchase 3 compute servers.
Configuration per server:
- Motherboard: SuperMicro H12DSi-N6 - 1 pc.
- RAM: 8GB DDR4 3200MHz Samsung ECC Reg OEM - 16 pcs.
- CPU: AMD EPYC 7313 - 2 pcs.
- Cooler: Noctua NH-U9 TR4-SP3 - 2 pcs.
- SSD: 500GB Samsung 970 EVO Plus (MZ-V7S500BW) - 1 pc.
- Fan: Be Quiet Silent Wings 3 140mm High-Speed - 4 pcs.
- PSU: 1000W Be Quiet Straight Power 11 Platinum - 1 pc.
- Case: Be Quiet Dark Base 900 Black - 1 pc.
Questions:
1. Are the components selected correctly?
2. I couldn't find 8GB dual-rank memory at 3200 MHz. How much performance is lost compared to 2666 MHz dual-rank?
3. How reasonable is it to use InfiniBand? If it is justified, what equipment (manufacturer and model) would you recommend? Is it a good idea to connect the nodes directly, without a switch? Which cables should I use?
4. Does the master server need to be faster than the slave nodes? Faster processor, disk, memory?
June 11, 2021, 01:19 |
#2
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,399
Rep Power: 46
If you are using large workstation cases anyway, you can pick larger CPU coolers, e.g. the NH-U14S TR4-SP3. 1000W for the PSU is total overkill for the system as configured; you could save some money here if there are no plans to upgrade with more power-hungry components later.
Regarding the master server, it depends on what it will do:
- Just another compute node that also handles login to the cluster? -> No additional requirements.
- House some fancy storage system on top of being a regular compute node, or enable GUI access for pre- and post-processing? -> You'll need some more/better parts, and some CPU performance to spare.
June 18, 2021, 04:05 |
#3
New Member
Sergey
Join Date: Jan 2018
Posts: 18
Rep Power: 8
Are the components a good choice in terms of price vs. speed, or would you pick other components?
I have decided to try InfiniBand. Which cards and switches would you recommend? If I use an ordinary desktop machine as the master while the 3 servers do the computing, won't that become a bottleneck?
For the calculations I use Ansys CFX 17.2. Which will be faster: a 16-core processor at 3200 MHz or a 24-core processor at 2650 MHz? If you have experience with Ansys products, does it make sense to upgrade to the latest Ansys version? Will it run faster?
June 18, 2021, 18:12 |
Infiniband without router
#4
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 308
Rep Power: 12
If you get a very cheap ConnectX-3 dual-port FDR InfiniBand card for each of your nodes, you will get very good performance thanks to a direct interconnect between all three nodes. A router makes set-up easier, but mine is very noisy, so I try not to use it. Download the latest drivers, IB tools, and opensm from Mellanox. The cards support SR-IOV, so you can set up virtual interfaces in case you run your nodes as virtual machines. Avoid the "Pro" cards, because they provide only 40 Gb/s Ethernet over InfiniBand, instead of 56 Gb/s InfiniBand (as well as Ethernet over IB). Look for 354 in the model number of the card, if I remember correctly. There are faster versions of InfiniBand nowadays, but with just three nodes it shouldn't make much difference, because you will already have very good bandwidth and very low latency. MPI is optimized for InfiniBand with direct memory access, so there is no need for tuning. Non-Mellanox cards can be flashed with mstflint to the latest Mellanox firmware; it is not hard.
Caution: if you need to get this up and running for a time-critical application at work, get a router, because it will do the routing automatically and make most of it plug and play.
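The setup described above can be sketched roughly as follows. This is a hedged outline, not a tested recipe: package names are the Debian/Ubuntu ones (RHEL-family distributions use `infiniband-diags` and `opensm` as well, but via `yum`/`dnf`), and the exact output of the diagnostic tools depends on your cards and firmware.

```shell
# Install the user-space InfiniBand diagnostics and the subnet manager
# (package names assumed for Debian/Ubuntu).
sudo apt install infiniband-diags opensm

# With no switch, a subnet manager must be running somewhere on each
# point-to-point link. Starting opensm on one node is usually enough
# for a small direct-connected fabric:
sudo systemctl enable --now opensm

# Check that the HCA port is up and running at FDR rate:
ibstat          # look for "State: Active" and "Rate: 56"

# List the nodes visible on the fabric to confirm the links work:
ibnodes
```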
July 2, 2021, 04:50 |
#5
New Member
Sergey
Join Date: Jan 2018
Posts: 18
Rep Power: 8
Could you recommend a router for this task? Optimal price/performance.