CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   Hardware (https://www.cfd-online.com/Forums/hardware/)
-   -   Server on 2 CPU - AMD EPYC 7313 (https://www.cfd-online.com/Forums/hardware/236693-server-2-cpu-amd-epyc-7313-a.html)

Rec June 10, 2021 09:39

Server on 2 CPU - AMD EPYC 7313
 
I have selected the following configuration for a compute cluster; I plan to purchase 3 compute servers.

Configuration of one server:
SuperMicro H12DSi-N6 motherboard - 1 pc.
RAM 8GB DDR4-3200 Samsung ECC Reg OEM - 16 pcs.
AMD EPYC 7313 server processor - 2 pcs.
Noctua NH-U9 TR4-SP3 cooler - 2 pcs.
SSD 500GB Samsung 970 EVO Plus (MZ-V7S500BW) - 1 pc.
Be Quiet Silent Wings 3 140mm High-Speed fan - 4 pcs.
Be Quiet Straight Power 11 Platinum 1000W power supply - 1 pc.
Be Quiet Dark Base 900 Black case - 1 pc.

Questions:
1. Are the computer parts selected correctly?
2. I didn't find 8GB dual-rank memory at 3200 MHz; how much performance is lost compared to dual-rank 2666 MHz?
3. How reasonable is it to use InfiniBand? If it is justified, what equipment (manufacturer and model) would you recommend? Is it a good idea to connect the nodes directly, without a switch? What cables should I use?
4. Do I need to make the master server faster than the slaves: a faster processor, disk, or memory?

flotus1 June 11, 2021 02:19

Quote:

1. Are the computer parts selected correctly?
According to the specs, the case you picked can't hold SSI-EEB motherboards. Maybe you can squeeze it in with some modifications, maybe you can't. I can recommend the Phanteks Enthoo Pro for this kind of board.
If you are using large workstation cases anyway, you can pick larger CPU coolers, e.g. the NH-U14S TR4-SP3.
1000W for the PSU is total overkill for the system as configured. You could save some money here if there are no plans to upgrade with more power-hungry components later.
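
Rough numbers to back that up (my estimate, assuming the 155 W TDP of the EPYC 7313 and a few watts per registered DIMM):

Code:

2 x EPYC 7313:             2 x 155 W = 310 W
16 x 8GB reg ECC DIMMs:    16 x ~4 W = ~65 W
motherboard + SSD + fans:             ~50-80 W
-----------------------------------------------
estimated full load:                  ~450 W

A good 650-750 W unit would still leave plenty of headroom.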
Quote:

2. I didn't find 8GB dual-rank memory at 3200 MHz; how much performance is lost compared to dual-rank 2666 MHz?
AFAIK, 8GB DDR4-3200 reg ECC is only sold in single-rank variants. Dropping down to DDR4-2666 just to get dual-rank memory is not recommended; the net result would be slower.
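
To put a rough number on it, the peak-bandwidth gap alone is easy to estimate (8 bytes per transfer, 8 memory channels per socket):

Code:

DDR4-3200: 3200 MT/s x 8 B = 25.6 GB/s per channel -> 204.8 GB/s per socket
DDR4-2666: 2666 MT/s x 8 B = 21.3 GB/s per channel -> 170.6 GB/s per socket
ratio: 2666/3200 ~ 0.83 -> ~17% less peak bandwidth

Dual-rank interleaving typically recovers only part of that in memory-bound CFD workloads, so single-rank DDR4-3200 remains the better choice here.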
Quote:

3. How reasonable is it to use InfiniBand? If it is justified, what equipment (manufacturer and model) would you recommend? Is it a good idea to connect the nodes directly, without a switch? What cables should I use?
If you are still debating whether to use InfiniBand or not, I would recommend the "NT" variant of your motherboard, which comes with 10-Gigabit LAN onboard. Maybe you don't need InfiniBand for your applications: 10G Ethernet can be fine for small CFD clusters if you can sacrifice some strong-scaling capability at low cell counts per core.
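
If you want hard data before spending money, you can measure point-to-point latency and bandwidth over each interconnect with something like the OSU micro-benchmarks (a quick sketch, assuming Open MPI and two hypothetical hosts node1 and node2):

Code:

# point-to-point latency and bandwidth between two nodes
mpirun -np 2 --host node1,node2 ./osu_latency
mpirun -np 2 --host node1,node2 ./osu_bw

Run them over both networks; if the latency difference doesn't show up in scaling tests with your actual solver, 10G Ethernet is the cheaper option.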
Quote:

4. Do I need to make the master server faster than the slaves: a faster processor, disk, or memory?
Totally depends on what the head node is supposed to be used for.
Just another compute node that also handles login to the cluster? -> No additional requirements
House some fancy storage system on top of being a regular compute node? Enable GUI access for pre- and post-processing? -> You'll need some more/better parts, and some CPU performance to spare.
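
For the first case, nothing special is needed on the software side either; an MPI hostfile along these lines (hypothetical node names, 32 cores per dual-7313 node) treats the head node as just one more compute node:

Code:

# Open MPI hostfile: the head node computes like any other node
head   slots=32
node1  slots=32
node2  slots=32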

Rec June 18, 2021 05:05

Are the components well chosen in terms of price/performance, or would you pick other components?

I have decided to try using IB. What cards and switches would you recommend?

If I make a simple computer the master while the 3 servers do the calculation, won't this become a bottleneck?

For the calculations I use Ansys CFX 17.2. Which will be faster: a 16-core processor at 3200 MHz, or a 24-core processor at 2650 MHz?

If you have experience with Ansys products: does it make sense to upgrade to the latest Ansys version? Will it run faster?

wkernkamp June 18, 2021 19:12

InfiniBand without a switch
 
If you get a very cheap ConnectX-3 dual-port FDR InfiniBand card for each of your nodes, you will get very good performance thanks to a direct interconnect between all three nodes. A switch makes set-up easier, but mine is very noisy, so I try not to use it. Download the latest drivers, the InfiniBand tools, and opensm from Mellanox. They also give you SR-IOV, so you can set up virtual interfaces in case you run the nodes as virtual machines.

Avoid the "Pro" cards, because they provide only 40 Gb/s Ethernet over InfiniBand instead of 56 Gb/s InfiniBand (plus Ethernet over IB). Look for 354 in the model number of the card, if I remember correctly. There are faster versions of InfiniBand nowadays, but with just three nodes it shouldn't make much difference, because you will already have very good bandwidth and very low latency. MPI is optimized for InfiniBand with remote direct memory access, so there is no need for tuning. Non-Mellanox cards can be flashed to the latest Mellanox firmware with mstflint. It is not hard.
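
Roughly, bringing up such a switchless setup on Linux looks like this (my own sketch, not a complete procedure; the GUIDs below are made up and vary per card):

Code:

# on each node: load the ConnectX-3 driver and check both ports
modprobe mlx4_ib
ibstat                        # shows port state, rate, and port GUIDs

# each point-to-point link is its own IB subnet and needs a subnet
# manager; with 3 nodes and 3 links, run one opensm per node, bound
# to one local port by its GUID
opensm -B -g 0x0002c903004a1234

# basic connectivity check between two neighbours
ibping -S                     # on one node: start the ping server
ibping -G 0x0002c903004a5678  # on the other: ping that port GUID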



Caution: If you need to get this up and running for a time-critical application at work, get a managed switch, because it will handle the subnet management automatically and make most of it plug and play.

Rec July 2, 2021 05:50

Quote:

Originally Posted by wkernkamp (Post 806411)
Caution: If you need to get this up and running for a time-critical application at work, get a managed switch, because it will handle the subnet management automatically and make most of it plug and play.

Thank you for your answer

Could you recommend a switch for this task, with optimal price/performance?

