Hardware Suggestions for Ansys Icepak, Sherlock and Mechanical

October 12, 2022, 19:34
Hardware Suggestions for Ansys Icepak, Sherlock and Mechanical
#1
New Member
Nic
Join Date: Oct 2022
Posts: 4
Rep Power: 4
Hi All,

I work at a small business that has started to get into the simulation space, and I've been lurking the forums here a bit, which has been very helpful, especially the stickies. There are likely still some considerations I'm missing, but I think I'm at the point of having a rough outline of how things should look, and I would appreciate any suggestions or comments.

To answer the checklist from the sticky before posting:
1. Intended software is Ansys Icepak, Sherlock and Mechanical.
2. The plan is to be licensed for 32 cores.
3. Scale/detail is unclear as yet, but primarily circuit boards and relatively small mechanical components. Cell count is TBD.
4. Budget is $10k, including additional networking equipment for integration.
5. The setting is engineering in a business environment. The intention is also to set it up as a server and install RSM, with one box shared between a couple of users.
6. Flexible on sourcing, but we currently figure we would get more value from building it ourselves.
7. Location is USA.

So far the 7373X in a 2P configuration and the 7573X have caught my eye, but I'm not sure how performance scales in Icepak, Sherlock and Mechanical versus Fluent. I noticed that some published Mechanical benchmarks used the 73F3, so I'm not sure whether these programs benefit more from increased frequency than from cache. A 2P Xeon 8362 or 8253 is also being considered, but from what little I've seen, the AMD offerings seem likely to be faster.

Uncertainty about the cell and DoF count seems like a big x-factor on our end, especially for RAM sizing and network speed, considering jobs will be going over the network. My hope is that room to expand the memory and a decent NIC can somewhat mitigate the concern. I'm not sure how much of a hit only populating half the RAM channels would be, though.

Also, what is the performance difference between a 2P 16-core and a 1P 32-core system, all else being equal? I'm not sure what the relative impact is of a 2x larger cache versus the increased latency between sockets. Maybe it's workload dependent?

Currently the rough build is:
- Gigabyte MZ72-HB0 motherboard
- 2x AMD 7373X CPUs
- SilverStone XE04-SP3 4U coolers (I don't think I can fit the larger Arctic SP3 cooler on this motherboard)
- 512GB (32x8) 3200 MHz registered ECC RAM
- Mellanox ConnectX-5 NIC
- L4500 8-fan 4U case

Thanks,
Nic
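For what it's worth, here's the back-of-the-envelope RAM math I've been doing. A minimal sketch; the GB-per-million-DOF figures are assumptions taken from generic sizing rules of thumb, not Ansys-published numbers, so the hardware guide for the actual Ansys version would be the real reference:

Code:
# Rough RAM sizing sketch for a direct (sparse) structural solve.
# The GB-per-MDOF figures are assumed rules of thumb, not Ansys
# numbers -- check the Ansys hardware guide before buying.
GB_PER_MDOF_IN_CORE = 10.0       # assumed: in-core sparse solve
GB_PER_MDOF_OUT_OF_CORE = 1.0    # assumed: out-of-core sparse solve
OS_AND_OVERHEAD_GB = 32.0        # assumed headroom for OS, RSM, file cache

def ram_estimate_gb(dof_millions: float, in_core: bool = True) -> float:
    """Very rough RAM estimate for a single Mechanical solve."""
    per_mdof = GB_PER_MDOF_IN_CORE if in_core else GB_PER_MDOF_OUT_OF_CORE
    return dof_millions * per_mdof + OS_AND_OVERHEAD_GB

# Example: a hypothetical 20 MDOF model solved in-core
print(f"{ram_estimate_gb(20):.0f} GB")   # ~232 GB on these assumptions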
October 12, 2022, 23:02
#2
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
Very powerful system. However, you should fill all 16 memory channels for best CFD performance, so go with 16x 16GB.
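If you want to see the channel effect directly, here is a minimal numpy sketch that times a large array copy. The arrays are sized to dwarf the caches; the absolute numbers will vary with BIOS settings and DIMM population, and a single process won't saturate all 16 channels, so treat it as a relative check between configurations:

Code:
import time
import numpy as np

# Time a large array copy to estimate effective memory bandwidth.
# Arrays are far larger than any CPU cache, so the copy is DRAM-bound.
N = 500_000_000                  # ~4 GB of float64 per array
a = np.ones(N)
b = np.empty_like(a)

t0 = time.perf_counter()
np.copyto(b, a)
dt = time.perf_counter() - t0

# The copy reads a and writes b once: roughly 2 * 8 * N bytes moved.
print(f"{2 * 8 * N / 1e9 / dt:.1f} GB/s effective copy bandwidth")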
October 13, 2022, 11:07
#3
New Member
Nic
Join Date: Oct 2022
Posts: 4
Rep Power: 4
I'm not sure if anyone has tried this (if not, perhaps for good reason), but one idea that came up was, if we have more cores than Ansys is licensed for, for example with two 7473X, using the excess cores for other tasks or VMs on the same system. I'm not sure whether this would result in relatively minor performance degradation or be more trouble than it's worth due to resource contention.

*Edit* Also, should there be additional headroom in core count beyond what the solver itself uses, for tasks not directly related to the solver, such as, say, retrieving a completed solve?

Last edited by Kips; October 13, 2022 at 12:21. Reason: Added additional section
October 13, 2022, 14:03
#4
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
512 GB should be more than adequate.

The number of extra cores beyond those used for CFD depends on the intended workload of the other processes, such as VMs. Memory access is typically the CFD bottleneck. Therefore, VMs that run memory-intensive applications under high load, such as your company's database server, should not run in parallel with the CFD calculation, as performance for both the CFD and the database will drop. Lightly loaded VMs can coexist with the CFD calculation just fine. I sometimes run a CFD case on a few cores fewer than the maximum available so that I keep a responsive desktop.
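You can make the contention point concrete with a sketch like this: time one memory-bound worker alone, then several in parallel, and watch the per-worker rate drop once the channels saturate (the worker counts are placeholders; pick values up to your core count):

Code:
import time
from multiprocessing import Pool
import numpy as np

def worker(_):
    """One full streaming pass over a large array; returns GB/s."""
    a = np.ones(200_000_000)                    # ~1.6 GB of float64
    t0 = time.perf_counter()
    a.sum()                                     # one read pass over DRAM
    return 8 * a.size / 1e9 / (time.perf_counter() - t0)

if __name__ == "__main__":
    for n in (1, 4, 16):                        # placeholder worker counts
        with Pool(n) as pool:
            rates = pool.map(worker, range(n))
        print(f"{n:2d} workers: {sum(rates) / n:5.1f} GB/s each, "
              f"{sum(rates):6.1f} GB/s aggregate")

A lightly loaded VM behaves like the one-worker case; a database under load behaves like the crowded one.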
October 14, 2022, 11:56
#5
New Member
Nic
Join Date: Oct 2022
Posts: 4
Rep Power: 4
The thought was to have the engineers' VMs running alongside the solver server on the same box, but they would likely be fairly memory intensive. For a few more cores, an 18-core part like the Xeon 6354 over the 16-core 6346 seems like it might be worthwhile.
October 14, 2022, 16:01
#6
Senior Member
Will Kernkamp
Join Date: Jun 2014
Posts: 372
Rep Power: 14
2. The proposed machine is in the top 1% of capability for both CPUs and core memory size. So if your problem size is so large that it would not fit on this machine (unlikely), you would need to run it on a cluster somewhere, or add a machine to form a cluster yourself.

3. An engineer's VMs will each occupy a bit of core memory (of which you have plenty). However, this memory is accessed very infrequently compared to the access needs of CFD. It is this frequent access that taxes the memory bandwidth, which is the bottleneck here.

4. Only if your engineers are planning to run CFD inside their VMs do you have a problem with bandwidth contention. They will quickly figure out how to sequence big jobs so as not to get in each other's way.
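On the 1P-versus-2P question and on keeping jobs out of each other's way, it helps to know which cores sit on which NUMA node. A small Linux-only sketch reading standard sysfs paths (what you see on a 2P Epyc also depends on the NPS setting in the BIOS):

Code:
from pathlib import Path

# List each NUMA node and the CPUs that belong to it (Linux sysfs).
nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"),
               key=lambda p: int(p.name[4:]))
for node in nodes:
    cpus = (node / "cpulist").read_text().strip()
    print(f"{node.name}: CPUs {cpus}")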
October 14, 2022, 16:47
#7
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
Gotta say, I'm not a huge fan of the idea of using "excess" cores on a compute node for VMs. Even a cheap dedicated machine will yield a much better user experience and avoid headaches during setup.
I would strongly advise against that.
October 17, 2022, 18:24
#8
New Member
Nic
Join Date: Oct 2022
Posts: 4
Rep Power: 4
On further consideration, I think this is probably for the best. Unfortunately, the engineers' machines will likely have to be virtualized one way or the other, but I think separating them out onto another box is probably the way to go.
October 17, 2022, 18:57
#9
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,427
Rep Power: 49
Yeah, virtualization is not the problem I'm seeing here.
Running the VMs alongside the solvers on the same hardware is. They will compete for the limited shared CPU resources, leading to fluctuations in solver run times and, more importantly, to sluggishness of the VMs. Doing CAD that way could become pretty annoying. You could partially work around that on Epyc by assigning the VMs exclusively to the cores of one chiplet, which will of course reduce the resources available for compute.
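If you did go down that road on Linux, the mechanics are simple enough. A sketch, with hypothetical core IDs that you would have to replace after checking the actual topology (e.g. with lscpu -e or the sysfs listing earlier in the thread):

Code:
import os
import subprocess

# Hypothetical split: cores 0-7 are one CCD reserved for the VMs,
# cores 8-31 belong to the solver. Core numbering varies between
# systems, so verify with `lscpu -e` before pinning anything.
SOLVER_CORES = set(range(8, 32))

# Pin this process (and any solver it launches) to the solver cores
# so it never lands on the VM chiplet.
os.sched_setaffinity(0, SOLVER_CORES)
subprocess.run(["taskset", "-cp", str(os.getpid())])   # print the mask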