Hardware Suggestions for Ansys Icepak, Sherlock and Mechanical

#1 | October 12, 2022, 19:34
Kips (Nic), New Member, Join Date: Oct 2022, Posts: 4
Hi All,

I work at a small business that has started to get into the simulation space. I've been lurking the forums here a bit, which has been very helpful, especially the stickies. There are likely still some considerations I'm missing, but I think I'm at the point of having a rough outline of how things should look, and I would appreciate any suggestions or comments.

To answer the pre-posting checklist from the sticky:

1. Intended software is Ansys Icepak, Sherlock and Mechanical.
2. The plan is to be licensed for 32 cores.
3. Unclear yet as to scale/detail; primarily circuit boards and relatively small mechanical components. Cell count is TBD.
4. Budget is $10k, including additional networking equipment for integration.
5. The setting is engineering in a business environment. The intention is also to set it up as a server and install RSM, with one box shared between a couple of users.
6. Flexible on sourcing, but currently figuring we would probably get more value from building ourselves.
7. Location is USA.

So far the 7373X in a 2P configuration and the 7573X have caught my eye, but I'm not sure how performance scales in Icepak, Sherlock and Mechanical versus Fluent. I noticed the 73F3 was used in some published Mechanical benchmarks, so I'm not sure whether these programs benefit more from higher frequency than from extra cache. A 2P Xeon 8362 or 8253 setup is also being considered, but from what little I've seen the AMD offerings seem likely to be faster.

Uncertainty about the cell and DoF count seems like the big X factor on our end, especially for RAM sizing and network speed, since jobs will be submitted over the network. My hope is that room to expand the memory and a decent NIC can somewhat mitigate the concern. I'm not sure how much of a hit only populating half the RAM channels would be, though.
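
For what it's worth, a back-of-the-envelope sketch of the RAM question (the per-million-cell and per-million-DoF figures below are just commonly quoted rules of thumb, not Ansys guidance, and the model sizes are made-up placeholders):

Code:
# Rough RAM sizing; rule-of-thumb figures only, not vendor data.
cells_millions = 20    # hypothetical Icepak mesh size
dof_millions = 5       # hypothetical Mechanical model size

cfd_ram_gb = cells_millions * 2    # ~2 GB per million cells is a common ballpark
fea_ram_gb = dof_millions * 10     # ~10 GB per million DoF for an in-core sparse solve

print(f"CFD estimate: ~{cfd_ram_gb} GB, FEA estimate: ~{fea_ram_gb} GB")
print(f"Fits in 512 GB: {max(cfd_ram_gb, fea_ram_gb) <= 512}")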

Also, what is the performance difference between a 2P 16-core and a 1P 32-core system, all else being equal? I'm not sure what the relative impact is of twice the total cache versus the added latency between sockets. Maybe it's workload dependent?
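
To put some rough numbers on the bandwidth side of that question (theoretical peaks only, assuming DDR4-3200 and 8 memory channels per EPYC socket; real solvers see less):

Code:
# Theoretical peak memory bandwidth: 1P (8 channels) vs 2P (16 channels).
per_channel_gbs = 3200e6 * 8 / 1e9    # DDR4-3200 x 8 bytes, about 25.6 GB/s per channel
channels_per_socket = 8

print(f"1P, 1x 32 cores: {1 * channels_per_socket * per_channel_gbs:.0f} GB/s peak")
print(f"2P, 2x 16 cores: {2 * channels_per_socket * per_channel_gbs:.0f} GB/s peak")
# The same arithmetic is why populating only half the DIMM slots halves the peak.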

Currently the rough build is:

Gigabyte MZ72-HB0 motherboard
2x AMD EPYC 7373X CPU
Silverstone XE04-SP3 4U cooler (don't think I can fit the larger Arctic SP3 cooler on the motherboard)
512GB (32x8) 3200MHz registered ECC RAM
Mellanox ConnectX-5 NIC
L45000 8-fan 4U case

Thanks,
Nic

#2 | October 12, 2022, 23:02
wkernkamp (Will Kernkamp), Senior Member, Join Date: Jun 2014, Posts: 372
Quote:
Originally Posted by Kips
Currently the rough build is:

Gigabyte MZ72-HB0 motherboard
2x AMD EPYC 7373X CPU
Silverstone XE04-SP3 4U cooler (don't think I can fit the larger Arctic SP3 cooler on the motherboard)
512GB (32x8) 3200MHz registered ECC RAM
Mellanox ConnectX-5 NIC
L45000 8-fan 4U case
Very powerful system. However, you should fill all 16 memory channels for best CFD performance, so go with 16x 16GB.

#3 | October 13, 2022, 11:07
Kips (Nic), New Member, Join Date: Oct 2022, Posts: 4
Quote:
Originally Posted by wkernkamp
Very powerful system. However, you should fill all 16 memory channels for best CFD performance, so go with 16x 16GB.
Thanks, that's probably a good idea. We should hopefully be a ways away from exceeding 512GB.

I'm not sure if anyone has tried this (if not, perhaps for good reason), but one idea that came up was: if we have more cores than Ansys is licensed for, for example with two 7473X CPUs, could we use the excess cores for other tasks or VMs on the same system? I'm not sure whether this would cause only a relatively minor performance degradation or be more trouble than it's worth due to resource contention.

*Edit* Also, should there be additional headroom in core count on the solver machine for tasks not directly related to the solver, such as, say, retrieving a solve to be able to run it?


#4 | October 13, 2022, 14:03
wkernkamp (Will Kernkamp), Senior Member, Join Date: Jun 2014, Posts: 372
Quote:
Originally Posted by Kips
Thanks, that's probably a good idea. We should hopefully be a ways away from exceeding 512GB.

I'm not sure if anyone has tried this (if not, perhaps for good reason), but one idea that came up was: if we have more cores than Ansys is licensed for, for example with two 7473X CPUs, could we use the excess cores for other tasks or VMs on the same system? I'm not sure whether this would cause only a relatively minor performance degradation or be more trouble than it's worth due to resource contention.

*Edit* Also, should there be additional headroom in core count on the solver machine for tasks not directly related to the solver, such as, say, retrieving a solve to be able to run it?

512 GB should be more than adequate.


The number of extra cores beyond those used for CFD depends on the intended workload for other processes such as VMs. Memory access is typically the CFD bottleneck. Therefore, VMs that run memory-intensive applications under high load, such as your company database server, should not be run in parallel with the CFD calculation, as performance for both the CFD and the database will drop. Lightly loaded VMs can coexist with the CFD calculation just fine. I sometimes run a CFD case on a few cores fewer than the maximum available to keep a responsive desktop.
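
If you want to see that bottleneck for yourself, here is a minimal STREAM-style sketch in Python/numpy (not a calibrated benchmark; assumes numpy is installed). Run one copy, then two copies at the same time, and watch the per-copy number drop:

Code:
# Crude STREAM-triad bandwidth estimate; run several copies at once to see contention.
import time
import numpy as np

N = 100_000_000          # three float64 arrays, roughly 2.4 GB in total
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)

t0 = time.perf_counter()
a[:] = b + 2.5 * c       # triad moves about 24 bytes per element
dt = time.perf_counter() - t0
print(f"~{N * 24 / dt / 1e9:.1f} GB/s effective")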

#5 | October 14, 2022, 11:56
Kips (Nic), New Member, Join Date: Oct 2022, Posts: 4
Quote:
Originally Posted by wkernkamp
512 GB should be more than adequate.

The number of extra cores beyond those used for CFD depends on the intended workload for other processes such as VMs. Memory access is typically the CFD bottleneck. Therefore, VMs that run memory-intensive applications under high load, such as your company database server, should not be run in parallel with the CFD calculation, as performance for both the CFD and the database will drop. Lightly loaded VMs can coexist with the CFD calculation just fine. I sometimes run a CFD case on a few cores fewer than the maximum available to keep a responsive desktop.
Thanks. I suppose in the worst case, on the odd very large task, the solver would start hitting the SSD as virtual memory and run slowly?

The thought was to have the engineers' VMs running alongside the solver on the same box, but they would likely be fairly memory-intensive. For the few extra cores, an 18-core part like the Xeon Gold 6354 over the 16-core 6346 seems like it might be worthwhile.

#6 | October 14, 2022, 16:01
wkernkamp (Will Kernkamp), Senior Member, Join Date: Jun 2014, Posts: 372
Quote:
Originally Posted by Kips
Thanks. I suppose in the worst case, on the odd very large task, the solver would start hitting the SSD as virtual memory and run slowly?

The thought was to have the engineers' VMs running alongside the solver on the same box, but they would likely be fairly memory-intensive. For the few extra cores, an 18-core part like the Xeon Gold 6354 over the 16-core 6346 seems like it might be worthwhile.
1. It is not feasible to run CFD with swap space on disk standing in for memory; that would be far too slow.

2. The proposed machine is in the top 1% of capability for both CPUs and core memory size. So if your problem is so large that it would not fit on this machine (unlikely), you will need to run it on a cluster somewhere or add a machine to form a cluster yourself.

3. An engineer's VMs will each occupy a bit of core memory (of which you have plenty). However, this memory is accessed very infrequently compared to the access needs of CFD. It is this frequent access that taxes the memory bandwidth, which is the bottleneck here.

4. Only if your engineers are planning to run CFD inside their VMs do you have a problem with bandwidth contention. They will quickly figure out how to sequence big jobs so they don't get in each other's way.
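
To put a rough number on point 1 (ballpark figures, assuming a fast PCIe 4.0 NVMe drive against 16 channels of DDR4-3200):

Code:
# Why swapping to SSD is a non-starter for a memory-bound solve (ballpark only).
ram_bw_gbs = 16 * 25.6    # ~410 GB/s theoretical peak for 16 DDR4-3200 channels
nvme_bw_gbs = 7.0         # ~7 GB/s sequential for a good PCIe 4.0 NVMe SSD

print(f"RAM has roughly {ram_bw_gbs / nvme_bw_gbs:.0f}x the bandwidth of the SSD,")
print("before even counting the far worse latency and random-access behavior of swap.")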

#7 | October 14, 2022, 16:47
flotus1 (Alex), Super Moderator, Join Date: Jun 2012, Location: Germany, Posts: 3,427
Gotta say, I'm not a huge fan of the idea of using "excess" cores on a compute node for VMs. Even a cheap dedicated machine will yield a much better user experience, and avoid headaches during the setup.
I would strongly advise against that.

#8 | October 17, 2022, 18:24
Kips (Nic), New Member, Join Date: Oct 2022, Posts: 4
Quote:
Originally Posted by wkernkamp
1. It is not feasible to run CFD with swap space on disk standing in for memory; that would be far too slow.

2. The proposed machine is in the top 1% of capability for both CPUs and core memory size. So if your problem is so large that it would not fit on this machine (unlikely), you will need to run it on a cluster somewhere or add a machine to form a cluster yourself.

3. An engineer's VMs will each occupy a bit of core memory (of which you have plenty). However, this memory is accessed very infrequently compared to the access needs of CFD. It is this frequent access that taxes the memory bandwidth, which is the bottleneck here.

4. Only if your engineers are planning to run CFD inside their VMs do you have a problem with bandwidth contention. They will quickly figure out how to sequence big jobs so they don't get in each other's way.
Appreciate the feedback regarding sizing. I think we're probably overestimating our needs, but given the cost of the software, the additional hardware grunt is relatively inexpensive, though the additional core packs are a bit pricier.

Quote:
Originally Posted by flotus1
Gotta say, I'm not a huge fan of the idea of using "excess" cores on a compute node for VMs. Even a cheap dedicated machine will yield a much better user experience, and avoid headaches during the setup.
I would strongly advise against that.
On further consideration, I think this is probably for the best. Unfortunately, the engineers' machines will likely have to be virtualized one way or another, but separating them out onto another box is probably the way to go.

#9 | October 17, 2022, 18:57
flotus1 (Alex), Super Moderator, Join Date: Jun 2012, Location: Germany, Posts: 3,427
Yeah, virtualization is not the problem I'm seeing here.
Running the VMs alongside the solvers on the same hardware is. They will compete for the limited shared CPU resources, leading to fluctuations in solver run times and, more importantly, sluggishness of the VMs. Doing CAD that way could become pretty annoying.
You could partially work around that with Epyc by assigning the VMs exclusively to the cores of one chiplet, which will of course reduce the resources available for compute.
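
As a minimal sketch of what that pinning could look like on Linux (the core IDs and PID below are hypothetical; check the real CCD-to-core mapping for your CPU with lscpu or hwloc first, and in practice you would more likely use the hypervisor's own vCPU pinning):

Code:
# Confine a (hypothetical) VM process to the cores of one chiplet on Linux.
import os

ccd_cores = set(range(0, 8))    # assumption: cores 0-7 sit on one CCD; verify first
vm_pid = 12345                  # hypothetical PID of the VM / QEMU process

os.sched_setaffinity(vm_pid, ccd_cores)    # pin the VM to that CCD only
print(os.sched_getaffinity(vm_pid))        # confirm the new CPU affinity mask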
