
CFD workstation configuration calling for help



July 2, 2020, 10:28   #1
Freewill1 | New Member | Join Date: Aug 2014 | Posts: 18
Hi all,

I need a machine for OpenFOAM modeling with a budget of $15,000-$20,000.

My typical number of cells is ~125M (500x500x500).

I am inclined to choose AMD's EPYC 7002-series CPUs because of their
  • 8 channels of ECC RAM support per socket
  • better price/performance ratio compared to Intel's Xeon CPUs
Can anyone help with the following questions?
  • Is a single-socket workstation up to the task, or do I need a dual-socket machine (or even two nodes)?
  • For the same total of 64 cores (the maximum available on a single 2nd-gen EPYC CPU), which should I choose: single-socket or dual-socket, e.g., 1x 7742 (64 cores/128 threads) vs. 2x 7542 (32 cores/64 threads each)?
  • How many GB of ECC RAM should I plan for: 256 GB, 512 GB, or 1 TB?
[Attachment: AMD EPYC.jpg]
Here are reference prices for the key components from Newegg:
CPU:
1x EPYC 7742: $8,513 x 1 = $8,513
2x EPYC 7542: $4,122 x 2 = $8,244

ECC RAM (512 GB = 16 x 32 GB, to populate all 8 memory channels of each CPU): ~$170 x 16 ≈ $2,710
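For a quick sanity check against the budget, here is a minimal Python sketch totaling the parts listed above; the figure for the remaining components (motherboard, storage, PSU, chassis, cooling) is an assumed placeholder, not a quote.
Code:
# Rough budget check for the dual-socket option.
cpu_2x7542 = 2 * 4122    # 2x EPYC 7542, Newegg prices as quoted above
ram_512gb  = 16 * 170    # 16x 32 GB DDR4 reg ECC, ~$170 per module
misc       = 3000        # assumed: motherboard, storage, PSU, chassis, cooling

total = cpu_2x7542 + ram_512gb + misc
print(f"Estimated total: ${total:,}")   # ~$13,964, within the $15,000-$20,000 budget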

Thanks in advance.

Last edited by Freewill1; July 4, 2020 at 23:02.

July 2, 2020, 10:35   #2
flotus1 (Alex) | Super Moderator | Location: Germany | Join Date: Jun 2012 | Posts: 3,399
Quote:
Originally Posted by Freewill1 View Post
I want to buy a machine for OpenFOAM modeling with budget of $1500~$2000.
Missing a few zeros? The options you are discussing are way more expensive.

July 2, 2020, 10:36   #3
Freewill1 | New Member
Quote:
Originally Posted by flotus1 View Post
Missing a few zeros? The options you are discussing are way more expensive.
Wrong numbers fixed.

July 2, 2020, 11:08   #4
flotus1 (Alex) | Super Moderator
Quote:
Is a single-socket workstation up to the task, or do I need a dual-socket machine (or even two nodes)?
Anything with enough memory can run it. Rule of thumb: more CPU sockets means more speed.
So if you can squeeze it into your budget, two dual-socket nodes would be ideal. If you want everything inside one workstation-style box, then dual-socket is the way to go. That option will of course be slower than two nodes, even if the nodes have lower-core-count CPUs. Avoid a single CPU with an extremely high core count; that is just a waste of money for CFD.

Quote:
For the same total of 64 cores (the maximum available on a single 2nd-gen EPYC CPU), which should I choose: single-socket or dual-socket, e.g., 1x 7742 (64 cores/128 threads) vs. 2x 7542 (32 cores/64 threads each)?
Shared resources are key here, especially memory channels. The more CPUs you have, the more shared resources are available, which helps avoid bottlenecks.
Sticking to your example: 2x EPYC 7542 will be about twice as fast for running simulations in OpenFOAM compared to a single EPYC 7742. The latter will be severely limited by a lack of memory bandwidth and stop scaling somewhere between 24 and 32 cores.
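To make that limit plausible, here is a minimal back-of-the-envelope sketch; the per-core bandwidth demand is an assumed figure, not a measurement.
Code:
# How many cores can one EPYC socket feed before memory bandwidth runs out?
channels        = 8
gb_per_channel  = 25.6   # DDR4-3200: 3200 MT/s * 8 bytes = 25.6 GB/s per channel
socket_bw       = channels * gb_per_channel   # ~205 GB/s theoretical per socket

per_core_demand = 7.0    # GB/s one core of a typical FV solver can consume (assumed)

print(f"cores to saturate one socket: ~{socket_bw / per_core_demand:.0f}")
# ~29 cores: the remaining cores of a 64-core 7742 add little,
# while a second socket doubles the available bandwidth.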

Quote:
How many GB of ECC RAM should I plan for: 256 GB, 512 GB, or 1 TB?
While 256 GB should be enough for a standard aerodynamic simulation with 125M cells, I would recommend 512 GB total for your budget. Once you have the compute power to run larger models, your cell counts will increase automatically.
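As a rough capacity estimate (the bytes-per-cell figure is a rule of thumb only and varies a lot with solver, schemes and number of fields):
Code:
# Rough memory estimate for a 500^3 mesh.
cells          = 500**3          # 125 million cells
bytes_per_cell = 1024            # assumed ~1 kB per cell for a typical FV solver

total_gb = cells * bytes_per_cell / 1024**3
print(f"~{total_gb:.0f} GB")     # ~119 GB: 256 GB leaves headroom, 512 GB leaves room to grow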

With all that out of the way: don't buy the EPYC 7542 either. It costs almost twice as much as an EPYC 7452, which has the same core count and chip layout at a lower TDP, and the performance difference when running OpenFOAM will be negligible.
If you feel up to the task of connecting two machines and running cases distributed, two machines with 2x 16 or 2x 24 cores and 256 GB RAM (16x 16 GB) each would be even better.

For memory, make sure to get DDR4-3200 registered ECC modules. EPYC can't handle unbuffered memory.

July 2, 2020, 11:57   #5
Freewill1 | New Member
Quote:
Originally Posted by flotus1 View Post
[...]
Hi Alex, thank you for your pertinent suggestions on all points.

I am surprised by what you said about memory bandwidth being the bottleneck in CFD (and perhaps in other types of numerical computation too), and by how poor the value for money of the EPYC 7742/7542 is; I was not aware of either issue.

July 2, 2020, 14:04   #6
Freewill1 | New Member
Hi Alex,
After thinking it over, some new questions have come up.
If I manage to set up two nodes working as a cluster, which you mentioned above as the best option, an additional interconnect between the nodes will be needed, e.g., InfiniBand at node-to-node speeds of up to ~100 Gb/s.
Will this comparatively low-bandwidth network cancel out the benefit of the much faster eight-channel EPYC memory access and introduce a new bottleneck?
Also, an InfiniBand network seems quite expensive and would require extra budget?
I am not sure whether I understand these issues correctly.

July 2, 2020, 16:23   #7
flotus1 (Alex) | Super Moderator
Quote:
I am surprised by what you said about memory bandwidth being the bottleneck in CFD (and perhaps in other types of numerical computation too), and by how poor the value for money of the EPYC 7742/7542 is; I was not aware of either issue.
It's not a new concept, or something I came up with.
Most finite-volume CFD codes have a rather high code balance, i.e., a large amount of data traffic per floating-point operation.
Combined with machine balance getting smaller and smaller (core counts and IPC keep increasing while memory bandwidth barely keeps up), memory bandwidth bottlenecks have long been a fact of life in CFD, and they will become more common in other applications in the future.
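To put rough numbers on code balance vs. machine balance, here is a minimal sketch; the peak FLOP rate and the bytes-per-FLOP demand are illustrative assumptions, not measurements.
Code:
# Machine balance = attainable memory bandwidth / peak FLOP rate.
# Code balance    = bytes of memory traffic a code needs per FLOP.
# If code balance >> machine balance, the code is memory-bound.
peak_flops = 32 * 2.9e9 * 16    # cores * clock * FLOPs/cycle (AVX2 FMA, assumed)
mem_bw     = 205e9              # bytes/s, ~8 channels of DDR4-3200

machine_balance = mem_bw / peak_flops   # bytes the machine can deliver per FLOP
code_balance    = 4.0                   # bytes/FLOP, assumed order of magnitude for FV CFD

print(f"machine balance ~{machine_balance:.2f} B/FLOP vs. code balance ~{code_balance:.1f} B/FLOP")
# The code asks for far more data per FLOP than the machine can deliver,
# so the memory interface, not the cores, sets the pace.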
Here is what that looks like in OpenFOAM:
[Attachment: scaling.png — OpenFOAM scaling results]
Results were taken from the pinned thread in this sub-forum. The 7551 result is mine.
No, this is not just the test case being too small for high core counts. This case can scale to even higher core counts, albeit on several machines.

Quote:
Will this comparatively low-bandwidth network cancel out the benefit of the much faster eight-channel EPYC memory access and introduce a new bottleneck?
InfiniBand as a node interconnect is not a bottleneck for OpenFOAM, especially not for only two nodes. If that were the case, clusters would not be a thing.
Parallelization in OpenFOAM is implemented via MPI plus domain decomposition. The only data that have to be transferred between processes are the values on the domain boundaries. This pales in comparison to the amount of data that has to be accessed for the algorithm itself.
As a matter of fact, you could probably get away with 10 Gigabit Ethernet as an interconnect for only two nodes. If you want InfiniBand, all you need for two nodes are two cards and a cable; no expensive switch required. The main benefit of InfiniBand over Ethernet as a node interconnect is not bandwidth but latency, so you don't have to go all in with 100G or even faster InfiniBand cards. 40G or 56G would be plenty.
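A rough estimate of the traffic involved supports this; the number of exchanged fields and the single planar cut are assumptions for illustration only.
Code:
# Inter-node traffic vs. local memory traffic for a 500^3 mesh split across two nodes.
n               = 500
interface_cells = n * n      # one 500x500 cut plane between the two nodes (assumed decomposition)
fields          = 10         # assumed number of exchanged fields
bytes_per_value = 8          # double precision

halo_bytes     = interface_cells * fields * bytes_per_value   # crosses the interconnect per exchange
interior_bytes = n**3 * fields * bytes_per_value              # touched locally every sweep

print(f"halo per exchange : {halo_bytes / 1e6:.0f} MB")       # ~20 MB
print(f"interior per sweep: {interior_bytes / 1e9:.0f} GB")   # ~10 GB
# Even a 10 GbE link (~1.25 GB/s) moves ~20 MB in a fraction of a second;
# local memory traffic dwarfs the inter-node traffic.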

July 2, 2020, 19:44   #8
Freewill1 | New Member
Many thanks. I have learned quite a lot from your expertise.


