
Workstation vs AWS

August 5, 2022, 07:29   #1
Marc Ricart (cramr5), Member, Join Date: Jul 2022, Posts: 65
Hello,

I've been looking into increasing simulation performance at our company. Currently there is only one user doing simulation, and we are wondering whether to get a powerful workstation or move towards AWS.

From a quick quotation, an AWS instance with 64 cores + 128 GB RAM comes out at around 3,500 USD a month (less if you use Linux, but we are currently on Windows).

Then, an AMD Threadripper with 64 cores + 128 GB of RAM + decent storage and GPU comes out at something under 10k USD (which would be around 3 months of AWS).

Does this make sense to you? Because looking at these numbers, I would lean towards having our own machine, relying only on internal network transfer speeds and not having to transfer massive files over the Internet...
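
For reference, a rough back-of-the-envelope break-even sketch using the figures above (utilisation is the only knob; power, maintenance, admin time and resale value are all ignored):

Code:
# Rough rent-vs-buy break-even, using the figures quoted above as assumptions.
aws_monthly_usd = 3500      # 64 cores + 128 GB RAM on Windows, per month
workstation_usd = 10000     # one-off Threadripper workstation

for utilisation in (1.00, 0.50, 0.25):   # fraction of each month the capacity is needed
    months = workstation_usd / (aws_monthly_usd * utilisation)
    print(f"utilisation {utilisation:.0%}: break-even after ~{months:.1f} months")
# -> ~2.9 months at full use, ~5.7 at half use, ~11.4 at quarter use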

What are your thoughts?

Thanks

August 5, 2022, 15:10   #2
Will Kernkamp (wkernkamp), Senior Member, Join Date: Jun 2014, Posts: 316
Your financial logic is impeccable. For a sustained need, you have roughly a three-month return on investment. As far as equipment is concerned, the Threadripper platform is not ideal for CFD. Not bad, but not the best value-for-money proposition. Staying with AMD, a dual-EPYC system with the same number of cores would perform better, because it has more memory channels.

August 7, 2022, 06:13   #3
Alex (flotus1), Super Moderator, Join Date: Jun 2012, Location: Germany, Posts: 3,399
In theory, the economics of buying vs. renting are clear.
If you use the hardware on a daily basis for long periods of time, buying is cheaper. And it has other benefits, as you pointed out, like not having to push files over a potentially slow internet connection.
Renting additional hardware makes sense to cover occasional spikes in demand, like when you need to run a particularly large simulation as a one-off, or a large parameter sweep.
In short: buy what you need regularly, rent the rest.

In practice, things can look different depending on the setting you are in.
Hypothetically, imagine you are in a large company, and the IT department of said company charges an extortionate amount of money each month to your department for providing the on-site hardware. In such cases, it might turn out to be cheaper to rent from external providers, even for stuff you use 24/7.
One benefit of renting from big cloud providers: you can always get the latest and greatest hardware, without having to stick to 3- or 5-year upgrade cycles for your own hardware, or using hardware even older than that because no budget for upgrades was cleared.

August 8, 2022, 04:22   #4
Marc Ricart (cramr5), Member, Join Date: Jul 2022, Posts: 65
Thanks guys, yes, really good points, and it makes sense. I was just a bit "surprised", as I was expecting renting to be a bit cheaper than what I saw. Maybe a 1-1.5 year ROI, not 3 months.

Of course, the point of always having "state of the art" systems and not having to worry about maintenance, updates, downtime, etc. is a clear benefit, especially for large corporations.

For our needs I am not sure what's best. Being such a small company with variable needs and unclear future paths, maybe having at least one powerful workstation could be a benefit. If the CFD team then grows a lot, maybe it's not worth having ten "super powerful machines" and renting makes more sense.
I need to have a chat with IT, the accountants and so on and see how to proceed in our case.

August 8, 2022, 13:03   #5
Will Kernkamp (wkernkamp), Senior Member, Join Date: Jun 2014, Posts: 316
Quote:
Originally Posted by cramr5 View Post
Thanks guys, .....
For our needs I am not sure what's best. Being such a small company with variable needs and unclear future paths, maybe having at least one powerful workstation could be a benefit. If the CFD team then grows a lot, maybe it's not worth having ten "super powerful machines" and renting makes more sense.
I need to have a chat with IT, the accountants and so on and see how to proceed in our case.
CFD is very well suited to clustering. This means your first machine does not have to be "super powerful", as you can add capacity by cloning it. Even with 1 Gb/s Ethernet, you can achieve performance almost proportional to the number of machines (for small clusters). Your IT person is probably already thinking about upgrading your network to 2.5 Gb/s. For software licensing, per-core performance may be a factor: not only clock frequency, but also memory bandwidth and cache come into play.
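
As a rough illustration of why small clusters scale so well even on Gigabit Ethernet, here is a toy model; every number in it (mesh size, time per cell, halo data per cell) is an assumption made up for illustration, and message latency is ignored:

Code:
# Toy scaling model for a small cluster of identical ("cloned") machines.
# All numbers are illustrative assumptions, not measurements.
cells = 20e6                  # mesh size
sec_per_cell = 2e-6           # solver time per cell per iteration on one machine (assumed)
halo_bytes_per_cell = 200     # data exchanged per partition-boundary cell (assumed)
net_bytes_per_sec = 1e9 / 8   # 1 Gb/s Ethernet

t_single = cells * sec_per_cell
for nodes in (1, 2, 3, 4):
    compute = t_single / nodes
    halo_cells = (cells / nodes) ** (2 / 3) if nodes > 1 else 0  # crude surface/volume estimate
    comm = halo_cells * halo_bytes_per_cell / net_bytes_per_sec
    speedup = t_single / (compute + comm)
    print(f"{nodes} node(s): speedup ~{speedup:.2f}, efficiency ~{speedup / nodes:.0%}")
# -> nearly linear for 2-4 nodes; it is the latency ignored here that eventually
#    hurts on slow interconnects, not the raw bandwidth.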


What you want for your first machine is one that is easily cloned. For that, you need to look at systems with a stable socket and/or a standard form factor motherboard. This will allow you to keep all your machines at the same performance level with minor upgrades. The commonality is important for IT costs as well. Ten identical machines are as easily managed as one.


The cloning of machines is also important for rented clusters.

August 8, 2022, 15:37   #6
trans(sonic)_pride, New Member, Join Date: Jul 2022, Posts: 22
As has already been said, I'd recommend going with EPYC because of the higher memory bandwidth. Maybe Threadripper Pro (that is, anything 3xxxWX or 5xxxWX, not 2xxxWX) if you want to keep the memory bandwidth but get more PC-like features on your motherboard. Nevertheless, EPYC CPUs are the best choice, and can be had second-hand for much less than a Threadripper.

August 9, 2022, 03:35   #7
Marc Ricart (cramr5), Member, Join Date: Jul 2022, Posts: 65
Thanks all for the information.

Yes, I have various options around (availability is also a priority).
I've been looking at:
  • AMD 5975WX or 5995WX workstations (which have 8 memory channels)
  • Dual Xeon 6230R (2x 24 cores) (those have 6 channels each, so 12 in total)
  • EPYC: potentially a dual-EPYC system, but that's maybe the most expensive option. Need to see what budget they are willing to invest in the system.

The Threadripper is tempting for improving meshing time, due to its high single-core clock compared to the Intel or even EPYC options.
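
For a rough feel of how the listed options compare, here is a theoretical peak-bandwidth sketch; DDR4-3200 for the AMD parts and DDR4-2933 for the 6230R are assumed, and sustained bandwidth will be noticeably lower in all cases:

Code:
# Theoretical peak bandwidth = channels x transfer rate (MT/s) x 8 bytes per transfer.
# Assumed transfer rates: DDR4-3200 for the AMD options, DDR4-2933 for the Xeon 6230R.
systems = {
    "Threadripper Pro 5975WX/5995WX": (8, 3200),
    "2x Xeon Gold 6230R":             (12, 2933),
    "2x EPYC (Milan)":                (16, 3200),
}
for name, (channels, mt_per_s) in systems.items():
    gb_per_s = channels * mt_per_s * 8 / 1000
    print(f"{name:32s} ~{gb_per_s:4.0f} GB/s peak")
# -> roughly 205, 282 and 410 GB/s respectively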

August 9, 2022, 07:37   #8
trans(sonic)_pride, New Member, Join Date: Jul 2022, Posts: 22
An important thing to consider that I don't think anyone has mentioned yet: because CFD is bottlenecked by memory bandwidth, 2 to 4 cores per memory channel are about optimal. Threadrippers and EPYCs therefore scale more and more poorly above 16 cores; the scaling doesn't get really bad until 32-core processors, but going from 32 up to 64 cores shows little improvement (definitely not double). You can see this here.
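
A crude way to picture that rule of thumb; the figure of ~3.5 cores saturating one DDR4 channel is an assumption, and the real saturation point depends on the solver, the memory speed and the cache:

Code:
# Crude model of why adding cores stops paying off once memory bandwidth saturates.
# Assumption: roughly 3.5 cores saturate one DDR4 channel for a typical implicit solver.
channels = 8                      # one Threadripper Pro or a single EPYC socket
cores_per_channel_sat = 3.5       # assumed saturation point
useful_cores = channels * cores_per_channel_sat   # ~28

for cores in (8, 16, 32, 64):
    effective = min(cores, useful_cores)
    print(f"{cores:2d} cores -> effective speedup ~{effective:.0f}x")
# -> 8x, 16x, ~28x, ~28x: past roughly 32 cores the extra cores mostly wait for memory.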

About L3 cache: in comparisons of such high-end hardware, L3 cache size becomes important. That is why I'd recommend AMD. Each Xeon only has about 35 MB of cache, while Threadrippers have 128 MB each and EPYC Milan-X CPUs have, wait for it, 768 MB per CPU. This reduces the diminishing returns of scaling past 16 cores, meaning that scaling can stay linear up to 32 cores and only then starts to drop.

To sum up, I'd stay away from core counts higher than 32, and prioritize EPYC on a dual-socket motherboard: if the budget allows it, you can populate both sockets; if not, an easy drop-in upgrade can be made later.
The 7573X, 7473X, 7543 and 7532 are likely your best options (in that order). If EPYCs don't fit the budget, a Threadripper 5975WX will likely be good enough, even though it's not as easy to scale up in the future. Just remember to fill all the memory channels.

August 9, 2022, 08:11   #9
Marc Ricart (cramr5), Member, Join Date: Jul 2022, Posts: 65
Wow, amazing! Thanks a lot for that info.

Another question, since you seem to know a lot about this topic: what if I go with the 64 cores but then run 2 simulations in parallel (using 32 cores each), without OS virtualisation, all within the same Windows session?

Do you know how that scales? Is Star-CCM+ (and Windows, I guess) able to split memory resources in an efficient way?

August 9, 2022, 12:58   #10
trans(sonic)_pride, New Member, Join Date: Jul 2022, Posts: 22
I’m not really sure, but I’d say it’d be similar to running just one sim, due to the fact that you’d be allocating four memory channels per simulation and 32 cores per simulation, meaning that it would likely perform worse than a dual socket with 32 core processors or even 24 core processors.
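
Using the same cores-per-channel saturation assumption as in the sketch a few posts up (a made-up but plausible figure), the aggregate throughput of two half-size jobs comes out roughly the same as one big job, with each job taking about twice as long, provided each job actually lands on its own memory channels:

Code:
# Same "cores saturate memory channels" assumption as the earlier sketch (~3.5 cores/channel).
cores_per_channel_sat = 3.5
total_channels = 8

one_64_core_job = min(64, total_channels * cores_per_channel_sat)             # ~28x
two_32_core_jobs = 2 * min(32, (total_channels / 2) * cores_per_channel_sat)  # 2 x ~14x

print(f"1 x 64-core job : ~{one_64_core_job:.0f}x aggregate speedup")
print(f"2 x 32-core jobs: ~{two_32_core_jobs:.0f}x aggregate, each sim ~half as fast")
# Assumes each job really ends up on its own memory channels / NUMA domains,
# which is not guaranteed without explicit pinning.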

August 9, 2022, 21:57   #11
Guillaume Jolly (trampoCFD), Member, Join Date: Dec 2015, Posts: 63
Hi Marc, I run a small consultancy in Australia that specialises in Star-CCM+, supercomputing and full CFD workflow automation.
I have 16 years of experience with Star-CCM+. We run a supercomputing system (200,000+ cores) which we use mostly internally, but it is available commercially as well. We also sell the latest dual-EPYC workstations. Please disregard our website as it is not up to date; we're currently working on both a complete rebranding and a new website.
I can fill you in on Star-CCM+ scalability, Windows vs Linux, parallel meshing etc.
Please email me at gui@trampocfd.com if you'd like to talk.

Best regards,
Guillaume Jolly
www.trampoCFD.com

August 10, 2022, 04:55   #12
Marc Ricart (cramr5), Member, Join Date: Jul 2022, Posts: 65
Quote:
Originally Posted by trans(sonic)_pride View Post
I’m not really sure, but I’d say it’d be similar to running just one sim, due to the fact that you’d be allocating four memory channels per simulation and 32 cores per simulation, meaning that it would likely perform worse than a dual socket with 32 core processors or even 24 core processors.
Thanks, well, I'll let you know which way we go.

Quote:
Originally Posted by trampoCFD View Post
Hi Marc, I run a small consultancy in Australia that specialises in Star-CCM+, supercomputing and full CFD workflow automation. ...
Thanks, Mr Jolly. Yes, I knew about you, as I had seen some of your other posts. We are in Europe, which could make buying hardware from you a bit complicated (in terms of shipping, customs, etc.), and in terms of cloud computing we still need to decide which way we want to operate. Thanks a lot too.

August 27, 2022, 18:05   #13
Matt (the_phew), Member, Join Date: May 2011, Posts: 43
For price/performance, you can't beat 2P nodes made up of two 32-core EPYC CPUs. Used Rome-generation hardware is a particularly good deal now; I've seen used 7532 CPUs for about $1k. Performance-wise, that chip will be at least 65% as fast as the $6k 7573X (the current top of the heap for CFD until EPYC Genoa launches) for most CFD workloads.
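
Taking those figures at face value (the 65% relative performance and the street prices are the assumptions here), performance per dollar works out heavily in favour of the used chip:

Code:
# Performance per dollar, using the figures from this post as assumptions.
chips = {
    "used EPYC 7532": {"price": 1000, "rel_perf": 0.65},  # ~65% of a 7573X, ~$1k used
    "new EPYC 7573X": {"price": 6000, "rel_perf": 1.00},
}
for name, c in chips.items():
    print(f"{name}: {c['rel_perf'] / c['price'] * 1000:.2f} relative performance per $1k")
# -> roughly 0.65 vs 0.17, i.e. about 4x better price/performance for the used chip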

I too priced out AWS recently, and the only scenario where it works out for our company would be if a customer wanted a ton of heavy simulations finished very quickly, and was willing to pay a big premium for it. If your CFD workload isn't that 'peaky', you always come out ahead by buying your own hardware. But you can't beat AWS for scaling up super fast to meet a big spike in demand.

August 27, 2022, 22:00   #14
Will Kernkamp (wkernkamp), Senior Member, Join Date: Jun 2014, Posts: 316
The Dual Epyc has very good performance:


Quote:
Originally Posted by jd210 View Post
Been wanting to add to this benchmark for some time and finally been able to do so.

OpenFOAM-v2012 running on CentOS 7.9. 2x AMD EPYC 7532 with 1 TB of 3200 MHz RAM. The AMD equivalent of hyper-threading switched off.

# cores   Wall time (s)
-----------------------
      1          643.75
      4          158.48
      8           77.35
     16           43.92
     32           23.68
     48           19.69
     64           15.97

Super-linear speed-up up to 8 cores, and 6.25 iterations/s on 64 cores, showing 2nd-gen Zen is a big step up from 1st.

I have a DL560 Gen8 with four E5-4627 v2 that completes the benchmark in 48 seconds. Slower, but my all-in cost was < $1,000. It has 128 GB of DDR3-1866, a 1 TB NVMe drive, dual 300 GB HDDs and a simple graphics card. The all-up cost of the dual EPYC is around $3,500, I would think, if you build it yourself. So the price/performance of the EPYC is actually lower.
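
For what it's worth, the quoted wall times translate into the parallel efficiencies below, plus a quick cost-times-time cross-check of the two boxes using the prices mentioned in this post:

Code:
# Parallel efficiency from the wall times quoted above (2x EPYC 7532, OpenFOAM-v2012).
wall_time = {1: 643.75, 4: 158.48, 8: 77.35, 16: 43.92, 32: 23.68, 48: 19.69, 64: 15.97}
t1 = wall_time[1]
for n, t in wall_time.items():
    s = t1 / t
    print(f"{n:2d} cores: speedup {s:5.1f}x, efficiency {s / n:4.0%}")

# Crude price*time comparison (smaller is better), using the figures above:
print("DL560 Gen8  :", 48 * 1000, "dollar-seconds")            # ~48,000
print("2x EPYC 7532:", round(15.97 * 3500), "dollar-seconds")  # ~55,900
# The old quad-Xeon edges ahead on cost*time, but the EPYC box finishes ~3x sooner.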


Somebody on the payroll just waiting for the next CFD result can add cost so quickly that the more expensive machine is by far the better choice.

August 29, 2022, 03:33   #15
Marc Ricart (cramr5), Member, Join Date: Jul 2022, Posts: 65
Thanks a lot for the replies everyone.

Also, just out of curiosity, has anyone had experience with services such as Datapacket or Worldstream? Or are those not really suited to CFD simulations?

August 30, 2022, 04:44   #16
Kansas (hamelton), New Member, Join Date: Aug 2022, Posts: 1
You can use Azure, AWS, or a variety of other providers to give users virtual desktops with the required programs either pre-installed or delivered on demand. However, if you want flexibility, security, ease of use, and other benefits, you should choose a virtual desktop solution built for that purpose rather than simply deploying a virtual machine and calling it a virtual desktop.

October 10, 2022, 10:54   #17
Marc Ricart (cramr5), Member, Join Date: Jul 2022, Posts: 65
Quote:
Originally Posted by wkernkamp View Post
The Dual Epyc has very good performance: ...
I have a DL560 Gen8 with four E5-4627 v2 that completes the benchmark in 48 seconds. Slower, but my all-in cost was < $1,000. It has 128 GB of DDR3-1866, a 1 TB NVMe drive, dual 300 GB HDDs and a simple graphics card. ...
When you talk about a "simple graphics card", what do you mean?
We are designing a server and it's not clear how much the GPU is actually used by StarCCM+, especially when you are connected to it remotely.

Thanks

October 10, 2022, 11:01   #18
Alex (flotus1), Super Moderator, Join Date: Jun 2012, Location: Germany, Posts: 3,399
CCM+ recently got updates that allow GPU acceleration. And at least from their marketing slides, it seems like they want to keep working on those capabilities. But my personal opinion on GPU acceleration for commercial CFD solvers still applies here: don't blindly trust the first-party benchmarks, and do your due diligence before buying into it.

I think what wkernkamp meant was unrelated though: a cheap low- to mid-range graphics card is all you need in a workstation for pre- and post-processing.

October 10, 2022, 11:06   #19
Marc Ricart (cramr5), Member, Join Date: Jul 2022, Posts: 65
Quote:
Originally Posted by flotus1 View Post
... a cheap low- to mid-range graphics card is all you need in a workstation for pre- and post-processing.
Yes, I am not talking about GPU acceleration of the solver. I'm just talking about what's needed to render the pre-/post-processing, because I think the first quotation we had for the server did not include any "relevant" GPU, and I think something would be needed.

October 10, 2022, 11:14   #20
Alex (flotus1), Super Moderator, Join Date: Jun 2012, Location: Germany, Posts: 3,399
See chapter 3: General recommendations for CFD hardware [WIP]
