
Used servers for CFD


June 16, 2020, 15:18
Used servers for CFD
  #1
New Member
 
Øystein
Join Date: Jun 2020
Posts: 2
Rep Power: 0
oysteinp is on a distinguished road
I plan to buy a workstation for CFD simulations. The software is FLUENT/OpenFOAM and I expect to have cases with around 10 million polyhedral cells.


I have noticed that it is possible to get used servers at a reasonable price. I am considering two IBM x3850 X5 servers, each with four Xeon E7-8870 10-core 2.40 GHz CPUs, connected together via the QPI link.



Would such a system make sense for the task described above?

June 16, 2020, 16:57
  #2
Super Moderator
 
 
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,399
Rep Power: 46
flotus1 has a spectacular aura about
Used servers can be a cheap entry into higher performance CFD machines.
This one in particular should be pretty capable, at least for parallel workloads. I see they sell on ebay for around 250€ even in Europe, with 4 CPUs but not much else.

There are, however, some drawbacks to re-purposing such a server as a workstation. Those drawbacks are also what makes them so cheap, because barely anyone wants to use these things as servers anymore. A non-exhaustive list:
  • power consumption - these old quad-socket machines draw a lot of power, even when idle. You will notice that on your electricity bill, and they are pretty capable room heaters.
  • noise - thanks to server-grade cooling, the noise these things produce can be deafening. Definitely not something you want sitting under your desk.
  • single-core performance - due to the age and low clock speed of these CPUs, they lag behind more modern machines in single- and lightly-threaded workloads.
  • tinkering and research - I must admit that I never went down that rabbit hole myself. But software and hardware support is definitely something that requires some research.

Edit: it turns out QPI can be used to connect two of these nodes. That is one hell of a shared memory system with 8 CPUs.

Last edited by flotus1; June 17, 2020 at 03:24.

June 17, 2020, 05:46
  #3
New Member
 
Øystein
Join Date: Jun 2020
Posts: 2
Rep Power: 0
oysteinp is on a distinguished road
Thank you for your reply.


Yes, I am aware that there are some drawbacks. I had mainly thought about single-core performance and power usage.


Regarding the large shared memory system, would that create a bottleneck for the CFD simulations?


The other option is to dig deeper into my pocket and buy new hardware. I have also been looking at, for example, a dual EPYC 7302 setup. I know these are quite different systems, and I guess EPYC is in a different league when it comes to single-core performance, but do you have any idea how they compare in parallel performance for CFD?

June 17, 2020, 06:12
  #4
Super Moderator
 
 
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,399
Rep Power: 46
flotus1 has a spectacular aura about
In the pinned thread on this sub-forum, there are some benchmarks for Xeon E7-8870. In OpenFOAM on a single core, it gets less than half the performance of a modern CPU like Epyc Rome.
OpenFOAM benchmarks on various hardware

A shared memory system with 8 CPUs is not necessarily a problem, at least for software like OpenFOAM or Fluent, which are designed to run even on distributed memory systems. These solvers run great on first gen Epyc CPUs, which also have 8 NUMA nodes on a dual-socket system.
But since not every CPU has a direct QPI link to every other CPU, memory accesses can require several hops, and some links have to share bandwidth. This leads to rather high latency and low bandwidth between some pairs of CPUs, but it is probably still better than what you could do with distributed memory.
It is just software that was not designed with NUMA in mind that will struggle, and might need to be pinned to a single NUMA node.
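
To illustrate what pinning to a single NUMA node means in practice, here is a minimal Python sketch. It assumes a Linux system with the usual sysfs layout under /sys/devices/system/node; MPI-based solvers like OpenFOAM and Fluent normally handle binding through their own launchers, so something like this is only relevant for non-NUMA-aware software.

Code:
import os
import glob

def cpus_of_node(node_path):
    # Parse a cpulist like "0-9,40-49" into a set of CPU ids.
    with open(os.path.join(node_path, "cpulist")) as f:
        text = f.read().strip()
    cpus = set()
    for part in text.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

# One directory per NUMA node, e.g. node0 ... node7 on an 8-socket box.
nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
print("Found", len(nodes), "NUMA nodes")

# Restrict this process (and any children it spawns) to the first node only.
os.sched_setaffinity(0, cpus_of_node(nodes[0]))
print("Now restricted to CPUs:", sorted(os.sched_getaffinity(0)))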

A dual Epyc 7302 will probably still be faster when running on all cores, compared to 8 of these CPUs. Maybe not by much, but the higher convenience factor would draw me towards the more modern system, despite the higher initial cost.

October 28, 2023, 22:38
  #5
Senior Member
 
Will Kernkamp
Join Date: Jun 2014
Posts: 316
Rep Power: 12
wkernkamp is on a distinguished road
In the US, there are cheap Xeon v3/v4 servers. The v4 Xeons are cheap and offer DDR4-2400 memory, while the E7-8870 offers DDR3-1333. Per processor, that is almost twice the bandwidth. Memory bandwidth is an important factor determining solution speed. Make sure that your proposed servers allow four memory channels per processor. My Dell R810 did not.
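
As a quick sanity check on the "almost twice" figure, here is a small back-of-the-envelope Python sketch. It assumes four channels per processor and theoretical peak transfer rates (8 bytes per transfer); sustained bandwidth in a real solver will of course be lower.

Code:
def peak_bandwidth_gb_s(mt_per_s, channels, bytes_per_transfer=8):
    # Theoretical peak memory bandwidth per processor in GB/s.
    return mt_per_s * bytes_per_transfer * channels / 1000.0

ddr3_1333 = peak_bandwidth_gb_s(1333, 4)  # E7-8870 style: 4 x DDR3-1333
ddr4_2400 = peak_bandwidth_gb_s(2400, 4)  # Xeon v4 style: 4 x DDR4-2400

print("DDR3-1333, 4 channels: %.1f GB/s per processor" % ddr3_1333)  # ~42.7
print("DDR4-2400, 4 channels: %.1f GB/s per processor" % ddr4_2400)  # ~76.8
print("Ratio: %.2fx" % (ddr4_2400 / ddr3_1333))                      # ~1.80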

If you have all 16 channels operational, you should get decent performance from your machines. I ended up getting good results from a small cluster, but more than four machines blew my fuses. The room will warm up. I live in California. Cold climate recommended, haha.
