CFD Online Discussion Forums - Hardware
Used servers for CFD (https://www.cfd-online.com/Forums/hardware/228002-used-servers-cfd.html)

oysteinp June 16, 2020 15:18

Used servers for CFD
 
I plan to buy a workstation for CFD simulations. The software is FLUENT/OpenFOAM, and I expect to have cases with around 10 million polyhedral cells.

I have noticed that it is possible to get used servers at a reasonable price. I am considering two IBM x3850 X5 servers, each with four Xeon E7-8870 10-core 2.40 GHz CPUs, connected together with a QPI link.

Would such a system make sense for the above-mentioned task?

flotus1 June 16, 2020 16:57

Used servers can be a cheap entry into higher performance CFD machines.
This one in particular should be pretty capable, at least for parallel workloads. I see they sell on eBay for around 250€ even in Europe, with 4 CPUs but not much else.

There are, however, some drawbacks to re-purposing such a server as a workstation, which is also what makes them so cheap: barely anyone would use these things as servers anymore. A non-exhaustive list:
  • power consumption - these old quad-socket machines draw a lot of power, even when idle. You will notice that on your electricity bill, and they are pretty capable room heaters (see the rough estimate after this list).
  • noise - thanks to server-grade cooling, the noise these things produce can be deafening. Definitely not something you want sitting under your desk.
  • single-core performance - due to the age and low clock speed of these CPUs, they lag behind more modern machines in single- and lightly-threaded workloads.
  • tinkering and research - I must admit that I never went down that rabbit hole myself. But software and hardware support is definitely something that requires some research.
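
As a rough back-of-the-envelope estimate for the power point (130 W is Intel's rated TDP for the E7-8870; the electricity price is just a placeholder, plug in your local rate):

  8 CPUs x 130 W ≈ 1040 W for the CPUs alone under load
  add RAM, fans and PSU losses, and both nodes can easily pull 1.2-1.5 kW
  1.5 kW x 8 h/day x 0.30 €/kWh ≈ 3.60 € per simulation day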

Edit: turns out, QPI can be used to connect two of these nodes. That is one hell of a shared memory system with 8 CPUs :eek:

oysteinp June 17, 2020 05:46

Thank you for your reply.

Yes, I am aware that there are some drawbacks. I had mainly thought about single-core performance and power usage.

Regarding the large shared memory system, would that create a bottleneck for the CFD simulations?

The other option is to dig deeper into my pockets and buy new hardware. I have also been looking at e.g. a dual Epyc 7302 system. I know these are quite different systems, and I guess Epyc is in a different league when it comes to single-core performance, but do you have any idea how they compare in parallel performance for CFD?

flotus1 June 17, 2020 06:12

In the pinned thread on this sub-forum, there are some benchmarks for Xeon E7-8870. In OpenFOAM on a single core, it gets less than half the performance of a modern CPU like Epyc Rome.
https://www.cfd-online.com/Forums/ha...-hardware.html

A shared memory system with 8 CPUs is not necessarily a problem, at least for software like OpenFOAM or Fluent, which are designed to run even on distributed memory systems. These solvers run great on first gen Epyc CPUs, which also have 8 NUMA nodes on a dual-socket system.
But since not every CPU has a direct QPI link to every other CPU, several hops may be required, and some links have to share bandwidth. This leads to rather high latency and low bandwidth between some CPU pairs, but probably still better than what you could achieve with distributed memory.
It is just software that was not designed with NUMA in mind that will struggle, and might need to be pinned to a single NUMA node.
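
If you do need to pin something, a minimal sketch on Linux (assuming the numactl package is installed; the node number, core count and tool name are illustrative):

Code:

  # show the NUMA topology the OS sees
  numactl --hardware

  # run a NUMA-unaware tool on CPU node 0, with memory allocated on node 0 only
  numactl --cpunodebind=0 --membind=0 ./some_serial_tool

  # MPI solvers like OpenFOAM are better served by letting Open MPI place the ranks
  mpirun --bind-to core --map-by numa -np 80 simpleFoam -parallel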

Dual Epyc 7302 will probably still be faster running all cores compared to 8 of these CPUs. Maybe not by much, but the higher convenience factor would draw me towards the more modern system, despite higher initial cost.

wkernkamp October 28, 2023 22:38

In the US, there are cheap Xeon v3/v4 servers. The v4 Xeons are cheap and offer DDR4-2400 memory, while the E7-8870 offers DDR3-1333. So per processor, that is almost twice the bandwidth. Bandwidth is an important factor determining solution speed. Make sure that your proposed servers with four processors actually run all four memory channels per CPU; my Dell R810 did not.
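
For reference, the peak theoretical numbers behind that "almost twice" (8 bytes per channel per transfer; real-world bandwidth will be lower, especially on the E7 platform with its memory buffer chips):

  DDR3-1333: 1333 MT/s x 8 B ≈ 10.7 GB/s per channel, ~42.7 GB/s per 4-channel CPU
  DDR4-2400: 2400 MT/s x 8 B ≈ 19.2 GB/s per channel, ~76.8 GB/s per 4-channel CPU
  76.8 / 42.7 ≈ 1.8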

If you have all 16 channels operational, you should get decent performance from your machines. I ended up getting good results from a small cluster, but more than four machines blew my fuses. The room will warm up. I live in California; cold climate recommended, haha.

