CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   Hardware (https://www.cfd-online.com/Forums/hardware/)
-   -   GPU acceleration in mainstream CFD solvers (https://www.cfd-online.com/Forums/hardware/241334-gpu-acceleration-mainstream-cfd-solvers.html)

FliegenderZirkus February 21, 2022 07:59

GPU acceleration in mainstream CFD solvers
 
This week a new version of Starccm+ will be released, introducing GPU acceleration, quoting:
Quote:

On the physics side, it supports both steady and unsteady, constant density flows using the segregated solver. GPU-based calculations are compatible with most turbulence models, including RANS, DDES and Reynolds Stress Models.
https://blogs.sw.siemens.com/simcenter/gpu-acceleration-for-cfd-simulation/

A similar announcement came out last year from ANSYS, claiming GPU acceleration now being available in Fluent:
https://www.nvidia.com/en-us/data-ce.../ansys-fluent/
Both articles mention that the GPUs are used in the AMG solver (NVIDIA's AmgX), so I'm guessing both are built on the same technology? Has anyone tried this in the latest Fluent release? I'd be curious to learn how well it works in practice and whether this really will be the "next big thing" the marketing people sell it as.

flotus1 February 21, 2022 08:39

GPU acceleration was the "next big thing" 10-15 years ago. Fluent has had it for quite some time now.
The fact that it still is not the normal mode of operation for most users should tell you everything you need to know about it. It's great when it works for your requirements, but the topic is littered with pitfalls, and the benefit can be very situational.

FliegenderZirkus February 22, 2022 03:33

Hmm, I was afraid it would be like that. I just found your thread with the benchmarks, and it indeed doesn't look that attractive:

https://www.cfd-online.com/Forums/ha...-fluent-3.html
But still, hasn't something changed since then? Both press releases talk about 100-million-cell meshes. Siemens even explicitly says the acceleration works well with the segregated solver, whereas, as I understand it, less time is spent in the linear solver there compared to the coupled approach. Where is the catch? I'd still be curious to hear from someone who has actually tried using an A100 card on a 100-million-cell real-life case.
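As a back-of-the-envelope illustration of why the fraction of run time spent in the linear solver matters here: if only the AMG solve is offloaded, Amdahl's law caps the overall speedup. A minimal sketch (the fractions and the 10x factor below are hypothetical numbers for illustration, not taken from either press release):

```python
def amdahl_speedup(offloaded_fraction, accel_factor):
    """Overall speedup when only a fraction of the wall time is accelerated
    by accel_factor; the rest runs at the original speed (Amdahl's law)."""
    return 1.0 / ((1.0 - offloaded_fraction) + offloaded_fraction / accel_factor)

# Hypothetical: a segregated run spending 30% of its time in AMG,
# with the GPU making that part 10x faster, gains only ~1.37x overall.
print(round(amdahl_speedup(0.3, 10.0), 2))

# A coupled run spending 80% of its time in the linear solver would
# gain ~3.57x under the same assumptions.
print(round(amdahl_speedup(0.8, 10.0), 2))
```

So if the segregated solver really spends less time in the linear solver, that would make an AMG-only offload less attractive, not more, unless other parts of the algorithm are ported to the GPU as well.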

flotus1 February 22, 2022 05:51

A few things have changed. Some for the better, some not so much.

Apart from the expected growth in performance and efficiency, onboard memory did increase quite a bit. So you have better chances of your model actually fitting into memory.
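A rough sanity check for whether a case fits into onboard memory. The per-cell byte count below is an assumed rule-of-thumb figure, not a measured value; actual memory use depends heavily on the solver, turbulence model and discretization:

```python
def case_memory_gb(n_cells, bytes_per_cell=2000):
    """Crude memory estimate for a finite-volume case.
    ~1-2 kB per cell is an assumed ballpark for double-precision
    solvers (solver- and model-dependent)."""
    return n_cells * bytes_per_cell / 1e9

# A 100-million-cell case at ~2 kB/cell needs on the order of 200 GB,
# i.e. several 40/80 GB A100s rather than a single card.
print(case_memory_gb(100e6))  # 200.0
```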
On the other hand, the focus of accelerator hardware development has shifted towards AI, which means GPUs with decent DP performance are few and far between. I do not see this trend slowing down any time soon. A lot of die space will be reserved for features that don't help with general compute tasks.
If I am not mistaken, the only somewhat recent choices for general-purpose accelerators are the Tesla V100, GV100 and A100. Only the GV100 can double as a regular graphics card in a workstation. This means GPU acceleration for CFD is shifting even more towards datacenters. Not inherently a bad thing, but it raises the entry bar: you won't just plug a GPU into an existing workstation and be good to go.

Just to be clear, I am not dismissing the technology itself. When it works, it works, both for reducing run times and for increasing power efficiency. And specifically for commercial CFD codes: offsetting license costs. But within the scope of this forum, not many people come here to discuss which accelerators to use for their next datacenter. And most of the time, the research required (https://www.cfd-online.com/Forums/ha...tml#post796969 chapter 3b) to make GPU acceleration viable goes a bit over the heads of people asking for hardware advice here. Which is totally understandable; the marketing around this topic does not do it any favors, in my opinion.

Carter83 February 23, 2022 03:57

In ANSYS, the numbers I saw presented (a few years ago) suggested that the specialised GPUs had performance similar to a comparably priced CPU. A CPU from the previous generation, though. And not in all cases.
It actually made some sense, because the GPU runs were possible with the same licence, while another CPU would require additional licences. Other than that, I cannot think of any reason for the GPU. Adding another CPU is just always the more reliable solution.

trampoCFD February 23, 2022 20:37

Hi,
I have access to a compute node with four V100s connected through 200 Gbps networking. I could run whatever benchmark test you need. Please send me a PM if you're interested.
