
How to accelerate explicit CFD solvers? 

January 22, 2021, 03:10 
How to accelerate explicit CFD solvers?

#1 
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 
Hi everyone,
I would like to know how to accelerate my explicit compressible Euler solver. So far I have:
- converted the serial problem to a parallel one that can be solved using OpenMP (can we group multiple structured blocks together to speed up calculations?)
- used local timestepping
- optimized my code and data structures to use SIMD vectorization
What more can we do? I know implicit CFD solvers can be accelerated by using multigrid methods, by using direct solvers at the lowest level of the V or W cycle, and by ramping up the CFL values, but I don't know much about explicit solvers. Links to or names of papers or source code would be helpful.
Thanks and regards
~sayan
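For concreteness, local timestepping for a steady-state run boils down to computing a separate stable dt per cell from the local wave speed, instead of taking the global minimum. A minimal 1D sketch (the array names and values below are my own illustration, not from any particular code):

```python
import numpy as np

def local_timesteps(u, c, dx, cfl):
    """Per-cell timestep dt_i = CFL * dx_i / (|u_i| + c_i) for a 1D Euler
    solver.  Each cell advances at its own maximum stable step, which
    destroys time accuracy but speeds up convergence to steady state."""
    return cfl * dx / (np.abs(u) + c)

# Larger cells (and slower waves) get proportionally larger steps.
u  = np.array([100.0, 200.0,  50.0])   # cell velocities [m/s]
c  = np.array([340.0, 340.0, 340.0])   # local sound speeds [m/s]
dx = np.array([0.01,  0.02,  0.01])    # cell widths [m]
dt = local_timesteps(u, c, dx, cfl=0.8)
```

Since each dt_i is only used to drive its own cell toward R(U) = 0, the intermediate "time levels" are inconsistent between cells, which is exactly why this only makes sense for steady problems.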

January 22, 2021, 04:06 

#2 
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 
The best I can come up with is to use a geometric multigrid method and modify it for my 2D structured solver. That will work.
But is it possible to ramp up the CFL values somehow in explicit schemes? If we can, the performance improvement would be huge.
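For what it's worth, CFL "ramping" in practice is usually just a heuristic schedule. A hedged sketch of one common variant (grow while the residual falls, cut back when it rises; the function name and all constants are my own illustration):

```python
def ramp_cfl(cfl, res, res_prev, cfl_max=5.0, cfl_min=0.5,
             growth=1.05, cutback=0.5):
    """Residual-guided CFL schedule: multiply CFL by `growth` while the
    residual is non-increasing, cut it back by `cutback` on a residual
    rise.  For a purely explicit scheme the stability bound still caps
    the usable CFL near its theoretical limit; ramping mainly pays off
    combined with residual smoothing or implicit stages."""
    if res <= res_prev:
        return min(cfl * growth, cfl_max)
    return max(cfl * cutback, cfl_min)
```

Usage would be one call per iteration, e.g. `cfl = ramp_cfl(cfl, res_now, res_old)`.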

January 22, 2021, 04:38 

#3 
Senior Member
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,849
Rep Power: 73 
Why are you using an explicit time integration if you are interested only in the steady-state solution?


January 22, 2021, 04:43 

#4  
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 
Quote:
I only know how to code good explicit solvers for now. They're easy. Also, I will use it to solve unsteady Euler flows in the future, so the code can be reused after some modifications. (EDIT: the m-stage local timestepping scheme uses explicit time integration even for a steady-state solution.) PS: I know a little bit about implicit solvers, but I want to progress step by step.

January 22, 2021, 05:31 

#5 
Senior Member

If your interest is in the steady state, there seems to be nothing else to do.
Of course, there might be a ton of tricks on the programming side, but I guess that's not what you are asking for (and you probably know them better than us). I once saw a method that parallelized in time besides space but, honestly, it seemed extremely complex and too problem-dependent to be a good suggestion for any production code.

January 22, 2021, 05:44 

#6  
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 
Quote:
Has there been no research done on CFL acceleration and stabilization for explicit schemes?

January 22, 2021, 05:56 

#7  
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 
Quote:
Fluent apparently uses something called implicit residual smoothing to accelerate the solution even for an explicit solver. I just found out about it right now, though I don't yet understand how it works. https://www.afs.enea.it/project/nept...idualSmoothing Do you understand how it works?
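The idea behind implicit residual smoothing (it goes back to Jameson's schemes) is to replace the explicit residual R by a smoothed R-bar satisfying (1 - eps*delta^2) R-bar = R, which enlarges the stability region and permits a larger CFL. A minimal 1D sketch using a couple of Jacobi sweeps for the smoothing system (my own toy illustration, not Fluent's actual implementation; boundary residuals are left unsmoothed for simplicity):

```python
import numpy as np

def smooth_residual(R, eps=0.5, n_jacobi=2):
    """Implicit residual smoothing in 1D: approximately solve
    (1 - eps * delta^2) Rb = R with a few Jacobi sweeps,
    Rb_i = (R_i + eps * (Rb_{i-1} + Rb_{i+1})) / (1 + 2*eps)."""
    Rb = R.copy()
    for _ in range(n_jacobi):
        Rn = Rb.copy()
        Rn[1:-1] = (R[1:-1] + eps * (Rb[:-2] + Rb[2:])) / (1.0 + 2.0 * eps)
        Rb = Rn
    return Rb

# A pure high-frequency residual gets heavily damped...
R = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
Rs = smooth_residual(R)
# ...while a constant (smooth) residual passes through unchanged.
```

Note the key property: the operator damps the high-frequency content of the residual (which sets the explicit stability limit) while leaving smooth content essentially intact, so the update direction toward steady state is preserved.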

January 22, 2021, 06:08 

#8  
Senior Member

Quote:
It just works the way it is described, by taking a few iterations of Eq. 18.516. I can't tell you why it works, though, because I never studied or used it, nor do I have it in my code.

January 22, 2021, 06:14 

#9 
Senior Member
Join Date: Oct 2011
Posts: 242
Rep Power: 17 
Hello,
As far as I know, the classical techniques for steady-state explicit solvers are (ordered, I believe, by potential gain):
- multigrid
- local time stepping
- residual smoothing
- enthalpy damping
They are introduced in many textbooks and papers, for example the reference below for structured and unstructured FV codes:
Computational Fluid Dynamics: Principles and Applications, Blazek
Multigrid for structured grids, with some tuning and caution on the projection operators, is probably a good candidate and is relatively easy to implement. Multigrid for unstructured meshes or AMR is another story though...
Anyway, as stated above, I believe these acceleration techniques won't compete against a carefully implemented implicit scheme. A first-order BDF timestep in the limit dt -> infinity may be seen as a Newton-Raphson iteration of your system R(U) = 0, so you theoretically get quadratic convergence.
For unsteady flows, physics comes into play. Your dt may then also be dictated by the smallest temporal scale you need to capture for a relevant flow analysis; the use of a large dt is not solely a numerical stability question.
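Since multigrid tops that list, here is how small the whole mechanism is in its simplest form: a 1D two-grid cycle for the model problem -u'' = f (my own toy illustration; a real Euler solver would use the nonlinear FAS variant described in Blazek's book):

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoother for -u'' = f with zero Dirichlet BCs."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h, pre=3, post=3):
    """One two-grid cycle: pre-smooth, restrict the residual, solve the
    coarse error equation exactly (the 'direct solver at the lowest
    level'), prolong the correction, post-smooth."""
    u = jacobi(u.copy(), f, h, pre)
    r = residual(u, f, h)
    # Full-weighting restriction: coarse node j sits on fine node 2j.
    nc = (len(u) - 1) // 2 + 1
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    # Exact coarse solve of A_c e_c = r_c on the grid with spacing 2h.
    H, m = 2 * h, nc - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (H * H)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # Linear prolongation of the coarse correction back to the fine grid.
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return jacobi(u + e, f, h, post)

# -u'' = 1 on [0,1], u(0) = u(1) = 0; exact nodal solution is x(1-x)/2.
x = np.linspace(0.0, 1.0, 9)
f = np.ones_like(x)
u = np.zeros_like(x)
for _ in range(10):
    u = two_grid(u, f, 1.0 / 8.0)
```

The smoother kills the high-frequency error, the coarse correction kills the smooth error the smoother barely touches; that complementarity is the entire trick, and recursing on the coarse solve turns this into a V-cycle.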

January 22, 2021, 07:03 

#10  
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 
Quote:
Wait... when we do the smoothing operations in multigrid cycles to remove the high-frequency components, we're literally doing a Laplacian smoothing operation on the error from the finer mesh level. Isn't that exactly like the implicit residual smoothing operation mentioned in ANSYS's doc?

January 22, 2021, 07:11 

#11  
Senior Member

Quote:
Also, you don't typically do Jacobi there, but at least SGS, and it's not Laplacian smoothing: it is still actually solving a system of equations, though not to convergence. Laplacian smoothing is more like explicitly iterating a steady diffusion equation. There are practical similarities because of the methods employed in certain circumstances, but that's all. Don't take me wrong, but I have to ask: have you ever coded a Jacobi or SOR iterative solver, even just for homework?

January 22, 2021, 07:18 

#12  
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 
Quote:
Yes. SOR, for an advancing-front triangular grid generator. The solution of a Laplacian-type equation gave the edge lengths to use for each advancing layer. SOR was fast.

January 22, 2021, 07:24 

#13 
Senior Member


January 22, 2021, 07:30 

#14  
Senior Member
andy
Join Date: May 2009
Posts: 301
Rep Power: 18 
Quote:
In CFD we usually want to solve a set of nonlinear equations, and the significant/dominant physics affecting the stability of the scheme can differ from problem to problem. What you would do to optimise for low-speed aeroacoustics, combustion, high swirl, transonic flow, etc. will be different. It isn't general, although commercial companies obviously seek to get their codes to handle as many types of flows as possible, albeit not necessarily optimally.
Seeking to optimise code performance in the early stages of development is rarely cost effective, because as the project evolves changes will almost certainly lead to it becoming suboptimal, requiring it to be repeated. If performed at all as an independent task, it is usually done in the later stages of development, after most of the main decisions influencing it have been taken. Obviously efficiency of computation shouldn't be wholly disregarded when considering what to adopt and code, but implementing straightforward complete schemes (i.e. ones that can handle all the complexity of the target problem and not just a subset) and then testing to gather focused evidence on which to base refinements and optimisations of the complete scheme will be the path followed by most successful projects.
Not sure I have wholly addressed the question asked, and perhaps I have misunderstood your objectives, but the focus of your attention, like people obsessing over particular computer languages, may be distracting you from progressing with CFD as a whole, assuming that is your main objective, which it may not be.

January 22, 2021, 07:49 

#15  
Senior Member
Sayan Bhattacharjee
Join Date: Mar 2020
Posts: 495
Rep Power: 8 
Quote:
Thanks andy. I've also seen that some commercial codes prefer to go for a wide range of uses rather than peak performance. It makes sense when they consider the business profits; nothing wrong with that. As for me, I don't know much CFD yet, so I try to do one thing at a time, but try to do it well. I only like supersonic/hypersonic flows, so I'm focused on those; maybe in the far future I will add chemical reactions. I know my methods are highly unconventional, but given my background, they work for me. Being able to convert the problem to use OpenMP instead of MPI was one of the cases where not following common practice (using MPI) helped.

January 22, 2021, 10:44 

#16  
Senior Member
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,285
Rep Power: 34 
Quote:
In CFD, performance is not this simple. To illustrate the point: I had a turbulent pipe test case. When we first ran this test case with Wildkatze and the Spalart-Allmaras model, it took 1500 iterations to converge. Compared to this, STAR-CCM+ was converging the same case in 400 to 500 iterations. Today I have lots more things added to the solver, and in terms of operations the current version does around 30% more work, yet the same test case now converges in fewer than 200 iterations. Around 200 is the iteration count for K-Epsilon and K-Omega too on the same test case. So even with a higher operation count, the solver is now more efficient. These things change on a solver-by-solver basis and also depend on the algorithm; optimizing the operation count is not always the best way.


