CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
Main CFD Forum (https://www.cfd-online.com/Forums/main/)
How to accelerate explicit CFD solvers? (https://www.cfd-online.com/Forums/main/233338-how-accelerate-explicit-cfd-solvers.html)

aerosayan January 22, 2021 02:10

How to accelerate explicit CFD solvers?
 
Hi everyone,
I would like to know how to accelerate my explicit compressible Euler solver.


I have done:

- conversion of the serial problem to a parallel problem that can be solved using OpenMP : https://www.cfd-online.com/Forums/ma...culations.html
- local time-stepping (a minimal sketch is shown below)
- optimizing my code and data structures to use SIMD vectorization
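
For context, here is a minimal sketch of how the local time-stepping and OpenMP pieces fit together in a cell-centered 2D solver. The Cell struct and its field names are hypothetical, not my actual code:

Code:
#include <cmath>
#include <cstddef>
#include <vector>

struct Cell {
    double u, v, c;   // velocity components and local speed of sound
    double dx, dy;    // local cell sizes
    double dt;        // per-cell (local) time step
};

// Local time-stepping: each cell advances with its own maximum stable dt
// instead of the global minimum. The transient is no longer time-accurate,
// but convergence to the steady state is much faster.
void compute_local_dt(std::vector<Cell>& cells, double cfl)
{
    #pragma omp parallel for
    for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)cells.size(); ++i) {
        Cell& k = cells[i];
        // Inviscid spectral radii of the 2D Euler equations in x and y.
        const double lam_x = (std::abs(k.u) + k.c) / k.dx;
        const double lam_y = (std::abs(k.v) + k.c) / k.dy;
        k.dt = cfl / (lam_x + lam_y);
    }
}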


What more can we do?


- I know implicit CFD solvers can be accelerated by using multigrid methods, by using direct solvers at the lowest level of the V or W cycle, and by ramping up the CFL values.

I don't know much about explicit solvers.

Links to or names of papers or source code would be helpful.


Thanks and regards
~sayan

aerosayan January 22, 2021 03:06

The best I can come up with is to use a geometric multigrid method and adapt it to my 2D structured solver. That will work.

But is it possible to ramp up the CFL values somehow in explicit schemes?

If we can ramp up the CFL values in explicit solvers, the performance improvements would be huge. :)

FMDenaro January 22, 2021 03:38

Why are you using an explicit time integration if you are interested only in the steady-state solution?

aerosayan January 22, 2021 03:43

Quote:

Originally Posted by FMDenaro (Post 794088)
Why are you using an explicit time integration if you are interested only in the steady-state solution?


I only know how to code good explicit solvers for now. They're easy. :)
Also, I will use it to solve unsteady Euler flow in the future, so the code can be reused after some modifications.

(EDIT: Also, the multi-stage local time-stepping scheme uses explicit time integration even for a steady-state solution.)

PS: I know a little bit about implicit solvers, but I want to progress step by step. :)

sbaffini January 22, 2021 04:31

If your interest is in the steady state or, anyway, steady flows, there seems to be nothing else to do.

Of course, there might be a ton of tricks on the programming side, but I guess that's not what you are asking for (and you probably know better than us).

I once saw a method that parallelized in time besides space. But, honestly, it seemed extremely complex and too problem-dependent to be a good suggestion for any production code.

aerosayan January 22, 2021 04:44

Quote:

Originally Posted by sbaffini (Post 794094)
If your interest is in the steady state or, anyway, steady flows, there seems to be nothing else to do.

Of course, there might be a ton of tricks on the programming side, but I guess that's not what you are asking for (and you probably know better than us).

I once saw a method that parallelized in time besides space. But, honestly, it seemed extremely complex and too problem-dependent to be a good suggestion for any production code.


Has there been no research done on CFL acceleration and stabilization for explicit schemes? :confused:

aerosayan January 22, 2021 04:56

Quote:

Originally Posted by sbaffini (Post 794094)
If your interest is in the steady state or, anyway, steady flows, there seems to be nothing else to do.

Of course, there might be a ton of tricks on the programming side, but I guess that's not what you are asking for (and you probably know better than us).

I once saw a method that parallelized in time besides space. But, honestly, it seemed extremely complex and too problem-dependent to be a good suggestion for any production code.


Fluent apparently uses something called implicit residual smoothing to accelerate the solution even for an explicit solver. I just found out about it, but I don't understand how it works yet.

https://www.afs.enea.it/project/nept...idualSmoothing

Do you understand how it works?

sbaffini January 22, 2021 05:08

Quote:

Originally Posted by aerosayan (Post 794102)
Fluent apparently uses something called implicit residual smoothing to accelerate the solution even for an explicit solver. I just found out about it, but I don't understand how it works yet.



https://www.afs.enea.it/project/nept...idualSmoothing


Do you understand how it works?

Well, that actually is an implicit approach, in the sense that you perform Jacobi-like iterations. Yet, you don't have to store matrix coefficients and so on.

It just works the way it is described, by taking a few iterations of Eq. 18.5-16.

I can't say why it works, because I never studied or used it, nor do I have it in my code.
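
From the linked page, the update amounts to a few Jacobi sweeps over the residual field, solving Rbar_i - eps * sum_j (Rbar_j - Rbar_i) = R_i approximately. A minimal sketch follows; the neighbor list stands in for a hypothetical mesh connectivity, and eps around 0.5 with two sweeps are the values usually quoted (e.g. in Blazek):

Code:
#include <cstddef>
#include <vector>

// Implicit residual smoothing via Jacobi sweeps. The smoothed residual
// implicitly couples neighboring cells, which allows a larger CFL number
// (roughly 2-3x) for the same explicit scheme.
void smooth_residuals(std::vector<double>& R,
                      const std::vector<std::vector<int>>& nbr,
                      double eps = 0.5, int sweeps = 2)
{
    std::vector<double> Rbar = R;           // initial guess: raw residual
    std::vector<double> Rnew(R.size());
    for (int m = 0; m < sweeps; ++m) {
        for (std::size_t i = 0; i < R.size(); ++i) {
            double sum = 0.0;
            for (int j : nbr[i]) sum += Rbar[j];
            // Jacobi update: Rbar_i = (R_i + eps * sum_j Rbar_j) / (1 + eps * N_i)
            Rnew[i] = (R[i] + eps * sum) / (1.0 + eps * nbr[i].size());
        }
        Rbar.swap(Rnew);
    }
    R = Rbar;                               // overwrite with smoothed residual
}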

naffrancois January 22, 2021 05:14

Hello,

As far as I know, these are the classical techniques for steady-state explicit solvers (ordered, I believe, by potential gain):
- multigrid
- local time stepping
- residual smoothing
- enthalpy damping

They are introduced in many textbooks and papers, for example the reference below for structured and unstructured FV codes:
Computational Fluid Dynamics: Principles and Applications, Blazek

Multigrid for structured grids with some tuning and caution on the projection operators is probably a good candidate and is relatively easy to implement. Multigrid for unstructured meshes or AMR is another story though...
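
To make the structure concrete, a bare-bones V-cycle skeleton looks roughly like this; smooth, restrict_, prolongate and solve_coarsest are placeholders for the solver's own smoother and transfer operators, so this is only a sketch:

Code:
#include <cstddef>
#include <vector>

struct Level { /* grid data: solution, right-hand side, residual */ };

void smooth(Level&);                             // e.g. a few explicit RK sweeps
void restrict_(const Level& fine, Level& coarse);
void prolongate(const Level& coarse, Level& fine);
void solve_coarsest(Level&);                     // e.g. a direct solve

// Geometric multigrid V-cycle; level 0 is the finest grid. The smoother
// kills high-frequency error on each level, while the coarse-grid
// correction handles the low frequencies the smoother barely touches.
void v_cycle(std::vector<Level>& levels, std::size_t l)
{
    smooth(levels[l]);                           // pre-smoothing
    if (l + 1 < levels.size()) {
        restrict_(levels[l], levels[l + 1]);     // residual to coarser grid
        v_cycle(levels, l + 1);                  // coarse-grid correction
        prolongate(levels[l + 1], levels[l]);    // correction back to fine grid
        smooth(levels[l]);                       // post-smoothing
    } else {
        solve_coarsest(levels[l]);
    }
}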

Anyway, as stated above, I believe these acceleration techniques won't compete with a carefully implemented implicit scheme. A first-order BDF time step in the limit dt -> inf may be seen as a Newton-Raphson iteration on your system R(U) = 0, so you theoretically get quadratic convergence.
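
To spell that out: a backward-Euler (BDF1) step, linearized about $U^n$, solves

$$\Big(\frac{V}{\Delta t} I + \frac{\partial R}{\partial U}\Big)\,\Delta U = -R(U^n), \qquad U^{n+1} = U^n + \Delta U,$$

and as $\Delta t \to \infty$ the transient term vanishes, leaving exactly the Newton-Raphson update $\frac{\partial R}{\partial U}\,\Delta U = -R(U^n)$ for $R(U) = 0$, hence the theoretical quadratic convergence.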

For unsteady flows, physics comes into play. Your dt may then also be dictated by the smallest temporal scale you need to capture for a relevant flow analysis. The choice of dt is then not solely a numerical stability constraint.

aerosayan January 22, 2021 06:03

Quote:

Originally Posted by sbaffini (Post 794103)
Well, that actually is an implicit approach, in the sense that you perform Jacobi-like iterations. Yet, you don't have to store matrix coefficients and so on.

It just works the way it is described, by taking a few iterations of Eq. 18.5-16.

I can't say why it works, because I never studied or used it, nor do I have it in my code.


Wait... when we do the smoothing operations in multigrid cycles to remove the high-frequency signals, we're literally doing a Laplacian smoothing operation on the error from the finer mesh level.

Isn't that exactly like the implicit residual smoothing operation mentioned in ANSYS's docs?

sbaffini January 22, 2021 06:11

Quote:

Originally Posted by aerosayan (Post 794114)
Wait... when we do the smoothing operations in multigrid cycles to remove the high-frequency signals, we're literally doing a Laplacian smoothing operation on the error from the finer mesh level.

Isn't that exactly like the implicit residual smoothing operation mentioned in ANSYS's docs?

Well, not exactly, because in that case you do it on an equation whose matrix entries come from the discretization and the agglomeration of cells.

Also, you don't typically do Jacobi there, but at least SGS, and it's not Laplacian smoothing; you are still actually solving a system of equations, though not to convergence.

Laplacian smoothing is more like explicitly iterating a steady diffusion equation. There are practical similarities because of the methods employed in certain circumstances, but that's all.
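
In code terms, explicit Laplacian smoothing is just a few relaxation passes toward the neighbor average, with nothing being solved. A sketch, reusing the same hypothetical neighbor-list connectivity as the residual-smoothing example above:

Code:
#include <cstddef>
#include <vector>

// Explicit Laplacian smoothing: each pass blends a value with its neighbor
// average, i.e. one explicit iteration of a steady diffusion equation.
void laplacian_smooth(std::vector<double>& x,
                      const std::vector<std::vector<int>>& nbr,
                      double omega = 0.5, int passes = 2)
{
    std::vector<double> xs(x.size());
    for (int p = 0; p < passes; ++p) {
        for (std::size_t i = 0; i < x.size(); ++i) {
            if (nbr[i].empty()) { xs[i] = x[i]; continue; }
            double avg = 0.0;
            for (int j : nbr[i]) avg += x[j];
            avg /= (double)nbr[i].size();
            xs[i] = (1.0 - omega) * x[i] + omega * avg;   // explicit update
        }
        x.swap(xs);
    }
}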

Don't take this the wrong way, but I have to ask: have you ever coded a Jacobi or SOR iterative solver, even just for homework?

aerosayan January 22, 2021 06:18

Quote:

Originally Posted by sbaffini (Post 794117)
Don't take this the wrong way, but I have to ask: have you ever coded a Jacobi or SOR iterative solver, even just for homework?


Yes. SOR, for an advancing-front triangular grid generator.
The solution of a Laplacian-type equation gave the edge lengths to use for each advancing layer.
SOR was fast.
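
For reference, an SOR sweep for A x = b is just Gauss-Seidel with over-relaxation. A minimal dense-matrix sketch (not my grid-generator code, just the bare update):

Code:
#include <cstddef>
#include <vector>

// One SOR sweep for A x = b; omega in (0, 2), omega = 1 gives Gauss-Seidel.
void sor_sweep(const std::vector<std::vector<double>>& A,
               const std::vector<double>& b,
               std::vector<double>& x, double omega)
{
    const std::size_t n = b.size();
    for (std::size_t i = 0; i < n; ++i) {
        double sigma = 0.0;
        for (std::size_t j = 0; j < n; ++j)
            if (j != i) sigma += A[i][j] * x[j];       // uses latest values
        const double gs = (b[i] - sigma) / A[i][i];    // Gauss-Seidel value
        x[i] = (1.0 - omega) * x[i] + omega * gs;      // over-relaxation
    }
}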

sbaffini January 22, 2021 06:24

Quote:

Originally Posted by sbaffini (Post 794117)
Laplacian smoothing is more like explicitly iterating a steady diffusion equation.

Of course, especially with reference to the implicit residual smoothing, I actually meant IMPLICITLY iterating.

andy_ January 22, 2021 06:30

Quote:

Originally Posted by aerosayan (Post 794096)
Has there been no research done on CFL acceleration and stabilization for explicit schemes? :confused:

The stability of an explicit scheme follows directly from its formulation. If you want to change the stability, then the scheme must change.

In CFD we usually want to solve a set of non-linear equations, and the significant/dominant physics affecting the stability of the scheme can differ from problem to problem. What you would do to optimise for low-speed aeroacoustics, combustion, high swirl, transonic flow, etc. will be different. It isn't general, although commercial companies obviously seek to get their codes to handle as many types of flow as possible, albeit not necessarily optimally.

Seeking to optimise code performance in the early stages of development is rarely cost-effective because, as the project evolves, changes will almost certainly make it sub-optimal, requiring the work to be repeated. If performed at all as an independent task, it is usually done in the later stages of development, after most of the main decisions influencing it have been taken.

Obviously the efficiency of computation shouldn't be wholly disregarded when considering what to adopt and code, but implementing straightforward complete schemes (i.e. ones that can handle all the complexity of the target problem and not just a subset) and then testing to gather focused evidence on which to base refinements and optimisations will be the path followed by most successful projects.

I'm not sure I have wholly addressed the question asked, and perhaps I have misunderstood your objectives, but the focus of your attention, like people obsessing over particular computer languages, may be distracting you from progressing with CFD as a whole, assuming that is your main objective, which it may not be.

aerosayan January 22, 2021 06:49

Quote:

Originally Posted by andy_ (Post 794121)
I'm not sure I have wholly addressed the question asked, and perhaps I have misunderstood your objectives, but the focus of your attention, like people obsessing over particular computer languages, may be distracting you from progressing with CFD as a whole, assuming that is your main objective, which it may not be.


Thanks andy,



I've also seen that some commercial codes prefer to go for a wide range of uses rather than performance. It makes sense when they consider business profits. Nothing wrong with that.


As for me, I don't know much CFD yet, so I try to do one thing at a time, and to do it well.

I only like supersonic/hypersonic flows, so that's my sole focus. Maybe in the far future I will add chemical reactions.

I know my methods are highly unconventional, but given my background they work for me. Being able to convert the problem to use OpenMP instead of MPI was one of the cases where not following the common practice (using MPI) helped. :)

arjun January 22, 2021 09:44

Quote:

Originally Posted by aerosayan (Post 794125)
Thanks andy,

I've also seen that some commercial codes prefer to go for a wide range of uses rather than performance. It makes sense when they consider business profits.


In CFD, performance is not this simple. To illustrate the point: I had a turbulent pipe test case. The first time we ran this test case with the Spalart-Allmaras model in Wildkatze, it took 1500 iterations to converge. By comparison, STAR-CCM+ was converging the same case in 400 to 500 iterations.

Consider it today: many more things have been added to the solver, and in terms of operations the current version does around 30% more work. But the solver now converges the same test case in fewer than 200 iterations. Around 200 is the iteration count for k-epsilon and k-omega too on the same test case.

So even with a higher operation count, the solver is now more efficient.

So these things change on a solver-by-solver basis, and they also depend on the algorithm. Optimizing the operation count is not always the best way.

