CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   Main CFD Forum (https://www.cfd-online.com/Forums/main/)
-   -   Pros and cons of Lattice Boltzmann Method vs explicit FVM (https://www.cfd-online.com/Forums/main/247145-pros-cons-lattize-boltzmann-method-vs-explicit-fvm.html)

ander January 16, 2023 07:59

Pros and cons of Lattice Boltzmann Method vs explicit FVM
 
My knowledge of the LBM is very limited, but it is my understanding that it can be implemented efficiently on GPUs, since each "cell's" update depends only on the surrounding cells. I also assume that this update is explicit, since otherwise some linear system would likely have to be solved and the GPU utilization would be less efficient (?)

My question then is why this method is likely to be more efficient than, for instance, an explicit method for compressible flows, where each cell also depends only on nearby cells. As an example, a second-order MUSCL scheme on a 2D grid could have a nine-point stencil as shown below:

OOXOO
OOXOO
XXXXX
OOXOO
OOXOO

(the X's mark the points of the stencil)

while a basic LBM could have the following nine-point stencil in 2D:

XXX
XXX
XXX
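
To make concrete what I mean by a purely local, explicit update, here is a rough D2Q9 stream-and-collide sketch I put together (plain NumPy on a periodic box; the names, layout and parameters are just my own illustration, not taken from any particular code):

Code:

import numpy as np

# D2Q9 lattice: 9 discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    # standard low-Mach expansion of the Maxwellian
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau):
    # collision: purely local BGK relaxation at every node
    rho = f.sum(axis=0)
    ux = (f*c[:, 0, None, None]).sum(axis=0)/rho
    uy = (f*c[:, 1, None, None]).sum(axis=0)/rho
    f = f - (f - equilibrium(rho, ux, uy))/tau
    # streaming: each population just moves to the neighbouring node
    for i in range(9):
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f

# example: 64x64 quiescent field, relaxation time tau = 0.8
f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
f = lbm_step(f, tau=0.8)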

I guess both of these methods could be parallelized effectively on a GPU, but my impression is still that LBM is regarded as superior in this respect. If this impression is true, what is the reason for it? Is it a less restrictive stable time step? Of course, for an explicit implementation of Navier-Stokes, the viscous terms put large restrictions on the time step through the von Neumann number, which scales with 1/dx^2.
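
Just to illustrate the scaling I have in mind (the coefficients below are only indicative, not from any reference):

Code:

# Rough explicit time-step limits for compressible Navier-Stokes: the convective
# limit shrinks like dx, the viscous one like dx^2, so halving dx halves the
# former but quarters the latter.

def dt_convective(dx, u_max, c_sound, cfl=0.5):
    return cfl*dx/(abs(u_max) + c_sound)

def dt_viscous(dx, nu, vnn=0.25):
    return vnn*dx**2/nu

for dx in (1e-2, 5e-3, 2.5e-3):
    print(dx, dt_convective(dx, u_max=10.0, c_sound=340.0),
          dt_viscous(dx, nu=1.5e-5))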

I hope someone can enlighten me :)

FMDenaro January 16, 2023 11:22

Quote:

Originally Posted by ander (Post 842961)
My knowledge of the LBM is very limited, but it is my understanding that it can be implemented efficiently on GPUs, since each "cell's" update depends only on the surrounding cells. [...]

Your example of the MUSCL stencil implies that the 1D version is simply extended to 2D by factorization along each direction.

Actually, since an FV formulation requires the surface integration of the flux, the stencil can, in general, be fully 2D. Just consider an FV scheme more accurate than second order (or a second-order scheme that uses the trapezoidal rule instead of the mean-value formula for the surface integral).
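
As a schematic illustration of the point (the notation below is purely my own): with the mean-value/midpoint rule the east-face flux of a Cartesian cell needs only the state reconstructed at the face centre, while the trapezoidal rule needs the two face corners, whose reconstruction pulls the diagonal neighbours into the stencil.

Code:

# Schematic only: integration of a flux F over the east face (length dy) of a
# Cartesian cell. The quadrature rule decides which reconstructed states are
# needed, and hence how wide (and how "2D") the stencil becomes.

def east_face_flux_midpoint(F_face_centre, dy):
    # mean-value formula: a single Gauss point at the face centre
    return F_face_centre * dy

def east_face_flux_trapezoidal(F_corner_south, F_corner_north, dy):
    # trapezoidal rule: the two face corners, whose reconstruction also
    # involves the diagonal cells
    return 0.5 * (F_corner_south + F_corner_north) * dy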

ander January 16, 2023 16:55

I found this kind of implementation in a paper, together with the third-order TVD Runge-Kutta method, and implemented it myself with satisfactory results, but I see your point.

Even if a larger stencil were used for the FVM, it would still be well suited to GPU implementation, so my question remains.

sbaffini January 17, 2023 05:40

Not an LBM expert here, but I have been around long enough to remember this question, in one form or another, resurfacing several times, and a quick search on the forum just confirmed this.

I'm not writing this to blame you for not doing a search yourself, but to say that, after all those questions, I have yet to see any convincing argument for using LBM in general that isn't just for its own sake.

Also, it's not that some users provided arguments that did not convince me; it seems rather that there is no user on this forum who would use LBM over more traditional CFD methods per se. I still don't know if that is because of limitations of this forum or of LBM.

I can't tell you why or how LBM does or does not do something, but the idea of running a general CFD code efficiently on GPUs is indeed relevant. How efficiently that can be done is, I think, the main issue with respect to LBM, and the reason it generally has less traction.

In my opinion, though I am no expert on this either, to implement something efficiently on GPUs you should stay there, without going in and out for the main computational kernel. This, in turn, places some limits on what you can do efficiently. Most super-speed claims I have seen for LBM come from purely uniform grids, so I deduce that this is part of the picture in some way.

If I had to say, traditional CFD codes really shine today in generality and flexibility, all things that, it seems, would need to be compromised somehow to work well on GPUs.

However, let me also add that traditional CFD codes have a vast range of stability, robustness and accuracy issues to control, even on uniform meshes. I am under the impression that, despite its limitations, LBM simply has fewer things to control before it is ready to go. So it might also be that LBM is a friendlier black-box solver, which, coupled with the general availability of GPUs, has met a certain market.

Let me also say that whenever I start using GPUs, I'll probably begin with a uniform-grid traditional CFD solver rather than LBM, exactly to understand how fast it can be compared to LBM.

ander January 18, 2023 11:24

I realize that I would have to read up more on LBM to have anything meaningful to respond to that comment. My impression is also that efficient GPU implementations get the best speedup from uniform grids; however, I remember a showcase of a solver by SpaceX that uses octree-based AMR together with GPUs: https://www.youtube.com/watch?v=vYA0...nsideHPCReport (this video might be familiar to most CFD Online users). I don't know how much speedup is gained there compared to a CPU implementation.

Anyway, it would be interesting to see a comparison in terms of speed between a GPU-implemented explicit FVM and LBM on uniform grids.

arjun January 18, 2023 15:02

The problem with this topic is that there are not many people who have good enough experience with all three types of methods (I include pressure-based methods too).

1. The major issue I see with comparisons made by people who use LBM is that they compare against other methods on the same mesh or on meshes of similar size, and then state that LBM is so-many times faster. Often (unless one is doing LES) one can obtain the same results with much coarser meshes.

2. Often one can obtain good results with steady-state calculations, and many companies do. So the comparisons only make sense in cases where a transient simulation is a must.


3. To the main question that was asked, about density-based explicit solvers vs LBM, the answer is a bit more complicated than you might expect. The standard LBM, because it needs fewer operations per time step, could be faster. BUT if one uses a specialized time-stepping method, then density-based solvers could be very fast. One example would be to use an exponential integrator, which would allow much larger time steps that LBM would not be able to keep up with (see the sketch at the end of this post). A GPU implementation of this is possible.

3.b. There are also finite analytical methods that allow very large time steps, owing to the fact that they are constructed from analytical solutions; if implemented on a GPU they could be many times faster than their LBM counterparts.


4. There are finite volume lattice Boltzmann methods, suitable for unstructured grids. In their basic form they are very slow because they require very small time steps (due to their advection treatment). BUT there are a few new types of formulations that fix this issue and allow very large time steps (they use something called negative viscosity!). Those methods could compete with exponential integrators and finite analytical methods. (This is why the answer to the OP is not clear cut.)


5. The major advantage of LBM over other methods is that it performs very well at small time-step sizes, whereas pressure-based methods decouple and create lots of problems. So in the areas where small time steps are unavoidable, LBM is the better way to go (combustion, plasma, etc. are such areas).


6. I do not consider robustness and stability to be big issues with the other methods, because I have worked on them for years. Accuracy needs to be better.
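
As a minimal illustration of what I mean in point 3 (my own toy example, not from any production code): a linear 1D advection-diffusion semi-discretization, periodic, advanced in time with the matrix exponential, so the step size is limited by accuracy rather than stability.

Code:

import numpy as np
from scipy.linalg import expm

# Toy exponential integrator: du/dt = A u, with A the periodic central-difference
# discretization of -a du/dx + nu d2u/dx2. The update u <- expm(dt*A) u is exact
# for the semi-discrete system, so dt is not stability-limited. (For large systems
# one would approximate the action of the matrix exponential with Krylov methods.)

n = 64
dx = 1.0 / n
a, nu = 1.0, 1e-3

A = np.zeros((n, n))
for i in range(n):
    ip, im = (i + 1) % n, (i - 1) % n
    A[i, ip] += -a / (2 * dx) + nu / dx**2   # coefficient of u_{i+1}
    A[i, im] += a / (2 * dx) + nu / dx**2    # coefficient of u_{i-1}
    A[i, i] += -2 * nu / dx**2               # coefficient of u_i

x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x)

dt = 0.5              # far beyond the explicit limits (~dx/a and ~dx^2/nu here)
u = expm(dt * A) @ u  # one exponential-integrator step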

randolph January 28, 2024 12:34

Arjun,

Could you please elaborate more on the fifth statement?

Thank you in advance,
Rdf

arjun January 29, 2024 13:45

Quote:

Originally Posted by randolph (Post 863874)
Arjun,

Could you please elaborate more on the fifth statement?

Thank you in advance,
Rdf

In pressure-based solvers, the velocity and pressure are usually coupled through Rhie-Chow type dissipation. Here the face flux has an additional contribution, the Rhie-Chow term, which is inversely proportional to the diagonal of the momentum matrix.

The diagonal of the momentum matrix has an unsteady contribution of the form (rho * Volume) / time_step_size.

That means the smaller the time step, the bigger the momentum diagonal will be. Hence, for very small time steps the inverse of the momentum matrix diagonal approaches zero, so the effect of the Rhie-Chow flux goes to zero as the time step goes to zero, resulting in decoupling of the velocity and pressure.
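
A stripped-down sketch of that scaling (my own simplified collocated-grid notation; any actual code will differ in the details):

Code:

# How the Rhie-Chow damping scales with the time step (schematic form only).
# The face dissipation is proportional to d_f = V / a_P, and a_P contains the
# unsteady contribution rho*V/dt, so d_f (and with it the damping) vanishes
# as the time step goes to zero.

def momentum_diagonal(a_conv_diff, rho, volume, dt):
    return a_conv_diff + rho * volume / dt   # a_P including its unsteady part

def rhie_chow_damping(dp_face, dp_interp, rho, volume, dt, a_conv_diff, face_area):
    d_f = volume / momentum_diagonal(a_conv_diff, rho, volume, dt)
    # damping flux added to the interpolated face velocity
    return rho * d_f * face_area * (dp_interp - dp_face)

for dt in (1e-2, 1e-4, 1e-6):
    print(dt, rhie_chow_damping(dp_face=1.0, dp_interp=1.2, rho=1.0,
                                volume=1e-6, dt=dt, a_conv_diff=1e-3,
                                face_area=1e-4))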

