
Pros and cons of Lattice Boltzmann Method vs explicit FVM

January 16, 2023, 07:59
Pros and cons of Lattice Boltzmann Method vs explicit FVM
  #1
Member
 
Anders Aamodt Resell
Join Date: Dec 2021
Location: Oslo, Norway
Posts: 64
My knowledge of the LBM is very limited, but it is my understanding that it can be implemented efficiently on GPUs since each "cell's" update depends only on the surrounding cells. I also assume this update is explicit, since otherwise some linear system would likely have to be solved and the GPU would be used less efficiently (?)

My question then is: why is this method likely to be more efficient than, for instance, an explicit method for compressible flows, where each cell also depends only on nearby cells? As an example, a second-order MUSCL scheme on a 2D grid could have a nine-point stencil as shown below:

OOXOO
OOXOO
XXXXX
OOXOO
OOXOO

(the X's are part of the stencil)

while a basic LBM could have the following nine-point stencil in 2D:

XXX
XXX
XXX

I guess both of these methods could be parallelized effectively on a GPU, but my impression is still that LBM is regarded as superior in this regard. If this impression is true, what is the reason for it? Is it due to a less restrictive stable time step? Of course, for an explicit implementation of Navier-Stokes, the viscous terms will put large restrictions on the time step due to the von Neumann number, which scales with 1/dx^2.
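To make that concrete, this is the kind of time-step restriction I have in mind for an explicit compressible solver (just a rough sketch in my own notation, with $a$ the sound speed and $\nu$ the kinematic viscosity):

$$\Delta t_{conv} \le C \, \frac{\Delta x}{|u| + a}, \qquad \Delta t_{visc} \le \frac{\Delta x^2}{4\nu} \ \ \text{(forward Euler, standard 5-point Laplacian, 2D)}, \qquad \Delta t = \min(\Delta t_{conv}, \Delta t_{visc})$$

so on a fine mesh the viscous limit dominates, since it shrinks with $\Delta x^2$ while the convective limit only shrinks with $\Delta x$.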

I hope someone can enlighten me

January 16, 2023, 11:22
  #2
Senior Member
 
Filippo Maria Denaro
Join Date: Jul 2010
Posts: 6,770
Quote:
Originally Posted by ander

Your MUSCL stencil example shows the 1D scheme simply extended to 2D by factorization along each direction.

Actually, since an FV formulation requires surface integration of the flux, the stencil can, in general, be fully 2D. Just consider an FV scheme more accurate than second order (or a second-order scheme with the trapezoidal rule instead of the mean-value formula for the integral).
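To be concrete about what I mean (my notation, with $\mathbf{F}$ the flux and $A_f$ the face area): a second-order FV scheme typically evaluates each face integral with the mean-value (midpoint) formula

$$\oint_{\partial V} \mathbf{F} \cdot \mathbf{n} \, dS \approx \sum_f \mathbf{F}(\mathbf{x}_f) \cdot \mathbf{n}_f \, A_f$$

while the trapezoidal rule uses the two end points of each face

$$\sum_f \frac{A_f}{2} \left[ \mathbf{F}(\mathbf{x}_{f,1}) + \mathbf{F}(\mathbf{x}_{f,2}) \right] \cdot \mathbf{n}_f$$

and reconstructing the flux at those points requires neighbours in the direction transverse to the face normal, so the stencil is no longer a 1D cross extended direction by direction.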

January 16, 2023, 16:55
  #3
Member
 
Anders Aamodt Resell
Join Date: Dec 2021
Location: Oslo, Norway
Posts: 64
I found this kind of implementation in a paper, together with the third-order TVD Runge-Kutta method, and implemented it myself with satisfactory results, but I see your point.

Even if a larger stencil were used for the FVM, it would still be well suited to a GPU implementation, so my question remains.

January 17, 2023, 05:40
  #4
Senior Member
 
Paolo Lampitella
Join Date: Mar 2009
Location: Italy
Posts: 2,151
Blog Entries: 29
Not an LBM expert here, but I have been around long enough to remember this question resurfacing, in one form or another, several times, and a quick search on the forum just confirmed it.

I'm not writing this to blame you for not doing a search yourself, but to say that, after all those questions, I have yet to see any convincing evidence for using LBM in general, other than for its own sake.

Also, it's not that some users provided arguments that did not convince me; it seems there is simply no user on this forum who would choose LBM over more traditional CFD methods per se. I still don't know whether that is a limitation of this forum or of LBM.

I can't tell you why or how LBM does or doesn't do something, but the idea of running a general CFD code efficiently on GPUs is indeed relevant. How efficiently that can be done is, I think, the main sticking point, and the reason traditional codes generally have less traction on GPUs than LBM.

In my opinion (but I am no expert on this either), to implement something efficiently on GPUs you should stay there, and not move data in and out for the main computational kernel. This, in turn, limits what you can do efficiently. Most super-speed claims I have seen for LBM come from purely uniform grids, so I deduce that this is part of the picture in some way.

If I had to say, traditional CFD codes really shine today in generality and flexibility, all things that, it seems, would have to be compromised somehow to work well on GPUs.

However, let me also add that traditional CFD codes have a vast range of stability, robustness and accuracy issues to control, even on uniform meshes. I am under the impression that, despite its limitations, LBM simply has fewer things to control before it is ready to go. So it might also be that LBM is a friendlier black-box solver which, coupled with the general availability of GPUs, met a certain market.

Let me also say that whenever I start using GPUs, I'll probably begin with a uniform-grid traditional CFD solver rather than LBM, exactly to understand how fast it can be compared to LBM.
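To give an idea of what I mean by staying on the GPU with a uniform grid, the core of such a solver is basically one stencil kernel per stage, along the lines of the toy sketch below (purely illustrative, not taken from any actual code; the kernel name, the arrays and the diffusion-only update are all made up):

Code:
// Toy sketch only: one explicit forward-Euler diffusion step on a uniform 2D grid.
// All names (u, u_new, nx, ny, nu, dt, dx) are illustrative, not from any real solver.
__global__ void explicit_step(const double* u, double* u_new,
                              int nx, int ny, double nu, double dt, double dx)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // cell index in x
    int j = blockIdx.y * blockDim.y + threadIdx.y;   // cell index in y
    if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1) return;  // leave boundary cells alone

    int id = j * nx + i;
    // 5-point Laplacian: each cell reads only its four neighbours,
    // so the update is local and needs no linear solve.
    double lap = (u[id - 1] + u[id + 1] + u[id - nx] + u[id + nx] - 4.0 * u[id]) / (dx * dx);
    u_new[id] = u[id] + dt * nu * lap;
}

// Host side (sketch): keep both arrays on the device and ping-pong them,
// one kernel launch per time step, no host-device copies inside the loop.
//   dim3 block(16, 16);
//   dim3 grid((nx + block.x - 1) / block.x, (ny + block.y - 1) / block.y);
//   for (int n = 0; n < nsteps; ++n) {
//       explicit_step<<<grid, block>>>(d_u, d_u_new, nx, ny, nu, dt, dx);
//       std::swap(d_u, d_u_new);
//   }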

January 18, 2023, 11:24
  #5
Member
 
Anders Aamodt Resell
Join Date: Dec 2021
Location: Oslo, Norway
Posts: 64
I realize that I would have to read up more on LBM to respond meaningfully to that comment. My impression is also that efficient GPU implementations get the best speedup from uniform grids; however, I remember a showcase of a solver by SpaceX that uses octree-based AMR together with GPUs: https://www.youtube.com/watch?v=vYA0...nsideHPCReport (this video might be familiar to most CFD Online users). I don't know how much speedup is gained there compared to a CPU implementation.

Anyway, it would be interesting to see a speed comparison between GPU implementations of explicit FVM and LBM on uniform grids.

January 18, 2023, 15:02
  #6
Senior Member
 
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,273
The problem with this topic is that there are not many people who have good enough experience with all three types of methods (I include pressure-based methods too).

1. The major issue I see with comparisons made by people who use LBM is that they compare against other methods on the same mesh, or on meshes of similar size, and then conclude that LBM is so many times faster. Often (unless one is doing LES) one can obtain the same results with much coarser meshes.

2. Often one can obtain good results with steady-state calculations, and many companies do. So the comparisons only make sense in cases where a transient simulation is a must.

3. On the main question that was asked, density-based explicit solvers vs. LBM, the answer is a bit more complicated than you might expect. Standard LBM, because it needs fewer operations per time step, could be faster. BUT if one uses a specialized time-stepping method, density-based solvers can be very fast. One example would be an exponential integrator, which allows much larger time steps than LBM would be able to keep up with (see the sketch after this list). A GPU implementation of this is possible.

3.b. There are also finite analytic methods, which allow very large time steps because they are constructed from analytical solutions; implemented on a GPU, they could be many times faster than their LBM counterparts.

4. There are finite-volume lattice Boltzmann methods, suitable for unstructured grids. In their basic form they are very slow because they require very, very small time steps (due to their advection). BUT there are a few new types of formulations that fix this issue and allow very large time steps (they use something called negative viscosity!!). Those methods could compete with exponential integrators and finite analytic methods. (This is why the answer to the OP is not clear cut.)

5. The major advantage of LBM over the other methods is that it performs very well at small time-step sizes, whereas pressure-based methods decouple and create lots of problems there. So where small time steps are unavoidable, LBM is the better way to go (combustion, plasma, etc. are such areas).

6. I do not consider robustness and stability to be big issues with the other methods, because I have worked on them for years. Accuracy needs to be better.
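Roughly, what I mean by the exponential integrator, written schematically for the semi-discrete system $du/dt = A u + N(u)$ with $A$ the stiff linearized part (this is just the simplest exponential Euler form):

$$u^{n+1} = e^{A \Delta t} \, u^n + \Delta t \, \varphi_1(A \Delta t) \, N(u^n), \qquad \varphi_1(z) = \frac{e^z - 1}{z}$$

The stiff part is integrated exactly, so the time step is limited by accuracy rather than by the acoustic or viscous stability limits of a standard explicit scheme; the matrix exponential is never formed explicitly but applied to vectors, for example with Krylov or polynomial approximations.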

January 28, 2024, 12:34
  #7
Senior Member
 
Reviewer #2
Join Date: Jul 2015
Location: Knoxville, TN
Posts: 141
Arjun,

Could you please elaborate more on the fifth statement?

Thank you in advance,
Rdf

January 29, 2024, 13:45
  #8
Senior Member
 
Arjun
Join Date: Mar 2009
Location: Nurenberg, Germany
Posts: 1,273
Quote:
Originally Posted by randolph

In pressure-based solvers, the velocity and pressure are usually coupled by a Rhie and Chow type dissipation. Here the face flux has an additional contribution, due to Rhie and Chow, which is inversely proportional to the diagonal of the momentum matrix.

The diagonal of the momentum matrix has an unsteady contribution of the form (rho * Volume) / time_step_size.

That means the smaller the time-step size, the bigger the momentum diagonal. Hence, for very small time steps, the inverse of the momentum matrix diagonal approaches zero, so the effect of the Rhie and Chow flux goes to zero as the time step goes to zero, resulting in decoupling of the velocity and pressure.
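Written schematically (my notation; overbars are linear interpolation of cell values to the face, $a_P$ the momentum matrix diagonal, $V$ the cell volume), the Rhie and Chow face velocity looks like

$$u_f = \overline{u}_f - \overline{\left(\frac{V}{a_P}\right)}_f \left[ (\nabla p)_f - \overline{\nabla p}_f \right], \qquad a_P = \cdots + \frac{\rho V}{\Delta t}$$

As $\Delta t \to 0$ the unsteady term makes $a_P$ blow up, $V/a_P \to 0$, and the pressure-difference term that couples pressure and velocity disappears from the face flux.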
