
Coupled Solver: Include Y-velocity terms in X-momentum equation?


#1 - May 25, 2020, 12:55
arkie87 (Raphael), Member
All,

I have a general question about coupled solvers for the NS equations: do they simply put the segregated coefficients for the X-momentum, Y-momentum, and continuity equations into one giant matrix and solve, or do they fully linearize all the terms?

That is, do the X-momentum equations include implicit Y-velocity terms in the matrix, and vice versa?

I hope my question is clear.

TIA

EDIT: To further clarify: do the stability and convergence-rate benefits of coupled solvers only appear when all the implicit terms are included (i.e., fully linearized), or is there a benefit in general, even if the coefficients are essentially the same as in the segregated solver but the pressure and velocity are more directly coupled?

Last edited by arkie87; May 25, 2020 at 12:59. Reason: clarification

#2 - May 25, 2020, 16:04
sbaffini (Paolo Lampitella), Senior Member
I can't answer for every form of coupled solver, but preconditioned density-based ones include the implicit terms with respect to all the variables. That is, each implicit block couples all of the variables. I expect the same for pressure-based solvers, but I also expect it to depend on other aspects (e.g., whether the flow is incompressible, how the solver is coded, etc.)

#3 - May 25, 2020, 16:12
sbaffini (Paolo Lampitella), Senior Member
Consider also that, once you have a coupled solver, it costs you nothing to add all the implicit terms.

Also, I don't expect any advantage from simply solving the equations together.

#4 - May 25, 2020, 17:51
arkie87 (Raphael), Member
Quote:
Originally Posted by sbaffini View Post
Consider also that, once you have a coupled solver, it costs you nothing to add all the implicit terms.
If you are putting all the equations into one matrix, you lose the banded structure anyway, so using an efficient algorithm (such as a pentadiagonal/tridiagonal solver) is no longer an option. But I would still think that adding more implicit terms increases the computational cost even further, no?


Quote:
Originally Posted by sbaffini View Post
Also, I don't expect any advantage from simply solving the equations together.
If the X- and Y-momentum equations include an implicit pressure term, but they do not include an implicit Y-velocity and X-velocity term, respectively, would you still not expect any improved convergence? (This is the "coupled" solver I intended; sorry if that wasn't clear.)

Also, your answers are very helpful. Thanks again!

#5 - May 25, 2020, 18:20
sbaffini (Paolo Lampitella), Senior Member
Banded structure is preserved at the block level, so it really isn't an issue. The point is that, once you have a block instead of a single scalar, whether or not you fill the block is not going to impact the overall cost.

Using only the pressure coupling, instead of the full one, is along the same lines.

The point is that once you solve for a coupled set of equations, say n, you incur a cost, mostly memory, that is a factor of n^2 higher, and you want a good reason for it. Adding a few more coefficients to a matrix that already has a given dimension costs you nothing and is just the core of the coupling.

I don't know the details for pressure-based coupled solvers, but I think they are no different. Also, the term you allude to for the inter-velocity coupling is actually a mass flux that, for Rhie-Chow-like treatments, contains the pressure as well, so you kind of want to consider it in any case.
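
To make this concrete, here is a minimal MATLAB-style sketch (all names and sizes are made up for illustration): the block storage is allocated up front, so writing the extra coupling coefficients into it changes neither the memory footprint nor the matrix dimension.

Code:
% Minimal sketch (hypothetical names): block-coefficient storage.
N = 1000;                 % number of cells
n = 3;                    % coupled unknowns per cell: [p; u; v] in 2D
Adiag = zeros(n, n, N);   % diagonal blocks A_ii, allocated up front
% Diagonal-only fill (roughly the segregated equations side by side):
for i = 1:N
    Adiag(:,:,i) = eye(n);
end
% Full fill (adds the p-u, p-v, u-v coupling) reuses the same storage:
for i = 1:N
    Adiag(:,:,i) = rand(n);  % same memory, same dimension, more coupling
end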

#6 - May 25, 2020, 18:35
arkie87 (Raphael), Member
Quote:
Originally Posted by sbaffini View Post
Banded structure is preserved at the block level, so it really isn't an issue. The point is that, once you have a block instead of a single scalar, whether or not you fill the block is not going to impact the overall cost.
I assume by block you are referring to the matrix, i.e. once we are solving for all scalars at once in the matrix, the banded structure won't be preserved. And once the banded structure isn't preserved, it doesn't cost anything to further disturb it by adding more implicit terms. Is that right?

Quote:
Originally Posted by sbaffini View Post
Using only the pressure coupling, instead of the full one, is along the same lines.
So you still maintain that adding implicit pressure terms, but not implicit inter-velocity terms, won't accelerate convergence? The way I see it, it is sort of like the SIMPLER algorithm, where the pressure equation is solved directly: instead of having to solve the momentum equations, the pressure-correction equation, and the pressure equation, we solve the entire block once (which is like solving the momentum and pressure equations together, but without the pressure-correction equation). So perhaps convergence is similar to that of the SIMPLER algorithm?

Quote:
Originally Posted by sbaffini View Post
The point is that once you solve for a coupled set of equations, say n, you incur a cost, mostly memory, that is a factor of n^2 higher, and you want a good reason for it. Adding a few more coefficients to a matrix that already has a given dimension costs you nothing and is just the core of the coupling.
I would think the denser the matrix, the more steps in the Gaussian elimination process too, no? It might be negligible, though.

Also, if we are dealing with sparse matrices, then the memory requirement should be closer to n, not n^2, no?

Quote:
Originally Posted by sbaffini View Post
I don't know the details for pressure-based coupled solvers, but I think they are no different. Also, the term you allude to for the inter-velocity coupling is actually a mass flux that, for Rhie-Chow-like treatments, contains the pressure as well, so you kind of want to consider it in any case.
I've seen the term "Rhie-Chow" thrown around but have no idea what it means. I guess I will look into it.

Thanks again!

#7 - May 25, 2020, 19:14
sbaffini (Paolo Lampitella), Senior Member
Well... no, no and no.

Given the traditional matrix of a segregated solver with scalar elements aij, a coupled solver has matrix elements Aij. That is, each element is itself an nxn matrix, or block. The overall matrix is said to be a block matrix. The matrix structure is the same as the scalar one when viewed at the block level.

So a coupled solver already has a factor of n^2 more memory allocated. You can fill the blocks on the diagonal only (roughly equivalent to just putting the segregated equations in the same system), but filling the rest of the block comes at practically no cost.

I don't know if adding only the pressure dependence is enough. What I claim is that it makes no sense to have an nxn block and fill it only in part.

In a blocked algebraic solver for CFD, Gaussian elimination enters in the inversion of the single Aij blocks, but certainly not of the full matrix. The system is typically solved using iterative methods whose main difference with respect to the non-blocked versions is the use of the block inverse instead of division by matrix elements. Again, even accounting for the additional cost of the Gaussian elimination, it makes little difference whether the block is full or not.

The only way a momentum equation depends on the other ones is through the mass flux in the convective term. When you use Rhie-Chow, this mass flux also contains the pressure. So, even if you only want to take pressure into account, you still have to consider it in the convective term as well. At that point, dude, just do it as it is intended to be.
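
As a rough MATLAB-style illustration of the inversion point (hypothetical names, not anyone's actual code): the only Gaussian elimination is on the small nxn blocks, never on the full system.

Code:
% Rough sketch (hypothetical names): the only "inversion" is of the
% small n-by-n blocks; the full system is never factored directly.
N = 1000;  n = 3;
Adiag = rand(n, n, N) + n*repmat(eye(n), 1, 1, N);  % made-up blocks
AdiagInv = zeros(n, n, N);
for i = 1:N
    AdiagInv(:,:,i) = inv(Adiag(:,:,i));  % tiny dense inverse (n = 3 or 4)
end
% An iterative sweep then multiplies residuals by AdiagInv(:,:,i)
% instead of dividing by a scalar a_ii.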

#8 - May 25, 2020, 20:57
arkie87 (Raphael), Member
Quote:
Originally Posted by sbaffini View Post
Well... no, no and no.

Given the traditional matrix of a segregated solver with scalar elements aij, a coupled solver has matrix elements Aij. That is, each element is itself an nxn matrix, or block. The overall matrix is said to be a block matrix. The matrix structure is the same as the scalar one when viewed at the block level.
So, to be clear, if we are solving a 3D flow: if aij is n x n, each Aij is 4 x 4 (and the overall matrix is 4n x 4n)?


Quote:
Originally Posted by sbaffini View Post
So a coupled solver already has a factor of n^2 more memory allocated.
I don't see how. It should only be 4x more memory for 3D problems, since the matrix is sparse.

Quote:
Originally Posted by sbaffini View Post
You can fill the blocks on the diagonal only (roughly equivalent to just putting the segregated equations in the same system)
Agreed and makes sense.

Quote:
Originally Posted by sbaffini View Post
but filling the rest of the block comes at practically no cost.
The more non-zero elements, the more operations required in the Gaussian elimination. And for sparse matrices, only nonzero elements contribute to the number of operations.

Quote:
Originally Posted by sbaffini View Post
I don't know if adding only the pressure dependence is enough. What I claim is that it makes no sense to have an nxn block and fill it only in part.
Ok, well that is what I am asking.

Quote:
Originally Posted by sbaffini View Post
In a blocked algebraic solver for CFD, Gaussian elimination enters in the inversion of the single Aij blocks, but certainly not of the full matrix. The system is typically solved using iterative methods whose main difference with respect to the non-blocked versions is the use of the block inverse instead of division by matrix elements.
In my application, I am just using MATLAB with A\b, not a specific CFD solver. And I am not quite sure what you are saying here. I guess you are saying that the CFD solver inverts the Aij blocks, which are 4x4?

Quote:
Originally Posted by sbaffini View Post
Again, even accounting for the additional cost of the Gaussian elimination, it makes little difference whether the block is full or not.
In my understanding, Gaussian elimination only performs operations on nonzero elements. So in theory, if you were to fill in only the diagonals of the Aij blocks, it would perform the same number of operations as if you inverted each block individually. If you think this is wrong, please kindly explain why.

Quote:
Originally Posted by sbaffini View Post
The only way a momentum equation depends on the other ones is through the mass flux in the convective term.
Agreed.

Quote:
Originally Posted by sbaffini View Post
When you use Rhie-Chow, this mass flux also contains the pressure. So, even if you only want to take pressure into account, you still have to consider it in the convective term as well.
Is Rhie-Chow the only method for this? Can't I just linearize using a Taylor series expansion? If I do that, I don't introduce an additional pressure term? Or are you saying that if I just linearized it, it would not accelerate convergence as much as using Rhie-Chow?


Quote:
Originally Posted by sbaffini View Post
At that point, dude, just do it as it is intended to be.
Please, no need to be salty.

#9 - May 25, 2020, 21:14
arkie87 (Raphael), Member
Quote:
Originally Posted by sbaffini View Post
The only way a momentum equation depends on the other ones is through the mass flux in the convective term. When you use Rhie-Chow, this mass flux also contains the pressure. So, even if you only want to take pressure into account, you still have to consider it in the convective term as well. At that point, dude, just do it as it is intended to be.
Also, I'm not sure if this matters, but I am using a staggered grid. Everything I am reading about Rhie-Chow seems to indicate it is only for pressure-correction (i.e., not coupled) solvers (my coupled solver uses conservation of mass directly). But I suppose terms similar to Rhie-Chow could be used in coupled solvers if they help the coupling...

#10 - May 25, 2020, 23:53
arjun, Senior Member
For implicit-type solvers:

1. The coupled system does include the terms related to the other variables (u, v, w and p). This is true for both pressure-based and density-based solvers.
2. The main difference between pressure-based and density-based coupled solvers is how their coupled matrix is constructed. They are not the same (obviously).

For the density-based solver in Fluent and STAR-CCM+, one of the core papers is this:
https://arc.aiaa.org/doi/abs/10.2514/3.12946

For the pressure-based solver, refer to:
https://www.aub.edu.lb/msfea/researc...ts/CFD-P18.pdf

There are other coupled solvers too. One is present in Polyflow and is based on a Newton-type iteration, using the Jacobian of the Navier-Stokes equations to step to the next time level. This type of coupled solver is popular for high-speed flows (detonation, for example), though Polyflow uses it for viscoelastic flows.

#11 - May 26, 2020, 00:19
arkie87 (Raphael), Member
Quote:
Originally Posted by arjun View Post
For implicit-type solvers:

1. The coupled system does include the terms related to the other variables (u, v, w and p). This is true for both pressure-based and density-based solvers.
Ok, thank you. I am still wondering: is this necessary to speed up convergence, or is the coupling between P and the velocities stronger than the coupling among the velocities themselves?

I am asking because I have two 2-D MATLAB CFD codes. One uses SIMPLE and one uses a "coupled" algorithm I wrote (which couples velocity to pressure, but not velocity to velocity, in the momentum equations, while velocity is coupled to velocity in the continuity equation), and it takes far fewer iterations than SIMPLE (40 vs. 400).

Quote:
Originally Posted by arjun View Post
2. The main difference between pressure-based and density-based coupled solvers is how their coupled matrix is constructed. They are not the same (obviously).
Is it something like slide 23 vs. slide 24 of the linked slide deck, or is it not related?

http://www.tfd.chalmers.se/~hani/kur...s_20130917.pdf


Quote:
Originally Posted by arjun View Post
For the density-based solver in Fluent and STAR-CCM+, one of the core papers is this:
https://arc.aiaa.org/doi/abs/10.2514/3.12946

For the pressure-based solver, refer to:
https://www.aub.edu.lb/msfea/researc...ts/CFD-P18.pdf
In the second reference you sent, I see it says the following:
Quote:
The convergence of the SIMPLE algorithm is highly affected by the explicit treatment of the pressure gradient in the momentum equation and the velocity field in the continuity equation.
Based on that logic, my implicit inclusion of pressure in the momentum equations and implicit solution of the continuity equation should act to speed up convergence. In case that isn't clear: if the block equation is a 3x3 matrix over [P, U, V], then my matrix looks like:


             P   U   V
Continuity   0   X   X
X-Momentum   X   X   0
Y-Momentum   X   0   X

where 0 means the block is all zeros and X means some elements are filled.
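
For what it's worth, here is a tiny MATLAB sketch of that fill pattern (purely illustrative):

Code:
% Tiny sketch (illustrative only): the per-block fill pattern above,
% replicated over N nodes; spy() shows the resulting block structure.
mask = [0 1 1;   % continuity:  U and V filled, no P
        1 1 0;   % x-momentum:  P and U filled, no V
        1 0 1];  % y-momentum:  P and V filled, no U
N = 10;                            % a handful of nodes, just for plotting
S = kron(speye(N), sparse(mask));  % diagonal blocks only; neighbor
spy(S)                             % coefficients would add off-diagonal bands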

#12 - May 26, 2020, 00:38
arkie87 (Raphael), Member
Quote:
Originally Posted by arjun View Post
For implicit-type solvers:

1. The coupled system does include the terms related to the other variables (u, v, w and p). This is true for both pressure-based and density-based solvers.
2. The main difference between pressure-based and density-based coupled solvers is how their coupled matrix is constructed. They are not the same (obviously).

For the density-based solver in Fluent and STAR-CCM+, one of the core papers is this:
https://arc.aiaa.org/doi/abs/10.2514/3.12946

For the pressure-based solver, refer to:
https://www.aub.edu.lb/msfea/researc...ts/CFD-P18.pdf

There are other coupled solvers too. One is present in Polyflow and is based on a Newton-type iteration, using the Jacobian of the Navier-Stokes equations to step to the next time level. This type of coupled solver is popular for high-speed flows (detonation, for example), though Polyflow uses it for viscoelastic flows.
I guess another thing to note, comparing my SIMPLE code and COUPLED code, is that in the limit of Stokes flow (Re --> 0), SIMPLE still needs hundreds of iterations, whereas COUPLED requires one. Does that sound right, or is there a problem/inconsistency in my SIMPLE code?

#13 - May 26, 2020, 06:32
sbaffini (Paolo Lampitella), Senior Member
I didn't want to be salty, but we clearly are not on the same page.

You use MATLAB and divide to solve the system, so you couldn't care less how the equations are ordered in your matrix, and I suspect that you are ordering them like:

pressure eq. at node 1
pressure eq. at node 2
.
pressure eq. at node N
u eq. at node 1
u eq. at node 2
.
u eq. at node N
v eq. at node 1
v eq. at node 2
.
v eq. at node N

This is nice and all, but that is the matrix structure you would use to exploit Schur complement approaches, which eventually lead to segregated solvers. Nobody writes coupled solvers like that in CFD, because it is largely inefficient compared to the other way around, which is what I was referring to (please also consider the second reference posted by Arjun, equations 30 and 43 specifically, as well as the slides you posted, page 24):

pressure eq. at node 1
u eq. at node 1
v eq. at node 1
pressure eq. at node 2
u eq. at node 2
v eq. at node 2
.
pressure eq. at node N
u eq. at node N
v eq. at node N

Or, reasoning at the block level:

block eq. at node 1
block eq. at node 2
.
block eq. at node N

where each block is an nxn matrix (with n = 3 in 2D), and this is the block I was referring to in my posts. It is this block that is inverted by Gaussian elimination. Note that this is very different from your NxN blocks, with N referring to the number of nodes in the grid.
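
If you want to see the difference between the two orderings in MATLAB, here is a sketch (it assumes your field ordering [all p; all u; all v], each of length N; the names are made up):

Code:
% Sketch (assumed field ordering [all p; all u; all v], each length N):
% permutation from the field ordering to the per-node, interleaved one.
N = 50;  n = 3;                               % nodes, unknowns per node
perm = reshape(reshape(1:n*N, N, n)', [], 1); % perm((i-1)*n+k) = (k-1)*N+i
% For a field-ordered matrix A and right-hand side b:
%   Ablk = A(perm, perm);  bblk = b(perm);
% spy(Ablk) then shows the banded structure at the n-by-n block level.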

Now, the following holds for this approach:

1) Each block needs an nxn matrix already, independently of how you actually fill it. So, for a given matrix structure, if instead of scalar elements you have nxn blocks, do we agree that the coupled solver has a memory requirement that is a factor of n^2 higher?

2) If your solution variables are stored accordingly (see the OF slides you posted or the 2nd paper posted by Arjun), you exploit cache efficiency at the highest level, because you run over the nodes (faces in FV codes such as OpenFOAM) of the grid and retrieve the relevant variables only once (vs. n times for the segregated solver).

3) As per 2, you have all the variables to build your full coupling, and you got them as efficiently as possible. Is there a reason to drop some coupling terms, considering that the cost of updating the general matrix will be the same? In general NO, unless you know they will do some harm to the matrix properties you care most about.

Now, I certainly expect the pressure coupling to be more impactful than the velocity coupling, no doubt, but I was trying to explain that, in the common approach to coupled solvers, the inter-velocity coupling really comes at no additional cost.

With your ordering it is clearly different, but that is probably the least of your problems anyway.

As for Rhie-Chow, it is not relevant in your staggered case, but it is in the more general unstructured cell-centered case.

#14 - May 26, 2020, 10:44
arkie87 (Raphael), Member
Quote:
Originally Posted by sbaffini View Post
You use MATLAB and divide to solve the system, so you couldn't care less how the equations are ordered in your matrix, and I suspect that you are ordering them like:

pressure eq. at node 1
pressure eq. at node 2
.
pressure eq. at node N
u eq. at node 1
u eq. at node 2
.
u eq. at node N
v eq. at node 1
v eq. at node 2
.
v eq. at node N
You are absolutely correct.


Quote:
Originally Posted by sbaffini View Post
This is nice and all, but that is the matrix structure you would use to exploit Schur complement approaches, which eventually lead to segregated solvers.
Ok.

Quote:
Originally Posted by sbaffini View Post
Nobody writes coupled solvers like that in CFD
I do

Quote:
Originally Posted by sbaffini View Post
, because it is largely inefficient compared to the other way around, which is what I was referring to (please also consider the second reference posted by Arjun, equations 30 and 43 specifically, as well as the slides you posted, page 24):

pressure eq. at node 1
u eq. at node 1
v eq. at node 1
pressure eq. at node 2
u eq. at node 2
v eq. at node 2
.
pressure eq. at node N
u eq. at node N
v eq. at node N

Or, reasoning at the block level:

block eq. at node 1
block eq. at node 2
.
block eq. at node N

where each block is an nxn matrix (with n = 3 in 2D), and this is the block I was referring to in my posts. It is this block that is inverted by Gaussian elimination. Note that this is very different from your NxN blocks, with N referring to the number of nodes in the grid.
Yeah, the second reference and the slides I found are consistent with what you are saying. And I think I better understand what you mean by blocks now. Thank you.

Quote:
Originally Posted by sbaffini View Post
Now, the following holds for this approach:

1) Each block needs an nxn matrix already, independently of how you actually fill it. So, for a given matrix structure, if instead of scalar elements you have nxn blocks, do we agree that the coupled solver has a memory requirement that is a factor of n^2 higher?
That makes sense, since even if the nxn block starts off sparse, when you invert it, it will be full. So when you invert each block of the NxN block matrix, it will be full (and 3Nx3N overall). But to me, that sounds like a severe drawback of this approach and would limit the use of coupled solvers in 3D, no? Or is the matrix only ever inverted at the block level, and never at the full level?

Quote:
Originally Posted by sbaffini View Post
2) If your solution variables are stored accordingly (see the OF slides you posted or the 2nd paper posted by Arjun), you exploit cache efficiency at the highest level, because you run over the nodes (faces in FV codes such as OpenFOAM) of the grid and retrieve the relevant variables only once (vs. n times for the segregated solver).
I am a bit new to cache efficiency, but I think that makes sense. I am not sure I see why you have to retrieve the variables n times for the segregated solver, though; perhaps that is because I just vectorize my code in MATLAB and get the coefficients all at once, rather than on demand.


Quote:
Originally Posted by sbaffini View Post
3) As per 2, you have all the variables to build your full coupling, and you got them as efficiently as possible. Is there a reason to drop some coupling terms, considering that the cost of updating the general matrix will be the same? In general NO, unless you know they will do some harm to the matrix properties you care most about.

Now, I certainly expect the pressure coupling to be more impactful than the velocity coupling, no doubt, but I was trying to explain that, in the common approach to coupled solvers, the inter-velocity coupling really comes at no additional cost.
Yeah, I see what you are saying: it is relatively trivial to add in the other terms. Though I will say that Ref 2 that Arjun sent has a_uv = a_vu = 0, so they effectively drop the terms we are discussing. Based on that, it seems like Ref 2 and my method are equivalent, but the matrices are set up and ordered differently.


Quote:
Originally Posted by sbaffini View Post
As for Rhie-Chow, it is not relevant in your staggered case, but it is in the more general unstructured cell-centered case.
Ok, good to know. It seems like the ordering process is a bit more difficult on a staggered grid, no? Since the number of pressure unknowns differs from the number of U and V unknowns, the pattern will break near the boundary. Is that a problem? Perhaps that is a reason to consider collocated variables, but then I would have to implement Rhie-Chow...

The last thing I am confused about is how the coupled solver loops/works. It seems like you are saying it inverts only the blocks, i.e. the 3x3 matrices, and then moves around to each cell. Thus, it requires storage of the full 3Nx3N matrix, which runs into the problem I mentioned earlier. The other thing I am confused about is how this achieves the full coupling. It seems to couple U, V, and P together well, but to decouple them from their neighboring cells. So it seems like there would need to be lots of Gauss-Seidel iterations to achieve convergence of the coupled solver, which defeats the purpose in my mind.

Alternatively, it inverts the blocks of the first cell (3x3), and then inverts the blocks of its neighboring cells (3x3nb). I suppose then it could multiply the first row of the matrix by the RHS (b) to get the values of U, V, and P in the first cell? Is that right (is that what the Schur complement approach is)? It could then discard the matrix inverse values and move on to the next cell.


Once again, thanks again for all your help!

#15 - May 26, 2020, 13:46
FMDenaro (Filippo Maria Denaro), Senior Member
I haven't read all the comments and answers carefully, but the last few seem to suggest that one of the reasons for confusion is the difference between the matrix form of the governing equations and the matrix of the resulting algebraic system. Isn't that so?

#16 - May 26, 2020, 14:59
arjun, Senior Member
Quote:
Originally Posted by arkie87 View Post
The last thing I am confused about is how the coupled solver loops/works. It seems like you are saying it inverts only the blocks, i.e. the 3x3 matrices, and then moves around to each cell. Thus, it requires storage of the full 3Nx3N matrix, which runs into the problem I mentioned earlier. The other thing I am confused about is how this achieves the full coupling. It seems to couple U, V, and P together well, but to decouple them from their neighboring cells. So it seems like there would need to be lots of Gauss-Seidel iterations to achieve convergence of the coupled solver, which defeats the purpose in my mind.

On top of that, Gauss-Seidel does not really work in a pressure-based coupled solver. This is one of the many issues with pressure-based coupled solvers.

A coupled solver is nice on paper, but in cases where the grid is not so good you might find it less efficient than a well-implemented SIMPLE. If you have a coupled solver at hand, try comparing it with Fluent's SIMPLE, which is an example of a good SIMPLE implementation.

Fluent has also had a pressure-based coupled solver (since 2007), and there is a reason why it is not their main solver.

#17 - May 26, 2020, 15:06
arkie87 (Raphael), Member
Quote:
Originally Posted by arjun View Post
On top of that Gauss Seidel does not really work in pressure based coupled solver. This is one of the many issues with pressure based coupled solver.
So you seem to be indicating that the coupled solver in Fluent solves U, V, and P at a given cell in a coupled manner, but decouples (or solves explicitly) the U, V, and P of the neighboring cells?

To me, that seems like it should completely negate the benefit of the coupled solver, due to Gauss-Seidel iterations being slow... though I guess Fluent just uses AMG rather than Gauss-Seidel. I'm not sure how one implements AMG or GMG with a blocked matrix, though.

#18 - May 26, 2020, 15:12
arjun, Senior Member
Quote:
Originally Posted by arkie87 View Post
So you seem to be indicating that the coupled solver in Fluent solves U, V, and P at a given cell in a coupled manner, but decouples (or solves explicitly) the U, V, and P of the neighboring cells?

To me, that seems like it should completely negate the benefit of the coupled solver, due to Gauss-Seidel iterations being slow... though I guess Fluent just uses AMG rather than Gauss-Seidel. I'm not sure how one implements AMG or GMG with a blocked matrix, though.



Fluent uses AMG and needs some sort of under-relaxation (or diagonal dominance) to make the AMG work. It has been shown that Gauss-Seidel is not a good smoother for such problems; a Braess-Sarazin smoother is preferred.


http://perso.unifr.ch/ales.janka/papers/emg_slides.pdf

#19 - May 26, 2020, 16:19
sbaffini (Paolo Lampitella), Senior Member
Quote:
Originally Posted by arkie87 View Post
That makes sense, since even if the nxn block starts off sparse, when you invert it, it will be full. So when you invert each block of the NxN block matrix, it will be full (and 3Nx3N overall). But to me, that sounds like a severe drawback of this approach and would limit the use of coupled solvers in 3D, no? Or is the matrix only ever inverted at the block level, and never at the full level?
No, forget about inverting the whole matrix at the full level, which is feasible only in MATLAB and only for very few cells. Real CFD codes use some form of iterative algebraic solver for the full system (either the n segregated systems or the single coupled one). In such solvers you typically need to divide the residual, or the rhs, by the diagonal coefficient aii (think of Gauss-Seidel). What happens in a coupled solver is that the diagonal coefficient is no longer a single scalar but the nxn matrix Aii, so instead of dividing the rhs by aii you multiply it by the inverse of Aii; that is where the inversion comes in for coupled solvers. But the Aii blocks (and the Aij as well, for that matter) are already full, and so are their inverses. So, to summarize: you only invert the diagonal nxn blocks, you do it once per outer iteration, and you store the inverse in place, so the overall overhead is relatively limited.
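
In MATLAB-ish form, one sweep of such a block iterative solver might look like the sketch below (made-up coefficients on a 1D chain of nodes, purely to show the structure; this is not anyone's production code):

Code:
% Sketch: block Gauss-Seidel on a 1D chain of N nodes, n unknowns each.
% The A_ii inverses are computed once and stored; the coefficients are
% made up and block-diagonally dominant so the sweep converges.
N = 20;  n = 3;
Aii = rand(n,n,N) + 10*repmat(eye(n), 1, 1, N);  % diagonal blocks
Alo = rand(n,n,N);  Aup = rand(n,n,N);           % neighbor blocks
b = rand(n,N);  x = zeros(n,N);
AiiInv = zeros(n,n,N);
for i = 1:N, AiiInv(:,:,i) = inv(Aii(:,:,i)); end  % once per outer iteration
for sweep = 1:50
    for i = 1:N
        r = b(:,i);                                 % assemble local rhs
        if i > 1, r = r - Alo(:,:,i)*x(:,i-1); end  % off-diagonal block
        if i < N, r = r - Aup(:,:,i)*x(:,i+1); end  % products
        x(:,i) = AiiInv(:,:,i)*r;  % multiply by stored inverse of A_ii
    end
end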

Quote:
Originally Posted by arkie87 View Post
I am a bit new to cache efficiency, but I think that makes sense. I am not sure I see why you have to retrieve the variables n times for the segregated solver, though; perhaps that is because I just vectorize my code in MATLAB and get the coefficients all at once, rather than on demand.
Imagine a segregated solver that solves for u, then v, then p. I will make the example for finite volumes because it is more natural to me, but the same applies to other methods. In solving for u, you loop over all the faces of the grid, retrieve the variables on the left and right of each face, and compute the fluxes, which are then assigned to the left and right residuals, or something along these lines. Then you loop over the cells (as opposed to the faces) to compute additional source terms, etc.
Now, no matter how well ordered the faces and cells are, there is always some cache issue in retrieving variables from cells and putting back other values. That is, at some point (unless the problem is very small) you will need a cell value that is out of cache, during each of those loops. Solving a segregated problem implies that those problems are encountered n times. Also, you end up retrieving and computing the same variables multiple times (imagine the diffusion coefficients, for example).

With a coupled solver you do only one face loop and one cell loop, retrieve all the variables only once, compute what you need once, and put the results back once. All the variables of a cell, in my ordering, are stored together and don't incur cache misses when retrieved together. Admittedly, this effectively reduces the number of cells that fit in cache (because there are more variables per cell), but the cache misses are reduced.
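
In MATLAB terms (column-major, so each column is contiguous in memory; all names are illustrative), the layout difference is simply:

Code:
% Sketch of the two storage layouts (MATLAB is column-major, so each
% column is contiguous in memory; names are illustrative).
N = 1000;  n = 3;
% Segregated-style: one array per field; the unknowns of node i live in
% three different arrays, far apart in memory.
p = zeros(N,1);  u = zeros(N,1);  v = zeros(N,1);
% Coupled-style: all unknowns of node i are the contiguous column w(:,i),
% retrieved together in a single pass over the nodes.
w = zeros(n, N);
wi = w(:, 17);   % [p; u; v] at node 17, one contiguous read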

Quote:
Originally Posted by arkie87 View Post
Yeah, I see what you are saying: it is relatively trivial to add in the other terms. Though I will say that Ref 2 that Arjun sent has a_uv = a_vu = 0, so they effectively drop the terms we are discussing. Based on that, it seems like Ref 2 and my method are equivalent, but the matrices are set up and ordered differently.
Yes, it seems so, but the u-v coupling in the paper would be in the term mf, which appears in afuu and afvv (see eq. 23). They decided to skip it, but including it would have added no cost. Note that with Rhie-Chow, mf also contains the pressure gradient (see equations 24 and 25 in the paper).

Quote:
Originally Posted by arkie87 View Post
Ok, good to know. It seems like the ordering process is a bit more difficult on a staggered grid, no? Since the number of pressure unknowns differs from the number of U and V unknowns, the pattern will break near the boundary. Is that a problem? Perhaps that is a reason to consider collocated variables, but then I would have to implement Rhie-Chow...
Yes, I guess that coupling equations in a staggered framework comes with its own problems.

Quote:
Originally Posted by arkie87 View Post
The last thing I am confused about is how the coupled solver loops/works. It seems like you are saying it inverts only the blocks, i.e. the 3x3 matrices, and then moves around to each cell. Thus, it requires storage of the full 3Nx3N matrix, which runs into the problem I mentioned earlier. The other thing I am confused about is how this achieves the full coupling. It seems to couple U, V, and P together well, but to decouple them from their neighboring cells. So it seems like there would need to be lots of Gauss-Seidel iterations to achieve convergence of the coupled solver, which defeats the purpose in my mind.

Alternatively, it inverts the blocks of the first cell (3x3), and then inverts the blocks of its neighboring cells (3x3nb). I suppose then it could multiply the first row of the matrix by the RHS (b) to get the values of U, V, and P in the first cell? Is that right (is that what the Schur complement approach is)? It could then discard the matrix inverse values and move on to the next cell.


Once again, thanks again for all your help!
As said above, inversion is only involved for the diagonal blocks. Coupling with neighbor cells is the same as for segregated solvers.

#20 - May 26, 2020, 16:32
sbaffini (Paolo Lampitella), Senior Member
Quote:
Originally Posted by arjun View Post
On top of that, Gauss-Seidel does not really work in a pressure-based coupled solver. This is one of the many issues with pressure-based coupled solvers.
In contrast, simple Symmetric Gauss-Seidel (just plain SGS, not as a smoother or preconditioner) works ridiculously well in the coupled density-based solver.

I once found myself at a dinner with a group of prominent linear algebra professors, except that I didn't know it (I knew they were somehow involved in math/engineering, but didn't know exactly in which area), and then I suddenly dropped this bomb (using plain SGS in a production code that solves large systems of equations)... they were all ... I was all "I know, but it just works"... it was fun.


Tags
coupled solver, linearize, segregated solver





