CFD Online Discussion Forums - Main CFD Forum
On the Roe approximate Riemann solver in the preconditioned density based method (https://www.cfd-online.com/Forums/main/236872-roe-approximate-riemann-solver-preconditioned-density-based-method.html)

sbaffini June 20, 2021 10:32

On the Roe approximate Riemann solver in the preconditioned density based method
 
Dear all,

the Roe approximate Riemann solver is typically written for a general equation in the following form:

\frac{\partial \mathbf{Q}}{\partial t} + \frac{\partial \mathbf{F}\left(\mathbf{Q}\right)}{\partial x} = 0

which is then linearized around a state \widetilde{\mathbf{Q}} to produce:


\frac{\partial \mathbf{Q}}{\partial t} + \mathbf{A}\left(\widetilde{\mathbf{Q}}\right)\frac{\partial \mathbf{Q}}{\partial x} = 0

where:

\mathbf{A} = \frac{\partial \mathbf{F}}{\partial \mathbf{Q}}

is the Jacobian of the flux \mathbf{F} with respect to the conserved variables \mathbf{Q}. Besides some obvious properties that the state \widetilde{\mathbf{Q}} has to satisfy when used to evaluate the Jacobian, there is also the so-called conservation property:

\mathbf{A}\left(\widetilde{\mathbf{Q}}\right) \left(\mathbf{Q}_R-\mathbf{Q}_L\right) = \mathbf{F}_R-\mathbf{F}_L.

It turns out that, for the Euler equations written in conservative variables with the ideal gas EOS, the state \widetilde{\mathbf{Q}}\left(\mathbf{Q}_L,\mathbf{Q}_R\right) can be evaluated with the so-called Roe averages:

\widetilde{\phi} = \frac{\sqrt{\rho_L} \phi_L + \sqrt{\rho_R} \phi_R}{\sqrt{\rho_L}+\sqrt{\rho_R}}

for the three velocity components and the enthalpy, and with:

\widetilde{\rho} = \sqrt{\rho_L \rho_R}

for the density. These are all the quantities needed in the Jacobian, including the speed of sound. So far so good: one can easily follow Roe's derivation of such a state under the given assumptions, and no specific problem arises.
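
Just to fix ideas, here is a minimal numpy sketch (my own naming and arbitrary test states) that builds the Jacobian at the Roe-averaged state and checks the conservation property numerically for the 1D ideal gas case:

Code:

import numpy as np

GAMMA = 1.4

def euler_flux(Q):
    # 1D Euler flux in conservative variables Q = [rho, rho*u, rho*E], ideal gas
    rho, m, E = Q
    u = m / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return np.array([m, m * u + p, u * (E + p)])

def roe_jacobian(QL, QR):
    # Exact Jacobian dF/dQ evaluated at the Roe-averaged state
    rhoL, rhoR = QL[0], QR[0]
    uL, uR = QL[1] / rhoL, QR[1] / rhoR
    pL = (GAMMA - 1.0) * (QL[2] - 0.5 * rhoL * uL * uL)
    pR = (GAMMA - 1.0) * (QR[2] - 0.5 * rhoR * uR * uR)
    HL, HR = (QL[2] + pL) / rhoL, (QR[2] + pR) / rhoR
    wL, wR = np.sqrt(rhoL), np.sqrt(rhoR)
    u = (wL * uL + wR * uR) / (wL + wR)   # Roe-averaged velocity
    H = (wL * HL + wR * HR) / (wL + wR)   # Roe-averaged total enthalpy
    g = GAMMA - 1.0
    return np.array([
        [0.0,                         1.0,               0.0      ],
        [0.5 * (GAMMA - 3.0) * u * u, (3.0 - GAMMA) * u, g        ],
        [u * (0.5 * g * u * u - H),   H - g * u * u,     GAMMA * u],
    ])

QL = np.array([1.0,   0.75, 2.78125])   # rho, rho*u, rho*E (arbitrary test states)
QR = np.array([0.125, 0.0,  0.25   ])
lhs = roe_jacobian(QL, QR) @ (QR - QL)
rhs = euler_flux(QR) - euler_flux(QL)
print(np.allclose(lhs, rhs))            # True: A(Q_tilde)(Q_R - Q_L) = F_R - F_L

The check passes to machine precision, as expected.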

For non-ideal gases the matter is more complicated because, roughly speaking, there are more unknowns, coming from the partial derivatives in the EOS (which, in general, can no longer be expressed in terms of the remaining variables). Yet, the literature on possible solutions seems abundant. Again, so far so good.

My question here, however, concerns the situation where the above approach is applied to a preconditioned density-based solver, more specifically in the context of a cell-centered FV solver, say, like the following ones:

Merkle, Sullivan, Buelow, Venkateswaran: Computation of Flows with Arbitrary Equations of State, AIAA Journal Vol. 36, No. 4, April 1998

Weiss, Maruszewski, Smith: Implicit Solution of Preconditioned Navier–Stokes Equations Using Algebraic Multigrid, AIAA Journal Vol. 37, No. 1, January 1999

In this case, one actually ends up having to solve the following problem instead:

\mathbf{\Gamma}\frac{\partial \mathbf{q}}{\partial t} + \mathbf{\Gamma}\left[\mathbf{\Gamma}^{-1} \left(\mathbf{AM}\right)\right]\frac{\partial \mathbf{q}}{\partial x} = 0

where \mathbf{q}=\left[p, u, v, w, T\right]^T are the primitive variables the method solves for, \mathbf{Q}=\left[\rho, \rho u, \rho v, \rho w, \rho E\right]^T are the conserved variables and

\mathbf{AM} = \frac{\partial \mathbf{F}}{\partial \mathbf{Q}} \frac{\partial \mathbf{Q}}{\partial \mathbf{q}}

is the Jacobian with respect to the primitive variables. The method also requires the following quantities from the EOS: \rho\left(p,T\right), H\left(p,T\right), \rho_p\left(p,T\right), \rho_T\left(p,T\right), H_p\left(p,T\right) and H_T\left(p,T\right).
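
Just to fix the notation I use below, this is how I read the \mathbf{M} matrix in this kind of formulation (a sketch of my own reading of the above papers; here H is the total enthalpy, H_p and H_T are the static enthalpy derivatives returned by the EOS, and the ordering follows my \mathbf{q} above):

Code:

import numpy as np

def transform_matrix(rho, u, v, w, H, rho_p, rho_T, H_p, H_T):
    # M = dQ/dq with Q = [rho, rho*u, rho*v, rho*w, rho*E] and q = [p, u, v, w, T].
    # Built from rho*E = rho*H - p, so d(rho*E)/dp = rho_p*H + rho*H_p - 1, etc.
    # The preconditioner Gamma is the same matrix with rho_p replaced by the
    # modified pressure derivative discussed just below.
    return np.array([
        [rho_p,                    0.0,   0.0,   0.0,   rho_T            ],
        [rho_p*u,                  rho,   0.0,   0.0,   rho_T*u          ],
        [rho_p*v,                  0.0,   rho,   0.0,   rho_T*v          ],
        [rho_p*w,                  0.0,   0.0,   rho,   rho_T*w          ],
        [rho_p*H + rho*H_p - 1.0,  rho*u, rho*v, rho*w, rho_T*H + rho*H_T],
    ])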

The preconditioning comes from \mathbf{\Gamma}=\mathbf{M}^m, which is a modified version of \mathbf{M} where, in place of \rho_p, a modified derivative is used:

\left(\frac{\partial \rho}{\partial p}\right)' = \frac{1}{V_r^2}-\frac{\rho_T \left(1-\rho H_p\right)}{\rho H_T}

where a properly defined V_r plays the role of an artificial speed of sound that keeps the system hyperbolic and well conditioned at all speeds. Now, the question is: where do I even start in trying to satisfy the conservation property as per the original Roe scheme? In particular, how does the preconditioning affect it? Is \mathbf{A} still the relevant matrix? Or do I have to use \mathbf{AM}? (Not even considering the possibility of \mathbf{\Gamma}^{-1} \left(\mathbf{AM}\right).) Thus, for example, might it make sense to consider the conservation property in the following form?

\mathbf{AM}\left(\mathbf{q}_R-\mathbf{q}_L\right) = \mathbf{F}_R-\mathbf{F}_L.

Does this produce the same result as the original one? I think not.
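
To see why I doubt it, here is a crude numerical check (numpy, 1D ideal gas, \sqrt{\rho}-weighted averages of the primitives; states and names are just mine), which compares \mathbf{F}_R-\mathbf{F}_L with \mathbf{AM}\left(\mathbf{q}_R-\mathbf{q}_L\right), where \mathbf{AM} is obtained by finite-differencing the flux with respect to the primitives at the averaged state:

Code:

import numpy as np

R_GAS, GAMMA = 287.0, 1.4
CP = GAMMA * R_GAS / (GAMMA - 1.0)

def flux(q):
    # 1D Euler flux from primitives q = [p, u, T], ideal gas
    p, u, T = q
    rho = p / (R_GAS * T)
    H = CP * T + 0.5 * u * u
    return np.array([rho * u, rho * u * u + p, rho * u * H])

def jac_fd(q, eps=1e-6):
    # central-difference Jacobian dF/dq, i.e. a numerical stand-in for AM
    J = np.zeros((3, 3))
    for j in range(3):
        dq = np.zeros(3)
        dq[j] = eps * max(1.0, abs(q[j]))
        J[:, j] = (flux(q + dq) - flux(q - dq)) / (2.0 * dq[j])
    return J

qL = np.array([1.0e5,  50.0, 300.0])     # arbitrary left/right primitive states
qR = np.array([2.0e5, -20.0, 400.0])
rhoL, rhoR = qL[0] / (R_GAS * qL[2]), qR[0] / (R_GAS * qR[2])
wL, wR = np.sqrt(rhoL), np.sqrt(rhoR)
q_avg = (wL * qL + wR * qR) / (wL + wR)  # sqrt(rho)-weighted averaged primitives

print(flux(qR) - flux(qL))               # F_R - F_L
print(jac_fd(q_avg) @ (qR - qL))         # AM(q_avg) (q_R - q_L): generally different

At least with this naive averaging of the primitives the two do not coincide, contrary to the conservative-variable case above.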

Note that some codes, Fluent for example, simply use arithmetic averages of the R and L states and don't care at all about the Roe averages. Until today, I thought I was actually using the Roe averages, that is: density, enthalpy and all the primitive variables were computed a la Roe, and then those were used to compute the remaining derivatives. But then I realized that this might not be correct, not even for ideal gases (I already knew it isn't for a general EOS). For example, consider the ideal gas EOS p=\rho R T and the resulting \rho_p derivative:

\frac{\partial \rho}{\partial p} = \frac{1}{RT} = \frac{\rho}{p}

This identity also holds if arithmetic averages are used. Using the Roe averages for p and T, instead, it turns out that:

\frac{1}{R\widetilde{T}} \neq \frac{\widetilde{\rho}}{\widetilde{p}}

So, recovering the correct Roe average for the ideal gas case requires, at least, a careful implementation of the thermodynamic derivatives, which I had not done. That is, since the two versions differ, only one of them can reproduce the original Roe state.
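
A quick numerical check of the mismatch above (ideal gas, R = 287, arbitrary left/right states of my own choosing):

Code:

import math

R = 287.0
pL, TL = 1.0e5, 300.0                    # arbitrary left/right states
pR, TR = 4.0e5, 900.0
rhoL, rhoR = pL / (R * TL), pR / (R * TR)
wL, wR = math.sqrt(rhoL), math.sqrt(rhoR)
roe = lambda aL, aR: (wL * aL + wR * aR) / (wL + wR)
p_t, T_t = roe(pL, pR), roe(TL, TR)      # Roe-averaged p and T
rho_t = math.sqrt(rhoL * rhoR)           # Roe-averaged density
print(1.0 / (R * T_t), rho_t / p_t)      # the two values differ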

Note that none of this has ever affected any practical computation with the scheme, as far as I know. In particular, my implementation also switches to the HLLE+ scheme for the cases where an entropy fix is needed, so, in the end, it is not a true Roe scheme in any case. Still, my lack of understanding of the matter has now scared me enough to look for more answers.

Thanks to anyone who can point me to the relevant literature or provide their own interpretation of the matter. More specifically, how do I compute the Roe averages when primitive variables are used, and how do I compute the EOS derivatives so as to recover the original Roe ones in the ideal gas case? Are they even recoverable at all in my case?

FMDenaro June 20, 2021 12:25

Quote:

Originally Posted by sbaffini (Post 806471)
[...]

Paolo, this is not in my field of personal work experience, but I suppose you already checked LeVeque and Hirsch, right?


I can also suggest this review: https://www.sciencedirect.com/scienc...050?via%3Dihub


Maybe a check of the textbook "Chung T.J., Computational Fluid Dynamics (CUP, 2002)" could also be worthwhile.

sbaffini June 20, 2021 13:17

Quote:

Originally Posted by FMDenaro (Post 806473)
[...]

Unfortunately, most of the material I have ever found is not based on the primitive variables I am using and/or on preconditioning. Also, as I am not trained on this, I am stuck at the limit of my ability to come up with something on my own, which is not exactly aligned with my time frame.

The only papers I ever found on the subject are those I mentioned (and their direct descendants/parents from the same authors), and they either don't mention anything or, in the Fluent case, don't use the Roe averages. Every other paper (or book) I found just uncritically copies that material.

In the end, as I said, this is not, nor has ever been, a real issue per se, but I'm now starting to feel uncomfortable without certain pieces of information.

FMDenaro June 20, 2021 13:25

Quote:

Originally Posted by sbaffini (Post 806478)
[...]

Have you also checked the report "Building your own shock tube" by J. Naber?

The primitive variables are used there.

sbaffini June 29, 2021 06:57

Just an update.

In the end, I decided that the EOS is not, by itself, ambiguous on how certain properties should be calculated, no matter what.

In particular, on no occasion should I compute any thermodynamic quantity starting from anything other than the independent variables.

Thus, as my independent variables are p and T, for the ideal gas EOS it means that, for example:

\frac{\partial \rho}{\partial p} = \frac{1}{RT}

is the only correct way to compute the density derivative with respect to pressure; that is, density can't appear in it (i.e., I can't use \rho/p).

We'll see if this turns out to be problematic, but for the moment I feel it is the only reasonable approach. If something different is needed for the Roe scheme, then it must be handled outside the EOS routine, no matter how bad it looks. Still, with respect to a classical Roe scheme, this should only affect how the speed of sound is computed in the intermediate state.
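
For the ideal gas, this is what my EOS routine now boils down to (just a sketch, with a constant C_p assumed):

Code:

R_GAS = 287.0
GAMMA = 1.4
CP = GAMMA * R_GAS / (GAMMA - 1.0)

def ideal_gas_props(p, T):
    # Every quantity is built from the independent variables (p, T) only.
    rho   = p / (R_GAS * T)
    H     = CP * T                   # static enthalpy h(p, T)
    rho_p = 1.0 / (R_GAS * T)        # not rho/p: no dependent variable re-enters
    rho_T = -p / (R_GAS * T * T)     # equals -rho/T only because rho came from (p, T)
    H_p   = 0.0
    H_T   = CP
    return rho, H, rho_p, rho_T, H_p, H_T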

Eifoehn4 June 29, 2021 12:55

Dear Paolo,

the exact Riemann solver for the linearized hyperbolic system, more precisely the Roe solver for the Euler equations with a general EoS, is not straightforward. By a general EoS I refer to non-calorically-perfect EoS; this includes, e.g., ideal gases, real EoS, or multi-component systems.

The calorically perfect EoS case is a lucky accident: in general it is not possible to fulfill thermodynamic consistency and all three Roe properties, (i) hyperbolicity, (ii) consistency, (iii) conservation, simultaneously. That makes total sense since, as the name suggests, you are playing with mean values when using the Roe averages. Simply put, thermodynamic laws are generally not applicable to averages.

A necessary condition to be fulfilled is (see, e.g., Guardone and Quartapelle):

dp =\frac{\partial p \left( \widetilde{\mathbf{Q}} \right)}{\partial \widetilde{\mathbf{Q}}} \cdot d\mathbf{Q}

Summarizing: You have to define new reasonable conditions for the new unknowns, especially for the last Roe property.

Original Roe:
Roe

Approximation of original Roe:
Pike

First multi-component Roe:
Abgrall

First real EoS Roe:
Glaister

Ideas of Larrouturou for multi-component systems:
Larrouturou

Other generalizations of Roe method:
Liou
Cox
Toumi
Vinokur
Guardone
Mottura

Regards

FMDenaro June 29, 2021 13:07

Again, I have no direct experience with that issue, but the LES field of compressible flows has to deal with the local averaging of the EOS. Is that approach valid?

Eifoehn4 June 29, 2021 14:18

I am no expert in compressible LES, especially for non-calorically-perfect EoS. However, let me explain the problem you are facing with a general EoS. The one-dimensional Euler equations are given as

\begin{pmatrix}
           \rho \\
           \rho u \\
           \rho e 
         \end{pmatrix}_t +\begin{pmatrix}
           \rho u \\
           p + \rho u^2 \\
            [ \rho e +p ] u
         \end{pmatrix}_x = 0.

Moreover, we have to keep in mind that an arbitrary caloric EoS (also called incomplete EOS) is needed to close the system p=p(\rho,\epsilon). Now we define the closed system as:

\underbrace{\begin{pmatrix}
           \rho \\
           \rho u \\
           \rho e 
         \end{pmatrix}_t +\begin{pmatrix}
           \rho u \\
           p+  \rho u^2 \\
            [ \rho e +p ] u
         \end{pmatrix}_x = 0 , \quad p=p(\rho,\epsilon) }_{:=\text{PDE}}.

Considering a Riemann problem (RP) with arbitrary constant values left and right:

Situation A: A unique caloric perfect EoS, e.g. p=\left(\kappa - 1\right) \rho \epsilon

\text{PDE}  \underset{RP}{|} \text{PDE}

Situation B: Spatial varying \kappa, e.g. left p=\left(\kappa_1 - 1\right) \rho \epsilon, right p=\left(\kappa_2 - 1\right) \rho \epsilon

\text{PDE}_1  \underset{RP}{|} \text{PDE}_2

Since the EoS is part of your equation system, you now face the problem of solving an RP for two different closed PDEs. As a byproduct, you run into the problem of so-called spurious pressure and velocity oscillations.

Regards

FMDenaro June 29, 2021 14:36

Quote:

Originally Posted by Eifoehn4 (Post 807134)
[...]

I think the problem should be better clarified in terms of cell-averaged variables versus pointwise fluxes. You do not write the differential form for the pointwise variables but for the cell-averaged variables.

I have no great experience here, but I think the PDEs are the issue: you need to write them as non-closed PDEs.

sbaffini June 30, 2021 03:50

Quote:

Originally Posted by Eifoehn4 (Post 807129)
[...]

Thank you very much Eifoehn4, I was indeed also in need of some feedback on the reference literature.

So, if I understand correctly, even just pretending that a certain average reduces to the Roe one when applied to non-ideal gases is not, indeed, as easy as it may sound? Even if I don't care at all about what happens in the other non-ideal cases?

@Filippo, at present I only care about 1st order, with constant states in the left and right cells for a given face in 1D. Actually, if we look at this from the FD perspective, we don't even need the average concept, I think. I would just like to be sure that, in the most relevant compressible case I'm going to handle, I correctly upwind the correct Riemann invariants.

FMDenaro June 30, 2021 04:17

Quote:

Originally Posted by sbaffini (Post 807166)
[...]

Paolo, let us first think about the continuous set of equations, including the EOS. You know that the concept of the Riemann problem makes sense in a rather different way: the differential form of the equations (which you would use in the FD manner) governs the variables that are averaged, see Eq. (4.9) in the textbook of LeVeque.

The fact that you use a first-order discretization for the flux reconstruction does not change the initial formulation of the conservation laws. I think one should first describe the mathematical problem in terms of cell-averaged values, pointwise fluxes, and pointwise and averaged EOS, and then introduce the discretization.


Is that perhaps the issue generating confusion?

sbaffini June 30, 2021 05:48

Just for your interest, this is what I have so far in my scheme for the dissipation part \mathbf{d}=\mathbf{\Gamma}\mathbf{R}|\mathbf{\Lambda}|\mathbf{R}^{-1}\left(\mathbf{q}_R-\mathbf{q}_L\right) (but note that I don't use it in this form; I still use matrix products to avoid losing control):

d_1 = \rho_T|V_n|a+\rho b
d_2 = u d_1 + \rho|V_n|\left(u_R-u_L\right) + \rho n_x c
d_3 = v d_1 + \rho|V_n|\left(v_R-v_L\right) + \rho n_y c
d_4 = w d_1 + \rho|V_n|\left(w_R-w_L\right) + \rho n_z c
d_5 = H d_1 + \rho|V_n|e+ \rho V_n c

with:

a = \left(T_R-T_L\right)-\frac{\delta}{\rho C_p} \left(P_R-P_L\right)
b = \frac{\theta_2}{2\rho s_2V_r^2}\left(P_R-P_L\right)+\frac{\theta_3}{2s_2V_r^2}\left[\mathbf{n} \cdot \left(\mathbf{V}_R-\mathbf{V}_L\right)\right]
c = \frac{\theta_1}{2 \rho s_2} \left(P_R-P_L\right) +\left(\frac{\theta_2}{2 s_2} -|V_n| \right) \left[\mathbf{n} \cdot \left(\mathbf{V}_R-\mathbf{V}_L\right)\right]
e = C_p a + \mathbf{V} \cdot \left(\mathbf{V}_R-\mathbf{V}_L\right)

and:
\theta_0 = |s_1+s_2|+|s_1-s_2| = 2\,\mathrm{MAX}\left(s_1,s_2\right)
\theta_1 = |s_1+s_2|-|s_1-s_2| = 2\,\mathrm{MIN}\left(s_1,s_2\right)
\frac{\theta_2}{2s_2} = \frac{\theta_1}{2s_2}\left(V_n-s_1\right)-\frac{\theta_0}{2}
\frac{\theta_3}{2s_2} = \frac{\theta_1}{2s_2}\left[\left(V_n-s_1\right)^2+s_2^2\right]-\left(V_n-s_1\right)\theta_0
s_1 = \left(1-\alpha\right)V_n
s_2 = \sqrt{\left(\alpha V_n \right)^2+V_r^2}
\alpha = \frac{1-\beta V_r^2}{2}

where \beta=\rho_p+\rho_T \delta /\left(\rho C_p\right) is the inverse of the speed of sound squared, \delta = 1 - \rho H_p=-\rho_T T/\rho, and anything without the R/L subscript is to be evaluated at the Roe average. One could actually simplify things further by noting that:

V_n-s_1 = \alpha V_n
\left(V_n-s_1\right)^2+s_2^2=2 \alpha^2 V_n^2 + V_r^2

but I've had difficulties proceeding further toward a version that could even remotely resemble the Roe one (not sure whether this is also due to the certainly different eigenvectors I'm using).
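
In case anyone wants to play with it, here is a direct transcription of the above formulas into Python (not my actual code, which keeps the matrix products; all symbols are as defined above, and the Roe-state quantities are assumed to be computed outside):

Code:

import numpy as np

def precond_roe_dissipation(qL, qR, roe, n, Vr):
    # Transcription of the formulas in this post (a sketch, not verified code).
    # qL, qR = [p, u, v, w, T]; n = unit face normal; Vr = reference velocity.
    # 'roe' carries the Roe-state quantities: rho, u, v, w, H, Cp, rho_T, delta, beta.
    pL, uL, vL, wL, TL = qL
    pR, uR, vR, wR, TR = qR
    nx, ny, nz = n
    rho, H, Cp = roe["rho"], roe["H"], roe["Cp"]
    rho_T, delta, beta = roe["rho_T"], roe["delta"], roe["beta"]
    u, v, w = roe["u"], roe["v"], roe["w"]

    Vn  = u*nx + v*ny + w*nz                          # Roe-state normal velocity
    dVn = (uR - uL)*nx + (vR - vL)*ny + (wR - wL)*nz
    dp, dT = pR - pL, TR - TL

    alpha = 0.5*(1.0 - beta*Vr**2)
    s1 = (1.0 - alpha)*Vn
    s2 = np.sqrt((alpha*Vn)**2 + Vr**2)
    th0 = abs(s1 + s2) + abs(s1 - s2)
    th1 = abs(s1 + s2) - abs(s1 - s2)
    th2_2s2 = th1/(2.0*s2)*(Vn - s1) - 0.5*th0        # theta_2/(2 s_2)
    th3_2s2 = th1/(2.0*s2)*((Vn - s1)**2 + s2**2) - (Vn - s1)*th0

    a = dT - delta/(rho*Cp)*dp
    b = th2_2s2/(rho*Vr**2)*dp + th3_2s2/(Vr**2)*dVn
    c = th1/(2.0*rho*s2)*dp + (th2_2s2 - abs(Vn))*dVn
    e = Cp*a + u*(uR - uL) + v*(vR - vL) + w*(wR - wL)

    d1 = rho_T*abs(Vn)*a + rho*b
    d2 = u*d1 + rho*abs(Vn)*(uR - uL) + rho*nx*c
    d3 = v*d1 + rho*abs(Vn)*(vR - vL) + rho*ny*c
    d4 = w*d1 + rho*abs(Vn)*(wR - wL) + rho*nz*c
    d5 = H*d1 + rho*abs(Vn)*e + rho*Vn*c
    return np.array([d1, d2, d3, d4, d5])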

sbaffini June 30, 2021 05:58

Quote:

Originally Posted by FMDenaro (Post 807169)
[...]

At the moment, everything generates confusion :confused:

Eifoehn4 June 30, 2021 06:58

Quote:

Originally Posted by sbaffini (Post 807179)
At the moment, everything generates confusion :confused:

FMDenaro is right in saying that the use of a first-order discretization does not change the initial formulation of the conservation laws, i.e. the Riemann problem itself. As long as your EoS preserves the convexity of your hyperbolic system, the Riemann problem is still unique and can be calculated more or less straightforwardly.

However, I can only speak from my own experience. The step towards non-caloric EoS is not as easy as it seems. You will encounter problems that you did not realize existed in the first place.

Eifoehn4 June 30, 2021 12:31

If you are more interested in the Riemann problem theory for real materials, I can suggest:

Menikoff and Plohr

One of my favorite papers so far.

Regards

sbaffini July 10, 2021 06:03

Another doubt I have regards the unsteady term for the preconditioned system when preconditioning is actually turned off.

As I wrote, for the preconditioned case one usually solves for:

\mathbf{\Gamma}\frac{\partial \mathbf{q}}{\partial \tau} + \mathbf{\Gamma}\left[\mathbf{\Gamma}^{-1} \left(\mathbf{AM}\right)\right]\frac{\partial \mathbf{q}}{\partial x} = 0

where the time really has to be interpreted as a pseudo-time and the equations are correct only at steady state. Now, with preconditioning "turned off", one would end up with:

\mathbf{M}\frac{\partial \mathbf{q}}{\partial t} + \mathbf{M}\left[\mathbf{M}^{-1} \left(\mathbf{AM}\right)\right]\frac{\partial \mathbf{q}}{\partial x} = 0

where

\mathbf{A} = \frac{\partial \mathbf{F}}{\partial \mathbf{Q}}

and

\mathbf{M} = \frac{\partial \mathbf{Q}}{\partial \mathbf{q}}

The question then is: from the continuous point of view, as we now use \mathbf{M} (instead of its preconditioning-altered version \mathbf{\Gamma}), is this time accurate? If so, imagine the following discretization of the unsteady term:

\frac{\partial \mathbf{q}}{\partial t} \biggr\rvert^{n+1} = \frac{a\mathbf{q}^{n+1}+b\mathbf{q}^{n}+c\mathbf{q}^{n-1}}{d}

If it is time accurate, does it mean that the following is correct?

\frac{\partial \mathbf{Q}}{\partial t} \biggr\rvert^{n+1} = \frac{a\mathbf{Q}^{n+1}+b\mathbf{Q}^{n}+c\mathbf{Q}^{n-1}}{d} = \left(\mathbf{M}\frac{\partial \mathbf{q}}{\partial t}\right) \biggr\rvert^{n+1} =\mathbf{M}^{n+1}\frac{a\mathbf{q}^{n+1}+b\mathbf{q}^{n}+c\mathbf{q}^{n-1}}{d}

I'm not sure that this actually holds at all.

EDIT: Just to give context, the relevance of this comes from the fact that, if all of this holds, and one uses primitive variables, then one can completely avoid conservative variables also for the unsteady cases.

EDIT2: Apparently, someone even wrote a paper on this kind of obvious possibility (https://research-information.bris.ac...ne.0195494.pdf). Yet, besides the journal quality, they seem too naive to be trusted.

EDIT3: While the above is certainly true at the continuous level, I think it is also certainly false that it holds discretely (just as for the Roe average) for anything nonlinear. Indeed, if one considers the NS equations with an ideal gas and the backward Euler discretization of the density time derivative, what one obtains is:

\rho^{n+1} - \rho^{n} = \frac{1}{RT^{n+1}} \left(p^{n+1}-p^n\right) -\frac{p^{n+1}}{R\left(T^{n+1}\right)^2}\left(T^{n+1}-T^n\right)

which is generally different from the actual difference in density as computed at the two times:

\rho^{n+1} - \rho^{n} = \frac{p^{n+1}}{RT^{n+1}}-\frac{p^n}{RT^n}

So the question is: is this at least second-order accurate in time? I think one can show that this is indeed the case for the ideal gas by noting that:

d\rho_{computed} = d\rho_{exact}\left(1-\frac{T^{n+1}-T^n}{T^{n+1}}\right) = d\rho_{exact} + O\left(d\rho_{exact}dT\right) = d\rho_{exact} + O\left(dt^2\right)

but I'm unsure how to prove this in general (nor have I actually tested all the remaining terms of the NSE).
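
A quick numerical check of the claimed order, with arbitrary smooth p(t) and T(t) histories of my own choosing:

Code:

import numpy as np

R = 287.0
rho = lambda p, T: p / (R * T)

# arbitrary smooth histories p(t), T(t), just for the order check
p_of = lambda t: 1.0e5 * (1.0 + 0.3 * np.sin(3.0 * t))
T_of = lambda t: 300.0 * (1.0 + 0.2 * np.cos(2.0 * t))

for dt in (1e-2, 1e-3, 1e-4):
    t0, t1 = 0.5, 0.5 + dt
    p0, p1, T0, T1 = p_of(t0), p_of(t1), T_of(t0), T_of(t1)
    d_exact = rho(p1, T1) - rho(p0, T0)
    # chain-rule update with the derivatives frozen at n+1, as in the formula above
    d_chain = (p1 - p0) / (R * T1) - p1 / (R * T1**2) * (T1 - T0)
    print(dt, abs(d_chain - d_exact) / dt)   # ratio shrinks like dt => O(dt^2) error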

EDIT 4: Apparently, for the given 1st order backward Euler case with the ideal gas EOS, all the terms are indeed exact within a 2nd order error term, except the kinetic energy contribution to the total energy, which also has a first order error of the form:

-\frac{\rho^{n+1}-\rho^n}{2}\left(u^nu^{n+1}+v^nv^{n+1}+w^nw^{n+1}\right)

AshwaniAssam July 14, 2021 04:00

The Roe average should (most likely!) not be used. A simple arithmetic average should be used, from our experience and also as per the Weiss, 1995 work, where it is clearly stated:


"Roe-averaged values are not used with preconditioning"


https://arc.aiaa.org/doi/abs/10.2514...rnalCode=aiaaj

sbaffini March 13, 2022 18:57

Quote:

Originally Posted by sbaffini (Post 807108)
[...]

Just an update... in the end this turned out to be problematic. In particular, I found that the temperature derivative, \rho_T, and only that one, needs to be computed as:

\frac{\partial \rho}{\partial T} = -\frac{\rho}{T}

using the Roe density... I still don't know why. So I had to use an ugly hack just for that:

\left(\frac{\partial \rho}{\partial T}\right)_{ROE} = \frac{\partial \rho}{\partial T}\left(p_{ROE},T_{ROE}\right)\frac{\rho_{ROE}}{\rho\left(p_{ROE},T_{ROE}\right)}

arjun March 14, 2022 03:09

Quote:

Originally Posted by sbaffini (Post 824043)
[...]

If you are talking about d rho / dP for the construction of the linear system to solve the pressure correction, then upwinded values of d rho / dP work just fine (I haven't looked into the code for 2 years, so I'm a bit vague here).

The same should be okay for drho/dT, I believe. For compressible flow I don't remember doing any hack and everything works just fine (other than the upwinded values when needed).

sbaffini March 14, 2022 03:37

Quote:

Originally Posted by arjun (Post 824052)
[...]

I am referring to the fact that, in my preconditioned Roe solver, I enter with input (pL,uL,vL,wL,TL) for the left cell and (pR,uR,vR,wR,TR) for the right cell, and I have to determine (p,u,v,w,T,rho,H,drho/dp,drho/dT,dH/dp,dH/dT) at the Roe state so that I can properly build the dissipation matrix. What I found is that the following recipe works:

1) rho, u, v, w, p, T and H are taken as classical Roe averages

2) All the derivatives, except drho/dT, are computed from primitive variables only (i.e., p and T) at the Roe state computed in 1

3) drho/dT is computed as in 2, but then needs to be multiplied by the ratio between rho computed in 1 and rho computed in 2

This, actually, is independent of the specific average used in 1), and also works if it is an arithmetic average. The very important step is 3), which means that drho/dT at the Roe state for an ideal gas must be computed as -rho_ROE/T_ROE.

None of this seems relevant if you work with conservative variables in ideal gas cases, where you can just use the straightforward Roe recipe.
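
Spelled out for the ideal gas case, the recipe above looks like this (just a sketch, with a constant cp assumed; whether H is the total or static enthalpy follows your own convention, here I average the total one):

Code:

import math

R_GAS = 287.0
GAMMA = 1.4
CP = GAMMA * R_GAS / (GAMMA - 1.0)

def roe_state_ideal_gas(pL, uL, vL, wL, TL, pR, uR, vR, wR, TR):
    # Steps 1-3 above, for the ideal gas (constant cp assumed).
    rhoL, rhoR = pL / (R_GAS * TL), pR / (R_GAS * TR)
    sL, sR = math.sqrt(rhoL), math.sqrt(rhoR)
    avg = lambda aL, aR: (sL * aL + sR * aR) / (sL + sR)

    # 1) Roe averages of rho, u, v, w, p, T, H (H taken as total enthalpy here)
    rho = math.sqrt(rhoL * rhoR)
    u, v, w = avg(uL, uR), avg(vL, vR), avg(wL, wR)
    p, T = avg(pL, pR), avg(TL, TR)
    HL = CP * TL + 0.5 * (uL * uL + vL * vL + wL * wL)
    HR = CP * TR + 0.5 * (uR * uR + vR * vR + wR * wR)
    H = avg(HL, HR)

    # 2) EOS derivatives from the averaged independent variables (p, T) only
    rho_eos = p / (R_GAS * T)        # what the EOS returns at (p_ROE, T_ROE)
    drho_dp = 1.0 / (R_GAS * T)
    drho_dT = -p / (R_GAS * T * T)
    dH_dp, dH_dT = 0.0, CP

    # 3) rescale drho/dT by rho_ROE / rho(p_ROE, T_ROE), i.e. use -rho_ROE/T_ROE
    drho_dT *= rho / rho_eos

    return rho, u, v, w, p, T, H, drho_dp, drho_dT, dH_dp, dH_dT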

Note that this recipe can solve, for example, the following shock tube case:

\rho_L = 1, u_L = 1, p_L = 10^{-6}; \rho_R = 1, u_R = -1, p_R = 10^{-6}

which I couldn't solve with any other combination (the only thing I haven't tried yet is using step 1 for all the variables). All of this might or might not have something to do with my specific eigenvectors, but I have no idea at the moment.

