https://openfoamwiki.net/inde...oam-extend-4.1

Code:

dnf install -y python3-pip m4 flex bison git git-core mercurial cmake cmake-gui \
    openmpi openmpi-devel metis metis-devel metis64 metis64-devel \
    llvm llvm-devel zlib zlib-devel ....

Code:

{
  echo 'export PATH=/usr/local/cuda/bin:$PATH'
  echo 'module load mpi/openmpi-x86_64'
} >> ~/.bashrc

Code:

cd ~
mkdir foam && cd foam
git clone https://git.code.sf.net/p/foam-extend/foam-extend-4.1 foam-extend-4.1

Code:

{
  echo '#source ~/foam/foam-extend-4.1/etc/bashrc'
  echo "alias fe41='source ~/foam/foam-extend-4.1/etc/bashrc'"
} >> ~/.bashrc

Code:

pip install --user PyFoam

Code:

cd ~/foam/foam-extend-4.1/etc/
cp prefs.sh-EXAMPLE prefs.sh

Edit prefs.sh to point at the system tools (e.g., bison at /usr/bin/bison):

Code:

# Specify system openmpi
# ~~~~~~~~~~~~~~~~~~~~~~
export WM_MPLIB=SYSTEMOPENMPI

# System installed CMake
export CMAKE_SYSTEM=1
export CMAKE_DIR=/usr/bin/cmake

# System installed Python
export PYTHON_SYSTEM=1
export PYTHON_DIR=/usr/bin/python

# System installed PyFoam
export PYFOAM_SYSTEM=1

# System installed ParaView
export PARAVIEW_SYSTEM=1
export PARAVIEW_DIR=/usr/bin/paraview

# System installed bison
export BISON_SYSTEM=1
export BISON_DIR=/usr/bin/bison

# System installed flex. FLEX_DIR should point to the directory where
# $FLEX_DIR/bin/flex is located
export FLEX_SYSTEM=1
export FLEX_DIR=/usr/bin/flex
#export FLEX_DIR=/usr

# System installed m4
export M4_SYSTEM=1
export M4_DIR=/usr/bin/m4

Code:

foam
./Allwmake.firstInstall -j

In conclusion:

Look at your mesh and don't use relativeSizes for layer addition.

I, Dr. Prabhakar Bhandari, am looking for collaborative research in the field of microchannel heat sinks. The work is entirely based on numerical simulation. Anyone interested can email me at prabhakar.bhandari40@gmail.com

Quote:

Hi,
In icoFoam's code, we have: Code:

fvScalarMatrix pEqn
(
    fvm::laplacian(rAU, p) == fvc::div(phiHbyA)
);

I deduced this equation myself. If it is wrong, please correct me.

The argument of fvc::div(phiHbyA) is declared as a surfaceScalarField:

Code:

const surfaceScalarField& phiHbyA,

That gives a hint that fvc::div() must have an overload that takes a surfaceScalarField and returns a volScalarField, summing the surface fluxes of each cell and dividing by the cell's volume; that is, it finishes the job of computing the divergence of a volume vector field as the total surface flux of the cell divided by the cell volume.

The openFoam.com code browser indeed points to https://www.openfoam.com/documentati...ce.html#l00161

Code:

namespace fvc
{

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

template<class Type>
tmp<GeometricField<Type, fvPatchField, volMesh>>
div
(
    const GeometricField<Type, fvsPatchField, surfaceMesh>& ssf
)
{
    return tmp<GeometricField<Type, fvPatchField, volMesh>>
    (
        new GeometricField<Type, fvPatchField, volMesh>
        (
            "div("+ssf.name()+')',
            fvc::surfaceIntegrate(ssf)
        )
    );
}

Code:

namespace Foam
{

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

namespace fvc
{

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

template<class Type>
void surfaceIntegrate
(
    Field<Type>& ivf,
    const GeometricField<Type, fvsPatchField, surfaceMesh>& ssf
)
{
    const fvMesh& mesh = ssf.mesh();

    const labelUList& owner = mesh.owner();
    const labelUList& neighbour = mesh.neighbour();

    const Field<Type>& issf = ssf;

    forAll(owner, facei)
    {
        ivf[owner[facei]] += issf[facei];
        ivf[neighbour[facei]] -= issf[facei];
    }

    forAll(mesh.boundary(), patchi)
    {
        const labelUList& pFaceCells =
            mesh.boundary()[patchi].faceCells();

        const fvsPatchField<Type>& pssf = ssf.boundaryField()[patchi];

        forAll(mesh.boundary()[patchi], facei)
        {
            ivf[pFaceCells[facei]] += pssf[facei];
        }
    }

    ivf /= mesh.Vsc();
}
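To make the owner/neighbour summation pattern concrete, here is a plain-Python toy (my own illustration, not OpenFOAM code) on a 1D mesh of 3 unit cells with a linear velocity field, for which the divergence is 1 everywhere:

```python
# Toy illustration of fvc::surfaceIntegrate's owner/neighbour summation
# on a 1D mesh of 3 unit cells (cell centres at x = 0.5, 1.5, 2.5).
# Internal faces sit at x = 1, 2; boundary faces at x = 0, 3.
# Face flux phi = u * A with u(x) = x and unit face areas, so div(u) = 1
# in every cell.

owner     = [0, 1]        # owner cell of each internal face
neighbour = [1, 2]        # neighbour cell of each internal face
phi       = [1.0, 2.0]    # internal face fluxes, oriented owner -> neighbour

# Boundary faces: (cell, outward flux)
boundary  = [(0, -0.0),   # x = 0, outward normal -x, flux = -u(0)
             (2,  3.0)]   # x = 3, outward normal +x, flux = +u(3)

volumes   = [1.0, 1.0, 1.0]

div = [0.0, 0.0, 0.0]
for f, (o, n) in enumerate(zip(owner, neighbour)):
    div[o] += phi[f]      # flux leaves the owner cell
    div[n] -= phi[f]      # ... and enters the neighbour cell
for c, bphi in boundary:
    div[c] += bphi        # boundary fluxes always leave the domain

div = [d / v for d, v in zip(div, volumes)]
print(div)                # [1.0, 1.0, 1.0]
```

The two loops mirror the internal-face and boundary-patch loops of the real code, just without the mesh classes.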

Regrettably, MPI and parallel distributed computing is one of those areas where textbooks and online examples (even SO) are largely useless, as most (if not all) of them simply go into the details of how to use this or that feature. Most examples are so toy-level that you even find cases where the presented code only works for a fixed number of processes (aaargh!).

Every MPI use case is very specific, but I want to present here two examples that are so simple and stupid that it is a shame they are not present in every single book or tutorial. Yet, they both have very practical use cases.

The first MPI piece I want to present is, superficially, related to the necessity of performing a deadlock-avoiding loop among all processes. This problem, however, really is a twofold one. On one side, the main issue is: how do I automatically schedule the communications between processes so that, for any pair of processes i and j, if the communication partner of process i is process j, then the communication partner of process j is process i? With a proper communication schedule in place, avoiding deadlock, which is the second part of the problem, is then definitely a triviality (also present in most MPI examples). It turns out that such a communication schedule is a triviality as well, to the point of being a one-liner, so it really is a shame that it is not presented anywhere (to the best of my knowledge). So, here is a pseudocode example of a loop where each process communicates with every other process (including itself) without deadlock (full Fortran example here):

Code:

nproc = mpi_comm_size   ! How many processes
myid  = mpi_comm_rank   ! My rank among processes

! Loop over all stages
for i = 1 : nproc
   ! My communication partner at the i-th stage
   myp = modulo(i - myid - 1, nproc)
   if (myid > myp) then
      ! The process with higher rank sends and then receives
      mpi_send
      mpi_recv
   elseif (myid < myp) then
      ! The process with lower rank receives and then sends
      mpi_recv
      mpi_send
   else
      ! This is me, no send or recv actually needed
   endif
endfor

Now, the pseudo-code above (and the one in the linked gist as well) is just an example, and you should not,
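That said, the schedule itself can be sanity-checked without any MPI at all; the following plain-Python sketch (my own, not part of the linked gist) verifies that the pairing is symmetric at every stage and that every pair of ranks (including self-pairs) meets exactly once:

```python
# Verify the one-liner schedule: at stage i, the partner of rank r is
# (i - r - 1) mod nproc. Symmetry means partner(partner(r)) == r, so the
# send/recv calls of the two partners can always be matched up.
def partner(stage, rank, nproc):
    return (stage - rank - 1) % nproc

for nproc in (1, 2, 3, 4, 7, 8):
    met = set()
    for stage in range(1, nproc + 1):
        for r in range(nproc):
            p = partner(stage, r, nproc)
            # Symmetry: my partner's partner at this stage is me
            assert partner(stage, p, nproc) == r
            met.add(frozenset((r, p)))
    # Over the nproc stages, every unordered pair (including self-pairs)
    # occurs exactly once
    assert met == {frozenset((a, b)) for a in range(nproc) for b in range(nproc)}
print("schedule OK")
```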

The second easy MPI piece is related to a very simple case: how would you write your own reduce operation without using mpi_reduce but just mpi_send/recv? While this might just look like a textbook exercise, it is indeed relevant for those cases where the needed reduce operation is not among the intrinsic ones of MPI (SUM, MAX, etc.). I first had a need for it when working on parallel statistics (e.g., how to compute the spatial statistics of a field variable in a finite volume code without using allreduce, which costs just like an mpi_barrier?).
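As a concrete instance of such a non-intrinsic reduce operation, here is a hypothetical pairwise combiner for partial statistics (count, mean, sum of squared deviations), of the kind one would plug into mpi_op_create or a hand-written reduce; the merge formulas are the standard ones for combining partial means and variances:

```python
# Merge two partial statistics (n, mean, M2), where M2 is the sum of
# squared deviations from the mean; variance = M2 / n. This operation is
# a perfectly good reduce operator but is not an MPI intrinsic.
def combine(a, b):
    na, ma, M2a = a
    nb, mb, M2b = b
    n = na + nb
    delta = mb - ma
    mean = ma + delta * nb / n
    M2 = M2a + M2b + delta * delta * na * nb / n
    return (n, mean, M2)

def stats(xs):
    m = sum(xs) / len(xs)
    return (len(xs), m, sum((x - m) ** 2 for x in xs))

# Split a sample among 4 "ranks", reduce the partials, compare with serial
data = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0]
parts = [stats(data[i::4]) for i in range(4)]
total = parts[0]
for p in parts[1:]:
    total = combine(total, p)

n, mean, M2 = total
assert n == len(data) and abs(mean - sum(data) / n) < 1e-12
print(mean, M2 / n)
```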

For the most complex cases, MPI provides both an mpi_type_create_struct routine to create a generic user-defined data type and mpi_op_create routine to define a custom reduction operation for said data type (see, for example, here, here and here). Unfortunately, they can't actually cover all use cases, or at least not so straightforwardly.

However, if there are no specific needs in terms of the associativity of the reduce operator, it turns out that you can write your own reduce algorithm with no more than 25 lines of code (full Fortran example here). The algorithm has 3 very simple steps:

- Determine pp2, the largest power of 2 integer smaller than or equal to the current number of processes
- If the current number of processes is above pp2, ranks beyond pp2 just send their data to ranks that are lower by exactly pp2 (e.g., for 7 processes, zero indexed, ranks from 4 to 6 will send, respectively, to ranks from 0 to 2), which will perform the reduce operation
- Log2(pp2) iterations are performed where, at each iteration, the higher half of ranks up to a given power of 2, ppd (=pp2 at the beginning of the first iteration), send their data to the lower half (shifting by ppd/2), which will then reduce it. At the end of each iteration ppd is divided by 2.
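The three steps can be simulated serially in a few lines (my own sketch, with plain addition standing in for the custom operator and list slots standing in for the per-rank buffers):

```python
# Simulate the power-of-2 tree reduce for an arbitrary number of "ranks".
# data[r] is rank r's local value; "sending" is just reading another slot.
def tree_reduce(data, op):
    vals = list(data)
    nproc = len(vals)
    pp2 = 1
    while pp2 * 2 <= nproc:          # largest power of 2 <= nproc
        pp2 *= 2
    # Step 2: ranks >= pp2 send to rank - pp2, which reduces
    for r in range(pp2, nproc):
        vals[r - pp2] = op(vals[r - pp2], vals[r])
    # Step 3: log2(pp2) halving iterations; at each one the upper half of
    # the active ranks sends to the lower half (shift by ppd/2)
    ppd = pp2
    while ppd > 1:
        half = ppd // 2
        for r in range(half, ppd):
            vals[r - half] = op(vals[r - half], vals[r])
        ppd = half
    return vals[0]                   # result lives on rank 0

for n in range(1, 20):
    assert tree_reduce(range(n), lambda a, b: a + b) == sum(range(n))
print("reduce OK")
```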

The final reduce result will then be available to the process with rank 0 (which could simply send it to the required root, if different). Note that the algorithm still needs nproc-1 messages to be sent/received but, differently from the naive case where a root process directly receives all the messages one after the other, here the messages of each stage are always between different pairs of processes. Of course, by using such a simple approach you abandon any possible optimizations on the MPI side (which, however, might not be available at all for the user-defined data type and operator), but there is a very large gain in flexibility and simplicity.

If there are associativity needs, you can instead follow this Fortran example, where the MPICH binomial tree algorithm is used. The algorithm (which is shorter and considerably more elegant, yet a bit dense in its logic) performs the reduction by following the rank order of the processes, so it is very easy to remap the ranks in order to follow a more specific order.

In all such cases, the formulas presented before are still valid but need a slight rearrangement in order to fit the new conditions. Nothing of this is really new: the concept is as old as the book of Patankar (probably older) and this is just one of the latest additions. Also, major commercial CFD codes have been offering this for decades. Still, I am not aware of full formulas available in the more general case. Everything I write here for the temperature straightforwardly applies to other scalars with similar equations. Of course, as per the original wall function ODE, the assumption is that of steady state. That is, the solved problem might or might not be steady, but the boundary conditions are, in fact, derived by solving a steady state problem (this was, as a matter of fact, true also for the wall function ODE, even if the unsteady term was considered among the non equilibrium ones).

We start by noting that the wall heat flux formula provided here can actually be rewritten as follows (also taking into account that the original derivation used a wrong sign for ease of exposition):

Where and it is recognized that, being independent, at the first order, from the flow conditions and being an integral that grows with wall distance, the non equilibrium terms are, indeed, just an explicit source term for the near wall cell. In practice, the source term is also assimilable to a non orthogonal correction thus, in the following, we will simply consider the point , with temperature and distance from the wall to be along the normal to the wall. A similar reasoning can also be done for the viscous dissipation that, as presented here, is independent from the temperature distribution and can be absorbed by the same source term as well.

We want to extend the formula above to the case where neither nor are actually given. Also, we assume that and , that is, they are only known as the values at the extremes of an n-layer thin wall. So we have , , ..., and , and the same for . These are the temperatures and fluxes at the interfaces between the n layers of the thin wall. Each one of these n layers will have its own thickness , thermal conductivity and possibly a source term . Finally, we want to consider two possible boundary conditions, either directly given or given as:

with being non-linearly dependent from and where we assumed that the are positive if entering the domain.
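Before formalizing the general solution, it may help to recall the source-free limit, where the flux is conserved and the layer temperature jumps simply add up as series thermal resistances; a quick numerical illustration with made-up layer data:

```python
# Steady 1D conduction through n layers with no internal sources:
# the flux q is conserved and the total temperature jump is
# q * sum(d_i / k_i), i.e., the series of the layer thermal resistances.
thickness    = [0.01, 0.002, 0.05]   # layer thicknesses d_i [m] (made up)
conductivity = [1.0, 0.2, 0.04]      # conductivities k_i [W/m/K] (made up)

T_hot, T_cold = 320.0, 280.0         # temperatures at the two extremes [K]

R = sum(d / k for d, k in zip(thickness, conductivity))  # total resistance
q = (T_hot - T_cold) / R             # conserved heat flux [W/m^2]

# Recover the interface temperatures layer by layer and check we land
# exactly on T_cold: the per-layer jumps are q * d_i / k_i.
T = [T_hot]
for d, k in zip(thickness, conductivity):
    T.append(T[-1] - q * d / k)

assert abs(T[-1] - T_cold) < 1e-9
print(q, T)
```

The source terms and the general boundary conditions below generalize exactly this picture.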

In order to formalize a solution, we need to complement the fundamental conservation statement that holds for a layer in the thin wall with a similar relation for the temperature jump across the layer, . Considering that, in our model, the heat conduction equation that holds in each layer has the form:

solving it with proper boundary conditions one easily obtains that:

The two jump relations for single layers above can then be used to obtain jump relations across the whole thin wall of n layers. For the flux it is just, again, a simple conservation statement:

For the temperatures one obtains:

Assigning:

where the latter 3 can all be pre-computed and stored, then it is a matter of simple manipulation (yet quite a long one, which is omitted here) to show that our initial formula can now be generally expressed as follows:

where and depend on the specific boundary condition in use. More specifically, for directly assigned, one has and . For the general convection/radiation boundary condition (or just one of the two, obtained by zeroing the coefficient of the other) one has:

Finally, the coupled case is simply obtained by using , and where the subscript refers to the quantities taken from the coupled side.

The corresponding value of is instead given by:

The last step missing is the determination of , which depends on . A relation for the latter can be obtained, but it will itself, of course, depend on . This relation (whose derivation is again simple but long, so it is omitted here) can then be used in tandem with the one for iteratively:

I have found that, starting from , no more than 10 iterations are necessary to converge on and .

One thing which is worth highlighting here in the more general context of wall functions is that, as a matter of fact, non equilibrium/viscous dissipation terms (from both sides of the thin wall, if a coupled bc is used) do not directly enter the modifications presented above. That is, they still appear as in the original wall function formulation. This might very easily go unnoticed if a thermal wall function is just presented as a single relation for without distinguishing the roles of the terms.

Another thing worth noting is that, if one decided to solve the original ODE as it was (say, with a tridiagonal algorithm as in one of the scripts provided here), instead of directly integrating it as done here, it would have been impossible to let the exact form above emerge, with 3 major consequences: 1) the inability to separate the non-equilibrium/viscous dissipation part from the rest (that is, in the ODE solution one only has the wall values, not their dependence), 2) either the need to solve the 1D problem also in the thin wall and/or coupled side, or the need to iterate the solution on both sides exchanging wall values at each iteration, and 3) as a consequence of 1 and 2, the impossibility to set up the problem for an implicit implementation in the coupled case, which might have major consequences on convergence in certain cases.

Finally, I want to mention that all the above developments are also relevant for the Musker-modified Spalart-Allmaras model for which, given the proper conditions (equilibrium for the velocity, steady state in general), they represent a full analytical solution, which is thus now extended to the present more general boundary conditions.

with its obvious extension to the velocity case. In order to go back to the framework presented here one should notice that:

from which, it follows that:

which is all that is needed to compute (numerically if not doable analytically) the remaining integrals for non equilibrium and/or TKE production terms.

For the non equilibrium terms this leads to the following:

Hence, integration by parts finally leads to:

Formally, this is the generalized trick that I used here to extend the Reichardt wall law to constant-only non equilibrium cases (i.e., i=0).

A first integration leads to:

where is the integral of and has the same form we assumed for , just with its coefficients divided by . Then a further integration leads to:

The first integral in the equation above has been the subject of the previous posts, while the neglected viscous dissipation term is the last term in the equation. Now, there are two ways which could be used to proceed. A first one involves invoking the velocity equation and transforming the viscous dissipation term as follows:

where has the same form as , but with the velocity coefficients. At this point, then, one can note that the integral has the exact same form as the first one, but with the additional velocity in it. I have not attempted to solve it with any of the models presented but, even if doable, I expect it to be quite cumbersome.

One could approximate the integral by assuming a constant, representative, value for , say, and take it out of the integral. In this case, while one should still be careful in mixing the velocity numerator with the temperature denominator in the integrand, the resulting integral would be formally identical to the first one, which we have solved in previous posts.

A second, preferred way to proceed would instead first recognize that the integral above can be transformed as:

Hence integration by parts leads to the following result:

At this point we are still left with an even more cumbersome integral to evaluate, if possible at all. However, this integral is exactly 0 in two special cases: fully laminar ones () and when . Another advantage of this form is that, if we now assume a constant, representative value for , say , and take it out of the integral, we are left with no more work to do and just obtain:

which also always works in the cases where the integral is exactly 0. Of course, we are now left with determining , but this seems a more reasonable task, also because it must be representative just of the region where the derivative in the integrand is not 0, and not of the full y length. A value for commonly used with standard wall functions is the velocity at . By extension, for the Musker-Monkewitz wall function one could use the velocity at , the profile constant modified by the ratio (see here). In more general cases, it seems that a good generalization for is the velocity at the point where , which is where the term under the derivative reaches half its maximum excursion.

However, this whole presentation is an approximate solution to the original problem for the viscous dissipation, which is why I left it out of the general discussion of the previous posts.

The first group of scripts is actually made of functions, that you are not supposed to directly call or modify:

- **muskersp.m**: returns , and as shown here. It only works for N up to 0 (constant non equilibrium terms). **EDIT: there is an apparently innocuous mistake in the limiting behavior of s, as it should not use the factor prprt for the thermal case. It seems innocuous as it only affects the first order term, which probably is already negligible, and only for the temperature case (so in the scripts it is only used for plotting purposes, and probably never at those low ).**
- **standardsp.m**: returns the same quantities but for the standard wall function presented here. It works for arbitrary N, but the scripts below only test it up to 0 (constant non equilibrium terms)
- **iteryv.m**: returns the value of needed by standardsp.m for
- **numericalsp.m**: a generic wall function that takes as input a function handle for the turbulent viscosity ratio and provides the required integrals by numerical integration. It works for arbitrary N (but, again, it is only tested for N up to 0). It only works for turbulent viscosities going to 0 at least as fast as .
- **standard.m**: the actual standard wall function adapted to the present framework for comparisons. Only works for equilibrium cases (N=-1)
- **spiter.m**: the function that performs the iterations on the wall functions to find , following what is presented here
- **sa1d.m**: solves the original ODE of the problem (actually a slightly more general one) using a second order cell centered finite volume discretization, the Spalart-Allmaras turbulence model (actually a generalized version that also works with the Musker profile) and the tridiagonal solver

The second group of scripts is the one actually making the tests and the comparisons:

- **check.m**: for given flow conditions, it tests that a) the analytical Musker profile in muskersp is consistent with the one obtained through numerical integration with numericalsp, b) the numerical Musker profile (and, by point a above, also the analytical one) is consistent with the Spalart-Allmaras direct solution of the ODE with a properly modified fv1 function, and c) the numericalsp routine (already tested in a and b above) is consistent with the non-modified Spalart-Allmaras solution of the ODE when the relative turbulent viscosity profile is used. Note that the velocity wall function can only match the Spalart-Allmaras solution in the equilibrium case (dpdx=0), but in the same case the temperature equation can have non equilibrium terms and the solutions will match.
- **wfiter.m**: for a given wall function (user selectable) and flow conditions, plots the iterations to find as produced by spiter.m
- **wfcompare.m**: a straightforward comparison of multiple wall functions for given flow conditions
- **wfmap.m**: for the Musker profile (but easily adaptable to other profiles), it maps the solution of the iterative procedure to find in a certain range of and parameters.
- **apr.m**: a comparison of the variations of the constant in the Musker profile and in the standard profile as a function of , giving a sense of how the variation of the Musker constant with is, indeed, a sort of Jayatilleke term.

A few comments on the use of the scripts above and on some claims made in previous posts.

In the second post of this series I mentioned that, when using the Spalart-Allmaras model, one could avoid iterations to find by using . This is confirmed in check.m where the Spalart-Allmaras boundary conditions for the solution of the 1D ODE are obtained from the relation above using as input the value obtained by the underlying wall function.

In the same post it is also mentioned that one could precompute the wall function for a given range of parameters and later interpolate from it. Such precomputation is what the script wfmap.m does, and you can also see from the commented parts how it could be adapted to directly use the 1D ODE solution instead of the musker profile actually used by the script.

Still in the same post, I mentioned how, for non equilibrium cases, the iterative computation of might become cumbersome as multiple solution branches might exist for non favourable pressure gradients. This can be easily visualized by using wfiter.m.

It has been mentioned here that in the standard profile can be obtained by a few iterations of Halley's method. The specific implementation is in iteryv.m.

In the last post I mentioned that the van Driest damped mixing length is, indeed, mostly equivalent to the Musker profile for the incompressible formulation used here. This is clearly shown in wfcompare.m. While I don't directly provide the 1D ODE solution with the mixing length (just its wall function form through numericalsp), I do provide it for two different turbulent viscosity formulations through the Spalart-Allmaras model in check.m, and show that they correspond to the analogous formulations used as wall functions.

All the scripts are in a zipped folder for convenience (as a maximum of 5 files can be uploaded here) and have been tested with MATLAB R2022a.

where is the von Karman constant and is a constant that specifies the for which but, in practical terms has the same role of in the standard wall function of the previous post.

There are really several reasons for which this profile is relevant. First of all, it is a continuous, all-y+ profile which has the correct limiting near wall behavior as well as the correct logarithmic behavior. Second, as already mentioned here, it has a nice closed form solution. The third reason, already stated here but worth restating, is that it allows computing the temperature integrals:

from the same formula for the velocity integrals:

by properly redefining the two profile constants and , so it works for arbitrary ratios (or, seen differently, the constant redefinition of the model is, in its own right, a model like the Jayatilleke term for the standard wall function). Finally, the model has a strong similarity with the behavior that the Spalart-Allmaras turbulent viscosity assumes in equilibrium conditions ( in the SA model):

Which means that, for a properly coded Spalart-Allmaras model, changing a constant and an exponent is all that is required to let it have a closed form analytical solution for arbitrary ratios which is also, obviously, its own wall function.

Note that the original Spalart-Allmaras turbulent viscosity can be analytically integrated as well. But, differently from the Musker one, the solution is only known in terms of numerical constants, which means that, for each given ratio, the numerical constants must be recomputed; this is not very useful.

Back to the present Musker-Monkewitz solution, after substitution of the turbulent viscosity profile in the integrals above, and reminding that we only need the one for the velocity (from which temperature and scalars can be obtained), one obtains:

for the velocity and:

for the average turbulent kinetic energy production, with . Now, it is easy to obtain closed form solutions to the integrals above for each value of i, but a closed form solution that works for arbitrary i seems out of reach. In the final scripts I will provide a solution just for the constant non equilibrium term (thus for i=-1 and i=0), but here I just want to highlight that, heuristically, solutions to the integrals above seem to have the following forms ():

with:

where the and are constants which are only function of . Thus, notably, one only has to compute a given integral with a symbolic toolbox and pick up the constants expressions, as the functions are fixed (and 3 of 4 common between the two integrals).

The last thing which is worth mentioning about the Musker-Monkewitz wall function is that, as will be shown running the scripts provided in the last post of this series, it very closely follows a turbulent viscosity prescription which is very popular in the LES community that directly solves the original ODE with a tridiagonal solver (like this), the van Driest damped mixing length:

Thus, the take-home message here is that, unless you are adding compressibility effects, you are wasting resources, because the Musker-Monkewitz profile is, for all practical purposes, identical. Large differences only arise at very large ratios, say, 100 or more.

Finally, following the general discussion here, it is worth mentioning that the same feature that makes the Musker-Monkewitz profile adaptable to any ratio also makes it easily adaptable to any relevant turbulence model (yet, of course, not exactly). What is needed is just a different law to modify the profile constant, one that can be derived by adapting the Musker profile to the given model profile instead of the one embedded in the model. Such adaptation, which should be done in equilibrium conditions, can be very easily performed as a non-linear least-squares optimization.

where is the von Karman constant and is, for the moment, an unspecified positive parameter. One can then show that the following results:

where , , and:

The general problem with the standard formulation is that, while it is reasonable to pick up an value for the velocity case, even a second one for the TKE production (rigorous doesn't mean stupid, so if different values work better for and , why not?), it is not reasonable to manually pick a value in for each value of the ratio .

As it turns out, however, the above formulation requires modifying only for , but just works in all the other cases. This statement just accounts for the fact that for the turbulent viscosity ratio becomes less and less important, while it becomes more and more important in every detail when multiplied by the ratio .

The case is correctly accounted for by the present formulation because it has not neglected the 1 in the denominator of the integrand function in , and because the solution is not arbitrarily expressed in terms of the velocity one with all the logarithmic constants lumped into a single one (typically E). The fact that the formulation still doesn't work for is just a statement of the fact that, in this case, the turbulent viscosity ratio becomes important for the temperature distribution before it does for the velocity one, so in must be reduced. Let's call this new value (yet, this is only needed for ).

As the Jayatilleke P term embeds this exact same concept, it is easy to show that it can be used by computing from the following non-linear, implicit equation (a similar one must be solved also for the classical standard wall function):

where is the original value used in the velocity profile and P is the mentioned Jayatilleke term (but any similar correlation can substitute it in the equation above, say the Spalding one). I have found that, initializing as , 3 iterations of Halley's method are sufficient to compute close to machine precision for any practical value of .
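To illustrate the kind of iteration involved, here is Halley's method applied to a stand-in implicit equation (x = exp(-x), not the actual wall-function one); three iterations indeed reach machine precision:

```python
# Halley's method: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'').
# Model implicit equation x = exp(-x), i.e. f(x) = x - exp(-x) = 0,
# standing in for the non-linear equation for the modified constant.
import math

def halley(f, df, d2f, x, iters=3):
    for _ in range(iters):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        x -= 2.0 * fx * dfx / (2.0 * dfx * dfx - fx * d2fx)
    return x

f   = lambda x: x - math.exp(-x)
df  = lambda x: 1.0 + math.exp(-x)
d2f = lambda x: -math.exp(-x)

root = halley(f, df, d2f, 1.0)
assert abs(f(root)) < 1e-14   # 3 iterations are already at machine precision
print(root)                   # ~0.5671 (the omega constant)
```

The cubic convergence is what makes 3 iterations enough from a reasonable initialization.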

with analogous steps also required for the temperature and scalars. If one has a means to univocally/externally determine , then velocity, temperature and scalars are all equivalent and the relation above (properly adjusted for temperature and scalars) can be used in one shot to determine the wall flux (or the wall value if the flux is known). For (turbulent kinetic energy) based turbulence models one can, for example, assume:

For the Spalart-Allmaras model instead, one can assume (it will be shown in a following post how this is, indeed, relevant):

with the von Karman constant used in the model. These relations are strictly valid in equilibrium cases ( for all i) but are usually used in non equilibrium cases as well (there is a second advantage, besides not requiring iterations, that will be clear in a moment). If such variables/definitions are not available (say, in LES) one typically defines:

where:

thus one ends up with an implicit non-linear relation for that needs iterations to be resolved. Before going to the algorithmic part of these iterations, however, it is worth mentioning that the above choice for is largely inconvenient for non equilibrium flows, as the zero flux outcome is only possible when , instead of, generally, when the numerator becomes 0 (which is the other advantage of the formulations based on turbulence variables). Following the relevant literature on the topic, we thus redefine the as follows*:

where we have introduced the nondimensional parameters:

As both appear in the definition, it seems natural to transform the equation as well, by multiplying it with the factor . The result is:

where we have introduced the additional nondimensional number:

and highlighted the general dependence of the RHS of the equation from itself through . Despite certainly having an extra factor for all the terms (which could then be simplified to make all the numbers involved smaller), the advantage of this form is that, being nondimensional, it can be solved once and for all for all the required nondimensional values and used at runtime as an interpolation table. Indeed, at this stage, it should be remembered that the procedure is still applicable to generic functions, and their integration might be costly as well. However, even in this case one would need to solve the equation above at least once for each set of relevant parameters. And having a nondimensional equation helps in developing an iterative procedure.

I have found that with the constant non equilibrium term only, for different parameterizations, the above equation can be easily solved with a Newton-Raphson method with no more than 10 iterations (starting always from the same laminar solution, which is typically very wrong for high Re):

provided that is set to 0 whenever it is equal to or greater than 1. This specific modification, which reverts the method to fixed point iterations, is needed not only to avoid a possible division by 0, but also because the fixed point iterations will push the solution out of a problematic branch of the function.
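The safeguard can be sketched generically (on a toy equation x = cos(x), not the actual nondimensional wall-function relation): whenever the derivative term would make the Newton update unsafe, it is zeroed, which silently degrades the step to a plain fixed-point iteration:

```python
# Newton-Raphson on x = g(x), i.e. f(x) = x - g(x):
#   x <- x - (x - g(x)) / (1 - g'(x))
# with the safeguard described in the text: if g'(x) >= 1 the derivative
# is set to 0, turning the update into a fixed-point step x <- g(x).
# g here is a toy stand-in, not the wall-function RHS.
import math

def solve(g, dg, x0, iters=10):
    x = x0
    for _ in range(iters):
        d = dg(x)
        if d >= 1.0:          # safeguard: revert to fixed-point iteration
            d = 0.0
        x = x - (x - g(x)) / (1.0 - d)
    return x

g  = lambda x: math.cos(x)    # x = cos(x) has a single root, ~0.7391
dg = lambda x: -math.sin(x)

x = solve(g, dg, 0.0)
assert abs(x - math.cos(x)) < 1e-12
print(x)
```

On well-behaved branches the safeguard never triggers and plain Newton convergence is retained.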

I provide, again without proof, the expression for the derivative needed in the method:

where the functions have been defined in the first post of this series.

In equilibrium and favourable pressure gradient cases the above equation really has no surprises. However, for adverse pressure gradients the matter is more complicated and up to 3 solutions are typically possible. The procedure above has been found to always pick up the non separated solution closest to the equilibrium one as long as possible, while the separated solution is taken only when it is the only solution to the equation. The algorithm might still get trapped in a positive solution branch when the actual solution is negative, but this seems largely inconsequential in the overall picture (considering also the nature of the solution, which is an approximate companion problem for a bc) and only affects the specific point where the switch between solution branches actually happens. Of course, this reasoning is based on a certain class of common turbulent viscosity profiles; changing them to properly adapt to the non equilibrium case is actually the key to properly solving the problem.

In the next two posts in the series I will provide two closed-form analytical solutions for the integrals and . The first will be the equivalent of the standard wall function, but for arbitrary N. The main advantage of deriving it in the present, more rigorous framework is for the thermal case, where the resulting function just works for , while for the case it involves the Jayatilleke function only for the determination of the switching position between the linear and logarithmic parts and not for its actual value (where, however, it is still implicitly present).

The second solution will be a reworking of the already presented Musker-Monkewitz solution. I will only provide a template of the solution here, but that template has in practice been shown to be easily extendable to arbitrary N. In particular, aside from a polynomial term, the same solution template works for arbitrary N, the difference being only in the function coefficients. I recall here that the Musker-Monkewitz solution is an all-y+ solution that, by a math trick, can be used for arbitrary ratios (see, for example, here); the underlying turbulent viscosity has the correct behavior near the wall and the correct one in the logarithmic region. Also, it can be made the exact near-wall behavior of the Spalart-Allmaras model by a trivial modification of any correct implementation of the model (more on this in a later post).

* Note that the definition used above has a very specific context, which is the evaluation of the integrals involving the turbulent viscosity ratio, because it is the variable used to parameterize it. In practice, it helps regularize the implicit function for . However, the true , dependent only on , remains the only relevant one for any other purpose (including postprocessing). ]]>

This all started with the aim to solve the following problem:

with boundary conditions and . Here, (constant-pressure specific heat), (dynamic viscosity), (Prandtl number) and (turbulent Prandtl number) are all constant, and is a user-specified turbulent viscosity profile defined more precisely later. My previous attempts used a constant as well; here the more general case:

is considered. That this equation correctly represents a large set of velocity/temperature viscous boundary conditions should be self-evident (you can read the previous posts on the matter for explanations). Omitting derivations, and assuming also , the formal solution to the problem above can be written as:

where (note the sign opposite to the classical thermal convention, but coherent with the velocity one, for ease of exposition):

.

It also immediately follows that:

The functions appearing above are defined as follows:

All the expressions above hold true for the velocity case as well, provided that , and are all set to 1 (in this case, for clarity, we denote the functions above as ). The average turbulent kinetic energy production can instead be shown to be:

where:

and the functions appearing above are defined as follows:

It is worth mentioning that, at this point, the formulas above are still exact for the initial problem statement, and all the details of the specific solution method have been moved into the integrals and . This, while kind of obvious, is still remarkable, as it suggests a very specific implementation for wall functions and wall bcs in general, the formulas above being straightforward generalizations of the classical laminar formulas. Note also that we have introduced an , but it just appears as an extreme of the integrals and as their denominator (raised to a certain power). In practice, it is the variable that appears in the formula, but nothing more needed to be specified about it in order to obtain the formulas above. In general, it will have a formula like , but it could well just depend on the turbulent model variables.

Leaving aside, for the moment, the solution of the integrals above (we assume the availability of a routine that gives their value for given ), the general solution procedure would be as follows:

- Solve the equation above for (iteratively, if depends on it) using . Note that the equation for is obtained from the one for with , and set to 1 and the functions in place of the .
- Determine or , depending on the available thermal bc, from the equations above using and the now certainly available (either iteratively from the first step or just from the turbulent variables).
- Repeat the previous point for any scalar (with set to 1 and the Schmidt numbers in place of the Prandtl numbers, also within the s integrals)
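The procedure above can be sketched in code. Everything below is illustrative and assumed, not from the post: the routine names are mine, and the placeholder profile u+ = y+ (the laminar limit, where the turbulent viscosity vanishes) stands in for the actual wall-function integrals so the example is self-contained. Only step 1, the implicit solution for the friction velocity, is shown.

```python
import math

def uplus(yplus):
    """Placeholder wall-function profile u+ = f(y+).

    Here the laminar limit u+ = y+ is used; a real implementation would
    evaluate the integrals discussed in the post for the chosen
    turbulent viscosity profile.
    """
    return yplus

def solve_utau(u_p, y_p, nu, tol=1e-12, max_iter=200):
    """Step 1: find u_tau from the implicit relation
       u_p = u_tau * uplus(y_p * u_tau / nu)
    by damped fixed-point iteration (the post instead uses a
    safeguarded Newton-Raphson method, detailed in its second part)."""
    u_tau = math.sqrt(nu * u_p / y_p)  # laminar guess (exact here)
    for _ in range(max_iter):
        u_tau_raw = u_p / uplus(y_p * u_tau / nu)
        u_tau_new = 0.5 * (u_tau + u_tau_raw)  # damping for robustness
        if abs(u_tau_new - u_tau) < tol:
            break
        u_tau = u_tau_new
    return u_tau

# With the laminar placeholder the result must reduce to the classical
# laminar wall shear, u_tau = sqrt(nu * u_p / y_p):
u_tau = solve_utau(u_p=1.0, y_p=1e-3, nu=1e-5)
```

Steps 2 and 3 would then reuse the converged u_tau together with the thermal (or scalar) versions of the same integral routines.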

In practice, as will be clear in a moment, I suggest having routines that actually return and , as it is their ratio that needs to be computed. Also, for reasons that will become clear in the second post of this series, I anticipate here that, for the iterative solution of in step 1 above, it is useful to have the same routine for also return:

which is the only wall-function-specific term entering the derivative for the iterative procedure. I conclude this first post by stating, without proof, that for going to 0 as with , all the above terms can be safely computed also for , and their exact limits are:

The second post in the series will formalize the iterative procedure for computing . ]]>

Preamble:

(If this is your first time, you may like to read it; otherwise you can skip it.)

Being involved in CFD engineering and development/programming, I decided to give some of what I have learned back to the community by writing this blog for new, budding CFD developers and engineers, so that newcomers don't have to start from scratch. In this blog series I shall link my files on GitHub with the explanations given here. I have worked with many CFD and numerical modeling software packages; however, I intend to discuss only a few of them, linked to the various stories I experienced throughout academia and industry. These packages are OpenFOAM, ANSYS Fluent, ANSYS CFX, STAR-CCM+, COMSOL and MATLAB. The topics discussed in this blog are chosen at random and are independent of difficulty level. Many of you may consider this first blog entry ‘‘easy-peasy child’s play’’, while others may stumble upon it thinking ‘‘I’m not able to understand this gobbledygook at all’’.

In my academic and industrial CFD career, I have learned that humans can solve PDEs in various different ways, such as reducing them to ODEs and then applying boundary conditions, and so on and so forth: essentially a ‘think and solve’ mechanism of the human brain. This computers cannot do; they can only ‘solve’ in the way humans have told them to. Thus, computer solvers work through numerical modeling equations in a particular fashion, which humans could also do by hand, but probably not beyond 100 × 100 matrices, I guess.

Although, as Machine Learning/Artificial Intelligence grows over time and everyone wants to supersede the previously developed algorithms, one day a computer may be able to think about and solve PDEs just like humans, and solve the Navier-Stokes equations with no approximations involved by direct numerical simulation. After that I don’t know what humans shall do: indulge in artistic activities or leave the planet, I guess.

CFD Blog (1): OpenFOAM small bits: How to compile a new solver using the 'wmake' utility.

The procedure described here is a generalized one, regardless of which solver a person is trying to build with the 'wmake' utility. Thus, this blog entry does not walk through creating a new solver out of a previously existing solver library; rather, it tells you ‘what to do’ and ‘what not to do’ while building a solver with 'wmake'.

1. A) If you plan to create a solver based on a previously existing one, first run 'echo $FOAM_SOLVERS' to check the path where the solvers are installed on your Ubuntu system. Then run 'cd $FOAM_SOLVERS' to enter the directory, and copy whichever original solver you plan to use as the base into the '$FOAM_RUN' directory, e.g. ‘cp -r name_of_solver $FOAM_RUN/solvers/new_named_solver’ (replace name_of_solver with the solver you want to use).

B) If the new solver is planned from scratch, just enter the $FOAM_RUN directory with ‘cd $FOAM_RUN’ and start typing your code.

2. Now that a solver has been created with modified conditions, or a novel solver has been created from scratch, we shall check whether the solver build parameters are correctly defined. First enter the Make directory of the newly modified solver. Inside you shall see two files: one titled ‘files’ (which essentially tracks the source file to compile and the executable path) and another titled ‘options’ (which essentially lists the libraries and include paths needed for a complete, error-free build). Open the file named ‘files’ in your favorite text editor (I usually use Sublime Text 3), change every occurrence of ‘solver_name.C’ (the solver name can be anything, such as ‘icoFoam.C’ or ‘interFoam.C’) to ‘new_solver_name.C’ (e.g. ‘new_icoFoam.C’ or ‘new_interFoam.C’), and then change $(FOAM_APPBIN) to $(FOAM_USER_APPBIN) in that same file. Also make sure the file name mentioned there matches the actual file in the directory; in other words, the file containing the solver code should also be named ‘new_interFoam.C’.
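As a concrete illustration of step 2 (the solver name ‘new_icoFoam’ is only an example), after the edits the two files under Make/ might read as follows. Make/files:

```
new_icoFoam.C

EXE = $(FOAM_USER_APPBIN)/new_icoFoam
```

And Make/options, here shown for an icoFoam-like solver; the exact include and library lists depend on the base solver and the OpenFOAM version, so copy them from the original solver rather than from this sketch:

```
EXE_INC = \
    -I$(LIB_SRC)/finiteVolume/lnInclude

EXE_LIBS = \
    -lfiniteVolume
```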

3. Now make sure that the solver is clean, i.e. it does not contain previously built files. For that, run the command ‘wclean’ in the top directory of the solver; there is no need to use sudo (in fact, sudo does not recognize this command).

4. In older OpenFOAM solver compilation instructions it is sometimes suggested to copy the wmake folder from the OpenFOAM directory and paste it into the solver directory. You can do that as well, but the disadvantage is that you won’t be able to call the newly created solver from anywhere in the system. For example, if the command ‘which new_icoFoam’ is given in any terminal window, the Ubuntu OS will not be able to return a path, and if you type ‘new_’ in a terminal and press Tab, the terminal will not offer any matching commands. However, the advantage of copying wmake into the solver directory and then building is that the solver becomes portable: if a co-worker wants to run a case you have created, you only need to provide the built executable together with the case study in a zip folder, and the co-worker can run the case by invoking the solver executable.

If you have created the solver as portable, make sure the built executable is also in the top directory of the case; then run it as ‘./your_solver_name’ (that is, ‘./new_interFoam’ or ‘./new_icoFoam’).

5. If the solver directory does not contain a wmake folder copied from the main ‘/opt/openfoam-x/’ directory, compilation will directly use the default wmake folder and its utilities. In this case you get a solver that can be called from any terminal window in Ubuntu OS; the case directory does not need to contain the built executable, and from inside the case directory all you have to do is type ‘new_solver_name’ (that is, ‘new_interFoam’ or ‘new_icoFoam’), and it will start the calculation.

Thus, this blog entry ends by giving small titbits to beginners and experienced users alike. Goodbye and see you next time. ]]>

Quote:

Well, I found it out. Just for everyone who finds this later:
1. Make the expression you want (e.g. volumeInt(vapour.Volume Fraction)@Default Domain)
2. Make a chart and tick the transient box => location: Expression
3. Take the x-axis as time and the y-axis as the variable from your expression
4. Click apply and see your expression-over-time chart |