
CFD Blog Feeds

Another Fine Mesh

► This Week in CFD
  23 Sep, 2022

Let’s get Navier-Stoked because it’s Friday and time to review recent news from the world of computational fluid dynamics. I want to draw your attention to a few specific articles beginning with the importance of stupidity. In case you couldn’t … Continue reading

The post This Week in CFD first appeared on Another Fine Mesh.

► Webinar: Making the Shipping Sector Greener by Leveraging CFD
  21 Sep, 2022

The times of drawing ships by hand are over as CAD vessel designs become more complex to reduce fuel emissions. Though maritime transport is one of the most energy-efficient modes of transportation, it is also a large and growing source … Continue reading

The post Webinar: Making the Shipping Sector Greener by Leveraging CFD first appeared on Another Fine Mesh.

► I’m Reza Djeddi and This Is How I Mesh
  19 Sep, 2022

Hi, I’m Reza Djeddi and I’m a Lead CFD Application Engineer at Cadence Design Systems. Some of my friends call me “The Jedi” because of how they think my last name is pronounced and the fact that I’m a big … Continue reading

The post I’m Reza Djeddi and This Is How I Mesh first appeared on Another Fine Mesh.

► This Week in CFD
  16 Sep, 2022

This week’s agglomeration of CFD flotsam and jetsam is chock full of wonderful images including an “image of the week” that qualifies for “image of the year.” There are several articles involving ML including one that’s cautionary. There are also … Continue reading

The post This Week in CFD first appeared on Another Fine Mesh.

► Webinar: Accelerated CFD Meshing with Fidelity Pointwise
  14 Sep, 2022

Through further enhancements to our Flashpoint goal-oriented and intelligent automatic surface meshing framework and a new automatic volume meshing mode, one can continue to build upon established best practices throughout all stages of the meshing process. In addition, we have … Continue reading

The post Webinar: Accelerated CFD Meshing with Fidelity Pointwise first appeared on Another Fine Mesh.

► Webinar: Predicting Aerodynamic Flow Around Automotive Vehicles
  13 Sep, 2022

Predicting aerodynamic flow physics around automotive vehicles is a complex endeavor, often requiring engineers to balance cost and accuracy. While steady-state approaches (such as RANS) are attractive for their low computational cost, they usually fail to predict all flow phenomena … Continue reading

The post Webinar: Predicting Aerodynamic Flow Around Automotive Vehicles first appeared on Another Fine Mesh.

F*** Yeah Fluid Dynamics

► “Haut”
  30 Sep, 2022

In Susi Sie’s “Haut” the camera seems to fly over ever-shifting landscapes. In reality, these are macro images, created (I think) by dyes and patterns atop a water bath. But they look like vistas we could find on Earth or Mars — giant dune fields, calving glaciers, and river-divided canyons. For something similar in color, check out Roman De Giuli’s “Geodaehan.” (Video credit: S. Sie)

► Optimizing Wind Farms Collectively
  29 Sep, 2022

In a typical wind farm, each wind turbine aligns itself to the local wind direction. In an ideal world where every turbine was completely independent, this would maximize the power produced. But with changing wind directions and many turbines, it’s inevitable that upstream wind turbines will interfere with the flow their downstream neighbors see.

So, instead, a research team investigated how to optimize the collective output of a wind farm. Their strategy involved intentionally misaligning the upstream wind turbines to improve conditions for downstream turbines. They found that the loss in power generation by upstream turbines could be more than recovered by improved performance downstream.

After testing their models over many months in an actual wind farm, they reported that their methodology could, on average, increase overall energy output by about 1.2 percent. That may sound small, but the team estimates that if existing wind farms used the method, it would generate additional power equivalent to the needs of 3 million U.S. households. (Image credit: N. Doherty; research credit: M. Howland et al.; via Boston Globe; submitted by Larry S.)

► Cloud Streets
  28 Sep, 2022

Parallel lines of cumulus clouds stream over the Labrador Sea in this satellite image. These cloud streets form when cold, dry winds blow across comparatively warm waters. As the air warms and moistens over the open water, it rises until it hits a temperature inversion, which forces it to roll to the side, forming parallel cylinders of rotating air. On the rising side of each cylinder, clouds form, while skies remain clear where the air is sinking. The result is these long, parallel cloud bands. (Image credit: J. Stevens; via NASA Earth Observatory)

► Predicting Alien Ice
  27 Sep, 2022

Europa is an ocean world trapped beneath an ice shell tens of kilometers thick. To better understand what we might find in those oceans, researchers turn to analogs here on Earth, looking at Antarctica’s ice shelves. Beneath those shelves, ice forms via two mechanisms: the first, congelation ice, freezes directly onto the existing ice-water interface. The second, frazil ice, forms crystals in supercooled water columns, which drift upward in buoyant currents and settle on the ice shelf like upside-down snow (pictured above).

Based on Europa’s conditions, the researchers conclude that congelation ice would gradually thicken the ice shell as the moon’s interior cools. But in areas where the shell is thinned by local rifts and Jovian tidal forces, frazil ice is likely to form. (Image credit: H. Glazer; research credit: N. Wolfenbarger et al.; via Physics World)

► Diving Together
  26 Sep, 2022

Two spheres dropped into water next to one another form asymmetric cavities. A single ball’s cavity is perfectly symmetric, and so are two spheres’, provided they are far enough apart. But for close impacts, the spheres influence one another, creating a mirror image. The same asymmetric cavity also forms when a sphere is dropped near a wall. In fluid dynamics, this trick — using two mirrored objects in place of a wall — is used to make calculating certain flows easier! (Image credit: A. Kiyama et al.)

► Jupiter in Infrared
  23 Sep, 2022

These recent composite images from the James Webb Space Telescope show Jupiter in stunning infrared detail. They’re the result of several images taken in different infrared bands, then combined and rendered in visible light. In general, the redder colors show longer wavelengths and the bluer ones show shorter wavelengths.

Jupiter’s cloud bands appear in beautiful detail. The Great Red Spot looks white in infrared. And the planet’s polar auroras shine bright in both images. The wide-angle shot additionally shows two of Jupiter’s moons and the planet’s rings, which are a million times fainter than the planet itself. If you look carefully, you may also see faint points of light in the lower half of the image. These are likely distant galaxies “photobombing” Jupiter’s close-up. (Image credit: NASA/ESA/Jupiter ERS Team 1, 2; via Colossal)

This composite image of Jupiter was taken in infrared bands and rendered into visible light. In general, the redder colors represent longer wavelengths and bluer ones shorter wavelengths.

CFD Online

► Installing foam-extend-4.1 from Source (Fedora 36)
  30 Aug, 2022
Just a reminder of what I did on my Fedora 36, following
https://openfoamwiki.net/inde...oam-extend-4.1
Code:
 dnf install -y python3-pip m4 flex bison git git-core mercurial cmake cmake-gui openmpi openmpi-devel metis metis-devel metis64 metis64-devel \
     llvm llvm-devel zlib zlib-devel ....
Code:
{
  echo 'export PATH=/usr/local/cuda/bin:$PATH' 
  echo 'module load mpi/openmpi-x86_64' 
}>> ~/.bashrc

Code:
cd ~
mkdir foam && cd foam
git clone https://git.code.sf.net/p/foam-extend/foam-extend-4.1 foam-extend-4.1
Code:
{  
 echo '#source ~/foam/foam-extend-4.1/etc/bashrc' 
 echo "alias fe41='source ~/foam/foam-extend-4.1/etc/bashrc' "
}>> ~/.bashrc
Code:
 pip install --user PyFoam
Code:
cd ~/foam/foam-extend-4.1/etc/
cp prefs.sh-EXAMPLE prefs.sh
Edit prefs.sh, using `which bison` to find the system paths, e.g.
/usr/bin/bison
Code:
# Specify system openmpi
# ~~~~~~~~~~~~~~~~~~~~~~
 export WM_MPLIB=SYSTEMOPENMPI
# System installed CMake
export CMAKE_SYSTEM=1
export CMAKE_DIR=/usr/bin/cmake

# System installed Python
export PYTHON_SYSTEM=1
export PYTHON_DIR=/usr/bin/python

# System installed PyFoam
export PYFOAM_SYSTEM=1

# System installed ParaView
export PARAVIEW_SYSTEM=1
export PARAVIEW_DIR=/usr/bin/paraview 

# System installed bison
export BISON_SYSTEM=1
export BISON_DIR=/usr/bin/bison

# System installed flex. FLEX_DIR should point to the directory where
# $FLEX_DIR/bin/flex is located
export FLEX_SYSTEM=1
export FLEX_DIR=/usr/bin/flex  #export FLEX_DIR=/usr

# System installed m4
export M4_SYSTEM=1
export M4_DIR=/usr/bin/m4
Likewise use `which flex`, `which m4`, etc. to fill in the paths for all the remaining 3rdParty tools.

Code:
fe41                       # source the foam-extend-4.1 environment (alias defined above)
foam                       # alias that changes into $WM_PROJECT_DIR
./Allwmake.firstInstall -j
► The importance of a good mesh
    3 Jul, 2022
Recently I've been running a simulation of a Ranque-Hilsch vortex tube in OpenFOAM. This went well for a time, but when I tried refining and implementing a new mesh, it all came crashing down, showing negative total temperatures with gradients of more than 600 K between neighbouring cells. Since my experience with OpenFOAM is still rather limited, I tried refining every surface up to a ridiculously high degree.

After that mishap I went at it with my head on straight and started looking at what snappyHexMesh was doing. I saw the truly abysmal layer generation: some places had no layers at all, and the ones that did were very badly layered. To fix this I stopped using relativeSizes and specified the absolute wall layer thickness, as I wanted to control this parameter anyway. At first I thought I could set nRelaxIter to 0, but this produced no layers at all, so I set it back to 5. Next I increased the feature angle and slip feature angle so my relatively complex geometry would be meshed everywhere, especially at the sharp corners.

In conclusion:
look at your mesh and don't use relativeSizes for layer addition.
► Regarding collaboration for research work in microchannel heat sink
  30 May, 2022
Dear Researchers,
I, Dr. Prabhakar Bhandari, am looking for collaborative research in the field of microchannel heat sinks. The work is entirely based on numerical simulation. Anybody interested can email me at prabhakar.bhandari40@gmail.com
► laplacian(rAU, p) == fvc::div(phiHbyA)?
  28 May, 2022
Quote:
Originally Posted by sharonyue
Hi,

In icoFoam's code, we have:
Code:
fvScalarMatrix pEqn
                (
                    fvm::laplacian(rAU, p) == fvc::div(phiHbyA)
                );
Why isn't it div(HbyA), as in the equation in the image?

I deduced this equation myself. If it is wrong, please correct me.
Though this question is nine years old, it is worth leaving a note on, because I will forget.

The argument of fvc::div(phiHbyA) is declared as a surfaceScalarField:
Code:
const surfaceScalarField& phiHbyA,
in constrainPressure().

That hints that fvc::div() must have an overload that takes a surfaceScalarField and returns a volume field, summing the surface fluxes over the faces of each cell and dividing by the cell volume, i.e. computing the divergence of the volume vector field from the net surface flux of each cell.
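
In finite-volume terms (my note, not from the original thread), this is just the discrete Gauss theorem applied cell by cell:

\left.\nabla \cdot \mathbf{HbyA}\right|_P \approx \frac{1}{V_P}\sum_f \mathbf{HbyA}_f \cdot \mathbf{S}_f = \frac{1}{V_P}\sum_f \phi_{HbyA,f}

so once HbyA has been interpolated to the faces to form the flux phiHbyA, fvc::div(phiHbyA) returns exactly the discrete divergence of HbyA.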

The openFoam.com code browser indeed points to https://www.openfoam.com/documentati...ce.html#l00161
Code:
namespace fvc
 {
  
 // * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
  
 template<class Type>
 tmp<GeometricField<Type, fvPatchField, volMesh>>
 div
 (
     const GeometricField<Type, fvsPatchField, surfaceMesh>& ssf
 )
 {
     return tmp<GeometricField<Type, fvPatchField, volMesh>>
     (
         new GeometricField<Type, fvPatchField, volMesh>
         (
             "div("+ssf.name()+')',
             fvc::surfaceIntegrate(ssf)
         )
     );
 }
and from there points to https://www.openfoam.com/documentati...ce.html#l00046, where indeed it looks like that's done:

Code:
  namespace Foam
 {
  
 // * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
  
 namespace fvc
 {
  
 // * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
  
 template<class Type>
 void surfaceIntegrate
 (
     Field<Type>& ivf,
     const GeometricField<Type, fvsPatchField, surfaceMesh>& ssf
 )
 {
     const fvMesh& mesh = ssf.mesh();
  
     const labelUList& owner = mesh.owner();
     const labelUList& neighbour = mesh.neighbour();
  
     const Field<Type>& issf = ssf;
  
     forAll(owner, facei)
     {
         ivf[owner[facei]] += issf[facei];
         ivf[neighbour[facei]] -= issf[facei];
     }
  
     forAll(mesh.boundary(), patchi)
     {
         const labelUList& pFaceCells =
             mesh.boundary()[patchi].faceCells();
  
         const fvsPatchField<Type>& pssf = ssf.boundaryField()[patchi];
  
         forAll(mesh.boundary()[patchi], facei)
         {
             ivf[pFaceCells[facei]] += pssf[facei];
         }
     }
  
     ivf /= mesh.Vsc();
 }
► 2 Easy MPI pieces
    8 May, 2022
Anyone who has a minimum working experience with MPI (the Message Passing Interface for distributed parallel computing) has certainly had the chance to meet certain coding patterns multiple times, especially if working with a CFD (or any other computational physics like) code.

Regrettably, MPI and distributed parallel computing are one of those areas where textbooks and online examples (even SO) are largely useless, as most (if not all) of them simply go into the details of how to use this or that feature. Most examples are so toy-level that you even find cases where the presented code only works for a fixed number of processes (aaargh!).

Every MPI use case is very specific, but I want to present here two examples that are so simple and stupid that it is a shame they are not present in every single book or tutorial. Yet both have very practical use cases.

The first MPI piece is, superficially, related to the need to perform a deadlock-avoiding loop among all processes. This problem, however, really is a twofold one. On one side, the main issue is: how do I automatically schedule the communications between N processes so that, for any pair of processes i and j, if the k-th communication partner of process i is process j, then the k-th communication partner of process j is process i? With a proper communication schedule in place, avoiding deadlock, which is the second part of the problem, becomes a triviality (also present in most MPI examples). It turns out that such a communication schedule is a triviality as well, to the point of being a one-liner, so it really is a shame that it is not presented anywhere (to the best of my knowledge). So, here is a pseudocode example of a loop where each process communicates with every other process (including itself) without deadlock (full Fortran example here):

Code:
nproc = mpi_comm_size  !How many processes
myid  = mpi_comm_rank !My rank among processes

!Loop over all processes
for i = 1 : nproc
   !My communication partner at the i-th stage
   myp = modulo(i-myid-1,nproc)
   if (myid>myp) then
      !The process with higher rank sends and then receives
      mpi_send
      mpi_recv
   elseif (myid<myp) then
      !The process with lower rank receives and then sends
      mpi_recv
      mpi_send
   else
      !This is me, no send or recv actually needed
   endif
endfor
where the modulo operation is intended as the Fortran one. All the magic happens in the determination of myp, the communication partner of the given process at the i-th stage: it is different for each process at each stage, but the partners exactly match each other at any given stage.
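
As a quick check (my note, not in the original post), the pairing really is symmetric: with zero-based ranks and N = nproc processes, the partner of rank r at stage k is

p_k\left(r\right) = \left(k - r - 1\right) \bmod N \quad\Rightarrow\quad p_k\left(p_k\left(r\right)\right) = \left(k - \left[\left(k - r - 1\right) \bmod N\right] - 1\right) \bmod N = r

so p_k is an involution: whoever rank r is paired with at stage k is paired back with r at that same stage.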

Now, the pseudo-code above (and the one in the linked gist as well) is just an example, and you definitely should not put this sort of all-to-all loop in your code. You should not use blocking sends and recvs either (I used them in the example precisely to prove the point). However, even for non-blocking send/recv, and even when each process only has to communicate with a limited set of other processes, this schedule turns out to be extremely efficient, as very little to no time is spent handling unmatched send/recv calls. For how stupid and simple this might look, most codes, even large-scale ones, simply don't schedule their communications, which is the reason I thought to share this simple piece of code. In practice I have several variants of it (I provided two of them in the linked gist), as today I use it to schedule different pieces of communication in my code.

The second easy MPI piece relates to a very simple question: how would you write your own reduce operation using only mpi_send/recv, without mpi_reduce? While this might look like a textbook exercise, it is indeed relevant for those cases where the needed reduce operation is not among the intrinsic MPI ones (SUM, MAX, etc.). I first needed it when working on parallel statistics (e.g., how do you compute the spatial statistics of a field variable in a finite volume code without using allreduce, which costs roughly as much as an mpi_barrier?).

For the most complex cases, MPI provides both an mpi_type_create_struct routine to create a generic user-defined data type and mpi_op_create routine to define a custom reduction operation for said data type (see, for example, here, here and here). Unfortunately, they can't actually cover all use cases, or at least not so straightforwardly.

However, if there are no specific needs in terms of the associativity of the reduce operator, it turns out that you can write your own reduce algorithm with no more than 25 lines of code (full Fortran example here). The algorithm has 3 very simple steps:
  1. Determine pp2, the largest power of 2 integer smaller than or equal to the current number of processes
  2. If the current number of processes is above pp2, ranks beyond pp2 just send their data to ranks that are lower by exactly pp2 (e.g., for 7 processes, zero indexed, ranks from 4 to 6 will send, respectively, to ranks from 0 to 2), which will perform the reduce operation
  3. Log2(pp2) iterations are performed where, at each iteration, the higher half of ranks up to a given power of 2, ppd (=pp2 at the beginning of the first iteration), send their data to the lower half (shifting by ppd/2), which will then reduce it. At the end of each iteration ppd is divided by 2.

The final reduce result will then be available to the process with rank 0 (which could simply send it to the required root, if different). Note that the algorithm still needs nproc-1 messages to be sent/received but, differently from the naive case where a root process directly receives all the messages one after the other, here the messages of each stage are always between different pairs of processes. Of course, by using such a simple approach you give up any possible optimizations on the MPI side (which, however, might not be available at all for a user-defined data type and operator), but there is a very large gain in flexibility and simplicity.
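
For reference, here is a minimal C++ sketch of these three steps (my own illustration, not the author's Fortran code; the element-wise reduction op and all names are placeholders). Each rank calls it with its local data; on return, rank 0 holds the reduced result.

Code:
#include <mpi.h>
#include <vector>

// Reduce "data" element-wise onto rank 0 using only point-to-point messages,
// following the 3-step power-of-2 scheme described above.
void myReduce(std::vector<double>& data, MPI_Comm comm,
              void (*op)(std::vector<double>&, const std::vector<double>&))
{
    int myid, nproc;
    MPI_Comm_rank(comm, &myid);
    MPI_Comm_size(comm, &nproc);
    const int n = static_cast<int>(data.size());
    std::vector<double> recv(data.size());

    // Step 1: pp2 = largest power of 2 not exceeding nproc
    int pp2 = 1;
    while (pp2 * 2 <= nproc) pp2 *= 2;

    // Step 2: ranks >= pp2 send to (rank - pp2), which reduces locally
    if (myid >= pp2)
    {
        MPI_Send(data.data(), n, MPI_DOUBLE, myid - pp2, 0, comm);
        return;  // nothing more to do for these ranks
    }
    if (myid + pp2 < nproc)
    {
        MPI_Recv(recv.data(), n, MPI_DOUBLE, myid + pp2, 0, comm, MPI_STATUS_IGNORE);
        op(data, recv);
    }

    // Step 3: log2(pp2) halving iterations among the first pp2 ranks
    for (int ppd = pp2; ppd > 1; ppd /= 2)
    {
        if (myid >= ppd / 2)  // upper half of the still-active ranks sends and is done
        {
            MPI_Send(data.data(), n, MPI_DOUBLE, myid - ppd / 2, 0, comm);
            return;
        }
        // lower half receives and reduces
        MPI_Recv(recv.data(), n, MPI_DOUBLE, myid + ppd / 2, 0, comm, MPI_STATUS_IGNORE);
        op(data, recv);
    }
    // rank 0 now holds the final reduced result
}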

If there are associativity requirements, you can instead follow this Fortran example, where the MPICH binomial tree algorithm is used. That algorithm (which is shorter and largely more elegant, yet a bit dense in its logic) performs the reduction following the rank order of the processes, so it is very easy to remap the ranks in order to follow a more specific ordering.
► Closing on wall functions - part 8: coupled/thin wall boundary conditions
  27 Apr, 2022
It might happen that wall functions for temperature or scalars are needed when the assigned boundary condition is not simply the assigned flux or temperature/scalar, but rather a more complex one. One example is when the wall is a coupled one, with either a fluid or a solid on the other side, or it has some thickness, possibly with source terms in it and, say, radiation or convective boundary conditions assigned on the other side, or maybe some other combination of the above.

In all such cases, the formulas presented before are still valid but need a slight rearrangement in order to fit the new conditions. Nothing of this is really new; the concept is as old as the book of Patankar (probably older) and this is just one of the latest additions. Also, major commercial CFD codes have been offering this for decades. Still, I am not aware of full formulas available in the more general case. Everything I write here for the temperature applies straightforwardly to other scalars with similar equations. Of course, as per the original wall function ODE, the assumption is that of steady state. That is, the solved problem might or might not be steady, but the boundary conditions are, in fact, derived by solving a steady-state problem (this was, as a matter of fact, true also for the wall function ODE, even if the unsteady term was considered among the non-equilibrium ones).

We start by noting that the wall heat flux formula provided here can actually be rewritten as follows (also taking into account that the original derivation used a wrong sign for ease of exposition):

q_w = -\frac{\left\{\frac{\left[T\left(y\right) - T_w\right]}{y} \left(\frac{\mu C_p}{Pr}\right) - y\sum_{i=0}^{N}\frac{F_T^i}{i+1}\left(\frac{y}{y_p}\right)^i\left(\frac{{s_T^i}^+}{{y^+}^{i+2}}\right) \right\}}{\left(\frac{{s_T^{-1}}^+}{y^+}\right)} = -\frac{\left(T_p - T_w\right)}{y_p} k_p C_{WF} + \dot{Q}_p

where C_{WF}=1/\left(\frac{{s_T^{-1}}^+}{y^+}\right) and it is recognized that, being independent, to first order, of the flow conditions, and being an integral that grows with wall distance, the non-equilibrium terms are, indeed, just an explicit source term \dot{Q}_p for the near-wall cell. In practice, the source term is also assimilable to a non-orthogonal correction; thus, in the following, we will simply consider the point p, with temperature T_p and distance from the wall y_p, to be along the normal to the wall. A similar reasoning can also be made for the viscous dissipation which, as presented here, is independent of the temperature distribution and can be absorbed by the same source term as well.

We want to extend the formula above to the case where neither T_w nor q_w is actually given. Also, we assume that T_w=T_n and q_w=q_n, that is, they are only known as the values at one extreme of an n-layer thin wall. So we have T_n, T_{n-1}, ..., T_1 and T_0, and the same for q. These are the temperatures and fluxes at the interfaces between the n layers of the thin wall. Each one of these n layers will have its own thickness \Delta x_i, thermal conductivity k_i and possibly a source term \dot{Q}_i. Finally, we want to consider two possible boundary conditions, either T_0 directly given or q_0 given as:

q_0 = h_{\infty}\left(T_{\infty}-T_0\right) + \epsilon_R \sigma \left(T_R^4-T_0^4\right) = h_{\infty}\left(T_{\infty}-T_0\right) + \widetilde{h}_R\left(T_R-T_0\right)

with \widetilde{h}_R = \epsilon_R \sigma \left(T_R+T_0\right)\left(T_R^2+T_0^2\right) being non-linearly dependent on T_0, and where we assumed that the q_i are positive if entering the domain.
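
As a quick check (my note), this linearization reproduces the radiative flux exactly when evaluated at the current T_0, since

\widetilde{h}_R\left(T_R-T_0\right) = \epsilon_R \sigma \left(T_R+T_0\right)\left(T_R^2+T_0^2\right)\left(T_R-T_0\right) = \epsilon_R \sigma \left(T_R^4-T_0^4\right)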

In order to formalize a solution, we need to complement the fundamental conservation statement that holds for a layer in the thin wall q_i=q_{i-1}+\dot{Q}_i \Delta x_i with a similar relation for the temperature jump across the layer, T_i-T_{i-1}. Considering that, in our model, the heat conduction equation that holds in each layer has the form:

\frac{d^2T}{dx^2} + \frac{\dot{Q}}{k} = 0

solving it with proper boundary conditions one easily obtains that:

T_i-T_{i-1} = -\frac{\Delta x_i}{k_i}\left(q_{i-1}+\frac{\dot{Q}_i \Delta x_i}{2}\right)
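
For completeness, a quick sketch of that integration (my note): assume \dot{Q}_i uniform within the layer, use a local coordinate x running from interface i-1 (at x=0) to interface i (at x=\Delta x_i), and take the fluxes positive in the direction of increasing i (i.e. entering the domain), consistent with q_i=q_{i-1}+\dot{Q}_i \Delta x_i. Then conservation gives q\left(x\right) = q_{i-1} + \dot{Q}_i x, Fourier's law gives \frac{dT}{dx} = -\frac{q\left(x\right)}{k_i}, and integrating across the layer:

T_i - T_{i-1} = -\frac{1}{k_i}\int_0^{\Delta x_i}\left(q_{i-1} + \dot{Q}_i x\right)dx = -\frac{\Delta x_i}{k_i}\left(q_{i-1}+\frac{\dot{Q}_i \Delta x_i}{2}\right)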

The two jump relations for single layers above can then be used to obtain jump relations across the whole thin wall of n layers. For the flux it is just, again, a simple conservation statement:

q_n = q_0 + \sum_{i=1}^n \dot{Q}_i \Delta x_i

For the temperatures one obtains:

T_n - T_0 = \sum_{i=1}^n \left(T_i -T_{i-1}\right) = -q_0 \sum_{i=1}^{n}\frac{\Delta x_i}{k_i} - \sum_{i=1}^{n} \frac{\Delta x_i}{k_i} \left(\frac{\dot{Q}_i \Delta x_i}{2} + \sum_{j=1}^{i-1}\dot{Q}_j \Delta x_j\right)

Assigning:

h_p = \frac{k_p C_{WF}}{y_p}

R_T = \sum_{i=1}^{n}\frac{\Delta x_i}{k_i}

\alpha = \sum_{i=1}^{n} \frac{\Delta x_i}{k_i} \left(\frac{\dot{Q}_i \Delta x_i}{2} + \sum_{j=1}^{i-1}\dot{Q}_j \Delta x_j\right)

\beta = \sum_{i=1}^n \dot{Q}_i \Delta x_i

where the latter three can all be pre-computed and stored. It is then a matter of simple manipulation (yet a quite long one, which is omitted here) to show that our initial formula can now be generally expressed as follows:

q_w = \left(T_w - T_p\right)h_p + \dot{Q}_p = \frac{\left(T^*-T_p\right)+\left(R^*\beta-\alpha\right)}{R^*+\frac{1}{h_p}} + \dot{Q}_p

where T^* and R^* depend on the specific boundary condition in use. More specifically, for T_0 directly assigned, one has T^*=T_0 and R^*=R_T. For the general convection/radiation boundary condition (or just one of the two, by zeroing the coefficient of the other) one has:

R^* = R_T + \frac{1}{h_{\infty}+\widetilde{h}_R}

T^* = \frac{h_{\infty}T_{\infty}+\widetilde{h}_R T_R}{h_{\infty}+\widetilde{h}_R}

Finally, the coupled case is simply obtained by using \widetilde{h}_R = 0, h_{\infty} = h_{pc} and T_{\infty}=T_{pc} where the subscript pc refers to the quantities taken from the coupled side.

The corresponding value of T_w is instead given by:

T_w = \frac{T^*+R^*\left(\beta+h_pT_p\right)-\alpha}{1+h_pR^*}

The last step missing is the determination of \widetilde{h}_R, which depends on T_0. A relation for the latter can be obtained, but it will itself, of course, depend on \widetilde{h}_R. This relation (whose derivation is again simple but long, so it is omitted here) can then be used in tandem with the one for \widetilde{h}_R iteratively:

\widetilde{h}_R = \epsilon_R \sigma \left(T_R+T_0\right)\left(T_R^2+T_0^2\right)

T_0 = \frac{\left(h_{\infty}T_{\infty}+\widetilde{h}_R T_R\right)\left(R_T+\frac{1}{h_p}\right)+T_p+\alpha+\frac{\beta}{h_p}}{\left(h_{\infty}+\widetilde{h}_R\right)\left(R_T+\frac{1}{h_p}\right)+1}

I have found that, starting from T_0=T_p, no more than 10 iterations are necessary to converge on \widetilde{h}_R and T_0.
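
As an illustration, here is a minimal C++ sketch of this fixed-point loop (my own code, following the notation above; all inputs are placeholders to be filled from the near-wall cell and the thin-wall properties):

Code:
#include <cmath>

// Iterate h_R~ and T_0 as described above, starting from T0 = Tp.
// h_inf, T_inf, T_R, eps_R describe the convection/radiation boundary condition;
// R_T, alpha, beta are the pre-computed thin-wall sums; h_p = k_p*C_WF/y_p.
double solveT0(double h_inf, double T_inf, double T_R, double eps_R,
               double R_T, double alpha, double beta,
               double h_p, double T_p, int maxIter = 10)
{
    const double sigma = 5.670374419e-8;  // Stefan-Boltzmann constant [W/(m^2 K^4)]
    double T0 = T_p;                      // initial guess
    for (int it = 0; it < maxIter; ++it)
    {
        // linearized radiative coefficient at the current T0
        const double hR = eps_R*sigma*(T_R + T0)*(T_R*T_R + T0*T0);
        // closed-form T0 for this hR (last relation above)
        const double num = (h_inf*T_inf + hR*T_R)*(R_T + 1.0/h_p) + T_p + alpha + beta/h_p;
        const double den = (h_inf + hR)*(R_T + 1.0/h_p) + 1.0;
        T0 = num/den;
    }
    return T0;  // h_R~ can be recomputed from the converged T0
}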

One thing which is worth highlighting here in the more general context of wall functions is that, as a matter of fact, non equilibrium/viscous dissipation terms (from both sides of the thin wall, if a coupled bc is used) do not directly enter the modifications presented above. That is, they still appear as in the original wall function formulation. This might very easily go unnoticed if a thermal wall function is just presented as a single relation for T^+ without distinguishing the roles of the terms.

Another thing worth noting is that, if one decided to solve the original ODE as it was (say, with a tridiagonal algorithm as in one of the scripts provided here) instead of directly integrating it as done here, it would have been impossible to let the exact form above emerge, leading to three major consequences: 1) the inability to separate the non-equilibrium/viscous dissipation part from the rest (that is, in the ODE solution one only has the wall values, not their dependence); 2) either the need to solve the 1D problem also in the thin wall and/or coupled side, or the need to iterate the solution on both sides exchanging wall values at each iteration; and 3) as a consequence of 1 and 2, the impossibility of setting up the problem for an implicit implementation in the coupled case, which might have major consequences on convergence in certain cases.

Finally, I want to mention that all the above developments are also relevant for the Musker-modified Spalart-Allmaras model for which, given the proper conditions (equilibrium for the velocity, steady state in general), they represent a full analytical solution, which now is thus extended to the present more general boundary conditions.

GridPro Blog

► The Challenges of Meshing Ice Accretion for CFD
  12 Jul, 2022

Figure 1: Hexahedral mesh for an aircraft icing surface.


Complex ice shapes make generating well-resolved meshes extremely difficult, so CFD practitioners make geometric and meshing compromises to understand the effect of ice accretion on UAVs.

Introduction

Flying safely and reliably depends on how well icing conditions are managed. Atmospheric icing is one of the main reasons for operational limitations: it disturbs the aerodynamics and limits flight capabilities such as range and endurance. In some scenarios, it can even lead to crashes.

Icing has been under research for manned aircraft since the 1940s. However, the need to understand icing effects for different flying scenarios in unmanned aerial vehicles (UAVs) or drones has reignited the research. Drones are used for a wide range of applications like package delivery, military, glacier studies, pipeline monitoring, search and rescue, etc.

Figure 2: a. Ice on nose cone. b. Ice on an engine. c. Ice on a pitot probe. Image source – Ref [4]

The well-understood icing process of manned civil and military aircraft does not hold good for most UAVs. UAVs fly at lower airspeeds and are smaller in size. They operate at low Reynolds numbers in the range of 0.1-1.0 million, as against manned aircraft, which fly at Reynolds numbers of the order of 10-100 million. This huge difference necessitates gaining a better understanding of the icing process at low Reynolds numbers.

CFD simulation of aircraft ice accretion is a natural choice for researchers due to its cost-effective approach when compared to flight testing. In this article, we will discuss how researchers navigate through geometry and meshing challenges to understand the icing effects.

Ice Accretion Analysis

Icing analysis covers a large variety of physical phenomena, from droplet or ice-crystal impact on cold surfaces to solidification processes at different scales. Ice accumulation degrades aerodynamic performance, such as the lift, drag, stability and stall behaviour of lifting surfaces, by modifying the leading-edge geometry and the state of the boundary layer downstream. This results in premature and highly undesirable flow separation.

Figure 3: Aircraft Icing: Flow field around an iced airfoil. Image source – [Ref 5, 6]

Such flow transition and turbulently active regions need well-resolved grids. However, the complex icing undulations make meshing very hard, forcing the CFD practitioners to face geometric and meshing challenges.

Complex Geometric Shapes

Icing develops different kinds of geometric features such as conic shapes, jagged ridges, narrow, deep valleys and concave regions. In 3D, the spanwise variation of these features creates further complexities.

Figure 4: Inviscid unstructured mesh using tetrahedral elements to discretize the complex 3D iced wing. [Image source: Ref 3]

Geometric simplification is most often done while attempting 3D simulations. Even though fine-resolution 3D-scanned ice feature data is available, the inability to create quality wall-normal resolved cells compels CFD practitioners to either simplify the ice features or settle for some kind of inviscid simulation without capturing the viscous effects. Figure 4 shows such a compromised unstructured mesh without viscous padding for a DLES simulation. Figure 5 shows the extraction of a smoothened and simplified ice geometry from an actual icing surface.

Figure 5 Geometric simplification done to 3D ice surface to ease meshing difficulties. [Image source- Ref 9].

Such realistic ice shapes are extremely difficult to mesh for any mesh generation algorithm, let alone with good mesh quality.

As a compromise, the sub-scale surface roughness is smoothened out and is not captured. As a consequence, the turbulence effects due to sub-scale geometric features get ignored.

Wide-Ranging Geometric Scales

Ice features range widely in geometric scale. For example, ice horns can be as big as 1-2 centimetres, while sub-scale surface roughness can be as small as a few microns.

The level of deterioration in performance is directly related to the ice shapes and to the degree of aerodynamic flow disruption they cause. Sub-scale ice surface roughness triggers laminar-to-turbulent transition, while large ice horns cause large-scale separation.

Figure 6: Orthogonal boundary layer padding to capture the viscous activities near the wall.

Meshing such wide-ranging geometric scales poses a few challenges. Firstly, a massive number of cells is needed to capture the micron-level features, which directly strains the available computational power and demands considerable time for both meshing and CFD.

Literature review shows that certain CFD practitioners, foreseeing these challenges, settle for 2D simulations to avoid computationally expensive 3D simulations. Even at the 2D level, finer ice-roughness features are smoothened to make viscous padding creation more manageable.

Figure 7: Finely refined flow-aligned hexahedral grid to capture the ice horn wake.

Horns and Crevices

Crevices and concave regions are home to re-circulation flows. These viscous regions need finely resolved unit aspect ratio cells to capture them. But since many grid generators find it difficult to mesh these regions, the crevices are removed and replaced by a small depression.

Figure 8: Hexahedral meshing of the narrow crevices and concave regions of the aircraft icing surface.

Aft of the horns, large-scale wakes are created, which are highly unsteady and three-dimensional in nature. Also, with an increase in the angle of attack, these turbulent features grow in size and start to extend further in the normal and axial direction w.r.t the wing surface. In concave regions and narrow crevices, recirculation flows can be observed.

Boundary-Layer Mesh

The boundary-layer padding needs good wall-normal resolution, with a first spacing equivalent to a y+ of no more than 1. Rough ice surfaces aggravate flow separation, so adequate viscous padding, with a uniform number of layers of orthogonal cells, is necessary at all locations.
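
As a rough, self-contained illustration (my own sketch, not part of the article), the first-layer height needed for a target y+ can be estimated from a flat-plate skin-friction correlation; the flow conditions below are placeholders:

#include <cmath>
#include <cstdio>

// First-cell height estimate for a target y+, using the flat-plate
// correlation Cf ~ 0.026 Re^(-1/7). Illustrative values only.
int main()
{
    const double U     = 50.0;     // freestream speed [m/s]
    const double L     = 1.0;      // reference length [m]
    const double rho   = 1.225;    // air density [kg/m^3]
    const double mu    = 1.8e-5;   // dynamic viscosity [Pa s]
    const double yplus = 1.0;      // target y+

    const double Re   = rho*U*L/mu;                   // Reynolds number
    const double Cf   = 0.026/std::pow(Re, 1.0/7.0);  // skin-friction estimate
    const double tauw = 0.5*Cf*rho*U*U;               // wall shear stress [Pa]
    const double utau = std::sqrt(tauw/rho);          // friction velocity [m/s]
    const double dy1  = yplus*mu/(rho*utau);          // first-layer height [m]

    std::printf("first layer height ~ %.3e m\n", dy1);
    return 0;
}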

Growing wall-normal quadrilateral or hexahedral cells from the ice walls for the entire region is a challenge since the crevices are very narrow with irregular protrusions, and generating continuous viscous padding causes cells to collapse one over the other.

Figure 9: Viscous boundary layer padding in narrow crevices. a. Hybrid unstructured mesh. Image source [Ref 7] b. Hexahedral mesh.

To overcome this some grid generators resort to partial normal wall padding to the extent the local geometry permits and quickly transition to unstructured meshing, as shown in Figure 9a.

Meshing Transient Ice Accumulation

Research has shown that airframe size and air speed are two main important parameters influencing ice accretion.

One of the icing simulation requirements is computing ice accumulation over a finite time period spanning 15 to 20 minutes. Multiple CFD simulations are done for different chord lengths and air velocities. As one can perceive, this is a numerically intensive job requiring automated geometry building and mesh generation. In such studies, it is necessary to generate a new mesh for every minute of accretion, or even less, to run CFD on newer instances of ice deposition.

Figure 10: Ice accumulation due to change in a. Airframe. b. Airspeed. Image source Ref [5].

With each time step the shape of the ice features changes, and over time they take on fairly complex shapes with horns and crevices, making local manual intervention inevitable.

Figure 11: GridPro's single-topology multiple-grid approach helps rapidly generate high-quality meshes for multiple icing variants, automating ice accretion analysis.

Parting Remarks

For the safe operation of UAVs without an icing protection system, the common solution is to ground the aircraft when icing conditions prevail. This limitation can be overcome by having a better de-icing system. Through CFD analysis of ice accretion at different atmospheric conditions, the optimal amount of onboard electrical power needed for de-icing can be determined.

However, accurate CFD analysis hinges on precise capturing of the ice features by the mesh. A meshing system which can aptly meet this requirement without making geometric or meshing compromises is the need of the hour.

For structured meshing needs for icing analysis, please reach out to GridPro at gridpro@gridpro.com.

References

1. "Comparison of LEWICE 1.6 and LEWICE/NS with IRT Experimental Data from Modern Airfoil Tests", William B. Wright, Mark G. Potapczuk.
2. "Geometry Modeling and Grid Generation for Computational Aerodynamic Simulations around Iced Airfoils and Wings", Yung K. Choo, John W. Slater, Mary B. Vickerman, Judith F. VanZante.
3. "Computational Modeling of Rotor Blade Performance Degradation Due to Ice Accretion", Christine M. Brown, Thesis in Aerospace Engineering, The Pennsylvania State University, The Graduate School, December 2013.
4. "Ice Interface Evolution Modelling Algorithms for Aircraft Icing", Simon Bourgault-Côté, Thesis, Université de Montréal, 2019.
5. "Atmospheric Ice Accretions, Aerodynamic Icing Penalties, and Ice Protection Systems on Unmanned Aerial Vehicles", Richard Hann, PhD Thesis, Norwegian University of Science and Technology, July 2020.
6. "Icing on UAVs", Richard Hann, NASA Seminar.
7. https://www.ntnu.no/blogger/richard-hann/
8. https://uavicinglab.com/
9. "An Integrated Approach to Swept Wing Icing Simulation", Mark G. Potapczuk et al., presented at the 7th European Conference for Aeronautics and Space Sciences, Milan, Italy, July 3-6, 2017.


The post The Challenges of Meshing Ice Accretion for CFD appeared first on GridPro Blog.

► Challenges in Meshing Scroll Compressors
  25 Mar, 2022

Figure 1: Structured multi-block mesh for scroll compressors with tip seal.


Scroll compressors with deforming fluid space, narrow flank, and axial clearance pose immense meshing challenges to any mesh generation technique.

Introduction

Scroll compressors and expanders have been in extensive use in the refrigeration, air-conditioning, and automobile industries since the 1980s. A slight improvement in scroll efficiency results in significant energy savings and reduced environmental pollution. It is therefore important to minimize the frictional power loss at each pair of compressor elements and also the fluid leakage power loss at each clearance between the compressor elements. Developing ways to minimize leakage losses is thus essential to improve scroll performance.

Scroll Compressor CFD Challenges

Unlike turbomachines such as dynamic compressors and turbines, positive displacement (PD) machines like scrolls have lagged behind in innovative designs and performance enhancements. This is mainly due to difficulties in applying CFD to these machines, because of the challenges in meshing, real-fluid equations of state, and long computational times.

Figure 3: Deforming fluid pockets at different stages in the compression process. Image source Ref [11].

Geometric Challenges for meshing

Deforming Flow Field:

The fluid flow is transient and the flow volume changes with time (Figure 3). The fluid is compressed and expanded as it passes through different stages of the compression process. The mesh for the fluid space should be able to ‘follow’ the deformation imposed by the machine without losing its quality.

When the deformation is small, the initial mesh maintains cell quality; for large deformations, however, mesh quality deteriorates and cells collapse near the contact points between the stationary and moving parts.

Figure 4: Leakage through flank clearance. Image source – Ref [10].

Flank Clearance:

The narrow passage between the stationary and moving scroll in the radial direction is called the flank clearance. A clearance of about 0.05 mm is generally used to avoid contact, rubbing, and wear.

Adequately resolving this clearance with a fine mesh is one of the key factors in obtaining an accurate CFD simulation. However, the narrowness of this gap poses meshing challenges for many grid generators.

Figure 5: Leakage through axial clearance. Image source – Ref [10].

Axial Clearance:

The narrow passage between the stationary and moving scroll in the axial direction is called the axial clearance. The axial clearance is about one thousandth of the axial scroll plate height, which is much smaller than the flank clearance.

In some cases, the gap even forces the use of separate mesh zones. Adequate resolution of the axial clearance is equally important, since under-resolving it leads to inaccurate flow-field prediction.

Figure 6: Tip seal used to reduce axial clearance leakage. Image source Ref [5, 8].

Tip Seal Modeling:

Tip seals are used to reduce axial leakages caused by wear and tear. The tip seals influence the mass flow rate of the fluid. Modeling internal leakages with tip seals requires a range of numerical techniques, from fluid-structure interaction to special treatments for thermal deformation and tip-seal efficiency.

Figure 7: GridPro’s structured mesh for capturing axial gap and tip seal: a. With axial gap. b. Axial gap with tip seal.

Discharge Check Valve Modeling:

Valves called reed valves are installed at the discharge to prevent reverse flow. Understanding the dynamics of the check valves is important because they significantly influence scroll efficiency and noise levels. The losses at the discharge can significantly reduce the overall efficiency.

However, modeling the valve with appropriate simplification is a challenge for any meshing technique.

Figure 8: a. Reed valve geometry. b. Flip valve geometry. Image source Ref [2].

Influence of Mesh Element Type

Many different meshing methods, from tetrahedral to hexahedral to polyhedral cells, have been employed to discretize the fluid passage. However, researchers who place more weight on solution accuracy tend to prefer structured hexahedral meshes.

Hexahedral meshing outweighs other element types w.r.t grid quality, domain space discretization efficiency, solution accuracy, solver robustness, and convergence levels.

One of the reasons why a structured hexahedral mesh offers better accuracy is that it can be squeezed without deteriorating cell quality. This allows a large number of mesh layers to be placed in the narrow clearance gap. Better resolution of the critical gap results in better CFD prediction.

Parting Remarks

Understanding the key meshing challenges before setting forth to mesh scrolls is essential. Becoming aware of the regions that are difficult to mesh and of the regions that strongly influence the accuracy of the CFD prediction is critically important. More importantly, which meshing approach you pick – structured, unstructured, or Cartesian – also influences the quality and accuracy of your CFD prediction.

In the next article, on automating meshing for scroll compressors, we discuss how scroll compressors can be meshed in GridPro.

References

1.“Study on the Scroll Compressors Used in the Air and Hydrogen Cycles of FCVs by CFD Modeling”, Qingqing ZHANG et al, 24th International Compressor Engineering Conference at Purdue, July 9-12, 2018.
2. “Numerical Simulation of Unsteady Flow in a Scroll Compressor”, Haiyang Gao et al, 22nd International Compressor Engineering Conference at Purdue, July 14-17, 2014.
3. “Novel structured dynamic mesh generation for CFD analysis of scroll compressors”, Jun Wang et al, Proc IMechE Part A: J Power and Energy 2015, Vol. 229(8), IMechE 2015.
4. “Modeling A Scroll Compressor Using A Cartesian Cut-Cell Based CFD Methodology With Automatic Adaptive Meshing”, Ha-Duong Pham et al, 24th International Compressor Engineering Conference at Purdue, July 9-12, 2018.
5. “3D Transient CFD Simulation of Scroll Compressors with the Tip Seal”, Haiyang Gao et al, IOP Conf. Series: Materials Science and Engineering 90 (2015) 012034.
6.“CFD simulation of a dry scroll vacuum pump with clearances, solid heating and thermal deformation”, A Spille-Kohoff et al, IOP Conf. Series: Materials Science and Engineering 232 (2017).
7.  “Structured Mesh Generation and Numerical Analysis of a Scroll Expander in an Open-Source Environment”, Ettore Fadiga et al, Energies 2020, 13, 666.
8. “Analysis of the Inner Fluid-Dynamics of Scroll Compressors and Comparison between CFD Numerical and Modelling Approaches”, Giovanna Cavazzini et al, Energies 2021, 14, 1158.
9. “FLOW MODELING OF SCROLL COMPRESSORS AND EXPANDERS”, by George Karagiorgis, PhD- Thesis, The City University, August 1998.
10. “Heat Transfer and Leakage Analysis for R410A Refrigeration Scroll Compressor“, Bin Peng et al, ICMD 2017: Advances in Mechanical Design pp 1453-1469.
11. “Implementation of scroll compressors into the Cordier diagram“, C Thomas et al, IOP Conf. Series: Materials Science and Engineering 604 (2019) 012079.


The post Challenges in Meshing Scroll Compressors appeared first on GridPro Blog.

► Automation of Hexahedral Meshing for Scroll Compressors
  25 Mar, 2022

Figure 1: Structured multi-block mesh for scroll compressors.


Developing a three-dimensional mesh of a scroll compressor for reliable Computational Fluid Dynamics (CFD) Analysis is challenging. The challenges not only demand an automated meshing strategy but also a high-quality structured hexahedral mesh for accurate CFD results in a shorter turnaround time.

Introduction

The geometric complexities of Meshing Scroll Compressors discussed in our previous article give us a window into the need for creating a high-quality structured mesh of scroll compressors.

A good mesher should handle the following challenges in a positive displacement machine:

  • The continuously deforming pocket volume.
  • Since it is a complex and time-dependent fluid-dynamic phenomenon, the mesher should be able to accurately “follow” the deformation imposed by the machine's moving part without losing mesh quality.
  • The mesh should not suffer quality decay, or have uncontrolled mesh refinements and mesh collapses near contact points between the stator and moving parts, etc.
  • It should offer high accuracy of the numerical simulation and a short simulation turnaround time.

Meshing Strategy

On a given plane, the scroll compressor fluid mesh region is a helical passage of varying thickness, expanding and contracting with the crank angle; topologically, the fluid domain is a rectangular passage. So the same approach as meshing a rectangle is used for the scroll compressor.

Figure 2: Blocking for a linear and curved rectangular passage.
Animation video 1: Block creation by sweeping in GridPro.

Mesh Topology

One of the main obstacles for simulation of scroll compressors is the generation of a dynamic mesh in the fluid domains, especially in the region of the flank clearance. The topology-based approach offers a perfect solution for such scenarios, primarily because the deforming fluid domain in the scroll compressor does not change the topology of the fluid region.

Animation video 2: Mesh at every time step for a scroll compressor.

Advantages of Topology based Meshing:

  • At each time step, when the orbiting rotor moves to a new position, the new mesh is generated without any user intervention.
Animation video 3: Mesh in the Discharge Chamber of the Scroll Compressor.
  • The blocking built becomes a template for new variations of the scroll rotors, which makes it ideal for optimization and even for meshing variable-thickness scroll compressors.
  • The meshes share the same topology, i.e. the number of blocks, their connectivity, and the cells remain the same, which avoids the need to interpolate results. The computational effort is significantly reduced and the mesh quality is high, leading to reliable CFD analysis.

Flank Clearance and its Meshing Needs

The flank clearance can shrink to as little as 0.05 mm, and adequate resolution of the flank clearance with low-skewness cells is the key reason structured meshes predict performance better than unstructured meshes.

Animation video 4: Mesh in the flank clearance at different scroll rotor positions. 12 layers of cells finely discretize the narrow flank clearance.

The dynamic boundary-conforming algorithm of GridPro moves the blocks into the compressed space automatically and generates the mesh. The smoother ensures that the mesh has a homogeneous distribution and is orthogonal. Orthogonality is another important mesh quality metric that sets structured meshes apart from moving-mesh approaches. Orthogonality improves the numerical accuracy and stability of the solution and prevents numerical diffusion.

Solid Scroll Meshing for FSI

Understanding the heat transfer towards and inside the solid components is important since the heat transfer influences the leakage gap size. Heat transfer analysis is especially required in vacuum pumps where the fluid has low densities and low mass flow rates.

Figure 4: Structured hexahedral mesh for the solid and fluid zones in a scroll compressor.

 

One of the major drawbacks of scroll compressors is the high working temperature (maximum temperatures of up to 250 degrees Celsius have been reported [Ref 3]). The higher temperatures excessively increase the thermal expansion of the scroll spirals, leading to significantly increased internal leakage and thereby reduced efficiency.

A mesh created for conjugate heat transfer has to model the in-between compression chamber, the scrolls, and the convective boundary condition at the outer surface of the scrolls. This type of mesh makes it possible to obtain consistent temperatures in the solids and to calculate the thermal deformation of the scrolls.

Automation and Optimization of Scroll Compressor

Even though scroll compressors enjoy a high volumetric efficiency in the range of 80-95%, there is still room for improvements. Optimization of the geometric parameters is necessary to reduce the performance degradation due to leakage flows in radial and axial clearances.

CFD as a design tool plays a significant role in optimizing scroll geometry. The major advantage of a 3D CFD simulation combined with fluid-structure interaction (FSI) is that the 3D geometry effect is directly considered. This makes CFD analysis highly suitable for the optimization of the design.

GridPro provides an excellent platform for automating hexahedral meshing because of its working principle and its Python-based API.

The key features are:

  • Quick set up of a CFD model from CAD geometry.
  • Parametric design of geometry can be incorporated into the same blocking and can be used even for variable thickness scrolls.
  • The mesh at each time interval is of high quality with orthogonal cells and even distribution.
  • The other advantage of this strategy is that it is respectful of the space conservation law while conserving mass, momentum, energy, and species.

GridPro offers both process automation through scripting and API-level automation, so the automation can be triggered either outside of a CAD environment or inside it.

This flexibility allows companies and researchers to develop full-scale meshing automation with GridPro while the user interacts only with the CAD/CFD tool or a software-connector platform.

Figure 5: GridPro coupled with CAESES software connector to generate meshes automatically for every change in geometry.

Parting Remarks

The generation of a structured mesh for the entire scroll domains, including the port region, is a very challenging task. It could be very difficult to model narrow gaps and complex features of the geometry. However, with GridPro’s template-based approach and dynamic boundary conforming technology the setup is reduced to a few specifications and the user can develop his own automation module for structured hexahedral meshing.

If scroll compressor meshing is your need and you are looking for solutions, feel free to reach out to us at support@gridpro.com


References

1.”Analysis of the Inner Fluid-Dynamics of Scroll Compressors and Comparison between CFD Numerical and Modelling Approaches“, Giovanna Cavazzini et al, Advances in Energy Research: 2nd Edition, 2021.

2. “Structured Mesh Generation and Numerical Analysis of a Scroll Expander in an Open-Source Environment”, Ettore Fadiga et al, Energies 2020, 13, 666.

3. “Waste heat recovery for commercial vehicles with a Rankine process“, Seher, D.; Lengenfelder, T.; Gerhardt, J.; Eisenmenger, N.; Hackner, M.; Krinn, I., In Proceedings of the 21st Aachen Colloquium on Automobile and Engine Technology, Aachen, Germany, 8–10 October 2012; pp. 7–9.


The post Automation of Hexahedral Meshing for Scroll Compressors appeared first on GridPro Blog.

► GridPro Version 8.1 Released
  16 Feb, 2022

About GridPro Version 8.1 

The GridPro version 8.1 release marks the completion of yet another endeavor to provide a feature-rich, powerful, and reliable structured meshing package to the CAE community.

In every development cycle, we fulfill feature requests from our users, address workflow challenges, and democratize features so that newer users can transition without much learning. Along the way, we are improving the performance of the tool to meet the increasing demand for meshing challenging geometries.

Here is a quick Preview of the Major Features:

  • New License Monitoring System for Network license users.
  • Automatic grouping of Boundary faces for quicker workflow.
  • New Face display for better understanding of Topology.
  • Faster and more robust block extrusion for tubular geometries (ducts, arteries, volutes, etc.).

Major Highlights of Version 8.1

License Monitoring System

Network / Float Licenses

The License Management System now has GUI access to most of the features that a user or a system admin would look for. The License Manager GUI displays all the license-related information. When the user loads the license file and starts the license manager, all initialization is completed automatically. The license manager also displays the number of licenses in use and the MAC id/hostname of each user holding a license.

Node-locked / Served Licenses

The client license management system is now packaged along with the GUI. When the GUI is opened for the first time, a license popup appears asking the user to upload the license and initialize. The initialization process runs in the background and then opens the GUI. This removes the need to go through the list of specific commands in section 9.11 of the utility manual.

Smart Face Groups to Enhance user workflow in GridPro

The quest to improve user experience and provide easy access to entities continues, and the current version makes a major stride in this direction. From version 8.1 onwards, a set of smart face-group selections is available as part of the Selection Panel. From the blocking, the algorithm calculates the boundary faces and groups them based on certain checks. These face groups are displayed, and the user can select a single group or a combination of groups to further modify the structure or assign it to surfaces.

The selection pane also has a temporary selection group to provide flexibility in the workflow. Previously, the user had to create a group in order to select entities in the GL. The present version offers an alternative workflow in which the user can right-click and drag in the GL to select faces or blocks. The selected blocks/faces/edges/corners are stored in the Selection Group, which is overwritten when the next selection is made. However, the user has the option to move the selection into one of the permanent groups.

Topology now has Face Display for Better Visualization

The topology now has a face display along with the corners and edges. The face display helps the user better perceive the faces and blocks, both as displayed in the GL and as grouped in individual groups. To reassure the user of the topology entities selected, the display mode is automatically changed to face display mode in the following scenarios.

  • User selects corners and edges into a group.
  • Wrap displays the new faces created after an operation.
  • Copy shows the blocks that are created when faces are copied.
  • Extrude displays the output blocks created.

There are many such scenarios where the user is provided feedback on the operations visually.

Fast Blocking for Tubular Geometries (Arteries, Ducts, etc)

The improved centreline evaluation tool is now robust and fast. This speeds up topology building for geometries like pipes, human arteries, and ducts. The algorithm extrudes the given input along the centreline of the geometry, respecting changes in the cross-sectional area. The algorithm is now available under the Extrude option in the GUI.

For more details about the new features, enhancements, and bug fixes, please refer to:

Supported Platforms

GridPro WS works on Windows 7 and above, Ubuntu 12.04 and above, RHEL 5.6 and above, and macOS 10 and above.

The support for the 32-bit platform has been discontinued for all operating systems.

GridPro AZ will be discontinued from version 9 onwards.

Download

GridPro Version 8.1 can be downloaded by registering here.

All tutorials can be found in the Doc folder in the GridPro installation directory. Alternatively, they can be downloaded from the link here.

All earlier software versions can be found in the Downloads section.


The post GridPro Version 8.1 Released appeared first on GridPro Blog.

► Turbopumps – A Unique Rotating Machine
  10 Dec, 2021

Figure 1: Structured multi-block grid for turbopumps.

1050 words / 5 minutes read

Turbopumps help rockets achieve high power to weight ratio by feeding pressurized propellant to the rocket’s combustion chamber. The success of rocket launch missions is heavily influenced by the design of inducers in the turbopumps. 

The Rocket Challenge

Figure 2: Turbopump in a liquid propellant rocket engine configuration.

Human conquest of space has advanced at full speed over the last few years. According to Forbes, there are 10,000 space companies globally. This massive growth has triggered competition in the global space transportation business, which is fueling innovation. Reports indicate that nearly 59% of rocket launch failures are due to propulsion system failures. In this article, we will discuss how the industry is focused on increasing reliability and reducing the development cost of launch vehicles by improving the design of liquid-propellant rocket engines.

The main cause of propulsion system failure is instability in the combustor or turbopump. It is estimated that up to 50% of a rocket development program's cost goes into the design and development of turbopumps.

Failed Missions

Despite many years of extensive research, unsteady cavitation instabilities in turbopumps remain a significant problem and are not entirely understood. Further, there are no well-established procedures for predicting their onset during the early design phase.

Cavitation instabilities, which can trigger severe loads and vibrations within turbopumps, cause engine thrust fluctuations and sometimes even total mechanical failure. Historically, cavitation instabilities have caused failed missions in almost all rocket development programs, including Apollo (NASA), Space Shuttle main engines (NASA), Fastrac (NASA), Vulcain (ESA), and LE-7 (JAXA).

Hence, it is critical to identify the mechanisms governing cavitation instabilities to pave the way for principle-based design guidelines for inducers that suppress cavitation instabilities in turbopumps. The beneficial outcome of this exercise will be more affordable, reliable, and higher-performance turbopumps.

Figure 3: Cavitation in turbopump inducers.

Why Turbopump in Rockets?

Liquid oxidizers and fuels like hydrogen or methane must be fed into the combustion chamber at a pressure higher than the chamber pressure and at a sufficient flow rate. This is done either by pressurizing the tanks or by using a pump. Pressurized tanks tend to be heavy and bulky and are less preferred since they add to the overall rocket system weight.

On the other hand, turbopumps serve as a better alternative due to their compact nature and low weight. Rockets can achieve a high power-to-weight ratio since turbopumps only need a lightweight, low-pressure feed tank.

Challenges in Turbopump Design

One of the main goals of a rocket designer is to maximize the deliverable payload. Maintaining high thrust chamber pressure and reducing the inert weight of the rocket to a minimum can help achieve this goal. Reduction in system weight is possible by lowering the turbopump size and mass. But to maintain the same pressure and flow rate, the turbopump needs to run at a high rotational speed. Unfortunately, running at high speed leads to cavitation problems. Coming up with ways to mitigate this issue is a critical design challenge.

Figure 4: Inducer and impeller of a rocket engine turbopump.

The second design challenge arises when the turbopumps are expected to work in off-design conditions. Such a need arises because rocket engines often face varying thrust requirements during their flight. For example, the designer needs to make appropriate design decisions to alleviate vibration problems due to cavitation when the liquid pressure drops below the vapour pressure. Hence, coming up with ways to reduce performance degradation under cavitating conditions is essential.

Other design challenges, which arise in greater magnitudes than in compressors, include high radial and axial thrusts, leakages, increased disk friction, etc. It is up to the designer to manage the tradeoffs and make specific design choices to overcome these problems.

Figure 5: Inducer and impeller of a turbopump.

Importance of Inducer in Turbopumps

Designing small and compact turbopumps rotating at high speed can reduce the total weight of rockets. However, at higher speeds cavitation sets in, causing machine noise and vibration, erosion, loss of head and efficiency, etc.

An anti-cavitation component called the inducer is placed axially upstream of the impeller to overcome these challenges. The inducer, acting as a pre-pump, increases the pressure of the fluid by an amount sufficient to minimize cavitation and improve the performance of the impeller. Inducers are sometimes expected to sacrifice themselves to safeguard the impeller blades from cavitation.

Unlike the impeller blades, the inducer blades are fewer in number and are longer and wider. Further, they have larger stagger angles, increased pitch between blades, high blade solidity, and usually small angles of incidence.

With these unique features, inducer blades suffer minimal blockage due to cavitation, thereby allowing them to operate under very low suction pressure conditions without deteriorating the pump performance. In general, inducers have a minimal effect on the efficiency and head of the pump but offer a dramatic impact on the cavitation performance. Further, they reduce noise and vibration. More importantly, inducers decrease the pump's critical NPSH by a factor of more than three.
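
To make the NPSH discussion concrete, here is a small sketch of the standard available-NPSH estimate (static head above vapour pressure plus the inlet velocity head); the numbers below are illustrative placeholders, not data from any particular turbopump.

```python
# Available net positive suction head: NPSH_a = (p_inlet - p_vapor)/(rho*g) + v_inlet^2/(2*g)
# All input values below are illustrative placeholders.
g = 9.81                 # m/s^2

def npsh_available(p_inlet, p_vapor, rho, v_inlet):
    """Static pressure head above vapour pressure plus the velocity head at the pump inlet."""
    return (p_inlet - p_vapor) / (rho * g) + v_inlet**2 / (2.0 * g)

# Example: liquid-oxygen-like properties (placeholder values)
rho = 1140.0             # kg/m^3
p_inlet = 300e3          # Pa, absolute static pressure at the inducer inlet
p_vapor = 100e3          # Pa, vapour pressure at the operating temperature
v_inlet = 20.0           # m/s, axial inlet velocity

print(f"NPSH_a = {npsh_available(p_inlet, p_vapor, rho, v_inlet):.2f} m")
# An inducer that lowers the pump's required NPSH lets the feed tank supply a lower p_inlet
# (i.e., a lighter, lower-pressure tank) while keeping NPSH_a above NPSH_required.
```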

Figure 6: Structured multi-block mesh for an inducer.

Parting Remarks

Cavitation surge and inlet backflows are inevitable in turbopumps; the practical goal is to suppress them as far as possible. Suppression can be achieved by using an obstruction plate or by connecting a smaller-diameter suction pipe upstream of the inducer. Backflow suppression helps to narrow the onset range of cavitation surge. Even if surges occur, their amplitudes are weakened and subdued by the suppression devices, which helps achieve improved surge performance.

However, these two suppression techniques are effective when the flow rates are healthy but show their limitations at extremely low flow rates. For such extreme conditions, researchers recommend combining these suppression methods with inducer blade shapes designed to reduce inlet backflows.

Further Readings

  1. Meshing of Rocket Engine Nozzles for CFD
  2. Spiked Blunt Bodies for Hypersonic Flights
  3. Know Your Mesh for Reentry Vehicles



The post Turbopumps – A Unique Rotating Machine appeared first on GridPro Blog.

► Aircraft Vortex Generators – The Nacelle Strakes
  14 Oct, 2021

Figure 1: Structured multi-block meshing for aircraft vortex generators – the nacelle strakes.

1100 words / 5 minutes read

Introduction

In modern transport aircraft, underwing engine nacelle installation is the most common design choice. Here, the engine nacelles, which are tightly coupled with the wing, have a huge impact on the maximum lift and stall angle of the wing. With the use of larger bypass ratio engines over the years, the adverse effects of the nacelle on the wing's performance have increased dramatically, especially when the high-lift devices are deployed.

The nacelles hamper the wing's desired performance by triggering premature, massive flow separation on the main element, decreasing CL_max and the stall angle. We cover more of this in our earlier article Engine Nacelle Aerodynamics.

In order to attenuate the negative influence of the nacelle, most aircraft manufacturers employ vortex generators, called strakes or more popularly known as chines, at appropriate locations on the nacelle.

Nacelle strakes are small delta-shaped or triangular panel sheets positioned strategically on the nacelle to induce longitudinal vortices. In short, vortex generators mounted on nacelles are called strakes.

Usually, a pair of strakes is mounted on the nacelle to generate additional vortices that control the flow separation on the wing. Depending on the mounting location and the nacelle-pylon-wing flow field, the generated strake vortices can prevent the formation of the slower nacelle vortex or sometimes even interact with the nacelle vortex and increase its axial core speed. They thus affect the position and strength of the installation vortices, leading to an increase in maximum achievable lift. Since strakes directly influence the wing's lift generation capabilities, their design demands careful attention.

Figure 2: a. Aircraft vortex generators: Single strake. b. Double strake. Image source Ref [1].

Effectiveness of strake installation

For underwing nacelle configurations without strakes, at alphas near stall, a large zone of low-energy flow develops above the main wing. This low-energy zone is created by the nacelle blocking the flow from passing over the upper surface of the wing at high alphas. Any further increase in the angle of attack results in premature flow separation.

Figure 3: Surface oil flow visualization. Reduction in upwash flow due to nacelle strakes. a. Strake off. b. Strakes on. Image source Ref [3].

When strakes are installed, the flow field is made more conducive to achieving higher lift by two mechanisms. Firstly, the nacelle strakes reduce the nacelle upwash and thereby relieve the adverse flow effects at the wing-pylon intersection. Figures 3 and 4 show the reduction in upwash and the reduction in cross-flow separation on the nacelle near the pylon junction.

Figure 4: Aircraft vortex generators: Nacelle strakes particle traces. Image source Ref [3].

By a second mechanism, the strake vortices provide a downwash on the upper surface of the wing, which energizes the boundary layer and eliminates the low-energy zone. This happens as the high-kinetic-energy strake vortex passes through the low-energy zone and the neighboring high-total-pressure air rushes in. In this way, the flow is reenergized and flow separation is delayed. This positive effect of strake installation can clearly be seen in Figure 5. At an alpha beyond stall, the configuration without strakes is stalled, while in the configuration with strakes, the flow separation is suppressed and the stall is delayed.

Figure 5: Total pressure coefficient contours. a. Without strake. b. With strake. Image source Ref [4].

This positive effect of strake installation can also be seen in the Cp distribution shown in Figure 6. Here we can notice the elimination of flow separation on the upper surface of the main wing and the flap with the strakes mounted. As an outcome, the lift on the main wing and flap is recovered. Further, the maximum lift is enhanced and the stall is delayed. Studies show that nearly 60 to 70% of the loss in maximum lift can be recovered, and an improvement of 0.3 in lift coefficient and 3 degrees in stall angle is possible by using strakes.

Figure 6: Cp distribution at 35% spanwise station. Image source Ref [4].

In one study, the use of a single strake improved the stall angle by 1 degree but without any large change in maximum lift. However, adding another strake was observed to increase the maximum lift from 2.26 to 2.3. When a third strake on the nacelle lip was introduced, the maximum lift became 2.34 and the stall angle increased by a further 1 degree.

Parametric design of nacelle strake

The effectiveness of the strakes is directly related to the strake’s geometry and installation location. The strength and trajectory of the strake vortex depend on the strake area, deflection angle, axial position, and azimuth location.

Figure 7: Parametric variants of nacelle strakes. Variants generated based on changing axial position and area. Image source Ref [4].

Figure 7 shows a parametric study where the axial location and the area were varied. Strake 2 is observed to achieve higher lift than the other configurations. Strakes 2, 1, and 4 have the same area, but their axial positions are sequentially farther from the nacelle's trailing edge. As can be observed in Figure 8a, the maximum lift coefficient also decreases in the same order. This implies that the strake's axial location is a key factor in determining its stall-delay capability. The closer the strake is placed to the nacelle trailing edge, the higher the achievable lift coefficient.

Also, strake 2 and strake 3 have the exact same position, but strake 3 has an area two-thirds that of strake 2. Since the location is the same, there is hardly any difference in lift coefficient between strakes 2 and 3 before stall. However, after stall, the strake with the smaller area (strake 3) produces an abrupt drop in lift coefficient.

Figure 8: a. CL vs alpha plot. b. Total pressure coefficient contours for different strake geometries. Image source Ref [4].

From Figure 8b, we can observe that strake 4 is least effective in controlling the flow. Careful observation reveals that the vortex generated by strake 2 is strongest among all while that from strake 4 is the weakest. From Figures 8a and 8b we can conclude that the strength of the strake vortex is another key factor that affects the strake’s performance. And there is a direct correlation between the strake vortex axial strength and its installation location.

Figure 9: Surface streamlines around different strake variants. Image source Ref [4].

Studies of the local flow fields using surface streamlines reveal that the circumferential velocity component decreases as the distance between the strake and the nacelle trailing edge increases. This means the strength of the strake vortex is determined by the strake's local angle of attack. It is for this reason that the strake 2 vortex is the strongest while the strake 4 vortex is the weakest.

With these observations, we can conclude that the axial positioning of the strake determines the circumferential component of the flow, which in turn determines the strake's local angle of attack. For a fixed azimuthal position, the local alpha is a key factor influencing the strength of the vortex. In turn, the strake's vortex strength is a key factor in its effectiveness at delaying the stall.

Figure 10: Multi-block surface mesh using GridPro on the nacelle in the near vicinity of the strakes.

Figure 11: Multi-block structured surface mesh on the strakes using GridPro.

Parting thoughts

Even though nacelle strakes are proven devices for enhancing lift on underwing-mounted nacelle configurations, they are observed to be less effective for larger UHBR nacelles. For larger bypass ratio engines, they are unable to energize the flow sufficiently to keep it attached to the wing surface. For such nacelles, researchers are working on active flow control devices, such as pulsed jet blowing, to control flow separation.

Nevertheless, strakes, which have been successfully deployed by aircraft manufacturers around the world for many decades, will continue to be used on small and medium-sized aircraft because of their simplicity, cost-effectiveness, and, more importantly, their effectiveness in controlling the flow.

Further Reading

  1. Engine Nacelle Aerodynamics
  2. Role of Vortex Generators in Diffuser S-Ducts of Aircraft

References

1. “Modelling the aerodynamics of propulsive system integration at cruise and high-lift conditions”, Thierry Sibilli, PhD Academic Year: 2011-2012, Cranfield University.

2. “CFD Prediction of Maximum Lift Effects on Realistic High-Lift-Commercial-Aircraft-Configurations within the European project EUROLIFT II”, H. Frhr. v. Geyr et al, Second Symposium “Simulation of Wing and Nacelle Stall”, June 22nd – 23rd, 2010, Braunschweig, Germany.

3. “Navier-Stokes Analysis of a High Wing Transport High-Lift Configuration With Externally Blown Flaps”, Jeffrey P. Slotnick et al, NASA.

4. “Numerical Research of the Nacelle Strake on a Civil jet“, Wensheng Zhang et al, 28TH International Congress of the Aeronautical Sciences, ICAS 2012.


The post Aircraft Vortex Generators – The Nacelle Strakes appeared first on GridPro Blog.

Hanley Innovations top

► Aerodynamics of a golf ball
  29 Mar, 2022

 Stallion 3D is an aerodynamics analysis software package that can be used to analyze golf balls in flight. The software runs on MS Windows 10 & 11 and can compute the lift, drag and moment coefficients to determine the trajectory.  The STL file, even with dimples, can be read directly into Stallion 3D for analysis.


What we learn from the aerodynamics:

  • The spinning golf ball produces lift and drag similar to an airplane wing
  • Trailing vortices can be seen at the "wing tips"
  • The extra lift helps the ball to travel further

Stallion 3D's strengths are:

  • The built-in Reynolds Averaged Navier-Stokes equations provide high fidelity CFD solutions
  • The grid is generated automatically 
  • Built-in  menus are used to specify speed, angle, altitude and even spin
  • Built-in visualization
  • The computed numbers are used to determine the trajectory of the ball (see the sketch after this list)
  • The software runs on your laptop or desktop under Windows 7, 10 and 11
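
As a rough illustration of how lift and drag coefficients translate into a flight path, here is a minimal 2D trajectory integration with constant CD and CL (Magnus lift from backspin). The coefficient values, ball properties, and launch conditions are generic placeholders, not Stallion 3D output.

```python
# Minimal 2D golf-ball trajectory with constant drag and (backspin) lift coefficients.
# CD, CL and launch conditions are generic placeholders, not results from any solver.
import math

rho, g = 1.225, 9.81                 # air density (kg/m^3), gravity (m/s^2)
m, d = 0.0459, 0.0427                # ball mass (kg) and diameter (m)
A = math.pi * (d / 2) ** 2           # frontal area (m^2)
CD, CL = 0.25, 0.20                  # placeholder aerodynamic coefficients

v0, angle = 70.0, math.radians(12)   # launch speed (m/s) and launch angle
vx, vz = v0 * math.cos(angle), v0 * math.sin(angle)
x, z, dt = 0.0, 0.0, 1e-3

while z >= 0.0:
    V = math.hypot(vx, vz)
    q = 0.5 * rho * V * A            # dynamic-pressure factor (per unit velocity)
    drag_x, drag_z = -q * CD * vx, -q * CD * vz          # opposes the velocity
    lift_x, lift_z = -q * CL * vz,  q * CL * vx          # perpendicular to velocity (backspin)
    ax = (drag_x + lift_x) / m
    az = (drag_z + lift_z) / m - g
    vx, vz = vx + ax * dt, vz + az * dt
    x, z = x + vx * dt, z + vz * dt

print(f"carry distance ~ {x:.0f} m")
```
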
More information about Stallion 3D can be found at https://www.hanleyinnovations.com
Thanks for reading 🙋

► Accurate Aircraft Performance Predictions using Stallion 3D
  26 Feb, 2020


Stallion 3D uses your CAD design to simulate the performance of your aircraft.  This enables you to verify your design and compute quantities such as cruise speed, power required and range at a given cruise altitude. Stallion 3D is used to optimize the design before moving forward with building and testing prototypes.

The table below shows the results of Stallion 3D around the cruise angles of attack of the Cessna 402c aircraft.  The CAD design can be obtained from the OpenVSP hangar.


The results were obtained by simulating 5 angles of attack in Stallion 3D on an ordinary laptop computer running MS Windows 10.  Given the aircraft geometry and flight conditions, Stallion 3D computed the CL, CD, L/D and other aerodynamic quantities.  With these accurate aerodynamic results, preliminary performance data such as cruise speed, power, range and endurance can be obtained.
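
For readers who want to reproduce this kind of performance table from their own CL/CD results, here is a hedged sketch of the standard point-performance relations (lift equals weight in cruise, power required equals drag times speed). The weight, wing area, density, and polar values are placeholders, not the actual Cessna 402c data used above.

```python
# Point-performance estimate from aerodynamic coefficients (placeholder aircraft data).
import math

W = 28_000.0        # aircraft weight, N       (placeholder)
S = 21.0            # wing reference area, m^2 (placeholder)
rho = 0.905         # air density at ~10,000 ft, kg/m^3

def performance_point(CL, CD):
    """Cruise speed and power required for a given (CL, CD) pair, assuming L = W."""
    V = math.sqrt(2.0 * W / (rho * S * CL))   # speed at which this CL supports the weight
    D = 0.5 * rho * V**2 * S * CD             # drag force
    return V, D * V                           # true airspeed (m/s), power required (W)

for CL, CD in [(0.45, 0.035), (0.60, 0.040), (0.80, 0.050)]:   # placeholder polar points
    V, P = performance_point(CL, CD)
    print(f"CL={CL:.2f}  V={V*1.944:5.1f} kt  P_req={P/1000:6.1f} kW  L/D={CL/CD:4.1f}")
```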

Lift Coefficient versus Angle of Attack computed with Stallion 3D


Lift to Drag Ratio versus True Airspeed at 10,000 feet


Power Required versus True Airspeed at 10,000 feet

The Stallion 3D results show good agreement with the published data for the Cessna 402. For example, the cruise speed of the aircraft at 10,000 feet is around 140 knots. This coincides with the speed at the maximum L/D (best range) shown in the graph and table above.

 More information about Stallion 3D can be found at the following link.
http://www.hanleyinnovations.com/stallion3d.html

About Hanley Innovations
Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit > http://www.hanleyinnovations.com


► 5 Tips For Excellent Aerodynamic Analysis and Design
    8 Feb, 2020
Stallion 3D analysis of Uber Elevate eCRM-100 model

Being the best aerodynamics engineer requires meticulous planning and execution.  Here are 5 steps you can follow to start your journey to becoming one of the best aerodynamicists.

1.  Airfoil analysis (VisualFoil) - the wing will not be better than the airfoil. Start with the best airfoil for the design.

2.  Wing analysis (3Dfoil) - know the benefits/limits of taper, geometric & aerodynamic twist, dihedral angles, sweep, induced drag and aspect ratio.

3. Stability analysis (3Dfoil) - longitudinal & lateral static & dynamic stability analysis.  If the airplane is not stable, it might not fly (well).

4. High Lift (MultiElement Airfoils) - airfoil arrangements can do wonders for takeoff, climb, cruise and landing.

5. Analyze the whole arrangement (Stallion 3D) - this is the best information you will get until you flight test the design.

About Hanley Innovations
Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit > http://www.hanleyinnovations.com

► Accurate Aerodynamics with Stallion 3D
  17 Aug, 2019

Stallion 3D is an extremely versatile tool for 3D aerodynamics simulations.  The software solves the 3D compressible Navier-Stokes equations using novel algorithms for grid generation, flow solutions and turbulence modeling. 


The proprietary grid generation and immersed boundary methods find objects arbitrarily placed in the flow field and then automatically place an accurate grid around them without user intervention. 


Stallion 3D algorithms are fine-tuned to analyze inviscid flow with minimal losses. The above figure shows the surface pressure of the BD-5 aircraft (obtained from the OpenVSP hangar) using the compressible Euler algorithm.


Stallion 3D solves the Reynolds Averaged Navier-Stokes (RANS) equations using a proprietary implementation of the k-epsilon turbulence model in conjunction with an accurate wall function approach.


Stallion 3D can be used to solve problems in aerodynamics about complex geometries in subsonic, transonic and supersonic flows.  The software computes and displays the lift, drag and moments for complex geometries in the STL file format.  Actuator discs (up to 100) can be added to simulate prop wash for propeller and VTOL/eVTOL aircraft analysis.



Stallion 3D is a versatile and easy-to-use software package for aerodynamic analysis.  It can be used for computing performance and stability (both static and dynamic) of aerial vehicles including drones, eVTOL aircraft, light airplanes and dragons (above graphics via Thingiverse).

More information about Stallion 3D can be found at:



► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse


Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only the STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator disc (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2D plots of Cp and other quantities along constant-coordinate lines on the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit http://www.hanleyinnovations.com/stallion3d.html or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017


Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.
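
To illustrate the kind of output such a digitizer produces, here is a small sketch that writes a set of (x, y) airfoil ordinates in the Selig-style plain-text layout commonly used by the UIUC airfoil database (name on the first line, then one x y pair per line running from the trailing edge over the upper surface to the leading edge and back along the lower surface). The coordinates below are placeholder points, and the exact formats accepted by Airfoil Digitizer may differ.

```python
# Write airfoil ordinates in a Selig-style .dat layout (placeholder coordinates).
def write_selig_dat(name, coords, path):
    """coords: list of (x, y) pairs ordered TE -> upper surface -> LE -> lower surface -> TE."""
    with open(path, "w") as f:
        f.write(name + "\n")
        for x, y in coords:
            f.write(f"{x:8.5f} {y:8.5f}\n")

# Tiny placeholder point set (a real digitized airfoil would have ~60-200 points)
points = [(1.0, 0.001), (0.5, 0.06), (0.0, 0.0), (0.5, -0.04), (1.0, -0.001)]
write_selig_dat("Digitized airfoil (example)", points, "example_airfoil.dat")
```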

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.




More information about the software can be found at the following url:
http://www.hanleyinnovations.com/airfoildigitizerhelp.html

Thanks for reading.


CFD and others... top

► A Benchmark for Scale Resolving Simulation with Curved Walls
  28 Jun, 2021

Multiple international workshops on high-order CFD methods (e.g., 1, 2, 3, 4, 5) have demonstrated the advantage of high-order methods for scale-resolving simulation such as large eddy simulation (LES) and direct numerical simulation (DNS). The most popular benchmark from the workshops has been the Taylor-Green (TG) vortex case. I believe the following reasons contributed to its popularity:

  • Simple geometry and boundary conditions;
  • Simple and smooth initial condition;
  • Effective indicator for resolution of disparate space/time scales in a turbulent flow.

Using this case, we are able to assess the relative efficiency of high-order schemes over a 2nd order one with the 3-stage SSP Runge-Kutta algorithm for time integration. The 3rd order FR/CPR scheme turns out to be 55 times faster than the 2nd order scheme to achieve a similar resolution. The results will be presented in the upcoming 2021 AIAA Aviation Forum.
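
For reference, the smooth initial condition that makes the TG vortex so convenient is the classic single-mode velocity field below; this sketch simply evaluates it on a uniform grid (NumPy assumed). The reference length, velocity, density, and background pressure are placeholder values written in my own notation, not the exact setup used in the workshops.

```python
# Classic Taylor-Green vortex initial condition on [0, 2*pi*L]^3 (standard incompressible form).
import numpy as np

L, V0, rho0, p0 = 1.0, 1.0, 1.0, 100.0   # reference length, velocity, density, pressure (placeholders)
N = 64                                    # grid points per direction (illustrative)

x = np.linspace(0.0, 2.0 * np.pi * L, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

u =  V0 * np.sin(X / L) * np.cos(Y / L) * np.cos(Z / L)
v = -V0 * np.cos(X / L) * np.sin(Y / L) * np.cos(Z / L)
w =  np.zeros_like(u)
p =  p0 + rho0 * V0**2 / 16.0 * (np.cos(2 * X / L) + np.cos(2 * Y / L)) * (np.cos(2 * Z / L) + 2.0)
```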

Unfortunately the TG vortex case cannot assess turbulence-wall interactions. To overcome this deficiency, we recommend the well-known Taylor-Couette (TC) flow, as shown in Figure 1.

 

Figure 1. Schematic of the Taylor-Couette flow (r_i/r_o = 1/2)

The problem has a simple geometry and boundary conditions. The Reynolds number (Re) is based on the gap width and the inner wall velocity. When Re is low (~10), the problem has a steady laminar solution, which can be used to verify the order of accuracy for high-order mesh implementations. We choose Re = 4000, at which the flow is turbulent. In addition, we mimic the TG vortex by designing a smooth initial condition, and also employing enstrophy as the resolution indicator. Enstrophy is the integrated vorticity magnitude squared, which has been an excellent resolution indicator for the TG vortex. Through a p-refinement study, we are able to establish the DNS resolution. The DNS data can be used to evaluate the performance of LES methods and tools. 
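
For completeness, the enstrophy referred to here can be written as the volume integral of the squared vorticity magnitude; one common normalization divides by the domain volume and a reference density, though the exact scaling used in the benchmark may differ:

```latex
\mathcal{E}(t) \;=\; \frac{1}{\rho_0\,|\Omega|}\int_{\Omega} \rho\,
\frac{\boldsymbol{\omega}\cdot\boldsymbol{\omega}}{2}\,\mathrm{d}V,
\qquad \boldsymbol{\omega} = \nabla \times \mathbf{u}.
```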

 

Figure 2. Enstrophy histories in a p-refinement study

A movie showing the transition from a regular laminar flow to a turbulent one is posted here. One can clearly see vortex generation, stretching, tilting, and breakdown in the transition process. Details of the benchmark problem have been published in Advances in Aerodynamics.
► The Darkest Hour Before Dawn
    2 Jan, 2021

Happy 2021!

The year of 2020 will be remembered in history more than the year of 1918, when the last great pandemic hit the globe. As we speak, daily new cases in the US are on the order of 200,000, while the daily death toll oscillates around 3,000. According to many infectious disease experts, the darkest days may still be to come. In the next three months, we all need to do our very best by wearing a mask, practicing social distancing and washing our hands. We are also seeing a glimmer of hope with several recently approved COVID vaccines.

2020 will be remembered more for what Trump tried and is still trying to do, to overturn the results of a fair election. His accusations of wide-spread election fraud were proven wrong in Georgia and Wisconsin through multiple hand recounts. If there was any truth to the accusations, the paper recounts would have uncovered the fraud because computer hackers or software cannot change paper votes.

Trump's dictatorial habits were there for the world to see in the last four years. Given another 4-year term, he might just turn a democracy into a Trump dictatorship. That's precisely why so many voted in the middle of a pandemic. Biden won the popular vote by over 7 million, and won the electoral college in a landslide. Many churchgoers support Trump because they dislike Democrats' stances on abortion, LGBT rights, et al. However, if a Trump dictatorship becomes reality, religious freedom may not exist any more in the US. 

Is the darkest day going to be January 6th, 2021, when Trump will make a last-ditch effort to overturn the election results in the Electoral College certification process? Everybody knows it is futile, but it will give Trump another opportunity to extort money from his supporters.   

But, the dawn will always come. Biden will be the president on January 20, 2021, and the pandemic will be over, perhaps as soon as 2021.

The future of CFD is, however, as bright as ever. On the front of large eddy simulation (LES), high-order methods and GPU computing are making LES more efficient and affordable. See a recent story from GE.

the darkest hour is just before dawn...

► Facts, Myths and Alternative Facts at an Important Juncture
  21 Jun, 2020
We live in an extraordinary time in modern human history. A global pandemic did the unthinkable to billions of people: a nearly total lock-down for months.  Like many universities in the world, KU closed its doors to students in early March of 2020, and all courses were offered online.

Millions watched in horror when George Floyd was murdered, and when a 75 year old man was shoved to the ground and started bleeding from the back of his skull...

Meanwhile, Trump and his allies routinely ignore facts, fabricate alternative facts, and advocate often-debunked conspiracy theories to push his agenda. The political system designed by the founding fathers is assaulted from all directions. The rule of law and the free press are attacked on a daily basis. One often wonders how we managed to get to this point, and if the political system can survive the constant sabotage...It appears the struggle between facts, myths and alternative facts hangs in the balance.

In any scientific discipline, conclusions are drawn, and decisions are made based on verifiable facts. Of course, we are humans, and honest mistakes can be made. There are others, who push alternative facts or misinformation with ulterior motives. Unfortunately, mistaken conclusions and wrong beliefs are sometimes followed widely and become accepted myths. Fortunately, we can always use verifiable scientific facts to debunk them.

There have been many myths in CFD, and quite a few have been rebutted. Some have continued to persist. I'd like to refute several in this blog. I understand some of the topics can be very controversial, but I welcome fact-based debate.

Myth No. 1 - My LES/DNS solution has no numerical dissipation because a central-difference scheme is used.

A central finite difference scheme is indeed free of numerical dissipation in space. However, the time integration scheme inevitably introduces both numerical dissipation and dispersion. Since DNS/LES is unsteady in nature, the solution is not free of numerical dissipation.  

Myth No. 2 - You should use non-dissipative schemes in LES/DNS because upwind schemes have too much numerical dissipation.

It sounds reasonable, but it is far from true. We all agree that fully upwind schemes (the stencil shown in Figure 1) are bad. Upwind-biased schemes, on the other hand, are not necessarily bad at all. In fact, in a numerical test with the Burgers equation [1], the upwind-biased scheme performed better than the central difference scheme because of its smaller dispersion error. In addition, the numerical dissipation in the upwind-biased scheme makes the simulation more robust since under-resolved high-frequency waves are naturally damped.

Figure 1. Various discretization stencils for the red point
The Riemann solver used in the DG/FR/CPR scheme also introduces a small amount of dissipation. However, because of its small dispersion error, it outperforms the central difference and upwind-biased schemes. This study shows that dissipation and dispersion characteristics are equally important. Higher-order schemes clearly perform better than a low-order, non-dissipative central difference scheme.
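
To see the dissipation/dispersion trade-off for yourself, here is a minimal 1D sketch (not the solver or the test case from [1]) that advects a Gaussian with a 2nd-order central stencil versus a 3rd-order upwind-biased stencil, both with the same explicit Runge-Kutta time integration; NumPy is assumed.

```python
# 1D linear advection u_t + a u_x = 0 on a periodic domain: central vs upwind-biased stencils.
import numpy as np

N, a, cfl, periods = 128, 1.0, 0.4, 5
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
nsteps = int(round(periods / (cfl * dx)))      # integer number of domain traversals
dt = periods / (a * nsteps)
u0 = np.exp(-200.0 * (x - 0.5) ** 2)           # Gaussian initial profile

def ddx_central(u):                            # 2nd-order central difference
    return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def ddx_upwind_biased(u):                      # 3rd-order upwind-biased stencil (a > 0)
    return (np.roll(u, 2) - 6.0 * np.roll(u, 1) + 3.0 * u + 2.0 * np.roll(u, -1)) / (6.0 * dx)

def advect(ddx):
    u = u0.copy()
    for _ in range(nsteps):                    # classical RK4 in time
        k1 = -a * ddx(u)
        k2 = -a * ddx(u + 0.5 * dt * k1)
        k3 = -a * ddx(u + 0.5 * dt * k2)
        k4 = -a * ddx(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

for name, ddx in [("central", ddx_central), ("upwind-biased", ddx_upwind_biased)]:
    err = np.abs(advect(ddx) - u0).max()       # exact solution returns to u0 after full periods
    print(f"{name:14s} max error = {err:.3e}")
```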

Myth No. 3 - The Smagorinsky model is a physics-based sub-grid-scale (SGS) model.

There have been numerous studies based on experimental or DNS data which show that the SGS stress produced with the Smagorinsky model does not correlate with the true SGS stress. The role of the model is then to add numerical dissipation to stabilize the simulations. The model coefficient is usually determined by matching a certain turbulent energy spectrum. This fact suggests that the model is purely numerical in nature, but calibrated for certain numerical schemes using a particular turbulent energy spectrum. This calibration is not universal, because many simulations have produced worse results with the model.
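
For reference, the model in question computes a sub-grid eddy viscosity from the resolved strain rate and the grid scale; the standard form is given below, though filter-width and constant conventions vary between codes:

```latex
\nu_t \;=\; (C_s\,\Delta)^2\,\lvert \bar{S} \rvert,
\qquad
\lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\qquad
\bar{S}_{ij} = \tfrac{1}{2}\!\left(\frac{\partial \bar{u}_i}{\partial x_j}
+ \frac{\partial \bar{u}_j}{\partial x_i}\right).
```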

► What Happens When You Run a LES on a RANS Mesh?
  27 Dec, 2019

Surely, you will get garbage because there is no way your LES will have any chance of resolving the turbulent boundary layer. As a result, your skin friction will be way off. Therefore, your drag and lift will be a total disaster.

To actually demonstrate this point of view, we recently embarked upon a numerical experiment to run an implicit large eddy simulation (ILES) of the NASA CRM high-lift configuration from the 3rd AIAA High-Lift Prediction Workshop. The flow conditions are: Mach = 0.2, Reynolds number = 3.26 million based on the mean aerodynamic chord, and the angle of attack = 16 degrees.

A quadratic (Q2) mesh was generated by Dr. Steve Karman of Pointwise, and is shown in Figure 1.

 Figure 1. Quadratic mesh for the NASA CRM high-lift configuration (generated by Pointwise)

The mesh has roughly 2.2 million mixed elements, and is highly clustered near the wall with an average equivalent y+ value smaller than one. A p-refinement study was conducted to assess the mesh sensitivity using our high-order LES tool based on the FR/CPR method, hpMusic. Simulations were performed with solution polynomial degrees of p = 1, 2 and 3, corresponding to 2nd, 3rd and 4th orders in accuracy respectively. No wall-model was used. Needless to say, the higher order simulations captured finer turbulence scales, as shown in Figure 2, which displays the iso-surfaces of the Q-criteria colored by the Mach number.    

Figure 2. Iso-surfaces of the Q-criteria colored by the Mach number (panels, top to bottom: p = 1, p = 2, p = 3)

Clearly the flow is mostly laminar on the pressure side, and transitional/turbulent on the suction side of the main wing and the flap. Although the p = 1 simulation captured the least scales, it still correctly identified the laminar and turbulent regions. 

The drag and lift coefficients from the present p-refinement study are compared with experimental data from NASA in Table I. Although the 2nd order results (p = 1) are quite different than those of higher orders, the 3rd and 4th order results are very close, demonstrating very good p-convergence in both the lift and drag coefficients. The lift agrees better with experimental data than the drag, bearing in mind that the experiment has wind tunnel wall effects, and other small instruments which are not present in the computational model. 

Table I. Comparison of lift and drag coefficients with experimental data

              CL       CD
p = 1         2.020    0.293
p = 2         2.411    0.282
p = 3         2.413    0.283
Experiment    2.479    0.252


This exercise seems to contradict the common-sense logic stated at the beginning of this blog. So what happened? The answer is that in this high-lift configuration, the dominant force is due to pressure rather than friction. In fact, 98.65% of the drag and 99.98% of the lift are due to the pressure force. For such flow problems, running an LES on a RANS mesh (with sufficient accuracy) may produce reasonable predictions in drag and lift. More studies are needed to draw any definite conclusion. We would like to hear from you if you have done something similar.
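
The pressure/friction split quoted above comes from decomposing the surface force integral into its pressure and viscous parts before normalizing (written here in standard form):

```latex
\mathbf{F} \;=\; \underbrace{\oint_{S} -\,p\,\mathbf{n}\,\mathrm{d}A}_{\text{pressure}}
\;+\; \underbrace{\oint_{S} \boldsymbol{\tau}_w\,\mathrm{d}A}_{\text{friction}},
\qquad
C_D = \frac{\mathbf{F}\cdot\hat{\mathbf{e}}_{D}}{\tfrac{1}{2}\rho_\infty U_\infty^2\,S_{\mathrm{ref}}},
\quad
C_L = \frac{\mathbf{F}\cdot\hat{\mathbf{e}}_{L}}{\tfrac{1}{2}\rho_\infty U_\infty^2\,S_{\mathrm{ref}}}.
```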

This study will be presented in the forthcoming AIAA SciTech conference, to be held on January 6th to 10th, 2020 in Orlando, Florida. 


► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but dissipative for non-resolvable high wave numbers. In this way, the simulation will resolve a wide turbulent spectrum, while damping out the non-resolvable small eddies to prevent energy pile-up, which can drive the simulation divergent.

We want to emphasize the equal importance of both numerical dissipation and dispersion, which can be generated from both the space and time discretizations. It is well-known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., explicit Runge-Kutta schemes.     

We recently analysed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, with the same time integration approach (a Runge-Kutta scheme) and the same time step.  The FCD schemes have an 8th order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees-of-freedom (DOFs) in Figure 1.  The initial condition is a Gaussian profile and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the difference.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error from the central FD scheme caused it to miss the peak, and also generate large errors elsewhere.

Finally simulation results with the viscous Burgers' equation are shown in Figure 2, which compares the energy spectrum computed with various schemes against that of the direct numerical simulation (DNS). 

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme with a significant high-wave number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high-wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. It is obvious that the upwind-biased FD scheme out-performed the central FD scheme since it resolved the same range of wave numbers without the energy pile-up. 
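
For readers who want to reproduce this kind of comparison, here is a minimal sketch (NumPy assumed) of how a 1D kinetic-energy spectrum is typically computed from a periodic velocity field via the FFT; normalization conventions vary, so treat the scaling as illustrative, and the field below is a placeholder rather than a Burgers solution.

```python
# 1D kinetic-energy spectrum of a periodic velocity field u(x) (illustrative normalization).
import numpy as np

N = 512
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(x) + 0.1 * np.sin(8 * x) + 0.01 * np.random.randn(N)   # placeholder field

u_hat = np.fft.rfft(u) / N            # one-sided Fourier coefficients
k = np.arange(u_hat.size)             # integer wavenumbers for a 2*pi-periodic domain
E = 0.5 * np.abs(u_hat) ** 2          # energy per mode
E[1:-1] *= 2.0                        # account for the conjugate (negative-k) modes

for kk in (1, 8, 32):
    print(f"k = {kk:3d}   E(k) = {E[kk]:.3e}")
```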


► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease in mesh generation, and load balancing on parallel architectures. DG and related methods such as SD and FR/CPR have received much attention because of their geometric flexibility and scalability. They have matured to become quite robust for a wide range of applications. 

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes at a transonic condition with a Reynolds number of more than 1M. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs), and costs about 1/3 the CPU time of the 2nd order simulation, which has 28.7M DOFs, using the commercial solver. Furthermore, the 3rd order simulation is much more accurate, as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster to achieve a similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), is announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try for your problems, and you may be surprised. Academic and trial uses are completely free. Just visit hocfd.com to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

AirShaper top

► How CFD can optimize VTOL drone design
    7 Sep, 2022
Optimizing the aerodynamic behaviour of VTOL drones is becoming more important as the application of drones continues to increase
► Analysing the Audi TT spoiler
  12 Aug, 2022
We used AirShaper software to analyse the aerodynamic effect of the Audi TT spoiler, which was added to solve the original TT's high-speed stability issues.
► From supercar to hypercar - the Lamborghini Rayo
    4 Jul, 2022
7x Design used AirShaper shape optimization software to convert a Lamborghini Huracan supercar to a Lamborghini Rayo hypercar achieving speeds of more than 300mph
► The largest full size wind tunnel in the world - NASA
  30 May, 2022
NASA has the largest full-size wind tunnel in the world at the National Full-Scale Aerodynamics Complex (NFAC). But what are the challenges and benefits of testing at full scale?
► Electric Aviation - The Byfly Seaplane
  26 Apr, 2022
We are designing a versatile all-electric amphibious aircraft to transport people and goods. The amphibious aircraft concept we are developing is a flying boat type and it uses a hull-based fuselage to meet the hydrodynamic requirements for the aircraft to take-off and land on the water bodies. Operation from land airports is achieved through the retractable landing gear system onboard the aircraft.
► Helping teams understand the aero of a Trans-Am TA2 Mustang
  19 Apr, 2022
TFB Performance used RapidScan and AirShaper to understand and optimise the aerodynamics of the Trans-Am TA2 racecar for its customers

Convergent Science Blog top

► Tamara Gammaidoni Wins 2022 CONVERGE Academic Competition With Air-Cooled Battery Pack Simulation
    1 Jul, 2022

Tamara Gammaidoni

We’re thrilled to announce that Tamara Gammaidoni, a graduate student at the Università degli Studi di Perugia, has won the 2022 CONVERGE Academic Competition. The competition challenged students to design and run a novel CONVERGE simulation that demonstrated significant engineering knowledge, accurately reflected the real world, and represented progress for the engineering community.

“We are incredibly proud of the work produced by the students competing in this year’s CONVERGE Academic Competition,” said Hannah Leystra, University Relationship Specialist and competition director at Convergent Science. “Tamara’s winning project is especially noteworthy. She took on a challenging, globally relevant problem and developed a thoughtful and insightful simulation.”

Tamara is pursuing a master’s degree in mechanical engineering. “CFD is something that always fascinated me since the beginning of my studies,” she said. “Fortunately in my master’s program, we have a professor of fluid dynamics, Michele Battistoni, who introduced us to CONVERGE.”

For the academic competition, Tamara investigated an air-cooled battery pack (Figure 1). “The automotive industry is rapidly evolving, and much of the focus is on electric vehicles,” Tamara said. She was first introduced to the problem of battery thermal management a few years ago on her Formula SAE UniPG Racing Team, where part of their pilot project was studying battery cooling for an electric vehicle. Properly cooling a battery pack is key for optimal performance and to ensure safe operating conditions.

Figure 1: Battery pack geometry with temperature contours.

“The main goal of my project was to determine which parameters most affect the simulation. Because simulating a real battery pack can be time consuming and expensive, it’s important to know which parameters are useful to change and optimize,” Tamara said.

The battery pack geometry, material properties, and thermal data used as inputs came from an experimental setup by the University of Singapore.1 Tamara conducted a steady-state simulation using CONVERGE’s super-cycling feature to reduce the computational cost. In addition, Tamara employed Adaptive Mesh Refinement to capture large gradients in the temperature field, and she applied fixed embedding around the battery cells and copper bus bars to increase the simulation accuracy. 

With this case setup, Tamara explored several different parameters, starting with the wall treatment for the battery surfaces and for the surrounding casing. Based on the y+ value, she concluded that enhanced wall treatment would be the most accurate model for the battery surfaces. Indeed, when she compared the results between standard wall treatment and enhanced wall treatment, she found that standard wall treatment under-predicted the average temperature of the battery cells (Figure 2). 
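
As a rough guide to the kind of y+ check described here, a flat-plate correlation is often used to estimate the wall-adjacent cell size before running a case; the sketch below uses one such correlation with placeholder flow values, not the conditions of Tamara's simulation.

```python
# First-cell-height estimate for a target y+ using a turbulent flat-plate skin-friction correlation.
# All flow values are placeholders; the correlation is only a pre-run estimate.
import math

U, L = 5.0, 0.3            # representative velocity (m/s) and length scale (m)
nu = 1.6e-5                # kinematic viscosity of air (m^2/s)
y_plus_target = 1.0        # e.g., for enhanced (low-Re) wall treatment

Re = U * L / nu
cf = 0.058 * Re ** -0.2                    # flat-plate skin-friction estimate
u_tau = U * math.sqrt(0.5 * cf)            # friction velocity
y1 = y_plus_target * nu / u_tau            # wall distance giving the target y+

print(f"Re = {Re:.2e}, u_tau ~ {u_tau:.3f} m/s, first cell height ~ {y1*1000:.3f} mm")
```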

Figure 2: Comparison of different wall treatments for the battery surface.

For the casing, Tamara tested three different boundary conditions: convection, adiabatic, and law of the wall. Figure 3 shows that these boundary conditions had a significant effect on the battery temperature. For this simulation, Tamara determined that the convection boundary condition was the most realistic.

Figure 3: Comparison of different wall boundary conditions for the casing.

Next, Tamara investigated the effect of grid size on her results. She determined that a base grid size of 0.004×0.004×0.004 m provided sufficient resolution, as her results didn’t change significantly with a more refined grid. In addition, Tamara tested out CONVERGE’s inlaid meshing feature to add a boundary layer mesh around the battery cells. She found that the inlaid mesh approach didn’t provide additional benefits for her case, so she opted to stick with fixed embedding. 

Having determined the optimal parameters for her case, Tamara compared the average battery cell temperatures obtained from her CONVERGE simulation with experimental measurements from the University of Singapore.1 As you can see in Figure 4, the data matched well across a range of mass flow rates.

Figure 4: Comparison of the average temperature on the surface of the battery cells obtained from the CONVERGE simulation (blue) and the experimental measurements1 (black).

The video below, provided courtesy of Tamara, presents a visual overview of her simulation, including a closer look at the geometry, mesh, and temperature and velocity results.

Visual overview of Tamara’s air-cooled battery pack simulation. Video provided courtesy of Tamara Gammaidoni.

“Overall, this project provides useful information on how to make a valid model to simulate an air-cooled battery pack,” Tamara said. “In the future, the simulation could be improved by adopting the anisotropic conductivity and electric potential models.” 

This work was made possible by Prof. Michele Battistoni and Dr. Jacopo Zembi, who introduced Tamara to CONVERGE and gave her the opportunity to run her simulation on the Università degli Studi di Perugia cluster. 

Tamara is now beginning to work on her master’s thesis, which will focus on conjugate heat transfer modeling of an electric motor using CONVERGE. We look forward to seeing more of her impressive work in the future!

To receive updates about upcoming CONVERGE Academic Competitions, please email capcompetition@convergecfd.com

References

[1] Saw, L.H., Ye, Y., Tay, A.A.O., Chong, W.T., Kuan, S.H., and Yew, M.C., “Computational Fluid Dynamic and Thermal Analysis of Lithium-Ion Battery Pack With Air Cooling,” Applied Energy, 177, 783-792, 2016. DOI: 10.1016/j.apenergy.2016.05.122

► Smooth Sailing: Analyzing Marine Propellers With CONVERGE
  28 Jun, 2022

I grew up in a port town on Lake Superior, the largest of the Great Lakes. As a kid, I would watch, spellbound, as cargo ships pulled into the harbor, amazed by their sheer looming size. As an adult, I’m no less in awe. These great metal giants can be laden with a hundred thousand tons of goods and conveyed through the water by propellers as tall as two- or three-storey buildings—truly a marvel of engineering. Moreover, cargo ships are vital to the global economy, transporting some 90% of goods worldwide. Shipping is by far the greatest enabler of global trade, but the industry is having to contend with tightening emissions regulations. To both meet demand and comply with legislation, designing increasingly efficient and effective marine propulsion systems is imperative—and one of the core factors in ship performance is the propeller.

CONVERGE CFD software offers many advantages for analyzing and optimizing propeller designs. With fully autonomous meshing, CONVERGE quickly generates a high-quality computational mesh for even the most complex propeller geometry. CONVERGE employs a stationary mesh that is regenerated locally at each time step to seamlessly accommodate the propeller motion. In addition, CONVERGE includes robust models for multi-phase flows, fluid-structure interaction, and cavitation—all the tools you need to assess propeller performance. 

In this article, we’ll discuss how we applied CONVERGE to several different propeller cases. We’ll start with validation of CONVERGE’s steady-state and transient modeling capabilities on the Potsdam propeller test case (PPTC), in which the propeller is fully submerged. Then, we’ll apply CONVERGE to a more physically complex surface-piercing propeller simulation.

PPTC: Steady-State Analysis

Figure 1: Geometry of the SVA Potsdam controllable pitch propeller VP1304.

For our first validation case, we ran a steady-state simulation of the SVA Potsdam controllable pitch propeller VP1304 (Figure 1) and compared the results to published experimental data.1 The experimental measurements were obtained from open water tests carried out in a towing tank with a constant propeller rotational speed of 15 rps. The inflow velocity was varied to test different advance coefficients (J), and the thrust and torque of the blades were measured for each run. From these measurements, the open water characteristics were established as the thrust coefficient (KT), torque coefficient (KQ), and open water efficiency (η0).
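For readers who want to reproduce this kind of comparison, the open water quantities follow from standard definitions based on the thrust T, torque Q, rotation rate n, propeller diameter D, water density ρ, and advance speed V. The short Python sketch below uses those textbook formulas; the default values are illustrative only (the 15 rps rotation rate comes from the test description above, while the diameter and density are placeholder assumptions, not data from this article).

```python
import math

# Standard open water propeller coefficients.
# T: thrust [N], Q: torque [N*m], V: advance speed [m/s],
# n: rotation rate [rev/s], D: propeller diameter [m], rho: water density [kg/m^3]
def open_water_coefficients(T, Q, V, n=15.0, D=0.25, rho=998.0):
    J = V / (n * D)                           # advance coefficient
    KT = T / (rho * n**2 * D**4)              # thrust coefficient
    KQ = Q / (rho * n**2 * D**5)              # torque coefficient
    eta0 = (J / (2.0 * math.pi)) * (KT / KQ)  # open water efficiency
    return J, KT, KQ, eta0
```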

For our CONVERGE simulations, we employed the k-ω SST turbulence model, velocity-based Adaptive Mesh Refinement (AMR), and the multiple reference frame (MRF) approach for the moving geometry. 

Figure 2 shows the experimental and numerical thrust coefficients, torque coefficients, and open water efficiencies for varying advance coefficients. The CONVERGE results match the experimental data well across the full range of advance coefficients. 

Figure 2: Experimental and numerical thrust coefficient (KT), torque coefficient (KQ), and open water efficiency (η0) plotted against the advance coefficient (J).

PPTC: Transient Analysis

Following the steady-state PPTC validation, we ran a transient simulation to compare the predicted velocity field with published experimental measurements.2 Laser Doppler velocimetry (LDV) measurements were carried out in a cavitation tunnel, and the velocity was measured at several locations. 

CONVERGE’s autonomous meshing allowed us to resolve the rotating propeller geometry. As with the steady-state case, we used the k-ω SST turbulence model and velocity-based AMR. In the video below (Figure 3), the isosurfaces represent vorticity, and the mesh is shown on a plane perpendicular to the axis of the propeller. You can see how CONVERGE’s autonomous meshing accommodates the propeller motion and how AMR adjusts the resolution to capture the velocity field. 

Figure 3: Video of the transient PPTC CONVERGE simulation. The isosurfaces, plotted using the Q-criterion, visualize the vorticity. Velocity contours are plotted on a cut-plane passing through the propeller axis. The mesh is shown on a plane perpendicular to the axis, and the mesh lines are colored by velocity.
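For readers less familiar with the Q-criterion used in Figure 3: it measures where rotation dominates strain in the velocity-gradient tensor, so vortex cores show up as regions of positive Q. The NumPy sketch below evaluates Q for a single 3×3 velocity-gradient tensor and is a generic illustration of the definition, not CONVERGE's post-processing code.

```python
import numpy as np

def q_criterion(grad_u):
    """Q = 0.5 * (||Omega||^2 - ||S||^2) for a 3x3 velocity-gradient tensor du_i/dx_j.

    S is the strain-rate (symmetric) part and Omega the rotation (antisymmetric)
    part of grad_u; positive Q marks rotation-dominated regions (vortex cores).
    """
    S = 0.5 * (grad_u + grad_u.T)
    Omega = 0.5 * (grad_u - grad_u.T)
    return 0.5 * (np.sum(Omega**2) - np.sum(S**2))
```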

Figure 4 shows the experimental and numerical velocity results for two different radial positions at the plane x/D = 0.2. CONVERGE accurately captures the axial, tangential, and radial velocity trends. 

Surface-Piercing Propeller

Having validated that CONVERGE can accurately predict key performance factors for a submerged propeller, we then moved on to a more complicated scenario: a surface-piercing propeller. For this study, we used the same SVA Potsdam controllable pitch propeller VP1304 geometry. We took advantage of CONVERGE’s volume of fluid (VOF) approach, solved using the individual species solution method, to simulate the multi-phase flow. In addition, we applied a surface compression technique to track the air-water interface. As with the previous cases, we relied on CONVERGE’s autonomous meshing for easy case setup and AMR to help capture the important physical phenomena. In the video below (Figure 5), you can see that CONVERGE is able to capture the complex wake structure and maintain a sharp air-water interface.

Figure 5: Video of the surface-piercing propeller CONVERGE simulation. The isosurfaces represent a void fraction of 0.5. The mesh is shown on the mid-plane, and the mesh lines are colored by velocity.
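To make the surface compression idea more concrete, many VOF solvers add an anti-diffusive term to the volume fraction transport equation that acts only near the interface. A common textbook form is shown below; this is a generic formulation for illustration, and the exact scheme implemented in CONVERGE may differ.

```latex
\frac{\partial \alpha}{\partial t}
+ \nabla \cdot (\alpha \, \mathbf{u})
+ \nabla \cdot \bigl( \alpha (1 - \alpha) \, \mathbf{u}_r \bigr) = 0
```

Here α is the liquid volume fraction (the 0.5 isosurface in Figure 5 marks the air-water interface) and u_r is an artificial compression velocity directed normal to the interface; because the extra term is multiplied by α(1 − α), it vanishes away from the interface and leaves the bulk phases untouched.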

Conclusion

This study validated CONVERGE’s modeling capabilities for propeller simulations and demonstrated the software’s utility for complex cases. Incorporating CONVERGE’s fluid-structure interaction and cavitation modeling will enable more holistic studies of ship propulsion and propeller wear. Overall, CONVERGE is a powerful tool for assessing propeller performance, and autonomous meshing makes it easy to test different propeller designs. If you’re interested in trying CONVERGE for your own propeller simulations, contact us below!

References

[1] Barkmann, U.H., “Potsdam Propeller Test Case (PPTC): Open Water Tests With the Model Propeller VP1304,” Schiffbau-Versuchsanstalt Potsdam GmbH Report 3752, 2011. https://www.sva-potsdam.de/wp-content/uploads/2016/04/SVA_report_3752.pdf

[2] Mach, K.-P., “Potsdam Propeller Test Case (PPTC) – LDV Velocity Measurements With the Model Propeller VP1304,” Schiffbau-Versuchsanstalt Potsdam GmbH Report 3754, 2011. https://www.sva-potsdam.de/wp-content/uploads/2016/03/SVA-report-3754.pdf

► A Cool New Take on a Switched Reluctance Motor
    9 Jun, 2022

Author:
Eileen Wagner

Research Engineer

Interest in home improvement has soared since the start of the pandemic, along with demand for the requisite tools. Saws, drills, sanders, and routers—what kind of motor do they use? Ideally, one that is powerful, easy to control, lightweight, affordable, robust, and low maintenance. In practice, no single motor meets all of these requirements.

Brushed DC motors are commonly used in tools because they are cheap and the speed and torque can be easily adjusted. However, they require frequent maintenance due to brush wear. Brushless motors are another option, and they provide high efficiency and power density. A drawback of brushless motors is that they rely on rare-earth permanent magnets, which are costly, not suitable for high speed rotation, and susceptible to damage at high temperatures. These limitations have spurred a renewed interest in alternative designs, including switched reluctance machines.

Though switched reluctance motors (SRMs) date back to the mid-nineteenth century, they never gained widespread use. A major reason for that was the lack of precise controllers. Now, with the availability of improved electronics, SRMs are getting a second look. SRMs are unique in that the windings are placed on the stator instead of the rotor. This enables a simplified, rugged design and low-cost manufacturing. In addition, they lack permanent magnets, which helps to further reduce costs. To operate, the current in the stator windings is switched from pole to pole, generating a rotating magnetic field. The rotor poles seek to align with the moving magnetic field, following the path of least reluctance and thereby driving the motor's rotation. One potential problem with SRMs is low torque density, which can be overcome by increasing the current, though this also increases heat generation. Because heat can degrade the windings and insulation and, over time, reduce performance and efficiency, accurate simulations are crucial for guiding the development of improved designs.
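A compact way to see why the rotor follows the path of least reluctance is the textbook expression for reluctance torque in the magnetically linear (unsaturated) case, where L(θ) is the phase inductance as a function of rotor position and i is the phase current. This is a standard simplification for illustration, not a description of the JMAG model used in this study.

```latex
T_e(\theta, i) \;\approx\; \tfrac{1}{2}\, i^{2}\, \frac{\mathrm{d}L(\theta)}{\mathrm{d}\theta}
```

Because torque scales with i², raising the current raises torque regardless of current direction, but the I²R copper losses rise along with it, which is exactly the heating problem described above.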

What improvements can be made to the conventional SRM to efficiently power a hand-held tool? In this blog post, we present the results of a collaborative effort between JMAG, the Shibaura Institute of Technology, and Convergent Science to evaluate a self-cooling SRM. This motor has a non-axisymmetric salient pole rotor with five poles and a segmented stator with six slots. The spinning rotor generates wind to cool the stator and windings. No fan is required due to the self-cooling effect, enabling an increase in motor volume and torque while retaining a small size. The motor was designed and characterized at Shibaura.1 JMAG was used to calculate electromagnetic (copper and iron) losses, which were applied as heat sources in CONVERGE to predict the temperature rise in the solid components and model the self-cooling effect of rotor spinning.

CONVERGE is well-suited to electric motor cooling simulations. CONVERGE’s coupling with JMAG enables seamless import of the NASTRAN geometry file that includes the computed electromagnetic losses. Conjugate heat transfer modeling is highly efficient with transient super-cycling time control and completes in a fraction of the time required for a fully transient calculation. And finally, CONVERGE offers superior grid control capabilities with autonomous meshing and Adaptive Mesh Refinement. These features allow high-speed motion of the rotor and dynamic air flow patterns to be captured with ease.

The copper loss in the windings and the iron losses in the stator and rotor (calculated in JMAG) were modeled as volumetric heat sources in CONVERGE. A separate case where a 12A current was applied to the windings in the absence of rotor spinning was simulated to determine the contact resistance between the windings and the stator (Figure 1).

Figure 1. Electromagnetic loss data from JMAG was used as inputs in CONVERGE for conjugate heat transfer modeling.
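As a rough illustration of how electromagnetic losses become thermal inputs for conjugate heat transfer, the copper loss in a winding can be estimated as I²R and divided by the winding volume to give a volumetric heat source. The sketch below is hypothetical: only the 12 A test current comes from the article, while the resistance and winding volume are placeholder values rather than data from this study.

```python
# Hypothetical conversion of a winding copper loss into a volumetric heat source.
# Only the 12 A current comes from the article; resistance and winding volume
# are illustrative placeholders, not values from the JMAG/CONVERGE setup.
def volumetric_heat_source(current_A=12.0, resistance_ohm=0.15, winding_volume_m3=2.0e-5):
    copper_loss_W = current_A**2 * resistance_ohm   # P = I^2 * R
    return copper_loss_W / winding_volume_m3        # heat source in W/m^3
```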

Next, the self-cooling effect simulated in CONVERGE was compared to experimental results. In the high load case, self-cooling is minimal, while under low load, the effect is more pronounced. In all cases, the CONVERGE simulations match the experimental measurements within 5°C (Figure 2).

Figure 2. Comparison of simulated and experimental measurements of the self-cooling effect.

What about air flow? In this animation, the air flow path in the wake region is depicted with velocity vectors (Figure 3). Air enters from the radial direction and flows through the stator slots to cool the stator and windings. Conjugate heat transfer modeling with transient super-cycling depicts the increasing temperature of the solid stator during sustained motor operation.

Figure 3. Animation of SRM showing airflow velocity vectors and stator temperature increase.

The market for electric motors is expanding rapidly. Meeting this demand will require ongoing innovation—both a massive challenge and opportunity. With the combined capabilities of CONVERGE and JMAG, you’ll be equipped with powerful and efficient tools to drive the transition to a more electrified future.

Ready to simulate electric motor cooling? Contact us today!

References

1.   Koinuma, K., Aiso, K., and Akatsu, K., “A Novel Self Cooling SRM for Electric Hand Tools,” 2018 IEEE Energy Conversion Congress and Exposition (ECCE), Portland, OR, United States, Sep 23–27, 2018. DOI: 10.1109/ECCE.2018.8557901

► WIND TURBINE SIMULATIONS: ADVANCING THE STATE OF THE ART
  13 Apr, 2022

Author:
Jameil Kolliyil

Engineer, Documentation

Wind energy has emerged as one of the major types of renewable energy sources in recent times. In the United States alone, the past decade has seen a 15% growth per year in wind power capacity, making wind energy the largest source of renewable power in the U.S.1 And as the market for wind energy grows, wind turbines and farms are subsequently becoming larger (some of these wind turbines are over 100 meters tall!). With such large turbine structures, in addition to wake effects from other wind turbines, the effect of the atmospheric boundary layer (or ABL, which is the lowest part of the atmosphere directly influenced by the earth’s surface) also becomes significant. The turbulence in the ABL can affect the efficiency and lifetime of wind farms, and the wake flows from the farms can alter the structure of the ABL. So when designing and optimizing large wind farms, you have to consider the complex interaction between the farm and the ABL. 

So the all-important question is whether computational fluid dynamics can help you design better wind farms. Can it? Well, the short answer is yes, but the biggest hurdle to performing full-scale CFD simulations becomes immediately apparent: the massive disparity in length and time scales. In a comprehensive simulation of such systems, you will have length scales ranging from millimeters, corresponding to the thickness of the boundary layer on the turbine rotor, to tens of kilometers, corresponding to the size of a wind farm. Simulations of this magnitude would be expensive and resource-intensive, to put it mildly. The industry has therefore turned to actuator models. These models replace the rotor blade with lines (actuator line model) or discs (actuator disc model) that impose body forces corresponding to blade loading on the flow field. Meanwhile, a three-dimensional Navier-Stokes solver is used to simulate the flow field. This circumvents the need for a fine mesh around the rotor blade while maintaining adequate refinement to capture turbulence and wake characteristics. Developed in 2002 by Sorensen and Shen2, the actuator line model (ALM) has gained popularity in recent years and has been extensively used in wind turbine simulations. The main challenges in developing ALM involve determining the relative velocity at each discrete point on the actuator line and deciding how to project the aerodynamic forces back onto the flow field. State-of-the-art ALM codes use an interpolation method (where velocity is interpolated from nearby fluid points to AL points) or an integral method (where a force projection weighted velocity integral is used to retrieve the free upwind velocity) for calculating the relative velocity. A Gaussian function is used for projecting the aerodynamic forces onto the fluid flow.
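As a concrete example of the force projection step described above, the classic Sorensen and Shen formulation smears each blade-element force onto the flow with an isotropic Gaussian kernel of width ε. The NumPy sketch below illustrates that standard kernel only; it is not CONVERGE's implementation, and the improved approach discussed next replaces parts of this recipe.

```python
import numpy as np

def gaussian_force_projection(force, cell_centers, actuator_point, eps):
    """Smear one actuator-line point force onto fluid cells (standard Gaussian kernel).

    Each cell receives a body-force density f = F * eta(r), where
    eta(r) = exp(-(r/eps)**2) / (eps**3 * pi**1.5) and r is the distance from
    the actuator point. cell_centers: (N, 3); actuator_point: (3,); force: (3,).
    Returns an (N, 3) array of body-force density contributions.
    """
    r = np.linalg.norm(cell_centers - actuator_point, axis=1)
    eta = np.exp(-(r / eps) ** 2) / (eps**3 * np.pi**1.5)
    return np.outer(eta, force)
```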

At Convergent Science, we’re constantly pushing the envelope and looking to improve existing models. Dr. Shengbai Xie (Principal Research Engineer at Convergent Science) published a research paper in which he employed alternate velocity-sampling and force projection functions. Instead of interpolating velocity from nearby fluid points, Dr. Xie’s approach used a Lagrangian-averaged velocity sampling technique. Instead of a Gaussian force projection function, he used a piecewise function3. He implemented these modifications in CONVERGE CFD software and simulated a 5 MW NREL (National Renewable Energy Laboratory) reference wind turbine4. Figure 1 shows steady-state rotor power and torque predictions from other ALM implementations, Dr. Xie’s approach, and the reference curve from Jonkman et al., 20094.

Figure 1: Steady-state responses of (A) rotor power and (B) rotor torque as a function of wind speed for CONVERGE’s novel Lagrangian technique versus conventional approaches.

As you can see in Figure 1, Dr. Xie’s novel approach produces a better match to the reference curve when compared to the interpolation and integral methods. Dr. Alessandro Bianchini’s Wind Section group at the University of Florence has already employed this new approach to simulate a DTU 10 MW reference wind turbine; Figure 2 shows an animation from their work. You can find more information about the new approach and detailed comparisons with other ALM implementations in Dr. Xie’s research paper here.

Figure 2: Simulation of a DTU 10 MW reference wind turbine with Q-criterion isosurface to visualize vortices. Animation credit: Wind Section, REASE group, University of Florence.

CONVERGE’s trademark autonomous meshing, Adaptive Mesh Refinement (AMR), and smooth handling of moving geometries make it uniquely suitable for simulating wind turbines. Check out our wind turbine webpage to see how CONVERGE can bolster your wind turbine simulations! 

References

[1] Wind Energy Technologies Office, “Advantages and Challenges of Wind Energy”, https://www.energy.gov/eere/wind/advantages-and-challenges-wind-energy, accessed on Aug 10, 2021.

[2] Sorensen, J. N., Shen, W. Z., “Numerical modelling of wind turbine wakes,” J. Fluids Eng., 124, 393-399, 2002. DOI: 10.1115/1.1471361

[3] Xie, S., “An actuator-line model with Lagrangian-averaged velocity sampling and piecewise projection for wind turbine simulations,” Wind Energy, 1-12, 2021. DOI: 10.1002/we.2619

[4] Jonkman, J., Butterfield, S., Musial, W., Scott, G., “Definition of a 5-MW reference wind turbine for offshore system development,” NREL, 2009, https://www.nrel.gov/docs/fy09osti/38060.pdf

► Urea Deposits: Risk Assessment or Direct Prediction?
  29 Mar, 2022

Co-Author:
Scott Drennan

Director of Aftertreatment Applications

Co-Author:
Pengze Yang

Senior Research Engineer

Prevention of solid deposit formation in urea/Selective Catalytic Reduction (SCR) aftertreatment systems is a primary concern for design engineers. There are significant resource and reputation costs associated with urea deposits if they arise in the field. It is imperative that aftertreatment system designers evaluate and mitigate the potential for urea deposit formation.

Though we colloquially refer to “urea deposits”, the actual deposit species are byproducts of urea decomposition. Within a narrow temperature range, ammelide and cyanuric acid (CYA) form hard crystalline deposits of considerable size on the walls of the exhaust system. These crystalline structures are exceedingly difficult to remove once they form, and the deposits decompose only at very high temperatures. Computational fluid dynamics (CFD) tools can provide valuable design information to mitigate urea deposit formation if the tool is both accurate and fast enough to meet tight design schedules.

Urea deposits form over a long period of operation. Some deposits are not even visible for up to five minutes of operation time, and many experiments consider runs of more than one hour. Running CFD for one hour of simulated time is not currently feasible. However, we can get closer to a fully developed film prediction and steady formation rate of deposits with a few minutes of simulation time. Unfortunately, even simulating several minutes with traditional CFD approaches is unacceptably expensive in a production environment, perhaps taking many weeks. Of course, an overnight run would be best, but a valuable result would be worth a few days of wall-clock time. How do we speed things up? How do we ensure that our result will be a valuable one? CONVERGE offers two key answers to those questions.

First Principles: Urea Decomposition Chemistry

A urea-water solution (UWS), also called Diesel Exhaust Fluid (DEF), is injected upstream of the SCR catalyst as a feedstock for the ammonia needed to reduce NOx. At exhaust gas temperatures, after the solvent water evaporates, the urea thermally decomposes into ammonia (NH3) and isocyanic acid (HNCO), as seen on the left side of Figure 1. The HNCO then hydrolyzes into ammonia and carbon dioxide, either before the SCR or inside of it. The most common method of modeling urea decomposition is to treat the urea, after the evaporation of water, as a molten solid that decomposes into gaseous ammonia and HNCO1. This decomposition happens in both droplets and films.

Figure 1: Detailed urea kinetics reaction diagram2.
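For reference, the left-hand branch of Figure 1 corresponds to two well-established global reactions: thermolysis of the molten urea followed by hydrolysis of the resulting isocyanic acid. Written out (a simplified summary, not the full detailed mechanism used in CONVERGE):

```latex
\mathrm{(NH_2)_2CO \;\longrightarrow\; NH_3 + HNCO} \quad \text{(thermolysis)}
\mathrm{HNCO + H_2O \;\longrightarrow\; NH_3 + CO_2} \quad \text{(hydrolysis)}
```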

Unfortunately, urea films at temperatures from 130°C to 165°C can form crystalline deposits that require very high temperatures to remove. Deposit removal is accomplished through decomposition, as shown in the right side of Figure 1. Biuret is formed first, which then converts into CYA and ammelide. These latter species require temperatures as high as 360°C to decompose.

Challenges of Urea Deposit Risk Assessment

Traditionally, deposit formation has not been simulated directly due to long simulation times and the lack of accurate deposit kinetics that we mentioned earlier. Many aftertreatment system modelers use a CFD approach that focuses on assessing the risk of urea deposit formation based on film temperatures and other local film conditions. Urea deposit risk models are highly empirical, requiring extensive tuning of key urea film parameters such as temperature, shear, etc. The payoff: after investing significant time in tuning the parameters, users can predict which walls are at risk for deposit formation. There is no information provided on the growth rate, shape, or composition of the deposit, or on other key parameters designers need to know. At Convergent Science, we invested in integrating an accurate detailed decomposition mechanism for urea into CONVERGE, then worked to accelerate simulation speeds to make direct prediction of urea deposits possible at reasonable runtimes.

CONVERGE’s detailed decomposition mechanism for urea was originally developed by our partners at IFP Energies nouvelles. Prior validation work by IFPEN has shown the urea mechanism to be accurate in several fundamental validation cases (e.g., heated decomposition, or single drop and spray ammonia conversion2).

CONVERGE’s detailed urea decomposition model has been used successfully to determine where deposits will form on a commercial validation case for a medium-duty diesel engine with Isuzu-Americas3, shown in Figure 2. Isuzu had a wide range of experimental data on deposits, and CONVERGE was used to determine which of the cases formed deposits of the crystalline species biuret, ammelide, and CYA. Isuzu was able to determine the composition of the deposits, with the ratio of CYA to ammelide being an important parameter. These experimental results for the ratio of CYA to ammelide in the deposit are well predicted by the CONVERGE detailed decomposition simulation for the location of the sample.

Now that we have a fully-coupled modeling capability with accurate deposit chemistry, spray-wall interactions, and conjugate heat transfer for accurate metal and film temperatures, it’s time to speed things up to address the long runtimes needed in deposit simulations.

Figure 2: Isuzu-Americas medium-duty diesel engine urea deposit location and chemical analysis match CONVERGE urea detailed decomposition prediction for ratio of cyanuric acid to ammelide3.

Speedup: Fixed Flow Advantages

We can harness our understanding of typical urea/SCR systems to optimize our solution strategy and provide a dramatic simulation speedup. The DEF injection duration is short compared to the interval between pulses, and the spray momentum flux is very small compared to the gas momentum flux. Therefore, the gas flow is relatively consistent in between the spray pulses. CONVERGE has implemented what we call a fixed flow solver approach to take advantage of this quasi-steady-state behavior. This technique exploits the disparity in time scales, solving the full spray and Navier-Stokes equations only during the spray pulse. The flow state is then fixed for the interval between sprays. Simulations for urea/SCR applications are sped up many times with fixed flow, achieving up to 30 seconds of simulated time per day of wall-clock time on a typical commercial aftertreatment system (e.g., approximately 2 million cells on 96 processors).
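A minimal sketch of the fixed flow idea is shown below, written as generic Python logic rather than actual CONVERGE usage; in practice the behavior is selected through solver inputs, and the callback names here are hypothetical placeholders.

```python
# Generic illustration of the fixed flow time-stepping strategy for pulsed
# urea sprays. The callbacks are hypothetical placeholders, not CONVERGE APIs.
def run_fixed_flow(t_end, dt, spray_active, solve_flow_and_spray, advance_film_and_chemistry, state):
    """spray_active(t) -> bool; each callback advances `state` by dt and returns it."""
    t = 0.0
    while t < t_end:
        if spray_active(t):
            # During a spray pulse: solve the full Navier-Stokes equations with the spray.
            state = solve_flow_and_spray(state, dt)
        else:
            # Between pulses: hold the gas flow fixed and advance only the wall
            # film, evaporation, and deposit chemistry.
            state = advance_film_and_chemistry(state, dt)
        t += dt
    return state
```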

Such an achievement in speedup must be properly validated to determine the effects on accuracy. The most common validation case to demonstrate accuracy in urea spray, splash, film heat transfer and evaporation, and metal temperature prediction is the Birkhold filming spray-wall validation case4. This case has a pulsed urea/water spray impinging on a thin flat metal plate, with hot air flowing above and below the plate. A thermocouple is located at the leading edge of the film pool. Successful validation of this case requires prediction of the initial slope of the temperature drop during dry cooling, capturing the temperature when films begin to form with a rapid temperature drop, and the final temperature as the film becomes fully developed. The CONVERGE fixed flow results for the Birkhold filming case are shown on the left side of Figure 3, and we see good agreement on all three key behaviors5. Note that achieving this accuracy requires some initial tuning of the Kuhnke splash model constant. However, once tuned, the same model constants produced accurate results for the Birkhold non-filming cases.

Figure 3: Fixed flow results for Birkhold spray-wall interactions. Results for the filming case on left and non-filming cases on right5.

Putting it Together: Speed and Accurate Deposit Chemistry

Detailed decomposition of urea in CFD offers the promise of moving from empirical predictions of risk to actual predictions of deposit formation through detailed chemistry. CONVERGE’s detailed decomposition model for urea has been validated against fundamental urea validation cases2,6 and in commercial cases3.

A recent validation of CONVERGE’s urea deposit prediction ability was conducted using experimental data from Prof. Deutschmann’s group at the Karlsruhe Institute of Technology7. In this experiment, the system was operated for many minutes at three different exhaust gas temperatures and DEF spray conditions (see Figure 4). The available experimental data in this study include the outer wall temperature of the exhaust pipe, the shape of the deposit, and the chemical composition of the deposit. Unfortunately, no information was available on the mass of the deposit formed.

Figure 4: Urea deposit experimental layout, operating conditions, and deposit composition7 compared to CONVERGE simulation8.

The CONVERGE deposit simulations included fixed flow for speed and the accurate spray-wall interaction, conjugate heat transfer with super-cycling, and the decomposition of urea mechanism for accuracy of wall film temperature and deposit chemistry. The simulations predicted the outer wall temperature quite well, including the location and shape of the wall film (see Figure 5). The predictions of the crystalline deposit species also matched quite nicely with the experiment. CONVERGE achieved 30 seconds of simulation time per day for this nearly 2 million-cell model when coupled with the fixed flow approach.

Figure 5: Comparison of outer wall temperature at the location of the deposit (experiment on left and CONVERGE results on right).

The next step in urea deposit predictions is to achieve deposit growth estimates based on accurate deposit species growth rates in a fully-developed urea film. The growth rates of species such as biuret, ammelide, and CYA must be calculated at their exact location, allowing prediction of the shape, size, and weight of the deposit. This deposit growth projection capability must predict the mass, location, and speciation of deposits over many minutes or hours in duration. Many commercial customers are now using CONVERGE to conduct urea deposit predictions and for comparison to their experimental data.

Figure 6: Urea deposit growth projections based on the species growth rates predicted by CONVERGE.
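One simple way to picture the projection in Figure 6 is as a time integration of the per-species growth rates at each film location. The sketch below is an illustrative forward-Euler accumulation with made-up rates; it is not the projection method used in CONVERGE.

```python
# Illustrative accumulation of deposit mass from per-species growth rates.
# The rates and time step are placeholders, not results from this study.
def integrate_deposit_mass(rates_kg_per_s, dt_s, n_steps):
    """rates_kg_per_s: e.g. {"biuret": 1e-9, "CYA": 5e-10, "ammelide": 2e-10}."""
    mass = {species: 0.0 for species in rates_kg_per_s}
    for _ in range(n_steps):
        for species, rate in rates_kg_per_s.items():
            mass[species] += rate * dt_s   # forward-Euler accumulation per species
    return mass
```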

It is important that the urea decomposition model be fully coupled with the spray, film, gas, and metal models to obtain the accurate film conditions that are required to correctly predict the chemical kinetics. The main drawback of this full coupling had been the computational cost associated with the film kinetic calculation.

Summary

CONVERGE’s detailed decomposition model is now delivering the same simulation speed as the lower-fidelity molten-solid model and urea deposit risk approach. The detailed decomposition model is a direct and fully coupled calculation of the deposit species of interest (i.e., CYA, ammelide, and biuret). Therefore, it is a more fundamental approach, requiring little or no tuning for accurate predictions of urea deposits. The urea deposit risk model was once the state of the art, and it delivered value to design engineers, but we now have something better that is just as fast. Why just assess risk when you can know when, where, and what kind of deposits are formed?

References

[1] Quan, S., Wang, M., Drennan, S., Strodtbeck, J., and Dahale, A., “A Molten Solid Approach for Simulating Urea-Water Solution Droplet Depletion,” ILASS Americas 27th Annual Conference on Liquid Atomization and Spray Systems, Raleigh, NC, United States, May 17–20, 2015.

[2] Habchi, C., Quan, S., Drennan, S.A., and Bohbot, J., “Towards Quantitative Prediction of Urea Thermo-Hydrolysis and Deposits Formation in Exhaust Selective Catalytic Reduction (SCR) Systems,” SAE Paper 2019-01-0992, 2019. DOI: 10.4271/2019-01-0992

[3] Sun, Y., Sharma, S., Vernham, B., Shibata, K., and Drennan, S., “Urea Deposit Predictions on a Practical Mid/Heavy Duty Vehicle After Treatment System,” SAE Paper 2018-01-0960, 2018. DOI: 10.4271/2018-01-0960

[4] Birkhold, F., Meingast, U., Wassermann, P., and Deutschmann, O., “Modeling and Simulation of the Injection of Urea-Water-Solution for Automotive SCR DeNOx-Systems,” Applied Catalysis B: Environmental, 70, 119-127, 2007. DOI: 10.1016/j.apcatb.2005.12.035

[5] Maciejewski, D., Sukheswalla, P., Wang, C., Drennan, S.A., and Chai, X., “Accelerating Accurate Urea/SCR Film Temperature Simulations to Time-Scales Needed for Urea Deposit Predictions,” SAE Paper 2019-01-0982, 2019. DOI: 10.4271/2019-01-0982

[6] Ebrahimian, V., Nicolle, A., and Habchi, C., “Detailed Modeling of the Evaporation and Thermal Decomposition of Urea-Water Solution in SCR Systems,” AIChE Journal, 58(7), 1998-2009, 2011. DOI: 10.1002/aic.12736

[7] Brack, W., Heine, B., Birkhold, F., Kruse, M., and Deutschmann, O., “Formation of Urea-Based Deposits in an Exhaust System: Numerical Predictions and Experimental Observations on a Hot Gas Test Bench,” Emission Control Science and Technology, 2, 115-123, 2016. DOI: 10.1007/s40825-016-0042-2

[8] Yang, P. and Drennan, S., “Predictions of Urea Deposit Formation With CFD Using Autonomous Meshing and Detailed Urea Decomposition,” SAE Paper 2021-01-0590, 2021. DOI: 10.4271/2021-01-0590

► 2021: Making Waves with CONVERGE
  30 Dec, 2021

2021 was a complicated year. The second full year of the pandemic offered reasons for hope and optimism, along with times of hardship and uncertainty. I sincerely hope that this next year is a turning point in the pandemic and that we see significant improvement around the world.

Despite the continuing pandemic, there have been exciting developments and opportunities for Convergent Science this past year. We are releasing CONVERGE 3.1, a major version of our software that includes many new features and enhancements. We strengthened relationships with our partners and collaborators, and forged new ones with universities around the world through our CONVERGE Academic Program. We were honored to receive several awards, and we have pushed further into new market segments and application areas. All the while, we have continued to strive to improve CONVERGE in a way that best meets your simulation needs and to provide our customers with the best possible support.

CONVERGE 3.1 Release

Offshore wind turbine simulated using CONVERGE’s implicit FSI approach and mooring model.

We’re pleased to be releasing a new major version of our software: CONVERGE 3.1. During development of this version, we focused on expanding CONVERGE’s physical modeling capabilities, improving user experience, and simplifying the workflow for advanced simulations. We added several new volume of fluid (VOF) modeling approaches for multi-phase flows that reduce numerical diffusion at fluid interfaces and enable you to simulate the separation of phases or immiscible liquids under the influence of gravity. CONVERGE 3.1 also offers implicit fluid-structure interaction (FSI) modeling, which increases the stability of the solver when simulating floating objects or simulating fluids and solids with similar densities. To complement this capability, CONVERGE 3.1 contains tools to generate realistic wind and wave fields. This set of features opens the door to many offshore and marine applications, such as floating offshore wind turbines and boat or ship hulls.

CONVERGE 3.1’s multi-stream simulation capability allows you to apply different solver settings and physical models to different regions of the domain. Using the multi-stream approach, you can model complex, multi-physics problems in a single simulation, which offers a simpler workflow than running multiple independent simulations. Another workflow enhancer in 3.1 is the ability to couple CONVERGE with ParaView Catalyst to perform in situ post-processing of your simulation results. You’ll find many other enhancements in CONVERGE 3.1, including moving inlaid meshes, the capability to simulate solid particles, and more flexibility for wall motion. We’re very excited about this new release, and we think it will greatly benefit users across many application areas.

Award-Winning Collaborations

At Convergent Science, we’re dedicated to creating innovative tools and methods that industry can leverage to accelerate the development of cutting-edge technology. We couldn’t achieve this goal without the invaluable collaborations we have with world-class institutions and companies. This year, several of our collaborative projects were recognized for their merit and contributions to the research community and society at large.

TCF Award

This summer, Convergent Science and Argonne National Laboratory were awarded funding through the U.S. Department of Energy’s 2021 Technology Commercialization Fund (TCF) to continue developing a deep learning framework called ChemNODE, which accelerates detailed chemistry CFD simulations for reacting flows. The goal of ChemNODE is to enable engineers to use detailed mechanisms that, compared to skeletal mechanisms, provide more predictive results for combustion simulations, without incurring such a large computational expense. 

HPCwire Awards

In the fall, Convergent Science received two 2021 HPCwire Awards. 

With Aramco Research Center – Detroit and Argonne National Laboratory, we received the 2021 Editors’ Choice Award for Best Use of HPC in Industry. We were recognized for our work using high-performance computing and CONVERGE simulations to evaluate engine cold-start operations, during which the majority of emissions are formed in modern vehicles. We achieved a 26% improvement in combustion efficiency at cold conditions for a heavy-duty engine.

The second HPCwire Award we received was for a collaborative project with Argonne National Laboratory and Parallel Works. Together, we have been developing an automated machine learning-genetic algorithm (ML-GA) approach to accelerate design optimization and virtual prototyping. We coupled ML-GA with CONVERGE to perform a design optimization of a gasoline compression ignition engine and found that this approach sped up the process by ten times compared to the industry standard.

Convergent Science Partner Updates

Another key way we are able to deliver top-notch products to our customers is through our partnerships. In 2021, we strengthened our partnership with Tecplot as we work to provide a seamless simulation workflow from pre- to post-processing. Tecplot for CONVERGE is included with a CONVERGE license, and now users have the convenient option to buy a full Tecplot 360 license directly from our Convergent Science sales team. 

This year we also introduced GT-CONVERGE, a specialized version of CONVERGE that is fully integrated into GT-SUITE. GT-CONVERGE replaced our previous GT-SUITE product, CONVERGE Lite, and offers many more features and greater functionality, including conjugate heat transfer, a steady-state solver, automatic export of 3D visualization slices, enhanced wall models, and much more. 

Computational Chemistry Consortium

Another collaborative effort, the Computational Chemistry Consortium (C3) concluded Phase 1 of operations in 2021, culminating in the public release of C3MechV3.3. C3Mech is a new detailed kinetic model for surrogate fuels consisting of 3,761 species and 16,522 reactions. It contains chemistry for small species such as hydrogen, syngas, natural gas, and methanol; important surrogate fuel components for gasoline, diesel, and jet fuel; and NOx and PAHs. The mechanism represents the first time that the combustion community has developed and validated a mechanism combining small, intermediate, and large species in a self-consistent, comprehensive, and hierarchical way. C3Mech will help facilitate the study of low-carbon, carbon-neutral, and carbon-free fuels, which are going to play a critical role in the decarbonization of industry. If you’re interested in checking out the mechanism, it will soon be available to download on the C3 website.

CONVERGE Academic Program

At Convergent Science, we have always been strong believers in the importance of training the next generation of engineers, and we greatly value our relationships with universities and other academic institutions. Now, we have dedicated personnel to help cultivate these relationships. Our goal with the CONVERGE Academic Program is to make it easier for students around the world to access our software and to better support them throughout their academic journey. 

This year, we also launched the CONVERGE Academic Competition, a simulation competition for students around the world. We’re challenging participants to design and execute a novel CONVERGE simulation that doesn’t just look nice, but also accurately captures the relevant physics of their system. We’re looking forward to seeing the creative simulations the competitors come up with, and we’re excited to showcase their work when the winners are announced next summer!

2021 Global CONVERGE User Conference

This year we held the first-ever global edition of our CONVERGE User Conference, with the goal of exposing attendees to research they might not otherwise come across. To accommodate attendees in different time zones, we hosted each of the four presentation sessions twice. In addition, we offered attendees the option to watch the presentations on-demand, and we also unveiled on-demand CONVERGE training. Each day of the conference, our support engineers hosted office hours so attendees could meet one-on-one with a CONVERGE expert to get answers to any questions they had. The event was a great success, with more than 400 attendees from six continents and nearly 30 countries. While we hope future user conferences can once again take place in person, we were thrilled to be able to host this virtual global event.

On-Demand CONVERGE Training

As I mentioned above, we introduced a new resource for CONVERGE users at our fall conference: on-demand training. Both introductory and advanced training courses are available on the Convergent Science Hub, and we’ll keep adding and updating courses as we go. We hope this convenient option helps you get up and running with CONVERGE on your own schedule—and our Support team is always available if you have questions. We’ll continue to offer live training throughout the year as well, virtually at the beginning of 2022 and hopefully (!) in person later in the year. 

Convergent Science Around the Globe

The primary mission of Convergent Science is twofold: (1) help current clients run the best CFD simulations possible, and (2) discover other industries that can benefit from CONVERGE’s unique combination of features. Our offices around the world are dedicated to fulfilling both parts of this mission.

In Europe, we’ve had a great year for bringing on new clients in a variety of industries, who plan to use CONVERGE for a broad array of applications: oil and gas, hydrogen injectors and engines, vacuum pumps, compressors and engines for refrigeration applications, fuel cells, marine technology, construction and agricultural engines, redesigning racing engines as motorsports move to renewable fuels, and more. We attended a wide variety of conferences, both virtually and in person, that covered topics ranging from tunnel safety to space propulsion to compressors. Our European team grew, and we expanded our office space to accommodate more growth in the future.

This year, our India branch celebrated its four-year anniversary. Our team in India continued to grow, gaining seven new employees in 2021. The team is busy exploring how to most effectively apply CONVERGE to applications such as motor cooling, battery thermal runaway, flexible fuel engines, pumps, and more. In addition, the India office is working to bridge the gap between industry and academia by helping students gain exposure to simulation software.

In the United States, our world headquarters in Madison, Wisconsin continued to thrive, with more than a dozen new hires this year. We’re continuing to branch out into exciting application areas including hydrogen, aerospace, batteries, biomedical applications, and renewable energy. With our dedicated university relations team, we strengthened our relationships with existing academic users and forged many new relationships as well. In 2021, we gained more than 180 new academic users in North and South America across 36 different labs and 14 universities. 

Our partners at IDAJ continue to provide excellent support to CONVERGE users in China, Korea, and Japan. Major areas of focus for IDAJ include hydrogen engines and non-engine applications such as rotating machinery, battery burning, and spray painting. They hosted their popular IDAJ Conference Online 2021, which garnered over 2,800 attendees. In addition, we worked with IDAJ to port CONVERGE to Fugaku, the world’s fastest supercomputer. IDAJ demonstrated CONVERGE on Fugaku by running high-fidelity combustion simulations using large eddy simulations (LES) and detailed chemistry.

A Look Forward

Despite the ongoing challenges of the pandemic, 2021 has been a successful year, and we’re looking forward to new opportunities in 2022. While virtual events have been a great way to connect during the pandemic, they just aren’t the same as seeing your colleagues face-to-face. We hope to be able to hold our next user conference in person and to attend more in-person tradeshows in the new year. We’re also looking forward to our next CONVERGE release—we have many great features under development, and we can’t wait to share them with you. We’re excited to continue to delve into new application areas and to strengthen our collaborations and partnerships. Above all, we look forward to helping you run novel simulations and providing you with the tools you need to create next-generation technology.

Numerical Simulations using FLOW-3D top

► Making The Mactaquac Dam New Again
  30 Sep, 2022

FLOW-3D HYDRO computational fluid dynamics (CFD) software answers important questions about the future of New Brunswick’s historic Mactaquac Dam

This material was provided by Jason Shaw, Discipline Practice Lead – Hydrotechnical, Hatch Ltd.

Damming streams and rivers to generate electricity is nothing new. Beginning with Appleton, Wisconsin’s construction of the Vulcan Street Plant on the Fox River in 1882 — the world’s first hydroelectric power plant — dams now account for more than 70% of all renewable energy across the globe.

From the Grand Coulee and Chief Joseph dams in Washington State to the Mica and W.A.C. Bennett dams in British Columbia, the United States and Canada boast nearly 3,000 hydroelectric stations, powering more than 50 million homes and generating approximately 80% of the renewable energy consumed in North America.

Designing these massive structures has long been one of the most demanding engineering activities. For starters, there are the structural concerns that come with pouring several million tons of concrete, followed by the need to manage many megawatts of electricity. But it’s determining the optimal way of passing water through, over, and around dams and spillways that has perhaps proven to be one of the most challenging design aspects of dam building, requiring costly physical models, lengthy analyses, and no small amount of educated guesswork.

Fortunately, hydraulic design has become much easier over recent decades thanks to the development of computational fluid dynamics (CFD) software. CFD is now an indispensable tool for dam designers and anyone who needs to understand what makes water and other fluids behave as they do, and how to effectively harness their immense power.

Straddling the St. John River

The Mactaquac Generating Station ranks high on the list of Canada’s essential dams. Located at the intersection of the Mactaquac River and the St. John River, this embankment dam sits twelve miles upstream from Fredericton, New Brunswick’s capital. Its six turbines generate 660 megawatts of power, making it the largest hydroelectric facility in the Canadian Maritime provinces. According to its operator, NB Power, the 55-meter-tall, 518-meter-long structure supplies approximately 12% of the province’s homes and businesses with electricity.

The Mactaquac Dam courtesy of NB Power

The Mactaquac Dam was completed in 1968 and intended to last 100 years. But as with any large-scale infrastructure project, unanticipated problems can sometimes occur, some of which might fail to emerge for years or even decades after the foundation is laid. Such is the case with the Mactaquac Dam, where an adverse chemical phenomenon known as alkali-aggregate reaction (AAR) caused the concrete to swell and crack, resulting in significant and ongoing annual maintenance and repair costs.

Granted, CFD analysis would neither have predicted nor prevented this particular problem, but it can help to answer the question of how to refurbish the structure. Is it enough to simply replace the faulty concrete, or will a significant redesign be necessary? This is where Jason Shaw and his team at Hatch come into the picture.

Building relationships

Shaw, a Project Manager and Hydraulic Engineer at Hatch, and the other 9,000 professionals at the Mississauga, Ontario-based consulting firm have extensive experience in a range of industries, among them civil engineering, mining, oil and gas, and all manner of infrastructure design and development, power generation facilities included.

They’ve also had a long-term relationship with NB Power. “In 2004, Hatch acquired Acres International, an engineering consultancy with expertise in dams and hydropower,” said Shaw. “They were the original designer of Mactaquac and have since become part of our energy group.

“As such, we’ve had a longstanding relationship with NB Power, and we continue to do work for them, not only on Mactaquac life-extension, but other facilities as well.”

Shaw explained that alkali-aggregate reaction is very difficult to manage. In the Mactaquac Dam’s case, high amounts of silica in the locally-quarried greywacke—a type of sandstone used to make the concrete—caused a chemical reaction between the silica and the alkalis in the cement. The result is a viscous gel that, in the presence of water, expands over time, leading to spalling, cracking, and rebar exposure.

Rendering of Spillway Baffle Blocks courtesy of ASI Marine Report Underwater Inspection of Mactaquac Generating Station

“One area of concern is the spillway, where the baffle blocks and end sill have seen significant deterioration,” said Shaw. “But it’s really everything about the dam that’s in jeopardy. Because the concrete is squeezing on the gate guides, for example, you get to the point where the spillway gates are at risk of binding. And in the powerhouse, it’s pushing on the concrete that holds the power generation units, causing them to shift location and become ‘out-of-round’. The consequences are gradual but distortions are inevitable, leading to the requirement for a complex structural remediation.”

To avoid this, NB Power commissioned Hatch to study the problem and provide options on how to move forward. Since AAR issues were discovered in the 1980s, the Hatch team has installed sensors throughout the structure to monitor structural movement and concrete performance. They continue to analyze the ongoing alkali-aggregate reaction in an effort to understand how the concrete is deteriorating and ways to extend the life of the project. NB Power and Hatch even pioneered cutting small, strategic spaces and gaps in the dam using diamond wire saws to relieve internal stresses and manage deformations.

Saving the spillway

Over the course of the project, NB Power determined their best option was to refurbish the dam by repairing and improving the damaged portions. A major part of this plan included a hydraulic analysis to determine the best approach. This helped answer questions about whether the operating conditions of the existing structure may have accelerated erosion of the spillway, and if any modifications could be made to reduce this risk. Much of that analysis was based on Hatch’s extensive use of CFD software to determine which parts of the spillway structures need replacing and what designs would provide the best results.

That software comes from Flow Science Inc. of Santa Fe, New Mexico, developers of FLOW-3D HYDRO. “We’ve had a relationship with Flow Science for close to 30 years,” said Shaw. “In fact, I’d say we were probably one of the early adopters, although now practically everyone in the industry is using it and it’s far from novel to use CFD on projects like this.”

Prior to CFD, the only alternative would have been to perform the analysis using a scaled physical model. Shaw noted that this is not only time-consuming, but if multiple iterations are needed, it can cause schedule delays and escalate project development costs. Additional factors related to the scaling of the physical model can also lead to questionable conclusions. CFD, on the other hand, allows engineers to iterate at scale as much as necessary. Various scenarios are easily tested, solutions applied, and the optimal design quickly determined. Physical models are still used, but as a means of validation rather than experimentation.

“CFD fills a crucial gap,” said Shaw. “It allows designers to examine a range of different scenarios that would otherwise be very costly to replicate. This allows you to fine-tune a design and, when you’re ready, check it against the physical model—if they agree, it eliminates any question marks.”

Moving downstream

This was exactly the case with the Mactaquac project, where the first phase was validating the CFD model against measurements from a past physical modeling study of the site. This critical stage of the study allowed the engineers to quantify uncertainty and build confidence in the results of the CFD simulations. Shaw and his team were able to compare these physical model results against the newly-created 3D CFD model of the dam and its surrounding area. They soon found reasonable correlation between the two, providing them with a high degree of confidence that they were on the right track and that their CFD analyses were correct.

“A 3D model is only as good as its calibration and validation,” he said. “If you can’t provide that, then you don’t know where you stand, regardless of the approach. Despite the need for this critical step, however, CFD is a necessary part of the analysis train, if you will. It represents a more precise and more accurate way of analyzing a complex problem.” These studies have served as a basis for making decisions about the dam’s future rehabilitation.

A comparison of the physical model results and the CFD simulation results.

After successful validation of the CFD model, the next phase of the study used FLOW-3D HYDRO to evaluate the existing conditions in the deteriorated spillway. Engineers compared estimates of water depths, jump containment, velocities and pressures on the aprons related to energy dissipation, and erosion and cavitation potential for the concrete structures as well as the tailrace areas downstream from each structure. CFD simulations illustrated hydraulic performance for each of these variables, allowing the team to accurately evaluate the three proposed refurbishment options. Ultimately, the CFD model results led the design team to recommend restoration of the original spillway dimensions, adding two new baffle blocks, and modifying the spillway end sill. The CFD results also raised concerns that cavitation may have played a role in the concrete erosion, which led to further recommendations for modified baffle block designs.

CFD simulation results of existing conditions with deteriorated concrete.
CFD simulation result comparison of 3 refurbishment options

A great deal of work remains before the Mactaquac Generating Station is restored. FLOW-3D HYDRO has allowed Hatch to identify the best approach moving forward, giving them a solid footing to plan and design future improvements and refurbishment. It allowed them to pinpoint the most effective way to improve hydraulic performance and reduce the risk of future erosion in the most efficient and cost-effective way possible.

“The intent here is to move forward with project development using CFD analyses and continue to sharpen the pencil,” said Shaw. “I’m very confident that we will derive design solutions that will ensure hydraulic spill performance at Mactaquac which will meet the objective of ensuring a safe design.”

► Announcing the FLOW-3D 2022R2 Product Family Release
  15 Sep, 2022

Announcing the FLOW-3D 2022R2 Product Family Release: A Unified Solver Offers Performance, Flexibility and Ease-of-Use

Santa Fe, NM, September 15, 2022 – Flow Science has released the FLOW-3D 2022R2 product family that includes FLOW-3D, FLOW-3D HYDRO and FLOW-3D CAST. In the 2022R2 release, Flow Science has unified the workstation and HPC versions of FLOW-3D to deliver a single solver engine capable of taking advantage of any type of hardware architecture, from single node CPU configurations to multi-node parallel high performance computing executions. Additional developments include a new log conformation tensor method for visco-elastic flows, continued solver speed performance improvements, advanced cooling channel and phantom component controls, improved entrained air functionalities, as well as boundary condition definition improvements for civil and environmental engineering applications.

“By combining the workstation and HPC versions of our products, we are making the latest HPC optimizations available to our workstation users who run on lower CPU core counts, removing the delay for our HPC customers getting their hands on the latest developments, and maintaining only one unified code base, which makes our development efforts that much more efficient. With this release, we’re going to be nimbler and faster to market than ever before,” said Dr. Amir Isfahani, President & CEO of Flow Science.

Committed to user success, FLOW-3D products come with high-level support, video tutorials and access to an extensive set of example simulations. Customers can also take advantage of Flow Science’s CFD Services to augment their product experience, including customized training courses, HPC resources and flexible cloud computing options.

A FLOW-3D 2022R2 product release webinar focusing on how to optimize run times on workstations and an overview of performance gains will be held on October 6 at 1:00 pm ET. Online registration is now available. 

A full description of what’s new in all products is available for FLOW-3D, FLOW-3D HYDRO and FLOW-3D CAST.

About Flow Science

Flow Science, Inc. is a privately held software company specializing in computational fluid dynamics software for industrial and scientific applications worldwide. Flow Science has distributors and technical support services for its FLOW-3D products in nations throughout the Americas, Europe, Asia, the Middle East, and Australasia. Flow Science is headquartered in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.

683 Harkle Rd.

Santa Fe, NM 87505

info@flow3d.com

+1 505-982-0088

► Optimization and Workflow Automation Software FLOW-3D (x) 2022R1 Released
  17 Aug, 2022

Optimization and Workflow Automation Software FLOW-3D (x) 2022R1 Released

Flow Science releases a new version of its optimization and workflow automation product that will change the way its customers use CFD software.

Santa Fe, NM, August 17, 2022 — Flow Science, Inc. has released FLOW-3D (x) 2022R1, an optimization and workflow automation software that integrates seamlessly with the FLOW-3D product family. FLOW-3D (x) offers a powerful platform for users to arrive at the best design solution, achieving greater certainty while reducing modeling and analysis time. FLOW-3D (x) 2022R1 marks a significant development upgrade to the workflow automation and design optimization capabilities of FLOW-3D (x). The development objectives for this release center around performance and improved user experience. Remote execution, running simulations in parallel, and fully integrated batch post-processing are some of the new features that make FLOW-3D (x) 2022R1 an integral tool for the FLOW-3D user community.

“Our customers are already solving the toughest CFD problems on the market, and the addition of this tool to their FLOW-3D product portfolio will allow them to do that better and faster. FLOW-3D (x) will transform your simulation workflow and let you delve into your parameter space like never before,” said Dr. Amir Isfahani, CEO of Flow Science.

FLOW-3D (x) puts power and efficiency into the hands of users through its core functionality and connectivity, allowing them to explore solutions using optimization, workflow automation, distributed solving, parameter sensitivity studies, simulation calibration, CAD and Microsoft Excel plugins, and Python interoperability. A live product webinar exploring the new functionality of FLOW-3D (x) will be held on Thursday, August 25 at 1:00 pm ET. Learn more about FLOW-3D (x).

About Flow Science

Flow Science, Inc. is a privately held software company specializing in computational fluid dynamics software for industrial and scientific applications worldwide. Flow Science has distributors and technical support services for its FLOW-3D products in nations throughout the Americas, Europe, Asia, the Middle East, and Australasia. Flow Science is headquartered in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.

683 Harkle Rd.

Santa Fe, NM 87505

info@flow3d.com

+1 505-982-0088

► What’s New in FLOW-3D (x) 2022R1
    3 Aug, 2022
FLOW-3D (x) 2022R1 Release

FLOW-3D (x) 2022R1 marks a significant development upgrade to the workflow automation and design optimization capabilities of FLOW-3D (x). The development objectives for this release center around performance and improved user experience.

FLOW-3D (x) is a powerful, versatile, and intuitive connectivity and automation platform, which includes a native optimization engine specifically designed for CFD applications. Whether you want to automate running FLOW-3D models through a parameter sweep, extract key data to create post-processing deliverables, or run dedicated optimization projects, refining geometry from dynamically connected CAD models or sweeping through flow conditions, FLOW-3D (x) has all the features needed to perform these tasks in a clear and efficient manner. Remote execution, running simulations in parallel, and fully integrated batch post-processing are some of the new features that make FLOW-3D (x) 2022R1 an integral tool for our FLOW-3D user community.

Performance

Parallel execution of FLOW-3D simulations for automation and optimization tasks

With 2022R1, FLOW-3D (x) can now run multiple FLOW-3D simulations in parallel. By evolving from serial to parallel execution, users can now make the most of available computational resources, vastly accelerating the time to completion of automated parameter sweeps and gaining valuable insight sooner.

Parallel execution of simulations
Depending on available license resources, the number of concurrent executions to use in the automation or optimization project is easily set in the FLOW-3D (x) execution widget.

Execution of FLOW-3D simulations on remote nodes

Hand-in-hand with the ability to execute FLOW-3D simulations in parallel, we recognized the need to make the most efficient use of computational resources that might be remote and distributed across multiple workstations on a network. With FLOW-3D (x) 2022R1, users can define execution nodes as remote nodes and decide on which nodes, local or remote, to run FLOW-3D simulations in order to make the best use of their computational resources.

Simulation on remote nodes
In addition to running FLOW-3D (x) and FLOW-3D executions on a local workstation, remote nodes are easily configured in order to take advantage of remote computing resources.

Full integration with FLOW-3D POST and Python automation

Automated post-processing using FLOW-3D POST state files is now fully integrated into the workflow automation supported by FLOW-3D (x). The latest release of FLOW-3D POST 2022R1 allows users to create macros, state files, and Python-driven advanced batch automation. These advanced post-processing features are integrated into the FLOW-3D (x) 2022R1 release under a dedicated post-processing node, as well as under dedicated Python script execution nodes.

Advanced post-processing
With the integration of FLOW-3D POST post-processing capabilities into FLOW-3D (x), users can now automate their entire optimization process, from geometry or CFD model parameter sweeps through to post-processed graphical deliverables.

User experience

Streamlined definition of optimization targets

A simplified definition of optimization targets has been added, allowing users to directly define targets rather than having to define a minimization goal.

Simplified definition of optimization targets
The new “target” class of objectives is now available in FLOW-3D (x) 2022R1.

Streamlined layout of user interface

Based on user feedback from the original release of FLOW-3D (x), the user interface now delivers a clear, intuitive experience even for large, complex optimization projects. Superior clarity of node and workflow definitions, an improved layout of optimization tasks and population selection, and dedicated nodes for all FLOW-3D products are some of the improvements delivered in this release.

Streamlined layout of interface
A more compact, streamlined workflow graphical representation is just one of the many user interface improvements delivered in FLOW-3D (x) 2022R1.

Data analysis and plot formatting upgrades

In keeping with efforts to streamline FLOW-3D (x) model setup and execution for the user, the data analytics graphical representation widget allows for clear, simple access to the most important data from your project simulations. Plot definition has been simplified and plot formatting improved. A new type of chart allows filtered data to be exported as text and images at custom resolution.

► Flow Science Receives the 2022 Flying 40
  28 Jul, 2022

Flow Science is named one of the fastest growing technology companies in New Mexico for the seventh year running.

Santa Fe, NM, July 28, 2022 – Flow Science has been named one of New Mexico Technology’s Flying 40 recipients for the last seven consecutive years. The New Mexico Technology Flying 40 awards recognize the 40 fastest growing technology companies in New Mexico each year, highlighting the positive impact the tech sector has on growing and diversifying New Mexico’s economy.

“Flow Science continues to deliver best-in-class CFD products and service to our customers at top engineering companies worldwide. The strength of our business model has allowed us not simply to weather the economic storm brought on by the pandemic and other world events, but to grow in both revenue and workforce. We’re very proud to be recognized by New Mexico Flying 40 yet again and look forward to continuing our success and contributing to New Mexico’s economic growth,” said Dr. Amir Isfahani, President & CEO of Flow Science.

The Flying 40 awards are based on three revenue categories: the top revenue growth companies with revenues between $1 million and $10 million, the top revenue growth companies with revenues of more than $10 million, and the top revenue-producing technology companies irrespective of revenue growth. Growth is measured over five years, from 2017-2021.

“In the midst of the worst pandemic of the past century these employers not only stayed open and provided thousands of jobs, they were able to grow their employee base. All of New Mexico should join us in celebrating these accomplishments,” said Sherman McCorkle, President and CEO of the Sandia Science & Technology Park Development Corp. and host of this year’s event.

Learn more about the Flying 40.

About Flow Science

Flow Science, Inc. is a privately held software company specializing in computational fluid dynamics software for industrial and scientific applications worldwide. Flow Science has distributors and technical support services for its FLOW-3D products in nations throughout the Americas, Europe, Asia, the Middle East, and Australasia. Flow Science is headquartered in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.

683 Harkle Rd.

Santa Fe, NM 87505

info@flow3d.com

+1 505-982-0088

► Learning CFD from Afar
  12 Jul, 2022

This blog was contributed by Garrett Clyma, CFD Engineer at Flow Science.

Passion Meets Opportunity

It’s difficult for a high school student to predict a specific career path, especially in the broad world of engineering. At that stage in my life, all I knew was that I was good at math, loved science, and had an interest in space travel, due in large part to the booming developments at SpaceX and the anticipation of the James Webb Space Telescope. Majoring in aerospace engineering was the obvious choice, prompting me to move across my home state to attend Western Michigan University.

As I progressed through my degree, I saw for the first time that there was more to solving engineering problems than just design, manufacturing, and process engineering. I enjoyed social situations and interacting with other students, which steered my interest away from a purely technical engineering career. Although learning about space will always remain a hobby, my time in school and at internships caused me to think beyond the realm of aero- and astronautics to a career in which I could combine my love for science and problem solving while still using my social skills. I was introduced to CFD for the first time through my senior capstone project, which concentrated on the design of a scramjet combustion test lab. Using software to simulate the supersonic flow within the wind tunnel sparked my interest in numerical modeling and had me questioning, “where else is CFD used?”

It just so happens that, from a combination of luck and opportunity, I was able to land a post-graduate remote internship with Flow Science in a career path I was extremely interested in and could picture myself excelling in. This internship offered me a three-month opportunity to increase my competitiveness for a full-time position.

Strategic Objectives

CFD is, simply put, complicated. How was I, a college graduate with sparse CFD experience, to become well versed enough to solve and convey complex topics for highly technical clients in just 3 months? My manager, John Wendelbo, and I laid out a plan with two objectives.

  • Objective #1: Develop my CFD modeling skills to be competitive for a Sales Engineer position
  • Objective #2: In the process, fine-tune my presentation skills and develop a strong familiarity with value propositions, sales pipelines, and the inner workings of sales group processes.

To accomplish my objectives, we set a generalized plan for me to learn the FLOW-3D product family, spending about three weeks on each product. The idea was to work from lower- to higher-complexity CFD concepts and continue a steady buildup of aptitude in modeling while approaching the software like any new user, by utilizing the extensive directory of user training materials. This would give me a strong base of CFD knowledge while also providing plenty of opportunities to pick the brains of my colleagues, asking relevant questions when needed (and irrelevant questions when curious) to accomplish my second objective.

Working remotely from a different state had implications that I had yet to experience in a working environment. The lack of face-to-face interaction with my co-workers could have had a negative impact on relationship-building and productivity. To combat that possibility, I introduced myself to each of my colleagues over Zoom throughout my internship and allowed them to put a face to a name and exhibit my positive, motivated, and curious approach to work. I organized myself by creating daily, weekly, and monthly goals to circle back to and adjusted my work habits to be as productive at home as I would be in an office.

Onward and Upward

Due to the broad design of my internship, I was exposed to an array of industries, physics concepts, and software features rather than focusing on one or two specific applications. Since similar physics phenomena are present across different engineering problems, learning a range of applications gave me a more thorough understanding of which physics models are, and are not, important to include for an effective simulation. Modeling microfluidic capillary flows gave me insight into surface tension physics, which I could apply to melt pool modeling in laser powder-bed fusion and to bubble formation in boiling water. Additionally, setting up simulations to validate experimental research let me practice creating models of real situations, including the changing flow rate over a labyrinth hydraulic weir and spillway and the effect of gravity on electron beam penetration of a metal disc.

Garrett Clyma attends RAPID+TCT 2022
Garrett (left) with coworker Ibai Mugica (right) at the RAPID+TCT Additive Manufacturing Tradeshow in Detroit, MI

The variety in my experience continued when I got to leave my “home office” for a few days to attend the RAPID+TCT 3D Printing and Additive Manufacturing Conference in Detroit, MI alongside two colleagues who had flown in. The energy there was high and the foot traffic at our booth reflected that. I enjoyed speaking to academics, engineers, executives, and investors about what they are looking for in CFD-related applications relative to their roles in the business or engineering process. Walking the conference floor exposed me to different companies’ capabilities and to trends within the industry, both of which will help me make informed decisions in my role.

It’s fun for me to look back and think about how I arrived at this stage of my academic and professional career and there’s no shortage of people to thank. I’m now very happy to say that, after a successful internship, I’ve been hired full-time as a CFD Engineer with the sales team focusing on additive manufacturing and metal casting applications.

Mentor Blog top

► News Article: Graphcore leverages multiple Mentor technologies for its massive, second-generation AI platform
  10 Nov, 2020

Graphcore has used a range of technologies from Mentor, a Siemens business, to successfully design and verify its latest M2000 platform based on the Graphcore Colossus™ GC200 Intelligence Processing Unit (IPU) processor.

► Technology Overview: Simcenter FLOEFD 2020.1 Package Creator Overview
  20 Jul, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 Electrical Element Overview
  20 Jul, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to add a component into a direct current (DC) electro-thermal calculation by specifying the component’s electrical resistance. The corresponding Joule heat is calculated and applied to the body as a heat source. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 Battery Model Extraction Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get to the required input parameters faster and more easily. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 BCI-ROM and Thermal Netlist Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate while still maintaining a high level of accuracy. Watch this short video to learn how.

► On-demand Web Seminar: Avoiding Aerospace Electronics Failures, thermal testing and simulation of high-power semiconductor components
  27 May, 2020

High semiconductor temperatures may lead to component degradation and, ultimately, failure. Proper semiconductor thermal management is key for design safety, reliability, and mission-critical applications.

Tecplot Blog top

► Getting Started with PyTecplot
  22 Sep, 2022
Getting Started with PyTecplot

Try Tecplot 360 and PyTecplot for Free! Image courtesy of blueOASIS (case study).

Agenda

  • Tecplot 360 with PyTecplot [0:0:57]
  • Why use PyTecplot? [0:02:19]
  • Customer Examples [0:04:10]
  • PyTecplot Requirements & Installation [0:07:45]
  • Running PyTecplot on the Command Line [0:10:44]
  • Common Problems [0:14:49]
  • Running PyTecplot: Batch vs Connected [0:19:19]
  • Running PyTecplot: Jupyter Notebooks [0:25:40]
  • Recording Scripts [0:47:09]
  • PyTecplot Documentation & Resources [0:47:44]
  • Q&A [0:50:00]

 

Tecplot 360 with PyTecplot

PyTecplot is the Python API to automate and analyze your data with Tecplot 360, our tool for data visualization.

Tecplot 360 is our flagship software and has long been trusted as a multi-purpose post-processor by engineers and scientists. 360’s clear visualization of complex data removes any doubt about your designs or analyses. It can create visually impressive 3D plots as well as 2D and XY plots for detailed engineering decision making. Large, billion-cell datasets are handled efficiently, letting you visualize simulated or experimental data from multiple industry-standard data formats.

The built-in CFD Analyzer toolkit also gives you the option of performing integrations, extracting key flow features such as vortex cores and shock surfaces, calculating new variables, and more, for additional analysis of your data.

PyTecplot allows you to automate 360 with the Python language. Use your own installation of 64-bit Python, install the PyTecplot Python API, and then create scripts that control an active session of Tecplot 360, or run in batch mode – which is especially useful if you want to incorporate this with other Python scripts as part of a larger workflow.
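
As a rough illustration, a minimal connected-mode script might look like the sketch below (the data file name is hypothetical; install the API with pip install pytecplot and enable Scripting > PyTecplot Connections in Tecplot 360 first, or drop the connect() call to run headless in batch mode):

    import tecplot as tp
    from tecplot.constant import PlotType

    tp.session.connect()                            # attach to a running, connection-enabled Tecplot 360 session
    dataset = tp.data.load_tecplot('results.plt')   # hypothetical data file
    frame = tp.active_frame()
    frame.plot_type = PlotType.Cartesian3D          # switch the active frame to a 3D Cartesian plot
    tp.export.save_png('results.png', width=1200)   # export the current view as a PNG image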

Why use PyTecplot?

Tecplot 360 is capable of powerful data analysis on its own and has a custom scripting language known as the Tecplot Macro Language. However, there are several benefits to using PyTecplot:

Automate your work in Tecplot 360 with Python, a widely used scripting language.

  • Actions that are tedious in the Tecplot 360 user interface can be performed quickly with a PyTecplot Python script.
  • For those who are new to Tecplot 360, the Tecplot Macro Language can also automate workflows, but rather than learning a scripting language specific to Tecplot 360 you can use Python, which you may already use for many other tools in your workflow.

Directly access your data for analysis. Certain types of data analysis can be cumbersome, or not possible, in either the Tecplot 360 user interface or with the Tecplot Macro Language. Python libraries like NumPy or SciPy allow you to analyze and manipulate your data in ways that are not possible using only Tecplot 360: arrays, statistical analysis, and more.
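
For example, a variable can be pulled straight into NumPy for statistics that 360 does not compute natively (a sketch; the zone index and the 'Pressure' variable name are assumptions about your dataset):

    import numpy as np
    import tecplot as tp

    frame = tp.active_frame()
    # extract one variable from the first zone as a NumPy array
    pressure = frame.dataset.zone(0).values('Pressure').as_numpy_array()
    print('mean:', pressure.mean(), 'std:', pressure.std(), '95th percentile:', np.percentile(pressure, 95))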

Use your own installation of 64-bit Python.

Debug with Python IDEs. Tecplot 360 has a built-in debugger for the Tecplot Macro language, but with PyTecplot you can use your preferred Python IDE (Integrated Development Environment) to debug your script.

Customer Examples

  1. A Tight Cooperation Towards Ocean Sustainability by blueOASIS
  2. Analyzing Bubble Characteristics in Simulations of Fluidized Beds by CPFD Software
  3. Thermal Visualization of a Self-Sustaining Green House by MSOL, UC Berkeley
  4. Thermal Field Visualization for Uncertainty Quantification by MSOL, UC Berkeley
  5. Flow Dynamics of Liquid Jet Irrigation by CFD Research Group, RMIT University
  6. Tecplot Visualization Helps Understand the Carbon Cycle by UMCES, University of Maryland

PyTecplot Documentation & Resources

PyTecplot Script Examples on GitHub

We highly recommend using PyTecplot if you want to get more out of your work with Tecplot 360, and to get to the answers that you need faster.

Try Tecplot 360 for Free

The post Getting Started with PyTecplot appeared first on Tecplot.

► Which is Faster?
  16 Sep, 2022

Tecplot 360 vs. ParaView for CONVERGE Data

At Tecplot we know that you have choices when it comes to post-processing and that your time is important to you, so we’ve done some performance testing to help you decide which post-processor will perform the best with your data. Of course, performance is dependent on several factors – and which data type you are using is an important one. For today’s post we’ll be diving into performance with CONVERGE data only.

CONVERGE users are fortunate to have access to ‘Tecplot for CONVERGE’ (TfC) as part of their CONVERGE license. However, for users who are looking for more power than what TfC offers out of the box (e.g., batch processing), the most popular options are Tecplot 360 or ParaView.

TfC, 360, and ParaView each include a direct data reader for CONVERGE post*.h5 files, which means that with these post-processors there’s no need to run post_convert. By avoiding running post_convert you’re saving yourself time and disk space.

Experiment

Since many CONVERGE users need to create movies of their results, we chose a reasonably large, multi-cycle internal combustion engine (ICE) simulation. This dataset is composed of 1278 timesteps, totaling 169 GB on disk.

The goal is to capture the execution time and peak RAM required to create a plot which consists of a slice colored by temperature and an iso-surface of the flame front (temperature = 1700). We repeated this for each time step in the data series for a total of 1278 images produced. Figure 1 shows an example of the plot:

Figure 1: Example Image from ICE Simulation Showing Temperature & Flame Front Iso-Surface

Setup

We conducted our experiments on a Windows 10 machine with 32 logical cores, 128 GB of RAM, and an NVIDIA Quadro K4000 graphics card. The data was stored locally on a spinning hard drive to avoid any slowdowns due to network traffic.

All tests were run unattended in batch mode (using PyTecplot for Tecplot 360, and pvbatch for ParaView 5.10). We used the Python memory-profiler utility to capture timing and RAM information.
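
For reference, peak RAM and wall time for a batch run can be captured with memory-profiler along the lines of the sketch below (export_all_timesteps stands in for the actual PyTecplot export loop, which lives in the script linked later in this post):

    import time
    from memory_profiler import memory_usage

    def export_all_timesteps():
        ...  # placeholder for the PyTecplot loop that loads data and exports one image per timestep

    start = time.time()
    samples = memory_usage((export_all_timesteps, (), {}), interval=0.5)  # sample process RAM every 0.5 s
    print('wall time: %.1f s, peak RAM: %.0f MiB' % (time.time() - start, max(samples)))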

Figure 2: Animation of ICE Simulation Created with Tecplot 360 Image Exports (For animation speed and file size, every 4th image was used to create this animation)

Results

Serial Execution

The graph below shows the outcome of our tests running the processes serially (i.e. producing 1 image at a time).

Figure 3: 360 vs ParaView: Execution Time & Peak RAM Usage

See the PyTecplot script on our GitHub.

For the generated plot, Tecplot 360 uses multi-threading for a number of operations, such as deriving node-located values from the cell-centered Temperature value (a prerequisite for iso-surface creation), slice creation, and iso-surface creation.

ParaView does not publish a list of which filters are multi-threaded, but states that “all common filters are [multi-threaded].” [1,2]

So, serial here means that we loaded the series of CONVERGE post*.h5 files and ran through the animation, creating an image one timestep at a time.

For the HDF5 dataset performance tests, we found that processing with Tecplot 360 in minimized memory mode gave the fastest processing time at 1.99 hours, or ~5.6 seconds per image. Tecplot 360 in minimized memory mode also used the least RAM, with a peak of 2.7 GB. Note that Tecplot 360 defaults to keeping 30–70% of the data it has loaded in RAM in case that data needs to be re-rendered. Once RAM usage hits the 70% threshold, Tecplot 360 offloads stored data until it reaches 30%. This behavior is handy in the GUI when moving back and forth between timesteps. Setting Tecplot 360 to minimized memory mode prevents it from keeping data from previous timesteps in RAM.

Did we run ParaView using MPI? – The current recommendation is to only run ParaView using MPI when you can distribute the data [3]. CONVERGE data is single block, so it’s not ideal for distribution. Furthermore, the ParaView CONVERGE reader does not read in parallel [4]. As such, we did not run ParaView using MPI.

Parallel Execution

Another benefit of PyTecplot (i.e. batch mode) is that image export scripts can be written to use multiple concurrent PyTecplot processes—slashing the time to process images. We utilized our ParallelImageCreator.py PyTecplot script (located on our GitHub — see the documentation for examples) which processes multiple timesteps simultaneously to put more cores to work. In the plot below you can see how running even just two concurrent processes can drastically reduce the processing time:

Figure 4: Total time to Execute and RAM Usage against # of Parallel 360 Processes

Our script, running with 16 cores, took an average of ~1.2 seconds per image. This is 4.8 times faster than Tecplot 360’s fastest run without parallelizing the image exports and 10.7 times faster than ParaView! However, it comes at a significant cost in RAM (34.7 GB vs. 2.74 GB, or 12.6 times more). We observed nearly identical speed with half the RAM when parallelizing with 8 processes instead of 16. With parallel processes you can get hardware contention (which is why we don’t see a linear improvement in performance), so it may take some experimentation with your data to find the ideal amount of concurrency.
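
The pattern behind this is simply to split the timesteps across several independent PyTecplot batch processes. A minimal sketch of the idea follows (this is not the ParallelImageCreator.py script itself; export_timestep.py is a hypothetical per-timestep PyTecplot export script):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def export(timestep):
        # each call launches its own PyTecplot batch process that renders one timestep
        subprocess.run(['python', 'export_timestep.py', str(timestep)], check=True)

    with ThreadPoolExecutor(max_workers=8) as pool:   # tune workers to your cores and license seats
        list(pool.map(export, range(1278)))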

Once the images are created, we recommend using a tool such as ImageMagick or FFmpeg (an executable shipped with Tecplot 360) to stitch the exported images together. You can create numerous animation types, such as .mp4, .gif, or .avi, and tune your settings, for example by adjusting the frame rate. You can also take advantage of our GitHub script to automate this process with Python.
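
As a rough example of the stitching step, the FFmpeg executable can be driven from Python like this (the image naming pattern and frame rate are assumptions; adjust them to match your exported files):

    import subprocess

    subprocess.run([
        'ffmpeg', '-framerate', '24',               # playback speed in frames per second
        '-i', 'image_%04d.png',                     # hypothetical numbered image pattern from the batch export
        '-c:v', 'libx264', '-pix_fmt', 'yuv420p',   # widely compatible H.264 output
        'animation.mp4',
    ], check=True)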

All in all, with Tecplot 360’s batch mode, you have choices — if you have a single license seat, you can create fast animations, and with multiple cores and licenses, you can create images even faster. Let us know if you want to talk about ways we can supercharge your image processing!

Try Tecplot 360 for Free

References

[1] https://discourse.paraview.org/t/paraview-parallel/320/9
[2] https://discourse.paraview.org/t/make-paraview-multicore-gpu/5723/2
[3] https://discourse.paraview.org/t/paraview-parallel/320/13
[4] https://discourse.paraview.org/t/a-problem-of-parallel-processing-in-paraview/5089/7

The post Which is Faster? appeared first on Tecplot.

► Flow Dynamics of Liquid Jet Irrigation
  23 Aug, 2022

Flow Visualization of Nasal Irrigation Leads to New Squeeze Bottle Design

Nasal saline irrigation is a therapy technique in which a liquid solution cleans the nasal passages. It can be used to alleviate congestion and sinus irritation by forcing the saline through the nasal cavity, washing out thick or dry mucus and pollutants such as pollen and dust particles.

Squeeze bottles deliver the solution into the nasal cavity as liquid jets, which have been shown to reduce allergic rhinitis symptoms and can be used by patients with good compliance and minimal side effects.[1] However, the reasons why squeeze bottles were effective for nasal irrigation were not well known from an engineering perspective.

Kiao Inthavong and his CFD Research Group (RMIT University) team wanted to better understand the flow dynamics of liquid jet irrigation. Using CT scans of actual human patients from ENT clinicians, his team reconstructed a computational model of a human nasal cavity, and applied the multiphase Volume Of Fluid (VOF) approach to track the liquid-air interface of the nasal irrigation within the human nasal cavity domain.

Fig 1. A) Computational model of the nasal cavity with the squeeze bottle inserted into the nostril. B) The internal nasal cavity with colors depicting some anatomical features, including the frontal sinus (green), sphenoid sinus (blue), maxillary sinus (yellow), and the nasopharynx (dark grey)

The simulations are inherently transient and require small time steps, and double precision was used for the multiphase flows. This led to huge datasets per study. Tecplot 360’s ability to handle large datasets allowed the entire set of results to be loaded and the time-dependent data to be processed efficiently.

“We were impressed by the powerful capabilities of Tecplot 360 to handle the dataset,” Kiao remarked. “Furthermore, we used the varied automation tools available, including PyTecplot connected with Python and macro scripts, which have become part of our established workflow.”

The flow visualization from CFD modelling with the VOF method revealed the liquid jet behavior within a human nasal cavity. The overflow and flooding effects helped increase sinonasal surface coverage, residence times across the mucosal surfaces, and shearing force of irrigation.

Fig 2. Liquid surface coverage inside the nasal cavity viewed from the top of the head for three patients that had undergone FESS (Functional Endoscopic Sinus Surgery). The blue color represents the liquid region.

The outcomes of this applied, industry-related research project were published in two leading journals, the International Forum of Allergy and Rhinology [2] and the Journal of Biomechanics [3]. Later, in collaboration with ENT Technologies, the results were used by the company to redesign its squeeze bottle. The new larger-volume, ergonomically designed bottle has anecdotally been a great success with patients, and the new bottles are expected to come to market.

Try Tecplot 360 for Free

 

[1] Piromchai P, Kasemsiri P, Reechaipichitkul W. Squeeze Bottle Versus Syringe Nasal Saline Irrigation for Persistent Allergic Rhinitis – a Randomized Controlled Trial. Rhinology. 2020 Oct 1;58(5):460-464. doi: 10.4193/Rhin19.308. PMID: 32427228

[2] Inthavong, K., Shang, Y., Wong, E. and Singh, N., 2020, January. Characterization of Nasal Irrigation Flow from a Squeeze Bottle Using Computational Fluid Dynamics. In International Forum of Allergy & Rhinology (Vol. 10, No. 1, pp. 29-40).

[3] Shrestha, K., Salati, H., Fletcher, D., Singh, N. and Inthavong, K., 2021. Effects of Head Tilt on Squeeze-bottle Nasal Irrigation–A Computational Fluid Dynamics Study. Journal of Biomechanics, 123, p.110490.


The CFD Research Group is a collaborative research space at RMIT University in Australia, led by Dr Kiao Inthavong. The group uses CFD and experimental flow visualization techniques to investigate multiphase flows in the respiratory system, indoor airflow behavior, and fluid-particle dynamics. We use Tecplot 360 as the primary post-processing tool to analyze complex data from CFD simulations. The main features we use are automation through macros and Python scripting to handle large data sets, helping to generate streamlines and contours and to overlay them onto the shaded geometries for visualization.

In 2017, we initiated a cross-disciplinary engineering-clinician research program, “Engineering Solutions Impacting Clinical Outcomes (ESICO)”, which led to the foundation of SCONA (Society of Computational Fluid Dynamics of the Nose and Airway). Throughout this research program, we have helped clinicians understand nasal diseases and their therapy through drug delivery, and have supported treatment planning in functional rhinosurgery, using CFD visualization with Tecplot 360 to generate images and animations and its interpolation function to compare data sets.

The post Flow Dynamics of Liquid Jet Irrigation appeared first on Tecplot.

► Using Pages and Frames in Tecplot 360
  28 Jul, 2022
Try Pages and Frames for yourself!
Use your own data or the sample
data included with your free trial.

Request a Free Trial

In this video, we will explain Pages and Frames in Tecplot 360.

When you first open Tecplot 360, a single frame is visible, in which you can load a data set and create a plot. You can create multiple frames in the workspace, but Tecplot 360 also provides pages to help organize and present your visualizations!

A page is a container for any number of frames, and each frame is a container for a single plot.

Pages and Frames

With multiple pages, users can create multiple workspaces within a single layout—like a spreadsheet with multiple sheets or a PowerPoint with multiple slides.

For instance, in this layout (image above), there are five frames. One frame shows the overall view of the 3D plot for context. Another frame is a magnified view to visualize the red, blue, and green points of interest in the volume. And the three line plots display the variable values for each of the points of interest over time in separate frames.

Creating a New Page

Now, if we wanted to focus on a different aspect of our dataset without losing these views, we could either open a new instance of Tecplot 360 or create a new page within this layout. Creating a new page is beneficial because:

  • You don’t need to create another layout file to save the visualization.
  • Creating a page doesn’t consume another license of Tecplot 360.
  • It is an easy way to bookmark a visualization of interest, and then continue exploring the data.
  • And it also doesn’t load data twice—the datasets can be shared between pages.

In this 2nd page, we’ve focused our visualization on a holistic view of contoured slices. The 3rd page focuses on fluid velocity and the flow of particles. The 4th page visualizes the Particle Volume Fraction as an iso-surface, and the last page shows a simple plot of particles against a black background.

Pages

Bookmarking a View

If we wanted to bookmark a view of interest as a page, for example the moment when the particles make their way to the bottom wall of the collection chamber, we can do so by copying the frame (control-c or right-clicking on the edge of the frame), creating a new page, and then pasting the frame into the new page (control-v or Edit>Paste). No new data is loaded because the dataset is shared between frames. Data sharing occurs when copying and pasting a frame, or when creating a new page and updating the plot type from Sketch to any other plot type which shares data from a previous frame.

Finally, we can rename the Page and zoom-in on the Frame.
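
If you prefer to script this kind of page management, PyTecplot exposes the same page and frame objects. A minimal sketch (assuming a running, connection-enabled Tecplot 360 session; the names are placeholders, and copying an existing frame onto the new page is still done in the GUI as described above):

    import tecplot as tp

    tp.session.connect()                       # attach to the running Tecplot 360 session
    page = tp.add_page()                       # same as creating a new page in the GUI
    page.name = 'Particles at bottom wall'     # rename the page
    frame = page.active_frame()                # the new page starts with a single empty frame
    frame.name = 'Bookmarked view'             # rename the frame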

This concludes our video on Pages and Frames in Tecplot 360. Thank you for watching!

Try Pages and Frames for yourself!

Use your own data or the sample data included with your free trial.

Request a Free Trial

The post Using Pages and Frames in Tecplot 360 appeared first on Tecplot.

► For PIV, Tecplot 360 is the Real MVP
  28 Jul, 2022

When it comes to PIV, Tecplot 360 is the real MVP.

PIV, or Particle Image Velocimetry, is a method of experimental flow visualization that uses seeding particles and some form of camera or optics to measure 2- or 3-dimensional vector fields. It can be an extremely powerful way to capture “real life” (in quotes due to the errors and assumptions inherent in any experiment!) flow physics so as not to be fully reliant on computational methods. But, just as with computational methods, the output of the experiment is a ton of numerical data – what then?

Enter Tecplot 360, which is used in conjunction with every major PIV system to visualize and analyze the recorded data. 360 has everything you might need to get the most out of your PIV data; it can handle 3D data, 2D data, XY plots, animations, text & graphical annotations, and can even compare your PIV data against CFD results for the same experiment.

3D Data

For 3D measurements, like tomographic PIV, 360 can produce high-quality movies for transient cases. In steady-state cases, 360 can create plots of amazing quality.

3D model plus PIV slices


PIV 2D

TomoPIV Animation from La Vision

You can easily combine data from arbitrary sources with 360. One option is to compare PIV slices which have been measured at different heights with the corresponding 3D CAD model. The import of the CAD model is done with 360’s STL loader.

Radial fan

3D Model combined with PIV planes, done with Dantec Dynamics, data from Valeo

2D Data

The classic output from a PIV system is 2D measurement data.

Here again you may have transient cases that you might wish to animate. Another great feature of 360 is the extraction of point probes over time. You may wish to do a Fourier transform to understand fluctuations better. Or perhaps you will want to extract profile lines in stationary and transient cases and perform some statistical analysis (if not already given by your PIV system) such as the calculation of mean values, measurement of turbulent energy, etc. With 360 any of these analyses or visualization tasks are easy to perform out of the box.
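
For instance, a quick spectrum of an extracted probe signal takes only a few lines of NumPy (a sketch; the file name and the sampling interval dt are assumptions about your export):

    import numpy as np

    dt = 1.0e-3                                    # hypothetical sampling interval of the probe, in seconds
    u = np.loadtxt('probe_velocity.dat')           # hypothetical single-column export of velocity at one point
    freqs = np.fft.rfftfreq(u.size, d=dt)
    spectrum = np.abs(np.fft.rfft(u - u.mean()))   # remove the mean so only the fluctuations remain
    print('dominant fluctuation frequency: %.1f Hz' % freqs[spectrum.argmax()])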

PIV 2D Stat

2D Droplet Visualization Vectorfield

XY Plots

XY plots are often used to create various line plots from extracted profile data. Here is an example from the 2D animation for the first timestep.

XY Profile Lines


Animations

High-quality animations from several different types of plots are easily created with 360. You also have the freedom to arrange animations over multiple frames, allowing multiple animations to play in a synchronized manner, as in the animations above.

Text or Graphical Annotations

360 allows the user to anchor text or geometric annotations (vectors, lines, circles, rectangles, …) to specific locations in your plot. This helps the audience to understand your work better by adding context to your visuals.

2D PIV Data Combined with 3D Simulation Data

Oftentimes CFD groups work closely together with PIV or other measurement groups. 360 is perfectly able to load all the different data types. You can compare the data, create difference plots, show the PIV slices in 3D together with CFD results, etc.

Suffice it to say, if you have a PIV visualization or analysis task in mind, the odds are good that 360 can help you accomplish it.

Tips and Tricks for PIV Post-Processing

PIV systems can produce a wide variety of data formats. Fortunately, 360 makes it simple to handle most formats, and even for more esoteric data formats there is usually a path to load your data. Briefly we’ll describe some common workflows, but if you need more help, please don’t hesitate to contact our fantastic support team (support@tecplot.com).

#1 The ideal case:

If your data is already in Tecplot format via export or data loaders (for example, via Dantec, La Vision, TSI), you won’t have to do much. You may wish to scale your data or rotate/translate it to align with your preferred reference frame. If so, read case #4 below.

#2 The 2D measurement data is ASCII, but contains only points, not the structure as an IJ-ordered matrix:

Use the General Text loader and, if needed, give the data a structure.

  • Triangulation. In 2D mode, Data > 2D triangulation. This will create a triangular mesh connecting the data points.
  • Linear interpolation: First, triangulate the data (this is required for linear interpolation, but not for inverse-distance interpolation). Then, in 2D mode, select Data > Create Zone > Rectangular to create an IJ-ordered mesh. Finally, use Data > Interpolate > Linear to interpolate the source zone data from the triangulation onto the rectangular, IJ-ordered zone.

#3 The data is transient, but time information is missing:

We have a tool which assigns time information to the zones. After loading your data, you might see that the time slider in the Plot sidebar is greyed out, and in the Zone Style you will see all timesteps as individual zones. If you want to assign solution times to your data, select Data > Edit Time Strands, which allows you to assign the correct solution time to each timestep.
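
If you prefer to script it, PyTecplot can assign the strand and solution time zone by zone. A minimal sketch (assuming one zone per timestep, loaded in time order, with a hypothetical output interval):

    import tecplot as tp

    dataset = tp.active_frame().dataset
    for i, zone in enumerate(dataset.zones()):
        zone.strand = 1                  # group all zones into one transient strand
        zone.solution_time = i * 1.0e-3  # hypothetical output interval of 0.001 s per timestep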

#4 The data must be rotated, translated, or scaled (a scripted equivalent is sketched after this list):

  • To scale the data, go to Data > Alter > Specify Equations and use an assignment (= “Equation”), such as x = x/1000 to convert millimeters to meters in the x-direction.
  • To translate the data, go to Data > Alter > Specify Equations and use an assignment like x = x + 1.2 to translate the data 1.2 units in the positive x-axis direction.
  • To rotate the data, use (in 2D mode) Data > Alter > Axial Rotation. Be sure that you have correctly assigned the vector field first (simply activate the vector layer). This will rotate the vector field accordingly.
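
The same scaling and translation assignments can be scripted through PyTecplot's equation interface (a sketch; variable names such as {x} must match the names in your dataset):

    import tecplot as tp

    tp.data.operate.execute_equation('{x} = {x} / 1000')   # convert millimeters to meters in x
    tp.data.operate.execute_equation('{y} = {y} / 1000')   # and in y
    tp.data.operate.execute_equation('{x} = {x} + 1.2')    # translate 1.2 units along the positive x-axis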

#5 Data is loaded, but more variables must be calculated:

To compute new variables from your existing dataset, use “Specify Equations” for simple cases. Alternatively, use Analyze > Field Variables to select the correct vector field first, and then choose the desired computation (e.g., Analyze > Calculate Variables > Velocity Magnitude). The full spectrum of possibilities is available using 360 together with PyTecplot, 360’s Python API. Use the Python API to compute more complex variables, such as the standard deviation of velocity in each timestep compared to the mean velocity over time, or multidimensional Fourier transforms.
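
As an illustration of that kind of analysis, the sketch below computes each timestep's deviation from the time-mean velocity and writes it back as a new variable (assuming one zone per timestep with identical point counts and a variable named 'Velocity Magnitude'; adjust both to your dataset):

    import numpy as np
    import tecplot as tp

    dataset = tp.active_frame().dataset
    zones = sorted(dataset.zones(), key=lambda z: z.solution_time)
    vel = np.array([z.values('Velocity Magnitude').as_numpy_array() for z in zones])
    mean_vel = vel.mean(axis=0)                      # time-mean velocity at every point
    dataset.add_variable('Velocity Deviation')       # new variable, added to every zone
    for zone, v in zip(zones, vel):
        zone.values('Velocity Deviation')[:] = v - mean_vel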

#6 You find you must perform certain tasks repeatedly:

Use either 360’s intrinsic macro language or PyTecplot to perform repeated tasks programmatically. Either of these options allows a full automation and adaption to your workflow.

Learn More and Try Tecplot 360 For Free!

Again, please feel free to ask the Support Team for help with these tasks or anything else! They are eager to help! Also, take a look at our existing tutorial videos to gain further insight into the wide range of 360’s capabilities.

Try Tecplot 360 for Free

The post For PIV, Tecplot 360 is the Real MVP appeared first on Tecplot.

► What’s New in Tecplot 360 2022 R1
  14 Jul, 2022

In this webinar you’ll learn what’s new in Tecplot 360 2022 R1 as well as important platform and Python version support changes. We will also introduce our Higher-Order Element Technology Preview, which is available as a separate download. See how Tecplot, Inc. is helping usher in the future of CFD by supporting higher-order elements.

Highlights of the 360 Release

  • LaTeX support in Contour Legends
  • TecIO performance improvements
  • Data Loader updates
    • Ansys Fluent 2022 Common Fluids Format (HDF5) support
    • Abaqus 2022 (.odb files)
    • HDF5 loader performance (nearly 10x faster!)
    • EnSight loader updates
  • Platform updates
    • CentOS 8 is no longer supported
    • PyTecplot 1.5.0 released. Python 3.7 or greater is a hard requirement. Note that we officially support the latest Python release and the two previous releases (currently 3.8, 3.9, and 3.10)

Get a Free Trial

Update Your Software

Higher-Order Element Support

The Higher-Order Element Technology Preview is available as a separate download. Log in to MyTecplot and check it out!

Download Beta Here

The post What’s New in Tecplot 360 2022 R1 appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► Ansys adds Zemax optical imaging system simulation to its portfolio
  31 Aug, 2021

Ansys adds Zemax optical imaging system simulation to its portfolio

Ansys has announced that it will acquire Zemax, maker of high-performance optical imaging system simulation solutions. The terms of the deal were not announced, but it is expected to close in the fourth quarter of 2021.

Zemax’s OpticStudio is often mentioned when users talk about designing optical, lighting, or laser systems. Ansys says that the addition of Zemax will enable Ansys to offer a “comprehensive solution for simulating the behavior of light in complex, innovative products … from the microscale with the Ansys Lumerical photonics products, to the imaging of the physical world with Zemax, to human vision perception with Ansys Speos [acquired with Optis]”.

This feels a lot like what we’re seeing in other forms of CAE, for example, when we simulate materials from nano-scale all the way to fully-produced-sheet-of-plastic-scale. There is something to be learned at each point, and simulating them all leads, ultimately, to a more fit-for-purpose end result.

Ansys is acquiring Zemax from its current owner, EQT Private Equity. EQT’s announcement of the sale says that “[w]ith the support of EQT, Zemax expanded its management team and focused on broadening the Company’s product portfolio through substantial R&D investment focused on the fastest growing segments in the optics space. Zemax also revamped its go-to-market sales approach and successfully transitioned the business model toward recurring subscription revenue”. EQT had acquired Zemax in 2018 from Arlington Capital Partners, a private equity firm, which had acquired Zemax in 2015. Why does this matter? Because the path each company takes is different — and it’s sometimes not a straight line.

Ansys says the transaction is not expected to have a material impact on its 2021 financial results.

► Sandvik building CAM powerhouse by acquisition
  30 Aug, 2021

Sandvik building CAM powerhouse by acquisition

Last year Sandvik acquired CGTech, makers of Vericut. I, like many people, thought “well, that’s interesting” and moved on. Then in July, Sandvik announced it was snapping up the holding company for Cimatron, GibbsCAM (both acquired by Battery Ventures from 3D Systems), and SigmaTEK (acquired by Battery Ventures in 2018). Then, last week, Sandvik said it was adding Mastercam to that list … It’s clearly time to dig a little deeper into Sandvik and why it’s doing this.

First, a little background on Sandvik. Sandvik operates in three main spheres: rocks, machining, and materials. For the rocks part of the business, the company makes mining/rock extraction and rock processing (crushing, screening, and the like) solutions. Very cool stuff but not relevant to the CAM discussion.

The materials part of the business develops and sells industrial materials; Sandvik is in the process of spinning out this business. Also interesting but …

The machining part of the business is where things get more relevant to us. Sandvik Machining & Manufacturing Solutions (SMM) has been supplying cutting tools and inserts for many years, via brands like Sandvik, SECO, Miranda, Walter, and Dormer Pramet, and sees a lot of opportunity in streamlining the processes around the use of specific tools and machines. Lightweighting and sustainability efforts in end industries are driving interest in new materials and more complex components, as well as tighter integration between design and manufacturing operations. That digitalization across an enterprise’s areas of business, Sandvik thinks, plays into its strengths.

According to info from the company’s 2020 Capital Markets Day, rocks and materials are steady but slow revenue growers. The company had set a modest 5% revenue growth target but had consistently been delivering closer to 3% — what to do? Like many others, the focus shifted to (1) software and (2) growth by acquisition. Buying CAM companies ticked both of those boxes, bringing repeatable, profitable growth. In an area the company already had some experience in.

Back to digitalization. If we think of a manufacturer as having (in-house or with partners) a design function, which sends the concept on to production preparation, then to machining, and, finally, to verification/quality control, Sandvik wants to expand outwards from machining to that entire world. Sandvik wants to help customers optimize the selection of tools, the machining strategy, and the verification and quality workflow.

The Manufacturing Solutions subdivision within SMM was created last year to go after this opportunity. It’s got 3 areas of focus: automating the manufacturing process, industrializing additive manufacturing, and expanding the use of metrology to real-time decision making.

The CGTech acquisition last year was the first step in realizing this vision. Vericut is prized for its ability to work with any CAM, machine tool, and cutting tool for NC code simulation, verification, optimization, and programming. CGTech is a long-time supplier of Vericut software to Sandvik’s Coromant production units, so the companies knew one another well. Vericut helps Sandvik close that digitalization/optimization loop — and, of course, gives it access to the many CAM users out there who do not use Coromant.

But verification is only one part of the overall loop, and in some senses, the last. CAM, on the other hand, is the first (after design). Sandvik saw CAM as “the most important market to enter due to attractive growth rates – and its proximity to Sandvik Manufacturing and Machining Solutions’ core business.” Adding Cimatron, GibbsCAM, SigmaTEK, and Mastercam gets Sandvik that much closer to offering clients a set of solutions to digitize their complete workflows.

And it makes business sense to add CAM to the bigger offering:

  1. Sandvik has over 100,000 machining customers, many of which are relatively small, and most of which have a low level of digitalization. Sandvik believes it can bring significant value to these customers, while also providing point solutions to much larger clients
  2. Software is attractive — recurring revenue, growth rates, and margins
  3. CAM lets Sandvik grow in strategic importance with its customers, integrating cutting and tool data with process planning, as a way of improving productivity and part quality
  4. The acquisitions are strong in the Americas and Asia, expanding Sandvik’s footprint to a more even global basis

To head off one question: As of last week’s public statements, anyway, Sandvik has no interest in getting into CAD, preferring to leave that battlefield to others, and continue on its path of openness and neutrality.

And because some of you asked: there is some overlap in these acquisitions, but remarkably little, considering how established these companies all are. GibbsCAM is mostly used for production milling and turning; Cimatron is used in mold and die — and with a big presence in automotive, where Sandvik already has a significant interest; and SigmaNEST is for sheet metal fabrication and material requisitioning.

One interesting (to me, anyway) observation: 3D Systems sold Gibbs and Cimatron to Battery in November 2020. Why didn’t Sandvik snap it up then? Why wait until July 2021? A few possible reasons: Sandvik CEO Stefan Widing has been upfront about his company’s relative lack of efficiency in finding/closing/incorporating acquisitions; perhaps it was simply not ready to do a deal of this type and size eight months earlier. Another possible reason: One presumes 3D Systems “cleaned up” Cimatron and GibbsCAM before the sale (meaning, separating business systems and financials from the parent, figuring out HR, etc.) but perhaps there was more to be done, and Sandvik didn’t want to take that on. And, finally, maybe the real prize here for Sandvik was SigmaNEST, which Battery Ventures had acquired in 2018, and Cimatron and GibbsCAM simply became part of the deal. We may never know.

This whole thing is fascinating. A company out of left field, acquiring these premium PLMish assets. Spending major cash (although we don’t know how much because of non-disclosures between buyer and sellers) for a major market presence.

No one has ever asked me about a CAM roll-up, yet I’m constantly asked about how an acquirer could create another Ansys. Perhaps that was the wrong question, and it should have been about CAM all along. It’s possible that the window for another company to duplicate what Sandvik is doing may be closing since there are few assets left to acquire.

Sandvik’s CAM acquisitions haven’t closed yet, but assuming they do, there’s a strong fit between CAM and Sandvik’s other manufacturing-focused business areas. It’s more software, with its happy margins. And, finally, it lets Sandvik address the entire workflow from just after component design to machining and on to verification. Mr. Widing says that Sandvik first innovated in hardware, then in service – and now in software to optimize the component part manufacturing process. That is where the gains will come, he says: in maximizing productivity and tool longevity. Further out, he sees measuring every part to see how the process can be further optimized. It’s a sound investment in the evolution of both Sandvik and manufacturing.

We all love a good reinvention story, and how Sandvik executes on this vision will, of course, determine if the reinvention was successful. And, of course, there’s always the potential for more news of this sort …

► Missed it: Sandvik also acquiring GibbsCAM, Cimatron & SigmaNEST
  25 Aug, 2021

Missed it: Sandvik also acquiring GibbsCAM, Cimatron & SigmaNEST

I missed this last month — Sandvik also acquired Cambrio, which is the combined brand for what we might know better as GibbsCAM (milling, turning), Cimatron (mold and die), and SigmaNEST (nesting, obvs). These three were spun out of 3D Systems last year, acquired by Battery Ventures — and now sold on to Sandvik.

This was announced in July, and the acquisition is expected to close in the second half of 2021 — we’ll find out on Friday if it already has.

At that time, Sandvik said its strategic aim is to “provide customers with software solutions enabling automation of the full component manufacturing value chain – from design and planning to preparation, production and verification … By acquiring Cambrio, Sandvik will establish an important position in the CAM market that includes both toolmaking and general-purpose machining. This will complement the existing customer offering in Sandvik Manufacturing Solutions”.

Cambrio has around 375 employees and in 2020, had revenue of about $68 million.

If we do a bit of math, Cambrio’s $68 million + CNC Software’s $60 million + CGTech’s (Vericut’s maker) $54 million add up to $182 million in acquired CAM revenue. Not bad.

More on Friday.

► Mastercam will be independent no more
  25 Aug, 2021

Mastercam will be independent no more

CNC Software and its Mastercam have been a mainstay among CAM providers for decades, marketing its solutions as independent, focused on the workgroup and individual. That is about to change: Sandvik, which bought CGTech late last year, has announced that it will acquire CNC Software to build out its CAM offerings.

According to Sandvik’s announcement, CNC Software brings a “world-class CAM brand in the Mastercam software suite with an installed base of around 270,000 licenses/users, the largest in the industry, as well as a strong market reseller network and well-established partnerships with leading machine makers and tooling companies”.

We were taken by surprise by the CGTech deal — but shouldn’t be by the Mastercam acquisition. Stefan Widing, Sandvik’s CEO explains it this way: “[Acquiring Mastercam] is in line with our strategic focus to grow in the digital manufacturing space, with special attention on industrial software close to component manufacturing. The acquisition of CNC Software and the Mastercam portfolio, in combination with our existing offerings and extensive manufacturing capabilities, will make Sandvik a leader in the overall CAM market, measured in installed base. CAM plays a vital role in the digital manufacturing process, enabling new and innovative solutions in automated design for manufacturing.” The announcement goes on to say, “CNC Software has a strong market position in CAM, and particularly for small and medium-sized manufacturing enterprises (SME’s), something that will support Sandvik’s strategic ambitions to develop solutions to automate the manufacturing value chain for SME’s – and deliver competitive point solutions for large original equipment manufacturers (OEM’s).”

Sandvik says that CNC Software has 220 employees, with revenue of $60 million in 2020, and a “historical annual growth rate of approximately 10 percent and is expected to outperform the estimated market growth of 7 percent”.

No purchase price was disclosed, but the deal is expected to close during the fourth quarter.

Sandvik is holding a call about this on Friday — more updates then, if warranted.

► Bentley saw a rebound in infrastructure in Q2 but is cautious about China
  18 Aug, 2021

Bentley saw a rebound in infrastructure in Q2 but is cautious about China

Bentley continues to grow its deep expertise in various AEC disciplines — most recently, expanding its focus in underground resource mapping and analysis. This diversity serves it well; read on.

In Q2,

  • Total revenue was $223 million, up 21% as reported. Seequent contributed about $4 million per the quarterly report filed with the US SEC, so almost all of this growth was organic
  • Subscription revenue was $186 million, up 18%
  • Perpetual license revenue was $11 million, down 8% as Bentley continues to focus on selling subscriptions
  • Services revenue was $26 million, up 86% as Bentley continues to build out its Maximo-related consulting and implementation business, the Cohesive Companies

Unlike AspenTech, Bentley’s revenue growth is speeding up (total revenue up 21% in Q2, including a wee bit from Seequent, and up 17% for the first six months of 2021). Why the difference? IMHO, because Bentley has a much broader base, selling into many more end industries as well as to road/bridge/water/wastewater infrastructure projects that keep going, Covid or not. CEO Greg Bentley told investors that some parts of the business are back to — or even better than — pre-pandemic levels, but not yet all. He said that the company continues to struggle in industrial and resources capital expenditure projects, and therefore in the geographies (the Middle East and Southeast Asia) that are the most dependent on this sector. This is balanced against continued success in new accounts and the company’s reinvigorated selling to small and medium enterprises via its Virtuosity subsidiary — and in a resurgence in the overall commercial/facilities sector. In general, it appears that sales to contractors such as architects and engineers lag behind those to owners and operators of commercial facilities — makes sense as many new projects are still on pause until pandemic-related effects settle down.

One unusual comment from Bentley’s earnings call that we’re going to listen for on others: The government of China is asking companies to explain why they are not using locally-grown software solutions; it appears to be offering preferential tax treatment for buyers of local software. As Greg Bentley told investors, “[d]uring the year to date, we have experienced a rash of unanticipated subscription cancellations within the mid-sized accounts in China that have for years subscribed to our China-specific enterprise program … Because we don’t think there are product issues, we will try to reinstate these accounts through E365 programs, where we can maintain continuous visibility as to their usage and engagement”. So, to recap: the government is using taxation to prefer one set of vendors over another, and all Bentley can do (really) is try to bring these accounts back and then monitor them constantly to keep on top of emerging issues. FWIW, in the pre-pandemic filings for Bentley’s IPO, “greater China, which we define as the Peoples’ Republic of China, Hong Kong and Taiwan … has become one of our largest (among our top five) and fastest-growing regions as measured by revenue, contributing just over 5% of our 2019 revenues”. Something to watch.

The company updated its financial outlook for 2021 to include the recent Seequent acquisition and this moderate level of economic uncertainty. Bentley might actually join the billion-dollar club on a pro forma basis — as if the acquisition of Seequent had occurred at the beginning of 2021. On a reported basis, the company sees total revenue between $945 million and $960 million, or an increase of around 18%, including Seequent. Excluding Seequent, Bentley sees organic revenue growth of 10% to 11%.

Much more here, on Bentley’s investor website.

► AspenTech is cautious about F2022, citing end-market uncertainty
  18 Aug, 2021

AspenTech is cautious about F2022, citing end-market uncertainty

We still have to hear from Autodesk, but there’s been a lot of AECish earnings news over the last few weeks. This post starts a modest series as we try to catch up on those results.

AspenTech reported results for its fiscal fourth quarter of 2021 last week. Total revenue was $198 million in FQ4, down 2% from a year ago. License revenue was $145 million, down 3%; maintenance revenue was $46 million, basically flat compared to a year earlier; and services and other revenue was $7 million, up 9%.

For the year, total revenue was up 19% to $709 million, license revenue was up 28%, maintenance was up 4% and services and other revenue was down 18%.

Looking ahead, CEO Antonio Pietri said that he is “optimistic about the long-term opportunity for AspenTech. The need for our customers to operate their assets safely, sustainably, reliably and profitably has never been greater … We are confident in our ability to return to double-digit annual spend growth over time as economic conditions and industry budgets normalize.” The company sees fiscal 2022 total revenue of $702 million to $737 million, which, at the midpoint, is up just $10 million from the final fiscal 2021 total.

Why the slowdown in FQ4 from earlier in the year? And why the modest guidance for fiscal 2022? One word: Covid. And the uncertainty it creates among AspenTech’s customers when it comes to spending precious cash. AspenTech expects its visibility to improve when new budgets are set in the calendar fourth quarter. By then, AspenTech hopes, its customers will have a clearer view of reopening, consumer spending, and the timing of an eventual recovery.

Lots more detail here on AspenTech’s investor website.

Next up, Bentley. Yup. Alphabetical order.

Symscape top

► CFD Simulates Distant Past
  25 Jun, 2019

There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently, CFD simulation focused on existing and future things (think flying cars). Now we see CFD being applied to simulate fluid flow in the distant past (think fossils).

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation

read more

► Background on the Caedium v6.0 Release
  31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature

read more

► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (Illustration only, not part of the study)

read more

► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward direction (Illustration only, not part of the study)

read more

► Wind Turbine Design According to Insects
  14 Jun, 2017

One of nature's smallest aerodynamic specialists - insects - has provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (License: CC BY-SA 2.5, André Karwath)

read more

► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

read more

curiosityFluids top

► Creating curves in blockMesh (An Example)
  29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

y=H\sin\left(\pi x \right)

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
    (-1 0 0)    // 0
    (0 0 0)     // 1
    (1 0 0)     // 2
    (2 0 0)     // 3
    (-1 2 0)    // 4
    (0 2 0)     // 5
    (1 2 0)     // 6
    (2 2 0)     // 7

    (-1 0 1)    // 8    
    (0 0 1)     // 9
    (1 0 1)     // 10
    (2 0 1)     // 11
    (-1 2 1)    // 12
    (0 2 1)     // 13
    (1 2 1)     // 14
    (2 2 1)     // 15
);

blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

edges
(
);

boundary
(
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    lowerWall
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    upperWall
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary containing a list of edge definitions, each specified by a set of interpolation points:

edges
(
        polyLine 1 2
        (
                (0	0       0)
                (0.1	0.0309016994    0)
                (0.2	0.0587785252    0)
                (0.3	0.0809016994    0)
                (0.4	0.0951056516    0)
                (0.5	0.1     0)
                (0.6	0.0951056516    0)
                (0.7	0.0809016994    0)
                (0.8	0.0587785252    0)
                (0.9	0.0309016994    0)
                (1	0       0)
        )

        polyLine 9 10
        (
                (0	0       1)
                (0.1	0.0309016994    1)
                (0.2	0.0587785252    1)
                (0.3	0.0809016994    1)
                (0.4	0.0951056516    1)
                (0.5	0.1     1)
                (0.6	0.0951056516    1)
                (0.7	0.0809016994    1)
                (0.8	0.0587785252    1)
                (0.9	0.0309016994    1)
                (1	0       1)
        )
);

The sub-dictionary above is just a list of points on the curve y=H\sin(\pi x). The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!
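
As an aside (not part of the original post), rather than typing the interpolation points by hand, you can generate the polyLine entries with a short script. Here is a minimal Python sketch, assuming the same bump height H = 0.1 sampled at 11 points between vertices 1 and 2 (and 9 and 10):

import numpy as np

# Bump definition: y = H*sin(pi*x) on 0 <= x <= 1, matching the dict above
H = 0.1
x = np.linspace(0.0, 1.0, 11)
y = H*np.sin(np.pi*x)

# Print polyLine entries for the front (z = 0) and back (z = 1) edges
for v0, v1, z in [(1, 2, 0.0), (9, 10, 1.0)]:
    print('        polyLine %d %d' % (v0, v1))
    print('        (')
    for xi, yi in zip(x, y):
        print('                (%g %.10g %g)' % (xi, yi, z))
    print('        )')

Paste the output into the edges sub-dictionary; more sample points (or the spline keyword) would give a smoother bump.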

Cheers.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
  28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, the main thing to understand is that they represent visualizations of the first and second derivatives, respectively, of the flow field's refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, shadowgraph has no directionality: it shows you the Laplacian of the refractive index field (or density field).
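
(Not part of the original post.) If you want to see this math in action outside of ParaView, here is a minimal NumPy sketch, assuming you already have a 2D density field rho on a uniform grid; the placeholder field below is only there to make the snippet runnable:

import numpy as np

# Placeholder density field on a uniform grid (replace with your own data)
nx, ny = 300, 200
dx = dy = 0.01
x, y = np.meshgrid(np.arange(nx)*dx, np.arange(ny)*dy)
rho = 1.0 + 0.2*np.exp(-((x - 1.5)**2 + (y - 1.0)**2)/0.05)

# Synthetic Schlieren: first derivatives of density
drho_dy, drho_dx = np.gradient(rho, dy, dx)   # derivatives along axis 0 (y) and axis 1 (x)
schlieren_horizontal_knife = drho_dy          # horizontal knife edge -> vertical derivative
schlieren_vertical_knife = drho_dx            # vertical knife edge -> horizontal derivative

# Synthetic Shadowgraph: Laplacian of density (divergence of the gradient)
shadowgraph = np.gradient(drho_dy, dy, axis=0) + np.gradient(drho_dx, dx, axis=1)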

In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

So how do we create these images in paraview?

Well, as you might expect from the introduction, we simply do this by visualizing the gradients of the density field.

In ParaView the necessary tool for this is:

Gradient of Unstructured DataSet:

Finding “Gradient of Unstructured DataSet” using the Filters-> Search

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

Change the “Scalar Array” Drop down to the density field (rho), and change the name to Synthetic Schlieren

To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:

This is NOT a synthetic Schlieren Image – but it sure looks nice

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

Horizontal Knife Edge

Vertical Knife Edge

Now how about ShadowGraph?

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

\nabla^2\left(\cdot\right) = \nabla \cdot \nabla\left(\cdot\right)

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, deselect “Compute Gradient”, select “Compute Divergence”, and change the divergence array name to Shadowgraph.

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Shadowgraph Image

So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these quantities are: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
  24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/

The law is given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}
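
Matching the two forms term by term (a small step worth writing out) shows how the simplified coefficients relate to the reference values:

A_s = \mu_o\frac{T_o + C}{T_o^{3/2}}, \qquad T_s = C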

In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.

So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find; if you do find them, they can be hard to reference, and you may not know how accurate they are. Second, creating your own Sutherland coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of Nitrogen N2 – and we can’t find the coefficients anywhere – or for the second reason above, you’ve decided its best to create your own.

By far the simplest way to achieve this is using Python and the Scipy.optimize package.

Step 1: Get Data

The first step is to find some well known, and easily cited, source for viscosity data. I usually use the NIST webbook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:

Temperature (K)    Viscosity (Pa.s)
200                0.000012924
400                0.000022217
600                0.000029602
800                0.000035932
1000               0.000041597
1200               0.000046812
1400               0.000051704
1600               0.000056357
1800               0.000060829
2000               0.000065162

This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in this range, viscosity should be only temperature dependent.)

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next we input the data:

T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]

mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]

Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least squares minimization to solve for the unknown coefficients. The output variable popt is an array that contains our desired variables As and Ts.

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]

Now we can just output our data to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]

mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
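
To put an actual number on that error (not shown in the original post), you can append a couple of lines to the script above, reusing the T, mu, As, and Ts variables already defined:

# Relative error of the fit at each NIST data point
mu_fit = sutherland(np.array(T), As, Ts)
rel_err = np.abs(mu_fit - np.array(mu))/np.array(mu)
print('max relative error  = %.3f %%' % (100*rel_err.max()))
print('mean relative error = %.3f %%' % (100*rel_err.mean()))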

Summary

In this post, we looked at how to use a database of viscosity-temperature data and the Python package scipy to solve for our unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, which was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.

This task could also easily be accomplished using the Matlab curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
  23 Apr, 2019

The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep with any other software.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop down menus, point and click options etc), it is equally likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and Python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but it’s not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish

(b) An introduction to computational fluid dynamics – the finite volume method – by H K Versteeg and W Malalasekera

(c) Computational fluid dynamics – the basics with applications – By John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion. But I have found this to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like – but I can’t really speak to how well they work, mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (I guess unless you’re a gamer – and even then more and more games are now on Linux). Not only that, but the VAST majority of forums and troubleshooting associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful and overall the forum is an extremely positive environment for working out the kinks with your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:

http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf

(8) CFL Number Matters

If you are running a transient case, the Courant-Friedrichs-Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time-step is too large you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem and the quality of your grid etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the timestep by a factor of 2. More often than not, this solves the problem.
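
For reference, the Courant number the solver reports is essentially (per cell, for a cell of size \Delta x and local velocity magnitude |u|):

Co = \frac{|u|\,\Delta t}{\Delta x}

so halving the timestep halves Co, and refining the mesh at a fixed timestep raises it.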

For large time stepping, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam

For the record, this point falls under point (1), Understand CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

https://wiki.openfoam.com/%223_weeks%22_series

If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take 3 weeks. This touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude from the get-go that they should be using a different software. They think somehow Open-Source means that it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open source package. The number of OpenFOAM citations has grown every year consistently (
https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern, and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.

Summary

Basically, if you are starting out in CFD or OpenFOAM, you need to put in time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
  22 Apr, 2019
Airfoil Mesh Generated with curiosityFluidsAirfoilMesher.py

Here I will present something I’ve been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (who knows if you are), you simulate a lot of airfoils. Partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain. Especially if you are starting from scratch.

The main ways that I have meshed airfoils to date have been:

(a) Mesh it in a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, I could use Pointwise – oh how I miss it.

But getting the mesh to look good was always sort of tedious. So I attempted to come up with a Python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(d) be mostly automatic (few user inputs)
(e) have good mesh quality – pass all checkMesh tests
(f) quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(g) be able to do both closed and open trailing edges
(h) be able to handle most airfoils (up to high cambers)
(i) automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) through (f). Presently, it can only handle airfoils with a closed trailing edge. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!

Hopefully, this is useful to some of you out there!

Download

You can download the script here:

https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.

Instructions

(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh

PS
You need to run this with python 3, and you need to have numpy installed

Inputs

The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil dat file should have a chordlength of 1. This variable allows you to scale the domain to a different size.

airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge

TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading and can help improve mesh uniformity

trailingBlockAngle: This is an angle in degrees that sets the angle of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
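
To make this concrete, the user-editable section of the script might look something like the hypothetical sketch below. The variable names are taken from the descriptions above, but the exact names, values, and format in the actual script may differ, so check the script itself:

# Hypothetical input block -- illustrative values only, check the actual script
airfoilFile         = 'naca0012.dat'   # Selig-format .dat file in the case root
ChordLength         = 1.0
DomainHeight        = 20.0             # in chords
WakeLength          = 20.0             # in chords
firstLayerHeight    = 1e-5
growthRate          = 1.1
MaxCellSize         = 0.05
BLHeight            = 0.1
LeadingEdgeGrading  = 0.2
TrailingEdgeGrading = 5.0
inletGradingFactor  = 1.0
trailingBlockAngle  = 5.0              # degrees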

Examples

12% Joukowski Airfoil

Inputs:

With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in paraView:

Clark-y Airfoil

The clark-y has some camber, so I thought it would be a logical next test to the previous symmetric one. The inputs I used are basically the same as the previous airfoil:


With these inputs, the result looks like this:


Mesh Quality:


Visualizing the mesh quality:

MH60 – Flying Wing Airfoil

Here is an example of a flying wing airfoil (tested since the trailing edge is tilted upwards).

Inputs:


Again, these are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change the MaxCellSize, firstLayerHeight, and gradings, some modification may be required. However, if you just halve the MaxCellSize and halve the firstLayerHeight, you “should” get a similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality

Summary

Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.

The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!

Comments and bug reporting encouraged!

DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM®  and OpenCFD®  trademarks.

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
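
The interactive calculator itself doesn’t come through in a feed, but for reference the standard relations behind such a tool are easy to script. A minimal Python sketch, assuming a calorically perfect gas (this is not the actual calculator’s code):

import numpy as np

def normal_shock(M1, gamma=1.4):
    # Standard normal-shock relations for a calorically perfect gas
    M2 = np.sqrt((1 + 0.5*(gamma - 1)*M1**2)/(gamma*M1**2 - 0.5*(gamma - 1)))
    p_ratio = 1 + 2*gamma/(gamma + 1)*(M1**2 - 1)            # p2/p1
    rho_ratio = (gamma + 1)*M1**2/((gamma - 1)*M1**2 + 2)    # rho2/rho1
    T_ratio = p_ratio/rho_ratio                              # T2/T1
    return M2, p_ratio, rho_ratio, T_ratio

print(normal_shock(2.0))  # M2 ~ 0.577, p2/p1 = 4.5, rho2/rho1 ~ 2.67, T2/T1 ~ 1.69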


If you found this useful, and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions at www.stfsol.com for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or their suitability or outcome, for any given purpose.

