
CFD Blog Feeds

Another Fine Mesh

► Roundup of CFD Success Stories
  29 Jun, 2020
If it seems like we’ve been sharing a lot recently about our work with our friends and partners, it’s true – we have been. During the first several months of “work from home” we chose to showcase a lot of … Continue reading
► This Week in CFD
  26 Jun, 2020
This week’s news is highly visual with cool images of various solutions and grids. What’s new are a couple of insightful interviews with CAE folks. And how about a job in quantum computing for CFD? Shown here is an example … Continue reading
► This Week in CFD
  19 Jun, 2020
This week’s CFD news may take you some time to get through because it includes several worthwhile, long reads. One is about simulation driven design and democratization, three are about computing including HPC, GPU, and AWS, and another fluid mystery … Continue reading
► Webinar: Automated Meshing and Adaptive Remeshing
  17 Jun, 2020
In what promises to be a fantastic webinar, Bombardier’s Amine Ben Haj Ali talks with Pointwise’s Rick Matus about the automatic mesh generation system he built using Pointwise and with which over 250,000 meshes have been generated since 2017. Live … Continue reading
► Case Study: Accurate Performance Predictions for Marine Propellers
  15 Jun, 2020
This case study presents the benchmark validation of CFD simulation results of the Potsdam Propeller Test Case (PPTC) using CFD Support’s TCFD flow solver with a Pointwise mesh. PPTC is a marine propulsor that was extensively measured by SVA Potsdam. … Continue reading
► This Week in CFD
  12 Jun, 2020
This week’s CFD news includes several event updates from which we infer that people are starting to think beyond our pandemic lockdown. And if you’ve ever wondered what y+ to use in your grid, Leap’s series of articles on that … Continue reading

F*** Yeah Fluid Dynamics

► Internal Waves in the Andaman Sea
    2 Jul, 2020

Differences in temperature and salinity create distinct layers within the ocean. When combined with flow over submerged topography — underwater canyons, mountains, and reefs — it makes waves. But those waves aren’t always apparent when sitting at the surface. Instead, they travel along those ocean layers as internal waves that can be hundreds of meters tall.

When the sun glints just right off the ocean, these massive internal waves can be caught by satellite imagery, as shown in the above image of the Andaman Sea near Thailand and Myanmar. Even seemingly calm waters can roil in the deep. (Image credit: USGS; via NASA Earth Observatory)

► The Vortex Beneath a Drop
    1 Jul, 2020

While we’re most used to seeing levitating Leidenfrost droplets on a solid surface, such drops can also form above a liquid bath. In fact, the smoothness of the bath’s surface, combined with mechanisms discussed in a new study, means that drops will levitate at a cooler temperature over a liquid than they will over a solid surface.

Researchers found that a donut-shaped vortex forms in the bath beneath a levitating droplet, but the direction of the vortex’s circulation is not always the same. For some liquids, the flow moves radially outward from beneath the drop. In this case, researchers found that the dominant force was shear stress caused by the vapor escaping from under the droplet.

With other droplet liquids, the flow direction instead moved inward, forming a sinking plume beneath the center of the drop. In this situation, researchers found that evaporative cooling dominated. As the liquid beneath the droplet cooled, it became denser and sank. At the same time, the lower temperature changed the bath’s local surface tension, creating the inward surface flow through the Marangoni effect. (Image credit: F. Cavagnon; research credit: B. Sobac et al.)

► Making Waves
  30 Jun, 2020

The Seoul Aquarium is now home to an enormous crashing wave, courtesy of design company d’strict. Check out several different views of the anamorphic illusion in their video above. There’s no word on the techniques used to generate the animation, but it’s certainly a cool visual! (Image and video credit: d’strict; via Colossal)

► How Animals Stay Dry in the Rain
  29 Jun, 2020

Getting wet can be a problem for many animals. A wet insect could quickly become too heavy to fly, and a wet bird can struggle to stay warm. But these animals have a secret weapon: tiny, multi-scale roughness on their wings, scales, and feathers that helps them shed water. Watch the latest FYFD video to learn how! (Image and video credit: N. Sharp; research credit: S. Kim et al.)

► Traffic Flow and Phantom Jams
  26 Jun, 2020

We’ve all experienced the frustration of traffic jams that seem to come from nowhere — standstills that occur with no accident, construction, or obstacle in sight. Traffic shares a lot of similarities with fluid flows, including its waves and instabilities.

These disturbances propagate and grow when traffic surpasses a critical density. Once that happens, any small speed adjustment made by a lead driver gets amplified by the larger and larger braking of each driver downstream. Effectively, this creates a wave of slower speed and higher density that travels downstream through the traffic.

Each driver brakes more than the last largely because they can’t tell what the conditions upstream of them are. But that lack of knowledge may be less of an issue for driverless cars, which have the potential to communicate with cars and traffic sensors ahead of them. With enough automated vehicles on the highway, phantom traffic jams may become a thing of the past. (Video and image credit: TED-Ed)

► New Details on the Sun’s Surface
  25 Jun, 2020

As part of its shakedown, the new Inouye Solar Telescope has captured the surface of the sun in stunning new detail. Seen here are some of the sun’s turbulent convection cells, each about the size of the state of Texas. Hot plasma rises in the center of each cell, cools, and then sinks near the dark edges. Also visible within these dark borders are bright spots thought to mark magnetic fields capable of channeling energy out into the corona. Researchers hope the new telescope will help them uncover the physics behind these processes. (Image and video credit: Inouye Solar Telescope)

Convection cells on the sun.

Editor’s note: Like several other telescopes located in Hawai’i, the Inouye Solar Telescope was built against the wishes of many native Hawaiians. Although FYFD supports scientific progress, it is my personal belief that scientific advances should not come at the expense of indigenous populations. I strongly urge my scientific colleagues to listen to and work alongside those with concerns about future facilities.

Symscape

► CFD Simulates Distant Past
  25 Jun, 2019

There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently CFD simulation was focused on existing and future things, think flying cars. Now we see CFD being applied to simulate fluid flow in the distant past, think fossils.

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation

read more

► Background on the Caedium v6.0 Release
  31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature

read more

► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)

read more

► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward Direction (illustration only, not part of the study)

read more

► Wind Turbine Design According to Insects
  14 Jun, 2017

One of nature's smallest aerodynamic specialists - the insect - has provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (license: CC BY-SA 2.5, André Karwath)

read more

► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

read more

CFD Online

► solids4Foam + sixDoFRigidBodyDisplacement
    8 Mar, 2020
Quote:
Originally Posted by bigphil
  • D is the total displacement
  • DD is the increment of displacement i.e. DD = D - D_0
  • D_0 is the old-time total displacement
  • U is the velocity
  • pointD is D interpolated (in the solver) to the mesh points: this interpolation is a bit more accurate than the interpolation by ParaView
  • similarly pointDD is the point version of DD

    So when using "warp by vector", use D or pointD fields (D will be interpolated to the points by ParaView so it should be almost the same as pointD, but pointD is more correct).

    Philip

Philip, Thank you so much for clarifying the variables. It helps set the boundary conditions and analyze the results!
► Understanding CFD, Turbulence, turbo-machinery and combustion
    6 Mar, 2020
► Physical Aspects of Lagrangian Particle Tracking (DPM)
    1 Mar, 2020
This blog is the first of a two-part series on DPM modeling. It is not to be considered a professional document, nor is it meant for those who consider themselves experts in DPM. It is meant as a guide for beginners who want to understand various aspects of DPM modeling. The terminology is meant to be general, but it is closer to that used in Fluent. Of course, experts are also welcome to go through it and share their comments.

Assume that there is a need to track the evolution of a contaminant in a canal. There are at least two ways to do it: place a few sensors at fixed locations, or attach a sensor to multiple boats and let them traverse the canal. The former, watching fixed points, is the Eulerian approach; the latter, following the boats, is called Lagrangian.

The boats may or may not have any drives to control their motion. Now, these boats may be very small and light, just like the paper boats from childhood. These will go with the flow without disturbing the flow of the water in any noticeable manner. This is known as one-way coupling in DPM or LPT terminology. Or the boats could have rotors, or be big in size or heavy in mass, and, consequently, the flow of the canal will be affected by their presence. Even the distribution of the contaminant could be affected. This is called two-way coupling.

Now, imagine having these big or heavy boats instead of light ones, or a mix of the two. If there are too many boats in the canal, a wave generated by one will affect the motion of all the nearby boats. That is three-way coupling. And if their number increases so much that it feels like traffic on a road, with boats literally hitting each other to the extent that it is the collisions rather than the flow that decide the motion, you have what is called four-way coupling. This is where models like DEM come into play.

Some users think of DEM as a model separate from DPM. The reality is that particles are always tracked using DPM, but when there are so many of them that the forces generated by their collisions with each other affect their motion, some model is required to predict this force. DEM predicts that force.

Let's consider one of these heavy boats. What are the bare minimum forces acting on it?

1. Its weight, i.e., gravity pulling it towards the center of the earth.

2. The resistance from the air as well as the water (only water if it is a submarine). This is called the drag force and depends on various aspects, such as contact area with the fluid, surface roughness, shape, and the presence of other boats or submarines around it. In fact, the method to address three-way coupling is the addition of extra forces, or a modification of the drag, due to the presence of other particles around the particle being tracked. And there is lift along with the drag, always normal to it.

3. A large wave could also push the boat in a particular direction. Essentially, a wave brings a pressure difference because of its height, and the boat is pushed from higher pressure to lower pressure. This is called the pressure-gradient force.

4. Other forces could be generated by the rolling of the boat, its rotor at the back, or a tow line. And there is buoyancy pushing it up.

For particles that are not as big as boats, even smaller than paper boats, other forces also need to be considered. But before we go into that, let's take a look at the shape of the particles.

As far as LPT is concerned, particles are assumed to be spherical. But particles are also treated as point particles. So how is the spherical shape important? Or how is shape important at all? To rephrase: how is the shape taken into account? It is via the exchange coefficients, the models that define how a particle and the continuous fluid around it exchange various fields, such as momentum, mass, and thermal energy.

If the drag model for a sphere is used, then the user is clearly assuming the particles to be spheres. But let's say the user wants to assume the particles are oblong spheroids or cylinders. Can they use the drag coefficient for a cylinder or an oblong spheroid? Ideally, yes. Practically, there are a few issues.

Since LPT tracks only the positions of the particles, their orientations are unknown. Drag for a cylinder certainly depends on its orientation. A sphere, on the other hand, is invariant under rotations: no matter which axis you rotate it about, as long as the axis passes through its center, its orientation does not change. So a single drag coefficient can be applied to all particles. For oblong spheroids, the orientation is required but unavailable.

One plausible approach would be to use a probability distribution function for the orientation of the particles and apply drag and other forces based on that. But usually this is not justified, because the particles are much smaller than the characteristic length scale of the flow.

Mass and heat transfer are also affected by the orientation. Though the area for drag, reaction, or heat convection remains the same, the Nusselt and Sherwood numbers do not; they are affected by the orientation just like the drag and lift coefficients. Therefore, particles are usually assumed or expected to be spherical, or only slightly non-spherical, for DPM to be applicable.
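
As a concrete example of such an exchange coefficient, here is a short sketch of the widely used Schiller-Naumann drag correlation for spherical particles (illustrative only; tools like Fluent offer this among several drag laws, and the property values below are assumptions):
Code:
def drag_coefficient(Re):
    # Schiller-Naumann drag for a sphere; valid up to Re ~ 1000,
    # beyond which a constant Cd of about 0.44 is commonly used
    Re = max(Re, 1e-12)                  # avoid division by zero
    if Re < 1000.0:
        return 24.0 / Re * (1.0 + 0.15 * Re**0.687)
    return 0.44

# Example: 50-micron droplet slipping through air at 1 m/s
rho_f, mu_f = 1.225, 1.8e-5              # air density (kg/m^3), viscosity (Pa s)
d_p, u_rel = 50e-6, 1.0                  # particle diameter (m), slip velocity (m/s)
Re_p = rho_f * u_rel * d_p / mu_f        # particle Reynolds number
print(Re_p, drag_coefficient(Re_p))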

Another important aspect is the volume fraction. Remember the waves from one boat affecting the flow of nearby boats? Standard DPM is valid until that happens. In other words, DPM is valid if the volume occupied by the particles is so small that they do not affect their neighbors in any significant manner. Usually, it is assumed that LPT is valid if the particles occupy less than 10% of the volume. Do note that this condition needs to be satisfied locally as well as globally, i.e., if one cell contains one particle, then the cell volume should be at least 10 times the particle volume. There is no such limit on mass loading or mass fraction, though.

Now, these particles have to come from somewhere. That is what is called an injection. An injection requires the important parameters to be specified, such as the initial position and velocity vectors, the diameter for the calculation of mass and area (remember, it is a sphere), the total mass flow rate, the temperature (if temperature is important for the simulation and the thermal energy equation is being solved), and the time duration of the injection.

Note that no particle count is specified here. CFD tools determine this number by taking the ratio of the mass flow rate to the mass of each particle, which is based on the material density and diameter. Usually, this turns out to be in the millions. Since each particle requires one equation (this will be covered in part two), millions of equations need to be solved. Now, that's expensive.

The solution is to use representative particles, otherwise known as parcels. Each parcel can represent any number of particles, from one to hundreds of thousands, and this makes the situation workable. But what is the basis for grouping? Particles that have similar momentum, similar mass, similar position, or some other exploitable similarity can be tracked together. And that's how parcels are tracked. The number of particles per parcel can be controlled by the user.
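
To see why parcels are needed, here is a back-of-the-envelope sketch with assumed, illustrative numbers:
Code:
import math

mdot = 0.01                              # injected mass flow rate (kg/s)
rho_p = 1000.0                           # particle material density (kg/m^3)
d_p = 50e-6                              # particle diameter (m); spheres assumed
m_p = rho_p * math.pi / 6.0 * d_p**3     # mass of a single particle

n_dot = mdot / m_p                       # particles injected per second
print('particles per second: %.3g' % n_dot)   # ~1.5e8, far too many to track

particles_per_parcel = 1e4               # user-controlled parcel size
print('parcels per second: %.3g' % (n_dot / particles_per_parcel))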

So, once the tool knows the initial conditions, provided via the injections, and the forces acting on each particle, all that remains is to solve a rather simple ODE: Newton's second law.

m\frac{d^2\vec{x}}{dt^2} = \vec{F}_d + \vec{F}_b + m\vec{g} + \dots

As you can see, the equation has only one independent variable: time. So particle tracking is always transient. The solution procedure for this, and the setup for DPM in Fluent, will make up the second part of this blog. Do not expect a tutorial, but rather the aspects that need to be looked after while setting it up.
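
Before part two, here is a minimal forward-Euler sketch of what integrating that ODE looks like for a single particle under drag and gravity (illustrative only, with assumed properties; real DPM solvers use more robust integration schemes):
Code:
import numpy as np

rho_f, mu_f = 1.225, 1.8e-5          # carrier fluid: air
rho_p, d_p = 1000.0, 50e-6           # particle: 50-micron droplet
m_p = rho_p * np.pi / 6.0 * d_p**3   # particle mass (sphere assumed)
g = np.array([0.0, -9.81, 0.0])
u_f = np.array([1.0, 0.0, 0.0])      # uniform carrier flow (one-way coupling)

x = np.zeros(3)                      # particle position
u = np.zeros(3)                      # particle velocity
dt = 1e-5                            # time step (s)

for step in range(1000):
    u_rel = u_f - u
    Re = rho_f * np.linalg.norm(u_rel) * d_p / mu_f
    Cd = 24.0 / max(Re, 1e-12) * (1.0 + 0.15 * Re**0.687)  # Schiller-Naumann
    F_d = 0.125 * rho_f * Cd * np.pi * d_p**2 * np.linalg.norm(u_rel) * u_rel
    a = F_d / m_p + g                # Newton's second law
    u = u + a * dt                   # forward-Euler update
    x = x + u * dt

print(x, u)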

Until next time...
► Quick notes on testing optimization flags with OpenFOAM et al
  21 Jan, 2020
Greetings to all!

Tobi sent me an email earlier today related to this and I might as well leave a public note as well, to share with everyone the suggestions I had... so this is a quick copy-paste-adapt for future reference, until I or anyone else bothers with writing this at openfoamwiki.net

I have no idea yet for the current generation of Ryzen CPUs (Ryzen 3000 series), but I do know of this report for EPYC: http://www.prace-ri.eu/best-practice-guide-amd-epyc
If you look for Table 5, you will see the options they suggest for GCC/G++.

However, the "znver1" architecture is possibly not the best for this generation of Ryzen/Threadripper... there is an alternative, which is to use:
Code:
-march=native -mtune=native
It will only work properly with a recent GCC version for the more recent CPUs.

Beyond this, it might take some trial and error. Some guidelines are given here: https://wiki.gentoo.org/wiki/GCC_optimization

You can use the following strategy to test various builds with different optimization flags:
  1. Go into the folder "wmake/rules/linux64Gcc"
  2. Copy the files "cOpt" and "c++Opt" to another name, for example: "cOptNative" and "c++OptNative"
  3. "cOPT" and "c++OPT" are the lines that need to be updated.
  4. Create a new alias in your ".bashrc" file for this, for example:
    Code:
    alias ofdevNative='source $HOME/OpenFOAM/OpenFOAM-dev/etc/bashrc WM_COMPILE_OPTION=OptNative'
  5. Start a new terminal and activate this alias ofdevNative.
  6. Then run ./Allwmake inside "OpenFOAM-dev".
  7. Repeat the same strategy with other names, so that you can do several builds with small changes to the optimization flags.
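
For reference, after step 3 the copied file might look like this (a sketch; keep whatever else your original "cOpt"/"c++Opt" files contain and change only the flags):
Code:
# c++OptNative - copy of c++Opt with native-architecture flags
c++DBUG     =
c++OPT      = -O3 -march=native -mtune=native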

Warning: Last time I checked, AVX and AVX2 are not used by OpenFOAM, so don't bother with them.

Best regards,
Bruno
► Mixing of Ammonia and Exhaust
  16 Aug, 2019
Dear Foamers,

In my thesis I worked with static mixers.
If you would like to see my case, you can find it here.
https://www.dropbox.com/sh/5rndjj0qs...Wci0dlNqa?dl=0
Feel free to ask!
► Determination of mixing quality/ uniformity index
  16 Aug, 2019
Dear guys,

For a long time I had problems determining the mixing quality of a mixing line. Now I've come across a usable formula, and I would like to share it with you.
It is the degree of uniformity, also called the uniformity index.
The calculation is cell-based.
U = 1 - \frac{\sum_{i=1}^{N} |C_i - C_m|}{2 N C_m}
with N the number of cells,
C_i the concentration in cell i,
and C_m the arithmetic mean concentration
C_m = \frac{\sum_{i=1}^{N} C_i}{N}
The easiest way is to export the cell concentrations of the considered region (the outlet) and evaluate the formula in an Excel file.
An example is shown in my public dropbox:
https://www.dropbox.com/sh/vm5qlawb0j611dp/AAD51PsCxgc4CUwMmBNWIqIxa?dl=0
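If you prefer scripting over Excel, here is a minimal Python sketch of the same cell-based calculation (it assumes you exported the outlet cell concentrations to a hypothetical plain-text file, one value per line):
Code:
import numpy as np

C = np.loadtxt('outlet_concentrations.txt')    # one concentration per cell
Cm = C.mean()                                  # arithmetic mean concentration
U = 1.0 - np.abs(C - Cm).sum() / (2.0 * len(C) * Cm)
print('uniformity index U = %.4f' % U)         # U = 1 means perfectly uniform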
Greetings Philipp

curiosityFluids

► Creating curves in blockMesh (An Example)
  29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

y = H\sin\left(\pi x\right)

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
    (-1 0 0)    // 0
    (0 0 0)     // 1
    (1 0 0)     // 2
    (2 0 0)     // 3
    (-1 2 0)    // 4
    (0 2 0)     // 5
    (1 2 0)     // 6
    (2 2 0)     // 7

    (-1 0 1)    // 8    
    (0 0 1)     // 9
    (1 0 1)     // 10
    (2 0 1)     // 11
    (-1 2 1)    // 12
    (0 2 1)     // 13
    (1 2 1)     // 14
    (2 2 1)     // 15
);

blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

edges
(
);

boundary
(
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    lowerWall
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    upperWall
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary containing a list of interpolation points for each curved edge:

edges
(
        polyLine 1 2
        (
                (0	0       0)
                (0.1	0.0309016994    0)
                (0.2	0.0587785252    0)
                (0.3	0.0809016994    0)
                (0.4	0.0951056516    0)
                (0.5	0.1     0)
                (0.6	0.0951056516    0)
                (0.7	0.0809016994    0)
                (0.8	0.0587785252    0)
                (0.9	0.0309016994    0)
                (1	0       0)
        )

        polyLine 9 10
        (
                (0	0       1)
                (0.1	0.0309016994    1)
                (0.2	0.0587785252    1)
                (0.3	0.0809016994    1)
                (0.4	0.0951056516    1)
                (0.5	0.1     1)
                (0.6	0.0951056516    1)
                (0.7	0.0809016994    1)
                (0.8	0.0587785252    1)
                (0.9	0.0309016994    1)
                (1	0       1)
        )
);

The sub-dictionary above is just a list of points on the curve y=H\sin(\pi x) with H = 0.1. The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method is spline.
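
Incidentally, the interpolation points above can be generated with a few lines of Python rather than typed by hand (a sketch, assuming H = 0.1 and 11 points per edge; adjust to taste):

import numpy as np

H = 0.1                                   # bump height
x = np.linspace(0.0, 1.0, 11)             # interpolation points along the bump
y = H * np.sin(np.pi * x)

# one edge on the front plane (z = 0) and one on the back plane (z = 1)
for (v0, v1), z in (((1, 2), 0), ((9, 10), 1)):
    print('        polyLine %d %d' % (v0, v1))
    print('        (')
    for xi, yi in zip(x, y):
        print('                (%g %.10g %d)' % (xi, yi, z))
    print('        )')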

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!

Cheers.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
  28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, shadowgraph has no direction and shows you the Laplacian of the refractive index field (or density field).

In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

So how do we create these images in paraview?

Well, as you might expect from the introduction, we simply do this by visualizing the gradients of the density field.

In ParaView the necessary tool for this is:

Gradient of Unstructured DataSet:

Finding “Gradient of Unstructured DataSet” using the Filters-> Search

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

Change the “Scalar Array” Drop down to the density field (rho), and change the name to Synthetic Schlieren

To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:

This is NOT a synthetic Schlieren Image – but it sure looks nice

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude, and (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

Horizontal Knife Edge

Vertical Knife Edge

Now how about ShadowGraph?

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

\nabla^2 \phi = \nabla \cdot \left(\nabla \phi\right)

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!
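
If you want to sanity-check these two operations outside of ParaView, the same computation is easy to reproduce with numpy on a 2D density array (a toy sketch, not the ParaView workflow itself):

import numpy as np

# Toy density field: a smooth blob on a uniform background
x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
rho = 1.0 + 0.5 * np.exp(-(x**2 + y**2) / 0.05)

drho_dy, drho_dx = np.gradient(rho)   # "Schlieren": first derivatives of density
d2y, _ = np.gradient(drho_dy)         # second derivative in y
_, d2x = np.gradient(drho_dx)         # second derivative in x
shadowgraph = d2x + d2y               # "Shadowgraph": divergence of the gradient

A vertical-knife-edge Schlieren image corresponds to a grayscale plot of drho_dx, and a horizontal knife edge to drho_dy.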

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, deselect “Compute Gradient”, select “Compute Divergence”, and change the divergence array name to Shadowgraph.

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Shadowgraph Image

So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these fields mean: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
  24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/

The law is given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}

In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can simply create your own Sutherland coefficients using least-squares fitting Python 3.

So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find. If you happen to find them, they can be hard to reference, and you may not know how accurate they are. So creating your own Sutherland coefficients makes a ton of sense from an academic point of view. In your thesis or paper, you can say that you created them yourself, and not only that, you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of Nitrogen N2 – and we can’t find the coefficients anywhere – or for the second reason above, you’ve decided its best to create your own.

By far the simplest way to achieve this is using Python and the Scipy.optimize package.

Step 1: Get Data

The first step is to find some well-known, and easily cited, source for viscosity data. I usually use the NIST WebBook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:

Temperature (K)   Viscosity (Pa.s)
200               0.000012924
400               0.000022217
600               0.000029602
800               0.000035932
1000              0.000041597
1200              0.000046812
1400              0.000051704
1600              0.000056357
1800              0.000060829
2000              0.000065162

This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in these ranges viscosity should be temperature dependent only.)

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next we input the data:

T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]

mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]

Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least squares minimization to solve for the unknown coefficients. The output variable popt is an array that contains our desired variables As and Ts.

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]

Now we can just output our data to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]

mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

And the results for nitrogen gas in this range are As = 1.55902E-6 and Ts = 168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
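
Since part of the point was being able to quote an error, a two-line addition to the script above quantifies the quality of the fit over the range used:

# Append to the script above: maximum relative error of the fit
resid = (sutherland(np.array(T), As, Ts) - np.array(mu)) / np.array(mu)
print('max relative error: {:.2%}'.format(np.abs(resid).max()))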

Summary

In this post, we looked at how we can use a database of viscosity-temperature data and the Python package scipy to solve for our unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, and the data was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.

This task could also easily be accomplished using the Matlab curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
  23 Apr, 2019

The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep as for any other software.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is equally as likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but it’s not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish

(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method by H. K. Versteeg and W. Malalasekera

(c) Computational Fluid Dynamics: The Basics with Applications by John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion, but I have found it to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using docker and the like, but I can’t really speak to how well they work, mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (I guess unless you’re a gamer, and even then more and more games are now on Linux). Not only that, but the VAST majority of forums and troubleshooting resources associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu – plus a windows VirtualBox, plus a laptop running windows that I use for traditional windows type stuff. Dual booting is another option, but seamlessly moving between the environments is easier.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:

http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf

(8) CFL Number Matters

If you are running a transient case, the Courant-Friedrichs-Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When a transient simulation of mine crashes, if I know my mesh is OK, I decrease the time step by a factor of 2. More often than not, this solves the problem.
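
As a quick illustration of the constraint (a sketch with assumed numbers; OpenFOAM reports the actual Courant number at every time step):

# Advective CFL condition: Co = u * dt / dx <= Co_max
u_max = 50.0      # fastest expected velocity in the domain (m/s)
dx_min = 1e-3     # smallest cell size in the flow direction (m)
Co_max = 0.5      # a conservative target

dt = Co_max * dx_min / u_max
print('suggested time step: %g s' % dt)   # 1e-05 s here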

For large time steps, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam

For the record, this point falls under item (1), Understanding CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

https://wiki.openfoam.com/%223_weeks%22_series

If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take 3 weeks. This touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude from the get-go that they should be using a different software. They think somehow Open-Source means that it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open source package. The number of OpenFOAM citations has grown consistently every year (https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly, and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.

Summary

Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and you are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
  22 Apr, 2019
Airfoil Mesh Generated with curiosityFluidsAirfoilMesher.py

Here I will present something I’ve been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (who knows if you are), you simulate a lot of airfoils. In my case it’s partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of pain. Especially if you are starting from scratch.

The main ways that I have meshed airfoils to date have been:

(a) Mesh it in a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise – oh how I miss it.

But getting the mesh to look good was always sort of tedious. So I attempted to come up with a python script that takes the airfoil data file, minimal inputs and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(d) be mostly automatic (few user inputs)
(e) have good mesh quality – pass all checkMesh tests
(f) quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(g) be able to do both closed and open trailing edges
(h) be able to handle most airfoils (up to high cambers)
(i) automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) through (f). Presently, it can only handle airfoils with a closed trailing edge. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!

Hopefully, this is useful to some of you out there!

Download

You can download the script here:

https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.

Instructions

(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh

PS
You need to run this with python 3, and you need to have numpy installed

Inputs

The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil dat file should have a chordlength of 1. This variable allows you to scale the domain to a different size.

airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge

TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading, and it can help improve mesh uniformity.

trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.

Examples

12% Joukowski Airfoil

Inputs:

With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in paraView:

Clark-y Airfoil

The clark-y has some camber, so I thought it would be a logical next test to the previous symmetric one. The inputs I used are basically the same as the previous airfoil:


With these inputs, the result looks like this:


Mesh Quality:


Visualizing the mesh quality:

MH60 – Flying Wing Airfoil

Here is an example of a flying wing airfoil (tested since the trailing edge is tilted upwards).

Inputs:


Again, these are basically the same as the others. I have found that with these settings I get pretty consistently good results. When you change the MaxCellSize, firstLayerHeight, and gradings, some modification may be required. However, if you just halve the MaxCellSize and halve the firstLayerHeight, you “should” get a similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality

Summary

Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.

The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!

Comments and bug reporting encouraged!

DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM®  and OpenCFD®  trademarks.

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
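
If you would rather have the same results in script form, the textbook normal-shock relations for a calorically perfect gas are easy to code up (a sketch of the standard relations, not the calculator’s own source):

def normal_shock(M1, gamma=1.4):
    # Property ratios across a normal shock for a calorically perfect gas
    if M1 <= 1.0:
        raise ValueError('upstream Mach number must be supersonic')
    M2 = ((1 + 0.5 * (gamma - 1) * M1**2) /
          (gamma * M1**2 - 0.5 * (gamma - 1))) ** 0.5
    p2_p1 = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)            # p2/p1
    r2_r1 = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)      # rho2/rho1
    T2_T1 = p2_p1 / r2_r1                                        # T2/T1
    return M2, p2_p1, r2_r1, T2_T1

print(normal_shock(2.0))   # M2 ~ 0.577, p2/p1 = 4.5, rho2/rho1 ~ 2.67, T2/T1 ~ 1.69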

If you found this useful and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros, and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization, and more. Visit STF Solutions at www.stfsol.com for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or their suitability or outcome for any given purpose.

Hanley Innovations

► Accurate Aircraft Performance Predictions using Stallion 3D
  26 Feb, 2020


Stallion 3D uses your CAD design to simulate the performance of your aircraft.  This enables you to verify your design and compute quantities such as cruise speed, power required and range at a given cruise altitude. Stallion 3D is used to optimize the design before moving forward with building and testing prototypes.

The table below shows the results of Stallion 3D around the cruise angles of attack of the Cessna 402c aircraft.  The CAD design can be obtained from the OpenVSP hangar.


The results were obtained by simulating 5 angles of attack in Stallion 3D on an ordinary laptop computer running MS Windows 10. Given the aircraft geometry and flight conditions, Stallion 3D computed the CL, CD, L/D and other aerodynamic quantities. With these accurate aerodynamic results, preliminary performance data such as cruise speed, power, range and endurance can be obtained.

Lift Coefficient versus Angle of Attack computed with Stallion 3D


Lift to Drag Ratio versus True Airspeed at 10,000 feet


Power Required versus True Airspeed at 10,000 feet

The Stallion 3D results show good agreement with the published data for the Cessna 402. For example, the cruise speed of the aircraft at 10,000 feet is around 140 knots. This coincides with the speed at the maximum L/D (best range) shown in the graph and table above.

 More information about Stallion 3D can be found at the following link.
http://www.hanleyinnovations.com/stallion3d.html

About Hanley Innovations
Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit http://www.hanleyinnovations.com


► 5 Tips For Excellent Aerodynamic Analysis and Design
    8 Feb, 2020
Stallion 3D analysis of Uber Elevate eCRM-100 model

Being the best aerodynamics engineer requires meticulous planning and execution.  Here are 5 steps you can follow to start your journey to becoming one of the best aerodynamicists.

1.  Airfoil analysis (VisualFoil) - the wing will not be better than the airfoil. Start with the best airfoil for the design.

2.  Wing analysis (3Dfoil) - know the benefits/limits of taper, geometric & aerodynamic twist, dihedral angles, sweep, induced drag and aspect ratio.

3. Stability analysis (3Dfoil) - longitudinal & lateral static & dynamic stability analysis.  If the airplane is not stable, it might not fly (well).

4. High Lift (MultiElement Airfoils) - airfoil arrangements can do wonders for takeoff, climb, cruise and landing.

5. Analyze the whole arrangement (Stallion 3D) - this is the best information you will get until you flight test the design.

About Hanley Innovations
Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit http://www.hanleyinnovations.com

► Accurate Aerodynamics with Stallion 3D
  17 Aug, 2019

Stallion 3D is an extremely versatile tool for 3D aerodynamics simulations.  The software solves the 3D compressible Navier-Stokes equations using novel algorithms for grid generation, flow solutions and turbulence modeling. 


The proprietary grid generation and immersed boundary methods find objects arbitrarily placed in the flow field and then automatically place an accurate grid around them without user intervention. 


Stallion 3D algorithms are fine-tuned to analyze inviscid flow with minimal losses. The above figure shows the surface pressure of the BD-5 aircraft (obtained from the OpenVSP hangar) using the compressible Euler algorithm.


Stallion 3D solves the Reynolds Averaged Navier-Stokes (RANS) equations using a proprietary implementation of the k-epsilon turbulence model in conjunction with an accurate wall function approach.


Stallion 3D can be used to solve problems in aerodynamics about complex geometries in subsonic, transonic and supersonic flows.  The software computes and displays the lift, drag and moments for complex geometries in the STL file format.  Actuator discs (up to 100) can be added to simulate prop wash for propeller and VTOL/eVTOL aircraft analysis.



Stallion 3D is a versatile and easy-to-use software package for aerodynamic analysis.  It can be used for computing performance and stability (both static and dynamic) of aerial vehicles including drones, eVTOL aircraft, light airplanes and dragons (above graphics via Thingiverse).

More information about Stallion 3D can be found at:
http://www.hanleyinnovations.com/stallion3d.html



► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse


Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only an STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler solver for fast aerodynamic analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator discs (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperature
  • Produces 2D plots of Cp and other quantities along constant-coordinate lines on the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit http://www.hanleyinnovations.com/stallion3d.html or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017


Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.




More information about the software can be found at the following url:
http://www.hanleyinnovations.com/airfoildigitizerhelp.html

Thanks for reading.


► Your In-House CFD Capability
  15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company at the push of a button?  Stallion 3D gives you this very power on your MS Windows laptop or desktop computer. The software provides accurate CL, CD, & CM numbers directly from CAD geometries without the need for user grid generation or costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC.  It utilizes the power that is hidden in your personal computer (64-bit & multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, or RANS, equations) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of a turbulent flow field.

Unlike other CFD packages that require you to purchase separate grid generation software (and spend days generating a grid), Stallion 3D includes automatic grid generation.  Results are often obtained within a few hours of opening the software.

Do you need to analyze upwind and downwind sails?  Do you need data for wings and ship stabilizers at angles of 10, 40, 80, 120 degrees and beyond? Do you need accurate lift, drag & temperature predictions in subsonic, transonic and supersonic flows? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

Tutorials, videos and more information about Stallion 3D version 4.0 can be found at:
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

CFD and others... top

► Facts, Myths and Alternative Facts at an Important Juncture
  21 Jun, 2020
We live in an extraordinary time in modern human history. A global pandemic did the unthinkable to billions of people: a nearly total lock-down for months.  Like many universities around the world, KU closed its doors to students in early March of 2020, and all courses were offered online.

Millions watched in horror when George Floyd was murdered, and when a 75-year-old man was shoved to the ground and started bleeding from the back of his skull...

Meanwhile, Trump and his allies routinely ignore facts, fabricate alternative facts, and advocate often-debunked conspiracy theories to push his agenda. The political system designed by the founding fathers is assaulted from all directions. The rule of law and the free press are attacked on a daily basis. One often wonders how we managed to get to this point, and if the political system can survive the constant sabotage...It appears the struggle between facts, myths and alternative facts hangs in the balance.

In any scientific discipline, conclusions are drawn, and decisions are made based on verifiable facts. Of course, we are humans, and honest mistakes can be made. There are others, who push alternative facts or misinformation with ulterior motives. Unfortunately, mistaken conclusions and wrong beliefs are sometimes followed widely and become accepted myths. Fortunately, we can always use verifiable scientific facts to debunk them.

There have been many myths in CFD, and quite a few have been rebutted. Some have continued to persist. I'd like to refute several in this blog. I understand some of the topics can be very controversial, but I welcome fact-based debate.

Myth No. 1 - My LES/DNS solution has no numerical dissipation because a central-difference scheme is used.

A central finite difference scheme is indeed free of numerical dissipation in space. However, the time integration scheme inevitably introduces both numerical dissipation and dispersion. Since DNS/LES is unsteady in nature, the solution is not free of numerical dissipation.  
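This is straightforward to verify. The sketch below (a generic illustration, not tied to any particular solver) applies the classical three-stage Runge-Kutta amplification factor to the purely imaginary eigenvalues of a central-difference advection operator; |G| < 1 means the time scheme damps every resolved wave.

```python
import numpy as np

# Semi-discrete central-difference advection has purely imaginary
# eigenvalues z = i*theta (no spatial dissipation). Apply classical RK3:
theta = np.linspace(0.0, 1.5, 7)     # theta = c * k~ * dt (CFL-like parameter)
z = 1j * theta
G = 1 + z + z**2 / 2 + z**3 / 6      # RK3 amplification factor
for t, g in zip(theta, np.abs(G)):
    print(f"theta = {t:.2f}  |G| = {g:.6f}")
# |G| < 1 for every theta > 0, so the fully discrete scheme is dissipative
# even though the spatial operator is not.
```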

Myth No. 2 - You should use non-dissipative schemes in LES/DNS because upwind schemes have too much numerical dissipation.

It sounds reasonable, but it is far from true. We all agree that fully upwind schemes (the stencil shown in Figure 1) are bad. Upwind-biased schemes, on the other hand, are not necessarily bad at all. In fact, in a numerical test with the Burgers equation [1], the upwind-biased scheme performed better than the central difference scheme because of its smaller dispersion error. In addition, the numerical dissipation in the upwind-biased scheme makes the simulation more robust since under-resolved high-frequency waves are naturally damped.

Figure 1. Various discretization stencils for the red point
The Riemann solver used in the DG/FR/CPR scheme also introduces a small amount of dissipation. However, because of its small dispersion error, it outperforms the central difference and upwind-biased schemes. This study shows that dissipation and dispersion characteristics are equally important. Higher order schemes clearly perform better than a low order non-dissipative central difference scheme.

Myth No. 3 - The Smagorinsky model is a physics-based sub-grid-scale (SGS) model.

There have been numerous studies based on experimental or DNS data which show that the SGS stress produced with the Smagorinsky model does not correlate with the true SGS stress. The role of the model is then to add numerical dissipation to stabilize the simulation. The model coefficient is usually determined by matching a certain turbulent energy spectrum. This suggests that the model is purely numerical in nature, calibrated for certain numerical schemes using a particular turbulent energy spectrum. The calibration is not universal, and many simulations have produced worse results with the model.
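For reference, here is what the static Smagorinsky closure computes, as a minimal sketch of the standard formula nu_t = (Cs*Delta)^2 |S|; the coefficient and inputs below are illustrative, not values from any particular study.

```python
import numpy as np

# Static Smagorinsky closure: nu_t = (Cs*Delta)^2 * |S|, |S| = sqrt(2 Sij Sij).
# Cs ~ 0.1-0.2 is the calibrated coefficient discussed above.
def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    """SGS eddy viscosity from a 3x3 resolved velocity-gradient tensor."""
    S = 0.5 * (grad_u + grad_u.T)            # resolved rate-of-strain tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))     # |S| = sqrt(2 Sij Sij)
    return (cs * delta) ** 2 * S_mag

grad_u = np.array([[0.0, 100.0, 0.0],        # simple shear: du/dy = 100 1/s
                   [0.0,   0.0, 0.0],
                   [0.0,   0.0, 0.0]])
print(smagorinsky_nu_t(grad_u, delta=1e-3))  # ~2.9e-6 m^2/s for a 1 mm cell
```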

► What Happens When You Run a LES on a RANS Mesh?
  27 Dec, 2019

Surely, you will get garbage because there is no way your LES will have any chance of resolving the turbulent boundary layer. As a result, your skin friction will be way off. Therefore, your drag and lift will be a total disaster.

To actually demonstrate this point of view, we recently embarked upon a numerical experiment to run an implicit large eddy simulation (ILES) of the NASA CRM high-lift configuration from the 3rd AIAA High-Lift Prediction Workshop. The flow conditions are: Mach = 0.2, Reynolds number = 3.26 million based on the mean aerodynamic chord, and the angle of attack = 16 degrees.

A quadratic (Q2) mesh was generated by Dr. Steve Karman of Pointwise, and is shown in Figure 1.

 Figure 1. Quadratic mesh for the NASA CRM high-lift configuration (generated by Pointwise)

The mesh has roughly 2.2 million mixed elements, and is highly clustered near the wall with an average equivalent y+ value smaller than one. A p-refinement study was conducted to assess the mesh sensitivity using our high-order LES tool based on the FR/CPR method, hpMusic. Simulations were performed with solution polynomial degrees of p = 1, 2 and 3, corresponding to 2nd, 3rd and 4th orders of accuracy, respectively. No wall-model was used. Needless to say, the higher order simulations captured finer turbulence scales, as shown in Figure 2, which displays the iso-surfaces of the Q-criterion colored by the Mach number.

p = 1

p = 2

p = 3
Figure 2. Iso-surfaces of the Q-criterion colored by the Mach number

Clearly the flow is mostly laminar on the pressure side, and transitional/turbulent on the suction side of the main wing and the flap. Although the p = 1 simulation captured the fewest scales, it still correctly identified the laminar and turbulent regions.

The drag and lift coefficients from the present p-refinement study are compared with experimental data from NASA in Table I. Although the 2nd order results (p = 1) are quite different from those of the higher orders, the 3rd and 4th order results are very close, demonstrating very good p-convergence in both the lift and drag coefficients. The lift agrees better with the experimental data than the drag, bearing in mind that the experiment includes wind tunnel wall effects and other small instruments which are not present in the computational model.

Table I. Comparison of lift and drag coefficients with experimental data

              CL      CD
p = 1         2.020   0.293
p = 2         2.411   0.282
p = 3         2.413   0.283
Experiment    2.479   0.252


This exercise seems to contradict the common sense logic stated in the beginning of this blog. So what happened? The answer is that in this high-lift configuration, the dominant force is due to pressure, rather than friction. In fact, 98.65% of the drag and 99.98% of the lift are due to the pressure force. For such flow problems, running a LES on a RANS mesh (with sufficient accuracy) may produce reasonable predictions in drag and lift. More studies are needed to draw any definite conclusion. We would like to hear from you if you have done something similar.

This study will be presented in the forthcoming AIAA SciTech conference, to be held on January 6th to 10th, 2020 in Orlando, Florida. 


► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but dissipative for non-resolvable high wave numbers. In this way, the simulation will resolve a wide turbulent spectrum, while damping out the non-resolvable small eddies to prevent energy pile-up, which can drive the simulation divergent.

We want to emphasize the equal importance of both numerical dissipation and dispersion, which can be generated from both the space and time discretizations. It is well-known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., explicit Runge-Kutta schemes.     

We recently analysed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, with the same time integration approach (a Runge-Kutta scheme) and the same time step.  The FCD schemes have an 8th-order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees-of-freedom (DOFs) in Figure 1.  The initial condition is a Gaussian profile, and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the differences.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error from the central FD scheme caused it to miss the peak, and also generate large errors elsewhere.
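The dispersion/dissipation behavior described above can be reproduced with a few lines of modified-wavenumber analysis. The sketch below assumes a generic 6th-order central stencil and a 5th-order upwind-biased stencil (not necessarily the exact schemes used in the study): the real part of k~ measures dispersion, and a negative imaginary part measures dissipation.

```python
import numpy as np

def fd_weights(offsets):
    """First-derivative FD weights on a unit-spaced stencil (Taylor match)."""
    s = np.asarray(offsets, dtype=float)
    A = np.vander(s, increasing=True).T   # row m enforces sum_j w_j * s_j^m
    b = np.zeros(len(s)); b[1] = 1.0      # match d/dx, kill the other moments
    return np.linalg.solve(A, b)

def modified_wavenumber(offsets, theta):
    """k~(theta) = -i * sum_j w_j * exp(i*j*theta) for the given stencil."""
    w = fd_weights(offsets)
    return -1j * sum(wj * np.exp(1j * j * theta) for j, wj in zip(offsets, w))

theta = np.linspace(1e-3, np.pi, 5)
kc = modified_wavenumber([-3, -2, -1, 0, 1, 2, 3], theta)  # 6th-order central
ku = modified_wavenumber([-3, -2, -1, 0, 1, 2], theta)     # 5th-order upwind-biased
for t, c, u in zip(theta, kc, ku):
    print(f"theta={t:4.2f}  central: {c.real:6.3f}{c.imag:+6.3f}i"
          f"  biased: {u.real:6.3f}{u.imag:+6.3f}i")
# Central: zero imaginary part (no spatial dissipation) but Re(k~) falls away
# from theta (dispersion). Upwind-biased: Im(k~) < 0 damps exactly the poorly
# resolved high wavenumbers.
```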

Finally, simulation results with the viscous Burgers' equation are shown in Figure 2, which compares the energy spectrum computed with various schemes against that of the direct numerical simulation (DNS).

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme, with a significant high-wavenumber energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high wavenumbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. It is obvious that the upwind-biased FD scheme outperformed the central FD scheme since it resolved the same range of wavenumbers without the energy pile-up.


► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease in mesh generation, and load balancing on parallel architectures. DG and related methods such as SD and FR/CPR have received much attention because of their geometric flexibility and scalability. They have matured to become quite robust for a wide range of applications. 

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes at a transonic condition with a Reynolds number of more than 1M. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs), and costs about 1/3 the CPU time of the 2nd order simulation using the commercial solver, which has 28.7M DOFs. Furthermore, the 3rd order simulation is much more accurate, as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster to achieve a similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try for your problems, and you may be surprised. Academic and trial uses are completely free. Just visit hocfd.com to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
  17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to compute turbulence: 1) the Reynolds averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the Direct Numerical Simulations (DNS) approach, in which all scales are resolved; 3) the Large Eddy Simulation (LES) approach, in which large scales are computed, while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved wide-spread success in engineering design, some applications call for LES, e.g., flow at high angles of attack. The spatial filtering of a non-linear PDE results in a SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress to the rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a rate that is physically correct. Later an improved version called the dynamic Smagorinsky model was developed by Germano et al., and demonstrated much better results.

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme does not contain numerical dissipation.  However, time integration can introduce dissipation. For example, a 2nd order central difference scheme is linearly stable with the SSP RK3 scheme (subject to a CFL condition), and the combination does contain numerical dissipation. When this scheme is used to perform a LES, the simulation will blow up without a SGS model because of a lack of dissipation for eddies at high wave numbers. It is easy to conclude that the successful LES is because the SGS stress is properly modeled. A recent study with the Burgers equation strongly disputes this conclusion. It was shown that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. Therefore, the role of the SGS model, in the above scenario, was to stabilize the simulation by adding numerical dissipation.

For numerical methods which have natural dissipation at high wave numbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, the SGS model can damage the solution quality because this extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature to support the use of implicit LES (ILES), where the SGS stress simply vanishes. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
    3 Jan, 2017
2016 is undoubtedly the most extraordinary year for small-odds events. Take sports, for example:
  • Leicester won the Premier League in England defying odds of 5000 to 1
  • Cubs won World Series after 108 years waiting
In politics, I do not believe many people truly believed Britain would exit the EU, and Trump would become the next US president.

From a personal level, I also experienced an equally extraordinary event: the coup in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016 in the historic city of Istanbul. A terror attack on the Istanbul International airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and decided to go to Istanbul to attend and support the conference. 

Given the extraordinary circumstances, the conference organizers did a fine job in pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions though three sessions were planned originally. We really enjoyed Istanbul with the beautiful natural attractions and friendly people. 

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was controlled by the rebels. However, the Turkish President managed to Facetime a private TV station, essentially turning around the event. Soon after, many people went to the bridge, the squares, and overpowered the rebels with bare fists.


A Tank outside my taxi



A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA banned all direct flights from Turkey. I was lucky enough to find a new flight, with a stop in Zurich...

In 2016, I lost a very good friend, and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific injury from tennis in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5 2016 to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion


To close, I wish all of you a very happy 2017!

  1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.



Convergent Science Blog top

► Models On Top of Models: Thickened Flames in CONVERGE
    2 Jul, 2020

Any CONVERGE user knows that our solver includes a lot of physical models. A lot of physical models! How many combinations exist? How many different ways can you set up a simulation? That’s harder to answer than you might think. There might be N turbulence models and M combustion models, but the total set of combinations isn’t N*M.

Why not? In some cases, our developers haven't implemented a given combination yet! The ECFM and ECFM3Z combustion models, for example, could not be combined with a large eddy simulation (LES) turbulence model until CONVERGE version 3.0.11. We're adding more features all the time. One interesting example is the thickened flame model (TFM).

The name is descriptive, of course: TFM is designed to thicken the flame. If you’re not a combustion researcher, this notion may not be intuitive. A real flame is thin (in an internal combustion engine environment, tens or hundreds of microns). Why would we want to design a model that intentionally deviates from this reality? As is often the case with physical modeling, the answer lies in what we’re trying to study.

CONVERGE is often used to study the engineering operability of a premixed internal combustion or gas turbine engine. This requires accurate simulation of macroscopic combustion dynamics (flame properties), including the laminar flamespeed. A large eddy simulation (LES) might use cells on the order of 0.1 mm.

The problem may now be clear. The flame is much too thin to resolve on the grid we want to use. In fact, a detailed chemical kinetics solver like SAGE requires five or more cells across the flame in order to reproduce the correct laminar flamespeed. An under-resolved flame results in an underprediction of laminar flamespeed. Of course, we could simply decrease the cell size by an order of magnitude, but that makes for an impractical engineering calculation.

The thickened flame model is designed to solve this problem. The basic idea of Colin et al. [1] was to simulate a flame that is thicker than the physical one but reproduces the same laminar flamespeed. From simple scaling analysis, this can be achieved by increasing the thermal and species diffusivities by a factor of F while reducing the reaction rate by the same factor. Because the flame thickening decreases the wrinkling of the flame front, and thus its surface area, an efficiency factor E is introduced so that the correct turbulent flamespeed is recovered.
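For readers who want the scaling argument spelled out, standard laminar flame theory gives the flamespeed and flame thickness in terms of the diffusivity D and reaction rate (this is our summary of the argument, in LaTeX notation):

```latex
s_L \propto \sqrt{D\,\dot{\omega}}, \qquad \delta_L \propto \frac{D}{s_L}
\quad\Longrightarrow\quad
s_L \to \sqrt{(FD)\,(\dot{\omega}/F)} = s_L, \qquad
\delta_L \to \frac{FD}{s_L} = F\,\delta_L .
```

So multiplying D by F and dividing the reaction rate by F thickens the flame by F while leaving the laminar flamespeed unchanged.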

The combination of these scaling factors allows CONVERGE to recover the correct flamespeed without actually resolving the flame itself. CONVERGE also calculates a flame sensor function so that these scaling factors are applied only at the flame front. By using TFM with SAGE detailed chemistry, a premixed combustion engineering simulation with LES becomes practical.

Hasti et al. [2] evaluated one such case using CONVERGE with LES, SAGE, and TFM. This work examined the Volvo bluff-body augmentor test rig, shown below, which has been subjected to extensive study. At the conditions of interest, the flame thickness is estimated to be about 1 mm, and so SAGE without TFM should require a grid not coarser than 0.2 mm to accurately simulate combustion.


Figure 1: Volvo bluff-body augmentor test rig [3].

With TFM, Hasti et al. show that CONVERGE is able to generate a grid-converged result at a minimum grid spacing of 0.3125 mm. We might expect such a calculation to take only about 40% as many core hours as a simulation with a minimum grid spacing of 0.25 mm, since for a CFL-limited calculation the cost scales roughly with the fourth power of the grid spacing: (0.25/0.3125)^4 ≈ 0.41.

Figure 2: Representative instantaneous temperature field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 3: Representative instantaneous velocity magnitude field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 4: Representative instantaneous vorticity magnitude field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 5: Transverse mean temperature profiles at x/D = 3.75, 8.75, and 13.75.
Base grid sizes of 2 mm, 2.5 mm, and 3 mm correspond to minimum cell sizes of 0.25 mm, 0.3125 mm, and 0.375 mm, respectively.

Understanding the topic of study, the underlying physics, and the way those physics are affected by our choice of physical models is critical to performing accurate simulations. If you want to combine the power of the SAGE detailed chemical kinetics solver with the transient behavior of an LES turbulence model to understand the behavior of a practical engine–and to do so without bankrupting your IT department–TFM is the enabling technology.

Want to learn more about thickened flame modeling in CONVERGE? Check out these TFM case studies from recent CONVERGE User Conferences (1, 2, 3) and keep an eye out for future Premixed Combustion Modeling advanced training sessions.

References
[1] Colin, O., Ducros, F., Veynante, D., and Poinsot, T., “A thickened flame model for large eddy simulations of turbulent premixed combustion,” Physics of Fluids, 12(1843), 2000. DOI: 10.1063/1.870436
[2] Hasti, V.R., Liu, S., Kumar, G., and Gore, J.P., “Comparison of Premixed Flamelet Generated Manifold Model and Thickened Flame Model for Bluff Body Stabilized Turbulent Premixed Flame,” 2018 AIAA Aerospace Sciences Meeting, AIAA 2018-0150, Kissimmee, Florida, January 8-12, 2018. DOI: 10.2514/6.2018-0150
[3] Sjunnesson, A., Henrikson, P., and Lofstrom, C., “CARS measurements and visualizations of reacting flows in a bluff body stabilized flame,” 28th Joint Propulsion Conference and Exhibit, AIAA 92-3650, Nashville, Tennessee, July 6-8, 1992. DOI: 10.2514/6.1992-3650

► The Search for Soot-free Diesel: Modeling Ducted Fuel Injection With CONVERGE
  26 Mar, 2020

At the upcoming CONVERGE User Conference, which will be held online from March 31–April 1, Andrea Piano will present results from experimental and numerical studies of the effects of ducted fuel injection on fuel spray characteristics. Dr. Piano is a Research Assistant in the e3 group, coordinated by Prof. Federico Millo at Politecnico di Torino, and these are the first results to be reported from their ongoing collaboration with Prof. Lucio Postrioti at Università degli Studi di Perugia, Andrea Bianco at Powertech Engineering, and Francesco Pesce and Alberto Vassallo at General Motors Global Propulsion Systems. This work is a great example of how CONVERGE can be used in tandem with experimental methods to advance research at the cutting edge of engine technology. Keep reading for a preview of the results that Dr. Piano will discuss in greater detail in his online presentation.

The idea behind ducted fuel injection (DFI), originally conceived by Charles Mueller at Sandia National Laboratories, is to suppress soot formation in diesel engines by allowing the fuel to mix more thoroughly with air before it ignites [1]. Soot forms when a fuel doesn’t burn completely, which happens when the fuel-to-air ratio is too high. In DFI, a small tube, or duct, is placed near the nozzle of the fuel injector and directed along the axis of the fuel stream toward the autoignition zone. The fuel spray that travels through this duct is better mixed than it would be in a ductless configuration. Experiments at Sandia have shown that DFI can reduce soot formation by as much as 95%, demonstrating the enormous potential of this technology for curtailing harmful emissions from diesel engines.

Introduction to ducted fuel injection from Sandia National Laboratories.

While the Sandia researchers have focused on heavy-duty diesel applications, Dr. Piano and his collaborators are targeting smaller engines, such as those found in passenger cars and light-duty trucks. To understand how the fuel spray evolves in the presence of a duct, they first performed imaging and phase Doppler anemometry analyses of non-reacting sprays in a constant-volume test vessel. Figure 1 shows a sample of the experimental results. The video on the left corresponds to a free spray configuration with no duct, while the video on the right corresponds to a ducted configuration. Observe how the dark liquid breaks up and evaporates more quickly in the ducted configuration—this is the enhanced mixing that occurs in DFI.

Figure 1: Videos from experiments on non-reacting sprays in a free spray configuration (left) and a ducted configuration (right). Images were obtained from a constant-volume vessel at a rail pressure of 1200 bar, vessel temperature of 500°C, and vessel pressure of 20 bar.

Their next step was to develop a CFD model of the fuel spray that could be calibrated against the experimental results. Dr. Piano and his colleagues reproduced the geometry of the experimental setup in a CONVERGE environment, using physical models available in CONVERGE to simulate the processes of spray breakup, evaporation, and boiling, as well as the interactions between the spray and the duct. With fixed embedding and Adaptive Mesh Refinement, they were able to increase the grid resolution in the vicinity of the spray and the duct without a significant increase in computational cost. They simulated the spray penetration for both the free spray and the ducted configuration over a range of operating conditions and validated those results against the experimental data.

With a calibrated spray model in hand, the researchers were then able to run predictive simulations of DFI for reacting fuel sprays. They combined their spray model with the SAGE detailed chemical kinetics solver for combustion modeling, along with the Particulate Mimic model of soot formation. They ran simulations at different rail pressures and vessel temperatures to see how DFI would affect the amount of soot mass produced under engine-like operating conditions. Figures 2 and 3 show examples of the simulation results for a rail pressure of 1200 bar and a vessel temperature of 1000 K. Consistent with the findings of Mueller et al. [1], these results show a dramatic reduction in the mass of soot produced during combustion in the ducted configuration as compared to the free spray configuration.

Figure 2: The plots on the right side show the heat release rate and soot mass produced in simulations of reacting sprays (red lines correspond to the free spray configuration and blue lines correspond to the ducted configuration). The dashed vertical lines indicate the simulation time at which the two contour plots were generated, with the free spray configuration on the left and the ducted configuration in the center. Contours are colored by soot mass, with regions of high soot mass shown in red.
Figure 3: The plots on the right side show the heat release rate and soot mass produced in simulations of reacting sprays (red lines correspond to the free spray configuration and blue lines correspond to the ducted configuration). The dashed vertical lines indicate the simulation time at which the two contour plots were generated, with the free spray configuration on the left and the ducted configuration in the center. Contours are colored by soot mass, with regions of high soot mass shown in red.

While these early results are promising, Dr. Piano and his collaborators are just getting started. They will continue using CONVERGE to investigate phenomena such as the duct thermal behavior and to explore the effects of different geometries and operating conditions, with the long-term goal of incorporating DFI into the design of a real engine. If you are interested in learning more about this work, be sure to sign up for the CONVERGE User Conference today!

References

[1] Mueller, C.J., Nilsen, C.W., Ruth, D.J., Gehmlich, R.K., Pickett, L.M., and Skeen, S.A., “Ducted fuel injection: A new approach for lowering soot emissions from direct-injection engines,” Applied Energy, 204, 206-220, 2017. DOI: 10.1016/j.apenergy.2017.07.001

► An Evening With the Experts: Scaling CFD With High-Performance Computing
  25 Feb, 2020
Listen to the full audio of the panel discussion.

As computing technology continues to advance rapidly, running simulations on hundreds and even thousands of cores is becoming standard practice in the CFD industry. Likewise, CFD software is continually evolving to keep pace with the advances in hardware. For example, CONVERGE 3.0, the latest major release of our software, is specifically designed to scale well in parallel on modern high-performance computing (HPC) systems. It’s clear that HPC is the future of CFD, so how does this shift affect those of us running simulations and how can we make the most of the increased availability of computational resources? At the 2019 CONVERGE User Conference–North America, we assembled a panel of engineers from industry and government to share their expertise.

In the panel discussion, which you can listen to above, you’ll learn about the computing resources available on the cloud and at the U.S. national laboratories and how to take advantage of them. The panelists discuss the types of novel, one-of-a-kind studies that HPC enables and how to handle post-processing data from massive cases run across many cores. Additionally, you’ll get a look at where post-processing is headed to manage the ever-increasing amounts of data generated from large-scale simulations. Listen to the full panel discussion above!

Panelists

Alan Klug, Vice President of Customer Development, Tecplot

Sibendu Som, Manager of the Computational Multi-Physics Section, Argonne National Laboratory

Joris Poort, CEO and Founder, Rescale

Kelly Senecal, Co-Founder and Owner, Convergent Science

Moderator

Tiffany Cook, Partner & Public Relations Manager, Convergent Science

► 2019: A (Load) Balanced End to a Successful Decade
  19 Dec, 2019

2019 proved to be an exciting and eventful year for Convergent Science. We released the highly anticipated major rewrite of our software, CONVERGE 3.0. Our United States, European, and Indian offices all saw significant increases in employee count. We have also continued to forge ahead in new application areas, strengthening our presence in the pump, compressor, biomedical, aerospace, and aftertreatment markets, and breaking into the oil and gas industry. Of course, we remain dedicated to simulating internal combustion engines and developing new tools and resources for the automotive community. In particular, we are expanding our repertoire to encompass batteries and electric motors in addition to conventional engines. Our team at Convergent Science continues to be enthusiastic about advancing simulation capabilities and providing unmatched customer support to empower our users to tackle hard CFD problems.

CONVERGE 3.0

As I mentioned above, this year we released a major new version of our software, CONVERGE 3.0. We have frequently discussed 3.0 in the past few months, including in my recent blog post, so I’ll keep this brief. We set out to make our code more flexible, enable massive parallel scaling, and expand CONVERGE’s capabilities. The results have been remarkable. CONVERGE 3.0 scales with near-ideal efficiencies on thousands of cores, and the addition of inlaid meshes, new physical models, and enhanced chemistry capabilities have opened the door to new applications. Our team invested a lot of effort into making 3.0 a reality, and we’re very proud of what we’ve accomplished. Of course, now that CONVERGE 3.0 has been released, we can all start eagerly anticipating our next major release, CONVERGE 3.1.

Computational Chemistry Consortium

2019 was a big year for the Computational Chemistry Consortium (C3). In July, the first annual face-to-face meeting took place at the Convergent Science World Headquarters in Madison, Wisconsin. Members of industry and researchers from the National University of Ireland Galway, Lawrence Livermore National Laboratory, RWTH Aachen University, and Politecnico di Milano came together to discuss the work done during the first year of the consortium and establish future research paths. The consortium is working on the C3 mechanism, a gasoline and diesel surrogate mechanism that includes NOx and PAH chemistry to model emissions. The first version of the mechanism was released this fall for use by C3 members, and the mechanism will be refined over the coming years. Our goal is to create the most accurate and consistent reaction mechanism for automotive fuels. Stay tuned for future updates!

Third Annual European User Conference

Barcelona played host to this year’s European CONVERGE User Conference. CONVERGE users from across Europe gathered to share their recent work in CFD on topics including turbulent jet ignition, machine learning for design optimization, urea thermolysis, ammonia combustion in SI engines, and gas turbines. The conference also featured some exciting networking events—we spent an evening at the beautiful and historic Poble Espanyol and organized a kart race that pitted attendees against each other in a friendly competition. 

Inaugural CONVERGE User Conference–India

This year we hosted our first-ever CONVERGE User Conference–India in Bangalore and Pune. The conference consisted of two events, each covering different application areas. The event in Bangalore focused on applications such as gas turbines, fluid-structure interaction, and rotating machinery. In Pune, the emphasis was on IC engines and aftertreatment modeling. We saw presentations from both companies and universities, including General Electric, Cummins, Caterpillar, and the Indian Institutes of Technology Bombay, Kanpur, and Madras. We had a great turnout for the conference, with more than 200 attendees across the two events.

CONVERGE in the Big Easy

The sixth annual CONVERGE User Conference–North America took place in New Orleans, Louisiana. Attendees came from industry, academic institutions, and national laboratories in the U.S. and around the globe. The technical presentations covered a wide variety of topics, including flame spray pyrolysis, rotating detonation engines, machine learning, pre-chamber ignition, blood pumps, and aerodynamic characterization of unmanned aerial systems. This year, we hosted a panel of CFD and HPC experts to discuss scaling CFD across thousands of processors; how to take advantage of clusters, supercomputers, and the cloud to run large-scale simulations; and how to post-process large datasets. For networking events, we took a dinner cruise down the Mississippi River and encouraged our guests to explore the vibrant city of New Orleans.

KAUST Workshop

In 2019, we hosted the First CONVERGE Training Workshop and User Meeting at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Attendees came from KAUST and other Saudi Arabian universities and companies for two days of keynote presentations, hands-on CONVERGE tutorials, and networking opportunities. The workshop focused on leveraging CONVERGE for a variety of engineering applications, and running CONVERGE on local workstations, clusters, and Shaheen II, a world-class supercomputer located at KAUST. 

Best Use of HPC in Automotive

We and our colleagues at Argonne National Laboratory and Aramco Research Center – Detroit received this year’s 2019 HPCwire Editors’ Choice Award in the category of Best Use of HPC in Automotive. We were incredibly honored to receive this award for our work using HPC and AI to quickly optimize the design of a clean, highly efficient gasoline compression ignition engine. Using CONVERGE, we tested thousands of engine design variations in parallel to improve fuel efficiency and reduce emissions. We ran the simulations in days, rather than months, on an IBM Blue Gene/Q supercomputer located at Argonne National Laboratory and employed machine learning to further reduce design time. After running the simulations, the best-performing engine design was built in the real world. The engine demonstrated a reduction in CO2 of up to 5%. Our work shows that pairing HPC and AI to rapidly optimize engine design has the potential to significantly advance clean technology for heavy-duty transportation.

Sibendu Som (Argonne National Laboratory), Kelly Senecal (Convergent Science), and Yuanjiang Pei (Aramco Research Center – Detroit) receiving the 2019 HPCwire Editors’ Choice Award

Convergent Science Around the Globe

2019 was a great year for CONVERGE and Convergent Science around the world. In the United States, we gained nearly 20 employees. We added a new Convergent Science office in Houston, Texas, to serve the oil and gas industry. In addition, we have continued to increase our market share in other areas, including automotive, gas turbine, and pumps and compressors.

In Europe, we had a record year for new license sales, up 70% from 2018. A number of new employees joined our European team, including new engineers, sales personnel, and office administrators. We attended and exhibited at tradeshows on a breadth of topics all over Europe, and we expanded our industry and university clientele. 

Our Indian office celebrated its second anniversary in 2019. The employee count nearly doubled in size from 2018, with the addition of several new software developers and marketing and support engineers. The first Indian CONVERGE User Conference was a huge success–we had to increase the maximum number of registrants to accommodate everyone who wanted to attend. We have also grown our client base in the transportation sector, bringing new customers in the automotive industry on board.

In Asia, our partners at IDAJ continue to do a fantastic job supporting CONVERGE. CONVERGE sales significantly increased in 2019 compared to 2018. And at this year’s IDAJ CAE Solution Conference, speakers from major corporations presented CONVERGE results, including Toyota, Daihatsu, Mazda, and DENSO.

Looking Ahead

While we like to recognize the successes of the past year, we’re always looking toward the future. Computing technology is constantly evolving, and we are eager to keep advancing CONVERGE to make the most of the increased availability of computational resources. With the expanded functionality that CONVERGE 3.0 offers, we’re also looking forward to delving into untapped application areas and breaking into new markets. In the upcoming year, we are excited to form new collaborations and strengthen existing partnerships to promote innovation and keep CONVERGE on the cutting-edge of CFD software.

► CONVERGE 3.0: From Specialized Software to CFD Powerhouse
  25 Nov, 2019

When Eric, Keith, and I first wrote CONVERGE back in 2001, we wrote it as a serial code. That probably sounds a little crazy, since practically all CFD simulations these days are run in parallel on multiple CPUs, but that’s how it started. We ended up taking our serial code and making it parallel, which is arguably not the best way to create a parallel code. As a side effect of writing the code this way, there were inherent parts of CONVERGE that did not scale well, both in terms of speed and memory. This wasn’t a real issue for our clients who were running engine simulations on relatively small numbers of cores. But as time wore on, our users started simulating many different applications beyond IC engines, and those simulating engines wanted to run finer meshes on more cores. At the same time, computing technology was evolving from systems with relatively few cores per node and relatively high memory per core to modern HPC clusters with more cores and nodes per system and relatively less memory per core. We knew at some point we would have to rewrite CONVERGE to take advantage of the advancements in computing technology.

We first conceived of CONVERGE 3.0 around five years ago. At that point, none of the limitations in the code were significantly affecting our clients, but we would get the occasional request that was simply not feasible in the current software. When we got those requests, we would categorize them as “3.0”—requests we deemed important, but would have to wait until we rewrote the code. After a few years, some of the constraints of the code started to become real limitations for our clients, so our developers got to work in earnest on CONVERGE 3.0. Much of the core framework and infrastructure was redesigned from the ground up in version 3.0, including a new mesh API, surface and grid manipulation tools, input and output file formats, and load balancing algorithms. The resulting code enables our users to run larger, faster, and more accurate simulations for a wider range of applications.

Scalability and Shared Memory

Two of our major goals in rewriting CONVERGE were to improve the scalability of the code and to reduce the memory requirements. Scaling in CONVERGE 2.x versions was limited in large part because of the parallelization method. In the 2.x versions, the simulation domain is partitioned using blocks coarser than the solution grid. This can cause a poor distribution of workload among processors if you have high levels of embedding or Adaptive Mesh Refinement (AMR). In 3.0, the solution grid is now partitioned directly, so you can achieve a good load balance even with very high levels of embedding and AMR. In addition, load balancing is now performed automatically instead of on a fixed schedule, so the case is well balanced throughout more of the run. With these changes, we’ve seen a dramatic improvement in scaling in 3.0, even on thousands of cores. 
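A toy illustration of the difference (our own sketch, not CONVERGE's actual partitioner): when AMR multiplies the cell count inside some coarse blocks, handing out whole blocks leaves ranks unevenly loaded, while partitioning the flattened cell list directly is balanced by construction.

```python
import numpy as np

# Toy model: 64 coarse blocks, some refined by AMR (each level -> 8x cells).
rng = np.random.default_rng(0)
n_blocks, n_ranks = 64, 8
base = 8**3                                   # cells per unrefined block
levels = rng.choice([0, 1, 2], size=n_blocks, p=[0.7, 0.2, 0.1])
cells = base * 8**levels                      # cells per block after AMR

def imbalance(load):
    return load.max() / load.mean()           # 1.0 = perfectly balanced

# 2.x-style: assign whole blocks round-robin to ranks
block_load = np.array([cells[r::n_ranks].sum() for r in range(n_ranks)])
# 3.0-style: split the flattened cell list evenly across ranks
cell_load = np.array([len(a) for a in
                      np.array_split(np.arange(cells.sum()), n_ranks)])

print(f"block partitioning imbalance: {imbalance(block_load):.2f}")
print(f"cell partitioning imbalance:  {imbalance(cell_load):.2f}")
```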

Figure 1. CONVERGE 3.0 scaling for a combusting turbulent partially premixed flame (Sandia Flame D) case on the Blue Waters supercomputer at the National Center for Supercomputing Applications[1]. On 8,000 cores, CONVERGE 3.0 scales with 95% efficiency.

To reduce memory requirements, our developers moved to a shared memory strategy and removed redundancies that existed in previous versions of CONVERGE. For example, many data structures, like surface triangulation, that were stored once per core in the 2.x versions are now only stored once per compute node. Similarly, CONVERGE 3.0 no longer stores the entire grid connectivity on every core as was done in previous versions. The memory footprint in 3.0 is thus greatly reduced, and memory requirements also scale well into thousands of cores.
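One standard way to implement "store once per node" is an MPI-3 shared-memory window; the sketch below uses mpi4py as our own illustration of the idea (CONVERGE's internal implementation is not public).

```python
from mpi4py import MPI
import numpy as np

# Split the world communicator into one communicator per shared-memory node.
world = MPI.COMM_WORLD
node = world.Split_type(MPI.COMM_TYPE_SHARED)

n = 1_000_000                                  # e.g., a surface triangulation
itemsize = MPI.DOUBLE.Get_size()
# Only node-rank 0 allocates storage; the other ranks map the same window.
size = n * itemsize if node.rank == 0 else 0
win = MPI.Win.Allocate_shared(size, itemsize, comm=node)
buf, _ = win.Shared_query(0)                   # pointer to rank 0's block
data = np.ndarray(buffer=buf, dtype='d', shape=(n,))

if node.rank == 0:
    data[:] = 1.0                              # fill once per node
node.Barrier()                                 # every rank on the node reads it
```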

Figure 2. Load balancing in CONVERGE 2.4 (left) versus 3.0 (right) for a motor simulation with 2 million cells on 72 cores. Cell-based load balancing in 3.0 results in an even distribution of cells among processors.

Inlaid Mesh

Apart from the codebase rewrite, another significant change we made was to incorporate inlaid meshes into CONVERGE. For years, users have been asking for the ability to add extrusion layers to boundaries, and it made sense to add this feature now. As many of you are probably aware, autonomous meshing is one of the hallmarks of our software. CONVERGE automatically generates an optimized Cartesian mesh at runtime and dynamically refines the mesh throughout the simulation using AMR. All of this remains the same in CONVERGE 3.0, and you can still use meshes exactly as they were in all previous versions of CONVERGE! However, we've now added the option to create an inlaid mesh made up of cells of arbitrary shape, size, and orientation. The inlaid mesh can be extruded from a triangulated surface (e.g., a boundary layer) or it can be a shaped mesh away from a surface (e.g., a spray cone). For the remainder of the domain not covered by an inlaid mesh, CONVERGE uses our traditional Cartesian mesh technology.

Figure 3. Inlaid mesh for a turbine blade. In CONVERGE Studio 3.0, you can create a boundary layer mesh by extruding the triangulated surface of your geometry. CONVERGE Studio automatically creates the interface between the inlaid mesh and the Cartesian mesh, as seen in the image on the right.

Inlaid meshes are always optional, but in some cases they can provide accurate results with fewer cells compared to a traditional Cartesian mesh. In the example of a boundary layer, you can now refine the mesh in only the direction normal to the surface, instead of all three directions. You can also align an inlaid mesh with the direction of the flow, which wasn’t always possible when using a Cartesian mesh. This feature makes CONVERGE better suited for certain applications, like external aerodynamics, than it was previously.

Combustion and Chemistry

In CONVERGE 3.0, our developers have also enhanced and added to our combustion models and chemistry tools. For the SAGE detailed chemistry solver, we optimized the rate calculations, improved the procedure to assemble the sparse Jacobian matrix, and introduced a new preconditioner. The result is a significant speedup in the chemistry solver, especially for large reaction mechanisms (>150 species). If you thought our chemistry solver was fast before (and it was!), you will be amazed at the speed of the new version. In addition, 3.0 features two new combustion models. In most large eddy simulations (LES) of premixed flames, the cells are not fine enough to resolve the laminar flame thickness. The thickened flame model for LES allows you to increase the flame thickness without changing the laminar flamespeed. The second new model, the SAGE three-point PDF model, can be used to account for turbulence-chemistry interaction (more specifically, the commutation error) when modeling turbulent combusting flows with RANS.

On the chemistry tools side, we’ve added a number of new 0D chemical reactors, including variable volume with heat loss, well-stirred, plug flow, and 0D engine. The 1D laminar flamespeed solver has seen significant improvements in scalability and parallelization, and we have new table generation tools in CONVERGE Studio for tabulated kinetics of ignition (TKI), tabulated laminar flamespeed (TLF), and flamelet generated manifold (FGM). 
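To give a flavor of what a 0D reactor computes, here is a bare-bones constant-volume reactor with single-step Arrhenius chemistry; all parameters are assumed toy values (CONVERGE's reactors solve detailed mechanisms).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy constant-volume reactor: one-step Arrhenius chemistry, assumed values.
A, Ea, R = 1e10, 1.5e5, 8.314    # 1/s, J/mol, J/(mol K)
q, cv = 2.5e6, 1000.0            # heat of reaction J/kg, specific heat J/(kg K)

def rhs(t, state):
    Y, T = state                             # fuel mass fraction, temperature
    w = A * Y * np.exp(-Ea / (R * T))        # reaction rate, 1/s
    return [-w, q * w / cv]                  # fuel consumption, self-heating

sol = solve_ivp(rhs, [0.0, 0.01], [1.0, 1000.0], method="LSODA", rtol=1e-8)
print(f"final T = {sol.y[1, -1]:.0f} K")     # ~3500 K after ignition (T0 + q/cv)
```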


Figure 4. CONVERGE 3.0 simulation of flow and combustion in a multi-cylinder spark-ignition engine.

CONVERGE Studio Updates

To streamline our users’ workflow, we have implemented several updates in CONVERGE Studio, CONVERGE’s graphical user interface (GUI). We partnered with Spatial to allow users to directly import CAD files into CONVERGE Studio 3.0, and triangulate the geometry on the fly in a way that’s optimized for CONVERGE. Additionally, Tecplot for CONVERGE, CONVERGE’s post-processing and visualization software, can now read CONVERGE output files directly, for a smoother workflow from start to finish.

CONVERGE 3.0 was a long time in the making, and we’re very excited about the new capabilities and opportunities this version offers our users. 3.0 is a big step towards CONVERGE being a flexible toolbox for solving any CFD problem.


[1] The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. The NCSA Industry Program is the largest Industrial HPC outreach in the world, and it has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand computational problems at rapid speed and scale. The CONVERGE simulations were run on NCSA’s Blue Waters supercomputer, which is one of the fastest supercomputers on a university campus. Blue Waters is supported by the National Science Foundation through awards ACI-0725070 and ACI-1238993.


► Changing the CFD Conference Game with the CONVERGE UC
  21 Aug, 2019

As the 2019 CONVERGE User Conference in New Orleans approaches, I’ve been thinking about the past five years of CONVERGE events. Let me take you back to the first CONVERGE User Conference. It was September 2014 in Madison, Wisconsin, and I was one of the first speakers. I talked about two-phase flows and the spray modeling we were doing at Argonne National Laboratory. Many of the people in the audience didn’t know you could do the kinds of calculations in CONVERGE that we were doing. Take needle wobble, for example. At the time, people didn’t know that you could not only move the needle up and down, but you could actually simulate it wobbling. After my talk, we had many interesting discussions with the other attendees. We made connections with international companies that we otherwise would not have had the chance to meet, and we formed collaborations with some of those companies that are still ongoing today.

At Argonne National Laboratory, I lead a team of more than 20 researchers, all of them focused on simulating either piston engines or gas turbines using high-performance computing. Our goal is to improve the predictive capability of piston engine and gas turbine simulations, and we do a lot of our work using CONVERGE. We develop physics-based models that we couple with CONVERGE to gain deeper insights from our simulations.

We routinely attend and present our work at conferences like SAE World Congress and ASME, and what really sets the CONVERGE User Conference apart is the focus of the event—it’s dedicated towards the people doing simulation work with piston engines, gas turbines, and other real-world applications. The user conference is the go-to place where we can meet all of the people doing 3D CFD simulations, so it’s a fantastic networking opportunity. We get to speak to people from academia and industry and learn about their research needs—understand what their pain points are, what their bottlenecks are, where the physics is not predictive enough. Then we take that information back to Argonne, and it helps us focus our research. 

Apart from the networking, the CONVERGE User Conference is also a great venue for presenting. My team has presented at the CONVERGE conferences on a wide variety of topics, including lean blow-out in gas turbine combustors, advanced ignition systems, co-optimization of engines and fuels, predicting cycle-to-cycle variation, machine learning for design optimizations, and modeling turbulent combustion in compression ignition and spark ignition engines. The attendees are engaged and highly technical, so you get direct, focused feedback on your work that can help you find solutions to challenges you may be encountering or give ideas for future studies.

The presenters themselves take the conference seriously. The quality of the presentations and the work presented is excellent. If you’ve never attended a CONVERGE User Conference before, my advice to you is to try to be a sponge. Bring your notebooks, bring your laptops, and take as many notes as you can. The amount of useful information you will gain from this conference is enormous and more relevant than at other conferences you may attend, since this event is tailored for a specific audience. The CONVERGE User Conference also draws speakers from all over the world, which provides a unique opportunity to hear about the challenges that automotive original equipment manufacturers (OEMs), for example, face in other countries, challenges that differ from those in the United States. Listening to their presentations and getting access to those speakers has been very helpful for us. And since there are plenty of opportunities for networking, you can interact with the speakers at the conference and connect with them later on if you have further questions.

Overall, the CONVERGE User Conference is a great opportunity for presenting, learning, and networking. This is a conference where you will gain a lot of useful knowledge, meet many interesting people, and have some fun at the evening networking events. If you haven’t yet come to a CONVERGE User Conference—I highly recommend making this year your first.


Interested in learning more about the CONVERGE User Conference? Check out our website for details and registration!

Numerical Simulations using FLOW-3D top

► Sand Core Making – Is It Time to Vent?
    1 Jul, 2020
Sand cores are a crucial element in the casting process because they are used to create complex interior cavities. For example, sand cores are used to create passages for water cooling, oil lubrication, and air flow in a typical V8 engine casting. Ever wonder how a sand core is made? How can a material that works so well for making sandcastles on the beach be formed into complex shapes able to withstand the brutal conditions of hot metal flowing and solidifying around them? In this blog I will walk you through the process of how sand cores are made and describe the modeling tools in FLOW-3D CAST v5.1 that help engineers design their manufacturing processes.

The Sand Core Making Process Workspace

Choosing the correct physics models to capture the complex flow dynamics of sand core making can be daunting. The Sand Core Making Workspace addresses this challenge by providing automated settings for numerical techniques and activating the appropriate physics models. Sub-workspaces for cold box, hot box, and inorganic processes guide the user through the setup process with ease.

Sand Shooting

The starting point with all sand cores is the shooting process. In the shooting process, a mixture of air, sand, and binder is “shot” under high pressure into a core box with air vents placed strategically around the cavity to allow air to be displaced by sand.
Water jacket sand core
Simulation of a water jacket sand core. The sand/binder mixture is shot into the core box through the 8 inlets at the top. Air vents of varying size are placed around the sand core to allow air to escape.

The primary goal of sand core shooting is to create a sand core with uniform density. Two design factors play important roles in achieving this goal — the location of the sand inlets and the location and size of the air vents. Simulating the flow of the sand mixture in FLOW-3D CAST allows us to study different inlet and air vent configurations.

This video shows the filling pattern of H32 sand with a 2% binder additive being shot to produce a water jacket sand core. Notice that some of the regions are underfilled.

To address underfilling, air vents can be easily and accurately placed at the problem area using our interactive geometry placement tool. Here, a 6 mm air vent (see red arrow) is placed at a location where incomplete filling was observed.

This video compares the filling in the region where the air vent was added with the original result. The filling is now more complete in that region. More vents can be added to address other underfilled regions.

Core Hardening

Once the air vents have been configured and the shooting produces a uniform sand distribution, the sand core needs to be hardened. Three different hardening methods can be simulated in FLOW-3D CAST: cold box, hot box, and inorganic.

Drying Sand Cores in an Inorganic Process

The sand/binder mixtures used to produce inorganic cores are water based. To harden them, energy from the hot core box, along with a hot air purge, evaporates the water and carries it out of the core through the air vents. In this video, an intake manifold sand core shot with a sand/binder mixture containing 2% water by weight is dried by a hot (180 °C) air purge. The blue region represents the water remaining in the sand core. The air vents are shown in gray. After 150 seconds of drying, the moisture continues to be pushed toward the area where the most venting occurs.

Hardening Cores in a Hot Box Process

Sand cores shot in a hot box process are hardened using energy from the core box to cure the binder. This video shows the temperature distribution in the sand core as it is heated by the hot core box.

Simulating the hardening step allows us to determine the temperature distribution in the shot sand core and identify the time required to ensure that all regions of the core are sufficiently heated to harden it.

Gassing Sand Cores in a Cold Box Process

The binder used to produce sand cores shot in a cold box process contains a phenolic urethane resin. To harden these cores and give them the strength required to withstand flowing hot metal in the casting process, hot air carrying a catalyst (amine gas in this case) is used to purge the core. The hot air/amine gas mixture is introduced through the inlets and leaves the core box through the air vents that were used in the shooting step.

This video shows the evolution of amine gas through the porous shot sand core, which is a water jacket for an internal combustion engine.

With FLOW-3D CAST v5.1, sand core manufacturers have the tools they need to model their sand core making processes and optimize the quality of their cores. Learn more about the Sand Core Making Workspace.

John Ditter

Principal CFD Engineer at Flow Science

► Simulating the Investment Casting Process
  24 Jun, 2020

The investment casting process can produce high quality, complex castings with great accuracy and controlled grain structure. However, many challenges face process designers hoping to achieve these results. Fortunately, FLOW-3D CAST v5.1 includes an Investment Casting Workspace which provides the necessary tools to study the wide range of process parameters in a virtual space and determine an optimal design before casting a single part.

In this blog, we’ll walk through the Investment Casting Workspace and show how easy it is to simulate a directionally-cooled investment casting using a Bridgman process. The casting we’ll be investigating is this multi-cavity casting on the right.

Multi-cavity investment casting

Shell Building Tool

An investment casting process begins with a wax representation of the part to be cast. The next step is to dip the wax part successively into a ceramic slurry mixture to build up a shell around the part, repeating until a shell of sufficient thickness is achieved. FLOW-3D CAST’s shell building tool allows users to create water-tight shells of any thickness in a matter of minutes.

Using the shell building interface in the GUI, the first step is to select the geometry around which the shell should be created. Next, select Fit Mesh to create a computational mesh around the geometry to be shelled. The edge of the mesh where the pouring sprue is located is moved into the part slightly so that the generated shell is open there. The only other required inputs are the shell thickness and the cell size, which should be roughly half the shell thickness.

A preview mode allows various shell thicknesses to be generated and examined quickly. For example, a 5mm shell built from the wax casting part was created in under 2 minutes.

Calculating View Factors

A critical aspect of investment casting is the calculation of view factors between all surfaces in the simulation. For every pair of surfaces that “see” each other, we must calculate how they see each other: the orientation of each surface relative to the other and the emissivity of each must be evaluated. For complex shapes, the surface is subdivided, or clustered, and the view factor between each pair of clusters is computed.
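
For reference, the quantity being computed for each pair of surfaces is the standard radiative view factor (textbook radiation-exchange notation; a general definition, not a statement about FLOW-3D CAST's particular discretization):

F_{1 \to 2} = \frac{1}{A_1} \int_{A_1} \int_{A_2} \frac{\cos\theta_1 \, \cos\theta_2}{\pi S^2} \, dA_2 \, dA_1

where S is the distance between the two differential areas and \theta_1, \theta_2 are the angles between the connecting line and each surface normal. Clustering reduces how many of these pairwise integrals must be evaluated.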

Understanding surfaces investment casting

Surface Clustering

In a Bridgman process, where the solidifying casting is being moved slowly through a selectively heated and cooled oven, the view factors are updated continuously throughout the simulation. This simulation result shows the surface clustering computed for the shell mold and the internal surfaces of the oven.

Cluster Generation

A number of user-adjustable controls for cluster generation are available to minimize memory use and simulation runtime. For example, the cluster size can be set relatively large so that iterative simulations run quickly. As the design options are narrowed down, more refined detail can be added to zero in on the final design.

Here we see that the solidifying casting has moved downward from the heated portion of the oven through a cooling ring, so that the casting solidifies from the bottom to the top. This controlled, bottom-up solidification allows a directional, columnar grain structure to be formed.

This simulation shows the temperature distribution in the solidifying casting on the left and the solid fraction on the right. The feeders at the top of each part provide liquid metal to the casting as it solidifies and shrinks.

Many process parameters can affect the outcome of an investment casting. With FLOW-3D CAST v5.1 in your design toolbox, the effect of these parameters, including the temperature profiles of the heated and cooled sections of the oven, the initial shell temperature, and the rate of motion of the solidifying casting through the oven, can be studied in-depth before casting a single part.

John Ditter

Principal CFD Engineer at Flow Science

► FLOW-3D CAST v5.1 Released
  16 Jun, 2020

Featuring new process workspaces and state-of-the-art solidification model

SANTA FE, NM, June 16, 2020 — Flow Science, Inc. has announced a major release of their metal casting simulation software, FLOW-3D CAST v5.1, a modeling platform that combines extraordinary accuracy with versatility, ease of use, and high performance cloud computing.

FLOW-3D CAST v5.1 features new process workspaces for investment casting, sand core making, centrifugal casting, and continuous casting, as well as a chemistry-based alloy solidification model capable of predicting the strength of the part at the end of the process, an expansive exothermic riser database, and improved interactive geometry creation. FLOW-3D CAST now has 11 process workspaces that cover the spectrum of casting applications, which can be purchased individually or as bundles.

“Offering FLOW-3D CAST by process workspace gives foundries and tool & die shops the flexibility to balance their needs with cost, in order to address the increased challenges and demands of the manufacturing sector,” said Dr. Amir Isfahani, CEO of Flow Science.

FLOW-3D CAST v5.1’s brand new solidification model advances the industry into the next frontier of casting simulation – the ability to predict the strength and mechanical properties of cast parts while reducing scrap and still meeting product safety and performance requirements. By accessing a database of chemical compositions of alloys, users can predict ultimate tensile strength, elongation, and thermal conductivity to better understand both mechanical properties and microstructure of the part.

“This release delivers the complete package – a process-driven workspace concept for every casting application paired with our unparalleled filling and now, groundbreaking microstructure and solidification analyses. Expert casting knowledge pre-loads sensible components and defaults for each workspace, putting our users on a path to success each time they run a simulation. FLOW-3D CAST v5.1 is going to take the industry by storm,” said Dr. Isfahani.

Additionally, databases for heat transfer coefficients, air vents, HPDC machines, and GTP Schäfer risers provide information at users’ fingertips. The new Exothermic Riser Database along with the Solidification Hotspot Identification tool helps users with the precise placement of exothermic risers to prevent predicted shrinkage.

A live webinar outlining the new developments and how to apply them to casting workflows will take place on July 15 at 1:00 pm EST. Registration is available online > 

Go here for an extensive description of the FLOW-3D CAST v5.1 release improvements > 

About Flow Science

Flow Science, Inc. is a privately-held software company specializing in transient, free-surface CFD flow modeling software for industrial and scientific applications worldwide. Flow Science has distributors for FLOW-3D sales and support in nations throughout the Americas, Europe, and Asia. Flow Science is located in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.
683 Harkle Rd.
Santa Fe, NM 87505
Attn: Amanda Ruggles
info@flow3d.com
+1 505-982-0088

► FLOW-3D Workshops: Additive Manufacturing
  29 May, 2020
FLOW-3D AM workshops

Our two-day Online Additive Manufacturing and Laser Welding Workshops offer thorough, hands-on instruction in the interface and modeling capabilities of FLOW-3D AM and FLOW-3D WELD.

Wednesday, August 19 — Thursday August 20

  • 12:00pm – 3:00pm (EST)
  • Registration deadline: Wednesday, August 12

Wednesday, September 16 — Thursday, September 17

  • 12:00pm – 3:00pm (EST)
  • Registration deadline: Wednesday, September 9

What will you learn?

  • How to set up, run and analyze simulations of laser welding and additive manufacturing processes such as laser powder bed fusion and directed energy deposition.
  • How to use FLOW-3D AM and WELD to analyze the effect of specific parameters on your process design.
  • How post-processing in FlowSight can be used to create compelling visualizations for in-depth analysis of the melt pool and associated defects.

What happens after the workshop?

  • You will have access to a FLOW-3D AM and WELD license for 30 days following the workshop. During this period, a Flow Science CFD Engineer will work closely with you, providing customized technical support to help you develop CFD models for analysis of your process.
  • If you decide to purchase a FLOW-3D license, the workshop fee will be fully credited toward your purchase.

Who should attend?

  • These workshops are open to scientists and engineers working in industry research and development of additive manufacturing and laser welding processes.
  • Each workshop is held over two consecutive days, for three hours each day
  • Workshops are hosted on Zoom
  • Registration is limited to three attendees
  • Registration deadline is one week prior to the first day of the workshop
  • Cost: $2,000.00
    • Includes a 30-day FLOW-3D AM and FLOW-3D WELD license* and in-depth, customized technical support — a $9,000.00 value

*Workshop licenses available to prospective users in the US, Canada, the UK, and Ireland.

  • A Windows machine running Windows 7 or later
  • An external mouse (not a touchpad device)
  • Dual monitor setup recommended
  • nVidia Quadro card required for remote desktop
For more info on recommended hardware, see our Supported Platforms page.

Flow Science reserves the right to cancel a workshop at any time, due to reasons such as insufficient registrations or instructor unavailability. In such cases, a full refund will be given, or attendees may opt to transfer their registration to another workshop. Flow Science is not responsible for any costs incurred.

Registrants who are unable to attend a workshop may cancel up to one week in advance to receive a full refund. Attendees must cancel their registration by 5:00 pm MST one week prior to the date of the workshop; after that date, no refunds will be given. If available, an attendee can also request to have their registration transferred to another workshop.

Workshop licenses are available to prospective users in the US, Canada, the UK, and Ireland. Existing users should contact sales@flow3d.com to discuss their licensing options.

Register for an Online Additive Manufacturing Workshop

  • American Express
    Discover
    MasterCard
    Visa
     

About the Instructor

Paree Allu, Senior CFD Engineer

Paree Allu is a Senior CFD Engineer with Flow Science, where he leads the technical and business strategy for Flow Science’s additive manufacturing and laser welding software solutions. Paree holds a Master’s Degree in Mechanical Engineering from The Ohio State University.

► test
  23 May, 2020

v = \frac{h}{2\mu} \left| \frac{d\gamma}{dT} \right| \cdot \left| \nabla T \right|

► FLOW-3D Workshops: Microfluidics
    6 May, 2020
FLOW-3D microfluidics workshop
Build hands-on CFD expertise and learn how you can use FLOW-3D‘s powerful multiphysics simulation capabilities to solve problems in microfluidics, lab-on-a-chip, diagnostics and biomedical applications. In this 4-hour online workshop, you will learn how to set up microfluidics simulations, apply complex physics to your models, analyze and interpret your simulation results, and follow general simulation best practices. After the workshop, build on what you’ve learned with your 30-day FLOW-3D license.

Wednesday, July 8

  • 12:00pm – 4:00pm EST

Wednesday, August 5

  • 12:00pm – 4:00pm EST

Tuesday, September 15

  • 12:00pm – 4:00pm EST

Wednesday, October 14

  • 12:00pm – 4:00pm EST

What will you learn?

  • How to import geometry and set up models, including meshing and initial and boundary conditions
  • How to capture coupled physics with models such as electrokinetics, heat transfer, and Lagrangian particles
  • How to use sophisticated visualization tools such as FlowSight to effectively analyze and convey simulation results

What happens after the workshop?

  • After the workshop, your license will be extended for 30 days. During this time you will have the support of one of our CFD engineers, who will help you work through the specifics of your application. You will also have access to our web-based training videos covering introductory through advanced modeling topics.

Who should attend?

  • Researchers, scientists and engineers working in the fields of microfluidics, biomedical devices, inkjets, capillary flows, diagnostics, or lab-on-a-chip
  • University students at all levels interested in numerical modeling
  • Registration is limited to 10 attendees
  • Cost: $499 (private sector); $299 (government); $99 (academic)
  • 30-day FLOW-3D license*

*Workshop licenses available to prospective users in the US, Canada, the UK, and Ireland.

  • A Windows machine running Windows 7 or later
  • An external mouse (not a touchpad device)
  • Dual monitor setup recommended
  • nVidia Quadro card required for remote desktop

For more info on recommended hardware, see our Supported Platforms page.

Flow Science reserves the right to cancel a workshop at any time, due to reasons such as insufficient registrations or instructor unavailability. In such cases, a full refund will be given, or attendees may opt to transfer their registration to another workshop. Flow Science is not responsible for any costs incurred.

Registrants who are unable to attend a workshop may cancel up to one week in advance to receive a full refund. Attendees must cancel their registration by 5:00 pm MST one week prior to the date of the workshop; after that date, no refunds will be given. If available, an attendee can also request to have their registration transferred to another workshop.

Workshop licenses are available to prospective users in the US, Canada, the UK, and Ireland. Existing users should contact sales@flow3d.com to discuss their licensing options.

Register for an Online Microfluidics Workshop

  • American Express
    Discover
    MasterCard
    Visa
     
  • Certificates will be in pdf format. Flow Science does not confirm that our workshops are eligible for PDHs or CEUs.

About the Instructor

Karthik Ramaswamy, FLOW-3D CFD Engineer

Karthik Ramaswamy is a senior CFD engineer with Flow Science, where he uses CFD to investigate and solve problems in microfluidics, biomedical engineering, complex fluids, and consumer product manufacturing processes. He also works in hydrodynamic modeling for civil infrastructure, coastal engineering, municipal, marine and water/environmental applications. Karthik holds a M.S. in Aerospace Engineering from the University of Illinois at Urbana-Champaign.

Mentor Blog top

► Technology Overview: Simcenter FLOEFD 2020.1 Battery Model Extraction Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get to the required input parameters faster and more easily. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 BCI-ROM and Thermal Netlist Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo, or CATIA V5. With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate while still maintaining a high level of accuracy. Watch this short video to learn how.

► On-demand Web Seminar: Avoiding Aerospace Electronics Failures, thermal testing and simulation of high-power semiconductor components
  27 May, 2020

High semiconductor temperatures may lead to component degradation and ultimately failure. Proper semiconductor thermal management is key for design safety, reliability and mission critical applications.

► On-demand Web Seminar: Simulating and Testing the Latest Technology for EV Power Electronics
  26 May, 2020

Learn how to measure high quality component thermal metrics for vehicle power electronics components and see how to produce lifetime prediction data using Simcenter POWERTESTER. The testing techniques are used for component design, verifying datasheet values, thermal model calibration, and for reliability studies to identify thermal degradation and failure in automotive applications.

► On-demand Web Seminar: Aviation Electrification: Integrated design of electric motors
    6 May, 2020

The Webinar is designed for anyone who wants to learn more about effectively approaching electric motor design with electromagnetic, thermal, system level simulation, and optimization software to reduce prototype costs and design cycle time.

► Blog Post: Utilizing real world data and AI
    3 May, 2020

AI is all around us, but what is it exactly? For curious minds, this series of blogs explores the fundamental building blocks of AI, which together build the AI solutions we see today and that will enable the products we will enjoy tomorrow. This blog is the fourth and final part of a 4-part series and throws light on AI and its utilization in a smart city infrastructure. After releasing the product

Tecplot Blog top

► Getting Started Tecplot 360 – FVCOM Dataset
  17 Jun, 2020

Getting started with Tecplot 360 is easy with this 45-minute online training session, which covers the basic capabilities for visualizing your results. The demonstration uses an FVCOM dataset (in netCDF format); however, the training is applicable to other datasets, whether you are working with steady or unsteady results.

You can follow along by downloading the dataset from our Getting Started Bundle. Timestamps have been added for each section to get you to answers faster! See all Training Videos.

Getting Started with Tecplot 360 – Training Agenda

  • Introducing Tecplot, Inc. [timestamp 00:30]
  • Touring the Graphical User Interface (GUI) [timestamp 04:00]
  • Loading FVCOM Data in netCDF format [timestamp 06:44]
  • Manipulating the Plot View [timestamp 08:00]
  • Viewing the Dataset Information [timestamp 09:45]
  • Adding a Georeferenced Image [timestamp 11:22]
  • Adjustment of 3D Plot Axis [timestamp 11:50]
  • Blanking Values [timestamp 13:56]
  • Walking through the Plot Sidebar [timestamp 14:39]
    (Mesh, Contour, Shade, Vector, Edge, Scatter)
  • Viewing Slices, Isosurfaces, Streamtraces [timestamp 25:13]
  • Frames, Frame Style Files, and Frame Linking  [timestamp 29:55]
  • Data Extraction and Polylines [timestamp 33:45]
  • Exporting Results with Images and Videos [timestamp 41:10]
  • Q&A [timestamp 45:05]

The post Getting Started Tecplot 360 – FVCOM Dataset appeared first on Tecplot.

► Tecplot 360 Basics Training – Q&A
  17 Jun, 2020
You asked some great questions during the Tecplot 360 Basics Online Training session using the ONERA M6 Wing dataset. Over 150 scientists and engineers registered to learn how to increase their efficiency when visualizing and analyzing their CFD results.

We were not able to answer every question during the training, and this blog provides answers to all the questions. We thought you all might be interested in reading through them to find ones that will help you with your work.

You can watch this training session video, register for upcoming training sessions, and watch recorded sessions.

How can I make a graphic overlaying two pairs of data with different delta X?

To overlay two datasets that have different X values, you have two options:

  1. Use Data>Alter>Specify Equations to normalize the X variable. You can then plot both datasets in the same frame.
  2. Use two different frames. Load each dataset into a separate frame and then overlay the frames. Use Frame>Edit Current Frame to make the frame background transparent.

How do I pull out a single variable from a pre-formatted Tecplot dataset and plot it as line vs time?

If I have understood the question correctly, after you load the data you can use Tools>Probe. This command creates a time-series plot of the values at a specific point through time. You can also use the Analyze>Perform Integration tool to create a line plot of an integrated variable versus time.

Can I input exact geometries to extract? For example, can I define a rectangle by coordinates and then extract geometry over time?

Yes, you can do this using our scripting layer. See the macro command, $!ExtractFromPolyline, in the Tecplot Scripting Guide.

If you prefer Python, use the function tecplot.data.extract.extract_line(), documented in the PyTecplot Reference Manual.
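
As a minimal, hedged sketch of the Python route (the file name and coordinates are placeholders, and extract_line()'s exact arguments may vary by PyTecplot version):

    import tecplot as tp

    tp.data.load_tecplot('results.plt')  # hypothetical dataset file

    # Sample the active frame's dataset along a polyline; here the points
    # trace a rectangle defined by (x, y, z) coordinates.
    points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
              (1.0, 0.5, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.0)]
    rect_zone = tp.data.extract.extract_line(points)

Repeating the extraction at each solution time (or over each zone of a transient dataset) yields the over-time extraction the question asks about.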

Could you do a seminar on PyTecplot?

We have many tutorials and webinars on PyTecplot. Here is a list of the resources available on our website. And we will most likely do a future online training on PyTecplot!

Which export format do you recommend?

Tecplot binary files (.plt or .szplt) are recommended for importing and exporting Tecplot 360 data. See our blog on Comparison of Data File Formats.

If you are asking about image and animation export formats, here is our advice. The vector image formats EPS and PostScript will give you the clearest high-resolution images for printing and publications. If you are sharing your images online, JPEG and PNG are best because they will be easier to render and the file sizes will be smaller. Here is a short blog on Exporting Image File Formats.

For movies, we find that MP4 works well for us. The format you choose will certainly depend on where you will be using the video.

Can Tecplot 360 import CFD++ data directly?

CFD++ exports directly to Tecplot binary format (.plt), and we recommend that. Tecplot 360 is compatible with many other file formats. Here is a short video tutorial on Loading Your Data. And here is a link to all Tecplot 360 compatible file formats.

Can Tecplot 360 read Autodesk CFD files?

Tecplot 360 does not have a direct loader for Autodesk CFD files. We can read STL files, so if you’re bringing in a geometry, we can read the STL format. If you are having trouble loading your data, please contact our support staff (support@tecplot.com), and we’ll see if we can help you.

If I switch to a predefined view in 3D (let’s say an XY view), can I control the direction of the view?

I believe you are referring to the Snap to orientation view in the plot sidebar. Yes, once you snap to an orientation view, you can manipulate the plot as usual. The Snap to orientation views are quick shortcuts to get to a certain view.

Can you integrate the pressure along the wing surface to get the net force?

Yes. You do that with the Analyze>Perform Integration menu, which opens the Integrate dialog. There you can perform integrations and calculate forces and moments.

In a map view of a terrain where Z changes with X and Y, can Tecplot 360 show an X, Y view of only the top most layer nodes? Sometimes it shows contours underlying the layers.

This is an interesting question. In coastal and ocean modeling, for example, if you load FVCOM data, which have X, Y, and Z dimensions, and show a 2D plot, Tecplot 360 has no awareness of depth in that 2D view. The Z axis must be assigned to tell Tecplot 360 whether to show the top nodes or the bottom nodes. If you want a top-down view, change to a 3D plot and assign the Z axis accordingly.

Can I take a derivative of the variables in the equation editor?

Yes, go to Data>Alter>Specify Equations to open the Specify Equations dialog. Click on the Help menu, which opens the equations page in the Tecplot 360 User’s Manual. This Help page has links to the equation syntax, where you will see a list of the different expressions and operators you can use. The page also explains the derivative and differencing functions to use if you want a derivative of an existing variable.
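
As a one-line illustration (equation syntax recalled from the Tecplot 360 documentation, so verify it against the Help page mentioned above; the variable names are placeholders), a first derivative of a variable U with respect to X can be written as:

    {dUdX} = ddx({U})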

Export image and video formats seem to lose some quality compared to what I see in the Tecplot GUI. Can you give me some suggestions?

The first thing to try is to check the anti-aliasing box and set it to three (3). Anti-aliasing will smooth out the lines and the text. However, there will still be some differences between the onscreen and the exported images: Tecplot 360 uses one branch of code for onscreen rendering and a different branch when exporting. I recommend exporting larger images with anti-aliasing.
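
The same advice applies to scripted exports through PyTecplot. A minimal sketch (the file name and sizes are placeholders; supersample is the anti-aliasing control):

    import tecplot as tp

    # Export a larger-than-screen PNG with 3x supersampling to smooth
    # lines and text; width is in pixels.
    tp.export.save_png('plot.png', width=2048, supersample=3)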

Is there a way to find the location of the maximum or minimum value of a variable?

This capability is not built into the Tecplot 360 GUI (graphical user interface), but it can be done with our Python API, PyTecplot. A script to highlight the maximum point is available on our GitHub page. More information can be found in the user manuals, installation and getting-started guides, and scripting and quick reference guides in our Tecplot Product Documentation.
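
As a minimal, hedged sketch of the idea (not the GitHub script itself; the file and variable names are placeholders):

    import tecplot as tp

    tp.data.load_tecplot('results.plt')         # hypothetical dataset file
    zone = tp.active_frame().dataset.zone(0)
    vals = zone.values('Pressure')              # placeholder variable name

    # Values objects support len(), indexing, and max(); scan for the index.
    imax = max(range(len(vals)), key=lambda i: vals[i])
    print('maximum value', vals.max(), 'at node index', imax)

From the node index you can look up the corresponding X, Y, and Z values to locate the point in space.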

The post Tecplot 360 Basics Training – Q&A appeared first on Tecplot.

► Tecplot 360 Basics Training – Aerodynamics
    9 Jun, 2020

Description

Tecplot 360 Basics training with Account Manager Jared McGarry. Jared has plenty of tips, tricks, and best practices to show you how to analyze your data more effectively. This training uses the ONERA M6 Wing dataset, which you can find in your Tecplot 360 installation examples folder.

  • Touring the Tecplot 360 User Interface.
  • Loading your data.
  • Exploring your data – styling slices, iso-surfaces, and streamlines.
  • Calculating new quantities – using the equation editor and built-in functions.
  • Extracting data – for dimensionality reduction.
  • Line Plotting – engineering decisions are often made with simple line plots.
  • Exporting your results – exporting images, animations, and videos.

Try Tecplot 360 for free

The post Tecplot 360 Basics Training – Aerodynamics appeared first on Tecplot.

► Getting Started Tecplot for Barracuda
  27 May, 2020

The webinar is hosted by Scott Fowler, Tecplot 360 Product Manager, and Sam Clark, Barracuda Virtual Reactor Product Manager.

Description

You won’t find a tutorial here, but I will give you a tour of what Barracuda Virtual Reactor® users will see as they begin to use Tecplot for Barracuda to look at their simulation results. And for anyone who is not familiar with Barracuda from past experience, I’ll point out a few things that make Barracuda unique among CFD codes.

Barracuda Virtual Reactor® is the industry standard for CFD simulation of industrial fluidization systems, and it just got a whole lot more powerful thanks to a recent partnership with Tecplot, Inc. This Webinar will help you get started using Tecplot for Barracuda while learning best practices for plotting and analyzing Virtual Reactor™ data. 

Tecplot for Barracuda will greatly enhance the ease-of-use, flexibility, and operating system compatibility of Barracuda’s post-processing capabilities. The CPFD Software team is excited about Tecplot for Barracuda and we are confident that Barracuda users will find it to be a powerful tool for post-processing their simulation results.

Webinar Agenda

  • Tecplot and CPFD
  • CPFD and Barracuda Virtual Reactor
  • Fluidized Bed Example
  • Where to learn more

First I’ll give you a quick overview of the example problem we’re going to use for this demo: a large, industrial-scale fluidized catalytic cracking (FCC) regenerator. Barracuda is very strong in this area and has become an industry standard for simulations of FCC regenerators. This particular example is based on a case study that was presented at the 2016 AFPM annual meeting. There is a paper available if you want to dig into more of the details of the case study itself and what the engineering conclusions were. That paper is available on the CPFD Software website.

Demonstration Outline

  • Launching Tecplot360 from the Barracuda GUI​
  • Viewing the Grid and Boundary Conditions
  • Exploring 3D simulation results​
  • Using multiple frames
  • Selecting spatial regions using blanking
  • Working with layouts and frame styles​
  • Comparing two simulations side-by-side​
  • Extracting data from 3D results
  • Plotting data from Barracuda’s text-based output files

The post Getting Started Tecplot for Barracuda appeared first on Tecplot.

► Creating Materials Legend Tecplot 360
  19 May, 2020

Here’s a Tecplot Tip that requires a bit of Tecplot “Kung Fu” – but our customers love it!

Tecplot 360 is designed to read and display numeric data, but sometimes you may have categorical data that is best represented using names. We call this a Materials Legend. An example of this type of data is a groundwater simulation in which scalar values refer to materials such as rock, sand, and water. Another example is an internal combustion simulation where the particles have different states: Not in wall film, In wall film, rebounded, etc.

While categorical data is not the primary design target, Tecplot 360 can display this information in the Contour Legend using Custom Label Sets. Custom labels are text strings, contained within a data file or text geometry file, which define labels for your axes or contour legend. You may select Custom Labels anywhere you can choose a number format; the result is text strings in place of numbers. There is more information on Custom Labels in our documentation:

Creating Custom Label Sets

Our goal here is to create a legend which displays the names associated with the DP_film_flag variable values in a CONVERGE dataset (from Convergent Science). The CONVERGE documentation defines the values of DP_film_flag as shown in the table below.

Value   Meaning
0       Not in wall film
1       In wall film
2       Rebounded
3       Splashed
4       Separated
5       Stripped

Figure 1. Custom Label Sets

Custom Label Sets are a way of assigning strings to (1-based) positive integer values. A custom label set can be a simple one-line file containing the strings that you want to associate with the values. You can then load this data file with your simulation data.

The first string is associated with the value 1, the second string with the value 2, and so on. Since the DP_film_flag values start at zero, we have to create a 1-based copy of this variable, which I will demonstrate below. 
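
For reference, the 1-based copy created in step 2 of Part 1 below is a single line in the Specify Equations dialog. A hedged sketch in Tecplot equation syntax (the new variable name on the left is whatever you choose):

    {Film Flag} = {DP_film_flag} + 1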

Part 1: Initial setup

  1. Load the files: tec000050.plt and customlabels.dat. Download the ZIP file below to find the files.
    Materials Legend Files
  2. Create a new variable called Film Flag which is a 1-based version of DP_film_flag. This new variable will be used to color the particles. 

    Figure 2. Specify Equations dialog

     

  3. Adjust the plot style to display the spray zone, colored by Film Flag and shaped as Octahedron (Octahedrons are faster to draw than Spheres and easier to see than Points. We’ll use the Sphere shape for the final output). Watch this 2-minute video tutorial on displaying spray particles.

In Figure 3, you can see distinct colors, but it’s difficult to know which particles are In wall film or Rebounded by looking at numbers. We want to see the names on the legend, instead of the numbers.


Figure 3. Adjusted Plot Style: display the spray zone, colored by Film Flag and shaped as Octahedron.

Part 2: Adjusting the Contour Legend to create the Materials Legend

  1. Double-click on the contour legend and set the contour levels to integer values 0-6. The zero value ensures we have an extra color band at the bottom of the legend.  

    Figure 4. Contour Details dialog

     

  2. Display the string values on the legend: Select the Legend page and click Number Format…. In the Specify Number Format dialog, choose Custom set 1 and change the Prefix and Suffix fields as shown below. Using the <sub> and </sub> notation is a trick to align the strings with the center of the color band, rather than at the division between bands. Make the font Bold to increase its size and weight, because the subscript notation reduces it.

    Figure 5. Display string values on the legend in the Specify Number Format dialog.

     

  3. Now adjust the end points of the legend to white so they blend into the background. Start by unchecking Separate color bands.
  4. Next, open the Legend Box… and set Legend Type to Fill with Fill color set to White.

    Figure 6. Legend Box Dialog

     

  5. To make the end-points of the legend white, use the Override band colors feature from the Bands page. Override the band spanning 0-1 and the band spanning 7-8. 

    Figure 7. Override band colors option

     

  6. The contour legend should now appear as the adjusted Material Legend for the Film Flag.

    Figure 8. Adjusted Film Flag Legend

     

  7. To further distinguish the colors of each material you have two options:
    • Use additional band overrides to define specific colors for each level.
    • Choose a different colormap (or create a custom colormap – see Materials_Legend.py in the supplied files). For this dataset, the Qualitative – Dark 2 colormap does a pretty good job of distinguishing between the different values.

    Figure 9. Qualitative Dark 2 colormap

 
And here is your final image.


Figure 10. Final Image – Materials Legend for the Film Flag

 


See Related Videos »

The post Creating Materials Legend Tecplot 360 appeared first on Tecplot.

► Complex Nature of Higher-Order Finite-Element Data
  12 May, 2020

Visualizing Higher-Order Finite-Element Data – Part 2: The Challenges

This blog was written by Dr. Scott Imlay, Chief Technical Officer, Tecplot, Inc.

“Run when you can, walk if you have to, crawl if you must; just never give up!”

-Dean Karnazes, Ultramarathon Man

I feel it is important to acknowledge our worldwide struggle with COVID-19. Like me, most of you have probably made significant changes to your lives to slow the spread of the disease. Some of you have been more directly affected, having yourself or loved ones fall ill. A few of you may have lost a loved one. I used variations of the Dean Karnazes quote above as a mantra to help keep me moving through the most difficult parts of IRONMAN triathlons. It also helped me through the darkest moments of my grief after my daughter passed away last year. If you are going through a dark time, give it a try.

Run when you can, walk if you have to, crawl if you must, just never give up! Keep moving forward!

Visualization techniques for Higher-Order Finite-Element Solutions


Figure 1. Isosurface in a linear tetrahedron.

In keeping with this theme, my colleagues and I at Tecplot are pushing ahead at full speed on new features and new releases, in spite of working from home. My current role is researching visualization techniques for higher-order finite-element solutions. My last blog, Visualization of Higher-Order Elements, was a primer on higher-order finite-element CFD methods – what they are, how they are used, and what they work best for. In this blog I will address the challenges of visualizing the results of a higher-order CFD solution.

The complexity of higher-order finite-element data complicates the visualization process. First is the variability in defining the coefficients of the polynomial basis functions. For nodal techniques, the coefficients of the basis functions are the values of the solution at the nodes. Sometimes a modal technique is used, where the polynomial is less coupled to the nodal values. When developing visualization algorithms for higher-order-element data we must decide which basis functions to use.
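
To make the nodal/modal distinction concrete (standard finite-element notation, not specific to any particular solver), the solution within an element is expanded as

u_h(\mathbf{x}) = \sum_{i=1}^{N} c_i \, \phi_i(\mathbf{x})

where the \phi_i are the polynomial basis functions. In a nodal basis the \phi_i are Lagrange polynomials with \phi_i(\mathbf{x}_j) = \delta_{ij}, so each coefficient c_i is simply the solution value at node i; in a modal (hierarchical) basis the coefficients are not point values, and the solution at a point must be recovered by evaluating the full expansion.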

Frequently, the polynomial order used for the solution is higher than the polynomial order used to define the curved edges and surfaces of the grid. For example, the grid may use a linear or tri-linear polynomial (our normal elements) while the solution within the element may vary by a third-order polynomial. For curved edges and surfaces, a second- or third-order polynomial is often used even if higher-order polynomials (up to 10th order) are used for the solution.

Solution Complexities

Another complexity is that the solution is not always continuous. For example, the discontinuous Galerkin (DG) method has a discontinuity in the solution values between adjacent elements. Customers sometimes want to see the discontinuity in their visualization and sometimes they don’t. For example, the amount of discontinuity gives them information on the level of grid-convergence. Small jumps between elements mean they have a sufficiently dense grid while large jumps mean the grid is too coarse. On the other hand, when presenting the results to customers they would prefer not to see the discontinuities.

Nature of Isosurface – Linear versus Quadratic

But, probably the biggest complexity for higher-order-element visualization algorithms is the very nature of the data. Consider first the isosurface passing through a linear tetrahedral element. Because the solutions are linear, the isosurface within the element will be a planar triangle or quadrilateral (see Figure 1).


Figure 2. Isosurface in a quadratic tetrahedron.

In fact, the isosurface can be completely defined by its intersections with the edges of the tetrahedron. Since the solution varies linearly along the edges, you can calculate these intersections very quickly. You can also quickly exclude edges based on the range of the nodal values at either end of the edge: if the isosurface value is greater than the maximum node value, or less than the minimum node value, there is no need to compute further. In this way, the vast majority of the edges can be excluded from further computation by a couple of simple floating-point compares. This, among other optimizations, makes this simple technique very fast. The equivalent algorithm for hexahedra (marching cubes) is a little more complex, but the same sort of optimizations apply.
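
Here is a minimal Python sketch of that edge test (a textbook marching-tetrahedra ingredient, not Tecplot's actual implementation):

    # v0, v1 are nodal values at the edge endpoints; p0, p1 their positions.
    def edge_isosurface_point(p0, p1, v0, v1, iso):
        # Cheap exclusion: skip the edge unless iso lies between v0 and v1.
        if iso < min(v0, v1) or iso > max(v0, v1) or v0 == v1:
            return None
        t = (iso - v0) / (v1 - v0)          # linear interpolation parameter
        return tuple(a + t * (b - a) for a, b in zip(p0, p1))

    # Example: iso = 0.5 crosses 25% of the way along this edge.
    print(edge_isosurface_point((0, 0, 0), (1, 0, 0), 0.4, 0.8, 0.5))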

Isosurface in a Quadratic Tetrahedron

No such simple isosurface algorithm exists for higher-order elements. The isosurface is not, in general, planar and it doesn’t even have to intersect the edges or surfaces of the element (see Figure 2). You can have isosurfaces that are entirely contained within an element like little islands. It demands a new and, most likely, more computationally expensive algorithm.

Other areas of visualization are also complicated by the use of higher-order elements. Surface shading, mesh plots, interpolations for streamtrace computation (and other things) all must be modified.

In my next blog, I will discuss the results of our research into higher-order finite-element isosurface algorithms.

Subscribe to Tecplot 360

Get notified when the next blog is posted!

Subscribe Now

The post Complex Nature of Higher-Order Finite-Element Data appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► And it’s a wrap: Preventing virtual conference fatigue
  25 Jun, 2020

And it’s a wrap: Preventing virtual conference fatigue

At some point, I’ll muster the energy to write about what I learned, but right now I’d like to tell you about the experience of sitting through online user events from Altair, ANSYS, AVEVA, Bentley, Hexagon, PTC, SAP, SAS, Siemens, and Tech Soft 3D — in addition to NAFEMS’ CAASE and all of the normal industry, company, and investor events. Often, several events were scheduled for the same time, which made life extra-complicated. I learned you can’t have more than one session of many platforms open at the same time — who knew?! It turns out there’s real fatigue involved in doing too much stuff online. Recommendations at the bottom, but let’s start with a few things that worked and didn’t.

What went well?

  • For the most part, the technology worked. There was one spectacular failure — see below — but, in general, it was OK. As good as a purely passive experience can be, at any rate.
  • The speakers and events that understood the limitations of the mechanism produced more engaging events than those who simply put a digital front end on what they would have presented in person.
  • A positive case in point: of the hundreds of sessions I attended over the last few weeks, ANSYS CEO Ajei Gopal’s keynote at Simulation World was probably the best. It was short (16 minutes); he laid out the world-changing benefits of simulation and how ANSYS meets customer needs, and issued a call to action: do more, do better. Use that simulation superpower.
  • Another one that worked really well in its own geeky/nerdy way was the PTC LiveWorx keynote by OnShape’s Jon Hirschtick and long-time PTCer Mike Campbell. They tag-teamed on their content and then used a Vuforia app to try to diagnose a fault in a piece of home audio gear. They were engaging, human, and got their messages across: Vuforia will rule the world, and PTC’s Atlas will change how we work. I remember both the message and the demo — and that, after all, is the point: to get that message to stick.
  • Far less “produced” than ANSYS, PTC, and most others was Hexagon’s VTD, Virtual Test Drive, event. It was more interactive and felt like a (very long) web meeting — and I mean that in a good way. The speakers weren’t pre-recorded, which meant that they could reference one another rather than speaking in a vacuum. People sometimes spoke over one another — just like in real life. And the event was spread out over several weeks, meaning nothing was too overwhelming to fit into already busy schedules.
  • Bottom line, as a mechanism for talking at attendees, I’d say these tools work well. To gauge interest, to interact, they’re a failure because the communication is one-way.

What didn’t go well?

  • When technology failed, it was spectacular. SAP had a very ambitious vision for digital events to replace its SAPPHIRE Live, with multiple channels of content, each targeted at different audiences and held on different days. Some of it started being “live” (meaning recorded but available on their website) weeks ago, but the big TADAAA liftoff was supposed to be the musician Sting leading into the SAP CEO keynote. I don’t know, but I’m guessing that telling the world that they could tune in to see Sting perform led to a huge overload on the conference system. The CEO and other keynotes eventually went out over Twitter on the day, but it was an embarrassing start. (The keynotes are now available for replay on the conference site.)
  • And that brings up an important point: when it didn’t work, or when it was too hard to find what I was looking for, I moved on. And I’m sure others did, too. I have good intentions about going back to the SAP keynotes but … there will always be another urgent thing, pushing that off the day’s to-do list.
  • Most events put replays up instantly; some didn’t. To the prior point, if I can’t watch a replay when I think of it, it might not happen later. That’s especially important if …
  • Some events put too many things at the same time, thinking along in-person conference lines. The logic there is that if you have interesting sessions running concurrently, an enterprise will send more people so that it doesn’t miss out on the content, bumping up numbers and registration fees. That’s flawed, especially if you hope to cross-pollinate, say to get CAD people to explore CAE or structural CAE to fluids or PLM to ERP. Content should be staggered so that one human being can attend the sessions that matter to their jobs — and if the conference format can’t do that, the replays should be available quickly, before the attention span moves on.
  • For these massively complicated events, vendors should build suggested agendas to get people to the key sessions rather than to random ones. And make the search engines far more useful by adding good descriptions and tags — don’t just throw up 200 sessions and hope for the best. Sample agendas might be for CAD people, for specific product users, for investors, an introduction for people who know nothing about the company or its products — organize all of that awesome content so that attendees have a way of starting to engage.
  • Some events tried to have timelines for Asia, Europe, and the Americas — the CEO keynote starting at 9AM in each time zone, say. I understand why (to make it as much like the physical events as possible, and to respect each group’s normal working day), but it made for a very confusing experience. I’m up early, in the Eastern US, so I watched replays from Asia, then caught some of Europe and rounded out with US events. Assume attendees will be watching asynchronously and perhaps don’t bother with the added complication of time zones.
  • That said, AVEVA World Digital did add unique geo-specific content, but I believe it all went live at the same time.
  • Breaks! One event (apologies, I don’t remember which but it was brilliant) had “go outside” in the agenda. It was a legitimate 15 minute break between sessions – long enough to stretch and get a snack. Not long enough to get lost in another work task. And they came back to the same screen, so even if one did get distracted, the restart was audible.
  • I’ve attended day-long things, events in several hour-long chunks in one week, and events spread out over several weeks. I’m not sure there’s a “best practice”, but the day-long events are incredibly challenging for the audience. How long can we sit still? How much multi-tasking do we have to do, checking email, taking calls, etc.? When I’m offsite at an event, I can focus on it to the exclusion of most other things. That’s not really possible when everyone knows I’m just not answering the phone. If the point is to get and hold people’s attention, shorter is always better.

And that brings us to speakers, without whom this whole thing would be pointless. If you’re a CEO-level type, you can probably get a teleprompter and pro camera setup but for the rest of us, yikes. Sitting still and staring at a camera while delivering a Powerpoint presentation is … horrible. I walk around, wave my hands, stop and start thoughts and sentences, which works in person but not in a webinar. We need to find alternatives that let people communicate in a way that makes them more comfortable: colleagues could interview one another, they could chat about a favorite project, they could solve a problem together (as Jon and Mike did) — not everything has to be a Powerpoint. And it’s likely that those sessions would project more energy than yet another Powerpoint session, which would hold the audience’s interest.

My suggestions for the next round of virtual events?

  • Check the calendar of partner companies and competitors. All of your customers use other vendors’ products — don’t make them choose which event to attend in real-time and which to get (maybe) in replays. Send out a save-the-date notification as early as you can.
  • Shorter is better. For the overall event and for individual sessions, too. This isn’t the same experience as a live event; this is a person with a screen and a chair. You have to work very, very hard to keep their attention for more than 30 minutes.
  • Simple is OK. A lot of the platforms had bells and whistles that I didn’t use, and that made it harder to find the content I was looking for. I don’t need a glitzy front end; I need to get at the presentation I’m almost late for!
  • Mix it up. Some canned content, some live, some Powerpoint, some chat. Siemens had a fun video that involved exercise bicycles and tech-babble in between sessions. It broke up the otherwise serious content.
  • Triple check your technology. Make sure it works the way you need it to, when you need it to. You might not get people to come back.

Finally, think about the audience. They need guides, breaks, a chat capability (with organizers and/or each other), contests to make it less serious and more fun, less canned stuff and more live content. Promise to send a Tshirt or coffee mug to legit attendees who fill out more than some number of surveys. The care and attention that goes into a live attendee experience needs to go into this type of interaction, too.

For attendees: limit what you choose to attend. I clearly overdid it. Get a Bluetooth headset — walking around while listening (even if I had to dash back to the screen every so often) made it possible for me to stay more engaged, longer. It also means you can get to snacks as needed, and as we all know, that’s key to survival. Try not to multitask — just listen. If you do it for shorter periods of time, that email or phone call can wait. If a session isn’t what you thought it would be, move on to something else. It’s OK – they won’t notice. Take breaks and go outside!

This time of year is always crazybusy for me, with vendor and investor events every week. But it’s so much simpler when I can physically only be in one place at a time. I’m able to acknowledge that I can’t do it all and just let it go. This year, with so much virtual content, I felt I had to attend as much as possible. I am grateful to the vendors for putting on these events and to all of the speakers who worked hard on their sessions — but now I need a very, very, very long nap away from anything digital.

What about you? Did you attend any of the many virtual user conferences? What did you think?

Title image is by Tumisu from Pixabay.

The post And it’s a wrap: Preventing virtual conference fatigue appeared first on Schnitger Corporation.

► Quickie: Siemens adds UltraSoC to its semiconductor design offering
  23 Jun, 2020

Quickie: Siemens adds UltraSoC to its semiconductor design offering

Siemens just announced that it will acquire UltraSoC Technologies Ltd., maker of instrumentation and analytics solutions that “put intelligent monitoring, cybersecurity, and functional safety capabilities into the core hardware of system-on-chip (SoC)”. UltraSoC describes itself as providing “a modular semiconductor IP platform that allows chip development teams to create capable, highly flexible on-chip monitoring and analytics infrastructures. These can be used both as a part of an SoC’s inherent functionality and as a development tool that dramatically accelerates and de-risks the entire process of producing a chip”.

UltraSoC’s solutions allow users to embed monitoring hardware into complex SoCs to enable “fab-to-field” analytics that “accelerate silicon bring-up, optimize product performance, and confirm that devices are operating ‘as designed’ for functional safety and cybersecurity purposes”, says Siemens.

Siemens plans to integrate UltraSoC’s technology into Mentor’s Tessent software offering. Siemens believes that the combination will “benefit the entire semiconductor product lifecycle, including structural, electrical, and functional capabilities of SoCs. It also supports Siemens’ comprehensive digital twin with UltraSoC providing monitoring of the real device”.

The acquisition is due to close in Q4 of Siemens’ fiscal year — meaning, before the end of September 2020. Terms of the deal were not disclosed.

The post Quickie: Siemens adds UltraSoC to its semiconductor design offering appeared first on Schnitger Corporation.

► Respond. Reset. Reimagine.
  16 Jun, 2020

I’ve been talking to a lot of people about how Covid-19 has affected their work lives — often inseparable from their home lives, given the sudden explosion of home offices and homeschooling — and it generally comes down to these three phases.

First, we went a little nuts. Panic. As one person told me, his firm’s IT burden went from 6 offices to 6000, each with a different IT setup and many with too few computer screens to meet the sudden work+education need. Central offices were often ill-equipped to deal with the sudden reality of dozens or hundreds of people using VPNs to access their work lives, placing a lot of stress on systems that were intended to support a handful of road warriors plus the occasional person working from home. It wasn’t pretty, but most companies were able to get to some level of productivity (even as we spent far too much time trying to figure out what’s on that shelf behind so-and-so on the Zoom call. Not me, of course. I’m focused).

We could call that the “Respond” part of the timeline — every day brought a new crisis, whether in IT, collapsing supply chains, new government guidelines, or grocery shortages. We weren’t thinking; we were reacting.

Once we got past that phase (here at Schnitger Corp, that was in late April), the thrill of the new died down and it all became a slog. We had figured out what we could and couldn’t do, how to structure our days at home, and what to do when we couldn’t buy flour. It was a bit of a breather between the fear and panic of Respond, and a time to start thinking about what came next: returning to some new normal.

That phase has been labeled “Recover” or “Reset”. We have to figure out which AEC projects are going ahead and which are likely stalled as government funding switches to critical infrastructure or job-creation projects. In manufacturing, we need to dig through supply chains, figure out where things are, what’s missing, and how we can get back to production with a mandated 6 foot/2 meter gap between all of our people. Reset is about getting back to the old normal in some way, or compromising to get as close to it as we can, given Covid-related restrictions.

Alongside Reset, though, is Reimagine. If what we’re doing now works, why do we have to go back to how it was? We’ve learned to do more things, more remotely, which can be both efficient and eco-friendly. Unmanned operations became even more critical in power generation, data centers, and the other industries that made work from home possible. If we can monitor our production facility from home, why should we need to go to the plant to do it? Couldn’t we be more productive, monitoring five plants from home? Remote monitoring has been possible for a long time; new visualization, data cleaning, machine learning, and augmented reality tools make it better and easier, and lead to new insights. But but but …

We also have the opportunity to radically change how we do what we do: keep the parts of our processes that work and jettison those that don’t. If working from home makes employees happier and more productive, why force them into central offices? The real-estate savings could pay for a lot of IT infrastructure improvements. If using cloud CAD or virtualized PLM turned out to work well, why not keep that alongside the traditional toolset? Maybe transition over in time? One of the few good things to come out of this whole pandemic is the fact that a lot of people had to try things they wouldn’t normally have considered — remember and use those lessons!

That’s all great and I know from my contacts that many of their companies are reinventing along these lines. Today, though, I realized that this isn’t enough. I was listening to Schneider Electric CEO Jean-Pascal Tricoire speak at AVEVA World Summit Digital, where he used this image in his presentation:

Mackaycartoons is right: we’ll respond, react, recover, reimagine, reset, whatever after Covid, but that’s hardly sufficient. What we need to do is build resilient and adaptive businesses that can deal with whatever economic fallout comes next — a recession seems likely but how deep or how long is unclear. Retail outlets may be re-opening, but if consumers aren’t confident enough to buy, what then? It’s very early days where I am (outside Boston in the US) but so far, there doesn’t seem to be much pent-up demand.

Right behind these immediate and very real problems is climate change. The pandemic will eventually play itself out. Business will return to some level of success. But our planet is getting warmer, our coastal cities are in danger of flooding, and we need to do more to stop the decline.

Digitalization, luckily, ties into both sets of problems — Covid and climate. Remote everything means fewer miles driven and flown. Working to resolve the supply chain issues that became obvious as a result of Covid could, perhaps, lead to using more locally-produced components or recycled materials. Maybe a production plant that’s a net energy consumer could become neutral — or use alternative sources of power.

My point? Reimagining is about my job and your job, and our companies. But it’s also a bigger opportunity to call into question many of our assumptions and to look for new and better ways to do business.


Technology plays such a huge part in our response to the Covid shocks that we have to address the inequalities that became visible over the last few months. Someone figured out that a million children in one large US city didn’t have ANY schooling during their physical distancing — for lots of reasons, including parents who had to work outside the home and couldn’t facilitate, lack of computers, lack of Internet, and lack of the basic skills needed to make this style of teaching work — which won’t help them at all when they need to figure out how to work remotely as adults. We all need to get involved with the public/private/charity consortia that are working to address this critical issue. It can’t make up for what was lost over the last few months, but it’s a start. Go here for a lot more about this issue.

The cartoon is from M. Tricoire’s presentation because I couldn’t find the original on https://mackaycartoons.net/ — but check out Mr. Mackay’s cartoons and the Hamilton Spectator, at https://www.thespec.com/, for a Canadian view on all things.

The post Respond. Reset. Reimagine. appeared first on Schnitger Corporation.

► AVEVA enters fiscal 2021 betting on digitalization
    9 Jun, 2020

AVEVA enters fiscal 2021 betting on digitalization

As foreshadowed back in April, AVEVA today reported revenue of £834 million, up 9% (and up 7% on an organic, constant currency basis; “cc” below). CEO Craig Hayman and Deputy CEO/CFO James Kidd gave a lot of great additional information on how the world and their business have changed since April, and we’ll get to that after the details:

  • By type, Subscription revenue was £317 million, up 45% (up 43.2% cc)
  • Maintenance revenue was £202 million, up 4% (down 1% cc) as AVEVA actively incentivized sales resources to move customers to subscriptions — and as the AVEVA Flex program took hold, which offers consumption-based subscriptions
  • Together, that’s AVEVA’s recurring revenue category, which totaled £519 million in F20, or 62% of total revenue — up from 53% of total a year ago (the arithmetic is checked just after this list)
  • Perpetual license revenue continued its planned decline, to £179 million, down 15% (down 17% cc)
  • Finally, Services revenue was £136 million, down 5% (down 5% cc) as AVEVA shifts its focus to performing high value, software-related services (and offloads other types to its expanding network of partners)
  • AVEVA does not break out Cloud revenue but said that it saw “growth of some 200% in Cloud orders” [emphasis mine; note, not revenue] on new and expansion orders from existing customers. Mr. Hayman told investors on the earnings call that the last quarter saw the conversation shift from “don’t talk to me about cloud” to “how quickly can we get going” as companies responded to work from home pressures
  • By geo, revenue from EMEA was up 4% to £327 million, driven by growth in Russia/CIS, in the oil & gas, power, and industrial markets. The other parts of EMEA saw flat to mid-single-digit growth, reflecting the economic environment and subdued North Sea oil activity. AVEVA reports closing “a number of key deals” across its end-markets as well as expanding its footprint at existing EPC customers
  • Revenue from the Americas was £279 million, up 2%. The company said Brazil “performed very well” while North America and Latin America “grew”. AVEVA said that the decline in services revenue was “higher in the Americas than other regions as AVEVA continued to reduce the Services element of its pipeline Monitoring & Control business”
  • Finally, revenue from Asia was up 27% to £228 million driven by strength in Australia and India. Interestingly, “China was on track for an outstanding year before the impact of Covid-19 hit the [March] quarter. Despite that, China still delivered double-digit growth for the year”
  • By business unit, Engineering was up 7% cc to £359 million, including 25% growth in subscription licenses
  • Monitoring & Control reported revenue of £259 million, up 3% cc; the business unit achieved an approximately 150% increase in subscriptions following the introduction of AVEVA Flex. AVEVA reports “solid growth in the core Wonderware business … [and] particularly good [growth] in consumer packaged goods and life sciences [verticals]”
  • The Asset Performance Management unit reported revenue of £117 million, up 12% cc, with 250% growth in subscriptions. Overall growth was led by AVEVA Predictive Analytics
  • Finally, revenue from Planning & Operations was £100 million, up 13% cc on “particularly good growth from Planning & Scheduling and Asset Optimisation”
  • By end-industry, AVEVA seems to have seen a steady performance. Oil & Gas was around 40% of revenue (as compared to 40-45% a year ago), while Packaged Goods (in which AVEVA includes Food & Beverage and Pharma), Power, Marine, Chemicals & Petrochemicals, and Metals & Mining each accounted for 5-10% of total revenue. AVEVA makes this point to show its diversification away from oil and gas – and from the upstream part of that market most affected by OPEC’s manipulation of oil markets. In general, it seems like customers across the verticals are focusing more on digitalization as a means to business continuity and a means of increasing efficiency, lowering risk and improving profitability
  • Mr. Hayman, who divides oil & gas into energy and fuel, gave a bit more insight: We always need energy, but the sharp reduction in fuel consumption as we stopped flying and cut down on driving as a result of Covid-19 mandates led several oil majors to announce reductions in capital expenditure. That could have a negative impact on some of AVEVA’s EPC customers. But, “most of these EPCs have multi-year subscription contracts [with AVEVA] with a minimum spend, which provides some insulation for us”
  • Indirect sales represent 1/3 of total revenue and the channel “performed well, achieving growth across all regions”. The channel is increasingly enabled to sell more AVEVA products, such as AVEVA Asset Performance Management (APM), in addition to its historic focus on Monitoring & Control. And towards the end of the year, AVEVA introduced AVEVA Select, which provides partners with the opportunity to distribute the full AVEVA portfolio
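
For those keeping score at home: a quick back-of-the-envelope check, using only the figures above, confirms the headline numbers hang together. Subscription £317 million + maintenance £202 million = £519 million of recurring revenue, and £519M / £834M ≈ 62%, the recurring share AVEVA quotes. By geography, EMEA £327 million + Americas £279 million + Asia £228 million = £834 million, matching the reported total.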

Mr. Hayman’s opening remarks on the call with investors covered the highlights of the results and the company’s experience of Covid-19: AVEVA “successfully navigated COVID in the final quarter of the [fiscal] year. COVID has changed the world, and we expect the disruption to continue for the first six months of fiscal 2021”. Later in the call, he said that “COVID is a substantial challenge but our hands are firmly on the wheel of this business, and we are very rapidly making decisions, choices, and taking action to maintain alignment around our strategy and medium-term guidance”. My take: this isn’t over, by any means, and businesses need to stay agile and adjust to changing conditions. A case in point: Mr. Hayman said that AVEVA’s offices in China were the first to close and the first to reopen. The company is taking lessons learned there, on both sides of the closing, and applying them worldwide as it becomes appropriate.

Speaking of China, revenue from China was up in the “double digits” for the year, leading one investor to ask if China was back to business as usual. Mr. Hayman said that “Signs are good; energy consumption, manufacturing output, etc. are all up. China is entering the renewal phase” (in Mr. Hayman’s response-recovery-renewal model of Covid-19) but not yet at plan.

On cost-cutting: Mr. Hayman and Mr. Kidd both emphasized that the cuts were to discretionary items — travel, new hires, raises — and not to core research and development. Interesting (perhaps only to me): AVEVA’s 250 scrum teams shifted to working from home and may even have increased productivity. And, said Mr. Kidd (I think): AVEVA is looking at how it will work in the future, and comparing that to pre-Covid.

Take those two together, and it seems as though many of the things we’re hearing pundits talk about are being pondered in real life: who will work where? Why? What do they need to be productive there? And the bigger questions: do we all need to be in offices? What about those people who travel a lot anyway — do they need offices? How does that work if hot-desking is not allowed? What does a largely-remote workforce do to company culture?

About deals: Mr. Kidd said that it is harder to close (larger) perpetual deals and that there was some minimal disruption in closing contracts in the March quarter — a few deals did slip out of fiscal 2020. I’m trying to find out if they’ve since closed.

Mr. Hayman told investors that Covid-19 has brought digitalization sharply into focus for many of his manufacturing prospects as they start to work out how to navigate whatever their new “normal” is. He said that solutions like AVEVA’s Unified Operations Centre can expand from oil & gas into new verticals to provide an integrated view of engineering, operations, and performance across their fleet of assets. As investments in new capacity become harder to justify, he said, companies will look to digitalization to build and maintain — and then protect — the profitability of their current assets. AVEVA, Mr. Hayman said, is well-positioned to take advantage of those trends.

The post AVEVA enters fiscal 2021 betting on digitalization appeared first on Schnitger Corporation.

► Bentley buys voice-based construction app maker
    3 Jun, 2020

Bentley buys voice-based construction app maker

Bentley Systems just announced that it has acquired NoteVault, which makes voice-based solutions for construction management. Start NoteVault on your mobile device, speak into it, and the app’s construction-specific machine learning (backed by human transcription) creates textual status reports and other field notes.

Think about how often you stop to type into your phone — and how awkward and dangerous that can be. I use an iPhone; Siri is useful and able to turn many words into text but isn’t infallible — and for litigation reasons, construction notes need to be accurate.

Bentley plans to add NoteVault to its SYNCHRO digital construction environment, which already plans projects and uses mobile field applications to track and manage many aspects of a live site such as labor, materials, equipment, and so on.

Peter Lasensky, CEO of NoteVault, explained how NoteVault fits into Bentley’s concept for a construction digital twin: “Updating 4D construction models through insights gleaned from our software for field-captured reporting fully extends the power of voice and positioning technologies together. Combining NoteVault and SYNCHRO is a natural next step in our overall mission, with Bentley, to drive greater efficiencies in construction.”

I can see many other use cases for voice-activated data gathering and wonder where else Bentley might take this capability. The company bought bridge monitoring technology a couple of years ago; inspectors take a lot of notes about their findings — perhaps there. Some of its Asset Information Management Solutions also rely on observed conditions — maybe there, too. The key is NoteVault’s machine learning: it has already been “taught” many construction-specific terms and I imagine that it can be taught the nuances of other industries.

Details of the acquisition were not disclosed.

The post Bentley buys voice-based construction app maker appeared first on Schnitger Corporation.

► Stratasys announces a 10% workforce reduction
    2 Jun, 2020

Stratasys announces a 10% workforce reduction

I just put up the post about no PLMish layoffs — and then Stratasys put out a press release saying that it will cut its global workforce by about 10%, mostly before the end of June and all by the end of the summer.

CEO Yoav Zeif says, “This reduction in force is a difficult but essential step in our ongoing strategic process, designed to better position the company for sustainable and profitable growth. I would like to express my appreciation to each of the employees impacted by this decision for their dedicated service. Current conditions make the job market even more challenging, and we have done our best to provide the departing employees globally with a respectable and fair separation. This measure is not expected to affect the progress on our forthcoming product launch plans, which remain a top priority as we lead the industry to new heights with our best-in-class additive manufacturing solutions.”

Stratasys believes this and other cost-reduction actions will save about $30 million a year.

Stratasys is PLMish in that its users have driven a lot of the development of additive-friendly design software — but it is, in essence, a hardware company. And that, right now, is a difficult place to be, as many manufacturers are only just ramping back up amid uncertain demand.

The post Stratasys announces a 10% workforce reduction appeared first on Schnitger Corporation.

