
CFD Blog Feeds

Another Fine Mesh

► This Week in CFD
    3 Jan, 2020
The CFD world must be off to a slow start this year as evidenced by the relatively low number of news items in this week’s roundup. But there are two talks worth listening to. The first is by Donald Knuth … Continue reading
► I’m Vincent van Liebergen and This Is How I Mesh
    2 Jan, 2020
My fascination for fluid flow began around three decades ago. I always loved water sports as a child and competed in both sailing and swimming. In order to become a better sportsman, you have to practice, understand, and optimize your … Continue reading
► CFD 2030 at AIAA SciTech
  31 Dec, 2019
The AIAA CFD 2030 Integration Committee will be present at AIAA SciTech in Orlando so be certain to participate in what promise to be some very valuable events. High Performance Computing’s Impact on Aerospace Prediction This moderator-led, Forum 360 panel … Continue reading
► Pointwise at AIAA SciTech
  30 Dec, 2019
For a good time at AIAA SciTech next month, look no further than the reception we’re co-hosting with our friends at Tecplot and FieldView. Oh – we’ll be presenting technical work too. At next month’s AIAA SciTech Forum and Exposition, … Continue reading
► This Week in CFD
  27 Dec, 2019
December is a popular month for software releases judging by the number of announcements in this week’s post. There’s also an article about the use of CFD to study respiration in reptiles. The image shown here is from an article … Continue reading
► It’s Time to Apply for Summer 2020 Internships
  23 Dec, 2019
Are you a student majoring in engineering, computer science, math, or physics? Do you like computational fluid dynamics and mesh generation? Have you started thinking about how you’ll spend next summer? Are you home for winter break with free time … Continue reading

F*** Yeah Fluid Dynamics

► Captured by Waves
  23 Jan, 2020

Acoustic levitation and optical tweezers both use waves — of sound and light, respectively — to trap and control particles. Water waves also have the power to move and capture objects, as shown in this award-winning poster from the 2019 Gallery of Fluid Motion. The central image shows a submerged disk, its position controlled by the arc-shaped wavemaker at work on the water’s surface. The complicated pattern of reflection and refraction of the waves we see on the surface draws the disk to a focal point and holds it there.

On the bottom right, a composite image shows the same effect in action on a submerged triangular disk driven by a straight wavemaker. As the waves pass over the object, they’re refracted, and that change in wave motion creates a flow that pulls the object along until it settles at the wave’s focus. (Image and research credit: A. Sherif and L. Ristroph)

► Rattlesnakes Sip Rain From Their Scales
  22 Jan, 2020

Getting enough water in arid climates can be tough, but Western diamondback rattlesnakes have a secret weapon: their scales. During rain, sleet, and even snow, these rattlesnakes venture out of their dens to catch precipitation on their flattened backs, which they then sip off their scales.

Researchers found that impacting water droplets tend to bead up on rattlesnake scales, forming spherical drops that the snake can then drink. Compared to other desert-dwelling snakes, Western diamondbacks have a far more complicated microstructure to their scales, with labyrinthine microchannels that provide a sticky, hydrophobic surface for impacting drops. (Video and image credit: ACS; research credit: A. Phadnis et al.; via The Kid Should See This)

► Bouncing Off Defects
  21 Jan, 2020

The splash of a drop impacting a surface depends on many factors — among them droplet speed and size, air pressure, and surface characteristics. In this award-winning video from the 2019 Gallery of Fluid Motion, we see how the geometry of a superhydrophobic surface can alter a splash.

When a drop falls on a protruding superhydrophobic surface, like the apex of a cone, it can be pierced from the inside, completely changing how the droplet rebounds and breaks up. The variations the video walks us through are all relatively simple, but the resulting splashes may surprise you nevertheless. (Image and video credit: The Lutetium Project)

► Superman’s Hair Gel
  20 Jan, 2020

I love a good tongue-in-cheek physical analysis of superheroes. This estimate of the drag force experienced by Superman’s hair when outracing a plane or speeding bullet was done by Cornell students. According to their calculations, Superman’s hair (or his hair gel) must withstand nearly 80,000 Newtons of force. That’s a bit less than the typical force experienced by a restrained passenger in a car crash at highway speeds.

In grad school, my labmates and I held a spirited debate about the difference in drag Superman would experience when flying at hypersonic speeds depending on whether he had one or both arms extended in front of him. Sadly, we never found the chance to test our hypotheses in the wind tunnel. (Image and video credit: R. Geltman et al.)

Superman races to the rescue.
► A Dance of Hydrogen Bubbles
  17 Jan, 2020

Hydrogen bubbles rise off zinc submerged in hydrochloric acid in this short film from the Beauty of Science team. In high-speed video, the rise of the bubbles is stately and mesmerizing. Notice how the smallest bubbles appear as perfect spheres; for them, surface tension is strong enough to maintain that spherical shape even against the viscous drag of their buoyant rise. Larger bubbles, formed from mergers both seen and unseen, have a harder time staying round. In them, surface tension must battle gravitational forces and drag from the surrounding fluid. (Image and video credit: Beauty of Science; via Laughing Squid)

► Tapping a Can Won’t Save Your Beer
  16 Jan, 2020

It happens to the best of us: sometimes our beer gets shaken up during transit. One common reaction to this is to tap the side of the can repeatedly before opening, but a new scientific study shows that tapping doesn’t affect the volume of beer lost. Danish scientists tested over 1,000 cans of beer in randomized combinations of shaken, unshaken, tapped, and untapped, and observed no difference between tapped and untapped cans.

The foam-up upon opening takes place in shaken beer because carbon dioxide bubbles form in the pressurized beer, especially along defects in the wall where bubbles can nucleate. When the pressure is released, the carbon dioxide becomes supersaturated and comes out of solution, especially into the pre-formed bubbles, which rapidly grow and overflow. In theory, tapping could disturb those bubbles before opening, but in practice, it makes no difference. Your best bet? Give the beer time to settle before you open it. (Image credit: Q. Dombrowski; research credit: E. Sopina et al.; via Ars Technica)

Symscape

► CFD Simulates Distant Past
  25 Jun, 2019

There is an interesting new trend in the use of Computational Fluid Dynamics (CFD). Until recently, CFD simulation was focused on existing and future things (think flying cars). Now we see CFD being applied to simulate fluid flow in the distant past (think fossils).

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation


► Background on the Caedium v6.0 Release
  31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature


► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long-extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)


► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs or a mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment, researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward Direction (illustration only, not part of the study)


► Wind Turbine Design According to Insects
  14 Jun, 2017

Insects - some of nature's smallest aerodynamic specialists - have provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (License: CC BY-SA 2.5, André Karwath)


► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2 hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt


CFD Online

► Quick notes on testing optimization flags with OpenFOAM et al
  21 Jan, 2020
Greetings to all!

Tobi sent me an email earlier today related to this, and I might as well leave a public note to share with everyone the suggestions I had... so this is a quick copy-paste-adapt for future reference, until I or anyone else gets around to writing this up properly.

I have no idea yet for the current generation of Ryzen CPUs (Ryzen 3000 series), but I do know of this report for EPYC:
If you look at Table 5, you will see the options they suggest for GCC/G++.

However, the "znver1" architecture is possibly not the best for this generation of Ryzen/Threadripper... there is an alternative, which is to use:
-march=native -mtune=native
It will only work properly with a recent GCC version for the more recent CPUs.

Beyond this, it might take some trial and error. Some guidelines are given here:

You can use the following strategy to test various builds with different optimization flags:
  1. Go into the folder "wmake/rules/linux64Gcc"
  2. Copy the files "cOpt" and "c++Opt" to another name, for example: "cOptNative" and "c++OptNative"
  3. In the copied files, update the "cOPT" and "c++OPT" lines with the optimization flags you want to test.
  4. Create a new alias in your ".bashrc" file for this, for example:
    alias ofdevNative='source $HOME/OpenFOAM/OpenFOAM-dev/etc/bashrc WM_COMPILE_OPTION=OptNative'
  5. Start a new terminal and activate this alias ofdevNative.
  6. Then run ./Allwmake inside "OpenFOAM-dev".
  7. Repeat the same strategy with other names, so that you can keep several builds with small changes to the optimization flags.

Warning: Last time I checked, AVX and AVX2 are not used by OpenFOAM, so don't bother with them.

Best regards,
► Mixing of Ammonia and Exhaust
  16 Aug, 2019
Dear Foamers,

In my thesis I worked with static mixers.
If you would like to see my case, you can see it here.
Feel free to ask!
► Determination of mixing quality/ uniformity index
  16 Aug, 2019
Dear guys,

For a long time I had problems determining the mixing quality of a mixing line. Now I've come across a usable formula, which I would like to share with you.
It is the degree of uniformity, also called the uniformity index.
The calculation is cell-based:
U = 1 - (SUM^{N}_{i=1} |Ci - Cm|) / (2*N*Cm)
with N cells,
the concentration Ci of cell i,
and the arithmetic mean Cm:
Cm = (SUM^{N}_{i=1} Ci) / N
Note the absolute value in the sum; without it the deviations cancel to zero by definition of the mean.
The easiest way is to export the cells with concentration of the considered region (outlet) and create an Excel file.
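Alternatively, the same cell-based calculation can be sketched in a few lines of Python (a minimal illustration, not from the original post; the sample concentrations are made up):

```python
def uniformity_index(c):
    """Degree of uniformity U for a list of cell concentrations c."""
    n = len(c)
    cm = sum(c) / n  # arithmetic mean concentration
    # U = 1 - sum(|Ci - Cm|) / (2*N*Cm); U = 1 means perfectly uniform
    return 1 - sum(abs(ci - cm) for ci in c) / (2 * n * cm)

print(uniformity_index([1.0, 1.0, 1.0, 1.0]))  # uniform field -> 1.0
print(uniformity_index([2.0, 0.0]))            # strongly segregated -> 0.5
```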
An example is shown in my public dropbox:
Greetings Philipp
► Connecting Fortran with VTK - the MPI way
  24 May, 2019
I wrote a couple of small programs, in Fortran and C++ respectively, as a proof of concept for connecting a Fortran program to a sort of visualization server based on VTK. The nice thing is that it uses MPI for the connection, so on the Fortran side there is nothing new or scary.

The code (you can find it at ...) and the idea closely follow a similar example in Using Advanced MPI by W. Gropp et al., but make it more concrete by adding actual visualization based on VTK.

Of course, this is just a proof of concept, and nothing really interesting is visualized (just a cylinder with parameters passed from the Fortran side), but it is intended as an example to adapt to particular use cases (the VTK code itself is taken from ..., where a lot of additional examples are present).
► Direct Numerical Simulation on a wing profile
  14 May, 2019

A one-billion-point DNS (Direct Numerical Simulation) of a NACA4412 profile at 5 degrees angle of attack. The Reynolds number is 350,000 based on the airfoil chord and the Mach number is 0.117. The upper and lower turbulent boundary layers are tripped at 15% and 50% chord respectively by roughness elements, evenly spaced in the boundary layer, created by a zonal immersed boundary condition (Journal of Computational Physics, Volume 363, 15 June 2018, Pages 231-255). The spanwise extent is 0.3 chord. The computation was performed on a structured multiblock mesh with the FastS compressible flow solver developed by ONERA on 1064 MPI cores. The video shows the early stages of the calculation (equivalent to 40,000 time steps), highlighting the spatial development of fine-scale turbulence in both the attached boundary layer and the free wake. Post-processing and flow images were made with Cassiopée.
► NACA4 airFoils generator
  20 Feb, 2019

Generates a 3D model for NACA 4-digit airfoils.
Attached: airfoilWinger.png

curiosityFluids

► Creating curves in blockMesh (An Example)
  29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

y=H*\sin\left(\pi x \right)

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
    (-1 0 0)    // 0
    (0 0 0)     // 1
    (1 0 0)     // 2
    (2 0 0)     // 3
    (-1 2 0)    // 4
    (0 2 0)     // 5
    (1 2 0)     // 6
    (2 2 0)     // 7

    (-1 0 1)    // 8
    (0 0 1)     // 9
    (1 0 1)     // 10
    (2 0 1)     // 11
    (-1 2 1)    // 12
    (0 2 1)     // 13
    (1 2 1)     // 14
    (2 2 1)     // 15
);

blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

boundary
(
    // NOTE: the patch names below are reconstructed (the originals were
    // lost in extraction); the face lists and types are as published.
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    lowerWall
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    upperWall
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary that is just a list of interpolation points:

edges
(
    polyLine 1 2
    (
        (0      0               0)
        (0.1    0.0309016994    0)
        (0.2    0.0587785252    0)
        (0.3    0.0809016994    0)
        (0.4    0.0951056516    0)
        (0.5    0.1             0)
        (0.6    0.0951056516    0)
        (0.7    0.0809016994    0)
        (0.8    0.0587785252    0)
        (0.9    0.0309016994    0)
        (1      0               0)
    )

    polyLine 9 10
    (
        (0      0               1)
        (0.1    0.0309016994    1)
        (0.2    0.0587785252    1)
        (0.3    0.0809016994    1)
        (0.4    0.0951056516    1)
        (0.5    0.1             1)
        (0.6    0.0951056516    1)
        (0.7    0.0809016994    1)
        (0.8    0.0587785252    1)
        (0.9    0.0309016994    1)
        (1      0               1)
    )
);

The sub-dictionary above is just a list of points on the curve y=H\sin(\pi x). The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.
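The interpolation points need not be typed by hand. A short Python sketch (illustrative, not part of the original post) generates the same table for any bump height H:

```python
import math

# Points for the bump y = H*sin(pi*x), x in [0, 1], matching the
# polyLine tables above (H = 0.1; set z = 1 for the back-plane edge).
H = 0.1
z = 0
for i in range(11):
    x = i / 10
    y = H * math.sin(math.pi * x)
    print(f"({x:g}\t{y:.10f}\t{z})")
```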

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!


This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
  28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, shadowgraph has no directionality and shows you the Laplacian of the refractive index field (or density field).

In this post, I’ll use a simple case I did previously ( as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

So how do we create these images in paraview?

Well as you might expect, from the introduction, we simply do this by visualizing the gradients of the density field.

In ParaView the necessary tool for this is:

Gradient of Unstructured DataSet:

Finding “Gradient of Unstructured DataSet” using the Filters-> Search

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

Change the “Scalar Array” Drop down to the density field (rho), and change the name to Synthetic Schlieren

To do this, simply set the “Scalar Array” to the density field (rho), and change the name of the result Array name to SyntheticSchlieren. Now you should see something like this:

This is NOT a synthetic Schlieren Image – but it sure looks nice

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change the colormap to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

Horizontal Knife Edge

Vertical Knife Edge

Now how about Shadowgraph?

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

\nabla^2 = \nabla \cdot \nabla

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, Deselect “Compute Gradient” and the select “Compute Divergence” and change the Divergence array name to Shadowgraph.

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Shadowgraph Image
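The same construction is easy to sanity-check outside of ParaView with NumPy (a sketch on a synthetic density field, not tied to the ParaView pipeline; rho can be any 2D array sampled from your solution):

```python
import numpy as np

def schlieren_and_shadowgraph(rho, dx=1.0, dy=1.0):
    """Return (drho/dx, drho/dy, Laplacian) of a 2D density field."""
    drho_dy, drho_dx = np.gradient(rho, dy, dx)   # Schlieren components
    d2x = np.gradient(drho_dx, dx, axis=1)        # divergence of the gradient
    d2y = np.gradient(drho_dy, dy, axis=0)        # ... is the Laplacian
    return drho_dx, drho_dy, d2x + d2y            # last term: Shadowgraph

# Check on rho = x^2 + y^2, whose Laplacian is exactly 4 everywhere:
x = np.linspace(0.0, 1.0, 51)
X, Y = np.meshgrid(x, x)
_, _, lap = schlieren_and_shadowgraph(X**2 + Y**2, dx=x[1]-x[0], dy=x[1]-x[0])
print(round(float(lap[25, 25]), 6))  # interior value, approximately 4
```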

So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is…. not much. Physically, we know exactly what these mean, these are: Schlieren is the gradient of the density field in one direction and Shadowgraph is the laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, brightness of the light etc. all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
  24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post:

The law is given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}

In order to use these equations, obviously, you need to know the coefficients. Here, I'm going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.

So why would you do this? Basically, there are two main reasons for this. First, if you are not using air, the Sutherland coefficients can be hard to find. If you happen to find them, they can be hard to reference, and you may not know how accurate they are. So creating your own Sutherland coefficients makes a ton of sense from an academic point of view. In your thesis or paper, you can say that you created them yourself, and not only that you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of Nitrogen N2 – and we can’t find the coefficients anywhere – or for the second reason above, you’ve decided its best to create your own.

By far the simplest way to achieve this is using Python and the Scipy.optimize package.

Step 1: Get Data

The first step is to find some well known, and easily cited, source for viscosity data. I usually use the NIST webbook (, but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:

Temperature (K) Viscosity (Pa.s)
400 0.000022217
600 0.000029602
800 0.000035932
1000 0.000041597
1200 0.000046812
1400 0.000051704
1600 0.000056357
1800 0.000060829
2000 0.000065162

This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in these ranges viscosity should be only temperature dependent.)

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but Python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the Sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next we input the data from the table above:

T = np.array([400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
mu = np.array([0.000022217, 0.000029602, 0.000035932, 0.000041597,
               0.000046812, 0.000051704, 0.000056357, 0.000060829,
               0.000065162])

Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least-squares minimization to solve for the unknown coefficients. It returns two values: popt, an array containing our desired variables As and Ts, and pcov, the estimated covariance of those coefficients.

popt, pcov = curve_fit(sutherland, T, mu)

Now we can just output our data to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

plt.plot(T, mu, 'o')
plt.plot(T, sutherland(T, *popt))
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T = np.array([400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000])
mu = np.array([0.000022217, 0.000029602, 0.000035932, 0.000041597,
               0.000046812, 0.000051704, 0.000056357, 0.000060829,
               0.000065162])

popt, pcov = curve_fit(sutherland, T, mu)
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

plt.plot(T, mu, 'o')
plt.plot(T, sutherland(T, *popt))
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!


In this post, we looked at how we can simply use a database of viscosity-temperature data and the Python package scipy to solve for our unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, which was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.

This task could also easily be accomplished using the Matlab curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
  23 Apr, 2019

The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed "steep learning curve". I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep with any other software.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is equally likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but it's not to some. Troubleshooting bad simulation results, or unstable simulations that crash, is impossible if you don't have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish

(b) An introduction to computational fluid dynamics – the finite volume method – by H K Versteeg and W Malalasekera

(c) Computational fluid dynamics – the basics with applications – By John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion, but I have found this to be true. Yes, it's true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can't really speak to how well they work - mostly because I've never bothered. Once you unlock the power of Linux, the only reason to use Windows is Microsoft Office (I guess unless you're a gamer - and even then, more and more games are now on Linux). Not only that, the VAST majority of forums and troubleshooting associated with OpenFOAM that you'll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu – plus a windows VirtualBox, plus a laptop running windows that I use for traditional windows type stuff. Dual booting is another option, but seamlessly moving between the environments is easier.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks with your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:
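Separately from the scheme-manipulation reference above, it helps to know what checkMesh's non-orthogonality number actually measures: the angle between a face's normal and the line connecting the two adjacent cell centres. A minimal NumPy sketch (the function name is mine, for illustration only):

```python
import numpy as np

def non_orthogonality_deg(owner_centre, neighbour_centre, face_normal):
    """Angle in degrees between the face normal and the owner-to-neighbour
    cell-centre vector. 0 means a perfectly orthogonal face; meshes with
    values approaching ~70 degrees usually need non-orthogonal corrections."""
    d = np.asarray(neighbour_centre, float) - np.asarray(owner_centre, float)
    n = np.asarray(face_normal, float)
    cos_theta = np.dot(d, n) / (np.linalg.norm(d) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# A perfectly orthogonal face:
print(non_orthogonality_deg([0, 0, 0], [1, 0, 0], [1, 0, 0]))  # 0.0
# Skewed cells:
print(round(non_orthogonality_deg([0, 0, 0], [1, 1, 0], [1, 0, 0]), 1))  # 45.0
```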

(8) CFL Number Matters

If you are running a transient case, the Courant-Friedrichs-Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time-step is too large you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the timestep by a factor of 2. More often than not, this solves the problem.
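To make the rule of thumb concrete, the convective Courant number for a cell is CFL = |u|·Δt/Δx, and the halve-the-timestep workflow can be sketched in a few lines (the function names are mine, for illustration):

```python
def cfl(u, dt, dx):
    """Convective Courant number: CFL = |u| * dt / dx."""
    return abs(u) * dt / dx

def stable_dt(u, dx, dt, max_cfl=1.0):
    """Halve the time step until the Courant number drops below max_cfl,
    mimicking the crash-recovery workflow described above."""
    while cfl(u, dt, dx) > max_cfl:
        dt /= 2.0
    return dt

# 10 m/s through 1 mm cells with dt = 1e-3 s gives CFL around 10,
# so the time step must be halved four times:
dt = stable_dt(10.0, 1e-3, 1e-3)
print(dt)  # 6.25e-05
```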

For large time steps, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

For the record, this point falls into point (1), Understanding CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

If you are a graduate student, and have no job other than to learn OpenFOAM, it will not take 3 weeks. The series touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude, from the get-go, that they should be using different software. They think that somehow open-source means it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open-source package. The number of OpenFOAM citations has grown consistently every year.

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it's worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren't good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program, which is silly, and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software, and it is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post, most things can be accomplished in OpenFOAM, and there are enough third-party meshing programs out there that you should have no problem.


Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that's a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software and owner of the OPENFOAM® and OpenCFD® trade marks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
  22 Apr, 2019
Airfoil Mesh Generated with

Here I will present something I've been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you're like me (who knows if you are), I simulate a lot of airfoils: partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain, especially if you are starting from scratch.

The main ways that I have meshed airfoils to date have been:

(a) Mesh it in a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a "ribbon" geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise (oh how I miss it).

But getting the mesh to look good was always sort of tedious. So I attempted to come up with a Python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(e) be mostly automatic (few user inputs)
(f) have good mesh quality – pass all checkMesh tests
(g) Quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(h) be able to do both closed and open trailing edges
(i) be able to handle most airfoils (up to high cambers)
(j) automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) thru (g). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!

Hopefully, this is useful to some of you out there!


You can download the script here:

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.


(1) Copy to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3
(5) If no errors – run blockMesh

You need to run this with Python 3, and you need to have NumPy installed.


The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length, if not equal to 1. The airfoil .dat file should have a chord length of 1; this variable allows you to scale the domain to a different size.

airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the required size, you can use the curiosityFluids y+ calculator.

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.
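For firstLayerHeight, a quick estimate can be made from a target y+ and a flat-plate skin friction correlation. The sketch below uses Cf = 0.026·Re^(-1/7), which is one common choice; I am not claiming it is the exact correlation behind the curiosityFluids calculator:

```python
import math

def first_layer_height(u_inf, rho, mu, chord, y_plus=1.0):
    """Estimate the first cell height for a target y+ using the
    flat-plate correlation Cf = 0.026 * Re**(-1/7) (an assumption
    for illustration, not necessarily the calculator's formula)."""
    re = rho * u_inf * chord / mu
    cf = 0.026 * re ** (-1.0 / 7.0)
    u_tau = math.sqrt(0.5 * cf) * u_inf   # friction velocity u_tau = U*sqrt(Cf/2)
    return y_plus * mu / (rho * u_tau)

# Air at 30 m/s over a 1 m chord, targeting y+ = 1:
print(f"{first_layer_height(30.0, 1.225, 1.8e-5, 1.0):.2e} m")  # on the order of 1e-5 m
```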

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge

TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading, and can help improve mesh uniformity.

trailingBlockAngle: This is an angle in degrees that sets the angle of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
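To see how firstLayerHeight, growthRate and BLHeight interact, note that the total height of n geometrically growing layers is the geometric series h1·(g^n - 1)/(g - 1), so the layer count needed to fill the boundary-layer block is easy to back out. A hypothetical helper, not part of the Rev 1 script itself:

```python
import math

def n_layers(first_height, growth, bl_height):
    """Smallest n with first_height * (growth**n - 1) / (growth - 1)
    >= bl_height, i.e. how many layers fill the boundary-layer block."""
    if growth == 1.0:
        return math.ceil(bl_height / first_height)
    n = math.log(1.0 + bl_height * (growth - 1.0) / first_height) / math.log(growth)
    return math.ceil(n)

# A 1e-5 c first layer growing at 1.2 fills a 0.05 c block in:
print(n_layers(1e-5, 1.2, 0.05))  # 38 layers
```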


12% Joukowski Airfoil


With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in ParaView:

Clark-Y Airfoil

The Clark-Y has some camber, so I thought it would be a logical next test after the previous symmetric one. The inputs I used are basically the same as for the previous airfoil:

With these inputs, the result looks like this:

Mesh Quality:

Visualizing the mesh quality:

MH60 – Flying Wing Airfoil

Here is an example of a flying wing airfoil (tested since the trailing edge is tilted upwards).


Again, these are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change the MaxCellSize, firstLayerHeight, and gradings, some modification may be required. However, if you just halve the MaxCellSize and halve the firstLayerHeight, you "should" get a similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality


Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will be able to handle highly cambered airfoils, open trailing edges, control surface hinges, etc.

The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!

Comments and bug reporting encouraged!

DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify it how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software and owner of the OPENFOAM® and OpenCFD® trademarks.

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.

If you found this useful, and have the need for more, visit STF Solutions. One of STF Solutions' specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or their suitability or outcome for any given purpose.

Hanley Innovations top

► Accurate Aerodynamics with Stallion 3D
  17 Aug, 2019

Stallion 3D is an extremely versatile tool for 3D aerodynamics simulations.  The software solves the 3D compressible Navier-Stokes equations using novel algorithms for grid generation, flow solutions and turbulence modeling. 

The proprietary grid generation and immersed boundary methods find objects arbitrarily placed in the flow field and then automatically place an accurate grid around them without user intervention. 

Stallion 3D's algorithms are fine-tuned to analyze inviscid flow with minimal losses. The above figure shows the surface pressure of the BD-5 aircraft (obtained from the OpenVSP hangar) using the compressible Euler algorithm.

Stallion 3D solves the Reynolds Averaged Navier-Stokes (RANS) equations using a proprietary implementation of the k-epsilon turbulence model in conjunction with an accurate wall function approach.

Stallion 3D can be used to solve problems in aerodynamics about complex geometries in subsonic, transonic and supersonic flows.  The software computes and displays the lift, drag and moments for complex geometries in the STL file format.  Actuator discs (up to 100) can be added to simulate prop wash for propeller and VTOL/eVTOL aircraft analysis.

Stallion 3D is a versatile and easy-to-use software package for aerodynamic analysis.  It can be used for computing performance and stability (both static and dynamic) of aerial vehicles including drones, eVTOL aircraft, light airplanes and dragons (above graphics via Thingiverse).

More information about Stallion 3D can be found at:

► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse

Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only an STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator discs (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2D plots of Cp and other quantities along constant coordinate lines on the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017

Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.

More information about the software can be found at the following url:

Thanks for reading.

► Your In-House CFD Capability
  15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company, just at the push of a button?  Stallion 3D gives you this very power using your MS Windows laptop or desktop computer. The software provides accurate CL, CD, & CM numbers directly from CAD geometries without the need for user grid generation and costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC.  It utilizes the power that is hidden in your personal computer (64-bit & multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, RANS) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of a turbulent flow field.

Unlike other CFD software that requires you to purchase grid generation software (and spend days generating a grid), grid generation is automatic and is included within Stallion 3D.  Results are often obtained within a few hours of opening the software.

 Do you need to analyze upwind and downwind sails?  Do you need data for wings and ship stabilizers at angles of 10, 40, 80, 120 degrees and beyond? Do you need accurate lift, drag & temperature predictions in subsonic, transonic and supersonic flows? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

Tutorials, videos and more information about Stallion 3D version 4.0 can be found at:

If you have any questions about this article, please call me at (352) 261-3376 or visit

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. degree in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

► Avoid Testing Pitfalls
  24 Jan, 2017

The only way to know if your idea will work is to test it.  Rest assured, as a design engineer your ideas and designs will be tested over and over again, often in front of a crowd of people.

As an aerodynamics design engineer, Stallion 3D helps you to avoid the testing pitfalls that would otherwise keep you awake at night. An advantage of Stallion 3D is that it enables you to test your designs in the privacy of your laptop or desktop before your company actually builds a prototype.  As someone who uses Stallion 3D for consulting, I find it very exciting to see my designs flying the way they were simulated in the software. Stallion 3D will assure that your creations are airworthy before they are tested in front of a crowd.

I developed Stallion 3D for engineers who have an innate love and aptitude for aerodynamics but who do not want to deal with the hassles of standard CFD programs.  Innovative technologies should always take a few steps out of an existing process to make the journey more efficient.  Stallion 3D enables you to skip the painful step of grid (mesh) generation. This reduces your workflow to just a few seconds to set up and run a 3D aerodynamics case.

Stallion 3D helps you to avoid the common testing pitfalls.
1. UAV instabilities and takeoff problems
2. Underwhelming range and endurance
3. Pitch-up instabilities
4. Incorrect control surface settings at launch and level flight
5. Not enough propulsive force (thrust) due to excess drag and weight.

Are the results of Stallion 3D accurate?  Please visit the following page to see the latest validations.

If you have any questions about this article, please call me at (352) 261-3376 or visit

► Flying Wing UAV: Design and Analysis
  15 Jan, 2017

3DFoil is design and analysis software for wings, hydrofoils, sails and other aerodynamic surfaces. It requires a computer running MS Windows 7, 8 or 10.

I wrote the 3DFoil software several years ago using a vortex lattice approach. The vortex lattice method in the code is based on vortex rings (as opposed to the horseshoe vortex approach).  The vortex ring method allows for wing twist (geometric and aerodynamic), so a designer can fashion the wing for drag reduction and prevent tip stall by optimizing the amount of washout.  The approach also allows sweep (backwards & forwards) and multiple dihedral/anhedral angles.
Another feature that I designed into 3DFoil is the capability to predict profile drag and stall. This is done by analyzing the wing cross sections with a linear-strength vortex panel method and an ordinary differential equation boundary layer solver.   The software utilizes the solution of the boundary layer solver to predict the locations of the transition and separation points.

The following video shows how to use 3DFoil to design and analyze a flying wing UAV aircraft. 3DFoil's user interface is based on the multi-surface approach. In this method, the wing is designed using multiple tapered surfaces, where the designer can specify airfoil shapes, sweep, dihedral angles and twist. With this approach, the designer can see the contribution to the lift, drag and moments for each surface.  Towards the end of the video, I show how the multi-surface approach is used to design effective winglets by comparing the profile drag and induced drag generated by the winglet surfaces. The video also shows how to find the longitudinal and lateral static stability of the wing.

The following steps are used to design and analyze the wing in 3DFoil:
1. Input the dimensions and sweep of half of the wing (half span)
2. Input the dimensions and sweep of the winglet.
3. Join the winglet and main wing.
4. Generate the full aircraft using the mirror image insert function.
5. Find the lift, drag and moments
6. Compute longitudinal and lateral stability
7. Look at the contributions of the surfaces.
8. Verify that the winglets provide drag reduction.

More information about 3DFoil can be found at the following url:


CFD and others... top

► What Happens When You Run a LES on a RANS Mesh?
  27 Dec, 2019

Surely, you will get garbage because there is no way your LES will have any chance of resolving the turbulent boundary layer. As a result, your skin friction will be way off. Therefore, your drag and lift will be a total disaster.

To put this point of view to the test, we recently embarked upon a numerical experiment: an implicit large eddy simulation (ILES) of the NASA CRM high-lift configuration from the 3rd AIAA High-Lift Prediction Workshop. The flow conditions are: Mach = 0.2, Reynolds number = 3.26 million based on the mean aerodynamic chord, and angle of attack = 16 degrees.

A quadratic (Q2) mesh was generated by Dr. Steve Karman of Pointwise, and is shown in Figure 1.

 Figure 1. Quadratic mesh for the NASA CRM high-lift configuration (generated by Pointwise)

The mesh has roughly 2.2 million mixed elements, and is highly clustered near the wall with an average equivalent y+ value smaller than one. A p-refinement study was conducted to assess the mesh sensitivity using our high-order LES tool based on the FR/CPR method, hpMusic. Simulations were performed with solution polynomial degrees of p = 1, 2 and 3, corresponding to 2nd, 3rd and 4th order accuracy, respectively. No wall model was used. Needless to say, the higher-order simulations captured finer turbulence scales, as shown in Figure 2, which displays iso-surfaces of the Q-criterion colored by the Mach number.

Figure 2. Iso-surfaces of the Q-criterion colored by the Mach number (panels: p = 1, 2, 3)

Clearly the flow is mostly laminar on the pressure side, and transitional/turbulent on the suction side of the main wing and the flap. Although the p = 1 simulation captured the least scales, it still correctly identified the laminar and turbulent regions. 

The drag and lift coefficients from the present p-refinement study are compared with experimental data from NASA in Table I. Although the 2nd order (p = 1) results are quite different from those of the higher orders, the 3rd and 4th order results are very close, demonstrating very good p-convergence in both the lift and drag coefficients. The lift agrees better with experimental data than the drag, bearing in mind that the experiment has wind tunnel wall effects, as well as other small instruments which are not present in the computational model.

Table I. Comparison of lift and drag coefficients with experimental data (rows: p = 1, 2, 3)

This exercise seems to contradict the common-sense logic stated at the beginning of this post. So what happened? The answer is that in this high-lift configuration, the dominant force is due to pressure, rather than friction. In fact, 98.65% of the drag and 99.98% of the lift are due to the pressure force. For such flow problems, running a LES on a RANS mesh (with sufficient accuracy) may produce reasonable predictions of drag and lift. More studies are needed to draw any definite conclusion. We would like to hear from you if you have done something similar.

This study will be presented in the forthcoming AIAA SciTech conference, to be held on January 6th to 10th, 2020 in Orlando, Florida. 

► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but dissipative for non-resolvable high wave numbers. In this way, the simulation will resolve a wide turbulent spectrum, while damping out the non-resolvable small eddies to prevent energy pile-up, which can drive the simulation divergent.

We want to emphasize the equal importance of both numerical dissipation and dispersion, which can be generated from both the space and time discretizations. It is well-known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., explicit Runge-Kutta schemes.     
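The spatial part of this statement can be checked with a modified-wavenumber analysis: apply the derivative stencil to exp(ikx) and read off the effective wavenumber, whose imaginary part is the spatial dissipation. A small NumPy sketch, using 2nd- and 1st-order stencils for brevity (the same analysis extends to the 6th-order schemes compared next):

```python
import numpy as np

def modified_wavenumber(coeffs, theta):
    """Effective (modified) wavenumber k*dx for a first-derivative
    stencil given as {offset: coefficient}, applied to exp(i*k*x).
    Im(k*dx) = 0 means no spatial dissipation; Im(k*dx) < 0 is dissipative."""
    s = sum(a * np.exp(1j * j * theta) for j, a in coeffs.items())
    return -1j * s

theta = np.linspace(0.01, np.pi, 50)   # resolvable wavenumber range
central2 = {-1: -0.5, 1: 0.5}          # 2nd-order central difference
upwind1 = {-1: -1.0, 0: 1.0}           # 1st-order upwind difference

print(np.max(np.abs(modified_wavenumber(central2, theta).imag)))   # ~0: no dissipation
print(bool(np.all(modified_wavenumber(upwind1, theta).imag < 0)))  # True: dissipative
```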

We recently analysed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, with the same time integration approach (a Runge-Kutta scheme) and the same time step.  The FCD schemes have an 8th-order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees of freedom (DOFs) in Figure 1.  The initial condition is a Gaussian profile, and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the differences.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error from the central FD scheme caused it to miss the peak, and also generate large errors elsewhere.

Finally, simulation results with the viscous Burgers' equation are shown in Figure 2, which compares the energy spectrum computed with various schemes against that of the direct numerical simulation (DNS).

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme, with a significant high-wave-number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. It is obvious that the upwind-biased FD scheme out-performed the central FD scheme, since it resolved the same range of wave numbers without the energy pile-up.

► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease in mesh generation, and load balancing on parallel architectures. DG and related methods such as SD and FR/CPR have received much attention because of their geometric flexibility and scalability. They have matured to become quite robust for a wide range of applications. 

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes, at a transonic condition with a Reynolds number of more than 1M. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs) and costs about 1/3 the CPU time of the 2nd order simulation with the commercial solver, which has 28.7M DOFs. Furthermore, the 3rd order simulation is much more accurate, as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster at achieving a similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try on your problems, and you may be surprised. Academic and trial uses are completely free. Just visit to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
  17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to computing turbulence: 1) the Reynolds-averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the direct numerical simulation (DNS) approach, in which all scales are resolved; 3) the large eddy simulation (LES) approach, in which the large scales are computed while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved widespread success in engineering design, some applications call for LES, e.g., flow at high angles of attack. The spatial filtering of a non-linear PDE results in a SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress to the rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a rate that is physically correct. Later, an improved version called the dynamic Smagorinsky model was developed by Germano et al., and demonstrated much better results.
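For reference, the Smagorinsky model sets the eddy viscosity to nu_t = (Cs*Delta)^2 * |S|, with |S| = sqrt(2*S_ij*S_ij) built from the resolved rate-of-strain tensor. A minimal sketch of that formula (Cs = 0.17 is a commonly quoted constant and is an assumption here; the dynamic model instead computes the coefficient from the resolved field):

```python
import numpy as np

def smagorinsky_nut(grad_u, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs * delta)**2 * |S|,
    where S is the symmetric part of the resolved velocity gradient
    and |S| = sqrt(2 * S_ij * S_ij)."""
    g = np.asarray(grad_u, dtype=float)   # 3x3 velocity gradient tensor
    s = 0.5 * (g + g.T)                   # rate-of-strain tensor
    s_mag = np.sqrt(2.0 * np.sum(s * s))
    return (cs * delta) ** 2 * s_mag

# Simple shear du/dy = 100 1/s with a 1 mm filter width:
grad_u = [[0.0, 100.0, 0.0],
          [0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0]]
print(smagorinsky_nut(grad_u, 1e-3))  # (0.17e-3)**2 * 100, about 2.9e-06
```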

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme contains no numerical dissipation; however, time integration can introduce dissipation. For example, a 2nd-order central difference scheme is linearly stable when paired with the SSP RK3 scheme (subject to a CFL condition), and the combined scheme does contain numerical dissipation. When this scheme is used to perform an LES, the simulation will blow up without a SGS model because of a lack of dissipation for eddies at high wave numbers. It is easy to conclude that the LES succeeds because the SGS stress is properly modeled. A recent study with the Burgers equation strongly disputes this conclusion: it showed that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. Therefore, the role of the SGS model, in the above scenario, was to stabilize the simulation by adding numerical dissipation.
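The "no dissipation" claim for the central scheme follows from a quick Fourier (von Neumann) check. The sketch below is my own illustration, not from the post: for the 2nd-order central difference, the modified wavenumber is purely real, so the spatial operator alone shifts phase but damps nothing.

```python
import cmath
import math

# The 2nd-order central difference (u[j+1] - u[j-1]) / (2*dx), applied to a
# Fourier mode exp(i*k*x), returns i*sin(k*dx)/dx times the mode. The
# modified wavenumber k' = sin(k*dx)/dx is purely real, so the semi-discrete
# operator only introduces dispersive (phase) error, never damping.
dx = 0.1
for k in [1.0, 5.0, 10.0, 31.0]:        # sample wavenumbers up to ~pi/dx
    k_mod = math.sin(k * dx) / dx       # real-valued for every k
    growth = abs(cmath.exp(1j * k_mod)) # per-mode amplitude factor
    print(round(growth, 12))            # 1.0 each time: zero dissipation
```

Any damping in the fully discrete scheme must therefore come from the time integrator (e.g., SSP RK3) or from an added SGS/filter term, which is exactly the physics-versus-numerics ambiguity the paragraph describes.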

For numerical methods with natural dissipation at high wave numbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, an SGS model can damage the solution quality because the extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature supporting the use of implicit LES (ILES), in which the SGS stress simply vanishes. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
    3 Jan, 2017
2016 was undoubtedly an extraordinary year for long-odds events. Take sports, for example:
  • Leicester City won the Premier League in England, defying odds of 5,000 to 1
  • The Cubs won the World Series after a 108-year wait
In politics, I do not believe many people truly expected Britain to exit the EU, or Trump to become the next US president.

On a personal level, I also experienced an equally extraordinary event: the attempted coup in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016 in the historic city of Istanbul. A terror attack on the Istanbul International airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and decided to go to Istanbul to attend and support the conference. 

Given the extraordinary circumstances, the conference organizers did a fine job in pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions though three sessions were planned originally. We really enjoyed Istanbul with the beautiful natural attractions and friendly people. 

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was seized by the rebels. However, the Turkish president managed to FaceTime a private TV station, essentially turning the event around. Soon after, many people took to the bridges and squares and overpowered the rebels with bare fists.

A Tank outside my taxi

A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA had banned all direct flights from Turkey. I was lucky enough to find a new flight with a stop in Zurich...

In 2016, I also lost a very good friend and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific injury while playing tennis in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5, 2016 to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion

To close, I wish all of you a very happy 2017!

  1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.

► The Linux Version of meshCurve is Now Ready for All to Download
  20 Apr, 2016
The 64-bit version for the Linux operating system is now ready for you to download. Because of the complexities associated with various libraries, we experienced a delay of slightly more than a month. Here is the link again.

Please let us know your experience, good or bad. Good luck!

ANSYS Blog top

► How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon
  10 Oct, 2018
Illini EV Concept Team Photo at Shell Eco Marathon 2018

Weight is the enemy of all teams that design electric cars for the Shell Eco Marathon.

Reducing the weight of electric cars improves the vehicle’s acceleration and power efficiency. These performance improvements make all the difference come race day.

However, if the car’s weight is reduced too much, it could lead to safety concerns.

Illini EV Concept (Illini) is a Shell Eco Marathon team out of the University of Illinois. Team members use ANSYS academic research software to optimize the chassis of their electric car without compromising safety.

Where to Start When Reducing the Weight of Electric Cars?

Front bump composite failure under a load of 2000N.

The first hurdle of the Shell Eco Marathon is an initial efficiency contest. Only the best teams from this efficiency assessment even make it into the race.

Therefore, Illini concentrates on reducing the most weight in the shortest amount of time to ensure it makes it to the starting line.

Illini notes that its focus is on reducing the weight of its electric car’s chassis.

“The chassis is by far the heaviest component of our car, so ANSYS was used extensively to help design our first carbon fiber monocoque chassis,” says Richard Mauge, body and chassis leader for Illini.

“Several loading conditions were tested to ensure the chassis was stiff enough and the carbon fiber did not fail using the composite failure tool,” he adds.

Competition regulations ensure the safety of all team members. These regulations state that each team must prove that their car is safe under various conditions. Simulation is a great tool to prove a design is within safety tolerances.

“One of these tests included ensuring the bulkhead could withstand a 700 N load in all directions, per competition regulations,” says Mauge. If the teams’ electric car designs can’t survive this simulation come race day, then their cars are not racing.

Iterate and Optimize the Design of Electronic Cars with Simulation

Front bump deformation under a load of 2000N.

Simulations can do more than prove a design is safe. They can also help to optimize designs.

Illini uses what it learns from simulation to optimize the geometry of its electric car’s chassis.

The team found that its new design increases torsional rigidity by around 100 percent, even after a 15 percent decrease in weight compared to last year's model.

“Simulations ensure that the chassis is safe enough for our driver. It also proved that the chassis is lighter and stiffer than ever before. ANSYS composite analysis gave us the confidence to move forward with our radical chassis redesign,” notes Mauge.

The optimization story continues for Illini. The team plans to explore easier and more cost-effective ways to manufacture carbon fiber parts. For instance, it wants to replace the core of its parts with foam and increase the number of bonded pieces.

If team members just go with their gut on these hunches, they could find themselves scratching their heads when something goes wrong. However, with simulations, the team makes better informed decisions about its redesigns and manufacturing process.

To get started with simulation, try our free student download. For student teams that need to solve in-depth problems, check out our software sponsorship program.

The post How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon appeared first on ANSYS.

► Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers
    9 Oct, 2018
This engine intake simulation was post-processed using EnSight Enterprise. This allowed for the processing of a large data set to be shared among servers.

Simulation data sets have a funny habit of ballooning as engineers move through the development cycle. At some point, post-processing these data sets on a single machine becomes impractical.

Engineers can speed up post-processing by spatially or temporally decomposing large data sets so they can be post-processed across numerous servers.

The idea is to reuse the idle compute nodes that ran the solver in parallel to now run the post-processing in parallel as well.

In ANSYS 19.2, EnSight Enterprise lets you spatially or temporally decompose data sets. EnSight Enterprise is an updated version of EnSight HPC.

Post-Processing Using Spatial Decomposition

EnSight uses a client/server architecture. The client program handles the graphical user interface (GUI) and rendering operations, while the server program loads the data, creates parts, extracts features and calculates results.

If your model is too large to post-process on a single machine, you can use the spatially decomposed parallel operation to assign each spatial partition to its own EnSight Server. A good server-to-model ratio is one server for every 50 million elements.

Each EnSight Server can be located on a separate compute node on any compute resource you’d like. This allows engineers to utilize the memory and processing power of heterogeneous high-performance computing (HPC) resources for data set post-processing.

The engineers effectively split the large data set up into pieces with each piece assigned to its own compute resource. This dramatically increases the data set sizes you can load and process.
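As a rough sizing aid, the one-server-per-50-million-elements guideline above can be expressed as a tiny helper. This is my own illustrative sketch; `recommended_servers` is not part of any EnSight API.

```python
import math

def recommended_servers(num_elements: int,
                        elements_per_server: int = 50_000_000) -> int:
    """Hypothetical helper: number of EnSight Servers for spatial
    decomposition, using the rule of thumb of one server per
    50 million elements."""
    return max(1, math.ceil(num_elements / elements_per_server))

# A 120M-element model would call for 3 servers under this rule of thumb.
print(recommended_servers(120_000_000))  # 3
```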

Once you have loaded the model into EnSight Enterprise, there are no additional changes to your workflow, experience or operations.

Post-Processing Using Temporal Decomposition

Keep in mind that this decomposition concept can also be applied to transient data sets. In this case, the dataset is split up temporally rather than spatially. In this scenario, each server receives its own set of time steps.
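The temporal split can be sketched as a chunked assignment of time steps to servers. This is illustrative only; EnSight performs this partitioning internally.

```python
def partition_timesteps(num_steps: int, num_servers: int) -> list:
    """Split time step indices into contiguous, near-equal chunks,
    one chunk per server (a sketch of temporal decomposition)."""
    base, extra = divmod(num_steps, num_servers)
    chunks, start = [], 0
    for s in range(num_servers):
        size = base + (1 if s < extra else 0)  # spread the remainder
        chunks.append(list(range(start, start + size)))
        start += size
    return chunks

# 10 transient time steps across 3 servers: chunk sizes 4, 3, 3
print(partition_timesteps(10, 3))  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```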

A turbulence simulation created using EnSight Enterprise post-processing

EnSight Enterprise offers performance gains when the server operations outweigh the communication and rendering time of each time step. Since it’s hard to predict network communication or rendering workloads, you can’t easily create a guiding principle for the server-to-model ratio.

However, you might want to use a few servers when your model has more than 10 million elements and over a hundred time steps. This will help keep the processing load of each server to a moderate level.

How EnSight Speeds Up the Post-Processing of Large Simulation Data Sets

Here is another tip to ensure optimal post-processing in EnSight Enterprise: engineers achieve the best performance gains by pre-decomposing the data and storing it locally on the compute resources they anticipate using. Ideally, this data should be in EnSight Case format.

To learn more, check out EnSight or register for the webinar Analyze, Visualize and Communicate Your Simulation Data with ANSYS EnSight.

The post Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers appeared first on ANSYS.

► Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing
    8 Oct, 2018

Your design team will make informed decisions about the products they create when they bring detailed simulations up front in the development cycle.

The 19.2 release of ANSYS Discovery AIM addresses this need for early simulation by streamlining templates for physics-aware meshing and rapid results.

High-Fidelity Simulation Through Physics-Aware Meshing

Discovery AIM user interface with a solution fidelity slide bar (top left), area of interest marking tool (left, middle), manual mesh controls (bottom, center) and a switch to turn the mesh display on and off (right, top).

Analysts have likely told your design team about the importance of a quality mesh to achieve accurate simulation results.

Creating high-quality meshes takes time and specialized training. Your design team likely doesn't have the time or patience to learn this art.

To account for this, Discovery AIM automatically incorporates physics-aware meshing behind the scenes. In fact, your design team doesn’t even need to see the mesh creation process to complete the simulation.

This workflow employs several meshing best practices analysts typically use. The tool even accounts for areas that require mesh refinements based on the physics being assessed.

For instance, areas with a sliding contact gain a finer mesh so the sliding behavior can be accurately simulated. Additionally, areas near the walls of fluid-solid interfaces are also refined to ensure this interaction is properly captured. Physics-aware meshing ensures small features and areas of interests won’t get lost in your design team’s simulation.

The simplified meshing workflow also lets your design team choose their desired solution fidelity. This input will help the software balance the time the solver takes to compute results with the accuracy of the results.

Though physics-aware meshing creates the mesh under the hood of the simulation process, the tool still allows user control of the mesh. This way, if your design team chooses to dig into the meshing details, or an analyst decides to step in, they can finely tune the mesh.

Capabilities like this further empower designers as techniques and knowledge traditionally known only by analysts are automated in an easy-to-use fashion.

Gain Rapid Results in Important Areas You Might Miss

The 19.2 release of Discovery AIM improves your design team's ability to explore simulation results.

Many analysts will know instinctively where to focus their post-processing, but without this experience, designers may miss areas of interest.

Discovery AIM enables the designer to interactively explore and identify these critical results. These initial results are rapidly displayed as contours, streamlines or field flow lines.

Field flow and streamlines for an electromagnetics simulation

Once your design team finds locations of interest within the results, they can create higher fidelity results to examine those areas of interest in further detail. Designers can then save the results and revisit them when comparing design points or after changing simulation inputs.

To learn more about other changes to Discovery AIM — like the ability to directly access fluid results — watch the Discovery AIM 19.2 release recorded webinar or take it for a test drive.

The post Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing appeared first on ANSYS.

► Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer
    5 Oct, 2018
Traditional chemotherapy can often be blocked by a tumor’s stroma.

Traditional chemotherapy can often be blocked by a tumor’s stroma.

There are few illnesses as crafty as pancreatic cancer. It spreads like weeds and resists chemotherapy.

Pancreatic cancer is often asymptomatic, has a low survival rate and is often misdiagnosed as diabetes. And, this violent killer is almost always inoperable.

The pancreatic tumor’s resistance to chemotherapy comes from a shield of supporting connective tissue, or stroma, which it builds around itself.

Current treatments attempt to overcome this defense by increasing the dosage of intravenously administered chemotherapy. Sadly, this rarely works, and the high dosage is exceptionally hard on patients.

Nonetheless, doctors need a way to shrink these tumors so that they can surgically remove them without risking the numerous organs and vasculature around the pancreas.

“We say if you can’t get the drugs to the tumor from the blood, why not get it through the stroma directly?” asks William Daunch, CTO at Advanced Chemotherapy Technologies (ACT), an ANSYS Startup Program member. “We are developing a medical device that implants directly onto the pancreas. It passes drugs through the organ, across the stroma to the tumor using iontophoresis.”

By treating the tumor directly, doctors can theoretically shrink the tumor to an operable size with a smaller dose of chemotherapy. This should significantly reduce the effects of the drugs on the rest of the patient’s body.

How to Treat Pancreatic Cancer with a Little Electrochemistry

Simplified diagram of the iontophoresis used by ACT’s chemotherapy medical device.

Most of the drugs used to treat pancreatic cancer are charged. This means they are affected by electromotive forces.

ACT has created a medical device that takes advantage of the medication’s charge to beat the stroma’s defenses using electrochemistry and iontophoresis.

The device contains a reservoir with an electrode. The reservoir connects via tubing to an infusion pump, which keeps the reservoir continuously filled; as long as the reservoir is full, the dosage doesn't change.

The tubes and wires are all connected into a port that is surgically implanted into the patient’s abdomen.

A diagram of ACT’s chemotherapy medical device.

The circuit is completed by a metal panel on the back of the patient.

“When the infusion pump runs, and electricity is applied, the electromotive forces push the medication into the stroma’s tissue without a needle. The medication can pass up to 10 to 15 mm into the stroma’s tissue in about an hour. This is enough to get through the stroma and into the tumor,” says Daunch.

“Lab tests show that the medical device was highly effective in treating human pancreatic cancer cells within mice,” added Daunch. “With conventional infusion therapy, the tumors grew 700 percent and with the device working on natural diffusion alone the tumors grew 200 percent. However, when running the device with iontophoresis, the tumor shrank 40 percent. This could turn an inoperable tumor into an operable one.” Subsequent testing of a scaled-up device in canines demonstrated depth of penetration and the low systemic toxicity required for a human device.

Daunch notes that the Food and Drug Administration (FDA) took notice of these results. ACT's next steps are to develop a human clinical device and move on to human safety trials.

Simulation Optimized the Fluid Dynamics in the Pancreatic Cancer Chemotherapy Implant

Before these promising tests, ACT faced a few design challenges when coming up with their chemotherapy implant.

For example, “There was some electrolysis on the electrode in the reservoir. This created bubbles that would change the electrode’s impedance,” explains Daunch. “We needed a mechanism to sweep the bubbles from the surface.”

An added challenge is that ACT never knows exactly where doctors will place the device on the pancreas. As a result, the mechanism to sweep the bubbles needs to work from any orientation.

Simulations help ACT design their medical device so bubbles do not collect on the electrode.

“We used ANSYS Fluent and ANSYS Discovery Live to iterate a series of designs,” says Daunch. “Our design team modeled and validated our work very quickly. We also noticed that the bubbles didn’t need to leave the reservoir, just the electrode.”

“If we place the electrode on a protrusion in a bowl-shaped reservoir the bubbles move aside into a trough,” explains Daunch. “The fast fluid flow in the center of the electrode and the slower flow around it would push the bubbles off the electrode and keep them off until the bubbles floated to the top.”

As a result, the natural fluid flow within the redesigned reservoir was able to ensure the bubbles didn’t affect the electrode’s impedance.

To learn how your startup can use computational fluid dynamics (CFD) software to address your design challenges, please visit the ANSYS Startup Program.

The post Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer appeared first on ANSYS.

► Making Wireless Multigigabit Data Transfer Reliable with Simulation
    4 Oct, 2018

The demand for wireless communications with high data transfer rates is growing.

Consumers want wireless 4K video streams, virtual reality, cloud backups and docking. However, it’s a challenge to offer these data transfer hogs wirelessly.

Peraso aims to overcome this challenge with their W120 WiGig chipset. This device offers multigigabit data transfers, is as small as a thumb-drive and plugs into a USB 3.0 port.

The chipset uses the Wi-Fi Alliance’s new wireless networking standard, WiGig.

This standard adds a 60 GHz communication band to the 2.4 and 5 GHz bands used by traditional Wi-Fi. The result is higher data rates, lower latency and dynamic session transferring with multiband devices.

In theory, the W120 WiGig chipset could run some of the heaviest data transfer hogs on the market without a cord. Peraso’s challenge is to design a way for the chipset to dissipate all the heat it generates.

Peraso uses the multiphysics capabilities within the ANSYS Electronics portfolio to predict the Joule heating and the subsequent heat flow effects of the W120 WiGig chipset. This information helps them iterate their designs to better dissipate the heat.

How to Design High Speed Wireless Chips That Don’t Overheat

Systems designers know that asking for high-power transmitters in a compact and cost-effective enclosure translates into a thermal challenge. The W120 WiGig chipset is no different.

A cross section temperature map of the W120 WiGig chipset’s PCB. The map shows hot spots where air flow is constrained by narrow gaps between the PCB and enclosure.

The chipset includes active/passive components and two main chips that are mounted on a printed circuit board (PCB). The system reaches considerably high temperatures due to the Joule heating effect.

To dissipate this heat, design engineers include a large heat sink that connects only to the chips and a smaller one that connects only to the PCB. The system is also enclosed in a casing with limited openings.

Simulation of the air flow around the W120 WiGig chipset without an enclosure. Simulation was made using ANSYS Icepak.

Traditionally, optimizing this setup takes a lot of trial and error, as measuring the air flow within the enclosure is challenging.

Instead, Peraso uses ANSYS SIwave to simulate the Joule heating effects of the system. This heat map is transferred to ANSYS Icepak, which then simulates the current heat flow, orthotropic thermal conductivity, heat sources and other thermal effects.

This multiphysics simulation enables Peraso to predict the heat distribution and the temperature at every point of the W120 WiGig chipset.

From there, Peraso engineers iterate their designs until they reach the heat transfer balance they need. To learn how Peraso performed this iteration, read Cutting the Cords.

The post Making Wireless Multigigabit Data Transfer Reliable with Simulation appeared first on ANSYS.

► Designing 5G Cellular Base Station Antennas Using Parametric Studies
    3 Oct, 2018

There is only so much communication bandwidth available. This will make it difficult to handle the boost in cellular traffic expected from the 5G network using conventional cellular technologies.

In fact, cellular networks are already running out of bandwidth. This severely limits the number of users and data rates that can be accommodated by wireless systems.

One potential solution is to leverage beamforming antennas. These devices transmit different signals to different locations on the cellular network simultaneously over the same frequency.

Pivotal Commware is using ANSYS HFSS to design beamforming antennas for cellular base stations that are much more affordable than current technology.

How 5G Networks Will Send More Signals on Existing Bandwidths

A 28 GHz antenna for a cellular base station.

Traditionally, cellular technologies — 3G and 4G LTE — crammed more signals on the existing bandwidth by dividing the frequencies into small segments and splitting the signal time into smaller pulses.

The problem is, there is only so much you can do to chop up the bandwidth into segments.

Alternatively, Pivotal’s holographic beamforming (HBF) antennas are highly directional. This means they can split up the physical space a signal moves through.

This way, two cells in two locations can use the same frequency at the same time without interfering with each other.

Additionally, these HBF antennas use varactor (variable capacitors) and electronic components that are simpler and more affordable than existing beamforming antennas.

How to Design HBF Antennas for 5G Cellular Base Stations

A parametric study of Pivotal’s HBF designs allowed them to look at a large portion of their design space and optimize for C-SWaP and roll-off. This study looks at roll-off as a function of degrees from the centerline of the antenna.

Antenna design companies — like Pivotal — are always looking to design devices that optimize cost, size, weight and power (C-SWaP) and performance.

So, how was Pivotal able to account for C-SWaP and performance so thoroughly?

Traditionally, this was done by building prototypes, finding flaws, creating new designs and integrating manually.

Meeting a product launch with an optimized product using this manual method is grueling.

Pivotal instead uses ANSYS HFSS to simulate their 5G antennas digitally. This allows them to assess their HBF antennas and iterate their designs faster using parametric studies.

For instance, Pivotal wants to optimize their design for performance characteristics like roll-off. To do so they can plug in the parameter values, run simulations with these values and see how each parameter affects roll-off.

By setting up parametric studies, Pivotal assesses which parameters affect performance and C-SWaP the most. From there, they can weigh different trade-offs until they settle on an optimized design that accounts for all the factors studied.
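A parametric sweep of that kind can be sketched in a few lines. The parameter names and the scoring function below are invented purely for illustration; the real study evaluates roll-off and C-SWaP with HFSS simulations at each design point.

```python
import itertools

# Hypothetical design parameters for an HBF antenna sweep (invented values)
varactor_bias = [0.5, 1.0, 1.5]   # V
element_pitch = [2.5, 3.0, 3.5]   # mm

def score(bias: float, pitch: float) -> float:
    """Placeholder cost standing in for simulated roll-off plus C-SWaP;
    lower is better. A real study would invoke the solver here."""
    return abs(bias - 1.0) + abs(pitch - 3.0)

# Evaluate every combination and keep the best trade-off
best = min(itertools.product(varactor_bias, element_pitch),
           key=lambda p: score(*p))
print(best)  # (1.0, 3.0)
```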

To see how Pivotal set up their parametric studies and optimized their antenna designs, read 5G Antenna Technology for Smart Products.

The post Designing 5G Cellular Base Station Antennas Using Parametric Studies appeared first on ANSYS.

Convergent Science Blog top

► 2019: A (Load) Balanced End to a Successful Decade
  19 Dec, 2019

2019 proved to be an exciting and eventful year for Convergent Science. We released the highly anticipated major rewrite of our software, CONVERGE 3.0. Our United States, European, and Indian offices all saw significant increases in employee count. We have also continued to forge ahead in new application areas, strengthening our presence in the pump, compressor, biomedical, aerospace, and aftertreatment markets, and breaking into the oil and gas industry. Of course, we remain dedicated to simulating internal combustion engines and developing new tools and resources for the automotive community. In particular, we are expanding our repertoire to encompass batteries and electric motors in addition to conventional engines. Our team at Convergent Science continues to be enthusiastic about advancing simulation capabilities and providing unmatched customer support to empower our users to tackle hard CFD problems.


As I mentioned above, this year we released a major new version of our software, CONVERGE 3.0. We have frequently discussed 3.0 in the past few months, including in my recent blog post, so I’ll keep this brief. We set out to make our code more flexible, enable massive parallel scaling, and expand CONVERGE’s capabilities. The results have been remarkable. CONVERGE 3.0 scales with near-ideal efficiencies on thousands of cores, and the addition of inlaid meshes, new physical models, and enhanced chemistry capabilities have opened the door to new applications. Our team invested a lot of effort into making 3.0 a reality, and we’re very proud of what we’ve accomplished. Of course, now that CONVERGE 3.0 has been released, we can all start eagerly anticipating our next major release, CONVERGE 3.1.

Computational Chemistry Consortium

2019 was a big year for the Computational Chemistry Consortium (C3). In July, the first annual face-to-face meeting took place at the Convergent Science World Headquarters in Madison, Wisconsin. Members of industry and researchers from the National University of Ireland Galway, Lawrence Livermore National Laboratory, RWTH Aachen University, and Politecnico di Milano came together to discuss the work done during the first year of the consortium and establish future research paths. The consortium is working on the C3 mechanism, a gasoline and diesel surrogate mechanism that includes NOx and PAH chemistry to model emissions. The first version of the mechanism was released this fall for use by C3 members, and the mechanism will be refined over the coming years. Our goal is to create the most accurate and consistent reaction mechanism for automotive fuels. Stay tuned for future updates!

Third Annual European User Conference

Barcelona played host to this year’s European CONVERGE User Conference. CONVERGE users from across Europe gathered to share their recent work in CFD on topics including turbulent jet ignition, machine learning for design optimization, urea thermolysis, ammonia combustion in SI engines, and gas turbines. The conference also featured some exciting networking events—we spent an evening at the beautiful and historic Poble Espanyol and organized a kart race that pitted attendees against each other in a friendly competition. 

Inaugural CONVERGE User Conference–India

This year we hosted our first-ever CONVERGE User Conference–India in Bangalore and Pune. The conference consisted of two events, each covering different application areas. The event in Bangalore focused on applications such as gas turbines, fluid-structure interaction, and rotating machinery. In Pune, the emphasis was on IC engines and aftertreatment modeling. We saw presentations from both companies and universities, including General Electric, Cummins, Caterpillar, and the Indian Institutes of Technology Bombay, Kanpur, and Madras. We had a great turnout for the conference, with more than 200 attendees across the two events.

CONVERGE in the Big Easy

The sixth annual CONVERGE User Conference–North America took place in New Orleans, Louisiana. Attendees came from industry, academic institutions, and national laboratories in the U.S. and around the globe. The technical presentations covered a wide variety of topics, including flame spray pyrolysis, rotating detonation engines, machine learning, pre-chamber ignition, blood pumps, and aerodynamic characterization of unmanned aerial systems. This year, we hosted a panel of CFD and HPC experts to discuss scaling CFD across thousands of processors; how to take advantage of clusters, supercomputers, and the cloud to run large-scale simulations; and how to post-process large datasets. For networking events, we took a dinner cruise down the Mississippi River and encouraged our guests to explore the vibrant city of New Orleans.

KAUST Workshop

In 2019, we hosted the First CONVERGE Training Workshop and User Meeting at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Attendees came from KAUST and other Saudi Arabian universities and companies for two days of keynote presentations, hands-on CONVERGE tutorials, and networking opportunities. The workshop focused on leveraging CONVERGE for a variety of engineering applications, and running CONVERGE on local workstations, clusters, and Shaheen II, a world-class supercomputer located at KAUST. 

Best Use of HPC in Automotive

We and our colleagues at Argonne National Laboratory and Aramco Research Center – Detroit received the 2019 HPCwire Editors’ Choice Award in the category of Best Use of HPC in Automotive. We were incredibly honored to receive this award for our work using HPC and AI to quickly optimize the design of a clean, highly efficient gasoline compression ignition engine. Using CONVERGE, we tested thousands of engine design variations in parallel to improve fuel efficiency and reduce emissions. We ran the simulations in days, rather than months, on an IBM Blue Gene/Q supercomputer located at Argonne National Laboratory and employed machine learning to further reduce design time. After the simulations were complete, the best-performing engine design was built in the real world. The engine demonstrated a reduction in CO2 of up to 5%. Our work shows that pairing HPC and AI to rapidly optimize engine design has the potential to significantly advance clean technology for heavy-duty transportation.

Sibendu Som (Argonne National Laboratory), Kelly Senecal (Convergent Science), and Yuanjiang Pei (Aramco Research Center – Detroit) receiving the 2019 HPCwire Editors’ Choice Award

Convergent Science Around the Globe

2019 was a great year for CONVERGE and Convergent Science around the world. In the United States, we gained nearly 20 employees. We added a new Convergent Science office in Houston, Texas, to serve the oil and gas industry. In addition, we have continued to increase our market share in other areas, including automotive, gas turbine, and pumps and compressors.

In Europe, we had a record year for new license sales, up 70% from 2018. A number of new employees joined our European team, including new engineers, sales personnel, and office administrators. We attended and exhibited at tradeshows on a breadth of topics all over Europe, and we expanded our industry and university clientele. 

Our Indian office celebrated its second anniversary in 2019. The employee count nearly doubled from 2018, with the addition of several new software developers and marketing and support engineers. The first Indian CONVERGE User Conference was a huge success–we had to increase the maximum number of registrants to accommodate everyone who wanted to attend. We have also grown our client base in the transportation sector, bringing new customers in the automotive industry on board.

In Asia, our partners at IDAJ continue to do a fantastic job supporting CONVERGE. CONVERGE sales significantly increased in 2019 compared to 2018. And at this year’s IDAJ CAE Solution Conference, speakers from major corporations, including Toyota, Daihatsu, Mazda, and DENSO, presented CONVERGE results.

Looking Ahead

While we like to recognize the successes of the past year, we’re always looking toward the future. Computing technology is constantly evolving, and we are eager to keep advancing CONVERGE to make the most of the increased availability of computational resources. With the expanded functionality that CONVERGE 3.0 offers, we’re also looking forward to delving into untapped application areas and breaking into new markets. In the upcoming year, we are excited to form new collaborations and strengthen existing partnerships to promote innovation and keep CONVERGE on the cutting edge of CFD software.

► CONVERGE 3.0: From Specialized Software to CFD Powerhouse
  25 Nov, 2019

When Eric, Keith, and I first wrote CONVERGE back in 2001, we wrote it as a serial code. That probably sounds a little crazy, since practically all CFD simulations these days are run in parallel on multiple CPUs, but that’s how it started. We ended up taking our serial code and making it parallel, which is arguably not the best way to create a parallel code. As a side effect of writing the code this way, there were inherent parts of CONVERGE that did not scale well, both in terms of speed and memory. This wasn’t a real issue for our clients who were running engine simulations on relatively small numbers of cores. But as time wore on, our users started simulating many different applications beyond IC engines, and those simulating engines wanted to run finer meshes on more cores. At the same time, computing technology was evolving from systems with relatively few cores per node and relatively high memory per core to modern HPC clusters with more cores and nodes per system and relatively less memory per core. We knew at some point we would have to rewrite CONVERGE to take advantage of the advancements in computing technology.

We first conceived of CONVERGE 3.0 around five years ago. At that point, none of the limitations in the code were significantly affecting our clients, but we would get the occasional request that was simply not feasible in the current software. When we got those requests, we would categorize them as “3.0”—requests we deemed important, but would have to wait until we rewrote the code. After a few years, some of the constraints of the code started to become real limitations for our clients, so our developers got to work in earnest on CONVERGE 3.0. Much of the core framework and infrastructure was redesigned from the ground up in version 3.0, including a new mesh API, surface and grid manipulation tools, input and output file formats, and load balancing algorithms. The resulting code enables our users to run larger, faster, and more accurate simulations for a wider range of applications.

Scalability and Shared Memory

Two of our major goals in rewriting CONVERGE were to improve the scalability of the code and to reduce the memory requirements. Scaling in CONVERGE 2.x versions was limited in large part because of the parallelization method. In the 2.x versions, the simulation domain is partitioned using blocks coarser than the solution grid. This can cause a poor distribution of workload among processors if you have high levels of embedding or Adaptive Mesh Refinement (AMR). In 3.0, the solution grid is now partitioned directly, so you can achieve a good load balance even with very high levels of embedding and AMR. In addition, load balancing is now performed automatically instead of on a fixed schedule, so the case is well balanced throughout more of the run. With these changes, we’ve seen a dramatic improvement in scaling in 3.0, even on thousands of cores. 

Figure 1. CONVERGE 3.0 scaling for a combusting turbulent partially premixed flame (Sandia Flame D) case on the Blue Waters supercomputer at the National Center for Supercomputing Applications[1]. On 8,000 cores, CONVERGE 3.0 scales with 95% efficiency.
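
Strong-scaling numbers like the 95% efficiency quoted above come from a simple calculation on wall-clock timings: speedup relative to the smallest run, divided by the ideal speedup. A minimal sketch, using made-up timing data rather than the actual Blue Waters measurements:

```python
def scaling_metrics(core_counts, wall_times):
    """Compute speedup and parallel efficiency relative to the
    smallest core count in a strong-scaling study."""
    base_cores, base_time = core_counts[0], wall_times[0]
    results = []
    for cores, t in zip(core_counts, wall_times):
        speedup = base_time / t
        ideal_speedup = cores / base_cores
        results.append((cores, speedup, speedup / ideal_speedup))
    return results

# Hypothetical timings for illustration only (hours per case)
cores = [500, 1000, 2000, 4000, 8000]
times = [16.0, 8.1, 4.1, 2.1, 1.05]

for n, s, e in scaling_metrics(cores, times):
    print(f"{n:>5} cores: speedup {s:6.2f}x, efficiency {e:5.1%}")
```

With these invented timings, the 8,000-core run comes out at roughly 95% efficiency, the same ballpark as the Sandia Flame D result in Figure 1.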

To reduce memory requirements, our developers moved to a shared memory strategy and removed redundancies that existed in previous versions of CONVERGE. For example, many data structures, like surface triangulation, that were stored once per core in the 2.x versions are now only stored once per compute node. Similarly, CONVERGE 3.0 no longer stores the entire grid connectivity on every core as was done in previous versions. The memory footprint in 3.0 is thus greatly reduced, and memory requirements also scale well into thousands of cores.
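
The once-per-node storage strategy described above (one copy of read-only data per compute node instead of one per core) is commonly implemented in MPI codes with MPI-3 shared-memory windows. CONVERGE's internals aren't shown here; the following is purely an illustration of the idea using Python's standard library, with an invented stand-in for the surface data:

```python
import struct
from multiprocessing import shared_memory

# One rank per node creates the block and fills it once.
triangulation = [1.5, 2.5, 3.5]          # stand-in for surface data
nbytes = len(triangulation) * 8
owner = shared_memory.SharedMemory(create=True, size=nbytes)
owner.buf[:nbytes] = struct.pack(f"{len(triangulation)}d", *triangulation)

# Every other rank on the node attaches by name instead of copying,
# so the memory cost is paid once per node, not once per core.
reader = shared_memory.SharedMemory(name=owner.name)
vals = struct.unpack(f"{len(triangulation)}d", bytes(reader.buf[:nbytes]))
print(vals)

reader.close()
owner.close()
owner.unlink()
```

The payoff is exactly the scaling behavior described above: adding cores to a node no longer multiplies the memory spent on shared, read-only structures.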

Figure 2. Load balancing in CONVERGE 2.4 (left) versus 3.0 (right) for a motor simulation with 2 million cells on 72 cores. Cell-based load balancing in 3.0 results in an even distribution of cells among processors.

Inlaid Mesh

Apart from the codebase rewrite, another significant change we made was to incorporate inlaid meshes into CONVERGE. For years, users have been asking for the ability to add extrusion layers to boundaries, and it made sense to add this feature now. As many of you are probably aware, autonomous meshing is one of the hallmarks of our software. CONVERGE automatically generates an optimized Cartesian mesh at runtime and dynamically refines the mesh throughout the simulation using AMR. All of this remains the same in CONVERGE 3.0, and you can still use meshes exactly as they were in all previous versions of CONVERGE! Now, however, we’ve added the option to create an inlaid mesh made up of cells of arbitrary shape, size, and orientation. The inlaid mesh can be extruded from a triangulated surface (e.g., a boundary layer) or it can be a shaped mesh away from a surface (e.g., a spray cone). For the remainder of the domain not covered by an inlaid mesh, CONVERGE uses our traditional Cartesian mesh technology.

Figure 3. Inlaid mesh for a turbine blade. In CONVERGE Studio 3.0, you can create a boundary layer mesh by extruding the triangulated surface of your geometry. CONVERGE Studio automatically creates the interface between the inlaid mesh and the Cartesian mesh, as seen in the image on the right.

Inlaid meshes are always optional, but in some cases they can provide accurate results with fewer cells compared to a traditional Cartesian mesh. In the example of a boundary layer, you can now refine the mesh in only the direction normal to the surface, instead of all three directions. You can also align an inlaid mesh with the direction of the flow, which wasn’t always possible when using a Cartesian mesh. This feature makes CONVERGE better suited for certain applications, like external aerodynamics, than it was previously.

Combustion and Chemistry

In CONVERGE 3.0, our developers have also enhanced and added to our combustion models and chemistry tools. For the SAGE detailed chemistry solver, we optimized the rate calculations, improved the procedure to assemble the sparse Jacobian matrix, and introduced a new preconditioner. The result is a significant speedup in the chemistry solver, especially for large reaction mechanisms (>150 species). If you thought our chemistry solver was fast before (and it was!), you will be amazed at the speed of the new version. In addition, 3.0 features two new combustion models. In most large eddy simulations (LES) of premixed flames, the cells are not fine enough to resolve the laminar flame thickness. The thickened flame model for LES allows you to increase the flame thickness without changing the laminar flamespeed. The second new model, the SAGE three-point PDF model, can be used to account for turbulence-chemistry interaction (more specifically, the commutation error) when modeling turbulent combusting flows with RANS.
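
To see why Jacobian assembly matters to a detailed chemistry solver at all, consider a toy stiff system. This is not SAGE's algorithm; it is a sketch under stated assumptions (a hypothetical two-step mechanism and a hand-rolled backward Euler step) of why implicit chemistry integration needs the Jacobian:

```python
def rhs(y, k1=1.0e4, k2=1.0):
    """Toy stiff mechanism: A -> B (fast, k1), B -> C (slow, k2)."""
    yA, yB = y
    return [-k1 * yA, k1 * yA - k2 * yB]

def jacobian(y, k1=1.0e4, k2=1.0):
    """Analytic Jacobian d(rhs)/dy. In a real mechanism this matrix
    is large and sparse, which is why assembling it efficiently and
    preconditioning the linear solves dominate the solver cost."""
    return [[-k1, 0.0], [k1, -k2]]

def backward_euler(y0, h, steps):
    """Implicit Euler: solve (I - h*J) y_new = y_old each step.
    This toy system is linear, so one 2x2 solve (Cramer's rule)
    per step is exact; nonlinear chemistry would iterate Newton."""
    y = list(y0)
    for _ in range(steps):
        J = jacobian(y)
        a11, a12 = 1.0 - h * J[0][0], -h * J[0][1]
        a21, a22 = -h * J[1][0], 1.0 - h * J[1][1]
        det = a11 * a22 - a12 * a21
        y = [(y[0] * a22 - a12 * y[1]) / det,
             (a11 * y[1] - a21 * y[0]) / det]
    return y

# Explicit Euler would need h < 2/k1 = 2e-4 to stay stable;
# the implicit step handles h = 0.1 without trouble.
yA, yB = backward_euler([1.0, 0.0], h=0.1, steps=50)
print(f"[A] = {yA:.3e}, [B] = {yB:.3e}")
```

The fast species A is consumed almost immediately while B decays on the slow timescale, and the implicit step stays stable at a step size 500 times larger than an explicit method could tolerate.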

On the chemistry tools side, we’ve added a number of new 0D chemical reactors, including variable volume with heat loss, well-stirred, plug flow, and 0D engine. The 1D laminar flamespeed solver has seen significant improvements in scalability and parallelization, and we have new table generation tools in CONVERGE Studio for tabulated kinetics of ignition (TKI), tabulated laminar flamespeed (TLF), and flamelet generated manifold (FGM). 

Figure 4. CONVERGE 3.0 simulation of flow and combustion in a multi-cylinder spark-ignition engine.

CONVERGE Studio Updates

To streamline our users’ workflow, we have implemented several updates in CONVERGE Studio, CONVERGE’s graphical user interface (GUI). We partnered with Spatial to allow users to directly import CAD files into CONVERGE Studio 3.0, and triangulate the geometry on the fly in a way that’s optimized for CONVERGE. Additionally, Tecplot for CONVERGE, CONVERGE’s post-processing and visualization software, can now read CONVERGE output files directly, for a smoother workflow from start to finish.

CONVERGE 3.0 was a long time in the making, and we’re very excited about the new capabilities and opportunities this version offers our users. 3.0 is a big step toward making CONVERGE a flexible toolbox for solving any CFD problem.

[1] The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. The NCSA Industry Program is the largest Industrial HPC outreach in the world, and it has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand computational problems at rapid speed and scale. The CONVERGE simulations were run on NCSA’s Blue Waters supercomputer, which is one of the fastest supercomputers on a university campus. Blue Waters is supported by the National Science Foundation through awards ACI-0725070 and ACI-1238993.

► Changing the CFD Conference Game with the CONVERGE UC
  21 Aug, 2019

As the 2019 CONVERGE User Conference in New Orleans approaches, I’ve been thinking about the past five years of CONVERGE events. Let me take you back to the first CONVERGE User Conference. It was September 2014 in Madison, Wisconsin, and I was one of the first speakers. I talked about two-phase flows and the spray modeling we were doing at Argonne National Laboratory. Many of the people in the audience didn’t know you could do the kinds of calculations in CONVERGE that we were doing. Take needle wobble, for example. At the time, people didn’t know that you could not only move the needle up and down, but you could actually simulate it wobbling. After my talk, we had many interesting discussions with the other attendees. We made connections with international companies that we otherwise would not have had the chance to meet, and we formed collaborations with some of those companies that are still ongoing today.

At Argonne National Laboratory, I lead a team of more than 20 researchers, all of them focused on simulating either piston engines or gas turbines using high-performance computing. Our goal is to improve the predictive capability of piston engine and gas turbine simulations, and we do a lot of our work using CONVERGE. We develop physics-based models that we couple with CONVERGE to gain deeper insights from our simulations.

We routinely attend and present our work at conferences like SAE World Congress and ASME, and what really sets the CONVERGE User Conference apart is the focus of the event—it’s dedicated to the people doing simulation work with piston engines, gas turbines, and other real-world applications. The user conference is the go-to place where we can meet all of the people doing 3D CFD simulations, so it’s a fantastic networking opportunity. We get to speak to people from academia and industry and learn about their research needs—understand what their pain points are, what their bottlenecks are, where the physics is not predictive enough. Then we take that information back to Argonne, and it helps us focus our research.

Apart from the networking, the CONVERGE User Conference is also a great venue for presenting. My team has presented at the CONVERGE conferences on a wide variety of topics, including lean blow-out in gas turbine combustors, advanced ignition systems, co-optimization of engines and fuels, predicting cycle-to-cycle variation, machine learning for design optimizations, and modeling turbulent combustion in compression ignition and spark ignition engines. The attendees are engaged and highly technical, so you get direct, focused feedback on your work that can help you find solutions to challenges you may be encountering or give ideas for future studies.

The presenters themselves take the conference seriously. The quality of the presentations and the work presented is excellent. If you’ve never attended a CONVERGE User Conference before, my advice to you is to try to be a sponge. Bring your notebooks, bring your laptops, and take as many notes as you can. The amount of useful information you will gain from this conference is enormous and more relevant than what you will find at other conferences, since this event is tailored for a specific audience. The CONVERGE User Conference also draws speakers from all over the world, which provides a unique opportunity to hear about the challenges that automotive original equipment manufacturers (OEMs), for example, face in other countries, challenges that differ from those in the United States. Listening to their presentations and getting access to those speakers has been very helpful for us. And since there are plenty of opportunities for networking, you can interact with the speakers at the conference and connect with them later on if you have further questions.

Overall, the CONVERGE User Conference is a great opportunity for presenting, learning, and networking. This is a conference where you will gain a lot of useful knowledge, meet many interesting people, and have some fun at the evening networking events. If you haven’t yet come to a CONVERGE User Conference—I highly recommend making this year your first.

Interested in learning more about the CONVERGE User Conference? Check out our website for details and registration!

► Apollo 11 at 50: Balancing the Two-Legged Stool
  15 Jul, 2019

On July 16th, I will look up at the night sky and celebrate the 50-year anniversary of the launch of Apollo 11. As I admire the full moon, the CFDer in me will think about the classic metaphor of the three-legged stool. Modern engineering efforts depend on theory, simulation, and experiment: Theory gives us basic understanding, simulation tells us how to apply this theoretical understanding to a practical problem, and experiment confirms that our applied understanding is in agreement with the physical world. One element does not seek to replace another; instead, each element reinforces the others. By modern standards, simulation did not exist in the 1960s⁠—NASA’s primary “computers” were the women we saw in Hidden Figures, and humans are limited to relatively simple calculations. When NASA sent people to the moon, it had to build a modern cathedral balanced atop a two-legged stool.

I like the cathedral metaphor for the Saturn V rocket because it expresses some unexpected similarities between the efforts. A medieval cathedral was a huge, societal construction effort. It required workers from all walks of life to contribute above and beyond, not just in scale but in care and diligence. Designers had to go past what they fully understood, overcoming unknown engineering physics through sheer persistence. The end product was a unique and breathtaking expression of craftsmanship on a colossal scale.

In aerospace, we are habituated to assembly lines, but each Saturn V was a one-off. The Apollo program as a whole employed some 400,000 people, and the Saturn family of launch vehicles was a major slice of the pie. Though their tools were certainly more advanced than a medieval artisan’s, these workers essentially built this 363-foot-tall rocket by hand. They had to, because the rocket had to be perfect. The rocket had to be perfect because there was so little margin for error, because engineers were reaching so far beyond the existing limits of understanding. Huge rockets are not routine today, but I want to highlight a few design challenges of the Saturn V as places where modern simulation tools would have had a program-altering effect.

The mighty F-1 remains the largest single-chambered liquid-fueled rocket engine ever fired. All aspects of the design process were challenging, but devising a practical combustion chamber was particularly torturous. Large rocket engines are prone to a complex interaction between combustion dynamics and aeroacoustics. Pressure waves within the chamber can locally enhance the combustion rate, which in turn alters the flow within the engine. If these physical processes occur at the wrong rates, the entire system can become self-exciting and unstable. From a design standpoint, engineers must control engine stability through chamber shaping, fuel and oxidizer injector design, and internal baffling. 

Without any way to simulate the fuel injection, mixing, combustion, and outflow, engineers were left with few approaches other than scaling, experimentation, and doggedness. They started with engines they knew and understood, then tried to vary them and enlarge them. They built a special 2D transparent thrust chamber, then applied high-speed photography to measure the unsteadiness of the combustion region. They literally set off tiny bombs within an operating engine, at a variety of locations, monitoring the internal pressure to see whether the blast waves decayed or were amplified. Eventually they produced a workable design for the F-1, but, in the words of program manager Wernher von Braun:

…lack of suitable design criteria has forced the industry to adopt almost a completely empirical approach to injector and combustor development… [which] does not add to our understanding because a solution suitable for one engine system is usually not applicable to another…

It was being performed by engineers, but in some senses, it wasn’t quite engineering. Persistence paid off in the end, but F-1 combustion instability almost derailed the whole Apollo program.

Close-up of an F-1 injector plate. Many of the 1428 liquid oxygen injectors and 1404 RP-1 fuel injectors can be seen. The injector plate is about 44 inches in diameter and is split into 13 injector compartments by two circular and twelve radial baffles. Photo credit: Mike Jetzer.

Imagine if Rocketdyne engineers had had access to modern simulation tools! A tool like CONVERGE can simulate liquid fuel spray impingement directly, allowing an engineer to parametrically vary the geometry and spray parameters. A tool like CONVERGE can calculate the local combustion enhancement of impinging pressure fluctuations, allowing an engineer to introduce different baffle shapes and structures to measure their moderating effect. And the engineer can, in von Braun’s words, add to his or her understanding of how to combat combustion instability.

Snapshot from an RP-1 fuel tank on a Saturn I (flight SA-5). This camera looks down from the top center of the tank. Note the anti-slosh baffles. Photo credit: Mark Gray on YouTube.

Fuel slosh in the colossal lower-stage tanks presented another design challenge. The first-stage liquid oxygen tank was 33 feet in diameter and about 60 feet long. How do you study slosh in such an immense tank while subjecting it to what you think will be flight-representative vibration and acceleration? What about the behavior of leftover propellant in zero gravity? In the 1960s, the answer was you built the rocket and flew it! In fact, the early Saturn launches (uncrewed, of course) featured video cameras to monitor fuel flow within the tanks. Cameras of that era recorded to film, and these cameras were housed in ejectable capsules. After collecting their several minutes of footage, the capsules would deploy from the spent stage and parachute to safety. I bet those engineers would have been over the moon if you had presented them with modern volume of fluid simulation tools.

Readers who have watched Apollo 13 may recall that the center engine of the Saturn V second stage failed during the launch. This was due to pogo, another combustion instability problem. In a rocket experiencing pogo, a momentary increase in thrust causes the rocket structure to flex, which (at the wrong frequency) can cause the fuel flow to surge, causing another self-exciting momentary increase in thrust. In severe cases, this vibration can destroy the vehicle. Designers added various standpipes and accumulators to de-tune the system, but this was only performed iteratively, flying a rocket to measure the effects. Today, we can study the fluid-structure interaction before we build the structure! Modern simulation tools are dramatic aids to the design process.

Saturn V first-stage anti-pogo valve. Diagram credit: NASA.

Today’s aerospace engineering community is doing some amazing things. SpaceX and Blue Origin are landing rockets on their tails. The United Launch Alliance has compiled a perfect operational record with the Delta IV and Atlas V. Companies like Rocket Lab and Firefly Aerospace are demonstrating that you don’t need to have the resources of a multinational conglomerate to put payloads into orbit. But for me, nothing may ever surpass the incredible feat of engineers battling physical processes they didn’t fully understand, flying people to the moon on a two-legged stool.

Interested in reading more about the Saturn V launch vehicle? I recommend starting with Dr. Roger Bilstein’s Stages to Saturn.

► CONVERGE Chemistry Tools: The Simple Solution to Complex Chemistry
  20 May, 2019

As I’ve started to cook more, I’ve learned the true value of multipurpose kitchen utensils and appliances. Especially living in an apartment with limited kitchen space, the fewer tools I need to make delicious meals, the better. A rice cooker that doubles as a slow cooker? Great. A blender that’s also a food processor? Sign me up. Not only do these tools prove to be more useful, but they’re also more economical.

The same principle applies beyond kitchen appliances. CONVERGE CFD software is well known for its flow solver, autonomous meshing, and fully coupled chemistry solver, but did you know that it also features an extensive suite of chemistry tools, with even more coming in version 3.0? Whether you need to speed up your abnormal combustion simulations, create and validate new chemical mechanisms, expedite your design process with 0D or 1D modeling, or compare your chemical kinetics experiments with simulated results, CONVERGE chemistry tools have you covered. The many capabilities of CONVERGE translate to a broadly applicable piece of software for CFD and beyond.

Zero-Dimensional Simulations

CONVERGE 3.0 expands on the previous versions’ 0D simulation capabilities with a host of new tools and reactors that are useful across a wide range of applications. If you’re running diesel engine simulations, you can take advantage of CONVERGE’s autoignition utility to quickly generate ignition delay data for different combinations of temperature, pressure, and equivalence ratio. Furthermore, you can couple the autoignition utility with 0D sensitivity analysis to determine which reactions and species are important for ignition or to determine the importance of various reactions in forming a given species.
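
As a rough illustration of the kind of table the autoignition utility produces, here is a sweep over temperature, pressure, and equivalence ratio. The Arrhenius-type correlation and all of its constants are invented for illustration; they stand in for the detailed-kinetics calculation the utility actually performs:

```python
import math
from itertools import product

def ignition_delay(T, p, phi, A=1.0e-6, n=1.0, Ta=15000.0, m=-0.5):
    """Hypothetical ignition-delay correlation,
    tau = A * phi**m * p**(-n) * exp(Ta / T)."""
    return A * phi**m * p**(-n) * math.exp(Ta / T)

# Sweep the same three parameters the utility tabulates.
temps = [700.0, 800.0, 900.0]      # K
pressures = [20.0, 40.0]           # bar
phis = [0.5, 1.0]

table = {(T, p, phi): ignition_delay(T, p, phi)
         for T, p, phi in product(temps, pressures, phis)}

for (T, p, phi), tau in sorted(table.items()):
    print(f"T={T:6.0f} K  p={p:4.0f} bar  phi={phi:.1f}  tau={tau:.3e} s")
```

Even with a toy correlation, the expected trends fall out: hotter and higher-pressure conditions ignite faster, which is the sort of sanity check such a table supports.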

The variable volume tool in CONVERGE 3.0 is a closed homogeneous reactor that can simulate a rapid compression machine (RCM). RCMs are ideal for chemical kinetics studies, especially for understanding autoignition chemistry as a function of temperature, pressure, and fuel/oxygen ratio.

Another new reactor model is the 0D engine tool, which can provide information on autoignition and engine knock. HCCI engines operate by compressing well-mixed fuel and oxidizer to the point of autoignition, and so you can use the 0D engine tool to gain valuable insight into your HCCI engine.

For other applications, look toward the well-stirred reactor (WSR) model coming in 3.0. The WSR assumes a high rate of mixing so that the output composition is identical to the composition inside the reactor. WSRs are thus useful for studying highly mixed IC engines, highly turbulent portions of non-premixed combustors, and the dependence of ignition and extinction limits on residence time, such as lean blow-out in gas turbines.

In addition to the new 0D reactor models, CONVERGE 3.0 will also feature new 0D tools. The chemical equilibrium (CEQ) solver calculates the concentration of species at equilibrium. The CEQ solver in CONVERGE, unlike many equilibrium solvers, is guaranteed to converge for any combination of gas species. The RON/MON estimator finds the research octane number (RON) and motor octane number (MON) for a fuel by computing the critical compression ratio (CCR) at which autoignition occurs and correlating it with the CCRs of primary reference fuel (PRF) blends using the LLNL Gasoline Mechanism.
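
The correlation step can be sketched as an interpolation against a PRF lookup table. The CCR values below are invented for illustration, not taken from the LLNL mechanism:

```python
from bisect import bisect_left

# Hypothetical table: critical compression ratio for a few primary
# reference fuel (PRF) blends; PRF-N has octane number N by definition.
prf_table = [(5.0, 0.0), (6.2, 40.0), (7.5, 70.0), (9.8, 100.0)]  # (CCR, ON)

def octane_from_ccr(ccr):
    """Linearly interpolate the octane number whose PRF blend has the
    same critical compression ratio as the test fuel."""
    ccrs = [c for c, _ in prf_table]
    i = bisect_left(ccrs, ccr)
    if i == 0:
        return prf_table[0][1]
    if i == len(prf_table):
        return prf_table[-1][1]
    (c0, on0), (c1, on1) = prf_table[i - 1], prf_table[i]
    return on0 + (on1 - on0) * (ccr - c0) / (c1 - c0)

print(octane_from_ccr(8.0))
```

A fuel whose CCR falls between two tabulated PRF blends is assigned an octane number between theirs, which is the essence of the estimator.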

One-Dimensional Simulations

For 1D simulations, CONVERGE contains the 1D laminar premixed flame tool, which calculates the flamespeed of a combustion reaction using a freely propagating flame. You can use this tool to ensure your mechanisms yield reasonable flamespeeds for specific conditions and to generate laminar flamespeed tables that are needed for some combustion models, such as G-Equation, ECFM, and TFM. In CONVERGE 3.0, this solver has seen significant improvement in parallelization and scalability, as shown in Fig. 1. You can additionally perform 1D sensitivity analysis to determine how sensitive the flamespeed is to the various reactions and species in your mechanism.

Figure 1. Parallelization (left) and scalability (right) of the CONVERGE flamespeed solver.
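
The 1D sensitivity analysis mentioned above reduces to perturbing each rate constant and measuring the flamespeed response. A sketch, with a made-up algebraic surrogate standing in for the actual 1D flame solve:

```python
def flamespeed(k):
    """Hypothetical surrogate for the 1D flame solve: flamespeed as
    an invented algebraic function of three rate constants."""
    k1, k2, k3 = k
    return 0.4 * (k1 ** 0.5) * (k2 ** 0.1) / (1.0 + 0.01 * k3)

def normalized_sensitivities(k, rel=1e-4):
    """S_i = (k_i / S_L) * dS_L/dk_i via central finite differences,
    the brute-force recipe a sensitivity analysis automates."""
    base = flamespeed(k)
    sens = []
    for i in range(len(k)):
        kp, km = list(k), list(k)
        kp[i] *= 1 + rel
        km[i] *= 1 - rel
        dS = (flamespeed(kp) - flamespeed(km)) / (2 * rel * k[i])
        sens.append(k[i] * dS / base)
    return sens

print(normalized_sensitivities([1.0, 2.0, 3.0]))
```

The normalized coefficient S_i is dimensionless, so reactions with very different rate magnitudes can be ranked on one scale.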

CONVERGE 3.0 also includes a new 1D reactor model: the plug flow reactor (PFR). PFRs can be used to predict chemical kinetics behavior in continuous, flowing systems with cylindrical geometry. PFRs have commonly been applied to study both homogeneous and heterogeneous reactions, continuous production, and fast or high-temperature reactions.

Chemistry Tools

Zero- and one-dimensional simulation tools aren’t all CONVERGE has to offer. CONVERGE also features a number of tools for optimizing reaction mechanisms and interpreting your chemical kinetics simulation results.

Detailed chemistry calculations can be computationally expensive, but you can decrease computational time by reducing your chemical mechanism. CONVERGE’s mechanism reduction utility eliminates species and reactions that have the least effect on the simulation results, so you can reduce computational expense while maintaining your desired level of accuracy. In previous versions of CONVERGE, mechanism reduction was only available to target ignition delay. In CONVERGE 3.0, you can also target flamespeed, so you can ensure that your reduced mechanism maintains a similar flamespeed as the parent mechanism.
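
The reduction loop above can be sketched as a greedy search: repeatedly drop whichever species changes the target quantity (here, ignition delay) the least, until no removal stays within the error tolerance. The ignition-delay function below is an invented stand-in, not real kinetics, and real reduction methods are more sophisticated than this sketch:

```python
def ignition_delay(species):
    """Hypothetical stand-in for a detailed-kinetics ignition-delay
    calculation: each retained species perturbs the result slightly."""
    effect = {"fuel": 1.00, "O2": 1.00, "OH": 0.40, "HO2": 0.25,
              "CH2O": 0.10, "N2O": 0.02, "trace1": 0.005, "trace2": 0.001}
    return 1.0e-3 * (1.0 + sum(effect[s] for s in species))

def reduce_mechanism(species, tol=0.05):
    """Greedily drop the species whose removal changes ignition delay
    the least, relative to the full mechanism's prediction."""
    full = ignition_delay(species)
    kept = list(species)
    while True:
        best = None
        for s in kept:
            trial = [x for x in kept if x != s]
            err = abs(ignition_delay(trial) - full) / full
            if err <= tol and (best is None or err < best[1]):
                best = (s, err)
        if best is None:
            return kept
        kept.remove(best[0])

full_mech = ["fuel", "O2", "OH", "HO2", "CH2O", "N2O", "trace1", "trace2"]
print(reduce_mechanism(full_mech))
```

The low-impact trace species are discarded while the species that dominate ignition survive, which is the accuracy-versus-cost trade the utility manages.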

CONVERGE additionally offers a mechanism tuning utility to optimize reaction mechanisms. This tool prepares input files for running a genetic algorithm optimization using CONVERGE’s CONGO utility, so you can tune your mechanism to meet specified performance targets.

If you’re developing multi-component surrogate mechanisms, or you need to add additional pathways or NOx chemistry to a fuel mechanism, the mechanism merge tool is the one for you. This tool combines two reaction mechanisms into one and resolves any duplicate species or reactions along the way.
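
Conceptually, a mechanism merge is a union of species and reactions with duplicate resolution. A toy sketch, with invented species and rate values (real mechanism files carry far more data per reaction):

```python
def merge_mechanisms(mech_a, mech_b):
    """Combine two mechanisms, keeping one copy of any species or
    reaction that appears in both (mech_a wins on conflicts). A
    mechanism here is {'species': set, 'reactions': {equation: rate}}."""
    species = set(mech_a["species"]) | set(mech_b["species"])
    reactions = dict(mech_b["reactions"])   # start from B...
    reactions.update(mech_a["reactions"])   # ...then A overrides duplicates
    return {"species": species, "reactions": reactions}

fuel_mech = {"species": {"C7H16", "O2", "OH", "N2"},
             "reactions": {"C7H16 + O2 => products": 1.2e9}}
nox_mech = {"species": {"N2", "NO", "N2O", "O2"},
            "reactions": {"N2 + O => NO + N": 3.0e10,
                          "C7H16 + O2 => products": 9.9e8}}  # duplicate

merged = merge_mechanisms(fuel_mech, nox_mech)
print(sorted(merged["species"]))
print(merged["reactions"])
```

The duplicate fuel-oxidation reaction appears once in the result, with the base mechanism's rate retained, mirroring the tool's duplicate resolution.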

CONVERGE 3.0 will feature new table generation and visualization tools. With the tabulated kinetics of ignition (TKI) and tabulated laminar flamespeed (TLF) tools, you can generate ignition or flamespeed tables that are needed for certain combustion models. To visualize your results, you can run a CONVERGE utility to prepare your tables for visualization in Tecplot for CONVERGE or other visualization software.

Figure 2. 3D visualization of flamespeed as a function of pressure and temperature.

CONVERGE’s suite of chemistry tools is just one of the components that make CONVERGE a robust, multipurpose solver. And just as multipurpose kitchen appliances have more uses during meal prep, CONVERGE’s chemistry capabilities give our software a broad scope of applications, not just for CFD but for all of your chemical kinetics simulation needs. Interested in learning more about CONVERGE or CONVERGE’s chemistry tools? Contact us today!

► Your μ Matters: Understanding Turbulence Model Behavior
    6 Mar, 2019

I recently attended an internal Convergent Science advanced training course on turbulence modeling. One of the audience members asked one of my favorite modeling questions, and I’m happy to share it here. It’s the sort of question I sometimes find myself asking tentatively, worried I might have missed something obvious. The question is this:

Reynolds-Averaged Navier-Stokes (RANS) turbulence models and Large-Eddy Simulation (LES) turbulence models have very different behavior. LES will become a direct numerical simulation (DNS) in the limit of an infinitesimally fine grid, and it shows a wide range of turbulent length scales. RANS does not become a DNS, no matter how fine we make the grid. Rather, it shows grid-convergent behavior (i.e., the simulation results stop changing with finer and finer grids), and it removes small-scale turbulent content.

If I look at a RANS model or an LES turbulence model, the transport equations look very similar mathematically. How does the flow ‘know’ which is which?

There’s a clever, physically intuitive answer to this question, which motivates the development of additional hybrid models. But first we have to do a little bit of math.

Both RANS and LES take the approach of decomposing a turbulent flow into a component to be resolved and a component to be modeled. Let’s define the Reynolds decomposition of a flow variable ϕ as

$$\phi = \bar{\phi} + \phi',$$

where the overbar term represents a time/ensemble average and the prime term is the fluctuating term. This decomposition has the following properties:

$$\overline{\overline{\phi}} = \bar{\phi} \;\;{\rm{and}}\;\; \overline{\phi'} = 0.$$

Figure 1 Schematic of time-averaging a signal.
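Both averaging properties are easy to verify numerically. The sketch below builds a toy "turbulent" signal (a steady mean plus random fluctuations; illustrative data, not from any CFD solution) and checks that averaging an average changes nothing and that the fluctuation averages to zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# a toy "turbulent" signal: steady mean plus random fluctuations
phi = 5.0 + rng.normal(0.0, 1.0, size=100_000)

phi_bar = phi.mean()          # time/ensemble average
phi_prime = phi - phi_bar     # fluctuating part

# property 1: the average of the average is the average itself
print(np.isclose(np.mean(np.full_like(phi, phi_bar)), phi_bar))  # True
# property 2: the mean of the fluctuation is zero
print(np.isclose(phi_prime.mean(), 0.0))                         # True
```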

LES uses a different approach, which is a spatial filter. The filtering decomposition of ϕ is defined as

$$\phi = \left\langle \phi \right\rangle + \phi'',$$

where the term in the angled brackets is the filtered term and the double-prime term is the sub-grid term. In practice, this is often calculated using a box filter, a spatial average of everything inside, say, a single CFD cell. The spatial filter has different properties than the Reynolds decomposition,

$$\left\langle {\left\langle \phi \right\rangle } \right\rangle \ne \left\langle \phi \right\rangle \;\;{\rm{and}}\;\; \left\langle {\phi''} \right\rangle \ne 0.$$

Figure 2 Example of spatial filtering. DNS at left, box filter at right.
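A quick numerical sketch makes the contrast with the Reynolds average concrete. Here a 1-D box filter (a simple moving average, standing in for the cell average) is applied to a toy signal; unlike the Reynolds average, the filter is not idempotent, and the filtered sub-grid term does not vanish:

```python
import numpy as np

def box_filter(phi, width):
    """Top-hat (box) filter: a moving average over `width` samples,
    the 1-D analogue of averaging over a single CFD cell."""
    kernel = np.ones(width) / width
    return np.convolve(phi, kernel, mode="same")

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 512)
phi = np.sin(x) + 0.5 * rng.normal(size=x.size)  # resolved + small-scale content

filt = box_filter(phi, 16)
subgrid = phi - filt  # the double-prime term

# unlike the Reynolds average, filtering twice keeps changing the field...
print(np.allclose(box_filter(filt, 16), filt))    # False
# ...and the filtered sub-grid term is not identically zero
print(np.allclose(box_filter(subgrid, 16), 0.0))  # False
```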

To derive RANS and LES turbulence models, we apply these decompositions to the Navier-Stokes equations. For simplicity, let’s consider only the incompressible momentum equation. The Reynolds-averaged momentum equation is written as

$$\frac{\partial \overline{u_i}}{\partial t} + \frac{\partial \overline{u_i}\,\overline{u_j}}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \overline{P}}{\partial x_i} + \frac{1}{\rho}\frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial \overline{u_i}}{\partial x_j} + \frac{\partial \overline{u_j}}{\partial x_i}\right) - \frac{2}{3}\mu\frac{\partial \overline{u_k}}{\partial x_k}\delta_{ij}\right] - \frac{1}{\rho}\frac{\partial}{\partial x_j}\left(\rho\,\color{Red}{\overline{u'_i u'_j}}\right).$$

This equation looks the same as the basic momentum transport equation, replacing each variable with the barred equivalent, with the exception of the term* in red. That’s where the RANS model will make a contribution.

The LES momentum equation, again for incompressible flow (so that Favre filtering can be neglected), is written

$$\frac{\partial \left\langle u_i \right\rangle}{\partial t} + \frac{\partial \left\langle u_i \right\rangle \left\langle u_j \right\rangle}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \left\langle P \right\rangle}{\partial x_i} + \frac{1}{\rho}\frac{\partial \left\langle \sigma_{ij} \right\rangle}{\partial x_j} - \frac{1}{\rho}\frac{\partial}{\partial x_j}\left(\rho\,\color{Red}{\left\langle u_i u_j \right\rangle} - \rho\left\langle u_i \right\rangle \left\langle u_j \right\rangle\right).$$

Once again, we have introduced a single unclosed term*, shown in red. As with RANS, this is where the LES model will exert its influence.

These terms are physically stress terms. In the RANS case, we call it the Reynolds stress.

$$\tau_{ij,RANS} = -\rho\,\overline{u'_i u'_j}.$$

In the LES case, we define a sub-grid stress as follows:

$$\tau_{ij,LES} = \rho\left(\left\langle u_i u_j \right\rangle - \left\langle u_i \right\rangle \left\langle u_j \right\rangle\right).$$

By convention, the same letter is used to denote these two subtly different terms. It’s common to apply one more assumption to both. Kolmogorov postulated that at sufficiently small scales, turbulence was statistically isotropic, with no preferential direction. He also postulated that turbulent motions were self-similar. The eddy viscosity approach invokes both concepts, treating

$$\tau_{ij,RANS} = f\left(\mu_t, \overline{V}\right)$$

and

$$\tau_{ij,LES} = g\left(\mu_t, \overline{V}\right),$$

where \(\overline V \) represents the vector of transported variables: mass, momentum, energy, and model-specific variables like turbulent kinetic energy. We have also introduced \({\mu _t}\), which we call the turbulent viscosity. Its effect is to dissipate kinetic energy in a similar fashion to molecular viscosity, hence the name.

If you skipped the math, here’s the takeaway. We have one unclosed term* each in the RANS and LES momentum equations, and in the eddy viscosity approach, we close it with what we call the turbulent viscosity \({\mu _t}\). Yet we know that RANS and LES have very different behavior. How does a CFD package like CONVERGE “know” whether that \({\mu _t}\) is supposed to behave like RANS or like LES? Of course the equations don’t “know”, and the solver doesn’t “know”. The behavior is constructed by the functional form of \({\mu _t}\).

How can the turbulent viscosity’s functional form construct its behavior? Dimensional analysis informs us what this term should look like. A dynamic viscosity has dimensions of density multiplied by length squared per time. If we’re looking to model the turbulent viscosity based on the flow physics, we should introduce dimensions of length and time. The key to the difference between RANS and LES behavior is in the way these dimensions are introduced.

Consider the standard k-ε model. It is a two-equation model, meaning it solves two additional transport equations. In this case, it transports turbulent kinetic energy (k) and the turbulent kinetic energy dissipation rate (ε). This model calculates the turbulent viscosity according to the local values of these two flow variables, along with density and a dimensionless model constant as

$${\mu _t} = {C_\mu }\rho \frac{{{k^2}}}{\varepsilon }.$$

Dimensionally, this makes sense. Turbulent kinetic energy is a specific energy with dimensions of length squared per time squared, and its dissipation rate has dimensions of length squared per time cubed. In a sufficiently well-resolved solution, all of these terms should limit to finite values, rather than limiting to zero or infinity. If so, the turbulent viscosity should limit to some finite value, and it does.

Figure 3 Example of a grid-converged RANS simulation: the ECN Spray A case, with a contour plot for illustration.
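The grid-independence argument above can be made concrete in a few lines of Python (the density, k, and ε values here are illustrative, not from a real simulation). The point is that the formula contains no grid spacing at all, so once k and ε are grid-converged, the turbulent viscosity is too:

```python
def mu_t_k_epsilon(rho, k, eps, c_mu=0.09):
    """Turbulent viscosity of the standard k-epsilon model,
    mu_t = C_mu * rho * k^2 / eps. No grid spacing appears:
    refine the mesh and, once k and eps converge, mu_t
    converges with them to a finite value."""
    return c_mu * rho * k**2 / eps

# representative air-like values for grid-converged k and eps
rho, k, eps = 1.2, 0.5, 10.0     # kg/m^3, m^2/s^2, m^2/s^3
for dx in (1e-2, 1e-3, 1e-4):    # mesh refinement changes nothing here
    print(dx, mu_t_k_epsilon(rho, k, eps))
```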

LES, in contrast, directly introduces units of length via the spatial filtering process. Consider the Smagorinsky model. This is a zero-equation model that calculates turbulent viscosity in a very different way. For the standard Smagorinsky model,

$${\mu _t} = \rho C_s^2{\Delta ^2}\sqrt {{S_{ij}}{S_{ij}}},$$

where \({C_s}\) is a dimensionless model constant, \({S_{ij}}\) is the filtered rate of strain tensor, and Δ is the grid spacing. Once again, the dimensions work out: density multiplied by length squared multiplied by inverse time. But what do the limits look like? The rate of strain is some physical quantity that will not limit to infinity. In the limit of infinitesimal grid size, the turbulent viscosity must limit to zero! The model becomes completely inactive, and the equations solved are the unfiltered Navier-Stokes equations. We are left with a direct numerical simulation.
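The limiting behavior is just as easy to demonstrate numerically (Cs = 0.17 is a commonly quoted value for the Smagorinsky constant; the density and strain-rate magnitude are illustrative). Because the filter width enters squared, the sub-grid viscosity collapses quadratically under grid refinement:

```python
def mu_t_smagorinsky(rho, delta, s_mag, c_s=0.17):
    """Standard Smagorinsky sub-grid viscosity,
    mu_t = rho * C_s^2 * delta^2 * |S|, where |S| = sqrt(S_ij S_ij).
    The filter width delta (the grid spacing) enters squared."""
    return rho * c_s**2 * delta**2 * s_mag

rho, s_mag = 1.2, 50.0  # kg/m^3 and 1/s: a finite, physical strain rate
for delta in (1e-2, 1e-3, 1e-4, 1e-5):
    print(delta, mu_t_smagorinsky(rho, delta, s_mag))
# mu_t falls 100x for every 10x grid refinement: DNS in the limit
```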

When I was a first-year engineering student, discussion of dimensional analysis and limiting behaviors seemed pro forma and almost archaic. Real engineers in the real world just use computers to solve everything, don’t they? Yes and no. Even those of us in the computational analysis world can derive real understanding, and real predictive power, from considering the functional form of the terms in the equations we’re solving. It can even help us design models with behavior we can prescribe a priori.

Detached Eddy Simulation (DES) is a hybrid model, taking advantage of the similarity of functional forms of the turbulent viscosities in RANS and LES. DES adopts RANS-like behavior near the wall, where we know an LES can be very computationally expensive. DES adopts LES behavior far from the wall, where LES is more computationally tractable and unsteady turbulent motions are more often important.

The math behind this switching behavior is beyond the scope of a blog post. In effect, DES solves the Navier-Stokes equations with some effective \({\mu _{t,DES}}\) such that \({\mu _{t,DES}} \approx {\mu _{t,RANS}}\) near the wall and \({\mu _{t,DES}} \approx {\mu _{t,LES}}\) far from the wall, with \({\mu _{t,RANS}}\) and \({\mu _{t,LES}}\) selected and tuned so that they are compatible in the transition region. Our understanding of the derivation and characteristics of the RANS and LES turbulence models allows us to hybridize them into something new.
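As a crude sketch of the switching idea, the classic DES formulation (after Spalart and co-workers) substitutes a single turbulent length scale: the RANS scale (wall distance) where the grid cannot support LES, and a grid-proportional LES scale elsewhere. The toy below uses the commonly quoted constant C_DES = 0.65 and made-up distances; it is not any particular solver's implementation:

```python
def des_length_scale(d_wall, delta, c_des=0.65):
    """Classic DES length-scale switch: take the RANS scale
    (distance to the wall) near the wall and the LES scale
    (C_DES * grid spacing) away from it."""
    return min(d_wall, c_des * delta)

delta = 0.01  # local grid spacing, m (illustrative)
for d in (0.001, 0.005, 0.02, 0.1):  # marching away from the wall
    scale = des_length_scale(d, delta)
    mode = "RANS-like" if scale == d else "LES-like"
    print(f"d = {d:<6} l_DES = {scale:.4f} ({mode})")
```

Because the turbulent viscosity is built from this length scale, the model automatically behaves like RANS where the wall distance is the smaller scale and like LES where the grid scale takes over.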

Figure 4 DES simulation over a backward-facing step with CONVERGE.

*This term is a symmetric second-order tensor, so it has six scalar components. In some approaches (e.g., Reynolds Stress models), we might transport these terms separately, but the eddy viscosity approach treats this unknown tensor as a scalar times a known tensor.

Numerical Simulations using FLOW-3D top

► FLOW-3D World Users Conference 2020 Conference Announced
  12 Dec, 2019

Santa Fe, NM, December 12, 2019 — In conjunction with its 40th anniversary, Flow Science, Inc. will hold the FLOW-3D World Users Conference 2020 on June 8-10, 2020 at the Maritim Hotel in Munich, Germany. Customers from around the world have been invited to the FLOW-3D World Users Conference 2020 to celebrate Flow Science’s milestone anniversary. Co-hosted by Flow Science Deutschland, this year’s conference features metal casting and water & environmental application tracks, advanced training sessions, in-depth technical customer presentations, and the latest product developments presented by Flow Science’s senior technical staff. Attendees will also enjoy a tour of the BMW Museum as part of the conference’s social events.

Flow Science has confirmed that Hubert Lang of BMW will be this year’s keynote speaker. Hubert Lang has worked in BMW’s Light Metal Foundry in Landshut, Germany since 1998. Introduced to FLOW-3D’s metal casting capabilities in 2005, Lang has led the expansion of BMW’s use of FLOW-3D. Today BMW uses FLOW-3D for a wide range of metal casting processes and special projects.

“This year’s conference is particularly special. Not only are we celebrating our 40th anniversary with our customers around the world, but we are very pleased to welcome our keynote speaker Hubert Lang to honor our 15 years of partnership with BMW. Hubert will showcase some of BMW’s innovative designs for which FLOW-3D has played an indispensable role over the years. Since starting out as pioneers in computational fluid dynamics (CFD) 40 years ago, we continue to develop cutting-edge software to enable customers like BMW to solve the toughest CFD problems around the world,” said Dr. Amir Isfahani, CEO of Flow Science.

The call for abstracts is now open. Customers are encouraged to share their experiences, present their success stories, case studies and validations, and obtain valuable feedback from their peers and Flow Science staff. Topics include but are not limited to: metal casting, additive manufacturing, civil & municipal hydraulics, micro/nano/bio fluidics, aerospace and automotive applications. The deadline to submit an abstract is Friday, April 17.

Advanced training sessions for FLOW-3D’s family of products will be offered as part of the conference. Taught by senior technical staff and experts in their fields, advanced training topics include version up seminars for FLOW-3D CAST and FLOW-3D AM users, as well as sessions focused on troubleshooting techniques and municipal applications using FLOW-3D. Detailed information about these training sessions is available on the training page.

Online registration for the conference is now available.

About Flow Science

Flow Science, Inc. is a privately-held software company specializing in transient, free-surface CFD flow modeling software for industrial and scientific applications worldwide. Flow Science has distributors for FLOW-3D sales and support in nations throughout the Americas, Europe, and Asia. Flow Science is located in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.
683 Harkle Rd.
Santa Fe, NM 87505
Attn: Amanda Ruggles
+1 505-982-0088

► Training Sessions at the FLOW-3D World Users Conference 2020
    6 Dec, 2019

Advanced Training Sessions

In conjunction with the FLOW-3D World Users Conference 2020, advanced training sessions will be held the afternoon of June 8 at the conference hotel. Taught by senior technical staff and experts in their fields, advanced training topics include version up seminars for FLOW-3D CAST and FLOW-3D AM users, as well as sessions focused on troubleshooting techniques and municipal applications using FLOW-3D. The courses are designed so that everyone, regardless of their application, can participate in the troubleshooting session. You can sign up for multiple training sessions when you register online.

Version Up: FLOW-3D CAST

Instructor: Dr.-Ing. Dipl.-Phys. Matthias Todte, Flow Science Germany

This one-hour FLOW-3D CAST training course will begin with an introductory overview and a review of the new features and GUI design changes in FLOW-3D CAST v5.1. Through examples, new workspaces will be covered in detail, including Investment Casting, Continuous, Sand Core Making, and Centrifugal, as well as the new Exothermic Sleeve capabilities and database. We will also discuss the new solidification model available in FLOW-3D CAST v5.1.

Training Details

Date: Monday, June 8
Time: 13:00 – 14:00
Cost: 100 €

Version Up: FLOW-3D AM

Instructor: Raed Marwan, President, Flow Science Japan

This one-hour course is open to FLOW-3D AM users as well as those interested in exploring the powerful capabilities of FLOW-3D AM for simulating additive manufacturing and laser welding processes. An overview of the additive manufacturing processes that can be simulated using FLOW-3D AM will be briefly introduced at the beginning of the training. The training will then focus on how to set up simulations for Selective Laser Melting (SLM) processes. The training will cover the powder laying process for single and multi-layer beds, powder spreading, and powder melting.

Training Details

Date: Monday, June 8
Time: 14:00 – 15:00
Cost: 100 €

Municipal Applications

Instructor: Brian Fox, MSc, Senior Water & Environmental Applications Engineer, Flow Science

CFD is rapidly gaining use as an advanced tool for the design and analysis of municipal systems for stormwater conveyance and water/wastewater treatment. FLOW-3D’s well-known strengths in free surface simulation provide excellent capabilities for simulating the complex flows encountered in stormwater conveyance structures. Our multiphysics capabilities offer a powerful tool for linking the physical, chemical and biological processes that are critical for the design and analysis of water/wastewater treatment systems.

In this two-hour training session we will explore FLOW-3D’s current capabilities along with recent and proposed developments for municipal applications. This class will be divided into four segments:

  • Review of air entrainment and two-fluid options for spiral, baffle and tangential dropshafts 
  • Simulation of contact tanks with the reaction kinetics model
  • How to use the settling sludge model for clarifier technology applications
  • Activated sludge modeling: advanced chemistry in FLOW-3D

Attendees will leave the training with in-depth knowledge of FLOW-3D’s modeling capabilities for municipal applications. For users interested in expanding their service offerings, this is an excellent opportunity to learn about capabilities for this exciting and fast-growing market.

Training Details

Date: Monday, June 8
Time: 13:00 – 15:00
Cost: 200 €

Troubleshooting Techniques

This two-hour training is intended for all users of FLOW-3D products, regardless of application.

Instructor: Brian Fox, Senior Water & Environmental Applications Engineer, Flow Science

Understanding how to identify and resolve simulation and setup issues is a critical skill for every serious CFD modeler. In this workshop we will discuss how to efficiently diagnose and address issues with FLOW-3D simulations to help keep projects moving forward on schedule. Beginning with troubleshooting techniques and the overall process, we will review the methods and practical tools available in FLOW-3D for identifying, investigating, and diagnosing simulation errors. We will then discuss model setup options that can be used to address these issues. Throughout the class, we will apply this approach to interactively troubleshoot several real simulations, demonstrating techniques that will help you work more efficiently on your own simulations. Finally, we will describe how to use the ideas from this training in a preventive manner in your workflow.

Training Details

Date: Monday, June 8
Time: 15:00 – 17:00
Cost: 200 €

► FLOW-3D World Users Conference Registration
  25 Nov, 2019

Register for the FLOW-3D World Users Conference 2020

Registration Fees

  • Day 1 and 2 of the conference: 300 €
  • Day 1 of the conference: 200 €
  • Day 2 of the conference: 200 €
  • Guest Fee: 50 €
  • Opening Reception: included with registration
  • BMW Tour: included with registration
  • Conference Dinner: included with registration

Advanced Training Fees

  • Version Up: FLOW-3D CAST (1 hr) – 100 €
  • Version Up: FLOW-3D AM (1 hr) – 100 €
  • Municipal Applications (2 hrs) – 200 €
  • Troubleshooting (2 hrs) – 200 €

Registration and training fees are waived for conference speakers (one per presentation). 

    Presenters are strongly encouraged to attend both days of the conference.
  • A 50 € charge includes access to the opening reception, tour, and conference dinner. It does not include access to the conference itself.
► HPC Release of FLOW-3D v12.0
  13 Nov, 2019

The HPC-enabled FLOW-3D v12.0 takes full advantage of the most advanced hardware available with pricing options available for entry-level through enterprise scale clients.

Santa Fe, NM, November 13, 2019 — Flow Science, Inc. has announced that it has released the HPC-enabled FLOW-3D v12.0. The HPC version of FLOW-3D v12.0 can be run on in-house clusters or on the FLOW-3D CLOUD software-as-a-service platform, which provides high performance computing as well as the lowest cost entry point to FLOW-3D.

Existing HPC customers and IT admins will benefit from one-time cluster hardware configuration setup, improved support for multiple job schedulers, and a simplified interface for setting up simulations on any compatible HPC platform.

“Our HPC-enabled products, in tandem with our cloud platform, which boasts the latest and greatest hardware, allow us to provide our customers the tools they need, when they need them, in order to accelerate their R&D and stay ahead of their competitors. Whether you are running FLOW-3D on a single core or thousands of CPU cores, FLOW-3D is engineered to take full advantage of the ongoing advancements in hardware,” said Flow Science CEO Amir Isfahani.

FLOW-3D v12.0 marks an important milestone in the design and functionality of the graphical user interface, which simplifies model setup and improves user workflows. A state-of-the-art Immersed Boundary Method brings greater accuracy to FLOW-3D v12.0’s solutions. Featured developments include the Sludge Settling Model, the 2-Fluid 2-Temperature Model, and the Steady State Accelerator, which allows users to model their free surface flows even faster. The HPC-enabled version of FLOW-3D v12.0 allows customers to access these advanced simulation options at an accelerated pace. Performance benchmarks of the HPC-enabled version of FLOW-3D v12.0 are available.

“From running design variations simultaneously to solving fine-resolution, large, and highly complex design scenarios that take weeks to run on a high-end workstation, our HPC-enabled products get you the answer you need as quickly as possible on your in-house cluster or on our cloud platform,” added Amir Isfahani.

A live webinar will provide an overview of high performance computing and FLOW-3D CLOUD with an emphasis on deploying hardware and software resources on demand, such as performance benchmarks for understanding scaling and speed-up on the cloud. The webinar will take place on December 11, 2019 at 1:00 pm EST. Online registration is available.

Flow Science has made this new release available to customers who are currently under maintenance contracts.

About Flow Science

Flow Science, Inc. is a privately-held software company specializing in transient, free-surface CFD flow modeling software for industrial and scientific applications worldwide. Flow Science has distributors for FLOW-3D sales and support in nations throughout the Americas, Europe, and Asia. Flow Science is located in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.
683 Harkle Rd.
Santa Fe, NM 87505
Attn: Amanda Ruggles
+1 505-982-0088

► FLOW-3D World Users Conference 2020
    9 Nov, 2019

We invite our customers from around the world to join us at the FLOW-3D World Users Conference 2020 to celebrate 40 years of FLOW-3D.

The conference will be held on June 8-10, 2020 at the Maritim Hotel in Munich, Germany. Join engineers, researchers and scientists from some of the world’s most renowned companies and institutions to hone your simulation skills, explore new modeling approaches and learn about the latest software developments. This year’s conference features metal casting and water & environmental application tracks, advanced training sessions, in-depth technical presentations by our customers, and the latest product developments presented by Flow Science’s senior technical staff. The conference will be co-hosted by Flow Science Deutschland.

We are extremely pleased to announce that Hubert Lang of BMW will be this year’s keynote speaker.

Keynote Speaker Announced! 

Hubert Lang, BMW, Keynote Speaker
Hubert Lang, BMW, Keynote Speaker at the FLOW-3D World Users Conference 2020

15 years of FLOW-3D at BMW

Hubert Lang studied Mechanical Engineering with a focus on automotive engineering at Landshut University of Applied Sciences. In 1998, he started in BMW’s Light Metal Foundry in Landshut, working in their tool design department, where he oversaw the development of casting tools for six-cylinder engines. In 2005, Hubert moved to the foundry’s simulation department, where he was introduced to FLOW-3D’s metal casting capabilities. Since then, he has led considerable expansion in the use of FLOW-3D, both in the volume of simulations as well as the number of application areas.

Today, BMW uses FLOW-3D for sand casting, permanent mold gravity casting, low pressure die casting, high pressure die casting, and lost foam casting. FLOW-3D has also been applied to several special projects at BMW, such as supporting the development of an inorganic binder system for sand cores through the development of a core drying model; calculation of the heat input during coating of cylinder liners; the development of the casting geometry for the injector casting procedure; and the layout and dimensioning of cooling systems for casting tools. 

BMW Museum Tour

We are pleased to offer a tour of the BMW Museum as part of the conference offerings. The tour will take place at 17:30 after the technical proceedings on Tuesday, June 9. You can sign up for the tour when you register for the conference.

BMW Museum Tour
Exterior architectural detail of the BMW Welt building.

Conference Information

Important Dates

  • April 17: Abstracts Due
  • May 1: Abstracts Accepted
  • May 29: Presentations Due
  • June 8: Advanced Training Sessions
  • June 8: Opening Reception
  • June 9: Tour of the BMW Museum
  • June 9: Conference Dinner

Registration Fees

  • Day 1 and 2 of the conference: 300 €
  • Day 1 of the conference: 200 €
  • Day 2 of the conference: 200 €
  • Guest Fee: 50 €
  • Opening Reception: included with registration
  • BMW Tour: included with registration
  • Conference Dinner: included with registration

Advanced Training Topics

Taught by senior technical staff and experts in their fields, advanced training topics include Version Up seminars for FLOW-3D CAST and FLOW-3D AM users, as well as sessions focused on Troubleshooting techniques and Municipal applications. The courses are designed so that everyone, regardless of their application, can participate in the Troubleshooting Session. You can sign up for these training sessions when you register online.

Training Times and Fees

  • June 8 – 13:00 – 14:00 – Version Up: FLOW-3D CAST (1 hr) – 100 €
  • June 8 – 14:00 – 15:00 – Version Up: FLOW-3D AM (1 hr) – 100 €
  • June 8 – 13:00 – 15:00 – Municipal Applications (2 hrs) – 200 €
  • June 8 – 15:00 – 17:00 – Troubleshooting (2 hrs) – 200 €

Call for Abstracts

Share your experiences, present your success stories and obtain valuable feedback from the FLOW-3D user community and our senior technical staff. We welcome abstracts on all topics including those focused on the following applications:

  • Metal Casting
  • Additive Manufacturing
  • Civil & Municipal Hydraulics
  • Consumer Products
  • Micro/Nano/Bio Fluidics
  • Energy
  • Aerospace
  • Automotive
  • Coating
  • Coastal Engineering
  • Maritime
  • General Applications

Abstracts should include a title, author(s), and a 200-word description. Please email your abstract to by Friday, April 17.

Registration and training fees will be waived for presenters.

Presenter Information

Each presenter will have a 30 minute speaking slot, including Q & A. All presentations will be distributed to the conference attendees and on our website after the conference. A full paper is not required for this conference. Please contact us if you have any questions about presenting at the conference. Flow Science Deutschland will sponsor this year’s Best Presentation Award.

Conference Dinner

This year’s conference dinner will be held in the ever-popular Augustiner-Keller. All conference attendees and their guests are invited to join us on Tuesday, June 9 for a traditional German feast in a beautiful and famous beer garden. The conference dinner will take place following the BMW Tour.


► Software Engineer
  30 Oct, 2019

Our software, FLOW-3D, is used by engineers, designers, and scientists at top manufacturers and institutions throughout the world to simulate and optimize product designs. Many of the products used in our daily lives, from many of the components in an automobile to the paper towels we use to dry our hands, have actually been designed or improved through the use of FLOW-3D.

Open position

Flow Science has a job opportunity for a motivated, creative and collaborative Software Engineer. As a Software Engineer, you will use your object-oriented programming skills to create and maintain the user interface between our simulation software and the end user. You’ll have the opportunity to combine your creative skills with your analytical skills to contribute to a powerful tool used by customers around the world.

Required education, experience, and skills

  • A Bachelor’s degree in computer science, computer engineering, or related degree
  • A minimum of three years programming experience in a structured software environment, or academic setting
  • Object-oriented programming skills using C++
  • Graphical User Interface (GUI) development experience
  • Comfortable in both Windows and Linux environments

Preferred experience and skills

  • Knowledge in modern design/architectural patterns
  • Experience with Qt framework
  • Comfortable with version control systems such as Git or SVN
  • Experience with UML.
  • C++ 11/14/17 knowledge.
  • OpenGL/graphics programming
  • Familiarity with the VTK API


Flow Science offers an exceptional compensation and benefits package superior to those offered by most large companies. Flow Science employees receive a competitive base salary, employer paid medical, dental, vision coverage, life and disability insurances, relocation assistance, 401(k) retirement plan with extremely generous employer matching, and an outstanding incentive compensation plan that offers year-end bonus opportunity.


A resume and cover letter should be e-mailed to

Learn more about careers at Flow Science >

Mentor Blog top

► Blog Post: Article Roundup: Wally Rhines Chapter Twelve – The Future, Mentor’s Questa verification tools now run on 64-bit ARM based servers, Why EV Battery Design Is So Difficult, SMTAI 2019: Nir Benson Discusses Mentor's Challenges and Solutions & Evolving to Meet the Challenges for Electronics Manufacturers
  21 Jan, 2020

  • Wally Rhines Chapter Twelve – The Future
  • Mentor’s Questa verification tools now run on 64-bit ARM based servers
  • Why EV Battery Design Is So Difficult
  • SMTAI 2019: Nir Benson Discusses Mentor’s Challenges and Solutions
  • Evolving to Meet the Challenges for Electronics Manufacturers

Wally Rhines Chapter Twelve – The Future (SemiWiki): One of the great opportunities for the semiconductor industry is

► Blog Post: Article Roundup: Hydrogen Powered Cars, On-Demand DRC with P&R, AV SoC Functional Safety, Automating CDC Verification & Advanced Packaging LVS and LVL issues
  16 Jan, 2020

  • Betting on Hydrogen-Powered Cars
  • On-demand DRC within P&R cuts closure time in half for MaxLinear
  • Functional Safety Verification For AV SoC Designs Accelerated With Advanced Tools
  • Automating the pain out of clock domain crossing verification
  • Mentor unpacks LVS and LVL issues around advanced packaging

Betting on Hydrogen-Powered Cars (SemiEngineering): Hydrogen fuel cell vehicles are a compelling

► Technology Overview: Understanding Pressure Loss
  10 Jan, 2020

Early pressure drop investigations and optimizations are crucial for today’s efficient designs. This short 3-minute video shows you how to conduct pressure drop characterizations, such as the determination of Kv or Cv coefficients, in Simcenter FLOEFD, the leading frontloading CFD simulation solution for design engineers.

Frontloading CFD refers to the practice of moving CFD simulation early into the design process, where proposed models can be quickly analyzed and improved. Simcenter FLOEFD is embedded in CAD, making CFD “plug and play.” Due to key technologies at its core, Simcenter FLOEFD is easy to use, fast, and accurate. In fact, design engineers using Simcenter FLOEFD have experienced 2x to 40x productivity improvements. While this video features the FloEFD for Solid Edge interface, you can expect the same level of integration with other popular design programs including Siemens NX, PTC Creo, and CATIA V5.
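For context on the Kv and Cv characterizations mentioned above: both coefficients follow directly from a measured flow rate and pressure drop. A minimal sketch of the standard definitions (the function names are my own, not part of Simcenter FLOEFD):

```python
import math

def kv(q_m3h, dp_bar, sg=1.0):
    """Metric flow coefficient Kv: the water-equivalent flow in m^3/h that
    produces a 1 bar pressure drop. Kv = Q * sqrt(SG / dp), with Q in m^3/h,
    dp in bar, and SG the specific gravity relative to water."""
    return q_m3h * math.sqrt(sg / dp_bar)

def cv_from_kv(kv_value):
    """US flow coefficient Cv (gpm at 1 psi drop); Cv is approximately 1.156 * Kv."""
    return 1.156 * kv_value
```

For example, a valve passing 10 m³/h of water at a 4 bar drop has Kv = 5.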

► Blog Post: Key Partnership Accelerates the Future of Mobility
  10 Jan, 2020

Advanced integrated circuits (ICs) and system-on-chip (SoC) devices coupled with sophisticated software will be crucial enablers of advanced mobility technologies such as advanced driver assistance systems (ADAS), active safety, vehicle connectivity, and automated driving. Computing chips running sophisticated software already enable an immense amount of functionality in today’s vehicles. As vehicle

► Event: What’s new in Simcenter Flotherm 2019.2 and Simcenter Flotherm XT 2019.3
    9 Jan, 2020

New enhancements to Simcenter Flotherm 2019.2 and Simcenter Flotherm XT 2019.3 electronics cooling software include a new results analysis mode and post-processing automation, Simcenter Flotherm Package Creator, PCB copper thermal modeling options, and more.

► Technology Overview: Electronics Cooling Solutions: Optimizing deep learning machine microchannel liquid cooling
    8 Jan, 2020

Guy Wagner, of Electronic Cooling Solutions Inc., details how Simcenter Flotherm XT CFD simulation software provides the capabilities to optimize the thermal management design of a liquid-cooled deep learning machine that uses neural networks.

Engineers are able to accurately model fluid flow in all microchannels and passages within a module to meet requirements for high heat dissipation and ensure that multiple ICs operate in appropriate temperature ranges. As a result, ECS is able to model product performance in detail using the exact complex CAD geometry that their clients prefer.

Tecplot Blog top

► Tecplot Macro Tutorials
  22 Jan, 2020

This blog post on Tecplot macros was adapted from a webinar entitled Ask the Expert About Tecplot 360, hosted by Scott Fowler, Tecplot 360 Product Manager. We received numerous questions about Tecplot macros, and this post pulls our macro resources together in one place.

What is a Tecplot macro? A Tecplot macro is a set of instructions, called macro commands, that perform actions in Tecplot 360. Macro commands can be used to accomplish virtually any task that can be done via the Tecplot 360 interface, offering an easy way to automate Tecplot 360 processes.

We recommend that you download the macros, datasets and layout files used in the descriptions below. If you do not already have Tecplot 360 running, you can download a Free Trial.

Macro, Data and Layout Files (17MB ZIP)

Using a Macro to Produce Transient Plots

This macro steps through the time steps in your dataset and produces transient plots. It first finds out how many time steps are in your dataset, then loops over them. At each iteration, the current time step is set and an image is exported. That’s it!


Extend Time Macro Addon

The Extend Time Macro add-on simplifies the macro interface by allowing you to use a simple loop to query the number of solution times in the dataset and advance the time step. This differs from the native Tecplot macro language as it does not require that you know the solution time of your data.

This add-on uses a different algorithm than Tecplot 360 EX for sorting the solution times. Because Tecplot 360 combines time steps that are sufficiently close together, the number of time steps reported by this add-on may differ from the number of time steps reported by Tecplot 360.

You can load this add-on by adding the following line to your tecplot.add file. See Sections 31-3.5 and 31-1.2 in the Tecplot 360 User’s Manual.

$!LoadAddOn "tecutilscript_extendtime.mcr"

Converting Binary Files to ASCII

A question asked during the webinar was “How do I convert TEC files to binary?” The .tec file extension has been adopted by the greater Tecplot community. These files are usually ASCII, but in this case, the TEC file was a binary file. So, the question becomes “How do I convert from binary to ASCII?” Well, that’s easy!

Let’s backtrack for a moment and review the canonical Tecplot file extensions:

  • PLT (.plt) – Tecplot binary format.
  • DAT (.dat) – Tecplot ASCII format.
  • SZL (.szplt) – Tecplot Subzone Load-on-demand binary format. This format allows you to load large volumetric grids with very little RAM.

» Read the “Comparison of Tecplot Data File Formats” blog on Tecplot file types.

We don’t have a standalone utility to convert binary files to ASCII, but you can do it using the macro binary_to_ascii.mcr (also included in the ZIP file mentioned above). In the macro, you simply use the ReadDataSet command and then a WriteDataSet command.

#!MC 1410
$!ReadDataSet  '"VortexShedding.plt" '
  ReadDataOption = New
  ResetStyle = No
  VarLoadMode = ByName
  AssignStrandIDs = Yes
$!WriteDataSet  "VortexShedding.dat"
  IncludeText = No
  IncludeGeom = No
  IncludeCustomLabels = No
  IncludeDataShareLinkage = Yes
  Binary = No
  UsePointFormat = No
  Precision = 9
  TecplotVersionToWrite = TecplotCurrent

From the command line, you would call tec360.exe and pass it this macro. Note that the file names in the macro need to be hard-coded.

>tec360 -b -p binary_to_ascii.mcr
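Because the file names must be hard-coded in the macro, converting many files is easiest if a small script writes a one-off macro per file and runs tec360 in batch mode for each. A sketch of that idea (the helper function and macro template here are my own, not something shipped with Tecplot):

```python
import subprocess
from pathlib import Path

# Minimal conversion macro; {plt} and {dat} are filled in per file.
MACRO_TEMPLATE = """#!MC 1410
$!ReadDataSet  '"{plt}" '
  ReadDataOption = New
$!WriteDataSet  "{dat}"
  Binary = No
"""

def binary_to_ascii(plt_path, tec360="tec360"):
    """Write a per-file macro next to the .plt file, then run it in batch mode."""
    plt = Path(plt_path)
    dat = plt.with_suffix(".dat")
    macro = plt.with_suffix(".mcr")
    macro.write_text(MACRO_TEMPLATE.format(plt=plt.name, dat=dat.name))
    subprocess.run([tec360, "-b", "-p", str(macro)], check=True)
    return dat
```

Calling `binary_to_ascii("VortexShedding.plt")` generates and runs the same macro shown above, without hand-editing file names.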

Here’s a plug for the power of Python!
Using PyTecplot to convert files, in this case binary to ASCII, is much simpler, as you can see in these few lines of code. This Python script is also included in the ZIP file mentioned above.

import tecplot as tp
tp.data.load_tecplot('mybinaryfile.plt')
tp.data.save_tecplot_ascii('myasciifile.dat')

Converting ASCII Files to Binary

Another user wanted to convert ASCII files to binary.

You can use a utility called Preplot. Preplot is included with Tecplot 360 and it’s located in the bin directory in the Tecplot 360 installation. You just pass it an ASCII file (.dat) and tell it the output file name. Preplot will do the conversion for you.

> preplot myasciifile.dat mybinaryfile.plt

» Watch the video tutorial, Preplot and SZL Convert Tools.
» Read the Tecplot 360 User’s Manual.
» Reference the Data Format Guide.

A note about ASCII data: whenever Tecplot 360 loads an ASCII file, it actually converts it to binary as it is being loaded. Because of this, if you need to load a file over and over, it is in your best interest to do the conversion only once. You will save yourself a lot of time!

Placing Streamlines and Streaklines Using Macros

During the webinar, one user was trying to create an arc of streamlines (which we call streamtraces in Tecplot 360). This is not possible using the onscreen Rake tool, which only places straight lines of streamtraces.

If you want to define an arc of streamlines, the best way to do it would be with PyTecplot (but that will be the topic of a future blog). I explain here how to do it with a Tecplot macro.

Here is the Tecplot macro for placing the streamtrace object associated with the frame. In the code below, I’m adding a streamtrace, defining it as a volume line, setting the direction to “both” (default is forward), and placing it at X= 0.5, Y=0.5, Z=0.5. You can create an arc by having multiple of these add commands at different locations.

$!STREAMTRACE ADD
  NUMPTS = 1
  STREAMTYPE = VolumeLine
  DIRECTION = Both
  STARTPOS { X = 0.5 Y = 0.5 Z = 0.5 }
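Those repeated add commands don’t have to be typed by hand: a short Python script can generate one per seed point along an arc, and the result can be pasted into (or saved as) a macro file. This is a sketch of the macro-generation idea, not the PyTecplot approach teased above; the function name is my own:

```python
import math

def arc_streamtrace_macro(cx, cy, z, radius, start_deg, end_deg, n):
    """Emit one $!STREAMTRACE ADD command per seed point, spaced evenly
    along a circular arc around (cx, cy) at height z (n >= 2 points)."""
    cmds = []
    for i in range(n):
        theta = math.radians(start_deg + (end_deg - start_deg) * i / (n - 1))
        x = cx + radius * math.cos(theta)
        y = cy + radius * math.sin(theta)
        cmds.append(
            "$!STREAMTRACE ADD\n"
            "  NUMPTS = 1\n"
            "  STREAMTYPE = VolumeLine\n"
            f"  STARTPOS {{ X = {x:.4f} Y = {y:.4f} Z = {z:.4f} }}"
        )
    return "\n".join(cmds)

# Quarter arc of radius 0.25 around (0.5, 0.5), five seed points:
print(arc_streamtrace_macro(0.5, 0.5, 0.5, 0.25, 0.0, 90.0, 5))
```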

Loading a Saved Frame Style Without the Position

How do I share data across frames without having to reload it? In this example, I want to tile frames, then save the style from the upper-left plot and load it into the lower-right frame. You can do this, but not directly in the Tecplot 360 user interface, so we will use a macro (it can also be done with PyTecplot).

Copy Frame Style Steps

  1. First, create a new frame (click the New Frame icon and draw a new frame), then change the plot type to 2D Cartesian. (I could click on the upper-left frame, save its frame style (Frame>Save Frame Style), then click in the new frame and load that style, but that would also copy the frame size and position, overlaying the original position. Since I want to retain the position, I won’t do it this way.)
  2. Drag and drop the copy_frame_style.mcr macro (from the ZIP file above) onto the Tecplot 360 user interface. Two new entries will appear in the Quick Macro Panel: “Copy frame Style” and “Paste frame Style.” Be sure the stylesheet path in the macro is valid (C:\TEMP\temp_style.sty).
  3. Click on the frame you want to copy, click the Quick Macro “Copy frame Style” and click Play (or simply double-click on the macro name).
  4. Select the new frame and double click on the “Paste frame Style” macro.
  5. Voila! You now have a new frame with the same style. This is one of the macros I use most often.

Copy Frame Style Video Tutorial

Watch MP4 Video

The magic is in the macro’s last command, INCLUDEFRAMESIZEANDPOSITION = NO, which loads the frame style but ignores the position.

Making Macros Persistent in Tecplot 360

If you want these macro functions always available, you can put them in the tecplot.mcr file (located in the Tecplot 360 installation directory).

If you are sharing the Tecplot 360 installation, for example in a Linux environment, you can put the “.tecplot.mcr” file in your Linux home directory so as not to impact other users on your network.

More information can be found in the Tecplot 360 User’s Manual (search for “tecplot.mcr”).

The post Tecplot Macro Tutorials appeared first on Tecplot.

► Ask the Expert about Tecplot 360 – Learn New Techniques!
  10 Jan, 2020

Ask the Expert about Tecplot 360

In this Webinar, customers ask questions of Tecplot 360 Product Manager, Scott Fowler. Scott will help you develop your own internal expertise as he answers your questions interactively. You’ll learn new and improved techniques, current best practices and implementation considerations.

Topics include:

  • Streamlines and Streaklines
  • Macros, PyTecplot
  • Frames & Styles

Download the files used in the webinar containing the presentation, data files, macros and Pytecplot scripts.

Download the 17MB ZIP file

The post Ask the Expert about Tecplot 360 – Learn New Techniques! appeared first on Tecplot.

► 11 Questions About Tecplot 360 Basics
    3 Dec, 2019

These questions were asked during the Webinar, From Zero to Hero: Tecplot 360 Basics. Tecplot 360 Product Manager, Scott Fowler, provides the answers below. Most of the answers show Webinar timestamps so that you can follow along in the Webinar. Learn more about Tecplot 360.

View full Webinar page

Tecplot 360 Basics Questions

1. How do you save all the steps to a macro?

The steps taken in the Webinar (timestamp 36:55) can be recorded to a macro file. Macros are the legacy scripting language of Tecplot 360, and the language on which many of our file formats are based.

  • You can customize the Tecplot 360 user interface using the quick macro panel.
  • You can customize defaults using what we call a configuration file, tecplot.cfg.
  • You can save your current work by saving to a layout file. If you have created new variables, you will also be asked to save the data file. The saved layout file is in this macro language.

Macro Video Tutorials

Tecplot 360 also has a Python API, called PyTecplot, which is available to customers on TecPLUS maintenance. Learn more about PyTecplot.

2. Why is the cylinder surface not showing velocity magnitude?

The cylinder in the Webinar example (timestamp 38:07) is a no-slip wall. Because it is a boundary condition, there are no velocities on the wall. If I want to see the velocities near the wall, I could change J planes to, for example, J=2.

3. Can we plot stagnation energy?

Yes. That is a variable that can be computed. From the main menu (Webinar timestamp 38:46), select Analyze>Calculate Variables and click Select…, then choose Stagnation Energy. See Section 21-3.2 Identifying State Variables in the Tecplot 360 User’s Manual.

4. How do I export a figure without a borderline?

You can hide the border by editing the active frame. To edit a frame (Webinar timestamp 36:55), right-click on the edge of your frame, then choose Edit Active Frame. Uncheck Show border.

Some graphics cards will have a little drop shadow on the right and bottom of plots. This is graphics card dependent. If you see a drop shadow after turning off borders, please see this article on how to correct it in the Tecplot Knowledge Base.

5. When I save a layout and move the layout to another folder, it won’t load.

When you use the menu command File>Save Layout As, there is a toggle (under the Save as type) that says Use relative paths (Webinar timestamp 36:55).

  • Checking the toggle saves the referenced data layout file as a relative path.
  • Unchecking the toggle saves the layout file as an absolute path. You should be able to load your data when it is saved as an absolute path.

You can also edit layout files directly because they are saved as human-readable macro files (Webinar timestamp 39:50). I’ll quickly open a macro file so you can see what the Tecplot macro language looks like.

Here is the layout that I saved earlier. You can see that I have a command to point to linedata.plt. To move this file to a different folder, I need to change the file path to an absolute path. I can update the path in the macro file.

Relative paths are often used by people doing optimization studies when they have a folder hierarchy with, for example, Mach alpha sweep and data within each sub folder. Using a relative path, they can copy a single layout into each subfolder and load that data by relative path.

6. Can you repeat extracting the line plot?

Yes. Use Tools>Probe To Create Time Series Plot, then single click on the plot, and that will update the time series plot (Webinar timestamp 41:20). You can see that as I single click, the line plot updates through time.

For more on extracting, see the Tecplot 360 User’s Manual

7. Is it possible to export a cropped image?

Images cannot be cropped in the traditional sense. However, selecting File>Export from the main menu gives you some options (Webinar timestamp 41:53). You can export three different regions of your plot:

  • All frames – Exports all frames in your workspace.
  • Current frame – Exports only the current frame.
  • Work area – Exports entire “gray” region of your workspace.

To export only a portion of your image, you can do what we call a paper zoom. Zoom in on your plot by holding down the Shift key and the middle mouse button while moving the mouse up. A work area export will export the zoomed-in plot you see in the work area.

8. How can I get two plots in the same frame?

In XY line plots you can plot up to five X-axes and up to five Y-axes (Webinar timestamp 41:20).

In this case we’ll use multiple Y-axes. In the line plot, go into the Mapping Style dialog from the Plot sidebar, and you will see that there are multiple Y-axes. For this example, select RHO and Velocity Magnitude to plot in the same frame, then press Ctrl+F to fit the view. You can see that these have two very different scales. In the Mapping Style dialog, set RHO to Y2, and now RHO is on the Y2 axis.

This will give you two plots in the same frame. Select the adjuster tool to move the label.

9. Can I add LaTeX symbols to the legend?

You cannot use LaTeX symbols in a legend, but you can add LaTeX annotation to your plot and position it near the legend (Webinar timestamp 47:16). Here is how to do that:
  • Select the text tool.
  • On your plot, click where you want to position the text, which brings up the Text Details dialog.
  • Press the LaTeX button (upper right side of Text Details dialog).
  • Add the LaTeX annotation, set the Size in points, and press Accept.
  • Click on the legend, which brings up the Contour & Multi-Coloring Details dialog.
  • Toggle off Show header.
  • Click on the Legend Box… and click No Box.
  • Close the dialog.
  • Click on your LaTeX annotation and move it above the legend.

That will mimic adding LaTeX symbols to the legend.

Watch the Video on LaTeX Fonts

10. What is the Shade used for?

In 3D plots, zone effects (translucency and lighting) cause color variation (shading) throughout the zone(s). Shading can also help you discern the shape of the plot. To add shading to your plot, toggle-on Shade in the Plot sidebar. Use the Shade page of the Zone Style dialog to customize shading.

For information on translucency and lighting zone effects refer to Chapter 13, and for information on shade, refer to section 12-1 in the Tecplot 360 User’s Manual.

11. How do I perform a Fourier Transform?

In XY line plots, navigate to the Data>Fourier Transform… menu option. Follow along in this video for a demonstration: Loading Excel Data and FFT in Tecplot 360.
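Under the hood, a Fourier transform of an XY line decomposes the signal into per-frequency magnitudes. A plain-Python illustration of the idea (a naive DFT for clarity, not Tecplot’s implementation):

```python
import math

def dft_magnitude(samples):
    """Single-sided magnitude spectrum of a real, evenly sampled signal.
    Naive O(n^2) DFT; fine for illustrating what an FFT computes."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        scale = 1.0 if k in (0, n // 2) else 2.0  # fold negative frequencies
        mags.append(math.hypot(re, im) * scale / n)
    return mags

# A pure sine at 4 cycles per record shows a single unit peak in bin 4:
spectrum = dft_magnitude([math.sin(2 * math.pi * 4 * i / 64) for i in range(64)])
```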

The post 11 Questions About Tecplot 360 Basics appeared first on Tecplot.

► Multiple Domains, Multiple Scales, One Visualization Tool
  13 Nov, 2019

Case study contributed by Michael Callaghan, PhD, P.Eng – Senior Applications Engineer, Aquanty

Aquanty is a leading-edge water resources science and technology firm specializing in predictive analytics, simulation and forecasting, research services, and IoT. Aquanty’s solutions and services are deployed globally across a broad range of industrial sectors, including agriculture, oil and gas, mining, watershed management, contaminant remediation, and nuclear storage and disposal. Aquanty’s flagship platform, HydroGeoSphere, is a class leader in fully integrated three-dimensional surface/subsurface modeling.

What Is an Integrated Surface-Subsurface Hydrologic Cycle Model?

Figure 1. Hydrological Cycle.

HydroGeoSphere (HGS) is a three-dimensional control-volume finite element simulator designed to simulate the entire terrestrial portion of the hydrologic cycle. It uses a globally implicit approach to simultaneously solve the 2D diffusive-wave equation and the 3D form of Richards’ equation.

The basis for HGS’ integrated computation is multiple 1D, 2D, and 3D ‘domains’ that interact with each other, including:

  • a 2D overland flow domain,
  • a 3D subsurface flow domain that can include separate discrete fracture and dual permeability domains,
  • 1D surface flow channels,
  • 1D subsurface tile drains,
  • and 1D water wells.

Data-intensive model output for the hydrologic cycle benefits from the flexibility of Tecplot 360 for visualization, which often requires plotting multiple domains simultaneously and in 3D.

“We chose Tecplot 360 because of the quality of the plots. We need to present our results to clients, and for that plot quality there is no substitute to Tecplot 360.”

– Michael Callaghan, PhD, P.Eng, Senior Applications Engineer, Aquanty

Visualization of Surface Water – Groundwater Interaction

Simulating groundwater-surface water interaction in complex topography, such as hummocky terrain with so-called fill-and-spill behavior, has traditionally been viewed as a significant challenge by the hydrologic modelling community.

However, with HGS, the complex processes by which water movement is influenced by the combination of surface topography and highly variable subsurface hydrostratigraphy or preferential flow pathways can be readily reproduced.

In the example shown in Figure 2, precipitation falls on an upland area, initiating overland flow and the filling and spilling of surface depressions. The Tecplot 360 animation illustrates depression-focused groundwater recharge occurring beneath the depressions, with both a perched water table and a fractured aquitard influencing subsurface water movement.

Figure 2. Visualization of surface water-groundwater interaction.

Flood Inundation Visualization

HGS models may range across a number of scales, from centimeters to meters to kilometers to hundreds of kilometers. The use of an unstructured finite element mesh makes this possible.

Tecplot 360’s inherent flexibility with unstructured grids makes it a very useful visualization tool across many scales of problems.

In this application, HydroGeoSphere is being used to recreate the Southern Alberta, Canada flooding that occurred in June 2013. The simulation results presented in Figure 3 depict a flood pulse derived from basin scale hydrologic simulations being routed across a local scale model of the City of Medicine Hat, Alberta, Canada. LiDAR-derived topography was used as input for this model, and results show excellent agreement between simulated and observed high water marks.

Figure 3. Flood inundation visualization.

The visualization cases above are made possible with Tecplot 360. HGS is tightly integrated with the Tecplot ecosystem, using a powerful post-processing tool to produce results directly in Tecplot file formats. Tecplot 360 is an essential tool for everyday 2D data plotting, high-quality 3D model visualization, and results inspection and evaluation.

Learn more about Aquanty and HydroGeoSphere »

Try Tecplot 360 for Free

The post Multiple Domains, Multiple Scales, One Visualization Tool appeared first on Tecplot.

► Tecplot Add-in for Excel
    6 Nov, 2019

Tecplot Add-in for Excel

This tutorial is for using the Tecplot add-in for Excel. It is available for Windows and is found in the \util folder of your Tecplot 360 installation.

To enable the add-in for Excel, run the file RunTecplot5.xla, and instruct Excel to enable macros. The tool will then be found under the add-ins tab of Excel.

This add-in loads cells from left to right and top to bottom, so when selecting the cells that you wish to load into Tecplot 360, start in the upper left and move to the lower right.

This first example is in table format with multiple dependent variables in one zone.

Once the cells have been selected, click Send to Tecplot. This add-in will then create a new ASCII file from the cell data that is then loaded into Tecplot 360. We can toggle on mappings for the other variables in the Mapping Style dialog to compare our data.
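The file the add-in generates is ordinary Tecplot ASCII text. A minimal sketch of building the same kind of file yourself (the function name is my own; see the Data Format Guide for the full syntax):

```python
def tecplot_ascii_point_zone(title, variables, rows):
    """Build a minimal Tecplot ASCII (.dat) file body: a header plus one
    ordered zone in POINT format, one data row per line."""
    lines = [
        f'TITLE = "{title}"',
        "VARIABLES = " + " ".join(f'"{v}"' for v in variables),
        f"ZONE I={len(rows)}, F=POINT",
    ]
    lines += [" ".join(f"{value:g}" for value in row) for row in rows]
    return "\n".join(lines) + "\n"
```

For example, `tecplot_ascii_point_zone("From Excel", ["X", "Y"], [(0, 1), (1, 4), (2, 9)])` produces a three-point zone ready to load into Tecplot 360.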

In another example in table format, we can load multiple zones, where the breaks between cells indicate the separation between zones. The Tecplot add-in for Excel also supports loading cell data in carpet format, which loads a 2D dataset from the cells and assigns the variables X, Y, and V.

This concludes the tutorial for the Tecplot add-in for Excel. Thank you for watching.

The post Tecplot Add-in for Excel appeared first on Tecplot.

► From Zero to Hero: Tecplot 360 Basics
  14 Oct, 2019


Get started with Tecplot 360 while learning best practices for plotting and analyzing your data. This 30-minute Webinar will walk you through everything from loading your data to exporting images and animations.

The agenda includes (but won’t be limited to!):

  • Loading your data
  • Exploring your data
  • Working with zones, variables, slices, iso-surfaces, streamtraces
  • Calculating new variables
  • Extracting data over time
  • Line plotting and frame linking
  • Exporting images and animations

The post From Zero to Hero: Tecplot 360 Basics appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► A bit more on the ANSYS / Aras OEM deal
  22 Jan, 2020

A bit more on the ANSYS / Aras OEM deal

I had a number of questions about the new / upgraded relationship between ANSYS and Aras that was announced last week and reached out to both companies for more details. ANSYS VP Strategy and Partnerships Sin Min Yap told me the following; his comments are preceded by his initials, my comments are below.

Why Aras? What were the most important reasons for ANSYS to go for an SPDM solution based on Aras PLM technology?

SMY: Both ANSYS and Aras have a vendor neutral and open approach, and customers have already validated this approach based on several ongoing engagements with ANSYS Minerva. It should be noted that ANSYS Minerva leverages the underlying Aras platform, and not the PLM offerings from Aras. For e.g. ANSYS Minerva leverages the core platform capabilities such as configuration management and PDM/PLM connectivity, and not Comet technologies or other PLM applications, from Aras.

MS: I didn’t realize that Minerva didn’t use the PLM offerings – I thought it was a customized-for-CAE implementation of Aras Innovator, excluding Comet. So this new arrangement is clearly an expansion.

Why now?

SMY: Our partnership and collaboration have been ongoing for ~2 years, with early customers being made aware of this under NDA. The first commercial release of ANSYS Minerva was only in 2019R3, and also without a significant marketing push. Now that we are doing a more focused messaging and launch of ANSYS Minerva as part of 2020R1, it was the right time to make the public announcement on our Aras partnership.

Is this real, with a serious investment in this platform, or is it PR?

SMY: Open ecosystem support is one of the pillars of the pervasive simulation strategy at ANSYS to connect simulation to the engineering processes at our customers. The release of ANSYS Minerva in 2019R3 (leveraging the Aras platform mentioned in the OEM agreement) demonstrates our commitment to provide solutions to support this digital transformation journey of our customers.

MS: Early days, but let’s take that as a commitment of real investment.

What happens to ANSYS Minerva, which I understand is “powered by Aras”? How will this go to market — via Aras’ app store or ANSYS’ sales teams or both or something new? Who supports what?

SMY: As mentioned above, this partnership announcement is directly related to ANSYS Minerva (and not a creation of something new or different), and it was just a matter of timing the PR with the first broad announcement of ANSYS Minerva in 2020R1. The go-to-market of ANSYS Minerva is only as an ANSYS product through ANSYS’ sales effort. ANSYS will support the customers directly for the ANSYS Minerva solution, and Aras and ANSYS will coordinate second-line support to manage the OEM-level interactions.

Do you have any similar plans regarding other PLM technology vendors?

SMY: We are not ready to share future plans at this time. We continue to engage very closely with Minerva customers and will respond based on their needs.

So there you have it. Mr. Yap was quite candid and I thank him for his help. I’ve also reached out to Aras but we keep missing one another — I look forward to learning more.

The post A bit more on the ANSYS / Aras OEM deal appeared first on Schnitger Corporation.

► PLM goes green?
  21 Jan, 2020

PLM goes green?

I believe in science. In the null hypothesis. In finding the root cause. And science says we are not doing nearly enough to reverse the damage already done to our planet by our homes, factories, cars, ships and planes, and to prevent further climate change. So when I got a press release about a PLM-centric initiative to deal with the ecological crisis facing our planet, I had to share.

The PLM Green Global Alliance is in its infancy, currently made up of Richard McFall from PLM Alliances, Jos Voskuil from TacIT, Oleg Shilovitsky from Beyond PLM, and Bjorn Fidjeland from plmPartner and has a

mission to create a global connection and community between professionals who use, develop, market, or support Product Lifecycle Management (PLM) enabling technologies and software solutions that have value in addressing the causes and consequences of climate change due to human-generated greenhouse gas emissions.

PGGA wants to help the PLM community of developers, users, researchers, and academics to use PLM as intended to make products and processes more efficient (and, presumably, therefore using less power and other inputs). PGGA also intends to promote work on renewable and other sources of energy, power storage and transmission, lowering carbon emissions, and greening manufacturing, among other goals.

Why? Because, as PGGA founder (and ex-CIMdata member) Rich McFall comments,

“We face an urgent challenge to create a more sustainable and green future for our industries, economies, communities, and all life forms on our planet that depend on healthy interdependent ecosystems. In our informal alliance we seek to educate, advocate, and collaborate for greater recognition of the role of PLM to help assess, reduce, mitigate, and adapt to the effects of climate change now being experienced across the globe. There are many examples of how PLM-related technologies are doing just that, but few focused non-politicized platforms where resources and application case studies can be researched, shared, and promoted for the collective good. We plan to change that as our modest contribution.”

And they welcome other members: Joining the PGGA is available to anyone with an interest in the intersection of PLM and Green. There’s no website yet, but you can reach the PGGA at

It’s interesting (and likely not coincidental) that this comes at the same time as major investors like Blackrock are rethinking their strategy, saying they’ll shift away from environmentally risky projects to a greener stance. (They’ve been slammed by some for not doing this whole-heartedly enough, but any change in the right direction is good, in my view. But let’s be careful about green-washing, where misleading information makes something appear more environmentally sound than it really is.)

Climate change will be a huge topic at this week’s Davos World Economic Forum — in fact, Greta Thunberg has already chided the grownups for doing too little. Money talks and Blackrock’s toe-in-the-water makes it look like it’s starting to listen, too. We’ve all seen the impact consumers can have when we decide to ditch disposable plastic straws or use reusable water bottles. If we weight “green” options during product design, and explore less damaging material choices via, for example, simulation and other PLMish technologies, who knows what we can accomplish?

The title image is of smokestacks, photographer unknown. I got it using a Creative Commons license from

The post PLM goes green? appeared first on Schnitger Corporation.

► Simcenter Amsterdam: making simulation a routine part of design
  16 Jan, 2020

Simcenter Amsterdam: making simulation a routine part of design

Last December I went to Amsterdam to attend Siemens’ Simcenter Symposium. It’s one of my favorite events because it brings together so many different … everythings: auto talks to aero, who talk to marine; STAR-CCM+ people learn about Amesim; and we all learned that pigeons can detect cancer (yep, a scientist was paid to figure that out).

In total, over 600 people attended the more than 150 sessions, on everything from CFD to EMAG to the process changes needed to move simulation into the hands of more designers. That’s a huge increase in all the ways that matter from just a year ago: more presentations, more attendees, more diversity of application. It’s hard to know why with any certainty, but it would appear that Siemens’ intention to create a portfolio multiplier is working, as more of its users take advantage of more of the tools. This used to be a CFD-weighted event (since it’s based on the old CD-adapco event) that’s now truly multi-physics, if I may be allowed to use the term in this way.

Many take-aways. Here are my top items:

  • The Simcenter portfolio has its roots in NX Simulation, which came about largely because of the acquisition of a Nastran variant over 15 years ago. Nastran is still one core of the offering, but is rarely mentioned. I asked Jan Leuridan, Siemens SVP of Simulation and Test solutions, about this, and he pointed out that most people use it but don’t see it as worthy of mention. Interesting: where others highlight their solvers as points of differentiation, to Siemens and its users, they’re tools, important but not where the actual action happens.
  • I attended last year’s event in Prague (you can read about that here) and was impressed by how many of the users were exploring the breadth of the Siemens offering. Much of it was new to them at the time. Just one year later, the focus wasn’t on point solutions but on combining physics, more advanced usage of Siemens’ optimization tools, and generally up-skilling rather than exploration. Maybe it’s just because different individuals attended the two events, but the tone in Amsterdam was more strategic than that in Prague.
  • The event kicked off with customer keynotes that gave a high-level look at how simulation changes outcomes. Henrik Alfredsson of Aker Solutions talked about how CAE and IoT created an entirely new line of business that enables Aker to both design and install subsea gas production systems, and then (the new offering) to monitor and predict performance.
  • Gugliemo Caviasso of Maserati took us in a different direction. Many people buy a Maserati because of its distinctive engine growl; what would that be in a world of electric engines? Should the manufacturer mask mechanical noises that we can't hear over the sound of today's combustion engine? Yes, Maserati does a significant amount of validation and verification, but it is also using CAE to redefine its brand values. (I checked with a car-enthusiast who once owned a Maserati. She fell in love with the car's look but bought it because of its craftsmanship and the engine's rumble. Creating that rumble without combustion is clearly a tough problem for Maserati and other car companies to crack.)
  • The keynote that most people were waiting for was, of course, Siemens'. Dr. Leuridan gave a quick tour through the offering, from 0D/1D to 3D, from generative design in concept stages to test in just-before-production, and from the automotive through shipbuilding industries. He also noted that Simcenter will soon be available via a token model (STAR-CCM+ and HEEDS already are), which got a lot of interest from attendees I spoke with, and said that the machine learning component of artificial intelligence will play an increasingly important role in the world of CAE (more on both, below). Finally, Dr. Leuridan hammered home Siemens' unique value proposition: the integration of simulation and test in one platform. The ability to simulate and validate with test; to simulate in advance of test to target test; and the ability to use test to define areas for detailed simulation at the system and component level takes Siemens' offering from the theoretical to the practical.
  • One thing that was missing, for me, was the next step: using real-time data to drive simulation. Say you have an IoT system in place to gather data (in Siemens’ world, that’s Mindsphere). Wouldn’t it be incredibly useful to direct some of that flow to a simulation, if a potential problem is indicated? Mr. Alfredsson of Aker is taking the first steps; I can’t wait to see how this develops.
  • The Simcenter team was generous with its roadmaps, with each major product set offering an update. Nearly all were standing-room only, which means that attendees want to know where the tools they've invested in are going — and also, as a couple told me, to figure out where they need to focus their training and up-skilling efforts.
  • I can’t cover every product, but the general themes of the roadmaps were these: improved and consistent UI across the platform, the ability to launch more simulation types from the Simcenter 3D environment, more physics across the applications, workflow improvements to speed FEA model creation, and more.
  • One thing worth mentioning from the Simcenter 3D roadmap session ties back to test. With the coming release, Siemens added new test/analysis correlation tools for test planning. This should allow analysis to help test engineers position sensors during physical test. There was also something about transfer path analysis between test and simulation that I didn't quite get but that made the audience take notice: apparently, testers can capture these forces during test and pass them back to CAE, which can compute loads for NVH? Dunno. If this matters to you, ask Siemens!
  • HEEDS, the design space exploration solution, is still one of my favorite CAE apps. As per usual, I didn't understand every part of the presentation on the latest release, but support for Python 3.6 and the ability to record and play back macros seemed to please people looking to automate design exploration. With version 2019.2, users can use AI (well, adaptive machine learning) to sample the design space and improve response surface accuracy. As elsewhere in the platform, there are also usability enhancements, template tools, and streamlined connections to Autodesk Inventor and Aspen HYSYS, as well as Simcenter 3D.
  • And, while we're on HEEDS, that was another tool that wasn't explicitly mentioned as often as it used to be. In years past, customer presenters would spend a good chunk of their allotted time talking about how they set up HEEDS, how hard it was, and how confident they were of their results. As with NX Nastran, it seems to have become an understood, unmentioned part of the tool set that's, nevertheless, an integral part of what many people do. Perhaps it's the UI improvements over the last few years, maybe it's the more affordable licensing — it's definitely much more the norm than it used to be.
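HEEDS's actual adaptive sampling algorithms are proprietary, so purely to illustrate the general idea — concentrate new samples where a cheap surrogate of an expensive simulation is least accurate — here is a toy one-dimensional sketch. Every name here, and the quadratic stand-in for a simulation, is my own invention, not anything from Siemens:

```python
import bisect

def surrogate(xs, ys, x):
    """Piecewise-linear 'response surface' through sampled points (xs sorted)."""
    i = bisect.bisect_left(xs, x)
    i = min(max(i, 1), len(xs) - 1)
    x0, x1 = xs[i - 1], xs[i]
    t = (x - x0) / (x1 - x0)
    return ys[i - 1] * (1 - t) + ys[i] * t

def adaptive_sample(f, lo, hi, budget):
    """Greedy adaptive sampling: estimate where the surrogate is worst via a
    leave-one-out check (does a line through a point's neighbours predict it?),
    then spend the next expensive evaluation of f near that point."""
    xs = [lo, (lo + hi) / 2, hi]
    ys = [f(x) for x in xs]
    for _ in range(budget):
        scores = []
        for i in range(1, len(xs) - 1):
            t = (xs[i] - xs[i - 1]) / (xs[i + 1] - xs[i - 1])
            pred = ys[i - 1] * (1 - t) + ys[i + 1] * t
            scores.append((abs(ys[i] - pred), i))  # curvature proxy
        _, i = max(scores)
        # Split the wider of the two intervals flanking the worst point.
        if xs[i] - xs[i - 1] >= xs[i + 1] - xs[i]:
            new, j = (xs[i - 1] + xs[i]) / 2, i
        else:
            new, j = (xs[i] + xs[i + 1]) / 2, i + 1
        xs.insert(j, new)
        ys.insert(j, f(new))
    return xs, ys
```

Real tools sample multi-dimensional spaces and use far richer surrogates, but the loop is the same: fit, find the weak spot, evaluate there, repeat — which is how a fixed evaluation budget buys better response surface accuracy than uniform sampling.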

The customer presentations were, as always, fascinating and highlight how much there is to learn about the world around us and the products we rely on every day. They all had a couple of things in common: a problem, some exploration and discovery and then a solution — that’s engineering, after all. But in many cases, the discovery led to new questions which led to new discoveries, which led to … and so on. Somewhat new for me was how many of the speakers mentioned using Teamcenter (Siemens’ data management and PLM solution) to drive these processes and manage the massive amounts of data generated.

This conference used to be a simulation guru experience (the early STAR-CCM+ events I attended were expert-fests). In 2019, it broadened to include new roles (designers, material specialists, test engineers and others) because they need to understand simulation — how it works, where it does and doesn't apply, what the limitations might be for their use case — to make a bigger business impact. That was true again this year, as several presenters talked about using the automation tools across Simcenter to create advanced simulation tools for designers.

One last thing: One of the most interesting presentations had absolutely nothing to do with CAE. Dr. Hannah Fry, co-author of The Indisputable Existence of Santa Claus: The Mathematics of Christmas (actually a lot of fun; I read it on the flight home), talked about AI and its impact on humans. She covered whether pigeons can diagnose cancer (yes), showed how AI can go hideously wrong when algorithms are applied to prison sentencing guidelines, and pointed out that humans write the algorithms and need to both plan for the worst and take responsibility for the outcomes. It was an interesting choice for Siemens, since many people wonder if they will lose their jobs to an algorithm. Dr. Fry's examples make clear that some tasks can be done by pigeons or humans or algorithms — but not all. My take: humans can't outsource the hard choices, at least for now.

The title shot is a view of one of Amsterdam’s many canals in an early morning fog. Gorgeous, no?

This is a video, produced by Statoil, of the Aker undersea compressor. They don't mention Aker, so here's a video Aker has created about the subsea compressor system, hinting at the analytics at 1:30 or so. If I can find a public video of the analytics, I'll post that — it's brilliant.

Note: Siemens graciously covered some of the expenses associated with my participation in the event but did not in any way influence the content of this post.

The post Simcenter Amsterdam: making simulation a routine part of design appeared first on Schnitger Corporation.

► Aras & ANSYS team up but no acquisition
  14 Jan, 2020

Aras & ANSYS team up but no acquisition

This just came into the inbox:

Aras Licenses Platform to ANSYS in Strategic OEM Deal

Partnership will enable better processes and data management of simulations for digital thread traceability across the lifecycle

You know Aras, the open source PLM platform, and you know ANSYS, the CAE powerhouse. What are they doing together? Aras is licensing the Aras Innovator platform to ANSYS so that ANSYS can use it to build configuration management, interoperability with PDM/PLM solutions, and “simulation-specific capabilities to … connect simulation and optimization to the business of engineering”.

Alrighty. What does THAT mean? It's likely a recognition of the fact that Siemens, with Simcenter+Teamcenter, and Dassault Systemes, with SIMULIA on the V6 3DEXPERIENCE platform, offer something ANSYS cannot. Even though ANSYS has had a simulation lifecycle management tool for a decade now, modern innovation cycles need simulation to be governed by the processes and connections made possible by PLM. Simulation can help identify the perfect design, but it's pointless if it can't be economically manufactured and supported in the market. Connecting ANSYS' broad CAE bench to a PLM like Aras lets its users tap into those workflows, ensuring consistent processes and traceability. And since Aras maintains APIs to lots of other platforms, an ANSYS user can get to all sorts of enterprise systems via Aras Innovator.

According to the info made public today, ANSYS isn't buying Aras. Right now, anyway. ANSYS says it will deliver commercial offerings for simulation process and data management, process integration, design optimization and simulation-driven data science on top of Aras Innovator.

Peter Schroer, CEO of Aras, says “We believe that simulation is essential to developing tomorrow’s next generation products, and that better data and process management of simulations is required to enable the digital processes of the future which will support these products. We see the ANSYS and Aras partnership as a potential game changer in connecting simulation to engineering processes for traceability, access and reuse across the product lifecycle.”

For its part, ANSYS’ Navin Budhiraja, VP of cloud and platform business said, “… this unique collaboration combines the strengths of ANSYS’ industry-leading multiphysics portfolio and the resilient platform from Aras for digital connectivity to dramatically enhance customer value … [W]e see the ability of ANSYS solutions to interoperate and link with heterogeneous systems as an important step to accelerate the digital transformation for our customers.”

So many questions: How much is ANSYS investing in this platform? What happens to ANSYS Minerva, the multiphysics collaboration application that carries the logo, "powered by Aras"? Is today's announcement an outgrowth of a demonstrator AMC Bridge showed last summer (see it here)? AMC Bridge says it "enables users to access Aras Innovator directly from ANSYS AIM, store ANSYS AIM files on the Aras Innovator server, and use the functionality of Aras Innovator to manage ANSYS AIM projects." How will this go to market — via Aras' app store or ANSYS' sales teams or both or something new? Who supports what?

I'll let you know if I find out more. But for now, the key things are: 1. it's a partnership, not an acquisition; 2. Aras is serious about CAE; and 3. ANSYS continues to reach outside its traditional comfort zone. Partnerships with SAP, PTC, and now Aras change the conversation about CAE and make it relevant in new and interesting contexts.

The post Aras & ANSYS team up but no acquisition appeared first on Schnitger Corporation.

► At AU 2019, AEC continued to rule
    9 Jan, 2020

At AU 2019, AEC continued to rule

Did I go to AU in 2019? Why, yes, I did. But I had the flu so only participated in a fraction of what was going on and only now have the time to share what I learned. 13,000 people, 13,000 stories and I only got a tiny bit, sigh. Since I had to narrow down, I focused on AEC (architecture, engineering and construction) — which makes sense since that’s where Autodesk’s 2019 acquisitions took place.

Here, in no particular order, are my key discoveries:

  • The theme for the event was “Better Starts Here”. Not an awesome tagline — I want to be more than better. I want to be awesome — but “better” is a goal that’s more readily achievable, especially for a business with a lot of intractable parts. It’s also a good tagline for users looking to up-skill or achieve a certification, something very important to their career paths. So Better it was.
  • CEO Andrew Anagnost used his keynote to home in on what he sees as “better”: reducing waste by building modularly and perhaps off the construction site. Digitally, better might mean applying that modular approach to creating libraries of proven design elements that are combined in new and intriguing ways. For Autodesk, better clearly means moving beyond the transition to subscriptions and on to improving the portfolio.
  • Also during the keynote, we got an entirely different look at “better”. Elizabeth Hausler, founder of BuildChange, explained that many people worldwide live in substandard housing in areas that are at risk of mudslides and earthquakes. Event, disaster, displacement or harm. Not a good outcome. Some suggest relocating those residents; others believe that the best approach is to leave people in their communities and update the structures to have greater resilience. Enter BuildChange. Using Revit and Autodesk Dynamo Studio, BuildChange can create a modified design in a few hours, down from a few weeks, and work with the family to ensure that it meets their needs (not those of a well-meaning foreigner) while improving safety. It’s an awesome use of technology to make things “better”. I’m thinking we might even be verging on “awesome” here …
  • Before the main AU kicked off, the company ran a couple of pre-events, including the Connect & Construct Summit that I attended. During the pre-con, we heard from designers, engineers, constructors and others about how they use Autodesk’s AEC portfolio, and from Autodesk about the Assemble, BuildingConnected and PlanGrid acquisitions and how they are being integrated with BIM 360.
  • One C&C panel session was particularly fascinating. Four executives (one owner, one prime contractor and two trade contractors) spoke about how they typically work and the tools they use. The main lesson: everyone wants to work smarter, which requires technology and process change. But not tech for its own sake; it has to prove business benefit. What benefit? Safer work conditions, for sure. More efficient, definitely — and that’s interesting in a world where many job functions are billed by the hour. One would think fewer hours equal less revenue, which is not desirable. That might be true in some cases, but labor shortages change those economics right now.
  • Surprisingly, the owner and at least one of the primes welcome apps that project teams find useful and use the teams as a way of keeping current with the thousands of apps on the market. Collaboration, back-office, chunking a BIM model for a particular trade — apps keep coming and can make a real difference to productivity. One contractor, however, urged caution: the 2013 Target data breach that exposed 40 million customers' data came about because hackers stole credentials from an HVAC contractor and used them to, well, hack. Be careful what you let onto your platform!
  • Everyone kept mentioning that shortage of skilled labor, both in the office and on the job site. Again, tech can help but the panel urged rethinking traditional processes and how people collaborate. The owner was clear: she was interested in anything that lets her complete projects accurately, on time and on budget, and is willing to work with contractors to make that happen. It was a terrific panel and the openness of all of the participants was very encouraging.
  • I also attended sessions on each of the acquired AEC products (Assemble, BuildingConnected and PlanGrid). It was clear that many attendees were new to one or more of the acquisitions. Said another way, there’s a lot of scope in Autodesk’s customer base for cross-selling. Some of the early integrations are truly impressive: PlanGrid can open Revit models — meaning that I can be on a job site, looking at my day’s tasks and work backwards to figure out when and who ordered those sinks to be moved so that one cannot now open the bathroom door. Attendees were agog.
  • Back, quickly, to the exec panel: Integration is good but they cautioned Autodesk to keep open the pathways that integrate competitor products into that family. Not all projects warrant all of Autodesk’s firepower and some teams prefer competitor products. Autodesk seemed to listen, though we’ll have to check back next year to see how it’s going.
  • Also at the C&C, Autodesk announced its Construction Cloud offering, which brings together the AEC portfolio to “connect headquarters, office, and field teams to increase collaboration and productivity.” I’ve found out since AU that Construction Cloud aims to combine technologies to deliver on three value propositions: simple to use but powerful tools specific to construction workflows; a Builders Network (the BuildingConnected acquisition) to connect owners and builders to trade partners; and Predictive Insights (now Construction IQ but ultimately more), which applies machine learning to project data. It’s early days for Construction Cloud so we’ll have to see how it plays out, but it heads in the right direction. AEC projects, more than those in most industries, happen in dispersed locations and with varying levels of IT skill. Targeted solutions will make it far easier for this industry to adopt.
  • I did make it to the Manufacturing Keynote, where Autodesk and ANSYS announced interoperability between Autodesk Fusion 360 and ANSYS Mechanical. A bit confusing, since Autodesk has spent so much on building out its own CAE capability, but a recognition of how much of a standard ANSYS Mechanical is in some industries.
  • Autodesk also announced that it is partnering with aPriori to integrate its costing solution with Fusion 360‘s generative design capabilities. Makes so much sense: there’s no point designing something you can’t economically make. Not unique, but very important.
  • Last and also on the Manufacturing side, Autodesk is finally integrating the technology behind Delcam PowerMILL (on the desktop) into Fusion 360. The users I was sitting with saw this as hugely important in their move to Fusion 360 — and hope that more Delcam functionality follows. Quickly.
  • Notice something in those last three bullets? Fusion 360, not Inventor. The Manufacturing keynote did cover enhancements to Inventor but the majority of new stuff is heading for Fusion. I talked to one Inventor user who feels like he’s missing out; he wants to move to Fusion 360 but there’s no way his IT department will allow (today) a browser-based, connected app. I’m sure others agree …
  • While I was at the C&C pre-con, the ballroom next door hosted Forge DevCon, the event dedicated to Autodesk’s Forge platform ecosystem. A quick chat by the coffee station confirmed what I learned when I last attended DevCon: Forge is incredibly exciting for partners and customers who see themselves able to develop applications on top of Autodesk data and processes. Autodesk continues to tweak the platform, improving interoperability, for example, between Inventor and Revit. That’s important for the many cases where mechanical CAD and architectural BIM have to work together (think a highly stylized front desk in a hotel or the modular furniture in the hotel room). The folks I spoke to appreciate Autodesk’s efforts but wish it would all go a bit faster — a good sign that people are excited by the opportunity in front of them.

My AU in 2019 was a bit limited (I didn’t get to nearly as many sessions as I wanted and had only one quick lap around the massive show floor) but even so it’s clear that Autodesk wants to help customers create and leverage useful data. Not just drawings. In the past, I’d ride from the airport to the hotel with AutoCAD users who didn’t even know what else Autodesk offers; those days are probably gone. Whether it’s generative design, Construction IQ or something else, the quest now is to find that data nugget that will move a business in a new direction.

The title image is of BuildChange’s Dr. Hausler during the keynote.

Note: Autodesk graciously covered some of the expenses associated with my participation in the event but did not in any way influence the content of this post.

The post At AU 2019, AEC continued to rule appeared first on Schnitger Corporation.

► Hexagon ups the ante – announces deals in mining & geosystems
    8 Jan, 2020

Hexagon ups the ante – announces deals in mining & geosystems

Altair started the year with an acquisition but we have to give props to Hexagon for the pace of their deals: closing one deal from 2019 and two brand new ones today.

First the new news. Hexagon is acquiring Blast Movement Technology (BMT), maker of monitoring technology and analysis for open pit mines. Miners will discover a vein of ore and blast away the surrounding material to make it simpler to extract. BMT’s solution collects data from sensors that move with the blasted material. BMT’s algorithms then calculate things like the post-blast location of ore and precise dig lines based on the measured movement.
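BMT's actual algorithms aren't public, so treat this as a hedged sketch of the core idea only: if sensors ride along with the blasted rock, their before/after positions give displacement vectors, and translating the surveyed pre-blast ore boundary by the movement of the nearest sensor gives a first-cut post-blast dig line. All names and the nearest-neighbour interpolation are my own simplifications:

```python
import math

def post_blast_dig_line(ore_polygon, sensors):
    """Estimate a post-blast ore outline from blast-movement sensor data.

    ore_polygon: pre-blast boundary vertices as (x, y) tuples.
    sensors: list of (pre_position, post_position) pairs for sensors that
             were buried in the blast and recovered afterwards.
    Each vertex is shifted by the displacement of its nearest sensor
    (nearest-neighbour interpolation; real tools fit richer movement models).
    """
    new_polygon = []
    for vx, vy in ore_polygon:
        pre, post = min(
            sensors,
            key=lambda s: math.hypot(s[0][0] - vx, s[0][1] - vy),
        )
        dx, dy = post[0] - pre[0], post[1] - pre[1]
        new_polygon.append((vx + dx, vy + dy))
    return new_polygon
```

The point of the exercise is economic: digging to the pre-blast outline after the rock has moved several metres means sending ore to the waste dump and waste to the mill, so even a rough movement-corrected dig line pays for itself.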

BMT currently supports 100 customer sites mining nine commodities in nearly 40 countries. Sales in 2019 were €19 million. The purchase price wasn’t disclosed.

I live in an area of the US where there is little (no?) mining but attended a sub-conference on mining a couple of years ago. It’s far more technical and data-driven than I had realized, and the days of randomly setting off charges to see what can be found are long gone — a movie stereotype that’s simply not true. Sensors, ground-penetrating radar, and advanced modeling techniques enable miners to operate more safely and profitably.

Hexagon President and CEO Ola Rollén adds, “Today’s acquisition of BMT is a powerful addition to our Smart Mine portfolio, further closing the drill and blast loop for our customers, and ultimately, improving their ability to measure, manage and improve mining operations from pit to plant.”

Next, Geopraevent, which makes monitoring and alarm systems for early detection and warning of landslides, rockfalls, and avalanches. Geopraevent’s turnkey systems use sensors such as radar, webcams and cameras, and measuring technologies and software to detect these natural hazards. Then, algorithms evaluate this data in real time to detect critical trends and, when indicated, launch alarms and other actions.
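Geopraevent's real detection algorithms aren't public, but the "evaluate sensor data in real time, detect a critical trend, raise an alarm" pattern can be sketched simply: watch a stream of, say, radar-measured slope displacement and alarm when the rate of change over a sliding window exceeds a threshold. Everything here (function names, the window heuristic, the threshold) is a hypothetical illustration, not Geopraevent's method:

```python
from collections import deque

def alarm_stream(readings, window=5, velocity_threshold=2.0):
    """Toy trend detector for a hazard-monitoring stream.

    readings: iterable of (timestamp, displacement) pairs in time order.
    Keeps the last `window` readings (window >= 2) and raises an alarm
    whenever the average velocity across that window exceeds the threshold,
    returning the timestamps at which alarms fired.
    """
    buf = deque(maxlen=window)
    alarms = []
    for t, value in readings:
        buf.append((t, value))
        if len(buf) == window:
            (t0, v0), (t1, v1) = buf[0], buf[-1]
            rate = (v1 - v0) / (t1 - t0)  # mean velocity over the window
            if rate > velocity_threshold:
                alarms.append(t)
    return alarms
```

Averaging over a window rather than alarming on single readings is the key design choice: it suppresses sensor noise while still catching the sustained acceleration that typically precedes a slide, at the cost of a few samples of latency.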

I don’t know if these are Geopraevent solutions, but my mind immediately went to the warning systems put in place in the Pacific Rim after the devastating tsunami in Japan in 2011. Those combine earthquake sensors, cameras and other sensors that trigger sirens and other alerts in case of potential danger. Geopraevent says it has nearly 100 live systems in operation, serving governments and private infrastructure operators in transportation, public safety, tourism, mining and energy.

About this deal, Mr. Rollén said, “Natural hazard monitoring improves the safety of roads and railways, especially when traditional constructive measures like tunnels or dams aren’t feasible … By combining [Geopraevent’s] domain knowledge – along with its proven technologies and services – with Hexagon’s global footprint and complementary solutions, we can offer more customers the early detection and warning systems necessary for protecting human lives.”

Geopraevent is also fully consolidated as of today. Hexagon says that the acquisition will have no significant impact on Hexagon’s earnings.

Finally, Hexagon says it completed the acquisition of Volume Graphics, a deal announced in November (my writeup). Quick recap: Volume Graphics makes industrial computed tomography (CT) software and had revenue of €26 million in 2019.

In total, Hexagon said, the three deals will have a negative impact of €25 million on its fourth quarter 2019 earnings statement to cover one-off items related to overlapping technologies, and transaction and integration costs.

We should learn more about all of this when Hexagon announces its 2019 results on 5 February.

The post Hexagon ups the ante – announces deals in mining & geosystems appeared first on Schnitger Corporation.

