
CFD Blog Feeds

Another Fine Mesh

► 25 Years of CFD Meshing
  11 Nov, 2019
While you were enjoying your day off yesterday, Pointwise silently turned 25 years old. We had already celebrated with a lot of you at the reception we hosted at AIAA Aviation this summer in Dallas. (If only that rainstorm hadn’t … Continue reading
► This Week in CFD
    8 Nov, 2019
The past two weeks have yielded two “must see” CFD articles. The first is on the data management and visualization aspects of using pressure sensitive paint in wind tunnel testing of NASA’s Space Launch System. The second delves into splines … Continue reading
► I’m Kristen Karman-Shoemake and This Is How I Mesh
    7 Nov, 2019
I was born in Arlington, TX and spent most of my childhood in and around the Fort Worth area. When I was in high school, my family moved to Chattanooga, TN. I then went on to UTK for my undergraduate … Continue reading
► Meet Michael Honke, Our Intern for the Fall Semester
  28 Oct, 2019
A few weeks ago we welcomed Michael Honke to Pointwise’s Product Development team. Michael is currently completing a master’s in computer science at the University of Waterloo. Michael is a Scientific Technologist (his undergraduate degree is in Physics from the … Continue reading
► This Week in CFD
  25 Oct, 2019
ICYMI (and how could you), we begin this week’s round-up of CFD and CAE news with articles about PTC’s announcement that they will acquire Onshape and bring the latter’s “CAD as a service” alongside Creo and their other “on … Continue reading
► This Week in CFD
  11 Oct, 2019
This week’s CFD news includes an amazing simulation of a Mars lander done with FUN3D and a couple of interesting job openings from our friends at Siemens. And from the world of unique CFD applications comes the one shown here … Continue reading

F*** Yeah Fluid Dynamics

► 2019 APS DFD Schedule
  21 Nov, 2019

It’s time for the annual American Physical Society Division of Fluid Dynamics meeting, and once again, I’ll be attending. If you’ve been scanning the program wondering where I am, wonder no more! Here’s a run-down of the talks and events I’ll be appearing at:

Yes, in very exciting news, the DFD has offered me an invited talk this year, and I hope to see many of you in the audience come Monday! Maybe for those who can’t attend in person, we can get a volunteer or two to live-tweet it?

As usual, I’ll be hanging around throughout the conference, so if you see me, feel free to come say hi.

► Wave Clouds in the Front Range
  21 Nov, 2019

Last Sunday night metro Denver was treated to a rare sight: clouds resembling breaking waves formed near sunset. These are Kelvin-Helmholtz clouds, and the comparison to ocean waves is apt, since the same physics is behind both. Winds were unusually calm near the ground Sunday night, but strong winds blew at the altitude just above the lower cloud layer. That velocity difference created strong shear where the two air layers met. With the cloud layer in place to differentiate the slower-moving air from the faster, we can see what’s normally invisible: how the two air layers mix.

The Denver Post has several more views of the wave clouds from around the area, and you can learn lots more about the Kelvin-Helmholtz instability here. (Image credit: R. Fields; via the Denver Post)

► Finding New States of Matter
  20 Nov, 2019

As children we’re taught that there are three basic states of matter: solids, liquids, and gases. The latter two are known scientifically as fluids. But the world doesn’t divide quite so simply into those three categories, and scientists have since named several other states of matter, including plasmassuperfluids, and Bose-Einstein condensates.  Many of these types of matter only exist under extreme temperatures and/or pressures, which makes them difficult to observe. Scientists have instead turned to numerical simulation to discover and study these exotic materials.

One of the latest discoveries among these bizarre materials is a form of potassium that simultaneously displays properties of a solid and a liquid. Inside this so-called chain-melted potassium, there’s a complex crystalline lattice containing smaller chains of atoms. One author described the material thus: “It would be like holding a sponge filled with water that starts dripping out, except the sponge is also made of water.” That certainly boggles my mind! (Image credit: Turtle Rock Scientific; research credit: V. Robinson et al.; via NatGeo; submitted by Emily R.)

► Coke and Butane Rockets
  19 Nov, 2019

Rocket science has a reputation for being an incredibly difficult subject. But while there’s complexity in the execution, the concept behind rockets is pretty simple: throw mass out the back really fast and you’ll move forward. Whether you’re talking about a Saturn V or these Coke-and-butane-powered bottles, the basic principle is the same.

These rockets get their kick mostly from the added butane, which has a very low boiling point. When the bottle is flipped, the lighter butane is forced to rise through the Coke. With a large surface area of liquid butane exposed to the warmer Coke, the butane becomes gaseous. That sudden increase in volume forces a liquid-Coke-and-gaseous-butane mixture out of the bottle, which has a helpful nozzle shape to further increase the propellant’s speed. Once the phase change is underway, the rocket quickly takes off! (Image and video credit: The Slow Mo Guys)

► Making Drops Stick
  18 Nov, 2019

As you may have noticed when washing vegetables, many plants have superhydrophobic leaves. Water just beads up on their surface and slides right off. This is a useful feature for plants that want to direct that water toward their roots, but it’s a frustration in agriculture, where that superhydrophobicity means extra spraying of pesticides in order to get enough to stick to the plant.

But that may not be the case for much longer. Researchers have found that adding a little polymer to water droplets (right) can suppress their ability to rebound (left) from superhydrophobic surfaces. Above a critical concentration, the high shear rate of the droplet as it tries to detach activates the viscoelastic properties of the polymer. That viscoelasticity suppresses the rebound, keeping the droplet attached. That’s good news for everyone, since it means less spraying is needed to protect crops. (Image and research credit: P. Dhar et al.)

► Whiskey Stains
  15 Nov, 2019

Complex fluids leave behind fascinating stains after they evaporate. We’ve seen previously how coffee forms rings and whisky forms more complicated stains as surface tension changes during evaporation drive particles throughout the droplet. Now researchers are considering the differences between traditional Scottish whisky, which ages in re-used, uncharred barrels, and American whiskeys like bourbon, which are required to age in new, charred white oak barrels.

When diluted, the American whiskeys form web-like patterns – seen above – that vary from brand to brand, like a fingerprint. The charring of the barrels allows American whiskeys to pick up more water-insoluble molecules compared to whisky aged in uncharred barrels. Since the webbed patterns form in American whiskey but not Scotch whisky, it’s likely those molecules play an important role in the evaporation dynamics and subsequent staining. (Image credit: S. Williams et al.; research credit: S. Williams et al.; via APS Physics; submitted by Kam-Yung Soh)

Symscape

► CFD Simulates Distant Past
  25 Jun, 2019

There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently CFD simulation was focused on existing and future things, think flying cars. Now we see CFD being applied to simulate fluid flow in the distant past, think fossils.

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation

read more

► Background on the Caedium v6.0 Release
  31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature

read more

► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)

read more

► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward Direction (illustration only, not part of the study)

read more

► Wind Turbine Design According to Insects
  14 Jun, 2017

One of nature's smallest aerodynamic specialists - the insect - has provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (License: CC BY-SA 2.5, André Karwath)

read more

► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2-hour marathon barrier came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

read more

CFD Online

► Mixing of Ammonia and Exhaust
  16 Aug, 2019
Dear Foamers,

In my thesis I worked with static mixers.
If you would like to see my case, you can find it here:
https://www.dropbox.com/sh/5rndjj0qs...Wci0dlNqa?dl=0
Feel free to ask!
► Determination of mixing quality/ uniformity index
  16 Aug, 2019
Dear guys,

For a long time I had problems determining the mixing quality of a mixing line. Now I've come across a usable formula and would like to share it with you.
It is the degree of uniformity, also called the uniformity index.
The calculation is cell-based.
U = 1 - (SUM^{N}_{i=1}|Ci - Cm|)/(2*N*Cm)
with N the number of cells,
Ci the concentration in cell i,
and the arithmetic mean Cm:
Cm = (SUM^{N}_{i=1}(Ci))/N
The easiest way is to export the cell concentrations of the considered region (e.g., the outlet) and create an Excel file.
An example is shown in my public dropbox:
https://www.dropbox.com/sh/vm5qlawb0j611dp/AAD51PsCxgc4CUwMmBNWIqIxa?dl=0
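For those who prefer a script over a spreadsheet, here is a minimal Python sketch of the same cell-based calculation (the file name and the one-value-per-line layout are assumptions):
Code:
import numpy as np

# Cell concentrations exported from the considered region (e.g., the outlet),
# one value per line (file name is an assumption).
c = np.loadtxt("outlet_concentrations.txt")

cm = c.mean()                                         # arithmetic mean Cm
U = 1.0 - np.abs(c - cm).sum() / (2.0 * c.size * cm)  # uniformity index
print("Uniformity index U =", U)                      # U = 1 means perfectly uniform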
Greetings Philipp
► Connecting Fortran with VTK - the MPI way
  24 May, 2019
I wrote a couple of little programs, in Fortran and C++ respectively, as a proof of concept for connecting a Fortran program to a sort of visualization server based on VTK. The nice thing is that it uses MPI for the connection, so on the Fortran side there is nothing new or scary.

The code (you can find it at https://github.com/plampite/vtkForMPI) and the idea closely follow a similar example in Using Advanced MPI by W. Gropp et al., but make it more concrete by adding actual visualization based on VTK.

Of course, this is just a proof of concept, and nothing particularly interesting is visualized (just a cylinder with parameters passed from the Fortran side), but it is intended as an example to adapt to particular use cases (the VTK part itself is taken from https://lorensen.github.io/VTKExamples/site/, where a lot of additional examples are available).
► Direct Numerical Simulation on a wing profile
  14 May, 2019
A one-billion-point DNS (Direct Numerical Simulation) of flow over a NACA4412 profile at 5 degrees angle of attack. The Reynolds number based on the airfoil chord is 350,000 and the Mach number is 0.117. The upper and lower turbulent boundary layers are tripped at 15% and 50% of the chord, respectively, by roughness elements evenly spaced in the boundary layer and created by a zonal immersed boundary condition (Journal of Computational Physics, Volume 363, 15 June 2018, Pages 231-255, https://www.sciencedirect.com/science...). The spanwise extent is 0.3*chord. The computation was performed on a structured multiblock mesh with the FastS compressible flow solver developed by ONERA, on 1064 MPI cores. The video shows the early stages of the calculation (equivalent to 40000 time steps), highlighting the spatial development of fine-scale turbulence in both the attached boundary layer and the free wake. Post-processing and flow images were made with Cassiopée (http://elsa.onera.fr/Cassiopee).
► NACA4 airFoils generator
  20 Feb, 2019
https://github.com/mcavallerin/airFoil_tools


Generates 3D models for foils.
Attached thumbnail: airfoilWinger.png
► Use gnuplot to plot graph of friction_coefficient for T3A Flat Plate case in OpenFOAM
  13 Feb, 2019
Hello,
I am new to OF and gnuplot. I am working on the T3A flat plate case from the OpenFOAM tutorials. I was struggling a lot to plot the graph of the friction coefficient from simulation and experimental data using the default plot file (creatGraphs.plt) provided with the tutorial. I looked on the internet for a solution but remained unsuccessful. So, after trying for some time, I got the graph right, and I decided to share the fix here for anyone else's use.

The trick is that we have to edit the default plot file provided with the case. :) This is how the default file looks:
Code:
#!/bin/sh
cd ${0%/*} || exit 1                        # Run from this directory

# Test if gnuplot exists on the system
command -v gnuplot >/dev/null 2>&1 || {
    echo "gnuplot not found - skipping graph creation" 1>&2
    exit 1
}

gnuplot<<GNUPLOT
    set term post enhanced color solid linewidth 2.0 20
    set out "graphs.eps"
    set encoding utf8
    set termoption dash
    set style increment user
    set style line 1 lt 1 linecolor rgb "blue"  linewidth 1.5
    set style line 11 lt 2 linecolor rgb "black" linewidth 1.5

    time = system("foamListTimes -case .. -latestTime")

    set xlabel "x"
    set ylabel "u'"
    set title "T3A - Flat Plate - turbulent intensity"
    plot [:1.5][:0.05] \
        "../postProcessing/kGraph/".time."/line_k.xy" \
        u (\$1-0.04):(1./5.4*sqrt(2./3.*\$2))title "kOmegaSSTLM" w l ls 1, \
        "exptData/T3A.dat" u (\$1/1000):(\$3/100) title "Exp T3A" w p ls 11

    set xlabel "Re_x"
    set ylabel "c_f"
    set title "T3A - Flat Plate - C_f"
    plot [:6e+5][0:0.01] \
        "../postProcessing/wallShearStressGraph/".time."/line_wallShearStress.xy" \
        u ((\$1-0.04)*5.4/1.5e-05):(-\$2/0.5/5.4**2) title "kOmegaSSTLM" w l, \
        "exptData/T3A.dat" u (\$1/1000*5.4/1.51e-05):2 title "Exp" w p ls 11
GNUPLOT

#------------------------------------------------------------------------------
After editing it, it should look like the following:
Code:
# #!/bin/sh
# cd ${0%/*} || exit 1                        # Run from this directory

# # Test if gnuplot exists on the system
# command -v gnuplot >/dev/null 2>&1 || {
    # echo "gnuplot not found - skipping graph creation" 1>&2
    # exit 1
# }

# gnuplot<<GNUPLOT
    set term post enhanced color solid linewidth 2.0 20
    set out "graphs2.eps"
    set encoding utf8
    set termoption dash
    set style increment user
    set style line 1 lt 1 linecolor rgb "blue"  linewidth 1.5
    set style line 11 lt 2 linecolor rgb "black" linewidth 1.5

    time = system("foamListTimes -case .. -latestTime")

    # set xlabel "x"
    # set ylabel "u'"
    # set title "T3A - Flat Plate - turbulent intensity"
    # plot [:1.5][:0.05] \
        # "../postProcessing/kGraph/".time."/line_k.xy" \
        # u (\$1-0.04):(1./5.4*sqrt(2./3.*\$2))title "kOmegaSSTLM" w l ls 1, \
        # "exptData/T3A.dat" u (\$1/1000):(\$3/100) title "Exp T3A" w p ls 11

    set xlabel "Re_x"
    set ylabel "c_f"
    set title "T3A - Flat Plate - C_f"
    plot [:6e+5][0:0.01] \
		"/home/purnp2/OpenFOAM/purnp2-v1812/run/T3A/postProcessing/wallShearStressGraph/269/line_wallShearStress.xy" \
        u (($1-0.04)*5.4/1.5e-05):(-$2/0.5/5.4**2) title "kOmegaSSTLM" w l, \
        "/home/purnp2/OpenFOAM/purnp2-v1812/run/T3A/validation/exptData/T3A.dat" u ($1/1000*5.4/1.51e-05):2 title "Exp" w p ls 11
# GNUPLOT

#------------------------------------------------------------------------------
Please notice the following changes:
1. The shell-script wrapper lines are commented out.
2. The backslash (\) before each dollar sign ($) is deleted; the dollar sign denotes a column in the data file that gnuplot plots.
3. The full paths of the data files are used instead of paths relative to the current working directory.
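For reference, the column transforms in the plot commands implement the following definitions, with the constants read straight from the script (freestream velocity U = 5.4 m/s, kinematic viscosity nu = 1.5e-05 m^2/s, plate leading edge at x0 = 0.04 m):
Re_x = U*(x - x0)/nu
c_f = tau_w/(0.5*U^2)
Note that the wallShearStress written by OpenFOAM's incompressible solvers is kinematic (already divided by density) and points against the flow direction, hence the minus sign on the second column.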

curiosityFluids

► Creating curves in blockMesh (An Example)
  29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

y=H\sin(\pi x)

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
    (-1 0 0)    // 0
    (0 0 0)     // 1
    (1 0 0)     // 2
    (2 0 0)     // 3
    (-1 2 0)    // 4
    (0 2 0)     // 5
    (1 2 0)     // 6
    (2 2 0)     // 7

    (-1 0 1)    // 8    
    (0 0 1)     // 9
    (1 0 1)     // 10
    (2 0 1)     // 11
    (-1 2 1)    // 12
    (0 2 1)     // 13
    (1 2 1)     // 14
    (2 2 1)     // 15
);

blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

edges
(
);

boundary
(
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    lowerWall
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    upperWall
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary that is just a list of edges, each defined by its type and interpolation points:

edges
(
        polyLine 1 2
        (
                (0	0       0)
                (0.1	0.0309016994    0)
                (0.2	0.0587785252    0)
                (0.3	0.0809016994    0)
                (0.4	0.0951056516    0)
                (0.5	0.1     0)
                (0.6	0.0951056516    0)
                (0.7	0.0809016994    0)
                (0.8	0.0587785252    0)
                (0.9	0.0309016994    0)
                (1	0       0)
        )

        polyLine 9 10
        (
                (0	0       1)
                (0.1	0.0309016994    1)
                (0.2	0.0587785252    1)
                (0.3	0.0809016994    1)
                (0.4	0.0951056516    1)
                (0.5	0.1     1)
                (0.6	0.0951056516    1)
                (0.7	0.0809016994    1)
                (0.8	0.0587785252    1)
                (0.9	0.0309016994    1)
                (1	0       1)
        )
);

The sub-dictionary above is just a list of points on the curve y=H\sin(\pi x). The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.
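If you would rather not type the interpolation points by hand, a short Python sketch like the following generates them (H = 0.1 and ten intervals are assumptions that match the list above):

import numpy as np

H = 0.1                   # bump height (assumed; matches the points above)
for z in (0, 1):          # front and back planes of the one-cell-thick mesh
    for x in np.linspace(0.0, 1.0, 11):
        print("        (%.1f %.10f %d)" % (x, H * np.sin(np.pi * x), z))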

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!

Cheers.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
  28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result, you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence of the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, Shadowgraph has no directionality and shows you the Laplacian of the refractive index field (or density field).

In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

So how do we create these images in ParaView?

Well, as you might expect from the introduction, we simply do this by visualizing the gradients of the density field.

In ParaView the necessary tool for this is:

Gradient of Unstructured DataSet:

Finding “Gradient of Unstructured DataSet” using the Filters-> Search

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

Change the “Scalar Array” Drop down to the density field (rho), and change the name to Synthetic Schlieren

To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:

This is NOT a synthetic Schlieren Image – but it sure looks nice

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

Horizontal Knife Edge

Vertical Knife Edge

Now how about Shadowgraph?

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

\nabla^2 f = \nabla \cdot \nabla f

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, deselect “Compute Gradient”, select “Compute Divergence”, and change the Divergence array name to Shadowgraph.

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Shadowgraph Image

So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these mean: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.
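If you ever need the same fields outside of ParaView, the underlying math is easy to reproduce. Here is a minimal NumPy sketch (the grid spacing, array shape, and placeholder density field are all assumptions standing in for real solver output):

import numpy as np

# Synthetic schlieren/shadowgraph from a density field on a uniform grid.
ny, nx = 200, 300
dx = dy = 1.0e-3                               # grid spacing (assumed)
y, x = np.meshgrid(np.arange(ny) * dy, np.arange(nx) * dx, indexing="ij")
rho = 1.0 + 0.2 * np.exp(-((x - 0.15)**2 + (y - 0.10)**2) / 1.0e-4)  # placeholder

drho_dy, drho_dx = np.gradient(rho, dy, dx)    # first derivatives
schlieren = drho_dy                            # horizontal knife edge -> d(rho)/dy

# Shadowgraph = Laplacian of density = divergence of the gradient field
shadowgraph = (np.gradient(drho_dx, dx, axis=1)
               + np.gradient(drho_dy, dy, axis=0))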

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
  24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/

The law is given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}
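For reference, equating the two forms shows how the coefficient sets are related:

A_s=\mu_o\frac{T_o+C}{T_o^{3/2}}, \quad T_s=C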

In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.

So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find, and even if you find them, they can be hard to reference and you may not know how accurate they are. Second, creating your own Sutherland coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of Nitrogen N2 – and we can’t find the coefficients anywhere – or for the second reason above, you’ve decided its best to create your own.

By far the simplest way to achieve this is using Python and the Scipy.optimize package.

Step 1: Get Data

The first step is to find some well-known and easily cited source for viscosity data. I usually use the NIST WebBook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough. So you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:

Temperature (K)    Viscosity (Pa.s)
200                0.000012924
400                0.000022217
600                0.000029602
800                0.000035932
1000               0.000041597
1200               0.000046812
1400               0.000051704
1600               0.000056357
1800               0.000060829
2000               0.000065162

This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in this range, viscosity should be temperature dependent only.)

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the Sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next we input the data:

T=[200,
400,
600,
800,
1000,
1200,
1400,
1600,
1800,
2000]

mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]

Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least squares minimization to solve for the unknown coefficients. The output variable popt is an array that contains our desired variables As and Ts.

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]

Now we can just output our data to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]

mu=[0.000012924,
0.000022217,
0.000029602,
0.000035932,
0.000041597,
0.000046812,
0.000051704,
0.000056357,
0.000060829,
0.000065162]

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!

Summary

In this post, we looked at how we can simply use a database of viscosity-temperature data and the Python package SciPy to solve for our unknown Sutherland viscosity coefficients. The NIST WebBook was used to grab some data, and the data was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.

This task could also easily be accomplished using the MATLAB curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
  23 Apr, 2019

The most common complaint I hear, and the most common problem I observe with OpenFOAM, is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is just as steep for any other software.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is equally likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but it’s not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish

(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method by H. K. Versteeg and W. Malalasekera

(c) Computational Fluid Dynamics: The Basics with Applications by John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion, but I have found it to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can’t really speak to how well they work – mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is for Microsoft Office (I guess unless you’re a gamer – and even then, more and more games are now on Linux). Not only that, but the VAST majority of forums and troubleshooting associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox VM, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual-booting is another option, but seamlessly moving between the environments is easier.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:

http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf

(8) CFL Number Matters

If you are running a transient case, the Courant–Friedrichs–Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the time step by a factor of 2. More often than not, this solves the problem.
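As a quick back-of-the-envelope check, the convective Courant number in the smallest cell is easy to estimate (a sketch; the numbers are made up):

# Convective Courant number Co = U * dt / dx
U  = 100.0     # local flow speed, m/s (assumed)
dx = 1.0e-3    # smallest cell size, m (assumed)
dt = 5.0e-6    # time step, s
Co = U * dt / dx
print("Co =", Co)  # 0.5 here; if a run with a good mesh crashes, halve dt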

For large time steps, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam

For the record, this point falls under point (1), Understand CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

https://wiki.openfoam.com/%223_weeks%22_series

If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take 3 weeks. This touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude from the get-go that they should be using a different software. They think somehow open-source means that it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open-source package. The number of OpenFOAM citations has grown consistently every year (https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.

Summary

Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
  22 Apr, 2019
Airfoil Mesh Generated with curiosityFluidsAirfoilMesher.py

Here I will present something I’ve been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me (who knows if you are), I simulate a lot of airfoils. Partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain, especially if you are starting from scratch.

The main ways I have meshed airfoils to date have been:

(a) Mesh it in a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise – oh how I miss it.

But getting the mesh to look good was always sort of tedious. So I attempted to come up with a python script that takes the airfoil data file, minimal inputs and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-grid domain
(b) Be able to specify the boundary layer growth rate
(c) Be able to set the first layer wall thickness
(d) Be mostly automatic (few user inputs)
(e) Have good mesh quality – pass all checkMesh tests
(f) Consistent quality – meaning when I make the mesh finer, the quality stays the same or gets better
(g) Be able to do both closed and open trailing edges
(h) Be able to handle most airfoils (up to high cambers)
(i) Automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) through (f). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but personally I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script and something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!

Hopefully, this is useful to some of you out there!

Download

You can download the script here:

https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.

Instructions

(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh

PS: You need to run this with Python 3, and you need to have NumPy installed.

Inputs

The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil dat file should have a chordlength of 1. This variable allows you to scale the domain to a different size.

airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge

TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading and can help improve mesh uniformity

trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.

Examples

12% Joukowski Airfoil

Inputs:

With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in ParaView:

Clark-y Airfoil

The clark-y has some camber, so I thought it would be a logical next test to the previous symmetric one. The inputs I used are basically the same as the previous airfoil:


With these inputs, the result looks like this:


Mesh Quality:


Visualizing the mesh quality:

MH60 – Flying Wing Airfoil

Here is an example of a flying-wing airfoil (a good test, since the trailing edge is tilted upwards).

Inputs:


Again, these are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change the MaxCellSize, firstLayerHeight, and gradings, some modification may be required. However, if you just halve the MaxCellSize and halve the firstLayerHeight, you “should” get a similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality

Summary

Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.

The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!

Comments and bug reporting encouraged!

DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM®  and OpenCFD®  trademarks.

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
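As a rough illustration of what such a calculator evaluates, here is a minimal Python sketch of the standard normal-shock relations for a calorically perfect gas (a generic sketch, not the calculator's actual source):

def normal_shock(M1, gamma=1.4):
    """Downstream Mach number and jump ratios across a normal shock."""
    M2 = ((1.0 + 0.5 * (gamma - 1.0) * M1**2)
          / (gamma * M1**2 - 0.5 * (gamma - 1.0)))**0.5
    p2_p1 = 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0)      # static pressure ratio
    r2_r1 = (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)  # density ratio
    T2_T1 = p2_p1 / r2_r1                   # ideal gas: T2/T1 = (p2/p1)/(rho2/rho1)
    return M2, p2_p1, r2_r1, T2_T1

print(normal_shock(2.0))  # M2 ~ 0.577, p2/p1 = 4.5, rho2/rho1 ~ 2.667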

If you found this useful and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions at www.stfsol.com for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or of their suitability or outcome for any given purpose.

Hanley Innovations

► Accurate Aerodynamics with Stallion 3D
  17 Aug, 2019

Stallion 3D is an extremely versatile tool for 3D aerodynamics simulations.  The software solves the 3D compressible Navier-Stokes equations using novel algorithms for grid generation, flow solutions and turbulence modeling. 


The proprietary grid generation and immersed boundary methods find objects arbitrarily placed in the flow field and then automatically place an accurate grid around them without user intervention. 


Stallion 3D's algorithms are fine-tuned to analyze inviscid flow with minimal losses. The above figure shows the surface pressure of the BD-5 aircraft (obtained from the OpenVSP hangar) using the compressible Euler algorithm.


Stallion 3D solves the Reynolds Averaged Navier-Stokes (RANS) equations using a proprietary implementation of the k-epsilon turbulence model in conjunction with an accurate wall function approach.


Stallion 3D can be used to solve aerodynamics problems about complex geometries in subsonic, transonic and supersonic flows.  The software computes and displays the lift, drag and moments for complex geometries in the STL file format.  Actuator discs (up to 100) can be added to simulate prop wash for propeller and VTOL/eVTOL aircraft analysis.



Stallion 3D is a versatile and easy-to-use software package for aerodynamic analysis.  It can be used for computing performance and stability (both static and dynamic) of aerial vehicles including drones, eVTOL aircraft, light airplanes and dragons (above graphics via Thingiverse).

More information about Stallion 3D can be found at http://www.hanleyinnovations.com/stallion3d.html.

► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse


Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only the STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator disc (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2-d plots of Cp and other quantities along constant coordinates line along the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit http://www.hanleyinnovations.com/stallion3d.html or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017


Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.




More information about the software can be found at the following url:
http://www.hanleyinnovations.com/airfoildigitizerhelp.html

Thanks for reading.


► Your In-House CFD Capability
  15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company just at the push of a button?  Stallion 3D gives you this very power using your MS Windows laptop or desktop computer. The software provides accurate CL, CD & CM numbers directly from CAD geometries without the need for user grid generation and costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC.  It utilizes the power that is hidden in your personal computer (64-bit & multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, RANS) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of a turbulent flow field.

Unlike other CFD software that requires you to purchase grid generation software (and spend days generating a grid), grid generation is automatic and included within Stallion 3D.  Results are often obtained within a few hours of opening the software.

Do you need to analyze upwind and downwind sails?  Do you need data for wings and ship stabilizers at angles of 10, 40, 80, 120 degrees and beyond? Do you need accurate lift, drag & temperature predictions at subsonic, transonic and supersonic speeds? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

Tutorials, videos and more information about Stallion 3D version 4.0 can be found at:
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

► Avoid Testing Pitfalls
  24 Jan, 2017


The only way to know if your idea will work is to test it.  Rest assured, as a design engineer your idea and designs will be tested over and over again often in front of a crowd of people.

As an aerodynamics design engineer, Stallion 3D helps you to avoid the testing pitfalls that would otherwise keep you awake at night. An advantage of Stallion 3D is that it enables you to test your designs in the privacy of your laptop or desktop before your company actually builds a prototype.  As someone who uses Stallion 3D for consulting, I find it very exciting to see my designs flying the way they were simulated in the software. Stallion 3D will assure that your creations are airworthy before they are tested in front of a crowd.

I developed Stallion 3D for engineers who have an innate love of and aptitude for aerodynamics but who do not want to deal with the hassles of standard CFD programs.  Innovative technologies should always take a few steps out of an existing process to make the journey more efficient.  Stallion 3D enables you to skip the painful step of grid (mesh) generation, reducing your workflow to just a few seconds to set up and run a 3D aerodynamics case.

Stallion 3D helps you to avoid the common testing pitfalls.
1. UAV instabilities and takeoff problems
2. Underwhelming range and endurance
3. Pitch-up instabilities
4. Incorrect control surface settings at launch and level flight
5. Not enough propulsive force (thrust) due to excess drag and weight.

Are the results of Stallion 3D accurate?  Please visit the following page to see the latest validations.
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.
► Flying Wing UAV: Design and Analysis
  15 Jan, 2017

3DFoil is a design and analysis software package for wings, hydrofoils, sails and other aerodynamic surfaces. It requires a computer running MS Windows 7, 8 or 10.

I wrote the 3DFoil software several years ago using a vortex lattice approach. The vortex lattice method in the code is based on vortex rings (as opposed to the horseshoe vortex approach). The vortex ring method allows for wing twist (geometric and aerodynamic), so a designer can fashion the wing for drag reduction and prevent tip stall by optimizing the amount of washout. The approach also allows sweep (backward and forward) and multiple dihedral/anhedral angles.
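3DFoil's source code is not public, but the basic building block of any vortex-ring lattice is the Biot-Savart law for a straight vortex filament. Below is a minimal Python sketch of that kernel, following the standard Katz & Plotkin form; the function name and the unit-circulation default are illustrative, not 3DFoil's actual code.

import numpy as np

def segment_induced_velocity(p, a, b, gamma=1.0, eps=1e-12):
    # Velocity at point p induced by a straight vortex filament running
    # from a to b with circulation gamma (one edge of a vortex ring).
    r1, r2 = p - a, p - b
    cr = np.cross(r1, r2)
    cr2 = np.dot(cr, cr)
    if cr2 < eps:              # p lies (nearly) on the filament axis
        return np.zeros(3)
    r0 = b - a
    term = np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return gamma / (4.0 * np.pi) * cr / cr2 * term

The influence of a whole vortex ring is the sum over its four edges, and a wing becomes a lattice of such rings whose strengths are found by enforcing flow tangency at control points.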
Another feature that I designed into 3DFoil is the capability to predict profile drag and stall. This is done by analyzing the wing cross sections with a linear-strength vortex panel method and an ordinary differential equation boundary layer solver. The software uses the boundary layer solution to predict the locations of the transition and separation points.

The following video shows how to use 3DFoil to design and analyze a flying wing UAV aircraft. 3DFoil's user interface is based on the multi-surface approach. In this method, the wing is designed using multiple tapered surfaces, where the designer can specify airfoil shapes, sweep, dihedral angles and twist. With this approach, the designer can see the contribution to the lift, drag and moments from each surface. Towards the end of the video, I show how the multi-surface approach is used to design effective winglets by comparing the profile drag and induced drag generated by the winglet surfaces. The video also shows how to find the longitudinal and lateral static stability of the wing.



The following steps are used to design and analyze the wing in 3DFoil:
1. Input the dimensions and sweep of half of the wing (half span).
2. Input the dimensions and sweep of the winglet.
3. Join the winglet and main wing.
4. Generate the full aircraft using the mirror-image insert function.
5. Find the lift, drag and moments.
6. Compute longitudinal and lateral stability.
7. Look at the contributions of the surfaces.
8. Verify that the winglets provide drag reduction.

More information about 3DFoil can be found at the following url: http://www.hanleyinnovations.com/3dfoil.html.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

CFD and others... top

► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but be dissipative for the non-resolvable high wave numbers. In this way, the simulation will resolve a wide turbulent spectrum while damping out the non-resolvable small eddies to prevent energy pile-up, which can cause the simulation to diverge.

We want to emphasize the equal importance of numerical dissipation and dispersion, both of which can be generated by the space and time discretizations. It is well known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., by explicit Runge-Kutta schemes.
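One way to see these two error types concretely is a modified-wavenumber analysis: apply a difference stencil to the Fourier mode exp(ikx) and compare the resulting numerical wavenumber with the exact one. Below is a minimal Python sketch; the 6th-order central stencil is standard, while the 5th-order upwind-biased stencil is an illustrative stand-in, not necessarily the exact scheme studied here.

import numpy as np

theta = np.linspace(1e-3, np.pi, 400)      # k*dx over the resolvable range

def modified_wavenumber(stencil, theta):
    # For f' ~ (1/dx) * sum_m c_m f_{j+m}, acting on exp(i*k*x) gives
    # i*ktilde*dx = sum_m c_m exp(i*m*theta).
    s = sum(c * np.exp(1j * m * theta) for m, c in stencil.items())
    return -1j * s                         # Re -> dispersion, Im -> dissipation

central6 = {-3: -1/60, -2: 3/20, -1: -3/4, 1: 3/4, 2: -3/20, 3: 1/60}
upwind5 = {m: c / 60.0 for m, c in
           {-3: -2, -2: 15, -1: -60, 0: 20, 1: 30, 2: -3}.items()}

for name, st in (("central 6th", central6), ("upwind-biased 5th", upwind5)):
    kt = modified_wavenumber(st, theta)
    print(f"{name}: max |Im(ktilde*dx)| = {np.abs(kt.imag).max():.3f}")

Running this shows a zero imaginary part for the central stencil (purely dispersive errors) and a negative imaginary part for the biased stencil, i.e., built-in damping of the poorly resolved high wave numbers.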

We recently analysed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, all with the same time integration approach (a Runge-Kutta scheme) and the same time step. The FCD schemes have an 8th-order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees of freedom (DOFs) in Figure 1. The initial condition is a Gaussian profile, and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the differences.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error from the central FD scheme caused it to miss the peak, and also generate large errors elsewhere.

Finally, simulation results for the viscous Burgers' equation are shown in Figure 2, which compares the energy spectra computed with the various schemes against that of a direct numerical simulation (DNS).

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme, with a significant high-wave-number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. The upwind-biased FD scheme clearly out-performed the central FD scheme, since it resolved the same range of wave numbers without the energy pile-up.


► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease of mesh generation and of load balancing on parallel architectures. Discontinuous Galerkin (DG) and related methods such as spectral difference (SD) and flux reconstruction/correction procedure via reconstruction (FR/CPR) have received much attention because of their geometric flexibility and scalability. They have matured to become quite robust for a wide range of applications.

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes, at a transonic condition with a Reynolds number of more than 1M. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs) and costs about 1/3 the CPU time of the 2nd order simulation using the commercial solver, which has 28.7M DOFs. Furthermore, the 3rd order simulation is much more accurate, as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster to achieve a similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try on your problems, and you may be surprised. Academic and trial use is completely free. Just visit hocfd.com to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
  17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to computing turbulence: 1) the Reynolds-averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the direct numerical simulation (DNS) approach, in which all scales are resolved; and 3) the large eddy simulation (LES) approach, in which the large scales are computed while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved widespread success in engineering design, some applications call for LES, e.g., flow at high angles of attack. The spatial filtering of a non-linear PDE results in an SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress to the rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a rate that is physically correct. Later an improved version called the dynamic Smagorinsky model was developed by Germano et al., and it demonstrated much better results.
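To make the closure concrete, here is a minimal 2D Python sketch of the original Smagorinsky eddy-viscosity model. The constant Cs = 0.17 is a typical textbook value; actual solvers differ in constants, filter definitions and near-wall treatment.

import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, Cs=0.17):
    # nu_t = (Cs*Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij) and S_ij the
    # resolved rate-of-strain tensor (2D here for brevity).
    Sxx, Syy = dudx, dvdy
    Sxy = 0.5 * (dudy + dvdx)
    Smag = np.sqrt(2.0 * (Sxx**2 + Syy**2 + 2.0 * Sxy**2))
    return (Cs * delta)**2 * Smag

The modeled SGS stress is then tau_ij ~ -2 * nu_t * S_ij, which removes resolved kinetic energy wherever the resolved strain is large.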

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme contains no numerical dissipation in space. However, time integration can introduce dissipation. For example, a 2nd order central difference scheme coupled with the SSP RK3 scheme is linearly stable (subject to a CFL condition), and the combination does contain numerical dissipation from the time integration. When this scheme is used to perform an LES, the simulation will blow up without an SGS model because of a lack of dissipation for eddies at high wave numbers. It is easy to conclude that the successful LES is due to the SGS stress being properly modeled. A recent study with the Burgers' equation strongly disputes this conclusion. It was shown that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. Therefore, the role of the SGS model, in the above scenario, was to stabilize the simulation by adding numerical dissipation.

For numerical methods which have natural dissipation at high wave numbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, the SGS model can damage the solution quality because this extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature to support the use of implicit LES (ILES), where the SGS stress simply vanishes. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
    3 Jan, 2017
2016 was undoubtedly the most extraordinary year for small-odds events. Take sports, for example:
  • Leicester won the Premier League in England, defying odds of 5000 to 1
  • The Cubs won the World Series after a 108-year wait
In politics, I do not believe many people truly expected that Britain would exit the EU, or that Trump would become the next US president.

On a personal level, I also experienced an equally extraordinary event: the attempted coup in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016, in the historic city of Istanbul. A terror attack at Istanbul's international airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place, although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and we decided to go to Istanbul to attend and support the conference.

Given the extraordinary circumstances, the conference organizers did a fine job of pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions, though three sessions were planned originally. We really enjoyed Istanbul, with its beautiful natural attractions and friendly people.

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was controlled by the rebels. However, the Turkish president managed to FaceTime a private TV station, essentially turning the event around. Soon after, many people went to the bridges and the squares and overpowered the rebels with bare fists.


A Tank outside my taxi



A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA banned all direct flights from Turkey. I was lucky enough to find a new flight, with a stop in Zurich...

In 2016, I also lost a very good friend and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific tennis injury in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5, 2016, to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion


To close, I wish all of you a very happy 2017!

  1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.



► The Linux Version of meshCurve is Now Ready for All to Download
  20 Apr, 2016
The 64-bit version for the Linux operating system is now ready for you to download. Because of the complexities associated with various libraries, we experienced a delay of slightly more than a month. Here is the link again.

Please let us know your experience, good or bad. Good luck!
► Announcing meshCurve: A CAD-free Low Order to High-Order Mesh Converter
  14 Mar, 2016
We are finally ready to release meshCurve to the world!

The description of meshCurve is provided in AIAA Paper No. 2015-2293. The primary developer is Jeremy Ims, who has been supported by NASA and NSF. Zhaowen Duan also made major contributions. By the way, Aerospace America also highlighted meshCurve in its 2015 annual review issue (on page 22). Many congratulations to Jeremy and Zhaowen on this major milestone!

The current version supports both the Mac OS X and Windows (64 bit) operating systems. The Linux version will be released soon.

Here is roughly how meshCurve works. The input is a linear mesh in the CGNS format. The user selects which boundary patches should be reconstructed to high order. After that, geometrically important features are detected; the user can also manually select or delete features. Next, the selected patches are reconstructed to add curvature. Finally, the interior volume meshes are curved (if necessary). The output mesh is also stored in the CGNS format.

We have tested the tool with meshes on the order of a million cells. But I still want to lower your expectations. So try it out yourself and let us know whether you like it or hate it. Please do report bugs so that improvements can be made in the future.

Good luck!

Oh, did I mention the tool is completely free? Here is the meshCurve link again.






ANSYS Blog top

► How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon
  10 Oct, 2018
Illini EV Concept Team Photo at Shell Eco Marathon 2018

Weight is the enemy of all teams that design electric cars for the Shell Eco Marathon.

Reducing the weight of electric cars improves the vehicle’s acceleration and power efficiency. These performance improvements make all the difference come race day.

However, if the car’s weight is reduced too much, it could lead to safety concerns.

Illini EV Concept (Illini) is a Shell Eco Marathon team out of the University of Illinois. Team members use ANSYS academic research software to optimize the chassis of their electric car without compromising safety.

Where to Start When Reducing the Weight of Electric Cars?

Front bump composite failure under a load of 2000N.

The first hurdle of the Shell Eco Marathon is an initial efficiency contest. Only the best teams from this efficiency assessment even make it into the race.

Therefore, Illini concentrates on reducing the most weight in the shortest amount of time to ensure it makes it to the starting line.

Illini notes that its focus is on reducing the weight of its electric car’s chassis.

“The chassis is by far the heaviest component of our car, so ANSYS was used extensively to help design our first carbon fiber monocoque chassis,” says Richard Mauge, body and chassis leader for Illini.

“Several loading conditions were tested to ensure the chassis was stiff enough and the carbon fiber did not fail using the composite failure tool,” he adds.

Competition regulations ensure the safety of all team members. These regulations state that each team must prove that their car is safe under various conditions. Simulation is a great tool to prove a design is within safety tolerances.

“One of these tests included ensuring the bulkhead could withstand a 700 N load in all directions, per competition regulations,” says Mauge. If the teams’ electric car designs can’t survive this simulation come race day, then their cars are not racing.

Iterate and Optimize the Design of Electronic Cars with Simulation

Front bump deformation under a load of 2000N.

Simulations can do more than prove a design is safe. They can also help to optimize designs.

Illini uses what it learns from simulation to optimize the geometry of its electric car’s chassis.

The team found that its new design increases torsional rigidity by around 100 percent, even after a 15 percent decrease in weight compared to last year's model.

“Simulations ensure that the chassis is safe enough for our driver. It also proved that the chassis is lighter and stiffer than ever before. ANSYS composite analysis gave us the confidence to move forward with our radical chassis redesign,” notes Mauge.

The optimization story continues for Illini. The team plans to explore easier and more cost-effective ways to manufacture carbon fiber parts. For instance, it wants to replace the core of its parts with foam and increase the number of bonded pieces.

If team members just go with their gut on these hunches, they could find themselves scratching their heads when something goes wrong. However, with simulations, the team makes better informed decisions about its redesigns and manufacturing process.

To get started with simulation, try our free student download. For student teams that need to solve in-depth problems, check out our software sponsorship program.

The post How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon appeared first on ANSYS.

► Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers
    9 Oct, 2018
This engine intake simulation was post-processed using EnSight Enterprise. This allowed for the processing of a large data set to be shared among servers.

Simulation data sets have a funny habit of ballooning as engineers move through the development cycle. At some point, post-processing these data sets on a single machine becomes impractical.

Engineers can speed up post-processing by spatially or temporally decomposing large data sets so they can be post-processed across numerous servers.

The idea is to utilize the idle compute nodes you used to run the solver in parallel to now run the post-processing in parallel.

In ANSYS 19.2, EnSight Enterprise lets you spatially or temporally decompose data sets. EnSight Enterprise is an updated version of EnSight HPC.

Post-Processing Using Spatial Decomposition

EnSight uses a client/server architecture. The client program takes care of the graphical user interface (GUI) and rendering operations, while the server program loads the data, creates parts, extracts features and calculates results.

If your model is too large to post-process on a single machine, you can use the spatially decomposed parallel operation to assign each spatial partition to its own EnSight Server. A good server-to-model ratio is one server for every 50 million elements.

Each EnSight Server can be located on a separate compute node on any compute resource you’d like. This allows engineers to utilize the memory and processing power of heterogeneous high-performance computing (HPC) resources for data set post-processing.

The engineers effectively split the large data set into pieces, with each piece assigned to its own compute resource. This dramatically increases the data set sizes you can load and process.

Once you have loaded the model into EnSight Enterprise, there are no additional changes to your workflow, experience or operations.

Post-Processing Using Temporal Decomposition

Keep in mind that this decomposition concept can also be applied to transient data sets. In this case, the data set is split up temporally rather than spatially: each server receives its own set of time steps.

A turbulence simulation created using EnSight Enterprise post-processing

EnSight Enterprise offers performance gains when the server operations outweigh the communication and rendering time of each time step. Since it’s hard to predict network communication or rendering workloads, you can’t easily create a guiding principle for the server-to-model ratio.

However, you might want to use a few servers when your model has more than 10 million elements and over a hundred time steps. This will help keep the processing load of each server to a moderate level.
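The two rules of thumb above are easy to capture in code. The small helper below simply restates the guidance in this post; it is an illustration, not an official EnSight utility.

import math

def recommended_servers(num_elements, num_timesteps=1):
    # Spatial decomposition: roughly one server per 50 million elements.
    spatial = max(1, math.ceil(num_elements / 50e6))
    # Temporal decomposition: consider a few servers for transient data
    # with more than 10M elements and more than 100 time steps.
    temporal = 4 if (num_elements > 10e6 and num_timesteps > 100) else 1
    return {"spatial": spatial, "temporal": temporal}

print(recommended_servers(250e6, 500))     # {'spatial': 5, 'temporal': 4}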

How EnSight Speeds Up the Post-Processing of Large Simulation Data Sets

One more tip for optimal post-processing within EnSight Enterprise: engineers achieve the best performance gains by pre-decomposing the data and locating it locally to the compute resources they anticipate using. Ideally, this data should be in EnSight Case format.

To learn more, check out EnSight or register for the webinar Analyze, Visualize and Communicate Your Simulation Data with ANSYS EnSight.

The post Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers appeared first on ANSYS.

► Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing
    8 Oct, 2018

Your design team will make informed decisions about the products they create when they bring detailed simulations up front in the development cycle.

The 19.2 release of ANSYS Discovery AIM addresses this need for early simulation.

It does so by streamlining templates for physics-aware meshing and rapid results.

High-Fidelity Simulation Through Physics-Aware Meshing

Discovery AIM user interface with a solution fidelity slide bar (top left), area of interest marking tool (left, middle), manual mesh controls (bottom, center) and a switch to turn the mesh display on and off (right, top).

Analysts have likely told your design team about the importance of a quality mesh to achieve accurate simulation results.

Creating high-quality meshes takes time and specialized training. Your design team likely doesn't have the time or patience to learn this art.

To account for this, Discovery AIM automatically incorporates physics-aware meshing behind the scenes. In fact, your design team doesn’t even need to see the mesh creation process to complete the simulation.

This workflow employs several meshing best practices that analysts typically use. The tool even accounts for areas that require mesh refinement based on the physics being assessed.

For instance, areas with a sliding contact gain a finer mesh so the sliding behavior can be accurately simulated. Additionally, areas near the walls of fluid-solid interfaces are also refined to ensure this interaction is properly captured. Physics-aware meshing ensures small features and areas of interest won't get lost in your design team's simulation.

The simplified meshing workflow also lets your design team choose their desired solution fidelity. This input will help the software balance the time the solver takes to compute results with the accuracy of the results.

Though physics-aware meshing can create the mesh under the hood of the simulation process, it still has tools allowing user-control of the mesh. This way, if your design team chooses to dig into the meshing details — or an analyst decides to step in — they can finely tune the mesh.

Capabilities like this further empower designers as techniques and knowledge traditionally known only by analysts are automated in an easy-to-use fashion.

Gain Rapid Results in Important Areas You Might Miss

The 19.2 release of Discovery AIM improves your design team's ability to explore simulation results.

Many analysts will know instinctively where to focus their post-processing, but without this experience, designers may miss areas of interest.

Discovery AIM enables the designer to interactively explore and identify these critical results. These initial results are rapidly displayed as contours, streamlines or field flow lines.

Field flow and streamlines for an electromagnetics simulation

Once your design team finds locations of interest within the results, they can create higher fidelity results to examine those areas of interest in further detail. Designers can then save the results and revisit them when comparing design points or after changing simulation inputs.

To learn more about other changes to Discovery AIM — like the ability to directly access fluid results — watch the Discovery AIM 19.2 release recorded webinar or take it for a test drive.

The post Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing appeared first on ANSYS.

► Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer
    5 Oct, 2018
Traditional chemotherapy can often be blocked by a tumor’s stroma.

There are few illnesses as crafty as pancreatic cancer. It spreads like weeds and resists chemotherapy.

Pancreatic cancer is often asymptomatic, has a low survival rate and is often misdiagnosed as diabetes. And, this violent killer is almost always inoperable.

The pancreatic tumor’s resistance to chemotherapy comes from a shield of supporting connective tissue, or stroma, which it builds around itself.

Current treatments attempt to overcome this defense by increasing the dosage of intravenously administered chemotherapy. Sadly, this rarely works, and the high dosage is exceptionally hard on patients.

Nonetheless, doctors need a way to shrink these tumors so that they can surgically remove them without risking the numerous organs and vasculature around the pancreas.

“We say if you can’t get the drugs to the tumor from the blood, why not get it through the stroma directly?” asks William Daunch, CTO at Advanced Chemotherapy Technologies (ACT), an ANSYS Startup Program member. “We are developing a medical device that implants directly onto the pancreas. It passes drugs through the organ, across the stroma to the tumor using iontophoresis.”

By treating the tumor directly, doctors can theoretically shrink the tumor to an operable size with a smaller dose of chemotherapy. This should significantly reduce the effects of the drugs on the rest of the patient’s body.

How to Treat Pancreatic Cancer with a Little Electrochemistry

Simplified diagram of the iontophoresis used by ACT’s chemotherapy medical device.

Most of the drugs used to treat pancreatic cancer are charged. This means they are affected by electromotive forces.

ACT has created a medical device that takes advantage of the medication’s charge to beat the stroma’s defenses using electrochemistry and iontophoresis.

The device contains a reservoir with an electrode. The reservoir connects to tubes that connect to an infusion pump. This setup ensures that the reservoir is continuously filled. If the reservoir is full, the dosage doesn’t change.

The tubes and wires are all connected into a port that is surgically implanted into the patient’s abdomen.

A diagram of ACT’s chemotherapy medical device.

The circuit is completed by a metal panel on the back of the patient.

“When the infusion pump runs, and electricity is applied, the electromotive forces push the medication into the stroma’s tissue without a needle. The medication can pass up to 10 to 15 mm into the stroma’s tissue in about an hour. This is enough to get through the stroma and into the tumor,” says Daunch.

“Lab tests show that the medical device was highly effective in treating human pancreatic cancer cells within mice,” added Daunch. “With conventional infusion therapy, the tumors grew 700 percent and with the device working on natural diffusion alone the tumors grew 200 percent. However, when running the device with iontophoresis, the tumor shrank 40 percent. This could turn an inoperable tumor into an operable one.” Subsequent testing of a scaled-up device in canines demonstrated depth of penetration and the low systemic toxicity required for a human device.

Daunch notes that the Food and Drug Administration (FDA) took notice of these results. ACT's next steps are to develop a human clinical device and move on to human safety trials.

Simulation Optimized the Fluid Dynamics in the Pancreatic Cancer Chemotherapy Implant

Before these promising tests, ACT faced a few design challenges when coming up with their chemotherapy implant.

For example, “There was some electrolysis on the electrode in the reservoir. This created bubbles that would change the electrode’s impedance,” explains Daunch. “We needed a mechanism to sweep the bubbles from the surface.”

An added challenge is that ACT never knows exactly where doctors will place the device on the pancreas. As a result, the mechanism to sweep the bubbles needs to work from any orientation.

Simulations help ACT design their medical device so bubbles do not collect on the electrode.

“We used ANSYS Fluent and ANSYS Discovery Live to iterate a series of designs,” says Daunch. “Our design team modeled and validated our work very quickly. We also noticed that the bubbles didn’t need to leave the reservoir, just the electrode.”

“If we place the electrode on a protrusion in a bowl-shaped reservoir the bubbles move aside into a trough,” explains Daunch. “The fast fluid flow in the center of the electrode and the slower flow around it would push the bubbles off the electrode and keep them off until the bubbles floated to the top.”

As a result, the natural fluid flow within the redesigned reservoir was able to ensure the bubbles didn’t affect the electrode’s impedance.

To learn how your startup can use computational fluid dynamics (CFD) software to address your design challenges, please visit the ANSYS Startup Program.

The post Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer appeared first on ANSYS.

► Making Wireless Multigigabit Data Transfer Reliable with Simulation
    4 Oct, 2018

The demand for wireless communications with high data transfer rates is growing.

Consumers want wireless 4K video streams, virtual reality, cloud backups and docking. However, it’s a challenge to offer these data transfer hogs wirelessly.

Peraso aims to overcome this challenge with their W120 WiGig chipset. This device offers multigigabit data transfers, is as small as a thumb-drive and plugs into a USB 3.0 port.

The chipset uses the Wi-Fi Alliance’s new wireless networking standard, WiGig.

This standard adds a 60 GHz communication band to the 2.4 and 5 GHz bands used by traditional Wi-Fi. The result is higher data rates, lower latency and dynamic session transferring with multiband devices.

In theory, the W120 WiGig chipset could run some of the heaviest data transfer hogs on the market without a cord. Peraso’s challenge is to design a way for the chipset to dissipate all the heat it generates.

Peraso uses the multiphysics capabilities within the ANSYS Electronics portfolio to predict the Joule heating and the subsequent heat flow effects of the W120 WiGig chipset. This information helps them iterate their designs to better dissipate the heat.

How to Design High Speed Wireless Chips That Don’t Overheat

Systems designers know that asking for high-power transmitters in a compact and cost-effective enclosure translates into a thermal challenge. The W120 WiGig chipset is no different.

A cross section temperature map of the W120 WiGig chipset’s PCB. The map shows hot spots where air flow is constrained by narrow gaps between the PCB and enclosure.

The chipset includes active and passive components and two main chips mounted on a printed circuit board (PCB). The system reaches considerable temperatures due to the Joule heating effect.

To dissipate this heat, design engineers include a large heat sink that connects only to the chips and a smaller one that connects only to the PCB. The system is also enclosed in a casing with limited openings.

Simulation of the air flow around the W120 WiGig chipset without an enclosure. Simulation was made using ANSYS Icepak.

Traditionally, optimizing this set up takes a lot of trial and error as measuring the air flow within the enclosure would be challenging.

Instead, Peraso uses ANSYS SIwave to simulate the Joule heating effects of the system. This heat map is transferred to ANSYS Icepak, which then simulates the current heat flow, orthotropic thermal conductivity, heat sources and other thermal effects.

This multiphysics simulation enables Peraso to predict the heat distribution and the temperature at every point of the W120 WiGig chipset.

From there, Peraso engineers iterate their designs until they reach their coolest setup.

This simulation-led design tactic helped Peraso optimize the system until it reached the heat transfer balance it needed. To learn how Peraso performed this iteration, read Cutting the Cords.

The post Making Wireless Multigigabit Data Transfer Reliable with Simulation appeared first on ANSYS.

► Designing 5G Cellular Base Station Antennas Using Parametric Studies
    3 Oct, 2018

There is only so much communication bandwidth available. This will make it difficult to handle the boost in cellular traffic expected from the 5G network using conventional cellular technologies.

In fact, cellular networks are already running out of bandwidth. This severely limits the number of users and data rates that can be accommodated by wireless systems.

One potential solution is to leverage beamforming antennas. These devices transmit different signals to different locations on the cellular network simultaneously over the same frequency.

Pivotal Commware is using ANSYS HFSS to design beamforming antennas for cellular base stations that are much more affordable than current technology.

How 5G Networks Will Send More Signals on Existing Bandwidths

A 28 GHz antenna for a cellular base station.

Traditionally, cellular technologies — 3G and 4G LTE — crammed more signals on the existing bandwidth by dividing the frequencies into small segments and splitting the signal time into smaller pulses.

The problem is, there is only so much you can do to chop up the bandwidth into segments.

Alternatively, Pivotal’s holographic beamforming (HBF) antennas are highly directional. This means they can split up the physical space a signal moves through.

This way, two cells in two locations can use the same frequency at the same time without interfering with each other.

Additionally, these HBF antennas use varactors (variable capacitors) and electronic components that are simpler and more affordable than those in existing beamforming antennas.

How to Design HBF Antennas for 5G Cellular Base Stations

A parametric study of Pivotal’s HBF designs allowed them to look at a large portion of their design space and optimize for C-SWaP and roll-off. This study looks at roll-off as a function of degrees from the centerline of the antenna.

Antenna design companies — like Pivotal — are always looking to design devices that optimize cost, size, weight and power (C-SWaP) and performance.

So, how was Pivotal able to account for C-SWaP and performance so thoroughly?

Traditionally, this was done by building prototypes, finding flaws, creating new designs and integrating manually.

Meeting a product launch with an optimized product using this manual method is grueling.

Pivotal instead uses ANSYS HFSS to simulate their 5G antennas digitally. This allows them to assess their HBF antennas and iterate their designs faster using parametric studies.

For instance, Pivotal wants to optimize its designs for performance characteristics like roll-off. To do so, engineers plug in parameter values, run simulations with those values and see how each parameter affects roll-off.

By setting up parametric studies, Pivotal assesses which parameters affect performance and C-SWaP the most. From there, the team can weigh different trade-offs until it settles on an optimized design that accounts for all the factors studied.
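The mechanics of such a study generalize beyond any one solver. As a schematic Python sketch (the parameter names and the dummy roll-off model are placeholders, not the HFSS API), a parametric study is essentially a loop over design variables that records a figure of merit for each combination.

import itertools

def run_simulation(n_varactors, spacing_mm):
    # Stand-in for a solver run; a real study would drive HFSS here.
    # This dummy roll-off model is purely illustrative.
    return 1.0 / n_varactors + 0.01 * spacing_mm

varactor_counts = [64, 128, 256]           # hypothetical design variables
spacings_mm = [2.5, 5.0, 7.5]

results = {(n, s): run_simulation(n, s)
           for n, s in itertools.product(varactor_counts, spacings_mm)}

best = min(results, key=results.get)       # lowest roll-off metric wins here
print("best (varactors, spacing):", best)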

To see how Pivotal set up their parametric studies and optimize their antenna designs, read 5G Antenna Technology for Smart Products.

The post Designing 5G Cellular Base Station Antennas Using Parametric Studies appeared first on ANSYS.

Convergent Science Blog top

► Changing the CFD Conference Game with the CONVERGE UC
  21 Aug, 2019

As the 2019 CONVERGE User Conference in New Orleans approaches, I’ve been thinking about the past five years of CONVERGE events. Let me take you back to the first CONVERGE User Conference. It was September 2014 in Madison, Wisconsin, and I was one of the first speakers. I talked about two-phase flows and the spray modeling we were doing at Argonne National Laboratory. Many of the people in the audience didn’t know you could do the kinds of calculations in CONVERGE that we were doing. Take needle wobble, for example. At the time, people didn’t know that you could not only move the needle up and down, but you could actually simulate it wobbling. After my talk, we had many interesting discussions with the other attendees. We made connections with international companies that we otherwise would not have had the chance to meet, and we formed collaborations with some of those companies that are still ongoing today.

At Argonne National Laboratory, I lead a team of more than 20 researchers, all of them focused on simulating either piston engines or gas turbines using high-performance computing. Our goal is to improve the predictive capability of piston engine and gas turbine simulations, and we do a lot of our work using CONVERGE. We develop physics-based models that we couple with CONVERGE to gain deeper insights from our simulations.

We routinely attend and present our work at conferences like SAE World Congress and ASME, and what really sets the CONVERGE User Conference apart is the focus of the event—it’s dedicated towards the people doing simulation work with piston engines, gas turbines, and other real-world applications. The user conference is the go-to place where we can meet all of the people doing 3D CFD simulations, so it’s a fantastic networking opportunity. We get to speak to people from academia and industry and learn about their research needs—understand what their pain points are, what their bottlenecks are, where the physics is not predictive enough. Then we take that information back to Argonne, and it helps us focus our research. 

Apart from the networking, the CONVERGE User Conference is also a great venue for presenting. My team has presented at the CONVERGE conferences on a wide variety of topics, including lean blow-out in gas turbine combustors, advanced ignition systems, co-optimization of engines and fuels, predicting cycle-to-cycle variation, machine learning for design optimizations, and modeling turbulent combustion in compression ignition and spark ignition engines. The attendees are engaged and highly technical, so you get direct, focused feedback on your work that can help you find solutions to challenges you may be encountering or give ideas for future studies.

The presenters themselves take the conference seriously. The quality of the presentations and the work presented is excellent. If you’ve never attended a CONVERGE User Conference before, my advice to you is to try to be a sponge. Bring your notebooks, bring your laptops, and take as many notes as you can. The amount of useful information you will gain from this conference is enormous and more relevant than other conferences you may attend, since this event is tailored for a specific audience. The CONVERGE User Conference also draws speakers from all over the world, which provides a unique opportunity to hear about the challenges that automotive original equipment manufacturers (OEMs), for example, face in other countries, which are different challenges than those in the United States. Listening to their presentations and getting access to those speakers has been very helpful for us. And since there are plenty of opportunities for networking, you can interact with the speakers at the conference and connect with them later on if you have further questions.

Overall, the CONVERGE User Conference is a great opportunity for presenting, learning, and networking. This is a conference where you will gain a lot of useful knowledge, meet many interesting people, and have some fun at the evening networking events. If you haven’t yet come to a CONVERGE User Conference—I highly recommend making this year your first.


Interested in learning more about the CONVERGE User Conference? Check out our website for details and registration!

► Apollo 11 at 50: Balancing the Two-Legged Stool
  15 Jul, 2019

On July 16th, I will look up at the night sky and celebrate the 50-year anniversary of the launch of Apollo 11. As I admire the full moon, the CFDer in me will think about the classic metaphor of the three-legged stool. Modern engineering efforts depend on theory, simulation, and experiment: Theory gives us basic understanding, simulation tells us how to apply this theoretical understanding to a practical problem, and experiment confirms that our applied understanding is in agreement with the physical world. One element does not seek to replace another; instead, each element reinforces the others. By modern standards, simulation did not exist in the 1960s⁠—NASA’s primary “computers” were the women we saw in Hidden Figures, and humans are limited to relatively simple calculations. When NASA sent people to the moon, it had to build a modern cathedral balanced atop a two-legged stool.

I like the cathedral metaphor for the Saturn V rocket because it expresses some unexpected similarities between the efforts. A medieval cathedral was a huge, societal construction effort. It required workers from all walks of life to contribute above and beyond, not just in scale but in care and diligence. Designers had to go past what they fully understood, overcoming unknown engineering physics through sheer persistence. The end product was a unique and breathtaking expression of craftsmanship on a colossal scale.

In aerospace, we are habituated to assembly lines, but each Saturn V was a one-off. The Apollo program as a whole employed some 400,000 people, and the Saturn family of launch vehicles was a major slice of the pie. Though their tools were certainly more advanced than a medieval artisan’s, these workers essentially built this 363-foot-tall rocket by hand. They had to, because the rocket had to be perfect. The rocket had to be perfect because there was so little margin for error, because engineers were reaching so far beyond the existing limits of understanding. Huge rockets are not routine today, but I want to highlight a few design challenges of the Saturn V as places where modern simulation tools would have had a program-altering effect.

The mighty F-1 remains the largest single-chambered liquid-fueled rocket engine ever fired. All aspects of the design process were challenging, but devising a practical combustion chamber was particularly torturous. Large rocket engines are prone to a complex interaction between combustion dynamics and aeroacoustics. Pressure waves within the chamber can locally enhance the combustion rate, which in turn alters the flow within the engine. If these physical processes occur at the wrong rates, the entire system can become self-exciting and unstable. From a design standpoint, engineers must control engine stability through chamber shaping, fuel and oxidizer injector design, and internal baffling. 

Without any way to simulate the fuel injection, mixing, combustion, and outflow, engineers were left with few approaches other than scaling, experimentation, and doggedness. They started with engines they knew and understood, then tried to vary them and enlarge them. They built a special 2D transparent thrust chamber, then applied high-speed photography to measure the unsteadiness of the combustion region. They literally set off tiny bombs within an operating engine, at a variety of locations, monitoring the internal pressure to see whether the blast waves decayed or were amplified. Eventually they produced a workable design for the F-1, but, in the words of program manager Wernher von Braun:

…lack of suitable design criteria has forced the industry to adopt almost a completely empirical approach to injector and combustor development… [which] does not add to our understanding because a solution suitable for one engine system is usually not applicable to another…

It was being performed by engineers, but in some senses, it wasn’t quite engineering. Persistence paid off in the end, but F-1 combustion instability almost derailed the whole Apollo program.

Close-up of an F-1 injector plate. Many of the 1428 liquid oxygen injectors and 1404 RP-1 fuel injectors can be seen. The injector plate is about 44 inches in diameter and is split into 13 injector compartments by two circular and twelve radial baffles. Photo credit: Mike Jetzer (heroicrelics.org).

Imagine if Rocketdyne engineers had had access to modern simulation tools! A tool like CONVERGE can simulate liquid fuel spray impingement directly, allowing an engineer to parametrically vary the geometry and spray parameters. A tool like CONVERGE can calculate the local combustion enhancement of impinging pressure fluctuations, allowing an engineer to introduce different baffle shapes and structures to measure their moderating effect. And the engineer can, in von Braun’s words, add to his or her understanding of how to combat combustion instability.

Snapshot from an RP-1 fuel tank on a Saturn I (flight SA-5). This camera looks down from the top center of the tank. Note the anti-slosh baffles. Photo credit: Mark Gray on YouTube.

Fuel slosh in the colossal lower-stage tanks presented another design challenge. The first-stage liquid oxygen tank was 33 feet in diameter and about 60 feet long. How do you study slosh in such an immense tank while subjecting it to what you think will be flight-representative vibration and acceleration? What about the behavior of leftover propellant in zero gravity? In the 1960s, the answer was you built the rocket and flew it! In fact, the early Saturn launches (uncrewed, of course) featured video cameras to monitor fuel flow within the tanks. Cameras of that era recorded to film, and these cameras were housed in ejectable capsules. After collecting their several minutes of footage, the capsules would deploy from the spent stage and parachute to safety. I bet those engineers would have been over the moon if you had presented them with modern volume of fluid simulation tools.

Readers who have watched Apollo 13 may recall that the center engine of the Saturn V second stage failed during the launch. This was due to pogo, another combustion instability problem. In a rocket experiencing pogo, a momentary increase in thrust causes the rocket structure to flex, which (at the wrong frequency) can cause the fuel flow to surge, causing another self-exciting momentary increase in thrust. In severe cases, this vibration can destroy the vehicle. Designers added various standpipes and accumulators to de-tune the system, but this was only performed iteratively, flying a rocket to measure the effects. Today, we can study the fluid-structure interaction before we build the structure! Modern simulation tools are dramatic aids to the design process.

Saturn V first-stage anti-pogo valve. Diagram credit: NASA.

Today’s aerospace engineering community is doing some amazing things. SpaceX and Blue Origin are landing rockets on their tails. The United Launch Alliance has compiled a perfect operational record with the Delta IV and Atlas V. Companies like Rocket Lab and Firefly Aerospace are demonstrating that you don’t need to have the resources of a multinational conglomerate to put payloads into orbit. But for me, nothing may ever surpass the incredible feat of engineers battling physical processes they didn’t fully understand, flying people to the moon on a two-legged stool.

Interested in reading more about the Saturn V launch vehicle? I recommend starting with Dr. Roger Bilstein’s Stages to Saturn.

► CONVERGE Chemistry Tools: The Simple Solution to Complex Chemistry
  20 May, 2019

As I’ve started to cook more, I’ve learned the true value of multipurpose kitchen utensils and appliances. Especially living in an apartment with limited kitchen space, the fewer tools I need to make delicious meals, the better. A rice cooker that doubles as a slow cooker? Great. A blender that’s also a food processor? Sign me up. Not only do these tools prove to be more useful, but they’re also more economical.

The same principle applies beyond kitchen appliances. CONVERGE CFD software is well known for its flow solver, autonomous meshing, and fully coupled chemistry solver, but did you know that it also features an extensive suite of chemistry tools, with even more coming in version 3.0? Whether you need to speed up your abnormal combustion simulations, create and validate new chemical mechanisms, expedite your design process with 0D or 1D modeling, or compare your chemical kinetics experiments with simulated results, CONVERGE chemistry tools have you covered. The many capabilities of CONVERGE translate to a broadly applicable piece of software for CFD and beyond.

Zero-Dimensional Simulations

CONVERGE 3.0 expands on the previous versions’ 0D simulation capabilities with a host of new tools and reactors that are useful across a wide range of applications. If you’re running diesel engine simulations, you can take advantage of CONVERGE’s autoignition utility to quickly generate ignition delay data for different combinations of temperature, pressure, and equivalence ratio. Furthermore, you can couple the autoignition utility with 0D sensitivity analysis to determine which reactions and species are important for ignition or to determine the importance of various reactions in forming a given species.
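CONVERGE's utility is built in, but the underlying idea (march a 0D constant-volume reactor in time and record when the temperature jumps) can be sketched with the open-source Cantera library. The mechanism (GRI-Mech 3.0), the operating point, and the 400 K temperature-rise criterion below are illustrative choices, not CONVERGE defaults.

import cantera as ct

gas = ct.Solution("gri30.yaml")            # GRI-Mech 3.0 ships with Cantera
gas.TP = 1200.0, 40e5                      # 1200 K, 40 bar
gas.set_equivalence_ratio(1.0, "CH4", "O2:1, N2:3.76")

reactor = ct.IdealGasReactor(gas)          # constant-volume 0D reactor
net = ct.ReactorNet([reactor])

t, T0 = 0.0, reactor.T
while t < 0.1:                             # integrate up to 100 ms
    t = net.step()
    if reactor.T > T0 + 400.0:             # simple temperature-rise criterion
        print(f"ignition delay ~ {t * 1e3:.3f} ms")
        break

Sweeping the initial temperature, pressure and equivalence ratio in such a loop is exactly the kind of ignition delay map the autoignition utility automates.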

The variable volume tool in CONVERGE 3.0 is a closed homogeneous reactor that can simulate a rapid compression machine (RCM). RCMs are ideal for chemical kinetics studies, especially for understanding autoignition chemistry as a function of temperature, pressure, and fuel/oxygen ratio.

Another new reactor model is the 0D engine tool, which can provide information on autoignition and engine knock. HCCI engines operate by compressing well-mixed fuel and oxidizer to the point of autoignition, and so you can use the 0D engine tool to gain valuable insight into your HCCI engine.

For other applications, look toward the well-stirred reactor (WSR) model coming in 3.0. The WSR assumes a high rate of mixing so that the output composition is identical to the composition inside the reactor. WSRs are thus useful for studying highly mixed IC engines, highly turbulent portions of non-premixed combustors, and ignition and extinction limits with respect to residence time, such as lean blow-out in gas turbines.

In addition to the new 0D reactor models, CONVERGE 3.0 will also feature new 0D tools. The chemical equilibrium (CEQ) solver calculates the concentrations of species at equilibrium. Unlike many equilibrium solvers, the CEQ solver in CONVERGE is guaranteed to converge for any combination of gas species. The RON/MON estimator finds the research octane number (RON) and motor octane number (MON) of a fuel by finding the critical compression ratio (CCR) at which autoignition occurs and correlating it with the CCRs of primary reference fuel (PRF) compositions using the LLNL Gasoline Mechanism.

One-Dimensional Simulations

For 1D simulations, CONVERGE contains the 1D laminar premixed flame tool, which calculates the flamespeed of a combustion reaction using a freely propagating flame. You can use this tool to ensure your mechanisms yield reasonable flamespeeds for specific conditions and to generate laminar flamespeed tables that are needed for some combustion models, such as G-Equation, ECFM, and TFM. In CONVERGE 3.0, this solver has seen significant improvement in parallelization and scalability, as shown in Fig. 1. You can additionally perform 1D sensitivity analysis to determine how sensitive the flamespeed is to the various reactions and species in your mechanism.

Figure 1. Parallelization (left) and scalability (right) of the CONVERGE flamespeed solver.
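As an open-source analogue of such a calculation (not the CONVERGE tool itself), a freely propagating premixed flame can be solved with Cantera; the mixture and the refinement settings below are illustrative.

import cantera as ct

gas = ct.Solution("gri30.yaml")
gas.TP = 300.0, ct.one_atm                 # unburned mixture state
gas.set_equivalence_ratio(1.0, "CH4", "O2:1, N2:3.76")

flame = ct.FreeFlame(gas, width=0.03)      # 3 cm domain
flame.set_refine_criteria(ratio=3, slope=0.07, curve=0.14)
flame.solve(loglevel=0, auto=True)

# The inlet velocity of the converged solution is the laminar flamespeed.
print(f"laminar flamespeed ~ {flame.velocity[0] * 100:.1f} cm/s")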

CONVERGE 3.0 also includes a new 1D reactor model: the plug flow reactor (PFR). PFRs can be used to predict chemical kinetics behavior in continuous, flowing systems with cylindrical geometry. PFRs have commonly been applied to study both homogeneous and heterogeneous reactions, continuous production, and fast or high-temperature reactions.

Chemistry Tools

Zero- and one-dimensional simulation tools aren’t all CONVERGE has to offer. CONVERGE also features a number of tools for optimizing reaction mechanisms and interpreting your chemical kinetics simulation results.

Detailed chemistry calculations can be computationally expensive, but you can decrease computational time by reducing your chemical mechanism. CONVERGE’s mechanism reduction utility eliminates species and reactions that have the least effect on the simulation results, so you can reduce computational expense while maintaining your desired level of accuracy. In previous versions of CONVERGE, mechanism reduction was only available to target ignition delay. In CONVERGE 3.0, you can also target flamespeed, so you can ensure that your reduced mechanism maintains a similar flamespeed as the parent mechanism.

CONVERGE additionally offers a mechanism tuning utility to optimize reaction mechanisms. This tool prepares input files for running a genetic algorithm optimization using CONVERGE’s CONGO utility, so you can tune your mechanism to meet specified performance targets.

If you’re developing multi-component surrogate mechanisms, or you need to add reaction pathways or NOx chemistry to a fuel mechanism, the mechanism merge tool is the one for you. This tool combines two reaction mechanisms into one and resolves any duplicate species or reactions along the way.

CONVERGE 3.0 will feature new table generation and visualization tools. With the tabulated kinetics of ignition (TKI) and tabulated laminar flamespeed (TLF) tools, you can generate ignition or flamespeed tables that are needed for certain combustion models. To visualize your results, you can run a CONVERGE utility to prepare your tables for visualization in Tecplot for CONVERGE or other visualization software.

Figure 2. 3D visualization of flamespeed as a function of pressure and temperature.

CONVERGE’s suite of chemistry tools is just one of the components that make CONVERGE a robust, multipurpose solver. And just as multipurpose kitchen appliances have more uses during meal prep, CONVERGE’s chemistry capabilities give our software a broad scope of applications, not just for CFD but for all of your chemical kinetics simulation needs. Interested in learning more about CONVERGE or CONVERGE’s chemistry tools? Contact us today!

► Your μ Matters: Understanding Turbulence Model Behavior
    6 Mar, 2019

I recently attended an internal Convergent Science advanced training course on turbulence modeling. One of the audience members asked one of my favorite modeling questions, and I’m happy to share it here. It’s the sort of question I sometimes find myself asking tentatively, worried I might have missed something obvious. The question is this:

Reynolds-Averaged Navier-Stokes (RANS) turbulence models and Large-Eddy Simulation (LES) turbulence models have very different behavior. LES becomes a direct numerical simulation (DNS) in the limit of an infinitesimally fine grid, and it resolves a wide range of turbulent length scales. RANS does not become a DNS, no matter how fine we make the grid. Rather, it shows grid-convergent behavior (i.e., the simulation results stop changing with finer and finer grids), and it removes small-scale turbulent content.

If I look at a RANS model or an LES turbulence model, the transport equations look very similar mathematically. How does the flow ‘know’ which is which?

There’s a clever, physically intuitive answer to this question, which motivates the development of additional hybrid models. But first we have to do a little bit of math.

Both RANS and LES take the approach of decomposing a turbulent flow into a component to be resolved and a component to be modeled. Let’s define the Reynolds decomposition of a flow variable ϕ as

$$\phi = \bar{\phi} + \phi',$$

where the overbar term represents a time/ensemble average and the prime term is the fluctuating term. This decomposition has the following properties:

$$\overline{\overline{\phi}} = \bar{\phi} \quad \mathrm{and} \quad \overline{\phi'} = 0.$$
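
Both properties are easy to verify numerically. A quick sketch with a synthetic signal (the signal itself is made up, of course):

    # Verify the Reynolds-decomposition identities on a synthetic signal:
    # a constant mean plus zero-mean random fluctuations.
    import numpy as np

    rng = np.random.default_rng(0)
    phi = 1.5 + rng.standard_normal(100_000)  # "mean flow" + fluctuations

    phi_bar = phi.mean()                      # time/ensemble average
    phi_prime = phi - phi_bar                 # fluctuating part

    # Averaging the (constant) averaged field again returns the same value,
    # and the fluctuations average to zero, to floating-point precision.
    print(np.isclose(np.full_like(phi, phi_bar).mean(), phi_bar))  # True
    print(abs(phi_prime.mean()) < 1e-12)                           # True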

Figure 1 Schematic of time-averaging a signal.

LES uses a different approach, which is a spatial filter. The filtering decomposition of ϕ is defined as

$$\phi = \left\langle \phi \right\rangle + \phi'',$$

where the term in the angled brackets is the filtered term and the double-prime term is the sub-grid term. In practice, this is often calculated using a box filter, a spatial average of everything inside, say, a single CFD cell. The spatial filter has different properties than the Reynolds decomposition,

$$\left\langle \left\langle \phi \right\rangle \right\rangle \ne \left\langle \phi \right\rangle \quad \mathrm{and} \quad \left\langle \phi'' \right\rangle \ne 0.$$
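
The same kind of numerical check shows the difference: filtering twice keeps smoothing the signal, and the sub-grid part does not filter away to zero. Again, a synthetic 1D signal purely for illustration:

    # A top-hat (box) filter is just a local spatial average. Unlike the
    # Reynolds average, filtering twice != filtering once, and the sub-grid
    # part does not filter to zero.
    import numpy as np

    rng = np.random.default_rng(1)
    phi = np.cumsum(rng.standard_normal(4096))  # rough, multi-scale 1D signal

    def box_filter(f, width=32):
        kernel = np.ones(width) / width
        return np.convolve(f, kernel, mode='same')

    filtered = box_filter(phi)
    subgrid = phi - filtered

    print(np.allclose(box_filter(filtered), filtered))  # False: <<phi>> != <phi>
    print(np.allclose(box_filter(subgrid), 0.0))        # False: <phi''> != 0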

Figure 2 Example of spatial filtering. DNS at left, box filter at right. (https://pubweb.eng.utah.edu/~rstoll/LES/Lectures/Lecture04.pdf)

To derive RANS and LES turbulence models, we apply these decompositions to the Navier-Stokes equations. For simplicity, let’s consider only the incompressible momentum equation. The Reynolds-averaged momentum equation is written as

$$\frac{\partial \overline{u_i}}{\partial t} + \frac{\partial \overline{u_i}\,\overline{u_j}}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \overline{P}}{\partial x_i} + \frac{1}{\rho}\frac{\partial}{\partial x_j}\left[ \mu\left( \frac{\partial \overline{u_i}}{\partial x_j} + \frac{\partial \overline{u_j}}{\partial x_i} \right) - \frac{2}{3}\mu\frac{\partial \overline{u_k}}{\partial x_k}\delta_{ij} \right] - \frac{1}{\rho}\frac{\partial}{\partial x_j}\left( \rho\,\color{Red}{\overline{u_i' u_j'}} \right).$$

This equation looks the same as the basic momentum transport equation, replacing each variable with the barred equivalent, with the exception of the term* in red. That’s where the RANS model will make a contribution.

The LES momentum equation, again for the incompressible case (so we can neglect Favre filtering), is written

$$\frac{\partial \left\langle u_i \right\rangle}{\partial t} + \frac{\partial \left\langle u_i \right\rangle \left\langle u_j \right\rangle}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \left\langle P \right\rangle}{\partial x_i} + \frac{1}{\rho}\frac{\partial \left\langle \sigma_{ij} \right\rangle}{\partial x_j} - \frac{1}{\rho}\frac{\partial}{\partial x_j}\left( \rho\,\color{Red}{\left\langle u_i u_j \right\rangle} - \rho \left\langle u_i \right\rangle \left\langle u_j \right\rangle \right).$$

Once again, we have introduced a single unclosed term*, shown in red. As with RANS, this is where the LES model will exert its influence.

These terms are physically stress terms. In the RANS case, we call this term the Reynolds stress.

$$\tau_{ij,RANS} = -\rho\,\overline{u_i' u_j'}.$$

In the LES case, we define a sub-grid stress as follows:

$$\tau_{ij,LES} = \rho\left( \left\langle u_i u_j \right\rangle - \left\langle u_i \right\rangle \left\langle u_j \right\rangle \right).$$

By convention, the same letter is used to denote these two subtly different terms. It’s common to apply one more assumption to both. Kolmogorov postulated that at sufficiently small scales, turbulence was statistically isotropic, with no preferential direction. He also postulated that turbulent motions were self-similar. The eddy viscosity approach invokes both concepts, treating

$$\tau_{ij,RANS} = f\left( \mu_t, \overline{V} \right)$$

and

$$\tau_{ij,LES} = g\left( \mu_t, \overline{V} \right),$$

where \(\overline V \) represents the vector of transported variables: mass, momentum, energy, and model-specific variables like turbulent kinetic energy. We have also introduced \({\mu _t}\), which we call the turbulent viscosity. Its effect is to dissipate kinetic energy in a similar fashion to molecular viscosity, hence the name.
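
For concreteness, the most common choice of these functions is the Boussinesq approximation, which models the unclosed stress as a linear function of the resolved rate of strain. In the incompressible RANS case (sign and normalization conventions vary between references), it reads

$$\tau_{ij,RANS} = 2\mu_t S_{ij} - \frac{2}{3}\rho k\,\delta_{ij},$$

where \(S_{ij}\) is built from the averaged velocity field.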

If you skipped the math, here’s the takeaway. We have one unclosed term* each in the RANS and LES momentum equations, and in the eddy viscosity approach, we close it with what we call the turbulent viscosity \({\mu _t}\). Yet we know that RANS and LES have very different behavior. How does a CFD package like CONVERGE “know” whether that \({\mu _t}\) is supposed to behave like RANS or like LES? Of course the equations don’t “know”, and the solver doesn’t “know”. The behavior is constructed by the functional form of \({\mu _t}\).

How can the turbulent viscosity’s functional form construct its behavior? Dimensional analysis informs us what this term should look like. A dynamic viscosity has dimensions of density multiplied by length squared per time. If we’re looking to model the turbulent viscosity based on the flow physics, we should introduce dimensions of length and time. The key to the difference between RANS and LES behavior is in the way these dimensions are introduced.

Consider the standard k-ε model. It is a two-equation model, meaning it solves two additional transport equations. In this case, it transports turbulent kinetic energy (k) and the turbulent kinetic energy dissipation rate (ε). This model calculates the turbulent viscosity according to the local values of these two flow variables, along with density and a dimensionless model constant as

$$\mu_t = C_\mu \rho \frac{k^2}{\varepsilon}.$$

Dimensionally, this makes sense. Turbulent kinetic energy is a specific energy with dimensions of length squared per time squared, and its dissipation rate has dimensions of length squared per time cubed. In a sufficiently well-resolved solution, all of these terms should limit to finite values, rather than limiting to zero or infinity. If so, the turbulent viscosity should limit to some finite value, and it does.
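
Working the units explicitly confirms it:

$$[\mu_t] = [\rho]\,\frac{[k]^2}{[\varepsilon]} = \frac{\mathrm{kg}}{\mathrm{m}^3} \cdot \frac{\left( \mathrm{m}^2/\mathrm{s}^2 \right)^2}{\mathrm{m}^2/\mathrm{s}^3} = \frac{\mathrm{kg}}{\mathrm{m} \cdot \mathrm{s}},$$

which is exactly density multiplied by length squared per time, with no dependence on the grid.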

Figure 3 Example of a grid-converged RANS simulation: the ECN Spray A case, with a contour plot for illustration.

LES, in contrast, directly introduces units of length via the spatial filtering process. Consider the Smagorinsky model. This is a zero-equation model that calculates turbulent viscosity in a very different way. For the standard Smagorinsky model,

$$\mu_t = \rho C_s^2 \Delta^2 \sqrt{S_{ij} S_{ij}},$$

where \({C_s}\) is a dimensionless model constant, \({S_{ij}}\) is the filtered rate of strain tensor, and Δ is the grid spacing. Once again, the dimensions work out: density multiplied by length squared multiplied by inverse time. But what do the limits look like? The rate of strain is some physical quantity that will not limit to infinity. In the limit of infinitesimal grid size, the turbulent viscosity must limit to zero! The model becomes completely inactive, and the equations solved are the unfiltered Navier-Stokes equations. We are left with a direct numerical simulation.
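
A few lines of arithmetic make the contrast vivid. Holding the local flow quantities fixed and finite (the numbers below are invented for illustration), the k-ε viscosity is independent of the grid while the Smagorinsky viscosity dies off quadratically with Δ:

    # Illustrative values only: k-epsilon's mu_t ignores the grid entirely,
    # while Smagorinsky's mu_t -> 0 as the grid spacing shrinks.
    rho, C_mu, C_s = 1.2, 0.09, 0.17
    k, eps = 1.0, 10.0   # hypothetical local k and epsilon (finite limits)
    S_mag = 50.0         # hypothetical sqrt(S_ij S_ij), also finite

    mu_t_rans = C_mu * rho * k**2 / eps
    for delta in [1e-2, 1e-3, 1e-4, 1e-5]:
        mu_t_les = rho * C_s**2 * delta**2 * S_mag
        print(f"delta={delta:.0e}  mu_t_LES={mu_t_les:.3e}  mu_t_RANS={mu_t_rans:.3e}")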

When I was a first-year engineering student, discussion of dimensional analysis and limiting behaviors seemed pro forma and almost archaic. Real engineers in the real world just use computers to solve everything, don’t they? Yes and no. Even those of us in the computational analysis world can derive real understanding, and real predictive power, from considering the functional form of the terms in the equations we’re solving. It can even help us design models with behavior we can prescribe a priori.

Detached Eddy Simulation (DES) is a hybrid model, taking advantage of the similarity of functional forms of the turbulent viscosities in RANS and LES. DES adopts RANS-like behavior near the wall, where we know an LES can be very computationally expensive. DES adopts LES behavior far from the wall, where LES is more computationally tractable and unsteady turbulent motions are more often important.

The math behind this switching behavior is beyond the scope of a blog post. In effect, DES solves the Navier-Stokes equations with some effective \({\mu _{t,DES}}\) such that \({\mu _{t,DES}} \approx {\mu _{t,RANS}}\) near the wall and \({\mu _{t,DES}} \approx {\mu _{t,LES}}\) far from the wall, with \({\mu _{t,RANS}}\) and \({\mu _{t,LES}}\) selected and tuned so that they are compatible in the transition region. Our understanding of the derivation and characteristics of the RANS and LES turbulence models allows us to hybridize them into something new.
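
In schematic form, the idea looks something like the toy blend below. This illustrates the concept only; it is not any particular published DES formulation, and the blending constant is a placeholder:

    # Toy DES-like blending: RANS-like viscosity near the wall, LES-like far
    # away, with a smooth switch based on wall distance vs. grid length scale.
    import numpy as np

    def mu_t_des(d_wall, delta, mu_t_rans, mu_t_les, c_des=0.65):
        w = np.clip(d_wall / (c_des * delta), 0.0, 1.0)  # 0 at wall, 1 far away
        return (1.0 - w) * mu_t_rans + w * mu_t_les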

Figure 4 DES simulation over a backward-facing step with CONVERGE.

*This term is a symmetric second-order tensor, so it has six scalar components. In some approaches (e.g., Reynolds Stress models), we might transport these terms separately, but the eddy viscosity approach treats this unknown tensor as a scalar times a known tensor.

► What’s Knockin’ in Europe?
  29 Jan, 2019

The Convergent Science GmbH team is based in Linz, Austria and provides support to our European clients and collaborators alike as they tackle the hard problems. One of the most interesting and challenging problems in the design of high efficiency modern spark-ignited (SI) internal combustion engines is the prediction of knock and the development of knock-mitigation strategies. At the 2018 European CONVERGE User Conference (EUC), several speakers presented recent work on engine knock.

This winter, when I cold-started my car, I heard a loud knocking noise. Usually, though, knocking is more prevalent in engines that operate near the edge of the stability range. The first step of knocking is spontaneous secondary ignition (autoignition) of the end-gases ahead of the flame front. When the pressure waves from this autoignition hit the walls of the combustion chamber, they often make a knocking noise and damage the engine. Knock is challenging to simulate because you must correctly calculate critical local conditions and simultaneously track the pressure waves that are traveling rapidly across the combustion chamber.

To enable you to easily model these conditions, CONVERGE offers autonomous meshing, full-cycle simulation, and flexible boundary conditions. Adaptive Mesh Refinement allows you to add cells and spend computational time on areas where the knock-relevant parameters (such as local pressure difference, heat release rate, and species mass fraction of radicals that indicate autoignition) are rapidly changing. CONVERGE can predict autoignition with surrogate fuels, changing physical engine parameters, and a spectrum of operating conditions.

EUC keynote speaker Vincenzo Bevilacqua from Porsche Engineering presented an intriguing new approach (re-defining the knock index) to evaluate the factors that may contribute to knock and to identify a clear knock limit. In another study, researchers from Politecnico di Torino investigated the feasibility of water injection as a knock-mitigation strategy. In yet another study, Max Mally and his colleagues from VKA RWTH Aachen University used RANS to successfully reproduce combustion and knock with a spark-timing sweep approach at various exhaust gas recirculation (EGR) percentages. You can see in the figure below that they were able to capture the moving pressure waves.


The rapid propagation of the pressure waves across the combustion chamber functions much like a detonation. Source: Mally, M., Gunterh, M., and Pischinger, S., “Numerical Study of Knock Inhibition with Cooled Exhaust Gas Recirculation,” CONVERGE User Conference-Europe, Bologna, Italy, March 19-23, 2018.

Advancing the spark, using lean burn, turbo-charging, or running at a high compression ratio can increase the likelihood of knock. However, each cycle in an SI engine is unique, and thus autoignition is not a consistent phenomenon. When simulating an SI engine, it is critical to simulate multiple cycles to identify the limits of the operating conditions at which knock is likely to occur. (Fortunately, CONVERGE can easily run multi-cycle simulations!)

Knock is one of the limiting factors in engine design because many of the techniques that improve thermal efficiency and enable downsizing of the engine also increase the likelihood of knock. Here at Convergent Science, we encourage you to solve the hard problems. Go on, knock it out of the park.


► 2018: CONVERGE-ING ON A DECADE
  17 Dec, 2018

Convergent Science thrived in 2018, with many successes, rapid growth, and consistent innovation. We celebrated the tenth anniversary of the commercial release of CONVERGE. The Convergent Science employee count surpassed 100, and our India office tripled in size. We formed new partnerships and collaborations and continued to bring CONVERGE to new application areas. Simultaneously, we endeavored to increase the prominence of CONVERGE in internal combustion applications and grew our market share.

Our dedicated team at Convergent Science ensures that CONVERGE stays on the cutting-edge of CFD software—implementing new models, enhancing CONVERGE features, increasing simulation speed and accuracy—while also offering exceptional support and customer service to our clients.

New Application Successes

Increasingly, clients are using CONVERGE for new applications and great strides are being made in these fields. Technical presentations and papers on gerotor pumps, blood pumps, reciprocating compressors, scroll compressors, and screw machines this year reflected CONVERGE’s increased use in the pumps and compressors markets. Research projects using CONVERGE to model gas turbine combustion, lean blow-out, ignition, and relight are going strong. In the field of aftertreatment, new acceleration techniques have been implemented in CONVERGE to enable users to accurately predict urea deposits in Urea/SCR aftertreatment systems while keeping pace with rapid prototyping schedules. In addition, we were thrilled to see the first paper using CONVERGE for offshore wind turbine modeling published this year, as part of a collaborative effort with the University of Massachusetts Amherst.

CONVERGE Featured at SAE, DOE Merit Review, and ASME ICEF

CONVERGE’s broad use in the automotive industry was showcased at the Society of Automotive Engineers World Congress Experience (SAE WCX18), with more than 30 papers presenting CONVERGE results. Convergent Science cultivates collaboration with industry, academic, and research institutions, and the benefit of these collaborations was prominently displayed at SAE WCX18. Organizations such as General Motors, Caterpillar, Ford, Jaguar Land Rover, Isuzu Motors, John Deere, Renault, Aramco Research Center, Argonne National Laboratory, King Abdullah University of Science and Technology (KAUST), Saudi Aramco, and the University of Oxford all authored papers describing CONVERGE results. These papers spanned a wide array of topics, including fuel injection, chemical mechanisms, HCCI, GCI, water injection, LES, spray/wall interaction, abnormal combustion, machine learning, soot modeling, and aftertreatment systems.

At the 2018 DOE Merit Review, CONVERGE was featured in 17 of the advanced vehicle technologies projects that were reviewed by the U.S. Department of Energy. The broad range of topics of the projects is a testament to the versatility and broad applicability of CONVERGE. The research for these projects was conducted at Argonne National Laboratory, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratories, the Department of Energy, National Renewable Energy Laboratory, and the University of Michigan.

CONVERGE was once again well represented at the ASME Internal Combustion Engine Fall Technical Conference (ICEF). At ASME ICEF 2018, 18 papers included CONVERGE results, with topics ranging from ignition systems and injection strategies to emissions modeling and predicting cycle-to-cycle variation. I was honored to have the opportunity to further my cause of defending the IC engine in a keynote presentation.

New Partnerships and Collaborations

At Convergent Science, we take pride in fostering partnerships and collaborations with companies and institutions to spark innovation and bring our best software to the CFD community. This year, we renewed our partnership with Roush Yates Engines, who had a fantastic 2018 season, achieving the company’s 350th win and winning the Monster Energy NASCAR Cup Series Championship. We formed a new partnership with Tecplot and integrated their industry-leading visualization software into CONVERGE Studio. In addition, we entered into new partnerships with the National Center for Supercomputing Applications and two Dassault Systèmes subsidiaries, Spatial Corp. and Abaqus. These partnerships improve the usability and applicability of CONVERGE and help CONVERGE reach new markets.

CONVERGE in Italy

We had a great showing of CONVERGE users at our second European CONVERGE User Conference held this year in Bologna, Italy. Attendees shared their latest research using CONVERGE for a host of different applications, from modeling liquid film boiling and mitigating engine knock to developing turbulent combustion models and simulating premixed burners with LES. For one of our networking events, we rented out the Ferrari Museum in Maranello, where we were treated to a tour of the museum and ate dinner surrounded by cars we wished we owned. We also enjoyed traditional Bolognese cuisine at the Osteria de’ Poeti and a reception at the Garganelli Restaurant. 

Turning 10 at the U.S. CONVERGE User Conference

It seemed only fitting to celebrate ten years of CONVERGE back where it all started in Madison, Wisconsin. During the fifth annual North American User Conference, we commemorated CONVERGE’s tenth birthday with a festive evening at the historic Orpheum Theater in downtown Madison. During the celebration, we heard from Jamie McNaughton of Roush Yates Engines, who discussed the game-changing impact of CFD on creating winning racing engines. Physics Girl Dianna Cowern entertained us with her live physics demonstrations and her unquenchable enthusiasm for all things science. I concluded the evening with a brief presentation (which you can check out below) reflecting on the past decade of CONVERGE and looking forward to the future. We were incredibly grateful to be able to celebrate the successes of CONVERGE with our users who have made these past ten years possible.

In addition to our 10-year CONVERGE celebration, we hosted our third trivia match at the Convergent Science World Headquarters. At the beautiful Madison Club, we heard a fascinating round of presentations on topics including gas turbine modeling, offshore fluid-structural dynamics, machine learning, and a wide range of IC engine applications.

Convergent Science India

The Convergent Science India office in Pune celebrated its one-year anniversary in August. The office has transformed in the span of the last year and a half. The employee count more than tripled—from two employees at the end of 2017 to seven at the end of 2018. Five servers are now up and running and the office is fully staffed. We’re thrilled with the service and support our Pune office has been able to offer our clients all around India.

CONVERGE 3.0 Coming Soon

CONVERGE 3.0 is slated to be released soon, and we truly believe this new version of CONVERGE will once again change the CFD game. In 3.0, you can look forward to our new boundary layer mesh and inlaid mesh features, which will allow greater meshing flexibility for accurate results at less computational cost. Our new partnership with Spatial Corp. will enable CONVERGE users to directly import CAD files into CONVERGE Studio, greatly streamlining our Studio users’ workflow. We’ve also focused a lot of our attention this year towards enhancing our chemistry tools to be more efficient, robust, and applicable to an even greater range of flow and combustion problems. We’ve added new 0D and 1D reactors, including a perfectly stirred reactor, 0D HCCI engine, RON and MON estimators, plug flow reactors, and improved our 1D laminar flame solver. Additionally, we enhanced our mechanism reduction capability by targeting both ignition delay and laminar flamespeed. But perhaps the most anticipated aspect of CONVERGE 3.0 is the scaling. 3.0 demonstrates dramatically superior parallelization compared to 2.4 and shows significant speedup even on thousands of cores.

Looking Ahead

2019 promises to be an exciting year. With the upcoming release of CONVERGE 3.0, we’re looking forward to growing CONVERGE’s presence in new application areas, continuing our work on pumps and compressors, and expanding our presence in aftertreatment and gas turbine markets. We will continue working hard to expand the usage of CONVERGE in the European, Asian, and Indian automotive markets. Above all, we look forward to more innovation, more collaboration, and continuing to provide unparalleled support to our clients. Want to join us? Check out our website to find out how CONVERGE can help you solve the hard problems.


Kelly looks back on the past decade of CONVERGE during the 10-Year Celebration at the 2018 CONVERGE User Conference-North America. The video Kelly references in his presentation is a video tribute to CONVERGE that was played earlier in the evening, Turning 10: A CONVERGE History.

Numerical Simulations using FLOW-3D top

► HPC Release of FLOW-3D v12.0 Webinar
  15 Nov, 2019

Are you looking to accelerate your simulation turnaround times? Do you want to run high-resolution CFD models with FLOW-3D, expand your design space and perform design of experiment (DOE) studies? Join CFD engineer Allyce Jackman for a technical webinar providing an overview of high performance computing with FLOW-3D CLOUD. She will introduce the FLOW-3D CLOUD platform and give a breakdown of the easy-to-use cloud infrastructure through a live demo. You will learn how to host your entire simulation workflow on the cloud and deploy hardware and software resources on-demand, including performance benchmarks for understanding scaling and speed-up on the cloud.

December 11 at 1:00 pm EST

Allyce Jackman, CFD Engineer

About the Presenter

Allyce Jackman received her Bachelor’s in Mechanical Engineering from the University of New Mexico in May 2018. Allyce’s background is in HVAC design and renewable energy systems. Her work at Flow Science focuses on multiphysics with an emphasis on coating, microfluidics and additive manufacturing.

► HPC Release of FLOW-3D v12.0
  13 Nov, 2019

The HPC-enabled FLOW-3D v12.0 takes full advantage of the most advanced hardware available with pricing options available for entry-level through enterprise scale clients.

Santa Fe, NM, November 13, 2019 — Flow Science, Inc. has announced that it has released the HPC-enabled FLOW-3D v12.0. The HPC version of FLOW-3D v12.0 can be run on in-house clusters or on the FLOW-3D CLOUD software-as-a-service platform, which provides high performance computing as well as the lowest-cost entry point to FLOW-3D.

Existing HPC customers and IT admins will benefit from one-time cluster hardware configuration setup, improved support for multiple job schedulers, and a simplified interface for setting up simulations on any compatible HPC platform.

“Our HPC-enabled products, in tandem with our cloud platform, which boasts the latest and greatest hardware, allow us to provide our customers the tools they need, when they need them, in order to accelerate their R&D and stay ahead of their competitors. Whether you are running FLOW-3D on a single core or on thousands of CPU cores, FLOW-3D is engineered to take full advantage of the ongoing advancements in hardware,” said Flow Science CEO Amir Isfahani.

FLOW-3D v12.0 marks an important milestone in the design and functionality of the graphical user interface, which simplifies model setup and improves user workflows. A state-of-the-art Immersed Boundary Method brings greater accuracy to FLOW-3D v12.0’s solutions. Featured developments include the Sludge Settling Model, the 2-Fluid 2-Temperature Model, and the Steady State Accelerator, which allows users to model their free surface flows even faster. The HPC-enabled version of FLOW-3D v12.0 allows customers to access these advanced simulation options at an accelerated pace. Performance benchmarks of the HPC-enabled version of FLOW-3D v12.0 are available.

“From running design variations simultaneously to solving fine-resolution, large, and highly complex design scenarios that take weeks to run on a high-end workstation, our HPC-enabled products get you the answers you need as quickly as possible on your in-house cluster or on our cloud platform,” added Amir Isfahani.

A live webinar will provide an overview of high performance computing and FLOW-3D CLOUD, with an emphasis on deploying hardware and software resources on demand, including performance benchmarks for understanding scaling and speed-up on the cloud. The webinar will take place on December 11, 2019 at 1:00 pm EST. Online registration is available.

Flow Science has made this new release available to customers who are currently under maintenance contracts.

About Flow Science

Flow Science, Inc. is a privately-held software company specializing in transient, free-surface CFD flow modeling software for industrial and scientific applications worldwide. Flow Science has distributors for FLOW-3D sales and support in nations throughout the Americas, Europe, and Asia. Flow Science is located in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.
683 Harkle Rd.
Santa Fe, NM 87505
Attn: Amanda Ruggles
info@flow3d.com
+1 505-982-0088

► 2020 FLOW-3D Workshop Schedule
  13 Nov, 2019

2020 FLOW-3D Workshop Schedule

January – March

  • February 7 – New Orleans, LA (host tbd)
  • February 20 – Orlando, FL (host: Orange County Utilities)
  • February 25 – Phoenix, AZ (host tbd)
  • March 10 – Atlanta, GA (host: Golder)
  • March 18 – Cincinnati, OH (host tbd)
  • March 31 – Chicago, IL (host: MWRD)

April – June

  • April 1 – Milwaukee, WI (host: MMSD)
  • April 7 – Houston, TX (host tbd)
  • April 8 – Austin, TX (host tbd)
  • April 9 – New York, NY (host tbd)
  • April 14 – Denver, CO (host tbd)
  • May 5 – Philadelphia, PA (host tbd)
  • May 7 – Norfolk, VA (host tbd)
  • May 13 – Columbus, OH (host tbd)
  • May 14 – Cleveland, OH (host tbd)
  • May 15 – Indianapolis, IN (host tbd)
  • June 17 – Pittsburgh, PA (host tbd)

July – September

  • July 7 – Seattle, WA (host tbd)
  • July 8 – Portland, OR (host tbd)
  • July 9 – Sacramento, CA (host tbd)
  • August 4 – Boston, MA (host tbd)
  • August 12 – Los Angeles, CA (host tbd)
  • August 20 – Quebec City, QC (host tbd)
  • August 21 – Montreal, QC (host tbd)
  • August 26 – Niagara Falls, NY (host tbd)
  • August 27 – Calgary, AB (host tbd)
  • August 28 – Vancouver, BC (host tbd)
  • September 9 – St. Louis, MO (host tbd)

October – December

  • October 13 – Las Vegas, NV (host tbd)
  • October 14 – Salt Lake City, UT (host tbd)
  • October 15 – San Francisco, CA (host tbd)

Workshop Registration

Workshop Details

  • Registration is limited to 15 attendees
  • Cost: $499 (private sector); $299 (government); $99 (academic)
  • 9:00 am – 4:00 pm
  • 30-day FLOW-3D license
  • Bring your own laptop and mouse to follow along, or just watch
  • Lunch provided by Flow Science

Workshop Cancellation and Licensing Policy

Flow Science reserves the right to cancel a workshop at any time, due to reasons such as inclement weather, insufficient registrations, or instructor unavailability. In such cases, a full refund will be given, or attendees may opt to transfer their registration to another workshop. Flow Science is not responsible for any hotel, transportation, or any other costs incurred. Attendees who are unable to attend a workshop may cancel up to one week in advance to receive a full refund. Attendees must cancel their registration by 5:00 pm MST one week prior to the date of the workshop; after that date, no refunds will be given. Alternatively, an attendee can request to have their registration transferred to another workshop. Workshop licenses are available to prospective or lapsed users only. Existing users are welcome to register for a workshop, and should contact sales@flow3d.com to discuss their licensing options.

About our Workshops

FLOW-3D workshop
A FLOW-3D workshop for water and environmental applications in Lyon, France. A special thank you to our host, Électricité de France.
Curious about FLOW-3D? Want to learn about how we go about modeling the most challenging free surface hydraulic applications? Our workshops are designed to deliver focused, hands-on, yet wide-ranging instruction that will leave you with a thorough understanding of how FLOW-3D is used in key water infrastructure industries. Through hands-on examples, you will explore the hydraulics of typical dam and weir cases, municipal conveyance and wastewater problems, and river and environmental applications. Later, you will be introduced to more sophisticated physics models, including air entrainment, sediment and scour, thermal plumes and density flows, and particle dynamics. By the end of the day, you will have set up six models, learned the user interface and the steps that are common to three classes of hydraulic problems, and used the advanced post-processing tool FlowSight to analyze the results of your simulations. This one-day class is comprehensive yet accessible for engineers new to CFD methods.

Workshop attendees will receive a 30-day license of FLOW-3D.

Who should attend?

  • Practicing engineers working in the water resources, environmental, energy and civil engineering industries
  • Regulators and decision makers looking to better understand what state-of-the-art tools are available to the modeling community
  • University students interested in using CFD in their research
  • All modelers working in the field of environmental hydraulics

What will you learn?

  • How to import geometry and set up free surface hydraulic models, including meshing and initial and boundary conditions.
  • How to add complexity by including sediment transport and scour, particles, scalars and turbulence.
  • How to use sophisticated visualization tools such as FlowSight to effectively analyze and convey simulation results.
  • Advanced topics, including air entrainment and bulking phenomena, shallow water and hybrid 3D/shallow water modeling, and chemistry.
FLOW-3D Workshop
A very successful FLOW-3D workshop for water and environmental applications in Bangkok, organized by our Thai partner DTA and hosted generously by King Mongkut’s University of Technology Thonburi (KMUTT). Special thanks to Prof. Chaiyuth Chinnarasri.

You’ve completed the one-day workshop, now what?

We recognize that not everything will be absorbed in one day, and you may want to further explore the capabilities of FLOW-3D by setting up your own problem or comparing CFD results with prior measurements from the field or the lab. After the workshop, your license will be extended for 30 days. During this time you will have the support of one of our CFD engineers, who will help you work through your specific cases. You will also have access to our web-based training videos covering introductory through advanced modeling topics.

Host a workshop!

If you would like to host a workshop, we are happy to travel to your location.

How does it work?

You provide a conference room for up to 15 attendees, a projector and Wi-Fi. Flow Science provides the training, workshop materials, lunch and licenses. As host, you also receive three workshop seats free of charge, which you can offer to your own engineers, to consultants, or to partnering companies.

Request to host a workshop

About the Instructors

Brian Fox, CFD Engineer

Brian Fox is a senior applications engineer with Flow Science who specializes in water and environmental modeling. Brian received an MS in Civil Engineering from Colorado State University with a focus on river hydraulics and sedimentation. He has over 10 years of combined experience working within the private, public, and academic sectors on water and environmental engineering applications. His experience includes using 1D, 2D, and 3D hydraulic models for projects including fish passage, river restoration, bridge scour analysis, sediment transport modeling, and analysis of hydraulic structures.

John Wendelbo, Director of Sales

John Wendelbo, Director of Sales, focuses on modeling challenging water and environmental problems. John graduated from Imperial College with an MEng in Aeronautics, and from Southampton University with an MSc in Maritime Engineering Science. John joined Flow Science in 2013.

► FLOW-3D CAST’s Low Pressure Die Casting Workspace
  10 Nov, 2019

In this technical webinar, metal casting engineer Ajit D’Brass will review FLOW-3D CAST’s low pressure die casting workspace. After reviewing the software’s fundamental modeling capabilities, he will take an in-depth look at model setup in FLOW-3D CAST‘s graphical user interface. Ajit will showcase the simple step-by-step progression used for a successful simulation, emphasizing the methods available to modelers for correctly characterizing pressures throughout the system.

December 5 at 1:00 pm EST

About the Presenter

Ajit D'Brass, CFD Engineer, Metal Casting Applications

Ajit D’Brass studied manufacturing engineering with a concentration on metal casting at Texas State University. His current work focuses on how to expedite the design phase of a casting through functional, efficient, user-friendly process simulations. Ajit helps customers use FLOW-3D CAST to create streamlined, sustainable workflows.

► FLOW-3D World Users Conference 2020
    9 Nov, 2019

We invite our customers from around the world to join us at the FLOW-3D World Users Conference 2020 to celebrate 40 years of FLOW-3D.

The conference will be held on June 8-10, 2020 at the Maritim Hotel in Munich, Germany. Join engineers, researchers and scientists from some of the world’s most renowned companies and institutions to hone your simulation skills, explore new modeling approaches and learn about the latest software developments. This year’s conference features metal casting and water & environmental application tracks, a variety of advanced training sessions, in-depth technical presentations by our customers, and the latest product developments presented by Flow Science’s senior technical staff. The conference will be co-hosted by Flow Science Deutschland.

We are extremely pleased to announce that Hubert Lang of BMW will be this year’s keynote speaker.

Keynote Speaker Announced! 

Hubert Lang, BMW, Keynote Speaker at the FLOW-3D World Users Conference 2020

15 years of FLOW-3D at BMW

Hubert Lang studied Mechanical Engineering with a focus on automotive engineering at Landshut University of Applied Sciences. In 1998, he started in BMW’s Light Metal Foundry in Landshut, working in their tool design department, where he oversaw the development of casting tools for six-cylinder engines. In 2005, Hubert moved to the foundry’s simulation department, where he was introduced to FLOW-3D’s metal casting capabilities. Since then, he has led considerable expansion in the use of FLOW-3D, both in the volume of simulations as well as the number of application areas.

Today, BMW uses FLOW-3D for sand casting, permanent mold gravity casting, low pressure die casting, high pressure die casting, and lost foam casting. FLOW-3D has also been applied to several special projects at BMW, such as supporting the development of an inorganic binder system for sand cores through the development of a core drying model; calculation of the heat input during coating of cylinder liners; the development of the casting geometry for the injector casting procedure; and the layout and dimensioning of cooling systems for casting tools. 

BMW Museum and BMW World Tour

We are pleased to offer a tour of the BMW Museum and BMW World as part of the conference offerings. The tour will take place at 17:30 after the technical proceedings on Tuesday, June 9. You can sign up for the tour when you register for the conference.

Advanced Training Topics

Advanced training topics include a FLOW-3D CAST Version Up seminar, a FLOW-3D AM Version Up seminar, and sessions focused on Troubleshooting and Municipal topics. You can sign up for these trainings when you register online.

Call for Abstracts

Share your experiences, present your success stories and obtain valuable feedback from the FLOW-3D user community and our senior technical staff. We welcome abstracts on all topics including those focused on the following applications:

  • Metal Casting
  • Additive Manufacturing
  • Civil & Municipal Hydraulics
  • Consumer Products
  • Micro/Nano/Bio Fluidics
  • Energy
  • Aerospace
  • Automotive
  • Coating
  • Coastal Engineering
  • Maritime
  • General Applications

Abstracts should include a title, author(s) and a 200 word description. Please email your abstract to info@flow3d.com by Friday, April 19.

Registration and training fees will be waived for presenters. Go here for presenter information.

Conference Dinner

This year’s conference dinner will be held in the ever-popular Augustiner-Keller. All conference attendees and their guests are invited to join us on Tuesday, June 9 for a traditional German feast in a beautiful beer garden. The conference dinner will take place following the BMW Tour.

Travel

► Software Engineer
  30 Oct, 2019

Our software, FLOW-3D, is used by engineers, designers, and scientists at top manufacturers and institutions throughout the world to simulate and optimize product designs. Many of the products used in our daily lives, from many of the components in an automobile to the paper towels we use to dry our hands, have actually been designed or improved through the use of FLOW-3D.

Open position

Flow Science has a job opportunity for a motivated, creative and collaborative Software Engineer. As a Software Engineer, you will use your object-oriented programming skills to create and maintain the user interface between our simulation software and the end user. You’ll have the opportunity to combine your creative skills with your analytical skills to contribute to a powerful tool used by customers around the world.

Required education, experience, and skills

  • A Bachelor’s degree in computer science, computer engineering, or related degree
  • A minimum of three years of programming experience in a structured software environment or academic setting
  • Object-oriented programming skills using C++
  • Graphical User Interface (GUI) development experience
  • Comfortable in both Windows and Linux environments

Preferred experience and skills

  • Knowledge in modern design/architectural patterns
  • Experience with Qt framework
  • Comfortable with version control systems such as Git or SVN
  • Experience with UML
  • C++ 11/14/17 knowledge
  • OpenGL/graphics programming
  • Familiarity with the VTK API

Benefits

Flow Science offers an exceptional compensation and benefits package superior to those offered by most large companies. Flow Science employees receive a competitive base salary, employer paid medical, dental, vision coverage, life and disability insurances, relocation assistance, 401(k) retirement plan with extremely generous employer matching, and an outstanding incentive compensation plan that offers year-end bonus opportunity.

Contact

A resume and cover letter should be e-mailed to careers@flow3d.com.

Learn more about careers at Flow Science >

Mentor Blog top

► On-demand Web Seminar: Accelerating Thermal Design with BCI-ROM
  18 Nov, 2019

This presentation discusses the current approaches available for analyzing the temperature response of an electronic system and introduces a new boundary condition independent reduced order modeling method (BCI-ROM). Examples shown will include a multi-chip module (MCM) and an IGBT subject to the UDDS: FTP-72 Drive Cycle for an Electric Vehicle.

► Event: Leveraging CAD-embedded CFD analysis & design exploration
  28 Oct, 2019

A hands-on workshop using Simcenter FLOEFD, fully CAD-embedded CFD for NX, Solid Edge, Creo, and CATIA, to better enable frontloading of CFD analysis in development, aid earlier design decision making, and combine CFD with improved design exploration and optimization studies using HEEDS.

► On-demand Web Seminar: Simulation-driven Design: Front-load simulation for better designs faster
  18 Oct, 2019

Simulation is now being used to synthesize and define the physical design, rather than to simply evaluate and validate final designs. Innovative manufacturers take advantage of advances in simulation technology and development processes to beat their competitors to market with higher quality, longer lasting products. Learn how this new age of simulation-driven design is made possible.

► On-demand Web Seminar: Building Better Designs Using 1D/3D Simulation
    3 Oct, 2019

How can 1D/3D simulation help you to design better systems?  Watch this presentation to find out.

► Technology Overview: Peers Perspective - Thermal management challenges for wearable electronic devices
    3 Oct, 2019

John Wilson, technical marketing engineer at Mentor, a Siemens Business, discusses junction temperatures and their importance in consumer electronic product design. To develop reliable wearable products, users are turning to simulation tools like Simcenter™ Flotherm™ and Simcenter™ FLOEFD™ to ensure product performance is understood and managed appropriately before build.

► Technology Overview: Peers Perspective - Importance of junction temperature in electronics components thermal reliability
    3 Oct, 2019

John Wilson, technical marketing engineer at Mentor, a Siemens Business, explains how junction temperatures have a direct correlation to product reliability and performance. Tools such as Simcenter™ T3STER™ and Simcenter™ Flotherm™ enable users to tackle the challenge of designing for reliability without the need for prototyping.

Tecplot Blog top

► Learning Tecplot 360 Just Got Easier! 
  23 Apr, 2019

Tecplot 360 Basics Certification

Earning your Tecplot 360 Certification helps build engineering skills and shows your knowledge of this popular visualization and analysis tool. Listing your certification on your resume, CV and LinkedIn profile can help future employers match your skills to their job requirements.

To earn your certification, use a Tecplot 360 free trial, or upgrade your licensed software to the latest version. We recommend using the newest version of Tecplot 360.

The training materials include:

  • A video that walks you step-by-step through each skill you need to learn. You can watch the video on YouTube or download the MP4 to watch offline.
  • A training manual in PDF format that contains the instructions from the video, if you prefer to follow a written guide.
  • The datasets used in the video and manual, so you can follow along. You are welcome to use your own datasets if you prefer.

Click the button below for all instructions and materials you need to get certified.

Tecplot 360 Basics Certification

The training covers Tecplot 360 Basics:

  • Compatible Tecplot 360 data formats
  • Loading data
  • Working with zones
  • Slices and streamtraces
  • Pages and frames
  • Plot types
  • Derived objects and calculations
  • Image formats, animations and exporting. 

Once you complete the test and upload your plot, we will review your work and send you your Certificate! Note that certification is given at the sole discretion of Tecplot, Inc. and is based on evaluation of your submitted answers and plot.

 

More Ways to Learn Tecplot 360

Are you already Tecplot 360 Certified and looking to learn more advanced topics? Here are more ways to learn Tecplot 360.

Tecplot 360 Getting Started Guide

A complete set of documentation is included with your Tecplot 360 installation, and links can also be found on our website Documentation. Four tutorials that get progressively more advanced are in our Getting Started Manual. The tutorials include:

  1. External Flow – Using the Onera M6 wing model, this tutorial covers loading the data, producing a basic plot, slicing, streamtraces, isosurfaces, probing, and comparing simulated and experimental data (including normalizing the data).
  2. Understanding Volume Surfaces – The DuctFlow dataset is used in this tutorial as an example of how Tecplot 360 renders volume surfaces using Surfaces to Plot.
  3. Transient Data – A wind turbine dataset with 127 time steps helps you understand how transient (time-based) data is structured and how to produce animated contour plots, extract data over time for analysis, and calculate and visualize additional variables using the Tecplot 360 analysis tools.
  4. Finite Element Analysis – A transient FEA dataset of a connecting rod created with LSDYNA is used in this tutorial.

Datasets used in these examples are included with your installation of Tecplot 360, or they can be found in our Getting Started Bundle.

Getting Started Manual

Video Tutorials

Over the past few years, we have built up an extensive Video Library – 82 videos to date! Topics range from loading data to working with transient data, and everything in between. The videos are sorted by most current first, but there is a long list of categories to help you find the topic most interesting to you. Many of the videos have been transcribed if you prefer reading the tutorial as you work.

Tecplot 360 Video Library

Written Tutorials

Speaking of learners who prefer reading to watching videos, the Tecplot Blog contains a number of tutorials. The blog tutorials are in the “Tecplot Tip” category.

Browse our Tecplot Tips

Free Tecplot 360 Training

We offer one hour of free online training when you purchase a Tecplot 360 license, or when you download a Tecplot 360 Free Trial. The training can be tailored to meet your specific workflows, or we can walk you through the standard training modules – answering your specific questions along the way.

Download a Free Trial

Group Training – Onsite or Online

Last but not least, companies have found it very helpful to get all their engineers and scientists up-to-speed quickly on Tecplot 360. We can address your specific business challenges and cover designated topics. You provide the facilities and computers; we provide the expert instructor and in-depth materials. To find out more, please email us at support@tecplot.com or use the Contact Form.

For Tecplotters in Europe and Western Asia, our Tecplot Europe office holds Tecplot User Days for beginning through advanced users. These training sessions are free of charge! The schedule and registration forms are up on our Tecplot Europe Training page.

Tecplot Europe Training

The post Learning Tecplot 360 Just Got Easier!  appeared first on Tecplot.

► Python Multiprocessing Accelerates Your CFD Analysis Time
  17 Apr, 2019

PyTecplot and the Python multiprocessing module was the subject of our recent Webinar. PyTecplot already has a lot of great capability, and bringing in the Python multiprocessing toolkit allows you to accelerate your work and get it done even faster. This blog answers some questions asked during the Webinar.

1. What is PyTecplot?

PyTecplot is an API for controlling Tecplot 360. It is a separate installation from Tecplot 360: because PyTecplot is a Python module, you must install it as part of your Python installation even when Tecplot 360 is already installed. A 64-bit installation of Python 2.7, or Python 3.4 or newer, is required. All of this information is in our (very thorough) documentation.

PyTecplot Documentation

2. What is Python multiprocessing?

Multiprocessing is a process-based Python “threading” interface. “Threading” is in quotes because it is not actually using multi-threading. It’s actually spawning separate processes. We encourage you to read more in the Python documentation, Python multiprocessing library.

In the Webinar we show you a method to use the Python Multiprocessing Library in conjunction with PyTecplot to accelerate the generation of movies and images. This technique can go beyond just the generation of images. You can extract information from your simulation data as well. The recent Webinar shows you how to use the multiprocessing toolkit in conjunction with PyTecplot. We use a transient simulation of flow around a cylinder as the example, but have timings from several different cases.
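
The pattern itself is compact enough to sketch here. This is a minimal illustration of the approach, not the exact webinar script; the file names and worker count are placeholders:

    # Render one PNG per time step in parallel, with one Tecplot 360 engine
    # per worker process.
    import atexit
    import multiprocessing as mp
    import tecplot as tp

    def init_worker():
        # Make each worker's engine shut down cleanly on exit (see question 5
        # below on session.stop and core dumps on Linux).
        atexit.register(tp.session.stop)

    def render(args):
        datafile, pngfile = args
        tp.new_layout()                  # reset this worker's engine state
        tp.data.load_tecplot(datafile)   # hypothetical per-time-step PLT file
        tp.export.save_png(pngfile, width=1024)
        return pngfile

    if __name__ == '__main__':
        mp.set_start_method('spawn')     # clean interpreter/engine per worker
        jobs = [(f'cylinder_{i:04d}.plt', f'frame_{i:04d}.png') for i in range(120)]
        with mp.Pool(processes=4, initializer=init_worker) as pool:
            pool.map(render, jobs)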

The recording and the scripts from the Webinar “Analyze Your Time-Dependent Data 6x Faster” can be found on our website.

Watch the Webinar

3. Is PyTecplot included in the package of Tecplot for CONVERGE?

Last year we partnered with Convergent Science, maker of a CFD code that is used quite heavily in internal combustion but also serves many other application areas. Under our partnership, if you buy CONVERGE, you get free access to Tecplot for CONVERGE. Tecplot for CONVERGE allows you to use PyTecplot, but only through the Tecplot 360 Graphical User Interface.

To have the capability of running PyTecplot in batch mode, as shown in the Webinar, you will need to upgrade to the full version of Tecplot 360.

Request a Quote

4. Does Tecplot 360 run well with other software like Star-CCM+?

Tecplot 360 does not have a direct loader for Star-CCM+. However, you can export from Star-CCM+ to CGNS, Ensight or Tecplot format, all of which can be read by Tecplot 360.

Tecplot 360 Compatible Solvers
Swimmer

5. When running PyTecplot in batch mode, is session.stop required to clean up the temporary files? Or can you just let the process exit?

Yes and no. We found that on Linux, the multiprocessing toolkit just terminates the process, resulting in a core dump. It is very important to call session.stop to avoid these core dump files.

6. What PyTecplot Resources Do You Have?

The post Python Multiprocessing Accelerates Your CFD Analysis Time appeared first on Tecplot.

► Predictive Ocean Model Helps Coastal Estuarine Research
    5 Apr, 2019

Jonathan Whiting is a member of the hydrodynamic modeling group at Pacific Northwest National Laboratory in Washington State. He has been a Tecplot 360 user since 2014.

PNNL and the Salish Sea Model

Pacific Northwest National Laboratory (PNNL) is a U.S. Department of Energy laboratory with a main campus in Richland, Washington. The PNNL mission is to advance the frontiers of knowledge by taking on some of the world’s greatest science and technology challenges. The lab has distinctive strengths in chemistry, earth sciences and data analytics and deploys them to improve America’s energy resiliency and to enhance national security.

Jonathan is part of the Coastal Sciences Division at PNNL’s Marine Sciences Laboratory. The hydrodynamic modeling group in Seattle, WA works primarily to promote both ecosystem management and the restoration of the Salish Sea and Puget Sound with the Salish Sea Model.

The Salish Sea Model is a predictive ocean modeling tool for coastal estuarine research, restoration planning, water-quality management, and climate change response assessment. It was initially created to evaluate the sensitivity of Puget Sound acidification to ocean and fresh water inputs and to reproduce hypoxia in the Puget Sound while examining its sensitivity to nutrient pollution, funded by the Washington State Department of Ecology. Now it is being applied to answer the most pressing environmental challenges in the greater Salish Sea region.

PNNL is currently in the first year of a three-year project to enhance the Salish Sea Model. The goals are to increase the model’s resolution and to make it operational, which means assuring the model runs on schedule and gets results that are continuously available to the public—including predictions a week or so ahead. This will allow for new applications such as the tracking of oil spills during response activities.

Jonathan has worked with the modeling team on several habitat restoration planning projects along the Snoqualmie and Skagit rivers in Washington’s Puget Sound region. Jonathan was responsible for post-processing model outputs into analytical and geospatial products to meet client expectations and to convey findings that aid project planning and stakeholder engagement.

The Challenge: Creating Consistent, High-Quality Visualization for Model Post-Processing

The hydrodynamics modeling group uses the Finite Volume Community Ocean Model (FVCOM) simulation code.

For the recent Skagit Delta Hydrodynamic Modeling project, a high-resolution triangular unstructured grid was created with 131,471 elements and 10 terrain-following sigma layers in the vertical plane. Post-processing was conducted on five time snapshots per scenario across 11 scenarios (including a baseline). Each file was about 55MB in uncompressed binary format.

The sheer quantity of plots was very challenging to handle, and it was important to generate clean plots that clearly conveyed results.

The Solution – Tecplot 360

Jonathan most often uses Tecplot 360 to generate top-down plots and videos that visualize parameters geospatially across an area. He then uses that visualization to convey meaningful project implications to his clients, who in turn use the products to inform program stakeholders and the public.

To handle the quantity of data Jonathan was working with, Tecplot 360 product manager Scott Fowler gave him a quick demonstration of Tecplot 360 and showed him how to use Chorus, the parametric design space analysis tool within Tecplot 360. Chorus allowed Jonathan to analyze a single dataset with multiple plots in a single view over time by using the matrix tool, easing the bulk generation of plots.

Tecplot support and development teams have been working closely with Jonathan, especially by adding new geospatial features to the software that enhance its automation and efficiency.

According to Jonathan, the key strengths in Tecplot’s software have been:

  • Ease of use
  • Availability of scripting to assist bulk processing
  • Variety of tools and features, such as georeferenced images

Using Tecplot 360 has allowed Jonathan to create professional plots that enhance the impact of their modeling work.

How Will Jonathan Use Tecplot in the Future?

Jonathan’s personal niche has become trajectory modeling, so he is also interested in using Tecplot to generate visuals associated with the movement of objects on the surface by using streamlines, velocity gradients, slices, and more. He intends to take a deeper dive into the vast capabilities of Chorus and PyTecplot in the future!
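
For a flavor of what that scripting enables, here is a minimal PyTecplot sketch of bulk plot generation (a sketch only, not Jonathan's actual script; the snapshot file pattern and output paths are hypothetical):

    import glob
    import os
    import tecplot as tp
    from tecplot.constant import PlotType

    os.makedirs('plots', exist_ok=True)  # output folder (hypothetical)

    # Export one top-down contour image per time-snapshot file.
    for i, path in enumerate(sorted(glob.glob('snapshots/*.plt'))):
        tp.new_layout()                         # start from a clean layout
        tp.data.load_tecplot(path)              # load one time snapshot
        frame = tp.active_frame()
        frame.plot_type = PlotType.Cartesian2D  # top-down (plan) view
        frame.plot().show_contour = True        # contour-flood the field
        tp.export.save_png('plots/snapshot_{:03d}.png'.format(i), width=1600)

Run in batch, a loop like this turns dozens of snapshots per scenario into a consistent set of images with no manual work.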

Tecplot 360’s latest geoscience-focused release, Tecplot 360 2018 R2, includes the popular FVCOM loader and has the ability to insert georeferenced images that put your data in context. Tecplot 360 will automatically position and scale your georeferenced Google Earth or Bing Maps images.

Learn more about how Tecplot 360 is used for geoscience research.

Try Tecplot 360 for Free

The post Predictive Ocean Model Helps Coastal Estuarine Research appeared first on Tecplot.

► Parallel SZL Output from SU2
    2 Apr, 2019

At the end of February 2019, I did a presentation at the SIAM Conference on Computer Science and Engineering (CSE) in Spokane, Washington. I live in the Seattle area, and Spokane is reasonably close, so I decided to drive instead of fly. Unfortunately, the entire nation, including Washington state, was still in the grips of the dreaded “polar vortex.” The night before my drive to Spokane all of the mountain passes were closed due to heavy snowfall. They opened in time but the drive was slippery and slow. I probably should have taken a flight instead! On the drive, I came up with this Haiku…

Driving to Spokane
Snow whirlwinds on pavement
Must make conference!

Join the Tecplot Community

Stay up to date by subscribing to the TecIO Newsletter for events and product updates.

Subscribe to Tecplot

The Goal: Adding Parallel SZL Output to SU2

My presentation at the SIAM CSE conference was on the progress made adding parallel SZL (SubZone Load-on-demand) file output to SU2. The SU2 suite is an open-source collection of C++-based software tools for performing Partial Differential Equation (PDE) analysis and solving PDE-constrained optimization problems. The toolset is designed with Computational Fluid Dynamics (CFD) and aerodynamic shape optimization in mind, but is extensible to treat arbitrary sets of governing equations such as potential flow, elasticity, electrodynamics, chemically-reacting flows, and many others. SU2 is under active development by individuals all around the world on GitHub and is released under an open-source license. For more details, visit SU2 on GitHub.

The Challenge: Build System Compatibility

We implemented parallel SZL output in SU2 using the TecIO-MPI library, available for free download from the TecIO page. In some CFD codes, such as NASA’s FUN3D code, each user site is required to download and link the TecIO library. However, in the case of SU2 we decided to include the obfuscated TecIO source code directly into the distribution of SU2. This makes it much easier for the user – they need only download and build SU2 and they have SZL file output available.

However, this did add some complications on our end.

The main complication is that SU2 is built using the GNU configure system, whereas TecIO is built using CMake. We had to create new automake, autoconf, and m4 script files to seamlessly build TecIO as part of the SU2 build.

If you find yourself integrating TecIO source into a CFD code built with the GNU configure system, feel free to shoot me some questions – scottimlay@tecplot.com

Implementing Serial vs. Parallel TecIO

Once TecIO was building as part of the SU2 build, it was straightforward to get the serial version of SZL output working. SU2 already included an older version of TecIO, so we simply replaced those calls with the newer TecIO calls.

To get the parallel SZL output (using TecIO-MPI) working was a little more complicated. Specifically, it required knowing which nodes on each MPI rank were ghost nodes. Ghost nodes are nodes that are duplicated between partitions to facilitate the communication of solution data between MPI ranks. We only want the node to show up once in the SZL file, so we need to tell TecIO-MPI which nodes are the ghost nodes. In addition, CFD codes often utilize ghost cells (finite-element cells duplicated between MPI ranks) which must be supplied to TecIO-MPI. This information took a little effort to extract from the SU2 “output” framework.
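
As a conceptual illustration of that bookkeeping (a sketch only, not SU2's or TecIO-MPI's actual code; the names are hypothetical stand-ins for SU2's internal structures): each node has a single owning rank, and any locally stored node owned by another rank is a ghost.

    # Identify ghost nodes on one MPI rank so each node is written once.
    # 'local_global_ids' lists the global ID of every locally stored node;
    # 'owning_rank' maps a global node ID to the one rank that owns it.
    def ghost_nodes(local_global_ids, owning_rank, my_rank):
        return [i for i, gid in enumerate(local_global_ids)
                if owning_rank[gid] != my_rank]

The same idea extends to ghost cells: flag any cell whose owning rank differs from the rank doing the writing.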

[Figure: The Common Research Model from the High-Lift Prediction Workshop]

How Well Does It Perform?

We now have a version of SU2 that is capable of writing SZL files in parallel while running on an HPC system. The next obvious question: “How well does it perform?”

Test Case #1: Common Research Model (CRM) in High-Lift Configuration

The first test case is the Common Research Model from the High-Lift Prediction workshop. It was run with 3 grid refinement levels:

  • 10 million cells
  • 47.5 million cells
  • 118 million cells

These refinements allowed us to measure the effect of problem size on the overhead of parallel output. All three cases were run on 640 MPI ranks on the NCSA Blue Waters supercomputer. The results are shown in the following table:

                           10M Cells    47.5M Cells    118M Cells
  Time for CFD step        17.6 sec     70 sec         88 sec
  Time for restart write   6.1 sec      10.7 sec       31.4 sec
  Time for SZL file write  43.9 sec     171 sec        216 sec

For comparison we include the cost of advancing the solution a single CFD time step and the cost of writing an SU2 restart file. It should be noted that the SU2 restart file contains only the conservative field variables – no grid variables and no auxiliary variables – so there is far less writing involved in creating the restart file. The cost of writing the SZL file is roughly 2.5 times the cost of a single time step. If you write the SZL file infrequently (every 100 steps or so), this overhead is fairly small (about 2.5%).
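
A quick back-of-the-envelope check of those numbers from the table:

    # Overhead of SZL output relative to compute, per the table above.
    step = {'10M': 17.6, '47.5M': 70.0, '118M': 88.0}    # sec per CFD step
    szl = {'10M': 43.9, '47.5M': 171.0, '118M': 216.0}   # sec per SZL write
    for case in step:
        ratio = szl[case] / step[case]                   # ~2.4-2.5x in all cases
        # Writing every 100 steps adds ratio/100 to total runtime (~2.5%).
        print('{}: {:.1f}x a step; {:.1%} overhead if written every 100 steps'
              .format(case, ratio, ratio / 100))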

Test Case #2: Inlet

The second test case is an inlet like you might find on a next-generation jet fighter. It aggressively compresses the flow to keep the inlet as short as possible.

The inlet was analyzed using 93 million tetrahedral cells and 38 million nodes. As with the CRM case, the inlet case was run on the NCSA Blue Waters computer using 640 MPI ranks.

SU2 takes 74.7 seconds to increment the inlet CFD solution by one time-step and 31 seconds to write a restart file. To write the SZL plot file requires 216 seconds – 2.9 times as long as a single CFD time step.

Availability

The parallel SZL file output is currently in the pull-request phase of SU2 development. Once it is accepted, it will be available in the develop branch on GitHub. On occasion (I’m told every six months to a year), the develop branch is merged into the master branch. If you are interested in trying the parallel SZL output from SU2, send me an email (scottimlay@tecplot.com) and I’ll let you know which branch to download.

Better yet, subscribe to our TecIO Newsletter and we will send you the updates.

Subscribe to Tecplot


Scott Imlay
Chief Technical Officer
Tecplot, Inc.

The post Parallel SZL Output from SU2 appeared first on Tecplot.

► Improving TecIO-MPI’s Parallel Output Performance
  20 Mar, 2019
TecIO, Tecplot’s input-output library, enables applications to write Tecplot binary files. Its parallel version, TecIO-MPI, enables MPI parallel applications to output Tecplot’s newer binary format, .szplt.

TecIO-MPI was first released in 2016. Since then, we’ve received feedback from some customers that its parallel performance for outputting unstructured-grid solution data needed improvement. So we embarked on an effort to understand and eliminate bottlenecks in TecIO-MPI’s execution.

Customer reports 15x speed-up in writing data from FUN3D when using the new TecIO-MPI library!
Learn more and download the TecIO Library

Understanding What Customers are Seeing

To understand what our customers were seeing, we needed to be able to run our software on hardware representative of what our customers were running on, namely, a supercomputer. The problem is that we don’t own one. We also needed parallel profiling software that would help us identify bottlenecks, or “hot spots,” in our code, including in the MPI inter-process communication. We made some progress in Amazon EC2 using open-source profiling software, but had greater success using Arm (formerly Allinea) Forge software at the National Center for Supercomputing Applications (NCSA).

NCSA has an industrial partners program that provides access to their iForge supercomputer and a wide array of open source and commercial software, including Arm Forge. iForge has over 2,000 CPU cores and runs IBM’s GPFS parallel file system, so it was a good platform to test our software. Arm Forge, specifically its MAP profiling tool, provided the ability to easily identify hot spots in our software, and to drill down through the layers of our source code to see exactly where the performance problems lay.

An additional application to NCSA also gave us access to their Blue Waters petascale supercomputer, which features about 400,000 CPU cores and the Lustre parallel file system.[1] This gave us the ability to scale our testing up to larger problems, and to test the performance on another popular parallel file system.

[Figure: Arm MAP profile with a region of time selected]

Performance Improvement Results

Using iForge hardware and Arm Forge software, we were able to identify two sources of performance problems in TecIO-MPI:

  • Excessive time spent in writing small chunks of data to disk.
  • Too much inter-process exchange of small chunks of data.

Consolidating these small operations into larger ones led to an order-of-magnitude reduction in output time. Testing with three different computational fluid dynamics (CFD) flow solvers indicates output times, for structured or unstructured grids, roughly equal to the time required to compute a single solver iteration.
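
To illustrate the first fix, here is a sketch of the general write-consolidation idea (an illustration of the principle, not TecIO-MPI's actual implementation; the class name and block size are hypothetical): accumulate small chunks in memory and issue one large write per block.

    # Batch many small writes into large blocks before touching disk.
    class BufferedWriter:
        def __init__(self, fileobj, block_size=4 * 1024 * 1024):  # 4 MiB
            self.fileobj = fileobj
            self.block_size = block_size
            self.buffer = bytearray()

        def write(self, chunk):
            self.buffer.extend(chunk)
            if len(self.buffer) >= self.block_size:
                self.flush()

        def flush(self):
            if self.buffer:
                self.fileobj.write(self.buffer)  # one large write, not many small
                self.buffer.clear()

The same consolidation principle applies to the second bottleneck: gathering many small MPI exchanges into fewer, larger messages amortizes the per-message latency.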

We will continue to collect feedback from users with an eye to additional improvements as TecIO-MPI is implemented in additional solvers. We invite you to provide us with your own experience!

Take our TecIO Survey

How to Obtain TecIO Libraries

TecIO and TecIO-MPI, along with instructions in Tecplot’s Data Format Guide, are installed with every Tecplot 360 installation.

It is recommended, however, that you obtain and compile source for TecIO-MPI applications, because the various MPI implementations are not binary-compatible. Source for TecIO and TecIO-MPI, and the Data Format Guide, are all available via a My Tecplot account.

For more information and access to the TecIO Library, please visit:

TecIO Library

[1] This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.



By Dr. David E. Taflin
Senior Software Development Engineer | Tecplot, Inc.

Read Dave’s employee profile »

The post Improving TecIO-MPI’s Parallel Output Performance appeared first on Tecplot.

► Calculating a New Variable
  11 Mar, 2019

Data Alteration through Equations

Engineers using Tecplot 360 often need to create a new variable based on a numeric relationship among variables already loaded into Tecplot.

Calculating a new variable is straightforward. To start, load your data into Tecplot 360. In this example, we loaded the VortexShedding.plt data located in the Tecplot 360 examples folder.

Choose Alter -> Specify Equations from the Data menu. Alternatively, click the equations icon in the toolbar.

The Specify Equations dialog will appear.

We will now calculate the difference in a variable between two zones. In the Zones to Alter list, click All.

Initialize the new variable T(K)Difference by typing the following in the Equation(s) window:

{T(K)Difference} = 0

Click Compute

Now assign the difference in variable T(K) between zones 3 and 2 (that is, T(K) in zone 3 minus T(K) in zone 2) to T(K)Difference. You can do this for any two zones that have a similar structure.

Select the zones you want to receive the difference value. Type the following equation using the exact variable name from the Data Set Information dialog.

{T(K)Difference} = {T(K)}[3]-{T(K)}[2]

Click Compute

The new variable T(K)Difference is now available for plotting. Open the Data Set Information dialog from the Data menu and view the new variable T(K)Difference.

Note that changes made to the dataset in the Specify Equations dialog are not made to the original data file. You can save the changes by saving a layout file or by writing the new data to a file. Saving a layout file keeps your data file in its original state and uses journaled commands to reapply the equations when the layout is loaded.
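
The same alteration can be scripted. Here is a minimal PyTecplot sketch, assuming VortexShedding.plt is in the working directory:

    import tecplot as tp

    # Load the example data, then apply the same two equations as above.
    tp.data.load_tecplot('VortexShedding.plt')
    tp.data.operate.execute_equation('{T(K)Difference} = 0')
    tp.data.operate.execute_equation('{T(K)Difference} = {T(K)}[3] - {T(K)}[2]')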

Learn more in Chapters 20 and 21 of the Tecplot 360 User Manual.


This blog was originally published in 2013 and has been updated and expanded.

The post Calculating a New Variable appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► Quickie: ESI’s Q3 goes to plan
  20 Nov, 2019

ESI yesterday released Q3 results that show … no real change. The company’s reinvention continues.

Revenue in Q3 was €28.6 million, up 2% as reported and down 1% in constant currencies (cc). Software license revenue was up 1% (but down 3% cc) to €20.7 million. Services revenue was up 7% (up 5% cc) to €7.9 million.

For the year to date, revenue from the Americas was €15 million, up 6% as reported and up 1% cc. From EMEA, revenue was €34 million, up 1% as reported and cc. Finally, revenue from Asia was €35 million, up 3.0% as reported (down 1% cc).

If you’re keeping track, ESI’s financial performance has long been weighted to a very strong Q4 — and, until this year, that included November, December and January. 2019 is different: it won’t include January as the company moves its fiscal year to match the calendar year. Look for a report early next year on a shortened Q4 but expect growth to pick up in 2020.

The post Quickie: ESI’s Q3 goes to plan appeared first on Schnitger Corporation.

► AVEVA’s firing on all cylinders in H1, poised for $1B revenue in F2020
  18 Nov, 2019

Schneider Electric reported results a couple of weeks ago that caused AVEVA (partly owned by SE) to say that everything was going to plan — optimistic but basically content-free. Last week, AVEVA remedied that by offering a flood of detail about its first half of fiscal 2020. First, those details, then some comments:

  • Total revenue was £392 million, up 17% as reported. On an organic, constant currency basis, revenue was up 12%. AVEVA said growth was driven by strong sales execution, cross-selling of the combined product portfolio to an enlarged customer base and a number of large multi-year contracts, including one for which the revenue was partly recognized upfront
  • Revenue from rentals and subscriptions was £141 million, up 84% as reported (up 77% in constant currencies, cc), while revenue from initial fees and perpetuals was £85 million, down 12% (down 13% cc) as AVEVA continues to shift its customers to repeatable forms of revenue. Without that large contract mentioned above, subs would have been up 60%
  • Software support and maintenance revenue was £102 million, up 8% (flat cc)
  • Services revenue was £64 million, down 7% (down 10% cc) as AVEVA shifts focus to higher-margin projects and works to create more standard, repeatable solutions, which require less configuration and customization
  • Channel sales represented about 1/3 of total, performing “well,” with “double digit growth across all regions” due, in part, to moving more products through the (legacy SE) Monitoring & Control partners, and to launching the AVEVA Partner Network, which will be able to sell the entire AVEVA portfolio. That’s a big deal because one of the limiters the old AVEVA faced was numbers — getting enough feet on the street to sell and support its products on a global scale
  • Direct revenue was, therefore, 2/3 of total, characterized as “strong, benefiting from investment and revised sales incentives”
  • By business unit, Engineering (the design and simulation products) reported revenue of around £165 million, up 20% cc, led by strong growth in Plant and Marine design tools even as customers move to rentals and subscriptions and away from perpetual license deals. Engineering represented 42% of total revenue, compared to 43% in the last half of fiscal 2019. Don’t read too much into that slight dip – AVEVA has just started reporting this split, there’s a shift in perpetuals vs subs that throws everything off, and the year is typically not evenly weighted – this category is hugely important to AVEVA and that is unlikely to change
  • Monitoring & Control reported revenue of about £125 million, up in the “low single digits” cc, with solid growth in HMI SCADA partly offset by lower sales in pipeline Monitoring & Control. Why? Because AVEVA is being “more selective bidding for contracts.” Monitoring & Control represented 32% of total revenue, flat with earlier this year
  • Asset Performance Management represented 14% of the Group’s total revenue, or approximately £55 million, up “double digits” cc, with “particularly strong growth from AVEVA Predictive Analytics, which achieved a large order win in North America.”
  • Planning and Operations was about 12% of the business (up from 11% earlier this year) or £47 million, reporting “mid-teens” growth
  • By geo, revenue from EMEA was up 4% to £135 million, led by 14% growth in recurring revenue
  • The Americas also saw the effects of AVEVA’s “planned reduction in Initial & Perpetual Licenses and Training & Services revenue” but, even so, total revenue was up 7% to £133 million
  • Finally, revenue from Asia Pacific was up 49% to £124 million on “strong growth” in Korea, China and Australia and particular strength in Engineering and Planning & Operations. Regional growth further benefited from a large multi-year contract with a key global account customer in Australia, likely Worley (but don’t tell anyone, it’s a secret)

By end-market, the oil & gas sector still dominates AVEVA’s business, accounting for about 40% of revenue. Because of the addition of the SE brands, AVEVA now addresses a much larger range of customers, from design and construction during the capital phase of an asset to operational phases across upstream, midstream and downstream. AVEVA reports that overall market conditions were “stable, with steady capital and operating expenditure across all segments”.

Marine, chemicals/petrochemicals, packaged goods (in which AVEVA includes food & beverage and pharma), power, and metals & mining each represent 5% to 10% of revenue. AVEVA says marine did well due to “product cycle upgrades and a large multi-year rental contract win with a shipbuilder in Asia”. AVEVA said its other end markets are “largely non-cyclical and are primarily driven by structural growth as industries make increasing use of technology to drive efficiency” and that those drivers are “strong.”

AVEVA said that as part of its rationalization of its sales capacity, it had sold off a wholly owned distributor in Italy and intends to sell other distributors in Germany and Scandinavia.

Well. A lot to parse. My main takeaways: Legacy AVEVA needed more resources to expand and sell a solid offering; legacy SE needed to invest more in its software assets than a hardware company seemed willing to do. Together, they seem more than able to weather the combined effects of go-to-market changes, the move from perpetual to subs and rationalizing products/offices/teams. Of course, there’s a lot to be said for being in the right place at the right time and, as CEO Craig Hayman told investors,

“In the past year, the number of companies with a formalized near-term digital strategy has doubled, and productivity improvement is a major driver. [As one example,] Suncor Energy is Canada’s largest integrated energy company, and they are on a journey to put data to work and put it in the hands of their employees. They are the first integrated energy company to leverage the cross-portfolio combination of AVEVA’s PRiSM predictive asset analytics, together with real-time performance management and optimization capabilities of ROMeo, across their enterprise.”

And I think those are the key: Cross-portfolio. First customer of its type. Digital strategy. Across enterprise. This stuff isn’t easy, but it is becoming more of an imperative (I’m sure Suncor’s competitors, if they weren’t already thinking along these lines, are now) and AVEVA stands to benefit.

AVEVA doesn’t do outlook, but investment analysts in London do. AVEVA said this: “Demand for AVEVA’s products is strong, driven by the ongoing digitalisation … and stable conditions in key end markets. AVEVA’s combined product offering is seeing growing industry recognition. Against this backdrop, the benefits of integration and measures taken to drive growth, improve revenue mix and increase margin are having a positive impact. The Group’s order pipeline is solid and the outlook remains positive.” Investors took that in and came up with a revenue consensus of £834 million, up 8%, for fiscal 2020. That’s $1,100 million at today’s exchange rate — MAKING AVEVA A BILLION DOLLAR COMPANY BY REVENUE. [That deserves shouting!]

The post AVEVA’s firing on all cylinders in H1, poised for $1B revenue in F2020 appeared first on Schnitger Corporation.

► Altair’s Q3 soft, guidance for Q4 disappoints — but diversification will boost 2020
  17 Nov, 2019

Altair reported results last week for its fiscal Q3 and, while they weren’t awful, they didn’t do much to reassure investors about the rest of the year. The details and then some thoughts:

  • GAAP revenue for Q3 was $100 million, up 18% and up 17% in constant currencies (cc)
  • Non-GAAP revenue for the quarter was $103 million, up 18% but still about 3% below consensus
  • GAAP software revenue was $78 million, up 21% (up 23% cc). CFO Howard Morof noted that $4 million in revenue was not recognized in the quarter due to accounting under the ASC 606 rule, and said that Altair would have reached its guidance if not for this accounting quirk
  • Within the software total, license revenue was $47 million, up 15%, while maintenance revenue was $31 million, up 33%
  • Software-related services revenue was $8 million, down 8%
  • Client engineering services revenue was up 5% to $13 million

Altair also announced that it has acquired EDEM, makers of discrete element method (DEM) simulation tools targeted at mining, pharmaceutical and other process manufacturing industries. Two things: First, DEM is used to model how objects react to one another inside a machine, such as gravel and coal, tablets, powders and other things that are not quite liquids but also not stationary objects. As Altair’s press release puts it, “With this acquisition, Altair customers now have the ability to design and develop power machinery while simultaneously optimizing how this equipment processes and handles bulk materials.”

Second, this acquisition further diversifies Altair’s customer base away from automotive. And that has two implications: the first is positive, in that Altair remains over-reliant on the auto industry, though less than it used to be, at 35% of total revenue. EDEM will help Altair weather storms in this potentially slowing market. But of course, that leads to a concern: can Altair sell to these new industries? TBD. EDEM is not expected to have an impact on revenue for 2019.

The acquisition is positive news. The real problem with the earnings report was Altair’s outlook: The company now expects Q4 non-GAAP revenue to be $109 million at the midpoint, well below Wall Street’s consensus of $125 million. Disappointed investors dropped the share price by as much as 20%, though it has recovered a wee bit since the news broke.

Why so underwhelming? CEO Jim Scapa said that the company underestimated how quickly customers would switch to subscriptions, which lowered expected revenue for Q4 by about $4 million (and $9 million for the year in total). One investor on the call asked how Altair could have missed these signs when it looked across its pipeline, and Mr. Scapa replied that they were surprised by customers’ choices. We’ve seen this before, say when PTC’s customers went to subs far faster than anyone thought likely. The lesson: customers value choice and it’s not always possible to predict how a sale will go. In the greater scheme of things, $9 million out of Altair’s projected revenue of $440 million in 2019 is just 2%. While I’d love to have that in my pocket, it’s just not that big a deal, and it means more recognized revenue later.

Of greater concern are Mr. Scapa’s comments about a slowdown in Altair’s pipeline of automotive deals in Japan and Germany, as OEMs and their partners shift focus from traditional powertrain to newer areas such as electrification. Mr. Morof estimated that this could affect revenue by about $11 million for 2019. Mr. Scapa told investors that Altair had typically sold into those traditional disciplines and is just now releasing products to support newer initiatives. He said that customers aren’t reducing the number of units they’re buying or choosing competitors’ products; rather, Altair is not yet selling effectively into those new departments and disciplines. That’s an interesting take, especially as the customers I spoke with at Altair’s user conference last month didn’t mention this — and it makes sense that the ones presenting would be talking about past work, not necessarily using Altair’s newest offerings.

For Q4, Mr. Morof sees total revenue of around $107 million (up 4%) with software revenue of about $85 million (up 7%). That means total revenue of around $440 million in 2019, up 11%, with software revenue of about $350 million, up 15% or so. Back in Q1 Altair thought revenue for the year would be $472 million, so we can see why investors might be disappointed.

That said, Mr. Scapa summed it all up this way: “This is the second quarter that we’re guiding down. We’re not happy about it but we need to be conservative … We’re getting pretty robust growth numbers [from sales teams, for 2020], so are very, very optimistic about 2020 despite what we’re seeing in Q4.”

I’d temper that a bit – but what I saw at the user conference does give every reason for optimism. Between SimSolid and DataWatch, and now Polliwog and EDEM, there’s lots of room for growth.

The post Altair’s Q3 soft, guidance for Q4 disappoints — but diversification will boost 2020 appeared first on Schnitger Corporation.

► Quickie: Siemens gets into material science with MultiMech
  15 Nov, 2019

Goodness: Two acquisitions in one week. They have been busy!

Siemens just announced that it will acquire MultiMechanics, Inc., maker of the MultiMech suite of material analysis tools. MultiMech uses finite element methods to do microstructural analysis, “strongly coupling the macro and micro mechanical response and integrating materials engineering to part design”.

I’m no material scientist, but as I understand it, Siemens wants to add MultiMech to Simcenter to create a detailed material model to inform part design — sort of a digital twin of the material at the same time as creating and analyzing the digital twin of the mechanism or part. That’s very cool.

Jan Leuridan, SVP of Simulation & Test Solutions, Siemens Digital Industries Software says the combo of materials and part design “will help to shrink the innovation cycle of new products and materials, possibly saving millions of dollars and several years in development and certification in aerospace, automotive and other sectors. Customers will have the ability to fully exploit the potential of advanced materials to optimize weight and performance in an efficient way that is not possible with classical, test-based, approaches.”

Flavio Souza, President and CTO, MultiMechanics adds that the combo “will provide a strong basis for further innovation, enabling an expansion of scope of structural simulation to include multi-physics support for applications such as minimization of part distortion, prevention of voids during material flow, and prediction of visco-elastic acoustic properties.”

MultiMechanics’ website says that it models a broad range of materials, including polymers, metals, composites, and ceramics, meaning that it plays a key role in many types of manufacturing methods, including injection-moulding and additive manufacturing.

Terms of the deal were not made public, and the acquisition is expected to close this month.

The post Quickie: Siemens gets into material science with MultiMech appeared first on Schnitger Corporation.

► Quickie: Siemens adds Atlas 3D to 3D printing bench
  13 Nov, 2019

Siemens announced yesterday that it will acquire Atlas 3D, Inc. before the end of November. Atlas 3D makes Sunata, software that helps designers optimize print orientation and support structures for direct metal laser sintering (DMLS) printers via sophisticated thermal distortion analysis.

Zvi Feuer, senior vice president, Manufacturing Engineering Software of Siemens Digital Industries Software said that Atlas 3D brings “cloud-based Sunata software [which] makes it easy for designers to determine the optimal way to 3D print parts for high quality and repeatability. The combination of Sunata with the robust CAE additive manufacturing tools in Simcenter enables a ‘right first time’ approach for industrial 3D printing.”

Chad Barden, chief executive officer of Atlas 3D, adds “Sunata … equips designers to more easily design parts that are printable, which helps companies more quickly realize the benefits of additive manufacturing.”

Atlas 3D will join Siemens Digital Industries Software. No details of the transaction were made public.

The post Quickie: Siemens adds Atlas 3D to 3D printing bench appeared first on Schnitger Corporation.

► Quickie: ANSYS beats expectations in Q3
    7 Nov, 2019

I’m battling the flu/plague/chest infection/dunno so this has to be written between sneezes and coughs. I’ll be more thorough after I’ve listened to the earnings call.

ANSYS reported Q3 revenue and earnings ahead of their own forecasts, with non-GAAP revenue of $346 million, up 18% as reported and up 19% in constant currencies (cc).

A few details:

  • Software revenue in Q3 was $137 million, up 26%
  • Lease revenue was up 64% in Q3 to $71 million while perpetual revenue was $66 million, basically flat
  • Maintenance revenue was $195 million, up 11%
  • Services revenue was $14 million, up 54%, “strongly influenced by projects to assist our customers with broader adoption of ANSYS simulation tools, as well as the contributions from recent acquisitions”
  • The channel mix was stable compared to prior quarters, with direct sales and indirect accounting for 77% and 23%, respectively, of Q3 revenue. That compares to 76% / 24% for the year to date

ANSYS’ prepared materials also included this about the LSTC acquisition, completed on 1 November:

The acquisition will empower ANSYS customers to solve a new class of engineering challenges, including developing safer automobiles, aircraft and trains while reducing or even eliminating the need for costly physical testing. The transaction closed with a purchase price of $779.9 million, which included $472.7 million in cash and the issuance of 1.4 million shares of ANSYS common stock in an unregistered offering to the prior owners of LSTC. In conjunction with the transaction, ANSYS obtained $500.0 million of term debt financing to fund the cash component of the purchase price.

Lots of stuff about regions and industries: Americas led growth, trade restrictions affected revenue in China; semiconductor R&D and defense spend supported growth; automotive companies continue to invest in autonomous and electrification.

ANSYS got specific about China, pointing out that in all of 2018, non-GAAP revenue from China was just $58 million, or less than 5% of total, and that 2019 wasn’t doing too badly: through the first nine months of 2019, total non-GAAP revenue from China was $44 million, or 4.3% of total non-GAAP revenue. Why do this? Without hearing the call, I’d guess at two reasons: 1. to calm investors who fear that trade tensions will cause ANSYS revenue in the region to tumble, and 2. to point out that even if tensions increase, the total effect on ANSYS won’t be material.

And that plays into Q4 guidance: ANSYS now believes that Q4 revenue will be between $450 million and $475 million on a GAAP basis, leading to a 2019 forecast of total revenue of $1,479 million to $1,505 million. The midpoint of that annual guidance is up about $20 million from prior guidance.

Back to bed. Later.

The post Quickie: ANSYS beats expectations in Q3 appeared first on Schnitger Corporation.

