
CFD Blog Feeds

Another Fine Mesh

► This Week in CFD – On Hiatus
  18 Sep, 2020
This Week in CFD, the weekly blog post with all (?) of the fluids-oriented CAE news and entertainment (?) for the CFD lover, the compulsive reader, and the morbidly curious, is going on hiatus for the rest of September and … Continue reading
► Recap of Six Recent CFD Success Stories with a Meshing Assist
    9 Sep, 2020
No one generates a mesh just to generate a mesh. The proof of a mesh’s suitability is successful use in a CFD simulation. That success can be predicated on many factors including the availability of a broad range of mesh … Continue reading
► Use of Grand Challenge Problems to Assess Progress Toward the CFD Vision 2030
    8 Sep, 2020
Join the AIAA’s CFD 2030 Integration Committee at SciTech 2021 this coming January for four invited talks and an extended Q&A session on formulation of grand challenge problems that would provide a basis for assessing progress toward the CFD Vision … Continue reading
► This Week in CFD
    4 Sep, 2020
This week’s CFD news brings some excellent reading as we head into a 3-day weekend, at least here in the U.S. It begins with a research article on undergraduate education that’s certain to spark thinking if not debate. And our friends … Continue reading
► This Week in CFD
  28 Aug, 2020
This week’s CFD news includes articles that pose questions about open source software. Does it have a people problem? And are people prejudiced against it? Proving that good things never get old, there’s a multi-part video series on fluid mechanics … Continue reading
► It’s all in the numbering – mesh renumbering may improve simulation speed
  27 Aug, 2020
We all know that the mesh plays a vital role in CFD simulations. Yet, not many realize that renumbering (ordering) of the cells in the Finite Volume Method (FVM) can affect the performance of the linear solver and thus the … Continue reading
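The renumbering effect described above is easy to demonstrate outside any CFD code. A sketch (mine, not from the post) using SciPy's reverse Cuthill-McKee reordering on a structured-grid Laplacian, standing in for an FVM coefficient matrix, shows how a bad cell numbering inflates the matrix bandwidth and a renumbering recovers it:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Maximum |row - col| over the nonzeros of a sparse matrix."""
    A = sp.coo_matrix(A)
    return int(np.abs(A.row - A.col).max())

# 5-point Laplacian on a 20x20 grid with natural (row-by-row) numbering
m = 20
I = sp.identity(m)
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(m, m))
A = sp.csr_matrix(sp.kron(I, T) + sp.kron(sp.diags([-1, -1], [-1, 1], shape=(m, m)), I))

# Scramble the cell numbering, as an unstructured mesher might produce
rng = np.random.default_rng(42)
p = rng.permutation(m * m)
A_bad = A[p][:, p]

# Reverse Cuthill-McKee recovers a low-bandwidth ordering
q = reverse_cuthill_mckee(A_bad, symmetric_mode=True)
A_rcm = A_bad[q][:, q]
```

A smaller bandwidth means nonzeros cluster near the diagonal, which improves cache behavior in sparse matrix-vector products and hence linear solver speed.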

F*** Yeah Fluid Dynamics

► Preventing Flooding
  30 Sep, 2020

The Dutch have been exceptional water engineers for centuries, a necessity in a country where more than a quarter of its territory lies below sea level. After a devastating flood in the early 1950s, the country embarked on a decades-long endeavor to build the massive Delta Works that now protect a large portion of the population from oceanic storm surges that would otherwise flood the countryside.

As part of their efforts to instill resiliency both along the coast and upstream, the Netherlands has shifted dykes, created floodplain habitats, and built water storage into new buildings. With communities around the world at greater flood risk than ever as our climate changes, the Netherlands serves as a shining example of what’s possible with proper planning and investment. (Video and image credit: TED-Ed)

► The Wanderings of Micro-Scallops
  29 Sep, 2020

In the 19th century, botanist Robert Brown observed pollen granules beneath his microscope jittering randomly. Einstein showed that this motion resulted from the impacts of much-smaller atoms against the particles. For small enough objects, the random walk of Brownian motion dominates their dynamics. A new study explores how flexible objects move at this Brownian scale.

The researchers used trios of colloids — microscopic particles — held together by a lipid fluid layer that allows the three particles to change shape without losing contact. Essentially, each trio forms a tiny hinge. As atoms strike the colloids, they both move and change shape.

Compared to rigid shapes, the researchers found their flexible hinges moved around in space about 3-15% faster. They also found coupling between the shape changes and motion. When the colloids hinge closed, it propels them in the direction the hinge points. Because this resembles the propulsion of scallops, the researchers refer to this as the “Brownian quasi-scallop mode.” (Image and research credit: R. Verweij et al.; via phys.org)

► The Magic* Cork
  28 Sep, 2020

*Spoiler alert: it’s not magic. It’s science!

Just what makes this dropped cork float beneath the surface? Just like a normal cork, it’s buoyancy! But this seemingly straightforward video is hiding a few key elements. Firstly, the cork has been modified; it has a metal sphere inside it so that its effective density is higher than that of water.

Secondly, that liquid is not pure water; notice the hazy swirls near the bottom of the flask when the cork drops in? This is tap water that’s had a layer of salt dissolving in the bottom of it for the last day. That creates a density gradient with denser, salty water at the bottom and lighter, fresh water at the top. In fluid dynamics, we’d say the fluid is stably stratified; “stratified” meaning that there are distinct layers (strata) of different density and “stably” because the heavier ones are at the bottom.

When the cork is dropped in, it settles at the fluid layer that matches its density. Because the surrounding fluid is stably stratified, poking the cork makes it bounce slightly but return to its initial height. Our atmosphere behaves just like this when it’s stably stratified. If you displace a parcel of air, it will oscillate up and down before settling back to equilibrium. In fact, the cork and the air even bounce at the same frequency! (Video and submission credit: F. Croccolo)
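That common bounce frequency is the buoyancy (Brunt–Väisälä) frequency, set entirely by the density gradient. A back-of-the-envelope sketch with illustrative numbers (the salt-water gradient below is assumed, not taken from the video):

```python
import math

def buoyancy_frequency(g, rho0, drho_dz):
    """Brunt-Vaisala frequency N (rad/s) of a stably stratified fluid.
    drho_dz must be negative (density decreasing upward) for stability."""
    return math.sqrt(-(g / rho0) * drho_dz)

# Illustrative stratification: density drops 100 kg/m^3 per metre of height
N = buoyancy_frequency(g=9.81, rho0=1020.0, drho_dz=-100.0)
period = 2 * math.pi / N   # oscillation period of a displaced cork or air parcel
```

With these numbers the cork would bob with a period of a few seconds; any neutrally buoyant object displaced in the same fluid oscillates at the same N, which is why the cork and an air parcel can share a frequency.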

► As the Fog Rolls In
  25 Sep, 2020

Although we talk about fog rolling in, it’s rare for us to have a perspective where we can truly appreciate that flow. But this photograph from Tanmay Sapkal provides just that for the low summer fogs sweeping over Marin, CA. When hot summer temperatures make inland air rise, cold, moist air from the ocean sweeps in to replace it. Once the moisture condenses, it forms thick, low clouds of fog that surge past the Golden Gate Bridge and into San Francisco Bay. (Image credit: T. Sapkal; via NatGeo)

► Why Watering Globes Are Hard to Fill
  24 Sep, 2020

If you’re leaving home for a few days and want to keep your houseplants happy, you may have tried using a watering globe – those glass bulbs with long stems that slowly release water for your plant. And if you have used one, you’ve probably noticed what a pain it can be to fill. Pour water down the neck too quickly and you’ll get splashed by a sheet of water blown back at you.

That splashback happens for the same reason that blowing across the top of a bottle plays an audible note: you’re compressing the air inside the container. When water tries to pour continuously down the watering globe’s neck, it can block the escape path needed by the air already in the globe. The increasing weight of water atop that volume of air compresses it, raising its pressure until it’s eventually high enough that it blows all the water back out the neck and into your face.
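A rough Boyle's-law estimate (illustrative numbers, not from the article) shows how little compression it takes before the trapped air can blow the water back out:

```python
# Isothermal compression of the trapped air: P * V = P0 * V0 (Boyle's law).
# Blowback starts once the overpressure P - P0 exceeds the weight of the
# water column sitting in the neck, rho * g * h.
P0 = 101325.0                   # ambient pressure, Pa
head = 1000.0 * 9.81 * 0.15     # ~0.15 m water column in the neck (assumed), Pa

# P0 * V0 / V - P0 > head  =>  V / V0 < P0 / (P0 + head)
critical_ratio = P0 / (P0 + head)
print(f"Blowback once the air shrinks below {critical_ratio:.3f} of its volume")
```

The ratio comes out around 0.986: squeezing the trapped air by only about 1.4% already builds enough overpressure to eject the water, which is why even a modest pour triggers the splash.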

The best method to ensure that doesn’t happen is to fill the globe slowly. Try tilting it at an angle and letting only a small stream of water fall into it such that there’s always an escape route for the air. (Image and video credit: E. Challita et al.)

► Oil Drops and Filter Feeders
  23 Sep, 2020

Natural oils provide critical nutrients to filter feeders like zooplankton and barnacles. These creatures capture oil droplets on bristle-like appendages such as cilia and setae. But this droplet-catching turns into a disadvantage during petroleum spills, when capturing and ingesting oil can be lethal. A recent study looks at the fluid dynamics of oil droplet capture for these tiny creatures.

The authors found that filter feeders capture a range of droplets regardless of size and oil viscosity. But not all droplets stay attached long enough to get consumed, and the larger a droplet is, the lower the flow velocity necessary to detach it from the animal. That suggests a method of limiting uptake of spilled petroleum into the marine food chain: use surfactants to break up the oil into droplets large enough that they’ll detach from filter feeders before getting eaten. (Image credit: D. Pelusi; research credit: F. Letendre et al.; submitted by Christopher C.)

Symscape

► CFD Simulates Distant Past
  25 Jun, 2019

There is an interesting new trend in using Computational Fluid Dynamics (CFD). Until recently CFD simulation was focused on existing and future things, think flying cars. Now we see CFD being applied to simulate fluid flow in the distant past, think fossils.

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation

read more

► Background on the Caedium v6.0 Release
  31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature

read more

► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)

read more

► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs, mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward direction (illustration only, not part of the study)

read more

► Wind Turbine Design According to Insects
  14 Jun, 2017

One of nature's smallest aerodynamic specialists - insects - has provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (License: CC BY-SA 2.5, André Karwath)

read more

► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2-hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The most obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

read more

CFD Online

► RANS Grid Sensitivity Divergence on LES Grid
  31 Aug, 2020
Reference on not changing y+ while doing a grid sensitivity study:

Quote:
Originally Posted by sbaffini View Post
Indeed, if y+ =4 is relative to the finest grid, it is confirmed to be a wall function problem. I can't double check now, but I'm pretty sure that the k-omega sst model in CFX uses an all y+ wall function, which means that a wall function is always active. While, in theory, such wall functions should be insensitive to the specific y+ value, they are not perfect and your case is very far from the typical wall function scenario (equilibrium boundary layer), so what you obtain is actually expected.

The only viable solution here, and I suggest you investigate it also for your other models, is to redistribute cells in your grid to always stay within y+ = 1-2, but no more. In any case, the important thing is that you can't have y+ changing between the grids when doing a grid refinement.

EDIT: I know, it sucks...
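Keeping y+ fixed across a refinement study means recomputing the first cell height for every grid. A helper sketch (mine, not from the thread) using a common empirical flat-plate skin-friction estimate; for strongly non-equilibrium flows like the one discussed, treat the result as a starting guess only:

```python
import math

def first_cell_height(y_plus, U, rho, mu, L):
    """Estimate the wall-adjacent cell height for a target y+ on a flat plate.
    Uses the empirical correlation Cf ~ 0.026 / Re^(1/7)."""
    nu = mu / rho
    Re = rho * U * L / mu
    cf = 0.026 / Re ** (1.0 / 7.0)      # skin-friction estimate
    tau_w = 0.5 * cf * rho * U ** 2     # wall shear stress
    u_tau = math.sqrt(tau_w / rho)      # friction velocity
    return y_plus * nu / u_tau

# Example: air at 10 m/s over a 1 m plate, targeting y+ = 1
h = first_cell_height(y_plus=1.0, U=10.0, rho=1.225, mu=1.8e-5, L=1.0)
```

For the example conditions the first cell comes out on the order of a few hundredths of a millimetre, which is why wall-resolved grids get expensive fast.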
► Y+ value for Large Eddy Simulation
  31 Aug, 2020
Explanation of Y+ as it relates to viscous sublayer and advection scheme:

Quote:
Originally Posted by cfdnewbie View Post
yes, at least in the viscous sublayer. The size of your grid cell (or the number of points per unit length) determines the smallest scale you can catch on a given grid. From information theory, the Nyquist theorem tells us that we need at least 2 points per wavelength to represent a frequency (we need to be able to detect the sign change). However, 2 points per wavelength is just for Fourier-type approximations. For other schemes, like first-order FV, you need a lot more, maybe 6 to 10, to accurately capture a wavelength. Let's assume that you have the same grid in all of the flow (i.e. high resolution everywhere, no grid stretching or such). Then the smallest scale you can capture is determined by your grid and scheme; the better/finer, the smaller the scale.

Of course, most grids will coarsen away from the wall, so the smallest scale will "grow bigger" away from the wall as well.



Ha, that's the crux of LES :) of course, the bigger y+, the fewer the small scales you will catch, but does that change the result of the bigger scales?

The answer is not straightforward, but I'll try to make it short:

Let's talk about NS-equations (or any non-linear conservation eqns). The scales represented in the equations are coupled by the non-linearity of the equations, i.e. what happens on one scale will (eventually) reach all other scales (also known as the butterfly effect). So the NS eqns represent the full "nature" with all its scales and interactions. We now truncate our "nature" by resolving only the larger scales, since our grid is too coarse.... what will happen? Will the large scales be influenced by the lack of small scales?

Hell, yeah, they will. We are lacking the balancing interaction of the small scales, since we don't have these scales. We are also lacking the physical effects that take place at small scales (dissipation).... so we have production of turbulence at large scales, the energy is handed down through the medium scales but is NOT dissipated at the small scales, since they are simply not present in our computation. Will that influence the large scales? Definitely!

That's why LES people add some type of viscosity (effect of small scales) to their computations, otherwise, their simulations would very likely just blow up!


hope this helps!

cheers
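The points-per-wavelength argument above is easy to check numerically. A small sketch (mine, not from the thread) measuring how well piecewise-linear reconstruction, roughly what a low-order scheme sees, captures a sine wave at a given sampling density:

```python
import numpy as np

def resolution_error(points_per_wavelength, n_waves=1):
    """Max error of piecewise-linear reconstruction of sin(x) sampled at a
    given number of points per wavelength over n_waves full wavelengths."""
    n = points_per_wavelength * n_waves
    xs = np.linspace(0.0, 2.0 * np.pi * n_waves, n + 1)
    fine = np.linspace(0.0, 2.0 * np.pi * n_waves, 10000)
    approx = np.interp(fine, xs, np.sin(xs))   # linear interpolation
    return float(np.max(np.abs(approx - np.sin(fine))))
```

At exactly 2 points per wavelength the samples land on the zero crossings and the reconstructed amplitude is essentially zero (error near 1), while around 10 points per wavelength the error drops below 5%, consistent with the "6 to 10 points" rule of thumb quoted above.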
► RANS
  31 Aug, 2020
Quote:
Originally Posted by vinerm View Post
That's a wrong notion that RANS or EVM models are introduced to get faster results or are expected to be used with a coarse mesh. There is no such assumption behind the development of these models. The only assumption in EVM is that the turbulence is isotropic, and non-EVM RANS models, such as RSM, don't even have that assumption.

And when it comes to wall treatment, it is not directly linked to the turbulence model; even LES requires wall treatment. y^+ is a non-dimensional (Reynolds) number and for almost all industrial fluids, theoretically as well as experimentally, it is found that u^+ = y^+ up to y^+ of 5. And since it is linear within this limit, it does not matter if you have 10 points or just 1 point; the line would be the same. So, y^+ smaller than 1 is overkill and does not help with anything.

The boundary conditions for both k and \varepsilon at the wall are 0.
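The u+ = y+ relation quoted above is the viscous-sublayer half of the standard law of the wall. A sketch of the usual piecewise model (the constants kappa ≈ 0.41 and B ≈ 5.0 are the textbook values; the buffer-layer treatment here is deliberately crude):

```python
import math

def u_plus(y_plus, kappa=0.41, B=5.0):
    """Law of the wall: linear viscous sublayer (u+ = y+) below y+ ~ 5,
    log law u+ = ln(y+)/kappa + B in the log region. The buffer layer is
    crudely approximated by taking whichever branch is smaller."""
    if y_plus <= 0:
        raise ValueError("y+ must be positive")
    if y_plus < 5.0:
        return y_plus                                     # viscous sublayer
    return min(y_plus, math.log(y_plus) / kappa + B)      # log law above
```

Evaluating it shows the point made in the quote: below y+ = 5 the profile is exactly linear, so extra points there add resolution to a straight line.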
► What I've done in the past years and may need someone else to pick it back up
  18 Aug, 2020
This blog post aims to pass on the baton for the work I've done in the past to anyone who wants to pick it back up, partially or completely; work I was still doing (or trying to do) until Hanging my volunteer gloves and moving to a new phase of my life.


This blog post may be edited as time goes on, as I remember things I've done in the past that should be picked up by someone else:
  1. Generating version template pages and logos for said versions at openfoamwiki.net - this is explained here: https://openfoamwiki.net/index.php/F...n_templates.3F and here https://openfoamwiki.net/index.php/F...AM_versions.3F
  2. Writing and testing installation instructions at https://openfoamwiki.net/index.php/Installation/Linux - The objective was to ensure that the less knowledgeable user would still be able to compile+install OpenFOAM from source code with a much higher success rate, than following the succinct instructions available at the official websites.
  3. Updating the release version links at the top right-most corner of openfoamwiki.net
  4. Uh... several other things listed at openfoamwiki.net, mostly listed here: http://openfoamwiki.net/index.php?ti...arget=Wyldckat
  5. Contributing to bug reports and fixes at openfoam.com
  6. Moderator work here at the forum, including:
    1. Hunting down spam, which nowadays is mostly automated, but not fully automated.
    2. Moving threads to the correct sub-forums.
    3. Re-arranging forums to make it easier for people to ask and answer questions, as well as finding existing answers.
    4. Warning forum members when they've not followed the rules...
    5. I wanted to have pruned all of the threads on the main OpenFOAM forum and place them in their correct sub-forums, but never got around to it. There is a thread on the moderator forum that explains how to streamline the process.
    6. I wanted to have finished moving posts into independent threads out of this still large thread: https://www.cfd-online.com/Forums/op...ed-topics.html
    7. Also out of this one: https://www.cfd-online.com/Forums/op...am-extend.html
  7. Had a list of posts/threads I wanted to look into... which is now written on this wiki page on my central repository for these kinds of notes: What I wanted to still have done for the OpenFOAM community, but never managed to find the time for it
  8. And had a list of bugs I wanted to solve: Bugs on OpenFOAM's bug tracker I wanted to tackle, but never managed to find the time for it
  9. I have over 50 repositories at https://github.com/wyldckat - most of them related to OpenFOAM; they will be left as-is for the years to come. If you want to continue working on them or even take over maintenance, open an issue on the respective repository.
► Hanging my volunteer gloves and moving to a new phase of my life
  18 Aug, 2020
TL;DR: As of 2020, I can only help during office hours, at work, if it is paid and/or affects our projects, namely what we use in OpenFOAM itself and blueCFD-Core.

Full post:
So nearly 2 years after my blog post Why I contribute to the OpenFOAM forum(s), wiki(s) and the public community, I'm writing this blog post you are reading now.

My last 3 thread posts at the CFD Online forums this year were on May 7th, February 27th and January 20th. Before that, it was 10 posts over my winter vacation in the last week of 2019. Before that, it averaged out to around 1 post/month. I have 10,956 posts here at the forum, which still averages out to 2.62 posts/day.

I'm currently on vacation (mid-August 2020) and am writing this since I'm unable to help the way I used to in the past.


So what happened?
In short: borderline burnout + ~30 kg overweight.

In other words, I was still able to work, but I was having difficulty maintaining a stable life, which hadn't been healthy for years, and I was overly stressed, even when there was not much reason to be stressed...


What am I doing now, since early 2020?
  1. Changed my diet, namely changed my eating regimen to something I should have adopted over 20 years ago.
  2. Increased my physical activity to a much healthier dosage.
  3. Am moving on with my life to a new phase where I actually have to behave as a grown-up, especially given that I'm already 40 years old as I write this.

What does this mean for what I can do to help in the community?
Given my past efforts over a period of 10 years, I'm writing this blog post as an official stance on how much I will be able to help in the future:
  1. The majority (~99.9%) of my public contributions will be done within working hours at my job; in other words, during office hours, at work, if paid and/or affecting our projects, namely what we use in OpenFOAM itself and blueCFD-Core.
  2. The remaining 0.1% outside of my job will mostly be the bug tracker at openfoam.org, given that I can't be at both openfoam.org and openfoam.com :(
  3. Everything else where I've helped in the past will happen once in a blue moon, be it at the forum or openfoamwiki.net.
  4. I don't know how many or which community/official OpenFOAM workshops I will attend in the future. I already had to give up on the Iberian User Workshop of 2018 due to health reasons, i.e. what finally led me to this decision in 2020.
This has been gradually occurring since at least 2015, but it has effectively come to this stopping point.


What do I ask of you, as you read this blog post?

Associated to this blog post, I'm writing another blog post which I may need to update in the near future: What I've done in the past years and may need someone else to pick it back up
edit: Aiming to wrap up writing said blog post by the end of the 19th of August 2020.

Signing off for now:
Some years ago, in a forum post where someone asked a vague question, I went on a rant along the lines of: "as people grow older, the more they know and the more responsibilities they have, therefore the less free time they have to come and help here... so the less information you provide, the less likely you are to get the answer you need".

In a way, my time has come and I need to move on with my life. But I was stressing out too much to notice it sooner. Fortunately, I should still be in time to keep going forward and hopefully be able to help the community more in the future.

This has happened to various authors of code that is currently or was previously in OpenFOAM: they helped people publicly over several years and ended up having to pull away from the community, because it's not easy to balance life and working as a volunteer.


Fun fact:
Even if I don't post in the next 20 years it would still give me a rate of 1 post/month... :cool::rolleyes:
► 10 crucial parameters to check before committing to a CFD software for academia
    4 Aug, 2020
I have put together a comprehensive list of 10 crucial parameters that you, as a researcher or a teacher, should check with the CFD software provider before committing to their software.

https://www.linkedin.com/pulse/10-cr...CEqwYpmA%3D%3D

curiosityFluids

► Creating curves in blockMesh (An Example)
  29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

y = H\sin\left(\pi x\right)

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
    (-1 0 0)    // 0
    (0 0 0)     // 1
    (1 0 0)     // 2
    (2 0 0)     // 3
    (-1 2 0)    // 4
    (0 2 0)     // 5
    (1 2 0)     // 6
    (2 2 0)     // 7

    (-1 0 1)    // 8    
    (0 0 1)     // 9
    (1 0 1)     // 10
    (2 0 1)     // 11
    (-1 2 1)    // 12
    (0 2 1)     // 13
    (1 2 1)     // 14
    (2 2 1)     // 15
);

blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

edges
(
);

boundary
(
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    lowerWall
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    upperWall
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub-dictionary that is just a list of edges with their interpolation points:

edges
(
        polyLine 1 2
        (
                (0	0       0)
                (0.1	0.0309016994    0)
                (0.2	0.0587785252    0)
                (0.3	0.0809016994    0)
                (0.4	0.0951056516    0)
                (0.5	0.1     0)
                (0.6	0.0951056516    0)
                (0.7	0.0809016994    0)
                (0.8	0.0587785252    0)
                (0.9	0.0309016994    0)
                (1	0       0)
        )

        polyLine 9 10
        (
                (0	0       1)
                (0.1	0.0309016994    1)
                (0.2	0.0587785252    1)
                (0.3	0.0809016994    1)
                (0.4	0.0951056516    1)
                (0.5	0.1     1)
                (0.6	0.0951056516    1)
                (0.7	0.0809016994    1)
                (0.8	0.0587785252    1)
                (0.9	0.0309016994    1)
                (1	0       1)
        )
);

The sub-dictionary above is just a list of points on the curve y=H\sin(\pi x). The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method could be spline.
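Typing those interpolation points by hand is error-prone; a short Python sketch that generates them instead (H = 0.1 is inferred from the values in the dict above; the formatting mimics the blockMeshDict layout):

```python
import math

H = 0.1   # bump height, inferred from the point list above

def polyline_points(z, n=11):
    """Print n points on y = H*sin(pi*x), x in [0, 1],
    ready to paste into a blockMesh polyLine edge."""
    for i in range(n):
        x = i / (n - 1)
        y = H * math.sin(math.pi * x)
        print(f"                ({x:g}\t{y:.10g}    {z:g})")

polyline_points(z=0)   # edge between vertices 1 and 2
polyline_points(z=1)   # edge between vertices 9 and 10
```

Raising n gives a smoother polyLine without any extra effort, and switching the formula swaps in a different bump shape.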

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!

Cheers.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
  28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, shadowgraph has no directionality and shows you the Laplacian of the refractive index field (or density field).

In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

So how do we create these images in paraview?

Well as you might expect, from the introduction, we simply do this by visualizing the gradients of the density field.

In ParaView the necessary tool for this is:

Gradient of Unstructured DataSet:

Finding “Gradient of Unstructured DataSet” using the Filters-> Search

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

Change the “Scalar Array” Drop down to the density field (rho), and change the name to Synthetic Schlieren

To do this, simply set the “Scalar Array” to the density field (rho), and change the result array name to SyntheticSchlieren. Now you should see something like this:

This is NOT a synthetic Schlieren Image – but it sure looks nice

There are a few problems with the above image: (1) Schlieren images are directional and this is a magnitude; (2) Schlieren and Shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, Cold and Hot, Black-Body Radiation, and Rainbow Desaturated all look pretty amazing.

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

Horizontal Knife Edge

Vertical Knife Edge

Now how about ShadowGraph?

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha no big deal. Just remember the basic vector calculus identity:

\nabla^2 = \nabla \cdot \nabla

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!
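Outside ParaView, the same two steps are only a few lines of NumPy: one gradient call for the Schlieren components, then the divergence of that gradient for the shadowgraph. A sketch on a uniform grid (the density field below is a made-up stand-in for real CFD data):

```python
import numpy as np

# Stand-in density field on a uniform grid; replace rho with your own data
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(0.0, 1.0, 100)
X, Y = np.meshgrid(x, y, indexing="ij")
rho = 1.0 + 0.5 * np.tanh(40.0 * (Y - 0.4 - 0.3 * X))   # a smeared "shock"

dx, dy = x[1] - x[0], y[1] - y[0]

# Synthetic Schlieren: pick one component for a given knife-edge direction
drho_dx, drho_dy = np.gradient(rho, dx, dy)

# Shadowgraph: Laplacian of rho, computed as the divergence of its gradient
d2_dx, _ = np.gradient(drho_dx, dx, dy)
_, d2_dy = np.gradient(drho_dy, dx, dy)
shadowgraph = d2_dx + d2_dy
```

Plotting drho_dy with a grayscale colormap mimics a horizontal knife edge; plotting shadowgraph gives the knife-edge-free second-derivative image.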

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, Deselect “Compute Gradient” and the select “Compute Divergence” and change the Divergence array name to Shadowgraph.

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Shadowgraph Image

So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these mean: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
  24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/

The law is given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}
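As a quick sanity check of this simplified form, you can evaluate it with the air coefficients commonly seen in OpenFOAM tutorial cases (As ≈ 1.4792e-06, Ts = 116 K; treat these as illustrative values and verify them against your own installation):

```python
# Simplified Sutherland law: mu = As * T^(3/2) / (T + Ts)
As, Ts = 1.4792e-06, 116.0  # air values often quoted in OpenFOAM tutorials (illustrative)
T = 300.0                   # temperature in K
mu = As * T**1.5 / (T + Ts)
print(mu)  # ~1.85e-5 Pa.s, close to tabulated air viscosity at 300 K
```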

In order to use these equations, obviously, you need to know the coefficients. Here, I’m going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.

So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find, and even if you do find them, they can be hard to reference and you may not know how accurate they are. Second, creating your own Sutherland coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and, not only that, you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of nitrogen (N2) and we can’t find the coefficients anywhere – or, for the second reason above, you’ve decided it’s best to create your own.

By far the simplest way to achieve this is using Python and the scipy.optimize package.

Step 1: Get Data

The first step is to find some well-known, and easily cited, source for viscosity data. I usually use the NIST WebBook (https://webbook.nist.gov/), but occasionally the temperatures there aren’t high enough, so you could also pull the data out of a publication somewhere. Here I’ll use the following data from NIST:

Temperature (K)    Viscosity (Pa.s)
200                0.000012924
400                0.000022217
600                0.000029602
800                0.000035932
1000               0.000041597
1200               0.000046812
1400               0.000051704
1600               0.000056357
1800               0.000060829
2000               0.000065162

This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in these ranges viscosity should be only temperature dependent.)

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but Python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the Sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next we input the data:

T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]

mu=[0.000012924, 0.000022217, 0.000029602, 0.000035932, 0.000041597,
    0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]

Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least squares minimization to solve for the unknown coefficients. The output variable popt is an array that contains our desired variables As and Ts.

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]

Now we can just output our data to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]

mu=[0.000012924, 0.000022217, 0.000029602, 0.000035932, 0.000041597,
    0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!
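Since the whole point is being able to quote an error, here is a short follow-up sketch that evaluates the fit against the NIST data, using the coefficients reported above:

```python
import numpy as np

As, Ts = 1.55902e-6, 168.766  # Sutherland coefficients fitted above

T = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000], dtype=float)
mu = np.array([0.000012924, 0.000022217, 0.000029602, 0.000035932, 0.000041597,
               0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162])

# Evaluate the fitted model and its relative error at each data point
mu_fit = As * T**1.5 / (T + Ts)
rel_err = np.abs(mu_fit - mu) / mu

for Ti, e in zip(T, rel_err):
    print(f"T = {Ti:6.0f} K   relative error = {100*e:5.2f} %")
print(f"max relative error: {100*rel_err.max():.2f} %")
```

Note that a plain least-squares fit weights the absolute residuals, so the largest relative error lands at the low-temperature end of the range.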

Summary

In this post, we looked at how we can take a database of viscosity-temperature data and use the Python package scipy to solve for our unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, and the data was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.

This task could also easily be accomplished using the MATLAB curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
  23 Apr, 2019

The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is no steeper than that of any other software.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is just as likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but it’s not to some. Troubleshooting bad simulation results, or unstable simulations that crash, is impossible if you don’t have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by
F. Moukalled, L. Mangani, and M. Darwish

(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method by H. K. Versteeg and W. Malalasekera

(c) Computational Fluid Dynamics: The Basics with Applications by John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making – you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence, and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch – and you’ll probably make a bunch of mistakes, forget key variable entries etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion, but I have found it to be true. Yes, it’s true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can’t really speak to how well they work – mostly because I’ve never bothered. Once you unlock the power of Linux, the only reason to use Windows is Microsoft Office (I guess unless you’re a gamer – and even then, more and more games are now on Linux). Not only that, but the VAST majority of the forums and troubleshooting associated with OpenFOAM that you’ll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks with your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails – fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source for how schemes should be manipulated based on mesh non-orthogonality is:

http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf
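To give a concrete flavor of what that guide recommends (the exact keywords and blend factors below are illustrative, not a prescription – check the guide and your OpenFOAM version), a non-orthogonality-tolerant fvSchemes setup often ends up looking something like:

```
gradSchemes
{
    default         cellLimited Gauss linear 1;
}

laplacianSchemes
{
    // "limited 0.333" blends corrected/uncorrected surface-normal gradients
    // to tolerate mesh non-orthogonality at some cost in accuracy
    default         Gauss linear limited 0.333;
}

snGradSchemes
{
    default         limited 0.333;
}
```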

(8) CFL Number Matters

If you are running a transient case, the Courant-Friedrichs-Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time-step is too large you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the timestep by a factor of 2. More often than not, this solves the problem.
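That halving heuristic is easy to automate. As a minimal sketch (assuming a uniform mesh spacing and a known peak velocity, both hypothetical inputs), the largest time step for a target Courant number is just:

```python
def max_stable_dt(u_max, dx, co_target=0.5):
    """Time step that keeps the convective Courant number Co = u*dt/dx at co_target."""
    return co_target * dx / u_max

# e.g. 100 m/s peak velocity on a 1 mm mesh, targeting Co = 0.5
dt = max_stable_dt(100.0, 1e-3, 0.5)
print(dt)  # 5e-06 s
```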

For large time stepping, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam

For the record, this point falls under point (1), Understand CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

https://wiki.openfoam.com/%223_weeks%22_series

If you are a graduate student, and have no job to do other than learn OpenFOAM, it will not take 3 weeks. The series touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude from the get-go that they should be using a different software. They think somehow Open-Source means that it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open source package. The number of OpenFOAM citations has grown every year consistently (
https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern, and changing software is more trouble than it’s worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren’t good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly – and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/) most things can be accomplished in OpenFOAM, and there are enough third party meshing programs out there that you should have no problem.

Summary

Basically, if you are starting out in CFD or OpenFOAM, you need to put in time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that’s a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
  22 Apr, 2019
Airfoil Mesh Generated with curiosityFluidsAirfoilMesher.py

Here I will present something I’ve been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you’re like me, (who knows if you are) I simulate a lot of airfoils. Partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain. Especially if you are starting from scratch.

The main ways that I have meshed airfoils to date have been:

(a) Mesh it in a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise – oh how I miss it.

But getting the mesh to look good was always sort of tedious. So I attempted to come up with a Python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(e) be mostly automatic (few user inputs)
(f) have good mesh quality – pass all checkMesh tests
(g) Quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(h) be able to do both closed and open trailing edges
(i) be able to handle most airfoils (up to high cambers)
(j) automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) thru (g). Presently, it can only handle airfoils with closed trailing edges. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!

Hopefully, this is useful to some of you out there!

Download

You can download the script here:

https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.

Instructions

(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh

PS: You need to run this with Python 3, and you need to have numpy installed.

Inputs

The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil dat file should have a chordlength of 1. This variable allows you to scale the domain to a different size.

airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge

TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading and can help improve mesh uniformity

trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
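For reference, the flat-plate estimate behind a typical y+ calculator can be sketched in a few lines (the correlation choice and the free-stream numbers below are illustrative assumptions, not taken from the script):

```python
import math

def first_layer_height(y_plus, u_inf, rho, mu, length):
    """Flat-plate estimate of the first cell height for a target y+."""
    re = rho * u_inf * length / mu           # Reynolds number based on chord
    cf = 0.026 / re**(1.0 / 7.0)             # Schlichting-type skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf**2        # wall shear stress
    u_tau = math.sqrt(tau_w / rho)           # friction velocity
    return y_plus * mu / (rho * u_tau)       # wall-normal height for target y+

# Air at sea level over a 1 m chord at 30 m/s, targeting y+ = 1
h = first_layer_height(1.0, 30.0, 1.225, 1.8e-5, 1.0)
print(h)  # on the order of 1e-5 m
```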

Examples

12% Joukowski Airfoil

Inputs:

With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in paraView:

Clark-y Airfoil

The clark-y has some camber, so I thought it would be a logical next test to the previous symmetric one. The inputs I used are basically the same as the previous airfoil:


With these inputs, the result looks like this:


Mesh Quality:


Visualizing the mesh quality:

MH60 – Flying Wing Airfoil

Here is an example of a flying wing airfoil (tested since the trailing edge is tilted upwards).

Inputs:


Again, these are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change the MaxCellSize, firstLayerHeight, and grading, some modification may be required. However, if you just halve the MaxCellSize and halve the firstLayerHeight, you “should” get a similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality

Summary

Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will have the ability to handle highly cambered airfoils, and open trailing edges, as well as control surface hinges etc.

The long term goal will be an automatic mesher with an H-grid in the spanwise direction so that the readers of my blog can easily create semi-span wing models extremely quickly!

Comments and bug reporting encouraged!

DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify how you wish. But I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via http://www.openfoam.com, and owner of the OPENFOAM®  and OpenCFD®  trademarks.

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
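The relations behind such a calculator are easy to code up yourself; here is a minimal sketch for a calorically perfect gas (this is my own illustration, not the calculator's source):

```python
def normal_shock(m1, gamma=1.4):
    """Post-shock Mach number and static ratios across a normal shock (perfect gas)."""
    m2 = ((1 + 0.5 * (gamma - 1) * m1**2) /
          (gamma * m1**2 - 0.5 * (gamma - 1))) ** 0.5
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (m1**2 - 1)          # p2/p1
    rho_ratio = (gamma + 1) * m1**2 / ((gamma - 1) * m1**2 + 2)  # rho2/rho1
    t_ratio = p_ratio / rho_ratio                                # T2/T1
    return m2, p_ratio, rho_ratio, t_ratio

# Classic textbook check at M1 = 2
print(normal_shock(2.0))  # M2 ≈ 0.577, p2/p1 = 4.5, rho2/rho1 ≈ 2.667, T2/T1 = 1.6875
```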

If you found this useful, and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions at www.stfsol.com for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or suitability, or outcome for any given purpose.

Hanley Innovations top

► Accurate Aircraft Performance Predictions using Stallion 3D
  26 Feb, 2020


Stallion 3D uses your CAD design to simulate the performance of your aircraft.  This enables you to verify your design and compute quantities such as cruise speed, power required and range at a given cruise altitude. Stallion 3D is used to optimize the design before moving forward with building and testing prototypes.

The table below shows the results of Stallion 3D around the cruise angles of attack of the Cessna 402c aircraft.  The CAD design can be obtained from the OpenVSP hangar.


The results were obtained by simulating 5 angles of attack in Stallion 3D on an ordinary laptop computer running MS Windows 10. Given the aircraft geometry and flight conditions, Stallion 3D computed the CL, CD, L/D and other aerodynamic quantities. With these accurate aerodynamic results, preliminary performance data such as cruise speed, power, range and endurance can be obtained.

Lift Coefficient versus Angle of Attack computed with Stallion 3D


Lift to Drag Ratio versus True Airspeed at 10,000 feet


Power Required versus True Airspeed at 10,000 feet

The Stallion 3D results show good agreement with the published data for the Cessna 402. For example, the cruise speed of the aircraft at 10,000 feet is around 140 knots. This coincides with the speed at the maximum L/D (best range) shown in the graph and table above.

 More information about Stallion 3D can be found at the following link.
http://www.hanleyinnovations.com/stallion3d.html

About Hanley Innovations
Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit > http://www.hanleyinnovations.com


► 5 Tips For Excellent Aerodynamic Analysis and Design
    8 Feb, 2020
Stallion 3D analysis of Uber Elevate eCRM-100 model

Being the best aerodynamics engineer requires meticulous planning and execution. Here are 5 steps you can follow to start your journey to becoming one of the best aerodynamicists.

1.  Airfoils analysis (VisualFoil) - the wing will not be better than the airfoil. Start with the best airfoil for the design.

2.  Wing analysis (3Dfoil) - know the benefits/limits of taper, geometric & aerodynamic twist, dihedral angles, sweep, induced drag and aspect ratio.

3. Stability analysis (3Dfoil) - longitudinal & lateral static & dynamic stability analysis.  If the airplane is not stable, it might not fly (well).

4. High Lift (MultiElement Airfoils) - airfoil arrangements can do wonders for takeoff, climb, cruise and landing.

5. Analyze the whole arrangement (Stallion 3D) - this is the best information you will get until you flight test the design.

About Hanley Innovations
Hanley Innovations is a pioneer in developing user friendly and accurate software that is accessible to engineers, designers and students.  For more information, please visit > http://www.hanleyinnovations.com

► Accurate Aerodynamics with Stallion 3D
  17 Aug, 2019

Stallion 3D is an extremely versatile tool for 3D aerodynamics simulations.  The software solves the 3D compressible Navier-Stokes equations using novel algorithms for grid generation, flow solutions and turbulence modeling. 


The proprietary grid generation and immersed boundary methods find objects arbitrarily placed in the flow field and then automatically place an accurate grid around them without user intervention. 


Stallion 3D algorithms are fine-tuned to analyze inviscid flow with minimal losses. The above figure shows the surface pressure of the BD-5 aircraft (obtained from the OpenVSP hangar) using the compressible Euler algorithm.


Stallion 3D solves the Reynolds Averaged Navier-Stokes (RANS) equations using a proprietary implementation of the k-epsilon turbulence model in conjunction with an accurate wall function approach.


Stallion 3D can be used to solve problems in aerodynamics about complex geometries in subsonic, transonic and supersonic flows.  The software computes and displays the lift, drag and moments for complex geometries in the STL file format.  Actuator discs (up to 100) can be added to simulate prop wash for propeller and VTOL/eVTOL aircraft analysis.



Stallion 3D is a versatile and easy-to-use software package for aerodynamic analysis.  It can be used for computing performance and stability (both static and dynamic) of aerial vehicles including drones, eVTOL aircraft, light airplanes and dragons (above graphics via Thingiverse).

More information about Stallion 3D can be found at:



► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse


Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only the STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator disc (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2-d plots of Cp and other quantities along constant coordinates line along the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit http://www.hanleyinnovations.com/stallion3d.html or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017


Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.




More information about the software can be found at the following url:
http://www.hanleyinnovations.com/airfoildigitizerhelp.html

Thanks for reading.


► Your In-House CFD Capability
  15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company just at the push of a button?  Stallion 3D gives you this very power using your MS Windows laptop or desktop computers. The software provides accurate CL, CD, & CM numbers directly from CAD geometries without the need for user grid generation and costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC.  It utilizes the power that is hidden in your personal computer (64-bit & multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, RANS) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of a turbulent flow field.

Unlike other CFD software that requires you to purchase grid generation software (and spend days generating a grid), Stallion 3D includes automatic grid generation.  Results are often obtained within a few hours after opening the software.

Do you need to analyze upwind and downwind sails?  Do you need data for wings and ship stabilizers at 10, 40, 80, 120 degrees angles and beyond? Do you need accurate lift, drag & temperature predictions at subsonic, transonic and supersonic flows? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

Tutorials, videos and more information about Stallion 3D version 4.0 can be found at:
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. degree in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

CFD and others... top

► Facts, Myths and Alternative Facts at an Important Juncture
  21 Jun, 2020
We live in an extraordinary time in modern human history. A global pandemic did the unthinkable to billions of people: a nearly total lock-down for months.  Like many universities in the world, KU has closed its doors to students since early March of 2020, and all courses have been offered online.

Millions watched in horror when George Floyd was murdered, and when a 75 year old man was shoved to the ground and started bleeding from the back of his skull...

Meanwhile, Trump and his allies routinely ignore facts, fabricate alternative facts, and advocate often-debunked conspiracy theories to push his agenda. The political system designed by the founding fathers is assaulted from all directions. The rule of law and the free press are attacked on a daily basis. One often wonders how we managed to get to this point, and if the political system can survive the constant sabotage...It appears the struggle between facts, myths and alternative facts hangs in the balance.

In any scientific discipline, conclusions are drawn, and decisions are made based on verifiable facts. Of course, we are humans, and honest mistakes can be made. There are others, who push alternative facts or misinformation with ulterior motives. Unfortunately, mistaken conclusions and wrong beliefs are sometimes followed widely and become accepted myths. Fortunately, we can always use verifiable scientific facts to debunk them.

There have been many myths in CFD, and quite a few have been rebutted. Some have continued to persist. I'd like to refute several in this blog. I understand some of the topics can be very controversial, but I welcome fact-based debate.

Myth No. 1 - My LES/DNS solution has no numerical dissipation because a central-difference scheme is used.

A central finite difference scheme is indeed free of numerical dissipation in space. However, the time integration scheme inevitably introduces both numerical dissipation and dispersion. Since DNS/LES is unsteady in nature, the solution is not free of numerical dissipation.  

Myth No. 2 - You should use non-dissipative schemes in LES/DNS because upwind schemes have too much numerical dissipation.

It sounds reasonable, but it is far from true. We all agree that fully upwind schemes (the stencil shown in Figure 1) are bad. Upwind-biased schemes, on the other hand, are not necessarily bad at all. In fact, in a numerical test with the Burgers equation [1], the upwind-biased scheme performed better than the central difference scheme because of its smaller dispersion error. In addition, the numerical dissipation in the upwind-biased scheme makes the simulation more robust, since under-resolved high-frequency waves are naturally damped.
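The dissipation and dispersion of such stencils can be quantified by a standard modified-wavenumber analysis of linear advection. The sketch below is my own illustration (not the test from [1]): it compares a 2nd-order central stencil with a 3rd-order upwind-biased one. The imaginary part of the modified wavenumber measures dissipation, and the deviation of the real part from the exact value measures dispersion.

```python
import numpy as np

def modified_wavenumber(coeffs, theta):
    """Modified wavenumber k*dx for du/dx ~ (1/dx) * sum_j a_j u_{i+j}.

    coeffs: dict mapping stencil offset j -> coefficient a_j.
    Real part -> dispersion; imaginary part -> dissipation
    (a negative imaginary part damps the wave)."""
    s = sum(a * np.exp(1j * j * theta) for j, a in coeffs.items())
    return s / 1j  # since (d/dx) e^{ikx} = ik e^{ikx}

# 2nd-order central and 3rd-order upwind-biased stencils for du/dx
central2 = {-1: -0.5, 1: 0.5}
upwind_biased3 = {-2: 1/6, -1: -1.0, 0: 0.5, 1: 1/3}

theta = np.linspace(0.01, np.pi, 200)   # scaled wavenumber k*dx
kc = modified_wavenumber(central2, theta)
ku = modified_wavenumber(upwind_biased3, theta)

# Central: zero imaginary part -> no spatial dissipation at any wavenumber.
# Upwind-biased: strictly negative imaginary part -> damps (mostly the
# poorly resolved high) wavenumbers.
print("max |Im(k*dx)| central :", np.max(np.abs(kc.imag)))
print("min Im(k*dx) upw-biased:", np.min(ku.imag))
```

Plotting the real parts against the exact line k*dx = theta would show the dispersion advantage of the upwind-biased stencil discussed above.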

Figure 1. Various discretization stencils for the red point
The Riemann solver used in the DG/FR/CPR scheme also introduces a small amount of dissipation. However, because of its small dispersion error, it outperforms the central difference and upwind-biased schemes. This study shows that dissipation and dispersion characteristics are equally important. Higher-order schemes clearly perform better than a low-order non-dissipative central difference scheme.

Myth No. 3 - The Smagorinsky model is a physics-based sub-grid-scale (SGS) model.

There have been numerous studies based on experimental or DNS data showing that the SGS stress produced with the Smagorinsky model does not correlate with the true SGS stress. The role of the model is then to add numerical dissipation to stabilize the simulations. The model coefficient is usually determined by matching a certain turbulent energy spectrum. This fact suggests that the model is purely numerical in nature, but calibrated for certain numerical schemes using a particular turbulent energy spectrum. This calibration is not universal, as many simulations have produced worse results with the model.
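For reference, the model itself is only a few lines. The sketch below is a minimal illustration of the eddy-viscosity computation; the function name is mine, and the coefficient value 0.17 is a commonly quoted default that is, as argued above, tuned in practice rather than universal:

```python
import numpy as np

def smagorinsky_nut(grad_u, delta, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 * |S|, where
    |S| = sqrt(2 S_ij S_ij) and S is the resolved rate-of-strain tensor.

    grad_u: 3x3 resolved velocity gradient tensor du_i/dx_j.
    delta:  filter width (often the local grid size).
    cs:     model coefficient; calibrated, not universal."""
    S = 0.5 * (grad_u + grad_u.T)           # rate-of-strain tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))    # |S|
    return (cs * delta) ** 2 * S_mag

# Illustrative gradient: simple shear du/dy = 10 1/s, 1 mm filter width
grad_u = np.array([[0.0, 10.0, 0.0],
                   [0.0,  0.0, 0.0],
                   [0.0,  0.0, 0.0]])
nut = smagorinsky_nut(grad_u, delta=1e-3)
print(f"nu_t = {nut:.3e} m^2/s")
```

The modeled SGS stress (deviatoric part) is then -2 nu_t S_ij, which is exactly an added viscous, i.e., dissipative, term.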

► What Happens When You Run a LES on a RANS Mesh?
  27 Dec, 2019

Surely, you will get garbage because there is no way your LES will have any chance of resolving the turbulent boundary layer. As a result, your skin friction will be way off. Therefore, your drag and lift will be a total disaster.

To actually demonstrate this point of view, we recently embarked upon a numerical experiment to run an implicit large eddy simulation (ILES) of the NASA CRM high-lift configuration from the 3rd AIAA High-Lift Prediction Workshop. The flow conditions are: Mach = 0.2, Reynolds number = 3.26 million based on the mean aerodynamic chord, and the angle of attack = 16 degrees.

A quadratic (Q2) mesh was generated by Dr. Steve Karman of Pointwise, and is shown in Figure 1.

 Figure 1. Quadratic mesh for the NASA CRM high-lift configuration (generated by Pointwise)

The mesh has roughly 2.2 million mixed elements, and is highly clustered near the wall with an average equivalent y+ value smaller than one. A p-refinement study was conducted to assess the mesh sensitivity using our high-order LES tool based on the FR/CPR method, hpMusic. Simulations were performed with solution polynomial degrees of p = 1, 2 and 3, corresponding to 2nd, 3rd and 4th order accuracy, respectively. No wall-model was used. Needless to say, the higher order simulations captured finer turbulence scales, as shown in Figure 2, which displays the iso-surfaces of the Q-criterion colored by the Mach number.

p = 1

p = 2

p = 3
Figure 2. Iso-surfaces of the Q-criterion colored by the Mach number
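For readers unfamiliar with the quantity plotted in Figure 2: the Q-criterion is half the difference between the squared norms of the rotation and strain-rate tensors, so positive iso-surfaces mark vortex-dominated regions. A minimal sketch (my own illustration, not code from hpMusic):

```python
import numpy as np

def q_criterion(grad_u):
    """Q = 0.5 * (||Omega||^2 - ||S||^2), where S and Omega are the
    symmetric and antisymmetric parts of the velocity gradient tensor.
    Q > 0 marks regions where rotation dominates strain (vortex cores)."""
    S = 0.5 * (grad_u + grad_u.T)
    Omega = 0.5 * (grad_u - grad_u.T)
    return 0.5 * (np.sum(Omega * Omega) - np.sum(S * S))

# Solid-body rotation (pure vorticity) -> Q > 0
rotation = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 0.0]])
# Pure shear: strain and rotation exactly balance -> Q = 0
shear = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
print(q_criterion(rotation), q_criterion(shear))
```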

Clearly the flow is mostly laminar on the pressure side, and transitional/turbulent on the suction side of the main wing and the flap. Although the p = 1 simulation captured the least scales, it still correctly identified the laminar and turbulent regions. 

The drag and lift coefficients from the present p-refinement study are compared with experimental data from NASA in Table I. Although the 2nd order (p = 1) results are quite different from those of the higher orders, the 3rd and 4th order results are very close, demonstrating very good p-convergence in both the lift and drag coefficients. The lift agrees better with the experimental data than the drag, bearing in mind that the experiment includes wind tunnel wall effects and other small instruments that are not present in the computational model.

Table I. Comparison of lift and drag coefficients with experimental data

              CL       CD
p = 1        2.020    0.293
p = 2        2.411    0.282
p = 3        2.413    0.283
Experiment   2.479    0.252

This exercise seems to contradict the common sense logic stated in the beginning of this blog. So what happened? The answer is that in this high-lift configuration, the dominant force is due to pressure, rather than friction. In fact, 98.65% of the drag and 99.98% of the lift are due to the pressure force. For such flow problems, running a LES on a RANS mesh (with sufficient accuracy) may produce reasonable predictions in drag and lift. More studies are needed to draw any definite conclusion. We would like to hear from you if you have done something similar.

This study will be presented at the forthcoming AIAA SciTech conference, to be held January 6-10, 2020 in Orlando, Florida.


► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but be dissipative for non-resolvable high wave numbers. In this way, the simulation will resolve a wide turbulent spectrum, while damping out the non-resolvable small eddies to prevent energy pile-up, which can cause the simulation to diverge.

We want to emphasize the equal importance of both numerical dissipation and dispersion, which can be generated from both the space and time discretizations. It is well-known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., explicit Runge-Kutta schemes.     

We recently analyzed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, with the same time integration approach (a Runge-Kutta scheme) and the same time step. The FCD schemes have an 8th order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees-of-freedom (DOFs) in Figure 1. The initial condition is a Gaussian profile, and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the differences.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error from the central FD scheme caused it to miss the peak, and also generate large errors elsewhere.

Finally, simulation results with the viscous Burgers' equation are shown in Figure 2, which compares the energy spectrum computed with various schemes against that of the direct numerical simulation (DNS).

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme, with a significant high-wave-number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. It is obvious that the upwind-biased FD scheme outperformed the central FD scheme, since it resolved the same range of wave numbers without the energy pile-up.
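Spectra like those in Figure 2 are obtained by Fourier-transforming the solution field. A minimal 1D sketch of such an energy-spectrum computation follows; the normalization convention (sum of E equals half the mean-square velocity) is one common choice, not necessarily the one used in our study, and for brevity the Nyquist bin is doubled along with the other non-zero wavenumbers:

```python
import numpy as np

def energy_spectrum(u):
    """One-dimensional kinetic energy spectrum E(k) = 0.5*|u_hat(k)|^2
    of a real periodic field, normalized so that sum(E) = 0.5*mean(u^2)."""
    n = len(u)
    u_hat = np.fft.rfft(u) / n
    E = 0.5 * np.abs(u_hat) ** 2
    E[1:] *= 2.0  # fold in conjugate-symmetric negative wavenumbers
    return E

# Single Fourier mode: all energy should land in one spectral bin
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.cos(4.0 * x)
E = energy_spectrum(u)
print("peak wavenumber:", np.argmax(E))
```

A high-wave-number pile-up, as produced by the central FD scheme, shows up as E(k) flattening or rising near the grid cutoff instead of decaying.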


► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low-order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease in mesh generation, and load balancing on parallel architectures. DG and related methods such as SD and FR/CPR have received much attention because of their geometric flexibility and scalability. They have matured to become quite robust for a wide range of applications. 

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes at a transonic condition with a Reynolds number of more than 1M. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs), and costs about 1/3 the CPU time of the 2nd order simulation with the commercial solver, which has 28.7M DOFs. Furthermore, the 3rd order simulation is much more accurate, as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster to achieve a similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try for your problems, and you may be surprised. Academic and trial uses are completely free. Just visit hocfd.com to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
  17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to compute turbulence: 1) the Reynolds averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the Direct Numerical Simulations (DNS) approach, in which all scales are resolved; 3) the Large Eddy Simulation (LES) approach, in which large scales are computed, while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved widespread success in engineering design, some applications call for LES, e.g., flows at high angles of attack. The spatial filtering of a non-linear PDE results in a SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress to the rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a rate that is physically correct. Later an improved version, the dynamic Smagorinsky model, was developed by Germano et al., and demonstrated much better results.

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme does not contain numerical dissipation. However, time integration can introduce dissipation. For example, a 2nd order central difference scheme is linearly stable with the SSP RK3 scheme (subject to a CFL condition), and the combination does contain numerical dissipation. Even so, when this scheme is used to perform a LES, the simulation will blow up without a SGS model because of a lack of dissipation for eddies at high wave numbers. It is easy to conclude that the LES succeeds because the SGS stress is properly modeled. A recent study with the Burgers' equation strongly disputes this conclusion. It was shown that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. Therefore, the role of the SGS model, in the above scenario, was to stabilize the simulation by adding numerical dissipation.

For numerical methods which have natural dissipation at high wave numbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, the SGS model can damage the solution quality because this extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature to support the use of implicit LES (ILES), in which the SGS stress simply vanishes. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
    3 Jan, 2017
2016 is undoubtedly the most extraordinary year for small-odds events. Take sports, for example:
  • Leicester won the Premier League in England defying odds of 5000 to 1
  • The Cubs won the World Series after a 108-year wait
In politics, I do not believe many people truly believed Britain would exit the EU, and Trump would become the next US president.

On a personal level, I also experienced an equally extraordinary event: the attempted coup in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016 in the historic city of Istanbul. A terror attack on the Istanbul International airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and decided to go to Istanbul to attend and support the conference. 

Given the extraordinary circumstances, the conference organizers did a fine job in pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions though three sessions were planned originally. We really enjoyed Istanbul with the beautiful natural attractions and friendly people. 

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was controlled by the rebels. However, the Turkish President managed to Facetime a private TV station, essentially turning around the event. Soon after, many people went to the bridge, the squares, and overpowered the rebels with bare fists.


A Tank outside my taxi



A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA banned all direct flights from Turkey. I was lucky enough to find a new flight, with a stop in Zurich...

In 2016, I lost a very good friend and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific injury playing tennis in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5, 2016 to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion


To close, I wish all of you a very happy 2017!

  1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.



Convergent Science Blog top

► Leveling Up Scaling with CONVERGE 3.0
  14 Aug, 2020

In a competitive market, predictive computational fluid dynamics (CFD) can give you an edge when it comes to product design and development. Not only can you predict problem areas in your product before manufacturing, but you can also optimize your design computationally and devote fewer resources to testing physical models. To get accurate predictions in CFD, you need to have high-resolution grid-convergent meshes, detailed physical models, high-order numerics, and robust chemistry—all of which are computationally expensive. Using simulation to expedite product design works only if you can run your simulations in a reasonable amount of time.

The introduction of high-performance computing (HPC) drastically furthered our ability to obtain accurate results in shorter periods of time. By running simulations in parallel on multiple cores, we can now solve cases with millions of cells and complicated physics that otherwise would have taken a prohibitively long time to complete. 

However, simply running cases on more cores doesn’t necessarily lead to a significant speedup. The speedup from HPC is only as good as your code’s parallelization algorithm. Hence, to get a faster turnaround on product development, we need to improve our parallelization algorithm.

Let’s Start With the Basics

Breaking a problem into parts and solving these parts simultaneously on multiple interlinked processors is known as parallelization. An ideally parallelized problem will scale inversely with the number of cores—twice the number of cores, half the runtime.

A common task in HPC is measuring the scalability, also referred to as scaling efficiency, of an application. Scalability is the study of how the simulation runtime is affected by changing the number of cores or processors. The scaling trend can be visualized by plotting the speedup against the number of cores.
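The arithmetic behind such a plot is simple: speedup is the baseline runtime divided by the runtime at each core count, and efficiency is speedup divided by the ideal (linear) speedup. A minimal sketch, using hypothetical runtimes invented for illustration:

```python
def scaling_metrics(runtimes):
    """Strong-scaling speedup and efficiency from {core_count: runtime}.

    Speedup is measured relative to the smallest core count tested;
    efficiency is speedup divided by the ideal linear speedup."""
    base_cores = min(runtimes)
    base_time = runtimes[base_cores]
    table = {}
    for cores, t in sorted(runtimes.items()):
        speedup = base_time / t
        efficiency = speedup / (cores / base_cores)
        table[cores] = (speedup, efficiency)
    return table

# Hypothetical runtimes in hours, for illustration only
for cores, (s, e) in scaling_metrics({56: 11.5, 112: 5.8, 224: 3.1}).items():
    print(f"{cores:4d} cores: speedup {s:5.2f}, efficiency {e:6.1%}")
```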

How Does CONVERGE Parallelize?

Parallelization in CONVERGE 2.4 and Earlier

In CONVERGE versions 2.4 and earlier, parallelization is performed by partitioning the solution domain into parallel blocks, which are coarser than the base grid. CONVERGE distributes the blocks to the interlinked processors and then performs a load balance. Load balancing redistributes these parallel blocks such that each processor is assigned roughly the same number of cells.

This parallel-block technique works well unless a simulation contains high levels of embedding (regions in which the base grid is refined to a finer mesh) in the calculation domain. These cases lead to poor parallelization because the cells of a single parallel block cannot be split between multiple processors.

Figure 1 shows an example of parallel block load balancing for a test case in CONVERGE 2.4. The colors of the contour represent the cells owned by each processor. As you can see, the highly embedded region at the center is covered by only a few blocks, leading to a disproportionately high number of cells in those blocks. As a result, the cell distribution across processors is skewed. This phenomenon imposes a practical limit on the number of levels of embedding you can have in earlier versions of CONVERGE while still maintaining a reasonable load balance.

Figure 1: Parallel-block load balancing in CONVERGE 2.4.

Parallelization in CONVERGE 3.0

In CONVERGE 3.0, instead of generating parallel blocks, parallelization is accomplished via cell-based load balancing, i.e., on a cell-by-cell basis. Because each cell can belong to any processor, there is much more flexibility in how the cells are distributed, and we no longer need to worry about our embedding levels.

Figure 2 shows the cell distribution among processors using cell-based load balancing in CONVERGE 3.0 for the same test case shown in Figure 1. You can see that without the restrictions of the parallel blocks, the cells in the highly embedded region are divided between many processors, ensuring an (approximately) equal distribution of cells.

Figure 2: Cell-based load balancing in CONVERGE 3.0.

The cell-based load balancing technique demonstrates significant improvements in scaling, even for large numbers of cores. And unlike previous versions, the load balancing itself in CONVERGE 3.0 is performed in parallel, accelerating the simulation start-up.
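A toy model illustrates why whole parallel blocks limit the achievable balance while cell-based distribution does not. This is a simplified sketch, not CONVERGE's actual algorithm, and the block sizes are invented: one heavily embedded block holds far more cells than the rest, so any block-granular assignment is stuck with it on a single processor.

```python
def imbalance(loads):
    """Load imbalance: max processor load / average load (1.0 is perfect)."""
    return max(loads) / (sum(loads) / len(loads))

def block_balance(block_sizes, nproc):
    """Greedily assign whole blocks (cell counts) to the least-loaded
    processor. A block's cells can never be split across processors."""
    loads = [0] * nproc
    for size in sorted(block_sizes, reverse=True):
        loads[loads.index(min(loads))] += size
    return loads

def cell_balance(block_sizes, nproc):
    """Cell-based balancing: any cell may go to any processor, so each
    processor ends up with (nearly) the total divided evenly."""
    total = sum(block_sizes)
    return [total // nproc + (1 if p < total % nproc else 0)
            for p in range(nproc)]

# One heavily embedded (refined) block among many coarse blocks
blocks = [8000] + [250] * 16   # cells per parallel block
print("block-based imbalance:", imbalance(block_balance(blocks, 8)))
print("cell-based  imbalance:", imbalance(cell_balance(blocks, 8)))
```

Here the block-granular assignment leaves one processor with the entire 8,000-cell embedded region while the average load is far lower, whereas the cell-based split is perfectly even, mirroring the difference between Figures 1 and 2.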

Case Studies

In order to see how well the cell-based parallelization works, we have performed strong scaling studies for a number of cases. The term strong scaling means that we ran the exact same simulation (i.e., we kept the number of cells, setup parameters, etc. constant) on different core counts.

SI8 PFI Engine Case

Figure 3 shows scaling results for a typical SI8 port fuel injection (PFI) engine case in CONVERGE 3.0. The case was run for one full engine cycle, and the core count varied from 56 to 448. The plot compares the speedup obtained running the case in CONVERGE 3.0 with the ideal speedup. With enough CPU resources, in this case 448 cores, you can simulate one engine cycle with detailed chemistry in under two hours—which is three times faster than CONVERGE 2.4!

Cores   Time (h)   Speedup   Efficiency   Cells per core   Engine cycles per day
  56     11.51       1.00       100%          12,500                 2.1
 112      5.75       2.00       100%           6,200                 4.2
 224      3.08       3.74        93%           3,100                 7.8
 448      1.91       6.67        75%           1,600                12.5
Figure 3: CONVERGE 3.0 scaling results for an SI8 PFI engine simulation run on an in-house cluster. On 448 cores, CONVERGE 3.0 scales with 75% efficiency, and you can simulate more than 12 engine cycles in a single day. Please note that the parallelization profiles will differ from one case to another.

Sandia Flame D Case

If the speedup of the SI8 PFI engine simulation impressed you, then just wait until you see the scaling study for the Sandia Flame D case! Figure 4 shows the results of a strong scaling study performed for the Sandia Flame D case, in which we simulated a methane flame jet using 170 million cells. The case was run on the Blue Waters supercomputer at the National Center for Supercomputing Applications (NCSA), and the core counts vary from 500 to 8,000. CONVERGE 3.0 demonstrates impressive near-linear scaling even on thousands of cores.

Figure 4: CONVERGE 3.0 scaling results for a combusting turbulent partially premixed flame (Sandia Flame D) case run on the Blue Waters supercomputer at the National Center for Supercomputing Applications[1]. On 8,000 cores, CONVERGE 3.0 scales with 95% efficiency.

Conclusion

Although earlier versions of CONVERGE show good runtime improvements with increasing core counts, speedup is limited for cases with significant local embeddings. CONVERGE 3.0 has been specifically developed to run efficiently on modern hardware configurations that have a high number of cores per node.

With CONVERGE 3.0, we have observed an increase in speedup in simulations with as few as approximately 1,500 cells per core. With its improved scaling efficiency, this new version empowers you to obtain simulation results quickly, even for massive cases, so you can reduce the time it takes to bring your product to market. 

Contact us to learn how you can accelerate your simulations with CONVERGE 3.0.


[1] The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign provides supercomputing and advanced digital resources for the nation’s science enterprise. At NCSA, University of Illinois faculty, staff, students, and collaborators from around the globe use advanced digital resources to address research grand challenges for the benefit of science and society. The NCSA Industry Program is the largest Industrial HPC outreach in the world, and it has been advancing one third of the Fortune 50® for more than 30 years by bringing industry, researchers, and students together to solve grand computational problems at rapid speed and scale. The CONVERGE simulations were run on NCSA’s Blue Waters supercomputer, which is one of the fastest supercomputers on a university campus. Blue Waters is supported by the National Science Foundation through awards ACI-0725070 and ACI-1238993.

► The Collaboration Effect: A Decade of Innovation
    5 Aug, 2020

From the Argonne National Laboratory + Convergent Science Blog Series

The world is waiting for us to develop the tools needed to design new engine architectures, new concepts, with a finer control over the combustion process. If we can continue to make the progress we’ve achieved over the last ten years, I think society and the environment will continue to reap large rewards.

—Dr. Don Hillebrand, Division Director of the Energy Systems Division, Argonne National Laboratory

The year 2020 marks the ten-year anniversary of a fruitful collaboration between Convergent Science and the U.S. Department of Energy’s Argonne National Laboratory. Over the years, the collaboration has facilitated exciting advances in engine technology, high-performance computing and machine learning, computational methods, physical models, gas turbine and detonation engine simulations, and more. Many engineers at both Argonne and Convergent Science have contributed to these projects, but the collaboration started with one individual.

The Origin Story

Dr. Sibendu Som

Dr. Sibendu Som was introduced to CONVERGE before it was even called CONVERGE. He was a graduate student at the University of Illinois at Chicago (UIC), and in the summer of 2006 Sibendu participated in an industry internship. He worked with engineers on a computational fluid dynamics (CFD) team who were using an internal version of a code in development by a small company named Convergent Science. When Sibendu’s internship ended, he went back to UIC and continued to work with the same CFD code—at the time called MOSES.

For his thesis, Sibendu focused on improving spray models, for which he was obtaining experimental data from Argonne. Spray modeling happens to be a specialty of Dr. Kelly Senecal, Co-Owner of Convergent Science, so Kelly assisted Sibendu in his endeavors.

“Kelly helped me quite a bit,” Sibendu says, “so I actually invited him to be a part of my thesis defense committee.”

Doug Longman and Kelly Senecal

After completing his Ph.D.—and thoroughly impressing Kelly and the rest of his committee—Sibendu became a postdoc at Argonne National Laboratory in the research group of Mr. Doug Longman, Manager of Engine Research. At the time, there was only a little CFD work being done at Argonne in the combustion and spray area, so there was an opportunity to bring in a new code. Having used CONVERGE during his thesis, Sibendu was a proponent of using the software at Argonne.

Partnering with a renowned national laboratory was a big opportunity for Convergent Science. In 2010, Convergent Science had only recently switched from being a CFD consulting company to a CFD software company, and working with Argonne lent credibility to their code. Argonne also provided access to computational resources on a scale that a small company simply could not afford on their own.

“It was also a relationship thing,” Kelly says. “The partnership just started off on the right foot, and we were really happy to work with the Argonne research team.”

A Mutually Beneficial Partnership

Government and private industry have a long history of collaboration in the United States—and for good reason. These relationships are not only beneficial for both parties, but also for taxpayers. The mission of national laboratories is not to compete with industry, but to help support and enhance the missions of private companies for the benefit of the country.

“The national lab system in the United States is a national treasure,” says Dr. Don Hillebrand. “Our job is to look at big science, big physics, big chemistry, big engineering, and solve challenging problems that confront us. We make sure that knowledge or tools or technology solutions get transferred to industrial groups, who develop jobs and products and make the country competitive.”

National laboratories provide access to resources, including advanced technology and funding, that private companies are often unable to obtain on their own. For Convergent Science in particular, access to Argonne’s computational resources made it possible to test CONVERGE on large numbers of cores and to work on improving the scalability for clients who want to run highly parallel simulations. Getting access to these types of resources on the ground floor provides a huge advantage to industry partners.

Theta Supercomputer at Argonne National Laboratory

Another important function of national labs is to investigate long-term or risky areas of research. Private companies survive on the profits they make, and investing in research that does not pay off in the end can be damaging to their business. In the same vein, companies tend to focus on products that they can bring to market relatively quickly to make sure they have a consistent revenue stream. However, long-term and riskier research is critical for developing innovative technologies that have the potential to transform our lives.

“The government drives a lot of research in cutting-edge technology,” says Dr. Dan Lee, Co-Owner of Convergent Science. “They also have advanced facilities and teams of expert engineers doing fundamental research for projects that are potentially going to shape the future.”

Of course, to have an impact on society, the technology developed in national laboratories must end up in the hands of consumers. Thus the end-goal of research and development at government institutions is to transfer that technology to industry.

Ann Schlenker, Director of the Center for Transportation Research at Argonne, spent more than 30 years in industry before transitioning to Argonne. That experience gave her a deep understanding of the synergistic relationship between government and private industry.

“You need to be extremely astute at listening to the voice of the customer. And that means understanding what the challenges are, where the hurdles and difficulties are stressing the system and how best to optimize processes. Because if you can do that, you can develop timely solutions,” Ann says.

Partnering with industry helps ensure that the research at the national labs is relevant, timely, and impactful. This is one way in which these relationships benefit the taxpayer—the results of government research directly address the needs of consumers and help make the country competitive on the world stage.

Delivering Results

The collaboration between Argonne and Convergent Science has resulted in significant advances for the modeling community and the transportation industry. While the details of this research will be discussed in depth in upcoming blog posts, the projects from the past decade generally fall into two categories: advancing simulation for propulsion technologies and improving the scalability of CONVERGE on high-performance computing architectures.

Many projects have focused on modeling processes relevant to the internal combustion engine, such as studying fuel injection and sprays using experimental data from Argonne’s Advanced Photon Source, implementing state-of-the-art nozzle flow models in CONVERGE, simulating ignition, and investigating cycle-to-cycle variation.

Other key areas of focus have been modeling challenging phenomena in gas turbine combustors and breaking ground on simulating rotating detonation engines. Enhancing the scalability of CONVERGE has made it possible to run larger, more complex cases and to obtain more accurate, more relevant results from these simulations.

The overarching goal for these projects continues to be to create better models and establish techniques that will be instrumental in developing the transportation technologies of the future. Perhaps Ann sums it up best:

The day of learning is not over for combustion processes. It’s germane to our gross domestic product for U.S. economic vitality. Our transportation and combustion researchers and industry engineers work side-by-side to achieve the societal goals of better fuel economy and lower emissions. And these strong collaborations and this visionary work allow us to move fully forward with model-based system engineering, with high-fidelity, predictive capabilities that we trust.

The collaboration between Convergent Science and Argonne National Laboratory will certainly help propel us into the future. Learn more about the research performed during this collaboration in upcoming blog posts!

► Models On Top of Models: Thickened Flames in CONVERGE
    2 Jul, 2020

Any CONVERGE user knows that our solver includes a lot of physical models. A lot of physical models! How many combinations exist? How many different ways can you set up a simulation? That’s harder to answer than you might think. There might be N turbulence models and M combustion models, but the total set of combinations isn’t N*M.

Why not? In some cases, our developers simply haven't implemented a given combination yet! The ECFM and ECFM3Z combustion models, for example, could not be combined with a large eddy simulation (LES) turbulence model until CONVERGE version 3.0.11. We’re adding more features all the time. One interesting example is the thickened flame model (TFM).
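As a toy illustration of why the total isn't N*M, here is a hypothetical compatibility table in Python. The model names (RANS/LES turbulence; SAGE, ECFM, ECFM3Z, FGM combustion) appear elsewhere in this post, but which pairs are marked as supported below is purely illustrative, apart from ECFM/ECFM3Z with LES becoming available in 3.0.11:

```python
# Hypothetical compatibility table: the number of valid setups is the
# number of supported (turbulence, combustion) pairs, not N * M.
supported = {
    ("RANS", "SAGE"), ("LES", "SAGE"),
    ("RANS", "ECFM"), ("RANS", "ECFM3Z"),
    ("LES", "ECFM"), ("LES", "ECFM3Z"),  # possible as of CONVERGE 3.0.11
    ("RANS", "FGM"),                     # illustrative gap: no LES pairing
}
turbulence_models = {t for t, _ in supported}
combustion_models = {c for _, c in supported}

n_times_m = len(turbulence_models) * len(combustion_models)
print(n_times_m, "naive combinations, but only", len(supported), "supported")
```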

The name is descriptive, of course: TFM is designed to thicken the flame. If you’re not a combustion researcher, this notion may not be intuitive. A real flame is thin (in an internal combustion engine environment, tens or hundreds of microns). Why would we want to design a model that intentionally deviates from this reality? As is often the case with physical modeling, the answer lies in what we’re trying to study.

CONVERGE is often used to study the engineering operability of a premixed internal combustion or gas turbine engine. This requires accurate simulation of macroscopic combustion dynamics (flame properties), including the laminar flamespeed. An LES of such a device might use cells on the order of 0.1 mm.

The problem may now be clear. The flame is much too thin to resolve on the grid we want to use. In fact, a detailed chemical kinetics solver like SAGE requires five or more cells across the flame in order to reproduce the correct laminar flamespeed. An under-resolved flame results in an underprediction of laminar flamespeed. Of course, we could simply decrease the cell size by an order of magnitude, but that makes for an impractical engineering calculation.
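As a quick back-of-envelope check, the five-cells rule of thumb can be turned into a maximum cell size. The function below is our own sketch, not anything from CONVERGE:

```python
# Rule of thumb from the text: SAGE-style detailed chemistry needs at
# least five cells across the flame to recover the laminar flamespeed.
def max_cell_size_mm(flame_thickness_mm, cells_across_flame=5):
    """Largest cell size (mm) that still resolves the flame."""
    return flame_thickness_mm / cells_across_flame

# A ~0.1 mm engine flame calls for ~0.02 mm cells -- five times finer
# than the ~0.1 mm cells we would like to use for an engineering LES.
assert abs(max_cell_size_mm(0.1) - 0.02) < 1e-12
```

The same rule applied to a 1 mm flame gives the 0.2 mm grid requirement quoted later for the Volvo bluff-body case.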

The thickened flame model is designed to solve this problem. The basic idea of Colin et al. [1] was to simulate a flame that is thicker than the physical one, but which reproduces the same laminar flamespeed. From simple scaling analysis, this can be achieved by increasing the thermal and species diffusivity while reducing the reaction rate by a factor of F. Because the flame thickening effect decreases the wrinkling of the flame front, and thus its surface area, an efficiency factor E is introduced so that the correct turbulent flamespeed is recovered.

The combination of these scaling factors allows CONVERGE to recover the correct flamespeed without actually resolving the flame itself. CONVERGE also calculates a flame sensor function so that these scaling factors are applied only at the flame front. By using TFM with SAGE detailed chemistry, a premixed combustion engineering simulation with LES becomes practical.
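The scaling argument can be sketched numerically. The code below is our own illustration (not CONVERGE's implementation), using the standard laminar-flame scalings s_L ~ sqrt(D * omega) and delta ~ D / s_L:

```python
import math

def flame_props(D, omega):
    """Simple laminar-flame scaling relations:
    flamespeed s_L ~ sqrt(D * omega), thickness delta ~ D / s_L."""
    s_L = math.sqrt(D * omega)
    delta = D / s_L
    return s_L, delta

# Unthickened reference values (arbitrary units).
D0, omega0 = 1.0e-5, 4.0e5
s0, d0 = flame_props(D0, omega0)

# Thicken by F: multiply diffusivity by F, divide reaction rate by F.
F = 10.0
sF, dF = flame_props(D0 * F, omega0 / F)
assert abs(sF - s0) < 1e-9       # laminar flamespeed preserved
assert abs(dF - F * d0) < 1e-9   # flame thickness scaled by F

# Efficiency factor E compensates for the lost wrinkling: scaling both
# D and omega by E multiplies s_L by E while leaving thickness at F * delta.
E = 1.5
sE, dE = flame_props(D0 * E * F, omega0 * E / F)
assert abs(sE - E * s0) < 1e-9
assert abs(dE - F * d0) < 1e-9
```

The assertions mirror the two requirements in the text: the thickened flame reproduces the correct flamespeed while being F times thicker, and E recovers the turbulent flamespeed.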

Hasti et al. [2] evaluated one such case using CONVERGE with LES, SAGE, and TFM. This work examined the Volvo bluff-body augmentor test rig, shown below, which has been subjected to extensive study. At the conditions of interest, the flame thickness is estimated to be about 1 mm, and so SAGE without TFM should require a grid not coarser than 0.2 mm to accurately simulate combustion.


Figure 1: Volvo bluff-body augmentor test rig [3].

With TFM, Hasti et al. show that CONVERGE is able to generate a grid-converged result at a minimum grid spacing of 0.3125 mm. We might expect such a calculation to take only about 40% as many core hours as a simulation with a minimum grid spacing of 0.25 mm.
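That estimate follows from a common back-of-envelope scaling, sketched below under our own assumption that cost grows like cell count (~ dx^-3) times time-step count (~ dx^-1 for a CFL-limited step):

```python
# Rough cost model (our assumption): core hours ~ dx^-4 for a
# CFL-limited transient simulation with uniform refinement.
def relative_cost(dx, dx_ref):
    """Cost at spacing dx relative to a run at spacing dx_ref."""
    return (dx_ref / dx) ** 4

# 0.3125 mm vs 0.25 mm: (0.25 / 0.3125)^4 = 0.8^4 = 0.4096,
# i.e. roughly 40% of the finer grid's core hours.
print(round(relative_cost(0.3125, 0.25), 4))
```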

Figure 2: Representative instantaneous temperature field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 3: Representative instantaneous velocity magnitude field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 4: Representative instantaneous vorticity magnitude field of the bluff-body combustor.
Base grid sizes of 2 mm (above) and 3 mm (below) correspond to minimum cell sizes of 0.25 mm and 0.375 mm, respectively.
Figure 5: Transverse mean temperature profiles at x/D = 3.75, 8.75, and 13.75.
Base grid sizes of 2 mm, 2.5 mm, and 3 mm correspond to minimum cell sizes of 0.25 mm, 0.3125 mm, and 0.375 mm, respectively.

Understanding the topic of study, the underlying physics, and the way those physics are affected by our choice of physical models is critical to performing accurate simulations. If you want to combine the power of the SAGE detailed chemical kinetics solver with the transient behavior of an LES turbulence model to understand the behavior of a practical engine, and to do so without bankrupting your IT department, TFM is the enabling technology.

Want to learn more about thickened flame modeling in CONVERGE? Check out these TFM case studies from recent CONVERGE User Conferences (1, 2, 3) and keep an eye out for future Premixed Combustion Modeling advanced training sessions.

References
[1] Colin, O., Ducros, F., Veynante, D., and Poinsot, T., “A thickened flame model for large eddy simulations of turbulent premixed combustion,” Physics of Fluids, 12(1843), 2000. DOI: 10.1063/1.870436
[2] Hasti, V.R., Liu, S., Kumar, G., and Gore, J.P., “Comparison of Premixed Flamelet Generated Manifold Model and Thickened Flame Model for Bluff Body Stabilized Turbulent Premixed Flame,” 2018 AIAA Aerospace Sciences Meeting, AIAA 2018-0150, Kissimmee, Florida, January 8-12, 2018. DOI: 10.2514/6.2018-0150
[3] Sjunnesson, A., Henrikson, P., and Lofstrom, C., “CARS measurements and visualizations of reacting flows in a bluff body stabilized flame,” 28th Joint Propulsion Conference and Exhibit, AIAA 92-3650, Nashville, Tennessee, July 6-8, 1992. DOI: 10.2514/6.1992-3650

► The Search for Soot-free Diesel: Modeling Ducted Fuel Injection With CONVERGE
  26 Mar, 2020

At the upcoming CONVERGE User Conference, which will be held online from March 31–April 1, Andrea Piano will present results from experimental and numerical studies of the effects of ducted fuel injection on fuel spray characteristics. Dr. Piano is a Research Assistant in the e3 group, coordinated by Prof. Federico Millo at Politecnico di Torino, and these are the first results to be reported from their ongoing collaboration with Prof. Lucio Postrioti at Università degli Studi di Perugia, Andrea Bianco at Powertech Engineering, and Francesco Pesce and Alberto Vassallo at General Motors Global Propulsion Systems. This work is a great example of how CONVERGE can be used in tandem with experimental methods to advance research at the cutting edge of engine technology. Keep reading for a preview of the results that Dr. Piano will discuss in greater detail in his online presentation.

The idea behind ducted fuel injection (DFI), originally conceived by Charles Mueller at Sandia National Laboratories, is to suppress soot formation in diesel engines by allowing the fuel to mix more thoroughly with air before it ignites [1]. Soot forms when a fuel doesn’t burn completely, which happens when the fuel-to-air ratio is too high. In DFI, a small tube, or duct, is placed near the nozzle of the fuel injector and directed along the axis of the fuel stream toward the autoignition zone. The fuel spray that travels through this duct is better mixed than it would be in a ductless configuration. Experiments at Sandia have shown that DFI can reduce soot formation by as much as 95%, demonstrating the enormous potential of this technology for curtailing harmful emissions from diesel engines.

Introduction to ducted fuel injection from Sandia National Laboratories.

While the Sandia researchers have focused on heavy-duty diesel applications, Dr. Piano and his collaborators are targeting smaller engines, such as those found in passenger cars and light-duty trucks. To understand how the fuel spray evolves in the presence of a duct, they first performed imaging and phase Doppler anemometry analyses of non-reacting sprays in a constant-volume test vessel. Figure 1 shows a sample of the experimental results. The video on the left corresponds to a free spray configuration with no duct, while the video on the right corresponds to a ducted configuration. Observe how the dark liquid breaks up and evaporates more quickly in the ducted configuration—this is the enhanced mixing that occurs in DFI.

Figure 1: Videos from experiments on non-reacting sprays in a free spray configuration (left) and a ducted configuration (right). Images were obtained from a constant-volume vessel at a rail pressure of 1200 bar, vessel temperature of 500°C, and vessel pressure of 20 bar.

Their next step was to develop a CFD model of the fuel spray that could be calibrated against the experimental results. Dr. Piano and his colleagues reproduced the geometry of the experimental setup in a CONVERGE environment, using physical models available in CONVERGE to simulate the processes of spray breakup, evaporation, and boiling, as well as the interactions between the spray and the duct. With fixed embedding and Adaptive Mesh Refinement, they were able to increase the grid resolution in the vicinity of the spray and the duct without a significant increase in computational cost. They simulated the spray penetration for both the free spray and the ducted configuration over a range of operating conditions and validated those results against the experimental data.

With a calibrated spray model in hand, the researchers were then able to run predictive simulations of DFI for reacting fuel sprays. They combined their spray model with the SAGE detailed chemical kinetics solver for combustion modeling, along with the Particulate Mimic model of soot formation. They ran simulations at different rail pressures and vessel temperatures to see how DFI would affect the amount of soot mass produced under engine-like operating conditions. Figures 2 and 3 show examples of the simulation results for a rail pressure of 1200 bar and a vessel temperature of 1000 K. Consistent with the findings of Mueller et al. [1], these results show a dramatic reduction in the mass of soot produced during combustion in the ducted configuration as compared to the free spray configuration.

Figure 2: The plots on the right side show the heat release rate and soot mass produced in simulations of reacting sprays (red lines correspond to the free spray configuration and blue lines correspond to the ducted configuration). The dashed vertical lines indicate the simulation time at which the two contour plots were generated, with the free spray configuration on the left and the ducted configuration in the center. Contours are colored by soot mass, with regions of high soot mass shown in red.
Figure 3: The plots on the right side show the heat release rate and soot mass produced in simulations of reacting sprays (red lines correspond to the free spray configuration and blue lines correspond to the ducted configuration). The dashed vertical lines indicate the simulation time at which the two contour plots were generated, with the free spray configuration on the left and the ducted configuration in the center. Contours are colored by soot mass, with regions of high soot mass shown in red.

While these early results are promising, Dr. Piano and his collaborators are just getting started. They will continue using CONVERGE to investigate phenomena such as the duct thermal behavior and to explore the effects of different geometries and operating conditions, with the long-term goal of incorporating DFI into the design of a real engine. If you are interested in learning more about this work, be sure to sign up for the CONVERGE User Conference today!

References

[1] Mueller, C.J., Nilsen, C.W., Ruth, D.J., Gehmlich, R.K., Pickett, L.M., and Skeen, S.A., “Ducted fuel injection: A new approach for lowering soot emissions from direct-injection engines,” Applied Energy, 204, 206-220, 2017. DOI: 10.1016/j.apenergy.2017.07.001

► An Evening With the Experts: Scaling CFD With High-Performance Computing
  25 Feb, 2020
Listen to the full audio of the panel discussion.

As computing technology continues to advance rapidly, running simulations on hundreds and even thousands of cores is becoming standard practice in the CFD industry. Likewise, CFD software is continually evolving to keep pace with the advances in hardware. For example, CONVERGE 3.0, the latest major release of our software, is specifically designed to scale well in parallel on modern high-performance computing (HPC) systems. It’s clear that HPC is the future of CFD, so how does this shift affect those of us running simulations and how can we make the most of the increased availability of computational resources? At the 2019 CONVERGE User Conference–North America, we assembled a panel of engineers from industry and government to share their expertise.

In the panel discussion, which you can listen to above, you’ll learn about the computing resources available on the cloud and at the U.S. national laboratories and how to take advantage of them. The panelists discuss the types of novel, one-of-a-kind studies that HPC enables and how to handle post-processing data from massive cases run across many cores. Additionally, you’ll get a look at where post-processing is headed in the future to manage the ever-increasing amounts of data generated from large-scale simulations. Listen to the full panel discussion above!

Panelists

Alan Klug, Vice President of Customer Development, Tecplot

Sibendu Som, Manager of the Computational Multi-Physics Section, Argonne National Laboratory

Joris Poort, CEO and Founder, Rescale

Kelly Senecal, Co-Founder and Owner, Convergent Science

Moderator

Tiffany Cook, Partner & Public Relations Manager, Convergent Science

► 2019: A (Load) Balanced End to a Successful Decade
  19 Dec, 2019

2019 proved to be an exciting and eventful year for Convergent Science. We released the highly anticipated major rewrite of our software, CONVERGE 3.0. Our United States, European, and Indian offices all saw significant increases in employee count. We have also continued to forge ahead in new application areas, strengthening our presence in the pump, compressor, biomedical, aerospace, and aftertreatment markets, and breaking into the oil and gas industry. Of course, we remain dedicated to simulating internal combustion engines and developing new tools and resources for the automotive community. In particular, we are expanding our repertoire to encompass batteries and electric motors in addition to conventional engines. Our team at Convergent Science continues to be enthusiastic about advancing simulation capabilities and providing unmatched customer support to empower our users to tackle hard CFD problems.

CONVERGE 3.0

As I mentioned above, this year we released a major new version of our software, CONVERGE 3.0. We have frequently discussed 3.0 in the past few months, including in my recent blog post, so I’ll keep this brief. We set out to make our code more flexible, enable massive parallel scaling, and expand CONVERGE’s capabilities. The results have been remarkable. CONVERGE 3.0 scales with near-ideal efficiencies on thousands of cores, and the addition of inlaid meshes, new physical models, and enhanced chemistry capabilities has opened the door to new applications. Our team invested a lot of effort into making 3.0 a reality, and we’re very proud of what we’ve accomplished. Of course, now that CONVERGE 3.0 has been released, we can all start eagerly anticipating our next major release, CONVERGE 3.1.

Computational Chemistry Consortium

2019 was a big year for the Computational Chemistry Consortium (C3). In July, the first annual face-to-face meeting took place at the Convergent Science World Headquarters in Madison, Wisconsin. Members of industry and researchers from the National University of Ireland Galway, Lawrence Livermore National Laboratory, RWTH Aachen University, and Politecnico di Milano came together to discuss the work done during the first year of the consortium and establish future research paths. The consortium is working on the C3 mechanism, a gasoline and diesel surrogate mechanism that includes NOx and PAH chemistry to model emissions. The first version of the mechanism was released this fall for use by C3 members, and the mechanism will be refined over the coming years. Our goal is to create the most accurate and consistent reaction mechanism for automotive fuels. Stay tuned for future updates!

Third Annual European User Conference

Barcelona played host to this year’s European CONVERGE User Conference. CONVERGE users from across Europe gathered to share their recent work in CFD on topics including turbulent jet ignition, machine learning for design optimization, urea thermolysis, ammonia combustion in SI engines, and gas turbines. The conference also featured some exciting networking events—we spent an evening at the beautiful and historic Poble Espanyol and organized a kart race that pitted attendees against each other in a friendly competition. 

Inaugural CONVERGE User Conference–India

This year we hosted our first-ever CONVERGE User Conference–India in Bangalore and Pune. The conference consisted of two events, each covering different application areas. The event in Bangalore focused on applications such as gas turbines, fluid-structure interaction, and rotating machinery. In Pune, the emphasis was on IC engines and aftertreatment modeling. We saw presentations from both companies and universities, including General Electric, Cummins, Caterpillar, and the Indian Institutes of Technology Bombay, Kanpur, and Madras. We had a great turnout for the conference, with more than 200 attendees across the two events.

CONVERGE in the Big Easy

The sixth annual CONVERGE User Conference–North America took place in New Orleans, Louisiana. Attendees came from industry, academic institutions, and national laboratories in the U.S. and around the globe. The technical presentations covered a wide variety of topics, including flame spray pyrolysis, rotating detonation engines, machine learning, pre-chamber ignition, blood pumps, and aerodynamic characterization of unmanned aerial systems. This year, we hosted a panel of CFD and HPC experts to discuss scaling CFD across thousands of processors; how to take advantage of clusters, supercomputers, and the cloud to run large-scale simulations; and how to post-process large datasets. For networking events, we took a dinner cruise down the Mississippi River and encouraged our guests to explore the vibrant city of New Orleans.

KAUST Workshop

In 2019, we hosted the First CONVERGE Training Workshop and User Meeting at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Attendees came from KAUST and other Saudi Arabian universities and companies for two days of keynote presentations, hands-on CONVERGE tutorials, and networking opportunities. The workshop focused on leveraging CONVERGE for a variety of engineering applications, and running CONVERGE on local workstations, clusters, and Shaheen II, a world-class supercomputer located at KAUST. 

Best Use of HPC in Automotive

We and our colleagues at Argonne National Laboratory and Aramco Research Center – Detroit received the 2019 HPCwire Editors’ Choice Award in the category of Best Use of HPC in Automotive. We were incredibly honored to receive this award for our work using HPC and AI to quickly optimize the design of a clean, highly efficient gasoline compression ignition engine. Using CONVERGE, we tested thousands of engine design variations in parallel to improve fuel efficiency and reduce emissions. We ran the simulations in days, rather than months, on an IBM Blue Gene/Q supercomputer located at Argonne National Laboratory and employed machine learning to further reduce design time. After running the simulations, the best-performing engine design was built in the real world. The engine demonstrated a reduction in CO2 of up to 5%. Our work shows that pairing HPC and AI to rapidly optimize engine design has the potential to significantly advance clean technology for heavy-duty transportation.

Sibendu Som (Argonne National Laboratory), Kelly Senecal (Convergent Science), and Yuanjiang Pei (Aramco Research Center – Detroit) receiving the 2019 HPCwire Editors’ Choice Award

Convergent Science Around the Globe

2019 was a great year for CONVERGE and Convergent Science around the world. In the United States, we gained nearly 20 employees. We added a new Convergent Science office in Houston, Texas, to serve the oil and gas industry. In addition, we have continued to increase our market share in other areas, including automotive, gas turbine, and pumps and compressors.

In Europe, we had a record year for new license sales, up 70% from 2018. A number of new employees joined our European team, including new engineers, sales personnel, and office administrators. We attended and exhibited at tradeshows on a breadth of topics all over Europe, and we expanded our industry and university clientele. 

Our Indian office celebrated its second anniversary in 2019. The employee count nearly doubled from 2018, with the addition of several new software developers and marketing and support engineers. The first Indian CONVERGE User Conference was a huge success; we had to increase the maximum number of registrants to accommodate everyone who wanted to attend. We have also grown our client base in the transportation sector, bringing new customers in the automotive industry on board.

In Asia, our partners at IDAJ continue to do a fantastic job supporting CONVERGE. CONVERGE sales significantly increased in 2019 compared to 2018. And at this year’s IDAJ CAE Solution Conference, speakers from major corporations, including Toyota, Daihatsu, Mazda, and DENSO, presented CONVERGE results.

Looking Ahead

While we like to recognize the successes of the past year, we’re always looking toward the future. Computing technology is constantly evolving, and we are eager to keep advancing CONVERGE to make the most of the increased availability of computational resources. With the expanded functionality that CONVERGE 3.0 offers, we’re also looking forward to delving into untapped application areas and breaking into new markets. In the upcoming year, we are excited to form new collaborations and strengthen existing partnerships to promote innovation and keep CONVERGE on the cutting edge of CFD software.

Numerical Simulations using FLOW-3D top

► FLOW-3D CAST Workshops
  18 Aug, 2020
FLOW-3D CAST Metal Casting Workshops
FLOW-3D CAST is a state-of-the-art metal casting simulation platform that combines highly accurate modeling with versatility, ease of use, and high-performance cloud computing capabilities. Our FLOW-3D CAST workshops use hands-on exercises to show you how to set up and run successful simulations for detailed analysis of your casting design. Workshop materials provide an introduction to the FLOW-3D CAST modeling platform and detail all the steps of a successful casting model setup, from geometry import through post-processing.

Stay tuned for new FLOW-3D CAST workshop dates!

Want to discuss an online ‘in-house’ workshop for your team? Contact our workshop instructor.

What will you learn?

  • How to import geometry and set up models, including meshing and initial and boundary conditions
  • How to apply complex physics such as air entrainment, as well as FLOW-3D CAST’s pioneering filling and solidification models, to analyze defects and adjust your casting design
  • Best practices for casting simulation and design analysis in FLOW-3D CAST

What happens after the workshop?

  • After the workshop, your FLOW-3D CAST license will be extended for 30 days. During this time, one of our CFD engineers will work closely with you to help you apply FLOW-3D CAST to a casting problem of your choosing. You will also have access to our web-based training videos covering introductory through advanced modeling topics. 

Who should attend?

  • Process and casting engineers working in foundry or die casting industries
  • Industry researchers working on new alloy developments, lightweighting, and other challenges in modern metal casting
  • University students interested in CFD for casting applications
Workshop Details

  • Workshops are online, hosted through Zoom
  • Registration is limited to 6 attendees
  • Cost: $99
  • 30-day FLOW-3D CAST license

Workshop registration is currently only available to prospective or lapsed users in the United States and Canada. To participate, you will need:

  • A Windows machine running Windows 7 or later
  • An external mouse (not a touchpad device)
  • Dual monitor setup recommended
  • Dedicated graphics card; an NVIDIA Quadro card is required for remote desktop use
For more info on recommended hardware, see our Supported Platforms page.

Registration: Prospective users outside the United States and Canada should contact their distributor to inquire about workshops. Existing users should contact sales@flow3d.com to discuss their licensing options.

Cancellation: Flow Science reserves the right to cancel a workshop at any time, due to reasons such as insufficient registrations or instructor unavailability. In such cases, a full refund will be given, or attendees may opt to transfer their registration to another workshop. Flow Science is not responsible for any costs incurred.

Registrants who are unable to attend a workshop may cancel up to one week in advance to receive a full refund. Attendees must cancel their registration by 5:00 pm MST one week prior to the date of the workshop; after that date, no refunds will be given. If available, an attendee can also request to have their registration transferred to another workshop.

Licensing: Workshop licenses are for evaluation purposes only, and not to be used for any commercial purpose other than evaluation of the capabilities of the software.

Register for an Online FLOW-3D CAST Workshop

Workshop certificates, available on request, will be in PDF format. Flow Science does not confirm that our workshops are eligible for PDHs or CEUs.

Please note: Once you click 'Register', you will be directed to our PayPal portal. If you do not have a PayPal account, choose the 'Pay with credit card' option. Your registration is not complete until you have paid.
If you need assistance with the registration process, please contact Workshop Support.

About the Instructor

Ajit D'Brass, CFD Engineer, Metal Casting Applications

Ajit D’Brass studied manufacturing engineering with a concentration on metal casting at Texas State University. His current work focuses on how to expedite the design phase of a casting through functional, efficient, user-friendly process simulations. Ajit helps customers use FLOW-3D CAST to create streamlined, sustainable workflows.

► Achieving Optimal Continuous Castings
    5 Aug, 2020

Using the continuous casting process, casters can manufacture ingots, high-pressure tubes, and irregularly-shaped bars of high quality and strength, but the process must be controlled through a delicate balance of pour temperature, mold cooling, and draw rate. FLOW-3D CAST v5.1’s Continuous Casting Workspace includes all the tools needed to simulate and optimize a process design to produce high-quality continuous castings in a cost-efficient manner.

Two primary types of continuous casting processes can be modeled: strand casting and direct chill continuous casting. In strand casting, molten metal is poured from a tundish through a mold which has the shape of the part to be cast. The mold, typically made of graphite, gives the casting its shape and provides some cooling to begin solidifying the melt. Additional cooling is applied to the molten strand by cooling channels placed in the mold.

The image below shows a continuous casting of an aluminum/silicon/magnesium slab. Through careful specification of the flow rate of molten metal through the mold and the cooling applied to the mold, the position of the melt front can be controlled so that the slab is fully solidified when it leaves the mold. Additionally, the grain structure in the slab can be optimized by properly controlling the temperature and solidification profiles. By using simulation to study these parameters, trial and error can be greatly reduced or even eliminated.

Here you can see the evolution of the melt front in the mold.

In direct chill continuous casting, additional cooling is applied directly to the casting. The draw rate on the casting is controlled by allowing the end of the casting to solidify on a starter cap before it is drawn out of the mold.

In this example, a bronze billet is cast using a direct chill continuous casting process. As the billet is drawn from the mold, a cooling spray is applied to the billet. The cooling must be sufficient to maintain a solidified shell on the billet as it leaves the mold. The starter cap is withdrawn at a rate that ensures the cooling rate and feed rate are balanced.

The video below shows a simulation of the direct chill process.

With the tools provided in the Continuous Casting Workspace, process engineers can simulate their designs to ensure maximum casting quality and process efficiency for their continuous castings.

John Ditter

John Ditter

Principal CFD Engineer at Flow Science

► Sand Core Making – Is It Time to Vent?
    1 Jul, 2020
Sand cores are a crucial element in the casting process because they are used to create complex interior cavities. For example, sand cores are used to create passages for water cooling, oil lubrication, and air flow in a typical V8 engine casting. Ever wonder how a sand core is made? How can a material that works so well for making sandcastles on the beach be made into complex forms able to withstand the brutal conditions of hot metal flowing and solidifying around them? In this blog I will walk you through the process of how sand cores are made and describe the modeling tools in FLOW-3D CAST v5.1 that help engineers design their manufacturing processes.

The Sand Core Making Process Workspace

Choosing the correct physics models for the complex flow dynamics of sand core making can be daunting. The Sand Core Making Workspace addresses this challenge by providing automated settings for numerical techniques and activating the appropriate physics models. Sub-workspaces for cold box, hot box, and inorganic processes guide the user through the setup process with ease.

Sand Shooting

The starting point with all sand cores is the shooting process. In the shooting process, a mixture of air, sand, and binder is “shot” under high pressure into a core box with air vents placed strategically around the cavity to allow air to be displaced by sand.
Water jacket sand core
Simulation of a water jacket sand core. The sand/binder mixture is shot into the core box through the 8 inlets at the top. Air vents of varying size are placed around the sand core to allow air to escape.

The primary goal of sand core shooting is to create a sand core with uniform density. Two design factors play important roles in achieving this goal — the location of the sand inlets and the location and size of the air vents. Simulating the flow of the sand mixture using FLOW-3D CAST allows us to study different inlet and air vent configurations.

This video shows the filling pattern of H32 sand with a 2% binder additive being shot to produce a water jacket sand core. Notice that some of the regions are underfilled.

To address underfilling, air vents can be easily and accurately placed at the problem area using our interactive geometry placement tool. Here, a 6 mm air vent (see red arrow) is placed at a location where incomplete filling was observed.

This video shows a comparison of the filling in the region where the air vent has been added compared with the original result. The filling is now more complete in the region where the air vent was added. More vents can be added to address other underfilled regions.

Core Hardening

Once the air vents have been configured and the shooting produces a uniform sand distribution, the sand core needs to be hardened. Three different hardening methods can be simulated in FLOW-3D CAST: cold box, hot box, and inorganic.

Drying Sand Cores in an Inorganic Process

The sand/binder mixtures used to produce inorganic cores are water based. To harden them, energy from the hot core box along with a hot air purge evaporate the water and carry it out of the core through the air vents. In this video, an intake manifold sand core shot with a sand/binder mixture containing 2% water by weight is dried by a hot (180 C) air purge. The blue region represents the water remaining in the sand core. The air vents are shown in gray. After 150 seconds of drying, the moisture continues to be pushed to the area where the most venting occurs.

Hardening Cores in a Hot Box Process

Sand cores shot in a hot box process are hardened using energy from the core box to cure the binder. This video shows the temperature distribution in the sand core as it is heated by the hot core box.

Simulating the hardening step allows us to determine the temperature distribution in the shot sand core and identify the time required to ensure that all regions of the core are sufficiently heated to harden it.
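
The heat-up of a core from a hot box can be pictured with a one-dimensional conduction model. The sketch below is plain Python with assumed round-number material values, not real sand properties and not FLOW-3D CAST code; it marches an explicit finite-difference scheme for a slab heated from both faces.

```python
# Illustrative 1D transient heat-conduction sketch: a sand core slab heated
# from both faces by a hot core box. Material values are assumed placeholders.

def heat_core_1d(t_box=250.0, t_init=20.0, alpha=3e-7, length=0.02,
                 nx=21, dt=0.5, steps=600):
    """Explicit FTCS scheme for dT/dt = alpha * d2T/dx2 with fixed wall temps."""
    dx = length / (nx - 1)
    r = alpha * dt / dx**2              # explicit stability requires r <= 0.5
    assert r <= 0.5, "explicit scheme unstable"
    temp = [t_init] * nx
    temp[0] = temp[-1] = t_box          # core-box faces held at box temperature
    for _ in range(steps):
        new = temp[:]
        for i in range(1, nx - 1):
            new[i] = temp[i] + r * (temp[i+1] - 2*temp[i] + temp[i-1])
        temp = new
    return temp

profile = heat_core_1d()
print(f"centerline temperature after 300 s: {profile[len(profile)//2]:.1f} C")
```

Even this toy model shows why simulating the hardening step matters: the slowest-heating point is the core center, and the hold time must be long enough for it to reach the curing temperature.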

Gassing Sand Cores in a Cold Box Process

The binder used to produce sand cores shot in a cold box process contains a phenolic urethane resin. To harden these cores and give them the strength required to withstand flowing hot metal in the casting process, hot air carrying a catalyst (amine gas in this case) is used to purge the core. The hot air/amine gas mixture is introduced through the inlets and leaves the core box through the air vents that were used in the shooting step.

This video shows the evolution of amine gas through the porous shot sand core, which is a water jacket for an internal combustion engine.

With FLOW-3D CAST v5.1, sand core manufacturers have the tools they need to model their sand core making processes and optimize the quality of their cores. Learn more about the Sand Core Making Workspace.

John Ditter

Principal CFD Engineer at Flow Science

► Exploring the Centrifugal Casting Workspace
  30 Jun, 2020

A common challenge in most casting processes is minimizing, or in some cases eliminating, filling-related defects such as entrained air and inclusions. For example, in high pressure die casting, entrained air can be moved out of the casting by proper placement of overflows, or at least moved to areas of the casting where strength and aesthetics are not compromised. However, some castings such as high pressure pipes, bushings, and high-end jewelry like platinum rings require exceptionally low porosity, high strength, and near-perfect finish. In this blog, we will explore the three centrifugal casting processes – horizontal, vertical, and centrifuge – available in FLOW-3D CAST v5.1’s Centrifugal Casting Workspace and its unique features that allow casting engineers to create high quality castings.

Centrifugal casting processes use rapidly spinning molds to force molten metal outward from the rotation axis while relatively light defects drift out of the casting, or at least to the center of the casting where they can be machined away. Two unique features in the Centrifugal Casting Workspace provide the ability to accurately and efficiently simulate a given design – cylindrical meshes and the spinning mold model. Let’s start by looking at a typical horizontal centrifugal casting to see how these features are beneficial.

Horizontal Centrifugal Casting

Here’s an example of a horizontal mold used to cast a pipe. The mold is spun on rollers at 1000 rpm. 

Molten metal is poured into the open end of the mold and falls under gravity until it is picked up by the spinning mold. The melt spreads out quickly into a thin sheet as it fills the mold. The end-on view in the video below shows how a rather coarse 150,000-cell cylindrical mesh with fine radial resolution near the wall captures the flow accurately and efficiently. Since the heat transfer in the melt and mold is mostly radial, the fine radial resolution provided by the cylindrical mesh also contributes to the accuracy of the simulation.

In this filling simulation of a horizontal pipe casting, an end-on view of the filling at the left shows the cylindrical mesh used to resolve the flow. This simulation was run on 10 cores of a medium-level CPU (AMD 1950X) in 11 minutes! Even with this relatively coarse mesh resolution, a great deal of process knowledge can be obtained. Once rough values for process parameters such as pour rate, melt superheat, and initial mold temperature have been identified, higher mesh resolutions can be used to zero in on more exact values.
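
For orientation, the centrifugal force pinning the melt against the mold wall is usually quoted as a G-factor, omega^2 * r / g. A quick back-of-the-envelope calculation (the 1000 rpm comes from the example above; the 0.1 m radius is assumed for illustration):

```python
import math

# Illustrative G-factor for a spinning mold (not FLOW-3D CAST code).

def g_factor(rpm, radius_m, g=9.81):
    """Centripetal acceleration at the mold wall in multiples of g."""
    omega = rpm * 2.0 * math.pi / 60.0      # angular speed, rad/s
    return omega**2 * radius_m / g

print(f"{g_factor(1000, 0.1):.0f} g at the wall of a 0.1 m radius mold")
```

At 1000 rpm even a modest 0.1 m radius gives on the order of 100 g, which is why the melt spreads into a thin sheet held against the wall.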

Vertical Centrifugal Casting

The next centrifugal casting process we’ll investigate is a vertical centrifugal casting. The vertical centrifugal casting process is ideal for large, symmetrical castings with a length similar to or smaller than their diameter. Again, the spinning mold model in a cylindrical mesh provides an accurate representation of the filling characteristics. Various fill configurations, such as moving metal inputs, can be easily studied. For example, metal can be introduced into the spinning mold through a sprue that moves vertically and/or horizontally to distribute the melt. In this video, the metal input brings molten metal into the spinning mold at the top of the mold to fill the flange initially and then moves downward as the filling continues.

In this simulation, a moving metal input is used to fill a vertical spinning mold rotating at 50 rpm. This simulation ran in 17 minutes on 10 cores of an AMD 1950X, which is quite remarkable considering the complexity of the flow. This is due to the efficiencies of the cylindrical meshing method and the spinning mold model. Detailed parametric studies can be carried out to identify an optimal process design with such efficiencies.

Once the filling is complete and the metal has become stable in the spinning mold, the simulation can be restarted in a solidification subprocess. In the solidification subprocess, the flow field is set to zero in a rotating mesh and only solidification is computed. Computing only the solidification allows for extraordinarily fast simulation times. This video shows solidification in a cross-section of the vertical casting. A 600 second simulation time is computed in less than 1 minute.
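
The speed of a solidification-only computation invites quick sanity checks against classical estimates such as Chvorinov's rule, t = B * (V/A)^n. This is a textbook approximation, not the solver's thermal model, and the mold constant B below is an arbitrary placeholder.

```python
# Classical Chvorinov estimate (illustrative only): solidification time
# scales with the casting modulus V/A raised to a power n, usually ~2.

def chvorinov_time_s(volume_m3, area_m2, b=2.0e6, n=2):
    """Estimated solidification time in seconds for modulus V/A (SI units)."""
    return b * (volume_m3 / area_m2) ** n

# Doubling the casting modulus quadruples the estimated solidification time:
ratio = chvorinov_time_s(2.0, 1.0) / chvorinov_time_s(1.0, 1.0)
print(ratio)   # 4.0
```

Such hand estimates are useful for bracketing expected solidification times before committing to a finely resolved run.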

Centrifuge Casting

The final centrifugal casting process we’ll investigate is a centrifuge casting using an example of a 6-handle lever set.

A caster might wonder what effect various process parameters such as mold spin rate and spin-up profile may have on casting quality. For example, should the melt be poured into an already spinning mold, or should the mold be spun up gradually so that entrained air isn’t generated? We can answer this question by comparing two spin-up profiles. On the left is a mold spun up from stationary to 10 rpm over 2 seconds while metal is poured into the cup. From 2 seconds to 3 seconds, the mold spin rate is ramped up to 50 rpm. On the right, the mold spins continuously at 50 rpm.
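
The ramped profile described above is easy to express as a piecewise function of time, for example when scripting a tabular spin-rate input for a parametric study:

```python
# Piecewise spin-up profile from the comparison above (illustrative):
# 0 -> 10 rpm over the first 2 s, 10 -> 50 rpm from 2 s to 3 s, then constant.

def spin_rate(t_s):
    """Mold spin rate in rpm at time t_s (seconds)."""
    if t_s <= 2.0:
        return 10.0 * t_s / 2.0
    if t_s <= 3.0:
        return 10.0 + 40.0 * (t_s - 2.0)
    return 50.0

print(spin_rate(1.0), spin_rate(2.5), spin_rate(4.0))   # 5.0 30.0 50.0
```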

A comparison of entrained air in the melt with a ramped-up spin rate (left) vs. pouring into a mold spinning at a constant rate (right). The simulation indicates that it is best to spin up the mold gradually to allow the runners to fill before the maximum spin rate is applied. A rotating mesh is used to achieve high filling accuracy as well as fast simulation runtimes. Both simulations ran in about 4 hours on 12 cores of an AMD 2990WX.

This image shows the last frame of the simulation, illustrating that the air entrainment is reduced by slowly spinning up the mold.

Comparison of centrifugal castings

A casting process design engineer can use the Centrifugal Casting Workspace to study a wide variety of process parameters in almost any centrifugal casting setup to achieve optimal casting quality in a reasonable amount of time.

John Ditter

Principal CFD Engineer at Flow Science

► Simulating the Investment Casting Process
  24 Jun, 2020

The investment casting process can produce high quality, complex castings with great accuracy and controlled grain structure. However, many challenges face process designers hoping to achieve these results. Fortunately, FLOW-3D CAST v5.1 includes an Investment Casting Workspace which provides the necessary tools to study the wide range of process parameters in a virtual space and determine an optimal design before casting a single part.

In this blog, we’ll walk through the Investment Casting Workspace and show how easy it is to simulate a directionally-cooled investment casting using a Bridgman process. The casting we’ll be investigating is this multi-cavity casting on the right.

Multi-cavity investment casting

Shell Building Tool

An investment casting process begins with a wax representation of the part to be cast. The next step is to successively dip the wax part into a ceramic slurry to build up a shell around the part. This is repeated until a shell of sufficient thickness is achieved. FLOW-3D CAST’s shell building tool allows users to create water-tight shells of any thickness in a matter of minutes.

Using the shell building interface in the GUI, the first step is to select the geometry around which the shell should be created. Next, select Fit Mesh to create a computational mesh around the geometry to be shelled. The edge of the mesh where the pouring sprue is located should be moved into the part slightly so that the generated shell is open there. The only other required inputs are the shell thickness and the cell size, which should be roughly half the shell thickness.

A preview mode allows various shell thicknesses to be generated and examined quickly. For example, a 5mm shell built from the wax casting part was created in under 2 minutes.

Calculating View Factors

A critical aspect of investment casting is the calculation of view factors between all surfaces in the simulation. Every surface that “sees” another surface requires a calculation of how the two surfaces see each other. The orientation of each surface relative to the others and the emissivity of each must be evaluated. For complex shapes, the surface is subdivided, or clustered, and the view factor between each cluster is computed.
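
The kernel of such a computation is the differential view factor between two small patches, dF12 = cos(theta1) * cos(theta2) / (pi * s^2) * dA2. The sketch below is a minimal illustration of that formula with assumed patch positions, normals, and areas; it is not the clustering algorithm itself.

```python
import math

# Minimal differential view-factor sketch (illustrative, not solver code).

def patch_view_factor(p1, n1, p2, n2, a2):
    """Approximate view factor from a small patch at p1 to a small patch at p2."""
    s = [b - a for a, b in zip(p1, p2)]          # vector from patch 1 to patch 2
    dist2 = sum(c * c for c in s)
    dist = math.sqrt(dist2)
    unit = [c / dist for c in s]
    cos1 = sum(c * d for c, d in zip(n1, unit))  # angle at patch 1
    cos2 = -sum(c * d for c, d in zip(n2, unit)) # angle at patch 2
    if cos1 <= 0 or cos2 <= 0:
        return 0.0                               # the patches cannot "see" each other
    return cos1 * cos2 / (math.pi * dist2) * a2

# Two 1 cm^2 patches facing each other 0.1 m apart:
f = patch_view_factor((0, 0, 0), (0, 0, 1), (0, 0, 0.1), (0, 0, -1), 1e-4)
```

The visibility test at the end is why clustering matters: the number of patch pairs grows quadratically, so grouping surfaces into clusters keeps the cost manageable.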

Understanding surfaces investment casting

Surface Clustering

In a Bridgman process, where the solidifying casting is being moved slowly through a selectively heated and cooled oven, the view factors are updated continuously throughout the simulation. This simulation result shows the surface clustering computed for the shell mold and the internal surfaces of the oven.

Cluster Generation

A number of user-adjustable controls for cluster generation are available to minimize memory use and simulation runtime. For example, the cluster size can be set relatively large so that iterative simulations run quickly. As the design options are narrowed, more refined detail can be added to zero in on the final design.

Here we see that the solidifying casting has moved downward from the heated portion of the oven through a cooling ring so that the casting solidifies from the bottom to the top. This process allows a directional grain structure to be formed.

This simulation shows the temperature distribution in the solidifying casting on the left and the solid fraction on the right. The feeders at the top of each part provide liquid metal to the casting as it solidifies and shrinks.

Many process parameters can affect the outcome of an investment casting. With FLOW-3D CAST v5.1 in your design toolbox, the effect of these parameters, including the temperature profiles of the heated and cooled sections of the oven, the initial shell temperature, and the rate of motion of the solidifying casting through the oven, can be studied in-depth before casting a single part.

John Ditter

Principal CFD Engineer at Flow Science

► FLOW-3D CAST v5.1 Released
  16 Jun, 2020

Featuring new process workspaces and state-of-the-art solidification model

SANTA FE, NM, June 16, 2020 — Flow Science, Inc. has announced a major release of their metal casting simulation software, FLOW-3D CAST v5.1, a modeling platform that combines extraordinary accuracy with versatility, ease of use, and high performance cloud computing.

FLOW-3D CAST v5.1 features new process workspaces for investment casting, sand core making, centrifugal casting, and continuous casting, as well as a chemistry-based alloy solidification model capable of predicting the strength of the part at the end of the process, an expansive exothermic riser database, and improved interactive geometry creation. FLOW-3D CAST now has 11 process workspaces that cover the spectrum of casting applications, which can be purchased individually or as bundles.

“Offering FLOW-3D CAST by process workspace gives foundries and tool & die shops the flexibility to balance their needs with cost, in order to address the increased challenges and demands of the manufacturing sector,” said Dr. Amir Isfahani, CEO of Flow Science.

FLOW-3D CAST v5.1’s brand new solidification model advances the industry into the next frontier of casting simulation – the ability to predict the strength and mechanical properties of cast parts while reducing scrap and still meeting product safety and performance requirements. By accessing a database of chemical compositions of alloys, users can predict ultimate tensile strength, elongation, and thermal conductivity to better understand both mechanical properties and microstructure of the part.

“This release delivers the complete package – a process-driven workspace concept for every casting application paired with our unparalleled filling and now, groundbreaking microstructure and solidification analyses. Expert casting knowledge pre-loads sensible components and defaults for each workspace, putting our users on a path to success each time they run a simulation. FLOW-3D CAST v5.1 is going to take the industry by storm,” said Dr. Isfahani.

Additionally, databases for heat transfer coefficients, air vents, HPDC machines, and GTP Schäfer risers provide information at users’ fingertips. The new Exothermic Riser Database along with the Solidification Hotspot Identification tool helps users with the precise placement of exothermic risers to prevent predicted shrinkage.

A live webinar outlining the new developments and how to apply them to casting workflows will take place on July 15 at 1:00 pm EST. Registration is available online > 

Go here for an extensive description of the FLOW-3D CAST v5.1 release improvements > 

About Flow Science

Flow Science, Inc. is a privately-held software company specializing in transient, free-surface CFD flow modeling software for industrial and scientific applications worldwide. Flow Science has distributors for FLOW-3D sales and support in nations throughout the Americas, Europe, and Asia. Flow Science is located in Santa Fe, New Mexico.

Media Contact

Flow Science, Inc.
683 Harkle Rd.
Santa Fe, NM 87505
Attn: Amanda Ruggles
info@flow3d.com
+1 505-982-0088

Mentor Blog top

► Event: Integrated Electrical Solutions Forum (IESF) Conferences
  24 Jul, 2020

Come see Mentor Graphics automotive tools in action at Integrated Electrical Solutions Forum. This FREE event also includes industry presentations, case studies, product expo, networking events and technical tracks of industry and technical sessions.

► Technology Overview: Simcenter FLOEFD 2020.1 Package Creator Overview
  20 Jul, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo or CATIA V5. With this release, Simcenter FLOEFD helps users create thermal models of electronics packages easily and quickly. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 Electrical Element Overview
  20 Jul, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo or CATIA V5. With this release, Simcenter FLOEFD allows users to add a component into a direct current (DC) electro-thermal calculation by the given component’s electrical resistance. The corresponding Joule heat is calculated and applied to the body as a heat source. Watch this short video to learn how.
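
The arithmetic underlying such a calculation is Ohmic dissipation, P = I^2 * R, applied as a heat source. A trivial illustration with assumed values (not FLOEFD code):

```python
# Illustrative DC Joule-heating arithmetic: a component of resistance R
# carrying current I dissipates P = I**2 * R watts.

def joule_heat_w(current_a, resistance_ohm):
    """Dissipated power in watts for a resistive component."""
    return current_a ** 2 * resistance_ohm

print(joule_heat_w(2.0, 0.5))   # 2 A through 0.5 ohm -> 2.0 W
```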

► Technology Overview: Simcenter FLOEFD 2020.1 Battery Model Extraction Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo or CATIA V5. With this release, the software features a new battery model extraction capability that can be used to extract the Equivalent Circuit Model (ECM) input parameters from experimental data. This enables you to get to the required input parameters faster and easier. Watch this short video to learn how.

► Technology Overview: Simcenter FLOEFD 2020.1 BCI-ROM and Thermal Netlist Overview
  17 Jun, 2020

Simcenter™ FLOEFD™ software, a CAD-embedded computational fluid dynamics (CFD) tool, is part of the Simcenter portfolio of simulation and test solutions that enables companies to optimize designs and deliver innovations faster and with greater confidence. Simcenter FLOEFD helps engineers simulate fluid flow and thermal problems quickly and accurately within their preferred CAD environment, including NX, Solid Edge, Creo or CATIA V5. With this release, Simcenter FLOEFD allows users to create a compact Reduced Order Model (ROM) that solves at a faster rate, while still maintaining a high level of accuracy. Watch this short video to learn how.
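
Conceptually, a reduced order thermal model can be as small as a resistor-capacitor network. The sketch below uses assumed values and is not the BCI-ROM algorithm; it time-steps a one-node network C * dT/dt = P - (T - T_amb)/R, which settles at T_amb + P * R.

```python
# One-node thermal RC network, stepped with explicit Euler (illustrative).
# All parameter values are assumed placeholders.

def solve_rc(power=5.0, r_th=10.0, c_th=2.0, t_amb=25.0, dt=0.1, steps=2000):
    """March C*dT/dt = P - (T - T_amb)/R and return the final temperature."""
    temp = t_amb
    for _ in range(steps):
        temp += dt / c_th * (power - (temp - t_amb) / r_th)
    return temp

print(f"steady junction temperature ~ {solve_rc():.1f} C")   # ~ 25 + 5*10 = 75 C
```

Because such a network has only a handful of states instead of millions of mesh cells, it solves orders of magnitude faster, which is the appeal of ROM and thermal-netlist export.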

► On-demand Web Seminar: Avoiding Aerospace Electronics Failures, thermal testing and simulation of high-power semiconductor components
  27 May, 2020

High semiconductor temperatures may lead to component degradation and ultimately failure. Proper semiconductor thermal management is key for design safety, reliability and mission critical applications.

Tecplot Blog top

► Tecplot 360 Basics – Python Load Custom File Formats
  23 Sep, 2020

Learn how to load custom data file formats using the Tecplot 360 Python API, PyTecplot. In this training, we cover:

  • What is PyTecplot?​
  • Demo loading Haiti Earthquake data​
  • Overview of data organization and terminologies​
  • Python data access APIs (in brief)​
  • Loading XY-line data​
  • Putting it all together​

Q&A from Tecplot 360 Basics – Load Custom Data File Formats with Python

Load-Custom-Data-File-Formats

Can PyTecplot compute a time average?

Yes! See the Time Average script on our GitHub site, TimeAverage.py.

Can PyTecplot automatically do data probes over time?

Yes again! See the script on our GitHub site, tpprobe.py.

You may also want to try Tools>Probe to create a time series plot right in the Tecplot 360 user interface.
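
Conceptually, a probe over time reduces to picking the node nearest the probe location and reading its value at each time step. The following plain-Python sketch illustrates that idea; it is not the tpprobe.py implementation.

```python
# Illustrative nearest-node probe over time (not tpprobe.py).

def probe_over_time(nodes, steps, point):
    """nodes: list of (x, y); steps: list of per-node value lists, one per time step."""
    dist2 = [(x - point[0]) ** 2 + (y - point[1]) ** 2 for x, y in nodes]
    i = dist2.index(min(dist2))                  # index of the nearest node
    return [values[i] for values in steps]       # that node's value at every step

nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
history = probe_over_time(nodes, [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], (0.9, 0.1))
print(history)   # nearest node is (1, 0) -> [2.0, 5.0]
```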

Is there a Python script to unite multiple meshes?

Of course! See the GitHub script, CombineFEZones.py.

This script loads an example dataset, so you’ll have to modify it to load your data, or simply use data that’s already loaded.
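
The bookkeeping such a script must get right is small but easy to botch: concatenate the node lists and offset the second zone's connectivity by the first zone's node count. An illustrative sketch of that step (not CombineFEZones.py itself):

```python
# Illustrative FE-zone merge: concatenate nodes, offset connectivity.

def combine_zones(nodes_a, elems_a, nodes_b, elems_b):
    """Merge two finite-element zones into one node list and one element list."""
    offset = len(nodes_a)
    nodes = nodes_a + nodes_b
    elems = elems_a + [tuple(n + offset for n in e) for e in elems_b]
    return nodes, elems

nodes, elems = combine_zones(
    [(0, 0), (1, 0), (0, 1)], [(0, 1, 2)],
    [(2, 0), (3, 0), (2, 1)], [(0, 1, 2)],
)
print(elems)   # [(0, 1, 2), (3, 4, 5)]
```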

Can we achieve high-quality videos with PyTecplot? And can you show how to generate plots using PyTecplot?

Yes. PyTecplot uses the Tecplot 360 engine, so the image and video quality will be the same as if you exported directly from Tecplot 360. See the PyTecplot export module in our documentation.

When exporting a sequence of images using a script, is there a significant difference between macros and PyTecplot?

No. PyTecplot and macros use the same Tecplot 360 engine, so there will be no difference in image quality. There may be slight differences in performance, as each language is interpreted and the cost of interpreting differs.

PyTecplot in ‘batch’ mode runs faster than PyTecplot in ‘connected’ mode.

How can I find the cell that has the maximum and the minimum values?

To do this, use this script in our Tecplot GitHub repository. It finds and places a scatter point at the min or max values. This Knowledge Base article shows you how.
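
Stripped of the plotting, the core task of that script is an argmin/argmax over the data values. A minimal sketch of that step (illustrative, not the GitHub script):

```python
# Locate the extreme values and their node indices so scatter points
# can be placed there (illustrative sketch).

def find_extremes(values):
    """Return ((index_of_min, min_value), (index_of_max, max_value))."""
    imin = min(range(len(values)), key=values.__getitem__)
    imax = max(range(len(values)), key=values.__getitem__)
    return (imin, values[imin]), (imax, values[imax])

print(find_extremes([3.2, -1.0, 7.5, 0.0]))   # ((1, -1.0), (2, 7.5))
```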

Where do I go for Tecplot technical support?

To find support in your region, please see our Distributor Map and scroll down for contact information.

The post Tecplot 360 Basics – Python Load Custom File Formats appeared first on Tecplot.

► Tecplot Europe Solutions for CFD Visualization
  17 Sep, 2020

The Tecplot Europe engineering team specializes in custom solutions in the field of numerical simulation, especially CFD. For the past 25 years, they have been building long-term partnerships by understanding customers’ unique problems and delivering easy-to-implement solutions.  

Tecplot Europe (also known as Genias Graphics) provides Tecplot sales and support to the European and Western Asian markets.

On any given day, team members may be working on optimizing the output of a customer’s in-house solver, automating customer workflows with Python scripts, or extending the capability of Tecplot 360 with addons.

Implementing complex technical requirements in a tailor-made way for each and every customer is our most important task.
–Lothar Lippert, Tecplot Europe Manager.

Tecplot Output for Your In-House Solver – Optimizing Performance

Many customers are working on their own in-house codes and solvers. These solvers use different grid sizes, may have unsteady simulations, and use various mesh types – some of which may have meshes that change over time. The Tecplot 360 suite of tools can handle all of that with ease.

The key to optimal performance is a good output file. Tecplot Europe engineers are experts at optimizing a solver’s output to achieve the best possible performance and optimal user experience. Small changes to an output file often result in 10 times better performance, and it may take just a few minutes to change.

Automation with Python and Macros – Optimizing Your Time

Repetitive tasks and generating reports can eat up a lot of time. Automating these routine tasks is a tremendous time savings (as well as a relief from doing mundane tasks). Virtually every day, we help customers automate their workflows with Python and Macros scripts.

Often customers do not know what is possible, and what can be accomplished with macros and Python scripts. We can share and explain sample code and provide ready-to-use scripts and macros, which are also easy for customers to maintain and extend.

Tecplot Europe has several scripts available on demand. For example, a user wanted to automate the creation of a plot of forces and moments vs. span on a wing, together with a visualization of Cp plots along the 3D view of the geometry. The Python script is available in Tecplot’s GitHub repository.

Tecplot Europe Add-On Development

Tecplot 360 is known as the most complete post-processing desktop solution for CFD visualization. However, some of our customers have very challenging requirements, which can be solved only by developing additional functionality. Tecplot 360 is extendable with add-ons. The additional capability is wide ranging, from developing data loaders for special file formats and connecting to Web Map Services, to connecting to larger database applications, such as a dedicated tool to compare wind tunnel data with CFD results.

Optimization with High-Quality Tools – A Case Study

In a recent case, Tecplot Europe engineers helped DLR use Tecplot 360 to optimize the tail strake position of a generic transport aircraft. Optimization with high-quality tools found the best position. Visualization with Tecplot 360 was crucial in helping understand the effects of each tail strake position.

Read the Case Study »


Tecplot Europe can Help You Optimize Your Workflows

Contact Tecplot Europe:
Phone: +49 (0)9402 9480–0
Email: info-eu@tecplot.com

The post Tecplot Europe Solutions for CFD Visualization appeared first on Tecplot.

► Tail Strake Position Optimization for Generic Transport Aircraft
  17 Sep, 2020

German Aerospace Center
Without a visualization component in the design optimization loop, precisely configuring an aircraft’s tail strake position is simply impossible. Tail strakes are “fins” mounted horizontally on the rear fuselage that add stability and controllability. Engineers at the German Aerospace Center (DLR) found that the visualization component was crucial in understanding flow effects of a generic transport aircraft they were designing.

One requirement was the ability to load cargo at the aft end of the aircraft. This feature included a long, upswept ramp, which you can see in Figure 1. The fuselage upsweep created strong vortices, as shown in Figure 2. The vortices led to flow detachment, which reduced pressure recovery and increased drag. The DLR engineers needed to find a way to keep the flow attached and the drag reduced.

Figure 1. Long upswept ramp feature on generic transport aircraft. Figure 2. Vortices along the ramp.

 

Finding an Optimal Tail Strake Position

The Problem

Figure 3. The problem was finding the optimal tail strake position.

The Goal

Our goal was to weaken the tail vortices caused by the ramp. To do this, tail strakes needed to be precisely configured to produce effective counter-rotating vortices.

The Challenge

The problem was to find the optimal tail strake position and orientation (Figure 3).

The Method

A design optimization – from CAD to mesher to solver and optimizer – included a visualization tool, Tecplot 360, which helped in understanding and trusting the results.

Optimization Loop with Tecplot 360

Figure 4. Design optimization loop: Parametric CAD (CATIA V5), Parametric Mesher (CENTAUR), Solver (TAU + 2x adaptations), Optimizer (SUBPLEX), and Visualization with Tecplot 360.

Three parameters investigated

Figure 5. Three parameters were investigated.

Three parameters were investigated as shown in Figure 5:

  • Strake position along u-isocurve of tail surface (generally the fore-aft position)
  • Strake position along v-isocurve (radial location on the fuselage)
  • Rotation angle φ against tangent in u-direction (angle of the strake against the fuselage)

The Results

The resulting XY plots, produced by Tecplot 360, are shown in Figure 6.

  • The optimizer did a good job, showing ideal asymptotic convergence of objective and design variables.
  • The optimization resulted in a 4% improvement of lift to drag ratio due to strake location – a very significant improvement.
  • A 28% difference between worst (No. 4) and best (No. 44) strake position showed the importance of precisely locating the strake for maximum benefit. See Figure 7.
Tecplot 360 XY Plots

Figure 6. The results are shown in Tecplot 360 XY plots.

Best and Worst Tail Strake Position

In the worst tail configuration, the strake amplified the tail vortex, leading to a larger area of separation and high drag. In the optimal configuration, the strake truncated the tail vortex and minimized separation. While optimizers are generally good at driving to a desired result, understanding and trusting the underlying physics requires good visualization. Best and worst scenarios are shown in Figure 8.

Optimization with high-quality tools found the best position. Visualization with Tecplot 360 was crucial in helping understand the effects.


 

Get Help Optimizing Your Workflows

Contact Tecplot Europe:
Phone: +49 (0)9402 9480–0
Email: info-eu@tecplot.com

Learn more about Tecplot Europe Support

Tail strake configuration

Figure 7. Comparison of tail configurations.

Best and worst tail strake configuration

Figure 8. Best and worst tail configuration.

The post Tail Strake Position Optimization for Generic Transport Aircraft appeared first on Tecplot.

► Compusense Acquired by Vela Software
    8 Sep, 2020

Vela Software is pleased to announce the acquisition of Compusense which will report into Vela’s subsidiary Tecplot, augmenting an expanding range of statistical tools with the industry-leading platform for Consumer and Sensory Science testing.

September 1, 2020 – Compusense develops Compusense Cloud and Compusense20, powerful SaaS tools used by major food and beverage companies, CPG multinationals, and luxury brands to plan, execute, and analyze consumer and sensory tests, leading to insights that help these companies launch and refine successful products. With over 30 years of research and innovation, Compusense sets the standard for sensory research software.

Founders Karen Phipps and Chris Findlay will lead the transition to an all-internal management team comprised of employees with a combined 55 years of experience at the company.

“Karen and I are delighted that we can place the company that we have grown and nurtured for 34 years into the strong hands of Vela,” Findlay said. “Their expertise in growing software companies gives Compusense a solid foundation upon which it can build into the future. We are very pleased that our management team will be able to retain Compusense’s culture and continue to support our amazing clients as we always have.”

Tom Chan, President of Tecplot, thanked Karen and Chris, adding, “they have worked tirelessly to build a great company, and more importantly a great team, who are passionate about helping customers and advancing sensory testing. With the profound disruptions caused by COVID, brands need valued partners more than ever to help them be successful in the marketplace and Compusense is key to ensuring that products hit the right notes with consumers. We look forward to working closely to bring their important technology to more clients across the globe.”

Compusense is based in Guelph, ON, and serves customers world-wide.

If you have any questions about this acquisition, or the capabilities of Compusense’s products, please contact Compusense at info@compusense.com.

About Tecplot, Inc.

An operating company of Vela Software International, Inc., itself an operating group of Toronto-based Constellation Software, Inc. (CSI), Tecplot is the leading independent developer of visualization and analysis software for engineers and scientists. CSI is a public company listed on the Toronto Stock Exchange (TSX:CSU). CSI acquires, manages, and builds software businesses that provide mission-critical solutions in specific vertical markets.

 

The post Compusense Acquired by Vela Software appeared first on Tecplot.

► Tecplot 360 Basics – Equations
    2 Sep, 2020
This training session covers data alteration through equations in Tecplot 360, including:
  • Referencing Variables
  • Math syntax & Functions
  • IF conditions
  • Operating on subsets of zones
  • Use of I & J special values
  • Referencing zones in equations

Q&A from the Equations Training

Equations-in-Tecplot-360

Can I compute a time average over an interval?

This capability is not directly available in the Tecplot 360 user interface, but it is possible with PyTecplot, our robust Python API for Tecplot 360. Scripts are available in the public Tecplot handyscripts repository on GitHub; scroll to find TimeAverage.py. Behind the scenes, this script uses another script called tpmath.py, which also includes a phased-average function. It was written by a Tecplot customer, and we thank them for writing it! Because it is on GitHub, it's supported by our user community.
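
In spirit, a time average over an interval is just a point-wise mean of the variable across the time steps that fall in that interval. Here is a minimal pure-Python sketch of that idea (an illustration only, not the actual TimeAverage.py, which uses the PyTecplot API to loop over the zones of a real dataset):

```python
def time_average(snapshots, t_start, t_end):
    """Average a variable over the snapshots whose time falls in [t_start, t_end].

    `snapshots` is a list of (time, values) pairs, where `values` holds the
    variable's nodal values for that time step (all on the same mesh).
    """
    selected = [vals for t, vals in snapshots if t_start <= t <= t_end]
    if not selected:
        raise ValueError("no snapshots in the requested interval")
    n = len(selected)
    # Point-wise average across the selected time steps.
    return [sum(col) / n for col in zip(*selected)]

# Example: three time steps of a two-node variable.
snaps = [(0.0, [1.0, 2.0]), (1.0, [3.0, 4.0]), (2.0, [5.0, 6.0])]
print(time_average(snaps, 0.0, 1.0))  # [2.0, 3.0]
```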

Can contours be classified and generated based on categories?

The Tecplot 360 contour legend typically shows only numeric values, but sometimes you may want to show values that are string based. For example, you may have different material properties like sand or soil in a geoscience case, or a CONVERGE dataset where particles may be in the fluid or may have bounced or rebounded. You can show strings using our custom label set. We have a blog, Creating a Materials Legend, that uses a bit of Tecplot Kung Fu to add a string-based custom label set. You can use that blog as a guide.

Can I do a while loop in specify equations?

In the Tecplot 360 user interface, the answer is no. But with our Python API, PyTecplot, the answer is yes! In the video example, Jared showed finding the difference between the two zones, with a blended wing body shape. If that were a time-dependent simulation, you could use our looping capability to compute that difference over time. The Tecplot 360 macro language has “for” and “while” loop capabilities. And Python, of course, has many logical and flow control operations. You can use these scripts in conjunction with equations.

Is there a discount for academic licenses for students and faculty? And how do I get one?

We have several academic license options for those at degree-granting universities or institutions. You can email Jared McGarry at campus@tecplot.com, and he can help you decide which type of license you need – single user, department, college, campus, or site licenses (with effectively unlimited seats). You can also visit us at Tecplot Academic Suite.

Can we get a PDF of the presentation to help remember the equations?

We don’t have a PDF of this presentation, but you can watch the recording (above). Also, click the help button in the Specify Equations dialog to see a reference for the available functions.

Can an equation reference data in a different frame?

No – equations can only operate on data in the active frame. If you have two datasets that you want to compare, you’ll need to load both into the same frame.

What is the best way to calculate the difference between different zones that have different meshes?

In the blended wing body example from the video, the two zones have the same mesh, so you can simply subtract one zone from the other. If the zones have different meshes, you will need to interpolate the results onto a common mesh with the same number of points. See the video tutorial Comparing Grids: Interpolation of Differing Meshes.
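
The idea behind interpolating to a common mesh can be illustrated in one dimension. This is a simplified pure-Python sketch, not Tecplot 360's interpolation (which operates on 2D and 3D zones): resample both datasets onto a shared set of points, then subtract point-wise.

```python
def lerp_sample(xs, ys, x):
    """Linearly interpolate the piecewise-linear curve (xs, ys) at x."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("x outside the data range")

def difference_on_common_grid(mesh_a, vals_a, mesh_b, vals_b, common):
    """Resample both datasets onto `common`, then subtract point-wise."""
    a = [lerp_sample(mesh_a, vals_a, x) for x in common]
    b = [lerp_sample(mesh_b, vals_b, x) for x in common]
    return [ai - bi for ai, bi in zip(a, b)]

# Two "meshes" with different point counts sampling the same function y = x,
# so the difference on the common grid should be zero everywhere.
diff = difference_on_common_grid([0, 1, 2], [0, 1, 2],
                                 [0, 0.5, 1.5, 2], [0, 0.5, 1.5, 2],
                                 common=[0.25, 1.0, 1.75])
print(diff)  # [0.0, 0.0, 0.0]
```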

Do you have license roaming, and how does it work?

License roaming is enabled for network licenses. You must be connected to your license server to roam your license. Go to Help>License Roaming.

Is there a quick way in alter equation to find a deviation from a node value?

Sure, let’s look at an example: I want to find the difference of U from its value at X=1, Y=1, Z=1. First use the Probe tool to find the value of U at that XYZ location. From the Probe results you can Copy the U value. Then in the Data Alter dialog you would use this equation:

{U_delta} = {U} - 1.234

Where 1.234 is the value you copied from the Probe dialog. This will create a new variable called “U_delta”.

Can we use alter equation to find max-min of a variable and its location?

The best way to find min-max is with a Python script available on our GitHub site: Tecplot Handyscripts on Github, and look for display_max_variable_value.py.

You supply the zone and the variable through Python. Then the script will, in Tecplot 360, point to the location of the maximum value on your plot and display it in a text box.

There are two options for polyline point extraction. Which one should I use?

There are several options for extracting data along a line in Tecplot 360. Which one to use depends on your use case.

  1. Use the menu option Data>Extract>Precise Line. This will allow you to enter two X, Y, Z locations. You can then extract data across a perfectly straight line between those two points.
  2. Use the menu option Data>Extract>Polyline Over Time to extract data along a polyline at each time step of a transient solution.
  3. Select a polyline on your plot, right-click, and you can select Extract Points from the context menu.

Is there any plan in the future to duplicate a page like we do for a frame?

We have no immediate plans for this capability. We could create a Tecplot macro or a Python script that would mimic the behavior by looping over each individual frame on a page and copying and pasting it to a new page. If this is something you do frequently, contact support@tecplot.com, and we can create a custom solution for you.

If I extract the line across the wall, do I get wall quantities?

In the internal combustion case from the video [timestamp: 38:46], we have volume data representing the fluid, and boundary data representing the wall. When you extract across the line, Tecplot 360 will extract points from the first zone that it encounters. In this case, it will encounter the wall. If you want to make sure of that, open the Zone Style dialog and ensure the wall zone is the only active zone.

The post Tecplot 360 Basics – Equations appeared first on Tecplot.

► Isosurface Algorithms – Visualizing Higher Order Elements
  11 Aug, 2020

Visualization of Higher-Order Elements – Part 3: Isosurface Algorithms

This blog was written by Dr. Scott Imlay, Chief Technical Officer, Tecplot, Inc.

In this blog I’ll be discussing our research into isosurface algorithms for higher-order finite-element solutions. The first blog on this topic was A Primer on Visualizing Higher-Order Elements and the second was on the Complex Nature of Higher-Order Finite-Element Data.

Big cells beget little cells
That model their complexity
And little cells have smaller cells
That we choose selectively

In the second blog, I described how the isosurface passing through a linear tetrahedron is a simple plane described entirely by its intersections with the edges. Since the solution varies linearly along the edges, you can calculate these intersections very quickly. You can also quickly exclude edges based on the range of the nodal values at either end of the edge: if the isosurface value is greater than the maximum node value, or less than the minimum node value, no further computation is needed. In this way, the vast majority of the edges can be excluded from further computation by a couple of simple floating-point compares. This, among other optimizations, makes this technique very fast.
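
The per-edge logic can be sketched in a few lines (an illustration of the technique, not Tecplot's actual implementation):

```python
def edge_crossing(v0, v1, iso):
    """Return the parametric location (0..1) where the isosurface crosses
    the edge with nodal values v0 and v1, or None if the edge is excluded."""
    # Fast rejection: on a linear edge the solution stays within the range
    # of the two nodal values, so an iso value outside that range cannot cross.
    if iso < min(v0, v1) or iso > max(v0, v1):
        return None
    if v0 == v1:
        # Degenerate constant edge (iso == v0 here): every point matches;
        # return the first endpoint by convention.
        return 0.0
    # Linear variation along the edge gives the crossing directly.
    return (iso - v0) / (v1 - v0)

print(edge_crossing(1.0, 3.0, 2.0))  # 0.5
print(edge_crossing(1.0, 3.0, 5.0))  # None (excluded by the range test)
```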

Isosurfaces in a Quadratic, Higher-Order Element

In comparison, an isosurface in a quadratic, or even higher-order, element can be quite complex. The isosurface is not, in general, planar and it doesn’t even have to intersect the edges or surfaces of the element (see Figure 2). You can have isosurfaces that are entirely contained within an element like little islands. How do we extract these isosurfaces?

Isosurface in linear and quadratic tetrahedron

Figure 1. Isosurface in linear tetrahedron (left). Figure 2. Isosurface in quadratic tetrahedron (right)

Nearly all visualization techniques for higher-order isosurfaces involve subdividing the higher-order element into a large number of linear sub-elements. The variation of the solution across these sub-elements approximates the non-linear solution, and the approximation error decreases as the number of sub-elements increases. Once you have the sub-elements, you can use existing isosurface algorithms for linear elements to extract an approximate non-linear isosurface. Sounds easy, right?

Visualization Techniques for Higher-Order Isosurfaces

It is fairly easy to implement an algorithm where all higher-order elements are sub-divided into a large number of linear elements. A quadratic tetrahedron, for example, may be divided into eight sub-tetrahedra using the existing ten nodes. This sub-division is shown in Figure 3. Each of those sub-tetrahedra may be further subdivided into eight sub-sub-tetrahedra by creating new nodes at the edge centers, interpolating to those nodes using the full quadratic element basis function, and subdividing as was done for the original element. This process can be repeated until the non-linear isosurface is sufficiently resolved.

Unfortunately, the number of sub-cells grows exponentially: after the first sub-division it is eight sub-cells, after the second level of sub-divisions it is 64, after the third level of sub-divisions it is 512, and so on. If you start with 500 thousand higher-order cells you will have 256 million linear sub-cells after three levels of sub-division. It is not cheap to create those 256 million linear sub-cells!
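
The arithmetic is simple enough to check directly:

```python
# Each level of subdivision splits every tetrahedron into 8,
# so the cell count grows as 8**levels.
def sub_cell_count(n_cells, levels):
    return n_cells * 8 ** levels

for k in range(1, 4):
    print(k, sub_cell_count(1, k))   # 1 -> 8, 2 -> 64, 3 -> 512

print(sub_cell_count(500_000, 3))    # 256000000 linear sub-cells
```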

Tetrahedron sub-division

Figure 3. Tetrahedron sub-division.

Most of my research has been on optimizations to make this faster. Specifically, are there simple tests that will allow us to eliminate cells early in the process?

For example, for linear elements, we compute the min/max range of the isosurface variable for all nodes in the element, and we exclude cells where the isosurface value is not in that range. We do this because the cell extrema (min’s and max’s) in linear cells are guaranteed to be at the nodes.

Unfortunately, for most basis functions the extrema in a higher-order cell are not generally at the nodes but may be anywhere within the cell. If we can find a way to quickly exclude higher-order cells, we can significantly reduce the computational cost and memory usage of the subdivision process.

Optimizing the Isosurface Algorithm

It turns out you can eliminate many cells based on the min/max values of the isosurface variable at the nodes. A heuristic formula that seems to work is to keep any cell where the isosurface value satisfies this formula:

2φ_min − φ_max < φ_iso < 2φ_max − φ_min

I wish I had a mathematical proof that this formula always works, but it has worked in all the cases I’ve tested so far. This formula basically has a buffer equal to the range of the isosurface variable in the non-linear cell. The same formula, with smaller buffers, is applied to the sub-cells at each level of recursion. That is, the formula is also applied when sub-dividing the sub-cells, and again when subdividing the sub-sub-cells, but the size of the buffer on the isosurface variable range is smaller each time.
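
The keep/discard test is essentially a one-liner. A small sketch of the heuristic (illustrative only, not Tecplot's internal code):

```python
def keep_cell(phi_min, phi_max, phi_iso):
    """Heuristic from the text: keep subdividing a higher-order cell when the
    iso value lies within a buffer of one cell-range (phi_max - phi_min)
    on either side of the nodal min/max."""
    return (2 * phi_min - phi_max) < phi_iso < (2 * phi_max - phi_min)

# Nodal values span [1, 2]; the buffer widens the test window to (0, 3).
print(keep_cell(1.0, 2.0, 2.5))   # True  -- kept despite exceeding the nodal max
print(keep_cell(1.0, 2.0, 3.5))   # False -- safely excluded
```

Note that an iso value of 2.5 keeps the cell even though it exceeds the nodal maximum of 2.0, which is exactly the safety margin needed because a higher-order solution can overshoot its nodal values inside the cell.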

Selectively subdividing elements based on the formula above dramatically reduces the cost of extracting higher-order isosurfaces. Figure 4 shows four levels of subdivision for an isosurface of constant radius from a point. By the fourth level of subdivision, all but 8,617 out of a possible 663,552 sub-cells have been excluded. Over 98.7% of the sub-cells have been discarded, and further computations should be nearly a factor of 100 faster!

Selective subdivision for quadratic tetrahedral isosurface extraction

Figure 4. Selective subdivision for quadratic tetrahedral isosurface extraction.

Figure 5 shows the extracted isosurface at various levels of subdivision. Four levels of subdivision are sufficient to create a very smooth isosurface.

Quadratic tetrahedral isosurface with increasing levels of subdivision

Figure 5. Quadratic tetrahedral isosurface with increasing levels of subdivision.

In my next blog, I will discuss the results of our research into higher-order finite-element curved surface visualization algorithms. See all blogs on higher order elements.

Subscribe to Tecplot

Get all the latest news from Tecplot, Inc.

Subscribe to Tecplot 360

The post Isosurface Algorithms – Visualizing Higher Order Elements appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► BSY rings the Nasdaq bell last week
  29 Sep, 2020

I had promised you a photo of Bentleys ringing the opening bell at the Nasdaq, but can go one better — a video!

The first 3:30 minutes or so are of the big LED pillar billboard in Times Square, showing pictures of Bentley employees (“colleagues” in Bentley-speak). Then the Nasdaq MC introduces the company and Greg Bentley, who starts speaking at 5:30 into the recording. At 11:30 you can see the countdown and hear a bell, likely signaling the opening of trading last Thursday.

I’ve been in Times Square many times and this video doesn’t do justice to the size and energy of the space, nor to how many LED billboards likely carried the images on that pillar. The Bentley Systems logo and the images of people and projects were probably plastered all over this huge space — creating a Bentley-themed montage in the square.

I bet it was awesome.

(I apologize for the weird ad, Netflix or something, in the lower left of the video. Not sure why it’s there, but it’s part of the YouTube video and I can’t seem to get rid of it.)

The post BSY rings the Nasdaq bell last week appeared first on Schnitger Corporation.

► Bentley’s IPO. Done. Finally.
  23 Sep, 2020

Bentley shares started trading under the BSY symbol at around 11:15 on Wednesday. Someone told me that the bookrunner, the brokerage firm that handles trading of the shares, had to determine the opening price, which I thought would be the price Bentley set last night, $22. Nope — turns out it was $28, and the price went up from there, as you can see in this screen capture from Marketwatch at 4:04 PM on September 23, 2020:

As you can see, the price wobbled a bit but eventually closed at $33.49, up a whopping 52% on the day — and on a day when technology shares, in general, went up 2.6% while the Nasdaq overall fell 3%. Not a bad day for Bentley, at all.

By the way, I was wrong when I said we didn’t know who wanted to sell shares in this IPO. In the latest version of the S-1, on numbered page 188 is a chart that shows the number of shares owned before and after the sale — by subtraction then, we can figure it out. Many people will be selling — but not Greg, Keith, Barry or Raymond Bentley, who all keep their shares. Not Siemens, which maintains its 14%ish stake. The sellers are longtime and recently retired employees, a charitable giving fund, and clumps of employees in aggregate — in other words, as we expected, it’s a liquidity event for some of the people that made Bentley Systems, Bentley Systems.

What did we learn today? That IPOs are unpredictable, even with all the prep in the world. Some shares go absolutely crazy after they start trading, like Snowflake last week: shares priced at $120 in the IPO closed at $253.93 on their first day, more than double that price — and the $120 was itself triple what the company initially thought it could get just a week earlier. Other shares go in the opposite direction, as the market makers can’t find buyers for the shares they’ve guaranteed. Basic economics: desirable things that are scarce command higher prices than stuff no one wants, or that there’s too much of. So much is out of the offeror’s control — had there been a dire political, social, or pandemic event, Snowflake and Bentley might have seen very different results.

Oh, and Snowflake also took time to get going. It started trading right around noon on its first day — apparently, it takes that long to match up buy and sell orders and begin trading. I thought this was all done by computer nowadays, especially since the NASDAQ doesn’t have a trading floor. It’s charming to think of a guy in an old-fashioned eyeshade, still making all this happen.

The post Bentley’s IPO. Done. Finally. appeared first on Schnitger Corporation.

► PTC acquires ioxp for cognitive AR
  23 Sep, 2020

PTC acquires ioxp for cognitive AR

I saw this on Ralf Steck’s blog just now and verified it on PTC Germany’s website:

PTC has acquired German start-up ioxp for its cognitive AR and AI technologies. Ralf says that

ioxp is a spin-off of the German Research Center for Artificial Intelligence (DFKI) and a pioneer in the field of video-based augmented reality. The Mannheim-based company offers cognitive AR and AI solutions for knowledge transfer, training and quality assurance … PTC plans to integrate ioxp technology to validate and verify process instructions into its Enterprise AR solution suite. This functionality will support critical manufacturing, assembly, inspection, and service use cases, improve workers’ experience and enable companies to more effectively ensure that quality assurance standards are met.

Details of the transaction were not disclosed.

As I understand it, cognitive AR automatically generates content by observing actions and recording them. Say you want to create operating instructions for a piece of equipment and think AR is the most useful way to show those to a new colleague. Cognitive AR watches a skilled operator perform the task and then generates the AR instructions. IBM calls this “cognitive operations guidance” and says it “can recognize what’s in the devices’ field of view, answer questions, and even understand and ask about what you’re gesturing toward – the leaky pipe under the sink on the left. This technology has the potential to simplify and clarify many types of interactions in daily life, for consumers and in industry”.

Will post again if/when I learn more!

The post PTC acquires ioxp for cognitive AR appeared first on Schnitger Corporation.

► Quickie: Bentley IPO now at $22/share
  23 Sep, 2020

Quickie: Bentley IPO now at $22/share

I am no expert at IPOs but this seems unusual to me: Bentley’s offering had originally set a price range of $17 to $19 per share, then yesterday that range was upped to $19 to $21 — and then last night, it was again raised, to $22/share. That means the selling shareholders will raise on the order of $272 million and Bentley’s overall valuation is now over $5 billion.

We still don’t know who, exactly, the selling shareholders are. The German newspaper, Handelsblatt, wrote

A [Siemens] spokesman left open the question of whether the Munich-based industrial group has sold part of its shares. In any case, the strategic partnership with Bentley will not change, he stressed. Speculation about a takeover of Bentley Systems by Siemens had been rejected by corporate circles.

[My translation. The newspaper is, of course, in German. “Unternehmenskreisen” is literally corporate circles, but I think we can also use it to mean “people with knowledge who were unwilling to be quoted”.]

There had been massive speculation that Siemens and Bentley could be using the IPO process to establish a price for Bentley, leading to a purchase by Siemens of the 85% or so of shares it doesn’t already own. Handelsblatt seems to think that’s off the table.

Barring any last minute changes, trading in Bentley shares is set to start today on the NASDAQ under the ticker symbol BSY. I’ll be watching the stock trades on Marketwatch to see what happens. The trading day starts at 9:30 AM ET, for what it’s worth. Not sure when Bentley’s shares will appear but I’ll be obsessively refreshing to see what happens.

A couple of people have asked why this matters. I find it fascinating because our little world of engineering and design and operations solutions isn’t nearly as sexy as Zoom, as controversial as Facebook, as self-promoting as Tesla. Yet it makes a significant difference in the lives of every person on the planet, every day. Clean water, affordable housing, communications – all of that, engineers and designers do. And Bentley enables its customers to do it better and faster. This IPO rewards shareholders who may be founders of companies Bentley acquired, longtime employees, or others who built the Bentley we know today. That’s cool.

In a practical sense, it also helps establish for the short term what other entrepreneurs might expect if they want to sell their businesses — compare your financials to Bentley’s. Look at its customer metrics, geographic reach, income statements and the other details in the S-1 filings to see how your baby stacks up. Altair has been acquiring like crazy, but not releasing price or other information, so we can’t use it as a baseline. Bentley’s disclosures help establish the value the market places on PLMish companies like it.

OK. Off to fuel up for all of that reflexive refreshing of the Marketwatch page. I hope the Bentley family rings the opening bell at the NASDAQ in Times Square (it was very cool when I got to see Altair’s James Dagg do that a couple of years ago), but I’m not sure what’s possible in this COVID era. If I find a picture I’ll post it.


The title image is of the massive bank of displays inside the NASDAQ headquarters in Times Square, from Wikimedia.

The post Quickie: Bentley IPO now at $22/share appeared first on Schnitger Corporation.

► Quickie: Bentley raises price in IPO
  22 Sep, 2020

Quickie: Bentley raises price in IPO

How interesting: Bentley just filed with the SEC to raise its price range from $17 to $19 per share, to $19 to $21 — that ups the value of the shares being sold to a max of $259,580,811. Why raise the offering price? Typically, it’s because there’s more demand for the shares than expected, and that makes sense: Bentley is a solid company with diverse customers around the world, and investors see it as a good place to park money for whatever their time horizon may be. And tech stocks are popular right now — especially those that offer something to keep industries going while workers shelter at home — so they’re even more in demand.

Bentley is expected to begin trading on Wednesday, September 23, under the ticker symbol BSY, if you want to follow what happens. Just a reminder: Bentley’s selling shareholders only sell these shares once; what you’ll see after the market opens is what happens to the people who bought those shares — will they sell them for more or less than they paid?

I know what I’m doing on Wednesday — reflexively refreshing my browser to see how this story plays out.

The post Quickie: Bentley raises price in IPO appeared first on Schnitger Corporation.

► Quickies: Bentley sets IPO price, Altair acquires for HPC
  17 Sep, 2020

Quickies: Bentley sets IPO price, Altair acquires for HPC

I go away for a couple of days, and what happens? Yup. Newsy things! Here are two items of interest:

Bentley issued another update to its IPO filing, this time with prices for the shares. You can see it here. (I have not read the whole thing, nor have I diffed it to find out what else may have changed. Soon. This is what I wrote about the last amended S-1.) In the latest update, we learn that Bentley is helping current shareholders sell around 10.75 million shares at a $17 to $19/share price range. What does that mean?

  • Bentley’s market cap would be between $4.4 billion and $4.7 billion, a 6x-ish multiple of 2019 revenue
  • This sale is for class B shares currently held by existing stockholders. Class B shares hold 1 vote each; class A shares have 29 votes/share. Bentley family members are the primary owners of the class A stock, which will hold 57% of the voting power — so while the class B shares can be owned by anyone, the Bentley family will still, in effect, control the company
  • Bentley shares are expected to begin trading on the NASDAQ market on Wednesday, September 23 under the symbol BSY. (How soon, and how exciting!)
  • I believe that this means that Bentley will be required to report earnings for the fiscal third quarter, sometime in October/November. Will confirm this once the dust settles.

Meanwhile, Altair announced its second acquisition in a week, also to do with HPC. The first was Univa (my note, here); this one is Ellexus, an input/output (I/O) analysis tool, which Altair says “helps customers find and address issues quickly, improving speed, accuracy and cloud readiness”. Ellexus Mistral and Breeze “complement Altair’s scheduling technology by providing per-job storage agnostic file and network I/O real-time monitoring to identify I/O latencies and bottlenecks for faster job execution times and better resource utilization”. Neither the price paid nor the revenue contribution was disclosed.

The post Quickies: Bentley sets IPO price, Altair acquires for HPC appeared first on Schnitger Corporation.

