
CFD Blog Feeds

Another Fine Mesh

► This Week in CFD
  12 Jul, 2019
This week’s CFD news starts with an intriguing article about posits, an alternative to floating-point numbers that is said to provide faster and more accurate computations. Coming soon to a computer near you? There are several very cool articles … Continue reading
► This Week in CFD
    5 Jul, 2019
For today’s post-Independence Day (in the U.S., although I guess it’s still post-yesterday everywhere) edition of This Week in CFD, we start with this unique application of CFD: studying the manner in which a 500 million year old organism fed … Continue reading
► I’m Cannon DeBardelaben and This Is How I Mesh
    4 Jul, 2019
I’ve always had a passion for all things aerospace. One of my earliest memories is going up in a biplane where we proceeded to do barrel rolls and loops. As a kid I read every aircraft or space encyclopedia I … Continue reading
► Stuff Engineers Don’t Learn in School, Part 2
    2 Jul, 2019
Our summer interns (Cade, Cannon, and Patrick) and I continue our discussion of Carl Selinger’s book Stuff You Don’t Learn in Engineering School. If you missed Part 1 of this series, we covered the first three chapters (Introduction, Writing, Speaking … Continue reading
► This Week in CFD
  28 Jun, 2019
With two weeks of CFD news to report, there’s more of everything from jobs to events, software releases, and cool applications. The wind turbine simulation here is from a new open-source CFD code called Nalu-Wind. Siemens released Screenplay, a cool … Continue reading
► Stuff Engineers Don’t Learn in School, Part 1
  26 Jun, 2019
This summer our three interns (Cade, Cannon, and Patrick) and I are reading and discussing the book Stuff You Don’t Learn in Engineering School by Carl Selinger. This is the first post in a series that will cover all 16 … Continue reading

F*** Yeah Fluid Dynamics

► Not everything that behaves like a fluid is a liquid or a gas....
  16 Jul, 2019


Not everything that behaves like a fluid is a liquid or a gas. In particular, groups of organisms can behave in a collective manner that is remarkably flow-like. From schools of fish to fire-ant rafts, nature is full of examples of groups with fluid-like properties. 

One of the most mesmerizing examples comes from these giant honeybee colonies, which essentially do “the wave” to frighten away predators like wasps. Researchers are still trying to understand and mimic the way these groups coordinate such behaviors. Can even complicated patterns be generated by a simple set of rules an individual animal follows? That’s the sort of question active matter researchers investigate. Check out the video above to see a whole cliff’s worth of bee colonies shimmering. (Image and video credit: BBC Earth)

► Living near the Rocky Mountains, it’s not unusual to look up and...
  15 Jul, 2019


Living near the Rocky Mountains, it’s not unusual to look up and find the sky striped with lines of clouds. Such wave clouds are often formed on the lee side of mountains and other topography. But even in the flattest plains, you can find clouds like these at times. That’s because the internal waves necessary to create the clouds can be generated by weather fronts, too.

Imagine a bit of atmosphere sitting between a low-pressure zone and a high-pressure zone. This will be an area of convergence, where winds flow inward and squeeze the fluid parcel in one direction before turning 90 degrees and stretching it in the perpendicular direction. The result is a sharpening of any temperature gradient along the interface. This is the weather front that moves in and causes massive and sudden shifts in temperature. 

On one side of the front, warm air rises. Then, as it loses heat and cools, it sinks down the cold side of the front. The sharper the temperature differences become, the stronger this circulation gets. If the air is vertically displaced quickly enough, it will spontaneously generate waves in the atmosphere. With the right moisture conditions, those waves create visible clouds at their crests, as seen here. For more on the process, check out this article over at Physics Today. (Image credit: W. Velasquez; via Physics Today)
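For a rough sense of the oscillation at work: a vertically displaced parcel in stably stratified air bobs up and down at the Brunt-Väisälä frequency,

N = \sqrt{\frac{g}{\theta}\,\frac{d\theta}{dz}}

where \theta is the potential temperature; clouds appear where the wave crests lift moist air above its condensation level.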

► On a hot day, it’s not unusual to catch a glimpse of a...
  12 Jul, 2019


On a hot day, it’s not unusual to catch a glimpse of a shimmering optical illusion over a hot road, but you probably wouldn’t expect to see the same thing 2,000 meters under the ocean. Yet that’s exactly what a team of scientists saw through the cameras of their unmanned submersible as it explored hydrothermal vents deep in the Pacific Ocean.

At these depths, the pressure is high enough that water can reach more than 350 degrees Celsius without boiling. The hot fluid from the vents rises and gets caught beneath mineral overhangs, forming a sort of upside-down pool. Since the index of refraction of the hot water is different from that of the colder surrounding water, we see a mirror-like surface at some viewing angles. Be sure to check out the whole video for more examples of the illusion. (Image and video credit: Schmidt Ocean; via Smithsonian; submitted by Kam-Yung Soh)
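For a rough sense of the optics: this is ordinary total internal reflection. The hot vent water has a lower refractive index than the cold water around it, so light arriving from the cold side reflects completely whenever its angle of incidence exceeds the critical angle,

\sin\theta_c = \frac{n_{hot}}{n_{cold}}

which is why the trapped pools look mirrored only from sufficiently glancing viewing angles.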

► In “Aurora”, artist Rus Khasanov uses fluids to create a short...
  11 Jul, 2019


In “Aurora”, artist Rus Khasanov uses fluids to create a short film full of psychedelic color and cosmic visuals. As in a soap bubble, the bright colors – as well as the pure black holes – come from the interference of light rays. The colors directly relate to the thickness of the fluid, and they allow us to see all the subtle flows caused by variations in surface tension. (Video and image credit: R. Khasanov)
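For a rough sense of the physics: for a film of thickness t and refractive index n viewed near normal incidence, reflected light interferes constructively when

2nt = \left(m + \tfrac{1}{2}\right)\lambda, \qquad m = 0, 1, 2, \ldots

The half-wavelength offset accounts for the phase flip at one surface, and it is also why the thinnest patches reflect nothing at all: those are the pure black holes in the footage.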

► Soft systems like this bubble raft can retain memory of how they...
  10 Jul, 2019


Soft systems like this bubble raft can retain memory of how they reached their current configuration. Because the bubbles are different sizes, they cannot pack into a crystalline structure, and because they’re too close together to move easily, they cannot reconfigure into their most efficient packing. This leaves the system out of equilibrium, which is key to its memory. 

By shearing the bubbles between a spinning inner ring (left in image) and a stationary outer one (not shown) several times, researchers found they could coax the bubbles into a configuration that was unresponsive to further shearing at that amplitude.

Once the bubbles were configured, the scientists could sweep through many shear amplitudes and look for the one with the smallest response. This was always the “remembered” shear amplitude. Effectively, the system can record and read out values similar to the way a computer bit does. Bubbles are no replacement for silicon, though. In this case, scientists are more interested in what memory in these systems can teach us about other, similar mechanical systems and how they respond to forces. (Image and research credit: S. Mukherji et al.; via Physics Today; submitted by Kam-Yung Soh)

► Periodically, our sun releases plasma in a coronal mass...
    9 Jul, 2019


Periodically, our sun releases plasma in a coronal mass ejection. Afterwards, the local magnetic field lines shift and reorganize. We can see that process in action here because charged particles spin along the magnetic lines, outlining them as bright loops in this imagery. This sequence – one of the best examples of this phenomenon to date – was captured by NASA’s Solar Dynamics Observatory in early 2017. To understand behaviors like these, scientists use magnetohydrodynamics, a marriage of the equations of fluid mechanics with Maxwell’s equations for electromagnetism. (Image credit: NASA SDO, source)
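For a one-equation flavor of that marriage: in MHD the magnetic field evolves according to the induction equation,

\frac{\partial \mathbf{B}}{\partial t} = \nabla \times \left(\mathbf{u} \times \mathbf{B}\right) + \eta\,\nabla^{2}\mathbf{B}

where the first term carries the field along with the plasma velocity \mathbf{u}, and the resistive term \eta\nabla^{2}\mathbf{B} lets field lines diffuse and reconnect into new loops like those seen here.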

Symscape

► CFD Simulates Distant Past
  25 Jun, 2019

There is an interesting new trend in the use of Computational Fluid Dynamics (CFD). Until recently, CFD simulation focused on existing and future things (think flying cars). Now we see CFD being applied to simulate fluid flow in the distant past (think fossils).

CFD shows Ediacaran dinner party featured plenty to eat and adequate sanitation


► Background on the Caedium v6.0 Release
  31 May, 2019

Let's first address the elephant in the room - it's been a while since the last Caedium release. The multi-substance infrastructure for the Conjugate Heat Transfer (CHT) capability was a much larger effort than I anticipated and consumed a lot of resources. This led to the relative quiet you may have noticed on our website. However, with the new foundation laid and solid, we can look forward to a bright future.

Conjugate Heat Transfer Through a Water-Air Radiator
Simulation shows separate air and water streamline paths colored by temperature


► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long-extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)


► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs or a mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment, researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward direction (illustration only, not part of the study)


► Wind Turbine Design According to Insects
  14 Jun, 2017

One of nature's smallest aerodynamic specialists - insects - has provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (license: CC BY-SA 2.5, André Karwath)


► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2-hour marathon barrier came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The boldest and most obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock, leading the runners by a few meters.

2 Hour Marathon Attempt


CFD Online

► Connecting Fortran with VTK - the MPI way
  24 May, 2019
I wrote a couple of little programs, in Fortran and C++ respectively, as a proof of concept for connecting a Fortran program to a sort of visualization server based on VTK. The nice thing is that it uses MPI for the connection, so on the Fortran side there is nothing new or scary.

The code (you can find it at https://github.com/plampite/vtkForMPI) and the idea closely follow a similar example in Using Advanced MPI by W. Gropp et al., but make it more concrete by adding actual visualization based on VTK.

Of course, this is just a proof of concept, and nothing really interesting is visualized (just a cylinder with parameters passed from the Fortran side), but it is intended as an example to adapt to particular use cases (the VTK part is taken from https://lorensen.github.io/VTKExamples/site/, where a lot of additional examples are present).
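For anyone who wants to see the connection pattern without reading the Fortran, here is a minimal sketch of the same client/server MPI idea in Python with mpi4py. This is my own illustration, not code from the repository, and it assumes an MPI implementation that supports dynamic process connection:
Code:
# server.py - stand-in for the visualization server (illustrative only)
from mpi4py import MPI

port = MPI.Open_port()               # ask the MPI runtime for a port string
print("server port:", port)          # this string must be passed to the client
comm = MPI.COMM_WORLD.Accept(port)   # block until the client connects

params = comm.recv(source=0, tag=0)  # e.g. geometry parameters to visualize
print("received:", params)

comm.Disconnect()
MPI.Close_port(port)

# client.py - stand-in for the Fortran side (illustrative only)
from mpi4py import MPI

port = "..."                         # the port string printed by the server
comm = MPI.COMM_WORLD.Connect(port)
comm.send({"radius": 0.5, "height": 2.0}, dest=0, tag=0)  # hypothetical parameters
comm.Disconnect()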
► Direct Numerical Simulation on a wing profile
  14 May, 2019

A one-billion-point DNS (Direct Numerical Simulation) on a NACA4412 profile at 5 degrees angle of attack. The Reynolds number based on the airfoil chord is 350,000 and the Mach number is 0.117. The upper and lower turbulent boundary layers are tripped at 15% and 50% chord, respectively, by roughness elements evenly spaced in the boundary layer, created by a zonal immersed boundary condition (Journal of Computational Physics, Volume 363, 15 June 2018, Pages 231-255, https://www.sciencedirect.com/science...). The spanwise extent is 0.3 chords. The computation was performed on a structured multiblock mesh with the FastS compressible flow solver developed by ONERA, on 1064 MPI cores. The video shows the early stages of the calculation (equivalent to 40,000 time steps), highlighting the spatial development of fine-scale turbulence in both the attached boundary layer and the free wake. Post-processing and flow images were made with Cassiopée (http://elsa.onera.fr/Cassiopee).
► NACA4 airFoils generator
  20 Feb, 2019
https://github.com/mcavallerin/airFoil_tools


Generates 3D models for foils.
[Attached thumbnail: airfoilWinger.png]
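For context, the classic NACA 4-digit shape itself is easy to generate. Here is a short sketch of the textbook thickness and camber equations, my own illustration and independent of the linked repository:
Code:
import numpy as np

def naca4(code="2412", n=101):
    # Textbook NACA 4-digit section with a closed trailing edge
    m = int(code[0]) / 100.0    # maximum camber
    p = int(code[1]) / 10.0     # position of maximum camber
    t = int(code[2:]) / 100.0   # thickness-to-chord ratio

    x = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n)))  # cosine spacing
    # thickness distribution; the -0.1036 coefficient closes the trailing edge
    yt = 5*t*(0.2969*np.sqrt(x) - 0.1260*x - 0.3516*x**2
              + 0.2843*x**3 - 0.1036*x**4)
    if p > 0:
        yc = np.where(x < p, m/p**2*(2*p*x - x**2),
                      m/(1 - p)**2*((1 - 2*p) + 2*p*x - x**2))
        dyc = np.where(x < p, 2*m/p**2*(p - x), 2*m/(1 - p)**2*(p - x))
    else:                       # symmetric 00xx sections
        yc = np.zeros_like(x)
        dyc = np.zeros_like(x)
    theta = np.arctan(dyc)
    xu, yu = x - yt*np.sin(theta), yc + yt*np.cos(theta)  # upper surface
    xl, yl = x + yt*np.sin(theta), yc - yt*np.cos(theta)  # lower surface
    return (xu, yu), (xl, yl)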
► Use gnuplot to plot graph of friction_coefficient for T3A Flat Plate case in OpenFOAM
  13 Feb, 2019
Hello,
I am new to OF and gnuplot. I am working on the T3A flat plate case from the OpenFOAM tutorials. I struggled a lot to plot the friction coefficient from the simulation and experimental data using the default plot file (creatGraphs.plt) provided with the tutorial. I looked on the internet for a solution but remained unsuccessful. After trying for some time, I got the graph right, so I decided to share it here for anyone else who needs it.

The trick is that we have to edit the default plot file provided with the case. :) This is what the default file looks like:
Code:
#!/bin/sh
cd ${0%/*} || exit 1                        # Run from this directory

# Test if gnuplot exists on the system
command -v gnuplot >/dev/null 2>&1 || {
    echo "gnuplot not found - skipping graph creation" 1>&2
    exit 1
}

gnuplot<<GNUPLOT
    set term post enhanced color solid linewidth 2.0 20
    set out "graphs.eps"
    set encoding utf8
    set termoption dash
    set style increment user
    set style line 1 lt 1 linecolor rgb "blue"  linewidth 1.5
    set style line 11 lt 2 linecolor rgb "black" linewidth 1.5

    time = system("foamListTimes -case .. -latestTime")

    set xlabel "x"
    set ylabel "u'"
    set title "T3A - Flat Plate - turbulent intensity"
    plot [:1.5][:0.05] \
        "../postProcessing/kGraph/".time."/line_k.xy" \
        u (\$1-0.04):(1./5.4*sqrt(2./3.*\$2))title "kOmegaSSTLM" w l ls 1, \
        "exptData/T3A.dat" u (\$1/1000):(\$3/100) title "Exp T3A" w p ls 11

    set xlabel "Re_x"
    set ylabel "c_f"
    set title "T3A - Flat Plate - C_f"
    plot [:6e+5][0:0.01] \
        "../postProcessing/wallShearStressGraph/".time."/line_wallShearStress.xy" \
        u ((\$1-0.04)*5.4/1.5e-05):(-\$2/0.5/5.4**2) title "kOmegaSSTLM" w l, \
        "exptData/T3A.dat" u (\$1/1000*5.4/1.51e-05):2 title "Exp" w p ls 11
GNUPLOT

#------------------------------------------------------------------------------
After editing it, it should look like the following:
Code:
# #!/bin/sh
# cd ${0%/*} || exit 1                        # Run from this directory

# # Test if gnuplot exists on the system
# command -v gnuplot >/dev/null 2>&1 || {
    # echo "gnuplot not found - skipping graph creation" 1>&2
    # exit 1
# }

# gnuplot<<GNUPLOT
    set term post enhanced color solid linewidth 2.0 20
    set out "graphs2.eps"
    set encoding utf8
    set termoption dash
    set style increment user
    set style line 1 lt 1 linecolor rgb "blue"  linewidth 1.5
    set style line 11 lt 2 linecolor rgb "black" linewidth 1.5

    time = system("foamListTimes -case .. -latestTime")

    # set xlabel "x"
    # set ylabel "u'"
    # set title "T3A - Flat Plate - turbulent intensity"
    # plot [:1.5][:0.05] \
        # "../postProcessing/kGraph/".time."/line_k.xy" \
        # u (\$1-0.04):(1./5.4*sqrt(2./3.*\$2))title "kOmegaSSTLM" w l ls 1, \
        # "exptData/T3A.dat" u (\$1/1000):(\$3/100) title "Exp T3A" w p ls 11

    set xlabel "Re_x"
    set ylabel "c_f"
    set title "T3A - Flat Plate - C_f"
    plot [:6e+5][0:0.01] \
		"/home/purnp2/OpenFOAM/purnp2-v1812/run/T3A/postProcessing/wallShearStressGraph/269/line_wallShearStress.xy" \
        u (($1-0.04)*5.4/1.5e-05):(-$2/0.5/5.4**2) title "kOmegaSSTLM" w l, \
        "/home/purnp2/OpenFOAM/purnp2-v1812/run/T3A/validation/exptData/T3A.dat" u ($1/1000*5.4/1.51e-05):2 title "Exp" w p ls 11
# GNUPLOT

#------------------------------------------------------------------------------
Please notice the following changes:
1. The original shell-wrapper lines are commented out (the file is now run as plain gnuplot rather than as a shell script).
2. The backslash (\) before each dollar sign ($), which gnuplot uses to reference a column in a data file, is deleted.
3. The full path to each data file is used instead of a path relative to the current working directory.
► A generalized thermal/dynamic wall function: Part 4
    7 Feb, 2019
In previous posts of this series I presented an elaboration of the Musker-Monkewitz analytical wall function that allowed extensions to non-equilibrium cases and to thermal (scalar) cases with, in theory, arbitrary Pr/Pr_t (Sc/Sc_t) ratios.

In the meanwhile, I worked on a rationalization and generalization of the framework, the derivation of an averaged production term for the TKE equation, etc.

While the new material is presented in a substantially different manner and will require a dedicated post (probably a simple link to material posted elsewhere), a few details emerged that are still worth mentioning in this post series.

In particular, what is worth discussing here is the fit of the presented wall function (and, for that matter, of wall functions in general) to a particular turbulence model. While wall functions are typically presented in a standalone fashion, without particular reference to the turbulence model in use (as was done here too in the previous posts), it is important that their analytical profile matches, as closely as possible, the one expected from the turbulence model in use. This becomes of paramount importance when using a y+-insensitive formulation (as the presented one is intended to be); otherwise such insensitivity is not really achieved.

Thus, for example, using the Reichardt or the Spalding profile (which are well-known y+-insensitive formulations) with a turbulence model that, when resolved to the wall, produces a different velocity profile is not optimal and is not going to deliver the expected insensitivity.

Things get particularly troublesome in the thermal (scalar) case with high Pr/Pr_t (Sc/Sc_t) ratios. Indeed, as this ratio ideally (or practically, depending on the specific formulation) multiplies the viscosity ratio underlying the wall function, even minute differences, typically irrelevant for Pr/Pr_t (Sc/Sc_t) < 1 (e.g., the velocity case), are instead amplified.

Thus, for example, using the well-known Kader formulation for the temperature, with the Jayatilleke term for the log part, is typically not going to match the results of a given turbulence model at all Pr/Pr_t ratios. The same happens for the presented Musker-Monkewitz wall function, which has its own peculiar dependence on the ratio Pr/Pr_t.

With this post I just want to present a fit of the basic profile constant a in the Musker-Monkewitz wall function that can be used to match, approximately, the Spalart-Allmaras temperature/scalar profile. I also have similar fits for other all-y+ models, but SA is relevant because its viscosity ratio profile is simple and can be integrated with a very simple routine (and is thus included for comparison in the attached one).

Just change, as usual, the file extensions from .txt to .m and launch comparewf (with musker.m in the same folder). The adjusted Musker wall function is compared with the numerically integrated SA profile and the reference Kader wall function.

In comparewf (don't touch musker.m) you can play with the Pr/Pr_t ratio and the non-equilibrium source term FT (but note that the Kader profile does not include its effects) and see how the fit works better than the Kader profile for SA.

In particular, the present fit for the constant a is:

a = a_0\left[1 + c_1\,\max\left(\frac{Pr}{Pr_t},\,1\right)^{c_2} + c_3\right]

where the constant values can be found in the attached file. Of course, this fit is just an attempt and should not be taken as etched in stone. In particular, it is based on the SA profile with the von Karman constant vk = 0.4187.

Note also that, correcting a mistake in the previous posts, the suggested default value for the profile constant is the original authors' value (of course), a_0 = 10.306.
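For convenience, the fit is a one-liner in any language. Here is a minimal Python sketch; the default c-constants below are placeholders only, the actual fitted values are in the attached file:
Code:
def musker_a(pr_ratio, a0=10.306, c1=0.0, c2=1.0, c3=0.0):
    # a = a0 * [1 + c1*max(Pr/Pr_t, 1)^c2 + c3]
    # c1, c2, c3 defaults are placeholders; take the fitted values
    # from the attached comparewf/musker files
    return a0 * (1.0 + c1 * max(pr_ratio, 1.0)**c2 + c3)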
Attached files: comparewf.txt (2.0 KB), musker.txt (930 bytes)
► A few thoughts about today's CPU era...
    6 Jan, 2019
So, for whatever reason, I went on a nostalgic thought process an hour or so ago and began reading about some old hardware. The nostalgic reason for this was that I had briefly gotten a chance to work with a couple of Intel Phi co-processors a couple of years ago and never got the time to work with them. And I had gotten an AMD A10-7850K for myself as well and likewise never got the time to work with it either.


Intel Phi Co-processors KNC

So the Phi co-processors were available for cheap, some 200-300€ per card, because Intel was getting rid of stock. They boasted a potential 64 GHz of cumulative CPU clock, of which perhaps 16 GHz was plausible to take advantage of: each card was a monster with 64 threads and 8 memory channels, but each of its 16 cores could only run at ~1.1 GHz.
  • The downside? It required porting code to it, even though it was x86_64 architecture running a small Linux-like OS within it, as if it were a CPU with Android on a USB stick, except it was a PCI-E card on a PCI-E slot...
  • The result:
    • It took too long to make any code work with it, and it was essentially something akin to a gaming console, i.e., expensive hardware designed for a specific purpose.
    • It would have been plausible to use them, if they had done things right...
  • That said, that's how NVidia does its job with CUDA... but GPUs can crank number crunching all the way up to some 1000-4000 FPUs, so 64 threads sharing 16 or 8 FPUs was borderline nonsense...


AMD A10-7850K
This is what felt to me like the technology that could disrupt all others: 4 cores that could be used to manage 512 FPUs, all on the same die, not requiring memory offloading through PCI-E lanes... this was like a dream come true, the quintessential technology holy grail for high performance computing, if there ever was one. Hypothetically this harbored 512 FPUs at 720 MHz, which would add up to ~368 GHz of cumulative potential CPU clock power, and which, along with 4 x86_64 cores @ 3.7 GHz to shepherd them, would allow for a killing in HPC...

But the memory bottleneck of only having 2 channels at 2133 MHz was like having only some 150 FPUs to herd, when comparing to a GPU card with GDDR5 at 7 GHz...

However, even if that were the case, it wouldn't be all too bad, given that it would give a ratio of about 38 FPUs per core; compared to the 4-16 float arrays in AVX, the A10-7850K would still make a killing...

Unfortunately:
  1. It's not exactly easy to code for it, mostly because of the stack that needs to be installed...
  2. Which wouldn't be so bad, given that the competition is CUDA, which also relies on the same kind of installation hazards...
  3. But the thing that eventually held me back from ever doing anything with it was that the Kaveri architecture had a bug that rendered it unsupportable in AMD's ROCm development efforts: https://github.com/RadeonOpenCompute...ment-270193586
I still wish I could find the time and inspiration to try and figure out what I could still do with this APU... but a cost/benefit analysis says it's not worth the effort :(


Intel Xeon Phi KNL
Knight's Landing... The Phi line was somewhat inspired by D&D, in the sense that the Knights took up their arms and went on an adventure, in search (or hunt) of a better home: https://en.wikipedia.org/wiki/Xeon_Phi
  1. Knights Ferry - began traveling by boat...
  2. Knights Corner - nearly there...
  3. Knights Landing - reached the hunting/fighting grounds...
  4. Knights Hill - conquered the hill... albeit was canceled, because they didn't exactly conquer it...
  5. Knights Mill - began working on it... but it was mostly oriented towards deep learning...
KNL was essentially a nice CPU, in the sense that we didn't need to cross-compile and could instead focus on optimizing for this CPU. The pseudo-level-4 cache, technically named MCDRAM (https://en.wikipedia.org/wiki/MCDRAM), was akin to having GPU-grade RAM (by which I mean akin to GDDR5) near the 64-72 cores that the CPU had...

The problem: 64-72 cores running at 1.1 GHz are pointless for FPU processing if you only have 64-72 of the bloody critters, ain't it? Compared to the countless FPUs on a GPGPU, this is peanuts...


Intel Skylake-SP
They finally learned their lessons with the KNL and gave the x86_64 architecture proper infrastructure for scaling, at least from my understanding of the "KNL vs Skylake-SP" document I mentioned at the start of this post.

They even invested in AVX512... 64 double-precision or 128 single-precision vector lanes per clock cycle (instead of just one FPU per core as in the common x86 architecture), which I guess run at 2.2 to 3 GHz instead of 1.1 GHz, effectively making them 2 to 3 times faster than GPU FPUs.

I'm not even going to venture an estimate of how much potential CPU clock power these AVX512 units have compared to a GPU, for a very simple reason: they can only reach 6 memory channels at a maximum of 2666 MHz, which pales in comparison to the 7 GHz or more that exist nowadays in GDDR5/6 technology on GPU cards.


AMD EPYC
This made me laugh, once I saw the design architecture: https://www.anandtech.com/show/11544...f-the-decade/2
So the trick was fairly simple:
  1. Have 4 Ryzen CPUs connected to each other through an Infiniband-like connection between all 4 of them.
  2. Each Ryzen CPU has only 2 memory channels, but can have up to 8 cores and 2 threads per core...
  3. It has 2666 MHz RAM... accessed through a total of 8 memory channels.
This is what the Knight's thingamabob should have been right from the start... this is the kind of technology that will allow extending to the next logical step: 3D CPU stacks, with liquid cooling running between them...

Either way, the EPYC CPUs are nearly equivalent to 4 mainstream-grade CPUs in each socket, connected via something Infiniband-like, at roughly the size of a single credit card...


Playstation 4 and Xbox One
  • Octa-Core AMD x86-64 "Jaguar"-based CPU
  • AMD Radeon with a ton of shaders (~700 to ~2500 at around 1.2GHz), depending on the version...
  • 8-12 GB GDDR5 RAM, depending on the version, but mostly shared between CPU and GPU...
All on the same board... sharing GDDR5 RAM... this is like the holy grail of modern computing, one that could proliferate in HPC environments such as CFD and FEM... and it is only being used for gaming. Really? Seriously??


What I expect in the near future
To me, the plan is simple, given that Moore's law gave out years ago (it became hard to keep scaling down the lithography) and that we are now reaching the smallest scale at which a transistor can hold a charge without sneezing...
  1. Specialization: we are already seeing this on several fronts:
    1. ASICs were created for the bitcoin mining thingamabob... a clear sign of the future, even though they are a pain in the butt to code for, since we are coding the actual hardware... but that's how GPUs appeared in the first place, and the AI-oriented tech coming out in current CPUs is the same kind of thing, as is AVX tech et al.
    2. ARM and RISC CPUs, where trimming down hardware specs can help make CPUs run cooler and with less power on our precious smartphones and tablets...
    3. You can even design your own RISC CPU nowadays: https://www.youtube.com/watch?v=jNnCok1H3-g
  2. x86_64 needs to go past its primordial-soup design and go all out on integration:
    1. 3D stacking of core groups, with liquid cooling running between stacks, because heat extraction through copper alone is likely not enough.
    2. Intertwining GDDR RAM between those stacks.
    3. Memory channels local to each core group that add up cumulatively, akin to the AMD EPYC design.
    4. Essentially create a cluster within a single socket, which is nearly what an AMD EPYC already is...
    4. Essentially create a cluster within a single socket, which is essentially what an AMD EPYC nearly is...

curiosityFluids

► Creating curves in blockMesh (An Example)
  29 Apr, 2019

In this post, I’ll give a simple example of how to create curves in blockMesh. For this example, we’ll look at the following basic setup:

As you can see, we’ll be simulating the flow over a bump defined by the curve:

y = H\sin\left(\pi x\right)

First, let’s look at the basic blockMeshDict for this blocking layout WITHOUT any curves defined:

/*--------------------------------*- C++ -*----------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  6
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      blockMeshDict;
}

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

convertToMeters 1;

vertices
(
    (-1 0 0)    // 0
    (0 0 0)     // 1
    (1 0 0)     // 2
    (2 0 0)     // 3
    (-1 2 0)    // 4
    (0 2 0)     // 5
    (1 2 0)     // 6
    (2 2 0)     // 7

    (-1 0 1)    // 8    
    (0 0 1)     // 9
    (1 0 1)     // 10
    (2 0 1)     // 11
    (-1 2 1)    // 12
    (0 2 1)     // 13
    (1 2 1)     // 14
    (2 2 1)     // 15
);

blocks
(
    hex (0 1 5 4 8 9 13 12) (20 100 1) simpleGrading (0.1 10 1)
    hex (1 2 6 5 9 10 14 13) (80 100 1) simpleGrading (1 10 1)
    hex (2 3 7 6 10 11 15 14) (20 100 1) simpleGrading (10 10 1)
);

edges
(
);

boundary
(
    inlet
    {
        type patch;
        faces
        (
            (0 8 12 4)
        );
    }
    outlet
    {
        type patch;
        faces
        (
            (3 7 15 11)
        );
    }
    lowerWall
    {
        type wall;
        faces
        (
            (0 1 9 8)
            (1 2 10 9)
            (2 3 11 10)
        );
    }
    upperWall
    {
        type patch;
        faces
        (
            (4 12 13 5)
            (5 13 14 6)
            (6 14 15 7)
        );
    }
    frontAndBack
    {
        type empty;
        faces
        (
            (8 9 13 12)
            (9 10 14 13)
            (10 11 15 14)
            (1 0 4 5)
            (2 1 5 6)
            (3 2 6 7)
        );
    }
);

// ************************************************************************* //

This blockMeshDict produces the following grid:

It is best practice in my opinion to first make your blockMesh without any edges. This lets you see if there are any major errors resulting from the block topology itself. From the results above, we can see we’re ready to move on!

So now we need to define the curve. In blockMesh, curves are added using the edges sub-dictionary. This is a simple sub dictionary that is just a list of interpolation points:

edges
(
        polyLine 1 2
        (
                (0	0       0)
                (0.1	0.0309016994    0)
                (0.2	0.0587785252    0)
                (0.3	0.0809016994    0)
                (0.4	0.0951056516    0)
                (0.5	0.1     0)
                (0.6	0.0951056516    0)
                (0.7	0.0809016994    0)
                (0.8	0.0587785252    0)
                (0.9	0.0309016994    0)
                (1	0       0)
        )

        polyLine 9 10
        (
                (0	0       1)
                (0.1	0.0309016994    1)
                (0.2	0.0587785252    1)
                (0.3	0.0809016994    1)
                (0.4	0.0951056516    1)
                (0.5	0.1     1)
                (0.6	0.0951056516    1)
                (0.7	0.0809016994    1)
                (0.8	0.0587785252    1)
                (0.9	0.0309016994    1)
                (1	0       1)
        )
);

The sub-dictionary above is just a list of points on the curve y = H\sin(\pi x) with H = 0.1. The interpolation method is polyLine (straight lines between interpolation points). An alternative interpolation method would be spline.
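If you'd rather not type the interpolation points by hand, they are easy to generate. Here is a small Python sketch (my own addition) that prints polyLine entries for y = H\sin(\pi x) with H = 0.1, reproducing the values above:

import numpy as np

H = 0.1   # bump height
z = 0.0   # z-coordinate of this edge (use 1.0 for the back-plane edge)
for x in np.linspace(0.0, 1.0, 11):
    print("({:g} {:.10g} {:g})".format(x, H*np.sin(np.pi*x), z))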

The following mesh is produced:

Hopefully this simple example will help some people looking to incorporate curved edges into their blockMeshing!

Cheers.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Creating synthetic Schlieren and Shadowgraph images in Paraview
  28 Apr, 2019

Experimentally visualizing high-speed flow was a serious challenge for decades. Before the advent of modern laser diagnostics and velocimetry, the only real techniques for visualizing high speed flow fields were the optical techniques of Schlieren and Shadowgraph.

Today, Schlieren and Shadowgraph remain an extremely popular means to visualize high-speed flows. In particular, Schlieren and Shadowgraph allow us to visualize complex flow phenomena such as shockwaves, expansion waves, slip lines, and shear layers very effectively.

In CFD there are many reasons to recreate these types of images. First, they look awesome. Second, if you are doing a study comparing to experiments, occasionally the only full-field data you have could be experimental images in the form of Schlieren and Shadowgraph.

Without going into detail about Schlieren and Shadowgraph themselves, primarily you just need to understand that Schlieren and Shadowgraph represent visualizations of the first and second derivatives of the flow field refractive index (which is directly related to density).

In Schlieren, a knife-edge is used to selectively cut off light that has been refracted. As a result you get a visualization of the first derivative of the refractive index in the direction normal to the knife edge. So for example, if an experiment used a horizontal knife edge, you would see the vertical derivative of the refractive index, and hence the density.

For Shadowgraph, no knife edge is used, and the images are a visualization of the second derivative of the refractive index. Unlike Schlieren images, shadowgraph has no direction and shows you the Laplacian of the refractive index field (or density field).

In this post, I’ll use a simple case I did previously (https://curiosityfluids.com/2016/03/28/mach-1-5-flow-over-23-degree-wedge-rhocentralfoam/) as an example and produce some synthetic Schlieren and Shadowgraph images using the data.

So how do we create these images in paraview?

Well as you might expect, from the introduction, we simply do this by visualizing the gradients of the density field.

In ParaView the necessary tool for this is:

Gradient of Unstructured DataSet:

Finding “Gradient of Unstructured DataSet” using the Filters-> Search

Once you’ve selected this, we then need to set the properties so that we are going to operate on the density field:

Change the “Scalar Array” Drop down to the density field (rho), and change the name to Synthetic Schlieren

To do this, simply set the “Scalar Array” to the density field (rho), and change the Result Array Name to SyntheticSchlieren. Now you should see something like this:

This is NOT a synthetic Schlieren Image – but it sure looks nice

There are a few problems with the above image: (1) Schlieren images are directional, and this is a magnitude; (2) Schlieren and shadowgraph images are black and white. So if you really want your Schlieren images to look like the real thing, you should change to black and white. ALTHOUGH, the Cold and Hot, Black-Body Radiation, and Rainbow Desaturated presets all look pretty amazing.

To fix these, you should only visualize one component of the Synthetic Schlieren array at a time, and you should visualize using the X-ray color preset:

The results look pretty realistic:

Horizontal Knife Edge

Vertical Knife Edge

Now how about ShadowGraph?

The process of computing the shadowgraph field is very similar. However, recall that shadowgraph visualizes the Laplacian of the density field. BUT THERE IS NO LAPLACIAN CALCULATOR IN PARAVIEW!?! Haha, no big deal. Just remember the basic vector calculus identity:

\nabla^2(\cdot) = \nabla \cdot \nabla(\cdot)

Therefore, in order for us to get the Shadowgraph image, we just need to take the Divergence of the Synthetic Schlieren vector field!

To do this, we just have to use the Gradient of Unstructured DataSet tool again:

This time, deselect “Compute Gradient”, select “Compute Divergence”, and change the Divergence array name to Shadowgraph.
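The same two steps can also be scripted in pvpython. Here is a rough sketch; the property names follow what a ParaView 5.x Python trace produces and may differ slightly between versions:

from paraview.simple import *

src = GetActiveSource()  # e.g. the loaded solution

# First derivative of density -> synthetic Schlieren (a vector field)
schlieren = GradientOfUnstructuredDataSet(Input=src)
schlieren.ScalarArray = ['POINTS', 'rho']
schlieren.ResultArrayName = 'SyntheticSchlieren'

# Divergence of that gradient = Laplacian of rho -> shadowgraph
shadow = GradientOfUnstructuredDataSet(Input=schlieren)
shadow.ScalarArray = ['POINTS', 'SyntheticSchlieren']
shadow.ComputeGradient = 0
shadow.ComputeDivergence = 1
shadow.DivergenceArrayName = 'Shadowgraph'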

Visualized in black and white, we get a very realistic looking synthetic Shadowgraph image:

Shadowgraph Image

So what do the values mean?

Now this is an important question, but a simple one to answer. And the answer is… not much. Physically, we know exactly what these fields are: Schlieren is the gradient of the density field in one direction, and Shadowgraph is the Laplacian of the density field. But what you need to remember is that both Schlieren and Shadowgraph are qualitative images. The position of the knife edge, the brightness of the light, etc. all affect how a real experimental Schlieren or Shadowgraph image will look.

This means, very often, in order to get the synthetic Schlieren to closely match an experiment, you will likely have to change the scale of your synthetic images. In the end though, you can end up with extremely realistic and accurate synthetic Schlieren images.

Hopefully this post will be helpful to some of you out there. Cheers!

► Solving for your own Sutherland Coefficients using Python
  24 Apr, 2019

Sutherland’s equation is a useful model for the temperature dependence of the viscosity of gases. I give a few details about it in this post: https://curiosityfluids.com/2019/02/15/sutherlands-law/

The law is given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}
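Comparing the two forms term by term gives the mapping between the coefficient sets:

A_s = \mu_o\,\frac{T_o + C}{T_o^{3/2}}, \qquad T_s = C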

In order to use these equations, obviously, you need to know the coefficients. Here, I'm going to show you how you can simply create your own Sutherland coefficients using least-squares fitting in Python 3.

So why would you do this? Basically, there are two main reasons. First, if you are not using air, the Sutherland coefficients can be hard to find, and if you do find them, they can be hard to reference and you may not know how accurate they are. Second, creating your own coefficients makes a ton of sense from an academic point of view: in your thesis or paper, you can say that you created them yourself, and you can give an exact number for the error in the temperature range you are investigating.

So let’s say we are looking for a viscosity model of Nitrogen N2 – and we can’t find the coefficients anywhere – or for the second reason above, you’ve decided its best to create your own.

By far the simplest way to achieve this is using Python and the Scipy.optimize package.

Step 1: Get Data

The first step is to find some well-known, and easily cited, source for viscosity data. I usually use the NIST WebBook (https://webbook.nist.gov/), but occasionally the temperatures there aren't high enough. In that case you could also pull the data out of a publication somewhere. Here I'll use the following data from NIST:

Temperature (K)    Viscosity (Pa.s)
200                0.000012924
400                0.000022217
600                0.000029602
800                0.000035932
1000               0.000041597
1200               0.000046812
1400               0.000051704
1600               0.000056357
1800               0.000060829
2000               0.000065162

This data is the dynamic viscosity of nitrogen N2 pulled from the NIST database at 0.101 MPa. (Note that in this range, viscosity should be temperature dependent only.)

Step 2: Use python to fit the data

If you are unfamiliar with Python, this may seem a little foreign to you, but python is extremely simple.

First, we need to load the necessary packages (here, we’ll load numpy, scipy.optimize, and matplotlib):

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

Now we define the Sutherland function:

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

Next we input the data:

T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]

mu=[0.000012924, 0.000022217, 0.000029602, 0.000035932, 0.000041597,
    0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]

Then we fit the data using the curve_fit function from scipy.optimize. This function uses a least-squares minimization to solve for the unknown coefficients. It returns both the optimal parameters (popt, an array containing our desired variables As and Ts) and the covariance matrix (pcov).

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]

Now we can just output our data to the screen and plot the results if we so wish:

print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

Overall the entire code looks like this:

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As*T**(3/2)/(Ts+T)

T=[200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]

mu=[0.000012924, 0.000022217, 0.000029602, 0.000035932, 0.000041597,
    0.000046812, 0.000051704, 0.000056357, 0.000060829, 0.000065162]

popt, pcov = curve_fit(sutherland, T, mu)
As=popt[0]
Ts=popt[1]
print('As = '+str(popt[0])+'\n')
print('Ts = '+str(popt[1])+'\n')

xplot=np.linspace(200,2000,100)
yplot=sutherland(xplot,As,Ts)

plt.plot(T,mu,'ok',xplot,yplot,'-r')
plt.xlabel('Temperature (K)')
plt.ylabel('Dynamic Viscosity (Pa.s)')
plt.legend(['NIST Data', 'Sutherland'])
plt.show()

And the results for nitrogen gas in this range are As=1.55902E-6, and Ts=168.766 K. Now we have our own coefficients that we can quantify the error on and use in our academic research! Wahoo!

Summary

In this post, we looked at how we can use a database of viscosity-temperature data and the Python package scipy to solve for our unknown Sutherland viscosity coefficients. The NIST database was used to grab some data, which was then loaded into Python and curve-fit using the scipy.optimize curve_fit function.

This task could also easily be accomplished using the Matlab curve-fitting toolbox, or perhaps in Excel. However, I have not had good success using the Excel solver to solve for unknown coefficients.

► Tips for tackling the OpenFOAM learning curve
  23 Apr, 2019

The most common complaint I hear, and the most common problem I observe, with OpenFOAM is its supposed “steep learning curve”. I would argue, however, that for those who want to practice CFD effectively, the learning curve is equally as steep in any other software.

There is a distinction that should be made between “user friendliness” and the learning curve required to do good CFD.

While I concede that other commercial programs have better basic user friendliness (a nice graphical interface, drop-down menus, point-and-click options, etc.), it is equally likely (if not more likely) that you will get bad results in those programs as with OpenFOAM. In fact, to some extent, the high user friendliness of commercial software can encourage a level of ignorance that can be dangerous. Additionally, once you are comfortable operating in the OpenFOAM world, the possibilities become endless, and things like code modification and bash and python scripting can make OpenFOAM workflows EXTREMELY efficient and powerful.

Anyway, here are a few tips to more easily tackle the OpenFOAM learning curve:

(1) Understand CFD

This may seem obvious… but it's not to some. Troubleshooting bad simulation results or unstable simulations that crash is impossible if you don't have at least a basic understanding of what is happening under the hood. My favorite books on CFD are:

(a) The Finite Volume Method in Computational Fluid Dynamics: An Advanced Introduction with OpenFOAM® and Matlab by F. Moukalled, L. Mangani, and M. Darwish

(b) An Introduction to Computational Fluid Dynamics: The Finite Volume Method by H. K. Versteeg and W. Malalasekera

(c) Computational Fluid Dynamics: The Basics with Applications by John D. Anderson

(2) Understand fluid dynamics

Again, this may seem obvious and not very insightful. But if you are going to assess the quality of your results, and understand and appreciate the limitations of the various assumptions you are making, you need to understand fluid dynamics. In particular, you should familiarize yourself with the fundamentals of turbulence and turbulence modeling.

(3) Avoid building cases from scratch

Whenever I start a new case, I find the tutorial case that most closely matches what I am trying to accomplish. This greatly speeds things up. It will take you a super long time to set up any case from scratch, and you'll probably make a bunch of mistakes, forget key variable entries, etc. The OpenFOAM developers have done a lot of work setting up the tutorial cases for you, so use them!

As you continue to work in OpenFOAM on different projects, you should be compiling a library of your own templates based on previous work.

(4) Using Ubuntu makes things much easier

This is strictly my opinion, but I have found it to be true. Yes, it's true that Ubuntu has its own learning curve, but I have found that OpenFOAM works seamlessly in Ubuntu or any Ubuntu-like Linux environment. OpenFOAM now has Windows flavors using Docker and the like, but I can't really speak to how well they work, mostly because I've never bothered. Once you unlock the power of Linux, the only reason to use Windows is Microsoft Office (unless you're a gamer, and even then more and more games are now on Linux). Not only that, the VAST majority of forums and troubleshooting resources associated with OpenFOAM that you'll find on the internet are from Ubuntu users.

I much prefer to use Ubuntu with a virtual Windows environment inside it. My current office setup is my primary desktop running Ubuntu, plus a Windows VirtualBox machine, plus a laptop running Windows that I use for traditional Windows-type stuff. Dual booting is another option, but seamlessly moving between the environments is easier this way.

(5) If you’re struggling, simplify

Unless you know exactly what you are doing, you probably shouldn’t dive into the most complicated version of whatever you are trying to solve/study. It is best to start simple, and layer the complexity on top. This way, when something goes wrong, it is much easier to figure out where the problem is coming from.

(6) Familiarize yourself with the cfd-online forum

If you are having trouble, the cfd-online forum is super helpful. Most likely, someone else has had the same problem you have. If not, the people there are extremely helpful, and overall the forum is an extremely positive environment for working out the kinks in your simulations.

(7) The results from checkMesh matter

If you run checkMesh and your mesh fails, fix your mesh. This is important. Especially if you are not planning on familiarizing yourself with the available numerical schemes in OpenFOAM, you should at least have a beautiful mesh. In particular, if your mesh is highly non-orthogonal, you will have serious problems. If you insist on using a bad mesh, you will probably need to manipulate the numerical schemes. A great source on how schemes should be manipulated based on mesh non-orthogonality is:

http://www.wolfdynamics.com/wiki/OFtipsandtricks.pdf

(8) CFL Number Matters

If you are running a transient case, the Courant-Friedrichs-Lewy (CFL) number matters… a lot. Not just for accuracy (if you are trying to capture a transient event) but for stability. If your time step is too large, you are going to have problems. There is a solid mathematical basis for this stability criterion for advection-diffusion problems. Additionally, the Navier-Stokes equations are very non-linear, and the complexity of the problem, the quality of your grid, etc. can make the simulation even less stable. When I have a transient simulation crash, if I know my mesh is OK, I decrease the timestep by a factor of 2. More often than not, this solves the problem.
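For reference, the Courant number in one dimension is

Co = \frac{u\,\Delta t}{\Delta x}

and explicit treatment of advection generally requires Co \le 1 in every cell, which is why halving the time step so often rescues a crashing run.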

For large time steps, you can add outer loops to solvers based on the PIMPLE algorithm, but you may end up losing important transient information. An excellent explanation of how to do this is given in the book by T. Holzmann:

https://holzmann-cfd.de/publications/mathematics-numerics-derivations-and-openfoam

For the record, this point falls under point (1), Understanding CFD.

(9) Work through the OpenFOAM Wiki “3 Week” Series

If you are starting OpenFOAM for the first time, it is worth it to work through an organized program of learning. One such example (and there are others) is the “3 Weeks Series” on the OpenFOAM wiki:

https://wiki.openfoam.com/%223_weeks%22_series

If you are a graduate student with no job other than learning OpenFOAM, it will not take 3 weeks. The series touches on all the necessary points you need to get started.

(10) OpenFOAM is not a second-tier software – it is top tier

I know some people who have started out with the attitude from the get-go that they should be using a different software package. They think that open source somehow means it is not good. This is a pretty silly attitude. Many top researchers around the world are now using OpenFOAM or some other open-source package. The number of OpenFOAM citations has grown consistently every year (https://www.linkedin.com/feed/update/urn:li:groupPost:1920608-6518408864084299776/?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518932944235610112%29&replyUrn=urn%3Ali%3Acomment%3A%28groupPost%3A1920608-6518408864084299776%2C6518956058403172352%29).

In my opinion, the only place where mainstream commercial CFD packages will persist is in industry labs where cost is no concern and changing software is more trouble than it's worth. OpenFOAM has been widely benchmarked and widely validated, from fundamental flows to hypersonics (see any of my 17 publications using it for this). If your results aren't good, you are probably doing something wrong. If you have the attitude that you would rather be using something else, and are bitter that your supervisor wants you to use OpenFOAM, then when something goes wrong you will immediately think there is something wrong with the program… which is silly, and you may quit.

(11) Meshing… Ugh Meshing

For the record, meshing is an art in any software. But meshing is the only area where I will concede any limitation in OpenFOAM. HOWEVER, as I have outlined in my previous post (https://curiosityfluids.com/2019/02/14/high-level-overview-of-meshing-for-openfoam/), most things can be accomplished in OpenFOAM, and there are enough third-party meshing programs out there that you should have no problem.

Summary

Basically, if you are starting out in CFD or OpenFOAM, you need to put in the time. If you are expecting to be able to just sit down and produce magnificent results, you will be disappointed. You might quit. And frankly, that's a pretty stupid attitude. However, if you accept that CFD and fluid dynamics in general are massive fields under constant development, and are willing to get up to speed, there are few limits to what you can accomplish.

Please take the time! If you want to do CFD, learning OpenFOAM is worth it. Seriously worth it.

This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Automatic Airfoil C-Grid Generation for OpenFOAM – Rev 1
  22 Apr, 2019
Airfoil Mesh Generated with curiosityFluidsAirfoilMesher.py

Here I will present something I've been experimenting with regarding a simplified workflow for meshing airfoils in OpenFOAM. If you're like me (who knows if you are), you simulate a lot of airfoils. I do so partly because of my involvement in various UAV projects, partly through consulting projects, and also for testing and benchmarking OpenFOAM.

Because there is so much data out there on airfoils, they are a good way to test your setups and benchmark solver accuracy. But going from an airfoil .dat coordinate file to a mesh can be a bit of a pain, especially if you are starting from scratch.

The main ways that I have meshed airfoils to date have been:

(a) Mesh it as a C or O grid in blockMesh (I have a few templates kicking around for this)
(b) Generate a “ribbon” geometry and mesh it with cfMesh
(c) Or, back in the day when I was a PhD student, use Pointwise – oh how I miss it.

But getting the mesh to look good was always somewhat tedious. So I attempted to come up with a python script that takes the airfoil data file and minimal inputs, and outputs a blockMeshDict file that you just have to run.

The goals were as follows:
(a) Create a C-Grid domain
(b) be able to specify boundary layer growth rate
(c) be able to set the first layer wall thickness
(e) be mostly automatic (few user inputs)
(f) have good mesh quality – pass all checkMesh tests
(g) Quality is consistent – meaning when I make the mesh finer, the quality stays the same or gets better
(h) be able to do both closed and open trailing edges
(i) be able to handle most airfoils (up to high cambers)
(j) automatically handle hinge and flap deflections

In Rev 1 of this script, I believe I have accomplished (a) through (g). Presently, it can only handle airfoils with a closed trailing edge. Hinge and flap deflections are not possible, and highly cambered airfoils do not give very satisfactory results.

There are existing tools and scripts for automatically meshing airfoils, but I found personally that I wasn’t happy with the results. I also thought this would be a good opportunity to illustrate one of the ways python can be used to interface with OpenFOAM. So please view this as both a potentially useful script, but also something you can dissect to learn how to use python with OpenFOAM. This first version of the script leaves a lot open for improvement, so some may take it and be able to tailor it to their needs!

Hopefully, this is useful to some of you out there!

Download

You can download the script here:

https://github.com/curiosityFluids/curiosityFluidsAirfoilMesher

Here you will also find a template based on the airfoil2D OpenFOAM tutorial.

Instructions

(1) Copy curiosityFluidsAirfoilMesher.py to the root directory of your simulation case.
(2) Copy your airfoil coordinates in Selig .dat format into the same folder location.
(3) Modify curiosityFluidsAirfoilMesher.py to your desired values. Specifically, make sure that the string variable airfoilFile is referring to the right .dat file
(4) In the terminal run: python3 curiosityFluidsAirfoilMesher.py
(5) If no errors – run blockMesh

PS
You need to run this with python 3, and you need to have numpy installed

Inputs

The inputs for the script are very simple:

ChordLength: This is simply the airfoil chord length if not equal to 1. The airfoil dat file should have a chordlength of 1. This variable allows you to scale the domain to a different size.

airfoilfile: This is a string with the name of the airfoil dat file. It should be in the same folder as the python script, and both should be in the root folder of your simulation directory. The script writes a blockMeshDict to the system folder.

DomainHeight: This is the height of the domain in multiples of chords.

WakeLength: Length of the wake domain in multiples of chords

firstLayerHeight: This is the height of the first layer. To estimate the requirement for this size, you can use the curiosityFluids y+ calculator

growthRate: Boundary layer growth rate

MaxCellSize: This is the max cell size along the centerline from the leading edge of the airfoil. Some cells will be larger than this depending on the gradings used.

The following inputs are used to improve the quality of the mesh. I have had pretty good results messing around with these to get checkMesh compliant grids.

BLHeight: This is the height of the boundary layer block off of the surfaces of the airfoil

LeadingEdgeGrading: Grading from the 1/4 chord position to the leading edge

TrailingEdgeGrading: Grading from the 1/4 chord position to the trailing edge

inletGradingFactor: This is a grading factor that modifies the grading along the inlet as a multiple of the leading edge grading, and can help improve mesh uniformity.

trailingBlockAngle: This is an angle in degrees that expresses the angles of the trailing edge blocks. This can reduce the aspect ratio of the boundary cells at the top and bottom of the domain, but can make other mesh parameters worse.
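To make the inputs concrete, here is a hedged sketch of what the user-editable block might look like, together with a common flat-plate correlation for estimating firstLayerHeight from a target y+. The values and the helper function are illustrative assumptions, not taken from the script; the actual y+ calculator referenced above lives on the curiosityFluids site.

# Illustrative input values only -- tune these for your own case
ChordLength = 1.0
airfoilFile = "clarky.dat"   # hypothetical filename
DomainHeight = 20.0          # in chords
WakeLength = 20.0            # in chords
firstLayerHeight = 1.0e-5
growthRate = 1.05
MaxCellSize = 0.05

# Common flat-plate estimate for the first-layer height at a target y+,
# using the skin-friction correlation Cf ~ 0.026 / Re^(1/7).
# Fluid properties below are sea-level air (an assumption).
def first_layer_height(y_plus, U, L, rho=1.225, mu=1.81e-5):
    Re = rho * U * L / mu
    cf = 0.026 / Re**(1.0 / 7.0)
    tau_w = 0.5 * cf * rho * U**2        # wall shear stress
    u_tau = (tau_w / rho)**0.5           # friction velocity
    return y_plus * mu / (rho * u_tau)

print(first_layer_height(y_plus=1.0, U=30.0, L=1.0))  # ~1.2e-5 m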

Examples

12% Joukowski Airfoil

Inputs:

With the above inputs, the grid looks like this:

Mesh Quality:

These are some pretty good mesh statistics. We can also view them in ParaView:

Clark-Y Airfoil

The Clark-Y has some camber, so I thought it would be a logical next test after the previous symmetric one. The inputs I used are basically the same as for the previous airfoil:


With these inputs, the result looks like this:


Mesh Quality:


Visualizing the mesh quality:

MH60 – Flying Wing Airfoil

Here is an example of a flying wing airfoil (a good test since the trailing edge is tilted upwards).

Inputs:


Again, these are basically the same as the others. I have found that with these settings, I get pretty consistently good results. When you change MaxCellSize, firstLayerHeight, and the gradings, some modification may be required. However, if you simply halve MaxCellSize and halve firstLayerHeight, you “should” get a similar grid quality, just much finer.

Grid Quality:

Visualizing the grid quality

Summary

Hopefully some of you find this tool useful! I plan to release a Rev 2 soon that will be able to handle highly cambered airfoils, open trailing edges, and control surface hinges.

The long-term goal is an automatic mesher with an H-grid in the spanwise direction so that readers of my blog can create semi-span wing models extremely quickly!

Comments and bug reports are encouraged!

DISCLAIMER: This script is intended as an educational and productivity tool and starting point. You may use and modify it how you wish, but I make no guarantee of its accuracy, reliability, or suitability for any use. This offering is not approved or endorsed by OpenCFD Limited, producer and distributor of the OpenFOAM software via www.openfoam.com, and owner of the OPENFOAM® and OpenCFD® trademarks.

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
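For reference, these are the textbook normal-shock relations for a calorically perfect gas that such a calculator evaluates; the sketch below is a minimal independent implementation, not the calculator's actual source code.

import math

# Normal-shock relations for a calorically perfect gas (gamma = 1.4 for air)
def normal_shock(M1, gamma=1.4):
    if M1 <= 1.0:
        raise ValueError("Upstream Mach number must be supersonic")
    g = gamma
    M2 = math.sqrt((1 + 0.5 * (g - 1) * M1**2) / (g * M1**2 - 0.5 * (g - 1)))
    p2_p1 = 1 + 2 * g / (g + 1) * (M1**2 - 1)            # static pressure ratio
    r2_r1 = (g + 1) * M1**2 / ((g - 1) * M1**2 + 2)      # density ratio
    T2_T1 = p2_p1 / r2_r1                                # static temperature ratio
    p02_p01 = (r2_r1**(g / (g - 1))
               * ((g + 1) / (2 * g * M1**2 - (g - 1)))**(1 / (g - 1)))
    return M2, p2_p1, r2_r1, T2_T1, p02_p01

# At M1 = 2: M2 ~ 0.577, p2/p1 = 4.5, rho2/rho1 ~ 2.667, p02/p01 ~ 0.721
print(normal_shock(2.0))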

If you found this useful and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions at www.stfsol.com for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or their suitability or outcome for any given purpose.

Hanley Innovations top

► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse


Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only the STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator discs (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2D plots of Cp and other quantities along constant-coordinate lines on the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit http://www.hanleyinnovations.com/stallion3d.html or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017


Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.




More information about the software can be found at the following url:
http://www.hanleyinnovations.com/airfoildigitizerhelp.html

Thanks for reading.


► Your In-House CFD Capability
  15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company just at the push of a button? Stallion 3D gives you this very power using your MS Windows laptop or desktop computer. The software provides accurate CL, CD, & CM numbers directly from CAD geometries without the need for user grid generation and costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC. It utilizes the power that is hidden in your personal computer (64-bit & multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, RANS) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of a turbulent flow field.

Unlike other CFD software that requires you to purchase grid generation software (and spend days generating a grid), grid generation is automatic and included within Stallion 3D. Results are often obtained within a few hours after opening the software.

Do you need to analyze upwind and downwind sails? Do you need data for wings and ship stabilizers at 10, 40, 80, 120 degrees angle of attack and beyond? Do you need accurate lift, drag & temperature predictions at subsonic, transonic and supersonic speeds? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

Tutorials, videos and more information about Stallion 3D version 4.0 can be found at:
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. degree in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

► Avoid Testing Pitfalls
  24 Jan, 2017


The only way to know if your idea will work is to test it. Rest assured, as a design engineer your ideas and designs will be tested over and over again, often in front of a crowd of people.

As an aerodynamics design engineer, Stallion 3D helps you to avoid the testing pitfalls that would otherwise keep you awake at night. An advantage of Stallion 3D is that it enables you to test your designs in the privacy of your laptop or desktop before your company actually builds a prototype. As someone who uses Stallion 3D for consulting, I find it very exciting to see my designs flying the way they were simulated in the software. Stallion 3D will assure that your creations are airworthy before they are tested in front of a crowd.

I developed Stallion 3D for engineers who have an innate love and aptitude for aerodynamics but who do not want to deal with the hassles of standard CFD programs.  Innovative technologies should always take a few steps out of an existing process to make the journey more efficient.  Stallion 3D enables you to skip the painful step of grid (mesh) generation. This reduces your workflow to just a few seconds to setup and run a 3D aerodynamics case.

Stallion 3D helps you to avoid the common testing pitfalls.
1. UAV instabilities and takeoff problems
2. Underwhelming range and endurance
3. Pitch-up instabilities
4. Incorrect control surface settings at launch and level flight
5. Not enough propulsive force (thrust) due to excess drag and weight.

Are the results of Stallion 3D accurate?  Please visit the following page to see the latest validations.
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. degree in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.
► Flying Wing UAV: Design and Analysis
  15 Jan, 2017

3DFoil is a design and analysis software package for wings, hydrofoils, sails and other aerodynamic surfaces. It requires a computer running MS Windows 7, 8 or 10.

I wrote the 3DFoil software several years ago using a vortex lattice approach. The vortex lattice method in the code is based on vortex rings (as opposed to the horseshoe vortex approach). The vortex ring method allows for wing twist (geometric and aerodynamic), so a designer can fashion the wing for drag reduction and prevent tip stall by optimizing the amount of washout. The approach also allows sweep (backwards & forwards) and multiple dihedral/anhedral angles.
Another feature that I designed into 3DFoil is the capability to predict profile drag and stall. This is done by analyzing the wing cross sections with a linear-strength vortex panel method and an ordinary differential equation boundary layer solver. The software utilizes the solution of the boundary layer solver to predict the locations of the transition and separation points.

The following video shows how to use 3DFoil to design and analyze a flying wing UAV aircraft. 3DFoil's user interface is based on the multi-surface approach. In this method, the wing is designed using multiple tapered surface where the designer can specify airfoil shapes, sweep, dihedral angles and twist. With this approach, the designer can see the contribution to the lift, drag and moments for each surface.  Towards the end of the video, I show how the multi-surface approach is used to design effective winglets by comparing the profile drag and induced drag generated by the winglet surfaces. The video also shows how to find the longitudinal and lateral static stability of the wing.



The following steps are used to design and analyze the wing in 3DFoil:
1. Input the dimensions and sweep half of the wing (half span)
2. Input the dimensions and sweep of the winglet.
3. Join the winglet and main wing.
4. Generate the full aircraft using the mirror image insert function.
5. Find the lift, drag and moments
6. Compute longitudinal and lateral stability
7. Look at the contributions of the surfaces.
8. Verify that the winglets provide drag reduction.

More information about 3DFoil can be found at the following url: http://www.hanleyinnovations.com/3dfoil.html.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. degree in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

► Corvette C7 Aerodynamics
    7 Jan, 2017

The CAD file for the Corvette C7 aerodynamics study in Stallion 3D version 4 was obtained from Mustafa Asan’s revision on GrabCAD. The file was converted from the STP format to the STL format required by Stallion 3D using OnShape.com.

Once the Corvette was imported into Stallion 3D, I applied ground effect and a speed of 75 miles per hour at zero angle of attack. The flow setup took just seconds in Stallion 3D and grid generation was completely automatic. The software allows the user to choose a grid size setting, and I chose the option that produced a total of 345,552 cells in the computational domain.

I chose the Reynolds Averaged Navier-Stokes (RANS) equations solver for this example. In Stallion 3D, the RANS equations are solved along with the k-e turbulence model. A wall function approach is used at the boundaries.

The results were obtained after 10,950 iterations on a quad core laptop computer running at 2.0 GHz under MS Windows 10.


The results for the Corvette C7 model  are summarized below:

Lift Coefficient:  0.227
Friction Drag Coefficient: 0.0124
Pressure Drag Coefficient: 0.413
Total Drag Coefficient: 0.426

Stallion 3D HIST Solver:  Reynolds Averaged Navier-Stokes Equations
Turbulence Model: k-e
Number of Cells: 345,552
Grid: Built-in automatic grid generation

Run time: 7 hours

The coefficients were computed based on a frontal area of 2.4 square meters.
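As a quick sanity check on what those numbers mean physically, the standard drag equation F = 1/2 rho V^2 Cd A converts the reported total drag coefficient into a force. The air density below is an assumption (it is not stated in the post):

rho = 1.225         # kg/m^3, sea-level air (assumed)
V = 75 * 0.44704    # 75 mph converted to m/s
Cd = 0.426          # total drag coefficient reported above
A = 2.4             # frontal area in m^2, as reported

F_drag = 0.5 * rho * V**2 * Cd * A
print(round(F_drag), "N")  # roughly 700 N of drag at 75 mph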

The following are images of the same solution from different views in Stallion 3D.  The streamlines are all initiated near the ground plane 2 meters ahead of the car.

Top View



Side View


Bottom View


Stallion 3D utilizes a new technology (Hanley Innovations Surface Treatment, or HIST) that enables design engineers to quickly analyze their CAD models on an ordinary Windows PC. We call this SameDayCFD. This unique technology is my original work and was not derived from any existing software codes.


Do not hesitate to contact us if you have any questions.  More information can be found at  http://www.hanleyinnovations.com

Thanks for reading.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. degree in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.



CFD and others... top

► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but be dissipative for non-resolvable high wave numbers. In this way, the simulation will resolve a wide turbulent spectrum, while damping out the non-resolvable small eddies to prevent energy pile-up, which can cause the simulation to diverge.

We want to emphasize the equal importance of both numerical dissipation and dispersion, which can be generated from both the space and time discretizations. It is well-known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., explicit Runge-Kutta schemes.     
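A standard way to quantify this is modified wavenumber analysis. For example, the 6th-order central FD scheme has a purely real modified wavenumber, confirming zero spatial dissipation, with all of its error appearing as dispersion. A minimal sketch (the finite difference coefficients are the standard 6th-order central ones; the rest is illustrative):

import numpy as np

# 6th-order central difference:
# f'_i = [ 3/4 (f_{i+1} - f_{i-1}) - 3/20 (f_{i+2} - f_{i-2})
#          + 1/60 (f_{i+3} - f_{i-3}) ] / dx
# Substituting a Fourier mode exp(i k x) gives a purely real modified
# wavenumber k*: dispersion error only, zero spatial dissipation.
theta = np.linspace(0.0, np.pi, 200)   # k dx over the resolvable range
k_star = (1.5 * np.sin(theta)
          - 0.3 * np.sin(2 * theta)
          + (1.0 / 30.0) * np.sin(3 * theta))

# Dispersion error is the deviation from the exact relation k* dx = k dx
err = np.abs(k_star - theta)
print("max dispersion error:", err.max(), "at k dx =", theta[err.argmax()])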

We recently analysed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, with the same time integration approach (a Runge-Kutta scheme) and the same time step. The FCD schemes have an 8th-order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees of freedom (DOFs) in Figure 1. The initial condition is a Gaussian profile and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the difference.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error from the central FD scheme caused it to miss the peak, and also generate large errors elsewhere.

Finally, simulation results with the viscous Burgers' equation are shown in Figure 2, which compares the energy spectrum computed with various schemes against that of the direct numerical simulation (DNS).

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme, with a significant high-wave-number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. It is obvious that the upwind-biased FD scheme outperformed the central FD scheme since it resolved the same range of wave numbers without the energy pile-up.


► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease in mesh generation, and load balancing on parallel architectures. DG and related methods such as SD and FR/CPR have received much attention because of their geometric flexibility and scalability. They have matured to become quite robust for a wide range of applications. 

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes at a transonic condition with a Reynolds number of more than 1M. The 3rd-order hpMusic simulation has 9.6M degrees of freedom (DOFs), and costs about 1/3 the CPU time of the 2nd-order simulation, which has 28.7M DOFs, using the commercial solver. Furthermore, the 3rd-order simulation is much more accurate, as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster to achieve a similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try for your problems, and you may be surprised. Academic and trial uses are completely free. Just visit hocfd.com to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
  17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to compute turbulence: 1) the Reynolds averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the Direct Numerical Simulations (DNS) approach, in which all scales are resolved; 3) the Large Eddy Simulation (LES) approach, in which large scales are computed, while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved widespread success in engineering design, some applications call for LES, e.g., flow at high angles of attack. The spatial filtering of a non-linear PDE results in an SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress with the rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a rate that is physically correct. Later an improved version called the dynamic Smagorinsky model was developed by Germano et al., and demonstrated much better results.
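For reference, the standard (textbook) form of the Smagorinsky model is, in LaTeX notation:

\tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij} = -2\,\nu_t\,\bar{S}_{ij}, \qquad
\nu_t = (C_s \Delta)^2\,|\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}

where tau is the SGS stress, S-bar the resolved rate-of-strain tensor, Delta the filter width and C_s the Smagorinsky constant.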

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme does not contain numerical dissipation. However, time integration can introduce dissipation. For example, a 2nd-order central difference scheme is linearly stable with the SSP RK3 scheme (subject to a CFL condition), and the combination does contain numerical dissipation. When this scheme is used to perform an LES, the simulation will blow up without an SGS model because of a lack of dissipation for eddies at high wave numbers. It is easy to conclude that the successful LES is because the SGS stress is properly modeled. A recent study with the Burgers' equation strongly disputes this conclusion. It was shown that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. Therefore, the role of the SGS model, in the above scenario, was to stabilize the simulation by adding numerical dissipation.

For numerical methods which have natural dissipation at high wave numbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, the SGS model can damage the solution quality because this extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature to support the use of implicit LES (ILES), where the SGS stress simply vanishes. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
    3 Jan, 2017
2016 was undoubtedly the most extraordinary year for small-odds events. Take sports, for example:
  • Leicester won the Premier League in England defying odds of 5000 to 1
  • The Cubs won the World Series after a 108-year wait
In politics, I do not believe many people truly believed Britain would exit the EU, or that Trump would become the next US president.

On a personal level, I also experienced an equally extraordinary event: the coup attempt in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016 in the historic city of Istanbul. A terror attack on the Istanbul International airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and decided to go to Istanbul to attend and support the conference. 

Given the extraordinary circumstances, the conference organizers did a fine job in pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions though three sessions were planned originally. We really enjoyed Istanbul with the beautiful natural attractions and friendly people. 

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was controlled by the rebels. However, the Turkish President managed to Facetime a private TV station, essentially turning around the event. Soon after, many people went to the bridge, the squares, and overpowered the rebels with bare fists.


A Tank outside my taxi



A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA banned all direct flights from Turkey. I was lucky enough to find a new flight, with a stop in Zurich...

In 2016, I lost a very good friend, and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific injury from tennis in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5 2016 to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion


To close, I wish all of you a very happy 2017!

  1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.



► The Linux Version of meshCurve is Now Ready for All to Download
  20 Apr, 2016
The 64-bit version for the Linux operating system is now ready for you to download. Because of the complexities associated with various libraries, we experienced a delay of slightly more than a month. Here is the link again.

Please let us know your experience, good or bad. Good luck!
► Announcing meshCurve: A CAD-free Low Order to High-Order Mesh Converter
  14 Mar, 2016
We are finally ready to release meshCurve to the world!

The description of meshCurve is provided in AIAA Paper No. 2015-2293. The primary developer is Jeremy Ims, who has been supported by NASA and NSF. Zhaowen Duan also made major contributions. By the way, Aerospace America also highlighted meshCurve in its 2015 annual review issue (on page 22). Many congratulations to Jeremy and Zhaowen on this major milestone!

The current version supports both the Mac OS X and Windows (64 bit) operating systems. The Linux version will be released soon.

Here is roughly how meshCurve works. The input is a linear mesh in the CGNS format. Then the user selects which boundary patches should be reconstructed to high-order. After that, geometrically important features are detected. The user can also manually select or delete features. Next the selected patches are reconstructed to add curvature. Finally the interior volume meshes are curved (if necessary). The output mesh is also stored in CGNS format.

We have tested the tool with meshes on the order of a million cells. But I still want to lower your expectations. So try it out yourself and let us know if you like it or hate it. Please do report bugs so that improvements can be made in the future.

Good luck!

Oh, did I mention the tool is completely free? Here is the meshCurve link again.






ANSYS Blog top

► How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon
  10 Oct, 2018
Illini EV Concept Team Photo at Shell Eco Marathon 2018

Weight is the enemy of all teams that design electric cars for the Shell Eco Marathon.

Reducing the weight of electric cars improves the vehicle’s acceleration and power efficiency. These performance improvements make all the difference come race day.

However, if the car’s weight is reduced too much, it could lead to safety concerns.

Illini EV Concept (Illini) is a Shell Eco Marathon team out of the University of Illinois. Team members use ANSYS academic research software to optimize the chassis of their electric car without compromising safety.

Where to Start When Reducing the Weight of Electric Cars?

Front bump composite failure under a load of 2000N.

The first hurdle of the Shell Eco Marathon is an initial efficiency contest. Only the best teams from this efficiency assessment even make it into the race.

Therefore, Illini concentrates on reducing the most weight in the shortest amount of time to ensure it makes it to the starting line.

Illini notes that its focus is on reducing the weight of its electric car’s chassis.

“The chassis is by far the heaviest component of our car, so ANSYS was used extensively to help design our first carbon fiber monocoque chassis,” says Richard Mauge, body and chassis leader for Illini.

“Several loading conditions were tested to ensure the chassis was stiff enough and the carbon fiber did not fail using the composite failure tool,” he adds.

Competition regulations ensure the safety of all team members. These regulations state that each team must prove that their car is safe under various conditions. Simulation is a great tool to prove a design is within safety tolerances.

“One of these tests included ensuring the bulkhead could withstand a 700 N load in all directions, per competition regulations,” says Mauge. If the teams’ electric car designs can’t survive this simulation come race day, then their cars are not racing.

Iterate and Optimize the Design of Electronic Cars with Simulation

Front bump deformation under a load of 2000N.

Simulations can do more than prove a design is safe. They can also help to optimize designs.

Illini uses what it learns from simulation to optimize the geometry of its electric car’s chassis.

The team found that its new design has a torsional rigidity increase of around 100 percent, even after a 15 percent decrease in weight compared to last year's model.

“Simulations ensure that the chassis is safe enough for our driver. It also proved that the chassis is lighter and stiffer than ever before. ANSYS composite analysis gave us the confidence to move forward with our radical chassis redesign,” notes Mauge.

The optimization story continues for Illini. It plans to explore easier and more cost-effective ways to manufacture carbon fiber parts. For instance, the team wants to replace the core of its parts with foam and increase the number of bonded pieces.

If team members just go with their gut on these hunches, they could find themselves scratching their heads when something goes wrong. However, with simulations, the team makes better informed decisions about its redesigns and manufacturing process.

To get started with simulation, try our free student download. For student teams that need to solve in-depth problems, check out our software sponsorship program.

The post How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon appeared first on ANSYS.

► Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers
    9 Oct, 2018
This engine intake simulation was post-processed using EnSight Enterprise. This allowed for the processing of a large data set to be shared among servers.

Simulation data sets have a funny habit of ballooning as engineers move through the development cycle. At some point, post-processing these data sets on a single machine becomes impractical.

Engineers can speed up post-processing by spatially or temporally decomposing large data sets so they can be post-processed across numerous servers.

The idea is to utilize the idle compute nodes you used to run the solver in parallel to now run the post-processing in parallel.

In ANSYS 19.2, EnSight Enterprise lets you spatially or temporally decompose data sets. EnSight Enterprise is an updated version of EnSight HPC.

Post-Processing Using Spatial Decomposition

EnSight is a client/server architecture. The client program takes care of the graphical user interface (GUI) and rendering operations, while the server program loads the data, creates parts, extracts features and calculates results.

If your model is too large to post-process on a single machine, you can use the spatially decomposed parallel operation to assign each spatial partition to its own EnSight Server. A good server-to-model ratio is one server for every 50 million elements.
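That rule of thumb reduces to simple arithmetic; a hypothetical helper (not an EnSight feature) might look like:

import math

# One EnSight Server per ~50 million elements (rule of thumb from above)
def servers_needed(n_elements, elements_per_server=50e6):
    return max(1, math.ceil(n_elements / elements_per_server))

print(servers_needed(240e6))  # a 240M-element model -> 5 servers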

Each EnSight Server can be located on a separate compute node on any compute resource you’d like. This allows engineers to utilize the memory and processing power of heterogeneous high-performance computing (HPC) resources for data set post-processing.

The engineers effectively split the large data set up into pieces with each piece assigned to its own compute resource. This dramatically increases the data set sizes you can load and process.

Once you have loaded the model into EnSight Enterprise, there are no additional changes to your workflow, experience or operations.

Post-Processing Using Temporal Decomposition

Keep in mind that this decomposition concept can also be applied to transient data sets. In this case, the dataset is split up temporally rather than spatially. In this scenario, each server receives its own set of time steps.

A turbulence simulation created using EnSight Enterprise post-processing

EnSight Enterprise offers performance gains when the server operations outweigh the communication and rendering time of each time step. Since it’s hard to predict network communication or rendering workloads, you can’t easily create a guiding principle for the server-to-model ratio.

However, you might want to use a few servers when your model has more than 10 million elements and over a hundred time steps. This will help keep the processing load of each server to a moderate level.

How EnSight Speeds Up the Post-Processing of Large Simulation Data Sets

Here is another good tip to ensure you post-process optimally within EnSight Enterprise: engineers achieve the best performance gains by pre-decomposing the data and locating it locally on the compute resources they anticipate using. Ideally, this data should be in EnSight Case format.

To learn more, check out EnSight or register for the webinar Analyze, Visualize and Communicate Your Simulation Data with ANSYS EnSight.

The post Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers appeared first on ANSYS.

► Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing
    8 Oct, 2018

Your design team will make informed decisions about the products they create when they bring detailed simulations up front in the development cycle.

The 19.2 release of ANSYS Discovery AIM addresses this need for early simulation.

It does this by streamlining templates for physics-aware meshing and rapid results.

High-Fidelity Simulation Through Physics-Aware Meshing

Discovery AIM user interface with a solution fidelity slide bar (top left), area of interest marking tool (left, middle), manual mesh controls (bottom, center) and a switch to turn the mesh display on and off (right, top).

Analysts have likely told your design team about the importance of a quality mesh to achieve accurate simulation results.

Creating high-quality meshes takes time and specialized training. Your design team likely doesn't have the time or patience to learn this art.

To account for this, Discovery AIM automatically incorporates physics-aware meshing behind the scenes. In fact, your design team doesn’t even need to see the mesh creation process to complete the simulation.

This workflow employs several meshing best practices analysts typically use. The tool even accounts for areas that require mesh refinements based on the physics being assessed.

For instance, areas with a sliding contact gain a finer mesh so the sliding behavior can be accurately simulated. Additionally, areas near the walls of fluid-solid interfaces are also refined to ensure this interaction is properly captured. Physics-aware meshing ensures small features and areas of interests won’t get lost in your design team’s simulation.

The simplified meshing workflow also lets your design team choose their desired solution fidelity. This input will help the software balance the time the solver takes to compute results with the accuracy of the results.

Though physics-aware meshing can create the mesh under the hood of the simulation process, the tool still allows user control of the mesh. This way, if your design team chooses to dig into the meshing details — or an analyst decides to step in — they can finely tune the mesh.

Capabilities like this further empower designers as techniques and knowledge traditionally known only by analysts are automated in an easy-to-use fashion.

Gain Rapid Results in Important Areas You Might Miss

The 19.2 release of Discovery AIM improves your design team's ability to explore simulation results.

Many analysts will know instinctively where to focus their post-processing, but without this experience, designers may miss areas of interest.

Discovery AIM enables the designer to interactively explore and identify these critical results. These initial results are rapidly displayed as contours, streamlines or field flow lines.

Field flow and streamlines for an electromagnetics simulation

Once your design team finds locations of interest within the results, they can create higher fidelity results to examine those areas of interest in further detail. Designers can then save the results and revisit them when comparing design points or after changing simulation inputs.

To learn more about other changes to Discovery AIM — like the ability to directly access fluid results — watch the Discovery AIM 19.2 release recorded webinar or take it for a test drive.

The post Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing appeared first on ANSYS.

► Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer
    5 Oct, 2018
Traditional chemotherapy can often be blocked by a tumor’s stroma.

There are few illnesses as crafty as pancreatic cancer. It spreads like weeds and resists chemotherapy.

Pancreatic cancer is often asymptomatic, has a low survival rate and is often misdiagnosed as diabetes. And, this violent killer is almost always inoperable.

The pancreatic tumor’s resistance to chemotherapy comes from a shield of supporting connective tissue, or stroma, which it builds around itself.

Current treatments attempt to overcome this defense by increasing the dosage of intravenously administered chemotherapy. Sadly, this rarely works, and the high dosage is exceptionally hard on patients.

Nonetheless, doctors need a way to shrink these tumors so that they can surgically remove them without risking the numerous organs and vasculature around the pancreas.

“We say if you can’t get the drugs to the tumor from the blood, why not get it through the stroma directly?” asks William Daunch, CTO at Advanced Chemotherapy Technologies (ACT), an ANSYS Startup Program member. “We are developing a medical device that implants directly onto the pancreas. It passes drugs through the organ, across the stroma to the tumor using iontophoresis.”

By treating the tumor directly, doctors can theoretically shrink the tumor to an operable size with a smaller dose of chemotherapy. This should significantly reduce the effects of the drugs on the rest of the patient’s body.

How to Treat Pancreatic Cancer with a Little Electrochemistry

Simplified diagram of the iontophoresis used by ACT’s chemotherapy medical device.

Most of the drugs used to treat pancreatic cancer are charged. This means they are affected by electromotive forces.

ACT has created a medical device that takes advantage of the medication’s charge to beat the stroma’s defenses using electrochemistry and iontophoresis.

The device contains a reservoir with an electrode. The reservoir connects to tubes that connect to an infusion pump. This setup ensures that the reservoir is continuously filled. If the reservoir is full, the dosage doesn’t change.

The tubes and wires are all connected into a port that is surgically implanted into the patient’s abdomen.

A diagram of ACT’s chemotherapy medical device.

The circuit is completed by a metal panel on the back of the patient.

“When the infusion pump runs, and electricity is applied, the electromotive forces push the medication into the stroma’s tissue without a needle. The medication can pass up to 10 to 15 mm into the stroma’s tissue in about an hour. This is enough to get through the stroma and into the tumor,” says Daunch.

“Lab tests show that the medical device was highly effective in treating human pancreatic cancer cells within mice,” added Daunch. “With conventional infusion therapy, the tumors grew 700 percent and with the device working on natural diffusion alone the tumors grew 200 percent. However, when running the device with iontophoresis, the tumor shrank 40 percent. This could turn an inoperable tumor into an operable one.” Subsequent testing of a scaled-up device in canines demonstrated depth of penetration and the low systemic toxicity required for a human device.

Daunch notes that the Food and Drug Administration (FDA) took notice of these results. ACT's next steps are to develop a human clinical device and move on to human safety trials.

Simulation Optimized the Fluid Dynamics in the Pancreatic Cancer Chemotherapy Implant

Before these promising tests, ACT faced a few design challenges when coming up with their chemotherapy implant.

For example, “There was some electrolysis on the electrode in the reservoir. This created bubbles that would change the electrode’s impedance,” explains Daunch. “We needed a mechanism to sweep the bubbles from the surface.”

An added challenge is that ACT never knows exactly where doctors will place the device on the pancreas. As a result, the mechanism to sweep the bubbles needs to work from any orientation.

Simulations help ACT design their medical device so bubbles do not collect on the electrode.

“We used ANSYS Fluent and ANSYS Discovery Live to iterate a series of designs,” says Daunch. “Our design team modeled and validated our work very quickly. We also noticed that the bubbles didn’t need to leave the reservoir, just the electrode.”

“If we place the electrode on a protrusion in a bowl-shaped reservoir the bubbles move aside into a trough,” explains Daunch. “The fast fluid flow in the center of the electrode and the slower flow around it would push the bubbles off the electrode and keep them off until the bubbles floated to the top.”

As a result, the natural fluid flow within the redesigned reservoir was able to ensure the bubbles didn’t affect the electrode’s impedance.

To learn how your startup can use computational fluid dynamics (CFD) software to address your design challenges, please visit the ANSYS Startup Program.

The post Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer appeared first on ANSYS.

► Making Wireless Multigigabit Data Transfer Reliable with Simulation
    4 Oct, 2018

The demand for wireless communications with high data transfer rates is growing.

Consumers want wireless 4K video streams, virtual reality, cloud backups and docking. However, it’s a challenge to offer these data transfer hogs wirelessly.

Peraso aims to overcome this challenge with their W120 WiGig chipset. This device offers multigigabit data transfers, is as small as a thumb-drive and plugs into a USB 3.0 port.

The chipset uses the Wi-Fi Alliance’s new wireless networking standard, WiGig.

This standard adds a 60 GHz communication band to the 2.4 and 5 GHz bands used by traditional Wi-Fi. The result is higher data rates, lower latency and dynamic session transferring with multiband devices.

In theory, the W120 WiGig chipset could run some of the heaviest data transfer hogs on the market without a cord. Peraso’s challenge is to design a way for the chipset to dissipate all the heat it generates.

Peraso uses the multiphysics capabilities within the ANSYS Electronics portfolio to predict the Joule heating and the subsequent heat flow effects of the W120 WiGig chipset. This information helps them iterate their designs to better dissipate the heat.

How to Design High Speed Wireless Chips That Don’t Overheat

Systems designers know that asking for high-power transmitters in a compact and cost-effective enclosure translates into a thermal challenge. The W120 WiGig chipset is no different.

A cross section temperature map of the W120 WiGig chipset’s PCB. The map shows hot spots where air flow is constrained by narrow gaps between the PCB and enclosure.

The chipset includes active/passive components and two main chips that are mounted on a printed circuit board (PCB). The system reaches high temperatures due to the Joule heating effect.

To dissipate this heat, design engineers include a large heat sink that connects only to the chips and a smaller one that connects only to the PCB. The system is also enclosed in a casing with limited openings.

Simulation of the air flow around the W120 WiGig chipset without an enclosure. Simulation was made using ANSYS Icepak.

Traditionally, optimizing this setup takes a lot of trial and error, as measuring the air flow within the enclosure would be challenging.

Instead, Peraso uses ANSYS SIwave to simulate the Joule heating effects of the system. This heat map is transferred to ANSYS Icepak, which then simulates the current heat flow, orthotropic thermal conductivity, heat sources and other thermal effects.

This multiphysics simulation enables Peraso to predict the heat distribution and the temperature at every point of the W120 WiGig chipset.

From there, Peraso engineers iterate their designs until they reach their coolest setup.

This simulation-led design tactic helped Peraso optimize their system until they reached the heat transfer balance they needed. To learn how Peraso performed this iteration, read Cutting the Cords.

The post Making Wireless Multigigabit Data Transfer Reliable with Simulation appeared first on ANSYS.

► Designing 5G Cellular Base Station Antennas Using Parametric Studies
    3 Oct, 2018

There is only so much communication bandwidth available. This will make it difficult to handle the boost in cellular traffic expected from the 5G network using conventional cellular technologies.

In fact, cellular networks are already running out of bandwidth. This severely limits the number of users and data rates that can be accommodated by wireless systems.

One potential solution is to leverage beamforming antennas. These devices transmit different signals to different locations on the cellular network simultaneously over the same frequency.

Pivotal Commware is using ANSYS HFSS to design beamforming antennas for cellular base stations that are much more affordable than current technology.

How 5G Networks Will Send More Signals on Existing Bandwidths

A 28 GHz antenna for a cellular base station.

Traditionally, cellular technologies — 3G and 4G LTE — crammed more signals on the existing bandwidth by dividing the frequencies into small segments and splitting the signal time into smaller pulses.

The problem is, there is only so much you can do to chop up the bandwidth into segments.

Alternatively, Pivotal’s holographic beamforming (HBF) antennas are highly directional. This means they can split up the physical space a signal moves through.

This way, two cells in two locations can use the same frequency at the same time without interfering with each other.

Additionally, these HBF antennas use varactors (variable capacitors) and electronic components that are simpler and more affordable than existing beamforming antennas.

How to Design HBF Antennas for 5G Cellular Base Stations

A parametric study of Pivotal’s HBF designs allowed them to look at a large portion of their design space and optimize for C-SWaP and roll-off. This study looks at roll-off as a function of degrees from the centerline of the antenna.

Antenna design companies — like Pivotal — are always looking to design devices that optimize cost, size, weight and power (C-SWaP) and performance.

So, how was Pivotal able to account for C-SWaP and performance so thoroughly?

Traditionally, this was done by building prototypes, finding flaws, creating new designs and integrating manually.

Meeting a product launch with an optimized product using this manual method is grueling.

Pivotal instead uses ANSYS HFSS to simulate their 5G antennas digitally. This allows them to assess their HBF antennas and iterate their designs faster using parametric studies.

For instance, Pivotal wants to optimize their design for performance characteristics like roll-off. To do so they can plug in the parameter values, run simulations with these values and see how each parameter affects roll-off.

By setting up parametric studies, Pivotal assesses which parameters affect performance and C-SWaP the most. From there, they can weigh different trade-offs until they settle on an optimized design that accounts for all the factors they studied.

To see how Pivotal set up their parametric studies and optimized their antenna designs, read 5G Antenna Technology for Smart Products.

The post Designing 5G Cellular Base Station Antennas Using Parametric Studies appeared first on ANSYS.

Convergent Science Blog top

► Apollo 11 at 50: Balancing the Two-Legged Stool
  15 Jul, 2019

On July 16th, I will look up at the night sky and celebrate the 50-year anniversary of the launch of Apollo 11. As I admire the full moon, the CFDer in me will think about the classic metaphor of the three-legged stool. Modern engineering efforts depend on theory, simulation, and experiment: Theory gives us basic understanding, simulation tells us how to apply this theoretical understanding to a practical problem, and experiment confirms that our applied understanding is in agreement with the physical world. One element does not seek to replace another; instead, each element reinforces the others. By modern standards, simulation did not exist in the 1960s⁠—NASA’s primary “computers” were the women we saw in Hidden Figures, and humans are limited to relatively simple calculations. When NASA sent people to the moon, it had to build a modern cathedral balanced atop a two-legged stool.

I like the cathedral metaphor for the Saturn V rocket because it expresses some unexpected similarities between the efforts. A medieval cathedral was a huge, societal construction effort. It required workers from all walks of life to contribute above and beyond, not just in scale but in care and diligence. Designers had to go past what they fully understood, overcoming unknown engineering physics through sheer persistence. The end product was a unique and breathtaking expression of craftsmanship on a colossal scale.

In aerospace, we are habituated to assembly lines, but each Saturn V was a one-off. The Apollo program as a whole employed some 400,000 people, and the Saturn family of launch vehicles was a major slice of the pie. Though their tools were certainly more advanced than a medieval artisan’s, these workers essentially built this 363-foot-tall rocket by hand. They had to, because the rocket had to be perfect. The rocket had to be perfect because there was so little margin for error, because engineers were reaching so far beyond the existing limits of understanding. Huge rockets are not routine today, but I want to highlight a few design challenges of the Saturn V as places where modern simulation tools would have had a program-altering effect.

The mighty F-1 remains the largest single-chambered liquid-fueled rocket engine ever fired. All aspects of the design process were challenging, but devising a practical combustion chamber was particularly torturous. Large rocket engines are prone to a complex interaction between combustion dynamics and aeroacoustics. Pressure waves within the chamber can locally enhance the combustion rate, which in turn alters the flow within the engine. If these physical processes occur at the wrong rates, the entire system can become self-exciting and unstable. From a design standpoint, engineers must control engine stability through chamber shaping, fuel and oxidizer injector design, and internal baffling. 

Without any way to simulate the fuel injection, mixing, combustion, and outflow, engineers were left with few approaches other than scaling, experimentation, and doggedness. They started with engines they knew and understood, then tried to vary them and enlarge them. They built a special 2D transparent thrust chamber, then applied high-speed photography to measure the unsteadiness of the combustion region. They literally set off tiny bombs within an operating engine, at a variety of locations, monitoring the internal pressure to see whether the blast waves decayed or were amplified. Eventually they produced a workable design for the F-1, but, in the words of program manager Werner von Braun:

…lack of suitable design criteria has forced the industry to adopt almost a completely empirical approach to injector and combustor development… [which] does not add to our understanding because a solution suitable for one engine system is usually not applicable to another…

It was being performed by engineers, but in some senses, it wasn’t quite engineering. Persistence paid off in the end, but F-1 combustion instability almost derailed the whole Apollo program.

Close-up of an F-1 injector plate. Many of the 1428 liquid oxygen injectors and 1404 RP-1 fuel injectors can be seen. The injector plate is about 44 inches in diameter and is split into 13 injector compartments by two circular and twelve radial baffles. Photo credit: Mike Jetzer (heroicrelics.org).

Imagine if Rocketdyne engineers had had access to modern simulation tools! A tool like CONVERGE can simulate liquid fuel spray impingement directly, allowing an engineer to parametrically vary the geometry and spray parameters. A tool like CONVERGE can calculate the local combustion enhancement of impinging pressure fluctuations, allowing an engineer to introduce different baffle shapes and structures to measure their moderating effect. And the engineer can, in von Braun’s words, add to his or her understanding of how to combat combustion instability.

Snapshot from an RP-1 fuel tank on a Saturn I (flight SA-5). This camera looks down from the top center of the tank. Note the anti-slosh baffles. Photo credit: Mark Gray on YouTube.

Fuel slosh in the colossal lower-stage tanks presented another design challenge. The first-stage liquid oxygen tank was 33 feet in diameter and about 60 feet long. How do you study slosh in such an immense tank while subjecting it to what you think will be flight-representative vibration and acceleration? What about the behavior of leftover propellant in zero gravity? In the 1960s, the answer was: you built the rocket and flew it! In fact, the early Saturn launches (uncrewed, of course) featured video cameras to monitor fuel flow within the tanks. Cameras of that era recorded to film, and these cameras were housed in ejectable capsules. After collecting their several minutes of footage, the capsules would deploy from the spent stage and parachute to safety. I bet those engineers would have been over the moon if you had presented them with modern volume-of-fluid simulation tools.

Readers who have watched Apollo 13 may recall that the center engine of the Saturn V second stage failed during the launch. This was due to pogo, another combustion instability problem. In a rocket experiencing pogo, a momentary increase in thrust causes the rocket structure to flex, which (at the wrong frequency) can cause the fuel flow to surge, causing another self-exciting momentary increase in thrust. In severe cases, this vibration can destroy the vehicle. Designers added various standpipes and accumulators to de-tune the system, but this was only performed iteratively, flying a rocket to measure the effects. Today, we can study the fluid-structure interaction before we build the structure! Modern simulation tools are dramatic aids to the design process.

Saturn V first-stage anti-pogo valve. Diagram credit: NASA.
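
To make the feedback loop concrete, here is a toy model of the mechanism: a single structural mode whose damping competes with a thrust term that feeds back on structural velocity. Every number is assumed for illustration; this is a cartoon of pogo, not how Apollo-era (or modern) loads analysis was done. When the feedback gain exceeds the structural damping, the oscillation grows instead of decaying:

```python
# Toy pogo model: structural mode with natural frequency omega and damping
# ratio zeta; thrust feeds back on structural velocity with gain g.
import numpy as np

def peak_displacement(g, zeta=0.02, omega=2 * np.pi * 5.0, dt=1e-3, steps=8000):
    x, v = 1e-3, 0.0                          # initial 1 mm disturbance
    peak = abs(x)
    for _ in range(steps):
        # net damping = structural damping (2*zeta*omega) minus feedback gain
        a = -(2 * zeta * omega - g) * v - omega**2 * x
        v += a * dt                           # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

for g in (0.5, 3.0):   # below and above the threshold 2*zeta*omega ~ 1.26
    print(f"feedback gain {g}: peak displacement {peak_displacement(g):.2e} m")
```

In this picture, the standpipes and accumulators are ways of pushing the effective feedback gain back below the damping threshold at the frequencies that matter.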

Today’s aerospace engineering community is doing some amazing things. SpaceX and Blue Origin are landing rockets on their tails. The United Launch Alliance has compiled a perfect operational record with the Delta IV and Atlas V. Companies like Rocket Lab and Firefly Aerospace are demonstrating that you don’t need to have the resources of a multinational conglomerate to put payloads into orbit. But for me, nothing may ever surpass the incredible feat of engineers battling physical processes they didn’t fully understand, flying people to the moon on a two-legged stool.

Interested in reading more about the Saturn V launch vehicle? I recommend starting with Dr. Roger Bilstein’s Stages to Saturn.

► CONVERGE Chemistry Tools: The Simple Solution to Complex Chemistry
  20 May, 2019

As I’ve started to cook more, I’ve learned the true value of multipurpose kitchen utensils and appliances. Especially living in an apartment with limited kitchen space, the fewer tools I need to make delicious meals, the better. A rice cooker that doubles as a slow cooker? Great. A blender that’s also a food processor? Sign me up. Not only do these tools prove to be more useful, but they’re also more economical.

The same principle applies beyond kitchen appliances. CONVERGE CFD software is well known for its flow solver, autonomous meshing, and fully coupled chemistry solver, but did you know that it also features an extensive suite of chemistry tools, with even more coming in version 3.0? Whether you need to speed up your abnormal combustion simulations, create and validate new chemical mechanisms, expedite your design process with 0D or 1D modeling, or compare your chemical kinetics experiments with simulated results, CONVERGE chemistry tools have you covered. The many capabilities of CONVERGE translate to a broadly applicable piece of software for CFD and beyond.

Zero-Dimensional Simulations

CONVERGE 3.0 expands on the previous versions’ 0D simulation capabilities with a host of new tools and reactors that are useful across a wide range of applications. If you’re running diesel engine simulations, you can take advantage of CONVERGE’s autoignition utility to quickly generate ignition delay data for different combinations of temperature, pressure, and equivalence ratio. Furthermore, you can couple the autoignition utility with 0D sensitivity analysis to determine which reactions and species are important for ignition or to determine the importance of various reactions in forming a given species.
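
CONVERGE’s autoignition utility is its own implementation, but the idea of a 0D autoignition sweep is easy to sketch with the open-source Cantera library. The mechanism, fuel, and operating points below are placeholders chosen purely for illustration:

```python
# Sketch of a 0D constant-volume autoignition sweep (illustrative only),
# using Cantera with the GRI-3.0 methane mechanism as a stand-in.
import numpy as np
import cantera as ct

def ignition_delay(T0, P0, phi, mech="gri30.yaml", fuel="CH4"):
    """Ignition delay from the steepest temperature rise in a closed reactor."""
    gas = ct.Solution(mech)
    gas.set_equivalence_ratio(phi, fuel, "O2:1.0, N2:3.76")
    gas.TP = T0, P0
    reactor = ct.IdealGasReactor(gas)
    net = ct.ReactorNet([reactor])
    times, temps = [], []
    while net.time < 0.1:                     # cap the integration at 100 ms
        net.step()
        times.append(net.time)
        temps.append(reactor.T)
    dTdt = np.gradient(np.array(temps), np.array(times))
    return times[int(np.argmax(dTdt))]

for T0 in (1100.0, 1250.0, 1400.0):           # K
    for phi in (0.5, 1.0, 1.5):
        tau = ignition_delay(T0, 50e5, phi)   # 50 bar
        print(f"T={T0:6.0f} K  phi={phi:.1f}  tau={tau*1e3:8.3f} ms")
```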

The variable volume tool in CONVERGE 3.0 is a closed homogeneous reactor that can simulate a rapid compression machine (RCM). RCMs are ideal for chemical kinetics studies, especially for understanding autoignition chemistry as a function of temperature, pressure, and fuel/oxygen ratio.

Another new reactor model is the 0D engine tool, which can provide information on autoignition and engine knock. Homogeneous charge compression ignition (HCCI) engines operate by compressing well-mixed fuel and oxidizer to the point of autoignition, so you can use the 0D engine tool to gain valuable insight into your HCCI engine.

For other applications, look toward the well-stirred reactor (WSR) model coming in 3.0. The WSR assumes a high rate of mixing so that the output composition is identical to the composition inside the reactor. WSRs are thus useful for studying highly mixed IC engines, highly turbulent portions of non-premixed combustors, and ignition and extinction limits as a function of residence time, such as lean blow-out in gas turbines.

In addition to the new 0D reactor models, CONVERGE 3.0 will also feature new 0D tools. The chemical equilibrium (CEQ) solver calculates the concentration of species at equilibrium. The CEQ solver in CONVERGE, unlike many equilibrium solvers, is guaranteed to converge for any combination of gas species. The RON/MON estimator estimates the research octane number (RON) and motor octane number (MON) of a fuel by finding the critical compression ratio (CCR) at which autoignition occurs and correlating it with the CCR of primary reference fuel (PRF) blends using the LLNL Gasoline Mechanism.
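
To give a sense of what a 0D equilibrium solve takes in and returns, here is the same concept sketched with Cantera’s equilibrate() (CONVERGE’s CEQ solver is a separate, convergence-guaranteed implementation; the mechanism and conditions here are assumed):

```python
# Constant-enthalpy, constant-pressure equilibrium of a stoichiometric
# methane/air mixture (illustration only).
import cantera as ct

gas = ct.Solution("gri30.yaml")
gas.TP = 1500.0, ct.one_atm
gas.set_equivalence_ratio(1.0, "CH4", "O2:1.0, N2:3.76")
gas.equilibrate("HP")                         # solve for the equilibrium state
print(f"Equilibrium temperature: {gas.T:.0f} K")
for sp in ("CO2", "H2O", "CO", "OH"):
    print(f"x_{sp:3s} = {gas[sp].X[0]:.4e}")
```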

One-Dimensional Simulations

For 1D simulations, CONVERGE contains the 1D laminar premixed flame tool, which calculates the flamespeed of a combustion reaction using a freely propagating flame. You can use this tool to ensure your mechanisms yield reasonable flamespeeds for specific conditions and to generate laminar flamespeed tables that are needed for some combustion models, such as G-Equation, ECFM, and TFM. In CONVERGE 3.0, this solver has seen significant improvement in parallelization and scalability, as shown in Fig. 1. You can additionally perform 1D sensitivity analysis to determine how sensitive the flamespeed is to the various reactions and species in your mechanism.

Figure 1. Parallelization (left) and scalability (right) of the CONVERGE flamespeed solver.
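
If you want to experiment with the concept itself, a freely propagating laminar flame calculation looks like this in the open-source Cantera library (this is not CONVERGE’s solver; the mechanism and conditions are placeholder choices):

```python
# Laminar flamespeed of a stoichiometric methane/air mixture from a
# freely propagating 1D flame (illustrative sketch).
import cantera as ct

gas = ct.Solution("gri30.yaml")
gas.TP = 300.0, ct.one_atm
gas.set_equivalence_ratio(1.0, "CH4", "O2:1.0, N2:3.76")

flame = ct.FreeFlame(gas, width=0.03)         # 3 cm domain
flame.set_refine_criteria(ratio=3, slope=0.07, curve=0.14)
flame.solve(loglevel=0, auto=True)
# The inlet velocity of the converged solution is the laminar flamespeed
# (flame.u[0] in older Cantera versions).
print(f"Laminar flamespeed: {flame.velocity[0] * 100:.1f} cm/s")
```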

CONVERGE 3.0 also includes a new 1D reactor model: the plug flow reactor (PFR). PFRs can be used to predict chemical kinetics behavior in continuous, flowing systems with cylindrical geometry. PFRs have commonly been applied to study both homogeneous and heterogeneous reactions, continuous production, and fast or high-temperature reactions.
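
The math behind a PFR is compact: at steady state, axial convection balances the chemical source term. A minimal sketch for a single first-order reaction at constant temperature and velocity, with all rate and flow numbers assumed, looks like this:

```python
# Steady plug flow for a first-order reaction A -> B: u * dX/dz = k * (1 - X),
# marching the conversion X along the reactor axis (illustration only).
import numpy as np

k = 5.0          # 1/s, assumed rate constant at the operating temperature
u = 2.0          # m/s, assumed axial velocity
L = 1.0          # m, reactor length

z = np.linspace(0.0, L, 201)
dz = z[1] - z[0]
X = np.zeros_like(z)
for i in range(1, len(z)):
    X[i] = X[i - 1] + dz * k * (1.0 - X[i - 1]) / u

# Analytic check: X(L) = 1 - exp(-k*L/u) = 1 - exp(-2.5) ~ 0.918
print(f"Outlet conversion: {X[-1]:.3f}")
```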

Chemistry Tools

Zero- and one-dimensional simulation tools aren’t all CONVERGE has to offer. CONVERGE also features a number of tools for optimizing reaction mechanisms and interpreting your chemical kinetics simulation results.

Detailed chemistry calculations can be computationally expensive, but you can decrease computational time by reducing your chemical mechanism. CONVERGE’s mechanism reduction utility eliminates species and reactions that have the least effect on the simulation results, so you can reduce computational expense while maintaining your desired level of accuracy. In previous versions of CONVERGE, mechanism reduction was only available to target ignition delay. In CONVERGE 3.0, you can also target flamespeed, so you can ensure that your reduced mechanism maintains a flamespeed similar to that of the parent mechanism.
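
Conceptually, reduction is an error-controlled pruning loop. The sketch below shows the crudest possible version, greedy species elimination against an ignition-delay target; evaluate_tau is a hypothetical callback standing in for a 0D ignition run, and real reduction methods (including CONVERGE’s utility) are considerably more sophisticated:

```python
# Greedy, error-controlled mechanism reduction (conceptual sketch only).
def reduce_mechanism(species, baseline_tau, evaluate_tau, tol=0.05):
    """Drop species one at a time while ignition delay stays within tol."""
    active = set(species)
    progress = True
    while progress:
        progress = False
        for s in sorted(active):
            trial = active - {s}
            tau = evaluate_tau(trial)         # hypothetical: run 0D ignition
            if abs(tau - baseline_tau) / baseline_tau < tol:
                active = trial                # this species was expendable
                progress = True
                break                         # rescan with the smaller set
    return active

# Demo with a fake evaluator that only "needs" a 10-species core.
core = {"CH4", "O2", "N2", "CO", "CO2", "H2O", "H", "O", "OH", "HO2"}
fake_tau = lambda active: 1.0 if core <= active else 5.0
print(sorted(reduce_mechanism(core | {"C2H6", "CH3OH"}, 1.0, fake_tau)))
```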

CONVERGE additionally offers a mechanism tuning utility to optimize reaction mechanisms. This tool prepares input files for running a genetic algorithm optimization using CONVERGE’s CONGO utility, so you can tune your mechanism to meet specified performance targets.

If you’re developing multi-component surrogate mechanisms, or you need to add additional pathways or NOx chemistry to a fuel mechanism, the mechanism merge tool is the one for you. This tool combines two reaction mechanisms into one and resolves any duplicate species or reactions along the way.

CONVERGE 3.0 will feature new table generation and visualization tools. With the tabulated kinetics of ignition (TKI) and tabulated laminar flamespeed (TLF) tools, you can generate ignition or flamespeed tables that are needed for certain combustion models. To visualize your results, you can run a CONVERGE utility to prepare your tables for visualization in Tecplot for CONVERGE or other visualization software.

Figure 2. 3D visualization of flamespeed as a function of pressure and temperature.

CONVERGE’s suite of chemistry tools is just one of the components that make CONVERGE a robust, multipurpose solver. And just as multipurpose kitchen appliances have more uses during meal prep, CONVERGE’s chemistry capabilities give our software a broad scope of applications, covering not just CFD but all of your chemical kinetics simulation needs. Interested in learning more about CONVERGE or CONVERGE’s chemistry tools? Contact us today!

► Your μ Matters: Understanding Turbulence Model Behavior
    6 Mar, 2019

I recently attended an internal Convergent Science advanced training course on turbulence modeling. One of the audience members asked one of my favorite modeling questions, and I’m happy to share it here. It’s the sort of question I sometimes find myself asking tentatively, worried I might have missed something obvious. The question is this:

Reynolds-Averaged Navier-Stokes (RANS) turbulence models and Large-Eddy Simulation (LES) turbulence models have very different behavior. LES will become a direct numerical simulation (DNS) in the limit of an infinitesimally fine grid, and it shows a wide range of turbulent length scales. RANS does not become a DNS, no matter how fine we make the grid. Rather, it shows grid-convergent behavior (i.e., the simulation results stop changing with finer and finer grids), and it removes small-scale turbulent content.

If I look at a RANS model or an LES turbulence model, the transport equations look very similar mathematically. How does the flow ‘know’ which is which?

There’s a clever, physically intuitive answer to this question, which motivates the development of additional hybrid models. But first we have to do a little bit of math.

Both RANS and LES take the approach of decomposing a turbulent flow into a component to be resolved and a component to be modeled. Let’s define the Reynolds decomposition of a flow variable ϕ as

$$\phi = \bar \phi \; + \;\phi',$$

where the overbar term represents a time/ensemble average and the prime term is the fluctuating term. This decomposition has the following properties:

$$\overline{\overline{\phi}} = \bar \phi \;\;{\rm{and}}\;\;\overline{\phi'} = 0.$$

Figure 1 Schematic of time-averaging a signal.

LES uses a different approach, which is a spatial filter. The filtering decomposition of ϕ is defined as

$$\phi  = \left\langle \phi  \right\rangle + \;\phi'',$$

where the term in the angled brackets is the filtered term and the double-prime term is the sub-grid term. In practice, this is often calculated using a box filter, a spatial average of everything inside, say, a single CFD cell. The spatial filter has different properties than the Reynolds decomposition,

$$\left\langle {\left\langle \phi  \right\rangle } \right\rangle \ne \left\langle \phi  \right\rangle \;\;{\rm{and}}\;\;\left\langle {\phi ''} \right\rangle  \ne 0.$$

Figure 2 Example of spatial filtering. DNS at left, box filter at right. (https://pubweb.eng.utah.edu/~rstoll/LES/Lectures/Lecture04.pdf )
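
These contrasting properties are easy to verify numerically. Here is a quick check on a synthetic signal (a sketch; the signal and filter width are arbitrary):

```python
# The Reynolds fluctuation averages to zero; the sub-grid part of a
# box-filtered field does not, and box filtering is not idempotent.
import numpy as np

rng = np.random.default_rng(0)
phi = 1.0 + 0.3 * rng.standard_normal(4096)   # synthetic "turbulent" signal

def box(f, w=32):
    """Moving-average box filter of width w samples."""
    return np.convolve(f, np.ones(w) / w, mode="same")

print(abs((phi - phi.mean()).mean()) < 1e-12)          # True:  mean of phi' is 0
print(np.abs(box(phi - box(phi))).max() < 1e-12)       # False: <phi''> != 0
print(np.abs(box(box(phi)) - box(phi)).max() < 1e-12)  # False: <<phi>> != <phi>
```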

To derive RANS and LES turbulence models, we apply these decompositions to the Navier-Stokes equations. For simplicity, let’s consider only the incompressible momentum equation. The Reynolds-averaged momentum equation is written as

$$\frac{{\partial \overline {{u_i}} }}{{\partial t}} + \frac{{\partial \overline {{u_i}}\; \overline {{u_j}} }}{{\partial {x_j}}} = - \frac{1}{\rho }\frac{{\partial \overline P }}{{\partial {x_i}}} + \frac{1}{\rho }\frac{\partial }{{\partial {x_j}}}\left[ {\mu \left( {\frac{{\partial \overline {{u_i}} }}{{\partial {x_j}}} + \frac{{\partial \overline {{u_j}} }}{{\partial {x_i}}}} \right) - \frac{2}{3}\mu \frac{{\partial \overline {{u_k}} }}{{\partial {x_k}}}{\delta _{ij}}} \right] - \frac{1}{\rho }\frac{\partial }{{\partial {x_j}}}\left( {\rho \color{Red}{\overline {{{u'}_i}{{u'}_j}}} } \right).$$

This equation looks the same as the basic momentum transport equation, replacing each variable with the barred equivalent, with the exception of the term* in red. That’s where the RANS model will make a contribution.

The LES momentum equation, likewise restricted to incompressible flow (so we can neglect Favre filtering), is written

$$\frac{{\partial \left\langle {{u_i}} \right\rangle }}{{\partial t}} + \frac{{\partial \left\langle {{u_i}} \right\rangle \left\langle {{u_j}} \right\rangle }}{{\partial {x_j}}} =  - \frac{1}{\rho }\frac{{\partial \left\langle P \right\rangle }}{{\partial {x_i}}} + \frac{1}{\rho }\frac{{\partial \left\langle {{\sigma _{ij}}} \right\rangle }}{{\partial {x_j}}} - \frac{1}{\rho }\frac{\partial }{{\partial {x_j}}}\left( {\rho \color{Red}{\left\langle {{u_i}{u_j}} \right\rangle}}  - \rho \left\langle {{u_i}} \right\rangle \left\langle {{u_j}} \right\rangle  \right).$$

Once again, we have introduced a single unclosed term*, shown in red. As with RANS, this is where the LES model will exert its influence.

These terms are physically stress terms. In the RANS case, we call it the Reynolds stress.

$${\tau _{ij,RANS}} =  - \rho \overline {{{u'}_i}{{u'}_j}}.$$

In the LES case, we define a sub-grid stress as follows:

$${\tau _{ij,LES}} = \rho \left( {\left\langle {{{u}_i}{{u}_j}} \right\rangle  - \left\langle {{u_i}} \right\rangle \left\langle {{u_j}} \right\rangle } \right).$$

By convention, the same letter is used to denote these two subtly different terms. It’s common to apply one more assumption to both. Kolmogorov postulated that at sufficiently small scales, turbulence was statistically isotropic, with no preferential direction. He also postulated that turbulent motions were self-similar. The eddy viscosity approach invokes both concepts, treating

$${\tau _{ij,RANS}} = f\left( {{\mu _t},\overline V } \right)$$

and

$${\tau _{ij,LES}} = g\left( {{\mu _t},\overline V } \right),$$

where \(\overline V \) represents the vector of transported variables: mass, momentum, energy, and model-specific variables like turbulent kinetic energy. We have also introduced \({\mu _t}\), which we call the turbulent viscosity. Its effect is to dissipate kinetic energy in a similar fashion to molecular viscosity, hence the name.

If you skipped the math, here’s the takeaway. We have one unclosed term* each in the RANS and LES momentum equations, and in the eddy viscosity approach, we close it with what we call the turbulent viscosity \({\mu _t}\). Yet we know that RANS and LES have very different behavior. How does a CFD package like CONVERGE “know” whether that \({\mu _t}\) is supposed to behave like RANS or like LES? Of course the equations don’t “know”, and the solver doesn’t “know”. The behavior is constructed by the functional form of \({\mu _t}\).

How can the turbulent viscosity’s functional form construct its behavior? Dimensional analysis informs us what this term should look like. A dynamic viscosity has dimensions of density multiplied by length squared per time. If we’re looking to model the turbulent viscosity based on the flow physics, we should introduce dimensions of length and time. The key to the difference between RANS and LES behavior is in the way these dimensions are introduced.

Consider the standard k-ε model. It is a two-equation model, meaning it solves two additional transport equations. In this case, it transports turbulent kinetic energy (k) and the turbulent kinetic energy dissipation rate (ε). This model calculates the turbulent viscosity according to the local values of these two flow variables, along with density and a dimensionless model constant as

$${\mu _t} = {C_\mu }\rho \frac{{{k^2}}}{\varepsilon }.$$

Dimensionally, this makes sense. Turbulent kinetic energy is a specific energy with dimensions of length squared per time squared, and its dissipation rate has dimensions of length squared per time cubed. In a sufficiently well-resolved solution, all of these terms should limit to finite values, rather than limiting to zero or infinity. If so, the turbulent viscosity should limit to some finite value, and it does.

Figure 3 Example of a grid-converged RANS simulation: the ECN Spray A case, with a contour plot for illustration.

LES, in contrast, directly introduces units of length via the spatial filtering process. Consider the Smagorinsky model. This is a zero-equation model that calculates turbulent viscosity in a very different way. For the standard Smagorinsky model,

$${\mu _t} = \rho C_s^2{\Delta ^2}\sqrt {{S_{ij}}{S_{ij}}},$$

where \({C_s}\) is a dimensionless model constant, \({S_{ij}}\) is the filtered rate of strain tensor, and Δ is the grid spacing. Once again, the dimensions work out: density multiplied by length squared multiplied by inverse time. But what do the limits look like? The rate of strain is some physical quantity that will not limit to infinity. In the limit of infinitesimal grid size, the turbulent viscosity must limit to zero! The model becomes completely inactive, and the equations solved are the unfiltered Navier-Stokes equations. We are left with a direct numerical simulation.
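
The contrast in limiting behavior is easy to see numerically. With representative (entirely assumed) local flow values, the two functional forms respond to grid refinement in completely different ways:

```python
# Under grid refinement, a k-epsilon eddy viscosity is grid-independent,
# while the Smagorinsky eddy viscosity falls off as the grid spacing squared.
rho, C_mu, C_s = 1.2, 0.09, 0.17   # density and model constants
k, eps = 1.0, 50.0                 # m^2/s^2 and m^2/s^3, assumed local state
S_mag = 500.0                      # 1/s, assumed resolved strain-rate magnitude

mu_t_rans = C_mu * rho * k**2 / eps             # no grid dependence at all
for delta in (8e-3, 4e-3, 2e-3, 1e-3):          # grid spacing in meters
    mu_t_les = rho * C_s**2 * delta**2 * S_mag
    print(f"dx={delta*1e3:4.1f} mm  mu_t(RANS)={mu_t_rans:.2e}  "
          f"mu_t(LES)={mu_t_les:.2e}")
# mu_t(LES) -> 0 as dx -> 0: the model switches off and we recover DNS,
# exactly as the functional form predicts.
```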

When I was a first-year engineering student, discussion of dimensional analysis and limiting behaviors seemed pro forma and almost archaic. Real engineers in the real world just use computers to solve everything, don’t they? Yes and no. Even those of us in the computational analysis world can derive real understanding, and real predictive power, from considering the functional form of the terms in the equations we’re solving. It can even help us design models with behavior we can prescribe a priori.

Detached Eddy Simulation (DES) is a hybrid model, taking advantage of the similarity of functional forms of the turbulent viscosities in RANS and LES. DES adopts RANS-like behavior near the wall, where we know an LES can be very computationally expensive. DES adopts LES behavior far from the wall, where LES is more computationally tractable and unsteady turbulent motions are more often important.

The math behind this switching behavior is beyond the scope of a blog post. In effect, DES solves the Navier-Stokes equations with some effective \({\mu _{t,DES}}\) such that \({\mu _{t,DES}} \approx {\mu _{t,RANS}}\) near the wall and \({\mu _{t,DES}} \approx {\mu _{t,LES}}\) far from the wall, with \({\mu _{t,RANS}}\) and \({\mu _{t,LES}}\) selected and tuned so that they are compatible in the transition region. Our understanding of the derivation and characteristics of the RANS and LES turbulence models allows us to hybridize them into something new.

Figure 4 DES simulation over a backward facing step with CONVERGE
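
Still, the flavor of the switch is easy to show. The original DES97 formulation simply replaces the RANS length scale with the minimum of the wall distance and a calibrated multiple of the grid spacing; a schematic sketch:

```python
# Schematic DES97-style length-scale switch (illustration only; production
# models add shielding functions and carefully calibrated constants).
def des_length_scale(d_wall, delta, c_des=0.65):
    """RANS length scale near the wall, LES length scale away from it."""
    return min(d_wall, c_des * delta)

# Near the wall the wall distance wins (RANS-like);
# far from the wall the grid spacing wins (LES-like).
for d in (1e-4, 1e-3, 1e-2, 1e-1):               # wall distance in meters
    print(f"d_wall={d:7.4f} m  l_DES={des_length_scale(d, delta=5e-3):.4f} m")
```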

*This term is a symmetric second-order tensor, so it has six scalar components. In some approaches (e.g., Reynolds Stress models), we might transport these terms separately, but the eddy viscosity approach treats this unknown tensor as a scalar times a known tensor.

► What’s Knockin’ in Europe?
  29 Jan, 2019

The Convergent Science GmbH team is based in Linz, Austria and provides support to our European clients and collaborators alike as they tackle the hard problems. One of the most interesting and challenging problems in the design of high efficiency modern spark-ignited (SI) internal combustion engines is the prediction of knock and the development of knock-mitigation strategies. At the 2018 European CONVERGE User Conference (EUC), several speakers presented recent work on engine knock.

This winter, when I cold-started my car, I heard a loud knocking noise. Usually, though, knocking is more prevalent in engines that operate near the edge of the stability range. The first step of knocking is spontaneous secondary ignition (autoignition) of the end-gases ahead of the flame front. When the pressure waves from this autoignition hit the walls of the combustion chamber, they often make a knocking noise and damage the engine. Knock is challenging to simulate because you must correctly calculate critical local conditions and simultaneously track the pressure waves that are traveling rapidly across the combustion chamber.
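
Whether the pressure trace comes from a transducer or from a simulation, a common way to quantify those waves is to band-pass the signal around the chamber’s resonant frequencies and score the peak oscillation amplitude, often called MAPO. A sketch with an assumed sampling rate, frequency band, and a synthetic trace:

```python
# MAPO-style knock intensity from a cylinder pressure trace (generic
# post-processing sketch, not a CONVERGE feature).
import numpy as np
from scipy.signal import butter, filtfilt

def knock_intensity(pressure, fs, band=(4e3, 20e3)):
    """Maximum amplitude of the band-passed pressure oscillation (MAPO)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.max(np.abs(filtfilt(b, a, pressure)))

# Synthetic example: smooth compression hump plus a 7 kHz knock ring.
fs = 200e3                                       # Hz, assumed sampling rate
t = np.arange(0, 0.02, 1 / fs)
trace = 40e5 * np.exp(-(((t - 0.01) / 4e-3) ** 2))        # ~40 bar hump
trace += 2e5 * np.sin(2 * np.pi * 7e3 * t) * (t > 0.011)  # knock onset
print(f"MAPO = {knock_intensity(trace, fs) / 1e5:.1f} bar")
```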

To enable you to easily model these conditions, CONVERGE offers autonomous meshing, full-cycle simulation, and flexible boundary conditions. Adaptive Mesh Refinement allows you to add cells and spend computational time on areas where the knock-relevant parameters (such as local pressure difference, heat release rate, and species mass fraction of radicals that indicate autoignition) are rapidly changing. CONVERGE can predict autoignition with surrogate fuels, under changing physical engine parameters, and across a spectrum of operating conditions.

EUC keynote speaker Vincenzo Bevilacqua from Porsche Engineering presented an intriguing new approach (redefining the knock index) to evaluate the factors that may contribute to knock and to identify a clear knock limit. In another study, researchers from Politecnico di Torino investigated the feasibility of water injection as a knock-mitigation strategy. In yet another study, Max Mally and his colleagues from VKA RWTH Aachen University used RANS to successfully reproduce combustion and knock with a spark-timing sweep approach at various exhaust gas recirculation (EGR) percentages. You can see in the figure below that they were able to capture the moving pressure waves.


The rapid propagation of the pressure waves across the combustion chamber functions much like a detonation. Source: Mally, M., Gunterh, M., and Pischinger, S., “Numerical Study of Knock Inhibition with Cooled Exhaust Gas Recirculation,” CONVERGE User Conference-Europe, Bologna, Italy, March 19-23, 2018.

Advancing the spark, using lean burn, turbo-charging, or running at a high compression ratio can increase the likelihood of knock. However, each cycle in an SI engine is unique, and thus autoignition is not a consistent phenomenon. When simulating an SI engine, it is critical to simulate multiple cycles to identify the limits of the operating conditions at which knock is likely to occur. (Fortunately, CONVERGE can easily run multi-cycle simulations!)

Knock is one of the limiting factors in engine design because many of the techniques that improve the thermal efficiency and enable downsizing of the engine increase the likelihood of knock. Here at Convergent Science, we encourage you to solve the hard problems. Go on, knock it out of the park.


► 2018: CONVERGE-ING ON A DECADE
  17 Dec, 2018

Convergent Science thrived in 2018, with many successes, rapid growth, and consistent innovation. We celebrated the tenth anniversary of the commercial release of CONVERGE. The Convergent Science employee count surpassed 100, and our India office tripled in size. We formed new partnerships and collaborations and continued to bring CONVERGE to new application areas. Simultaneously, we endeavored to increase the prominence of CONVERGE in internal combustion applications and grew our market share.

Our dedicated team at Convergent Science ensures that CONVERGE stays on the cutting edge of CFD software—implementing new models, enhancing CONVERGE features, increasing simulation speed and accuracy—while also offering exceptional support and customer service to our clients.

New Application Successes

Increasingly, clients are using CONVERGE for new applications and great strides are being made in these fields. Technical presentations and papers on gerotor pumps, blood pumps, reciprocating compressors, scroll compressors, and screw machines this year reflected CONVERGE’s increased use in the pumps and compressors markets. Research projects using CONVERGE to model gas turbine combustion, lean blow-out, ignition, and relight are going strong. In the field of aftertreatment, new acceleration techniques have been implemented in CONVERGE to enable users to accurately predict urea deposits in Urea/SCR aftertreatment systems while keeping pace with rapid prototyping schedules. In addition, we were thrilled to see the first paper using CONVERGE for offshore wind turbine modeling published this year, as part of a collaborative effort with the University of Massachusetts Amherst.

CONVERGE Featured at SAE, DOE Merit Review, and ASME ICEF

CONVERGE’s broad use in the automotive industry was showcased at the Society of Automotive Engineers World Congress Experience (SAE WCX18), with more than 30 papers presenting CONVERGE results. Convergent Science cultivates collaboration with industry, academic, and research institutions, and the benefit of these collaborations was prominently displayed at SAE WCX18. Organizations such as General Motors, Caterpillar, Ford, Jaguar Land Rover, Isuzu Motors, John Deere, Renault, Aramco Research Center, Argonne National Laboratory, King Abdullah University of Science and Technology (KAUST), Saudi Aramco, and the University of Oxford all authored papers describing CONVERGE results. These papers spanned a wide array of topics, including fuel injection, chemical mechanisms, HCCI, GCI, water injection, LES, spray/wall interaction, abnormal combustion, machine learning, soot modeling, and aftertreatment systems.

At the 2018 DOE Merit Review, CONVERGE was featured in 17 of the advanced vehicle technologies projects that were reviewed by the U.S. Department of Energy. The broad range of topics of the projects is a testament to the versatility and broad applicability of CONVERGE. The research for these projects was conducted at Argonne National Laboratory, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratories, the Department of Energy, National Renewable Energy Laboratory, and the University of Michigan.

CONVERGE was once again well represented at the ASME Internal Combustion Engine Fall Technical Conference (ICEF). At ASME ICEF 2018, 18 papers included CONVERGE results, with topics ranging from ignition systems and injection strategies to emissions modeling and predicting cycle-to-cycle variation. I was honored to have the opportunity to further my cause of defending the IC engine in a keynote presentation.

New Partnerships and Collaborations

At Convergent Science, we take pride in fostering partnerships and collaborations with companies and institutions to spark innovation and bring our best software to the CFD community. This year, we renewed our partnership with Roush Yates Engines, who had a fantastic 2018 season, achieving the company’s 350th win and winning the Monster Energy NASCAR Cup Series Championship. We formed a new partnership with Tecplot and integrated their industry-leading visualization software into CONVERGE Studio. In addition, we entered into new partnerships with the National Center for Supercomputing Applications and two Dassault Systèmes subsidiaries, Spatial Corp. and Abaqus. These partnerships improve the usability and applicability of CONVERGE and help CONVERGE reach new markets.

CONVERGE in Italy

We had a great showing of CONVERGE users at our second European CONVERGE User Conference held this year in Bologna, Italy. Attendees shared their latest research using CONVERGE for a host of different applications, from modeling liquid film boiling and mitigating engine knock to developing turbulent combustion models and simulating premixed burners with LES. For one of our networking events, we rented out the Ferrari Museum in Maranello, where we were treated to a tour of the museum and ate dinner surrounded by cars we wished we owned. We also enjoyed traditional Bolognese cuisine at the Osteria de’ Poeti and a reception at the Garganelli Restaurant. 

Turning 10 at the U.S. CONVERGE User Conference

It seemed only fitting to celebrate ten years of CONVERGE back where it all started in Madison, Wisconsin. During the fifth annual North American User Conference, we commemorated CONVERGE’s tenth birthday with a festive evening at the historic Orpheum Theater in downtown Madison. During the celebration, we heard from Jamie McNaughton of Roush Yates Engines, who discussed the game-changing impact of CFD on creating winning racing engines. Physics Girl Dianna Cowern entertained us with her live physics demonstrations and her unquenchable enthusiasm for all things science. I concluded the evening with a brief presentation (which you can check out below) reflecting on the past decade of CONVERGE and looking forward to the future. We were incredibly grateful to be able to celebrate the successes of CONVERGE with our users who have made these past ten years possible.

In addition to our 10-year CONVERGE celebration, we hosted our third trivia match at the Convergent Science World Headquarters. At the beautiful Madison Club, we heard a fascinating round of presentations on topics including gas turbine modeling, offshore fluid-structural dynamics, machine learning, and a wide range of IC engine applications.

Convergent Science India

The Convergent Science India office in Pune celebrated its one-year anniversary in August. The office has transformed in the span of the last year and a half. The employee count more than tripled—from two employees at the end of 2017 to seven at the end of 2018. Five servers are now up and running and the office is fully staffed. We’re thrilled with the service and support our Pune office has been able to offer our clients all around India.

CONVERGE 3.0 Coming Soon

CONVERGE 3.0 is slated to be released soon, and we truly believe this new version of CONVERGE will once again change the CFD game. In 3.0, you can look forward to our new boundary layer mesh and inlaid mesh features, which will allow greater meshing flexibility for accurate results at less computational cost. Our new partnership with Spatial Corp. will enable CONVERGE users to directly import CAD files into CONVERGE Studio, greatly streamlining our Studio users’ workflow. We’ve also focused a lot of our attention this year towards enhancing our chemistry tools to be more efficient, robust, and applicable to an even greater range of flow and combustion problems. We’ve added new 0D and 1D reactors, including a perfectly stirred reactor, 0D HCCI engine, RON and MON estimators, plug flow reactors, and improved our 1D laminar flame solver. Additionally, we enhanced our mechanism reduction capability by targeting both ignition delay and laminar flamespeed. But perhaps the most anticipated aspect of CONVERGE 3.0 is the scaling. 3.0 demonstrates dramatically superior parallelization compared to 2.4 and shows significant speedup even on thousands of cores.

Looking Ahead

2019 promises to be an exciting year. With the upcoming release of CONVERGE 3.0, we’re looking forward to growing CONVERGE’s presence in new application areas, continuing our work on pumps and compressors, and expanding our presence in aftertreatment and gas turbine markets. We will continue working hard to expand the usage of CONVERGE in the European, Asian, and Indian automotive markets. Above all, we look forward to more innovation, more collaboration, and continuing to provide unparalleled support to our clients. Want to join us? Check out our website to find out how CONVERGE can help you solve the hard problems.


Kelly looks back on the past decade of CONVERGE during the 10-Year Celebration at the 2018 CONVERGE User Conference-North America. The video Kelly references in his presentation is a video tribute to CONVERGE that was played earlier in the evening, Turning 10: A CONVERGE History.
► Harness the Power of CONVERGE + GT-SUITE with Unlimited Parallelization
    5 Nov, 2018

Imagine that you are modeling an engine. Engines are complex machines, and accurately modeling an engine is not an easy undertaking. Capturing in-cylinder dynamics, intake and exhaust system characteristics, complicated boundary conditions, and much more creates a problem that often takes multiple software suites to solve.

Convergent Science has a solution: CONVERGE Lite—and we’ve just introduced a new licensing option.

CONVERGE Lite is a reduced version of CONVERGE that comes free of charge with every GT-SUITE license. Gamma Technologies, the developer of GT-SUITE, and Convergent Science combined forces to allow users of GT-SUITE to leverage the power of CONVERGE.

CONVERGE LITE + GT-SUITE OVERVIEW

GT-SUITE is an industry-leading CAE system simulation tool that combines 1D physics modeling, such as fluid flow, thermal analysis, and mechanics, with 3D multi-body dynamics and 3D finite element thermal and structural analysis. GT-SUITE is a great tool for a wide variety of system simulations, including vehicles, engines, transmissions, general powertrains, hydraulics, and more.

Let’s think again about modeling an engine. GT-SUITE is ideal for the primary workflow of engine design. But what if you want to model 3D mixing in an engine intake manifold to track the cylinder-to-cylinder distribution of recirculated exhaust gas? Or simulate complex 3D flow through a throttle body to find the optimal design to maximize power? In these scenarios, 1D modeling is not sufficient on its own.

Visualization of flow through an optimized throttle body generated using data from a CONVERGE Lite + GT-SUITE coupled simulation.

In this type of situation where 3D flow analysis is critical, GT-SUITE users can invoke CONVERGE Lite to obtain detailed 3D analysis at no extra charge. CONVERGE Lite is fully integrated into GT-SUITE and is known for being user friendly. One of the biggest advantages of CONVERGE Lite is that it allows GT-SUITE users access to CONVERGE’s powerful autonomous meshing. With automatic mesh generation, fixed mesh embedding, and Adaptive Mesh Refinement, CONVERGE Lite eliminates user meshing time and allows for efficient grid refinement. In addition, CONVERGE Lite comes with automatic CFD species setup and automatic setup of fluid properties to match the properties in the GT-SUITE model. And as if that weren’t enough, recently CONVERGE Lite has been enhanced to include a license for Tecplot for CONVERGE, an advanced 3D post-processing software.

LICENSING

You can run CONVERGE Lite in serial for free if you have a GT-SUITE license. If you want to run CONVERGE Lite in parallel, you can purchase parallel licenses from Convergent Science. We have just introduced a new low-cost option for running CONVERGE Lite in parallel. For one flat fee, you can obtain a license from Convergent Science to run CONVERGE Lite on an unlimited number of cores. Even though CONVERGE Lite contains many features to enhance efficiency, 3D simulations can be computationally expensive. This new option is a great way to affordably speed up your integrated GT-SUITE + CONVERGE Lite simulations.

CONVERGE Lite is a robust tool, but it does not contain all of the features of the full CONVERGE solver. For example, if you want to take advantage of advanced physical models, like combustion, spray, or volume of fluid, or you want to simulate moving walls, such as pistons or poppet valves, a full CONVERGE license is required. With both a full CONVERGE license and a GT-SUITE license, you can also take advantage of CONVERGE’s detailed chemistry solver, multiphase flow modeling, and other powerful features while performing advanced CONVERGE + GT-SUITE coupled simulations.

The combined power of CONVERGE and GT-SUITE opens the door to a whole array of advanced simulations, like engine cylinder coupling, exhaust aftertreatment coupling, or fluid-structure interaction coupling, that cannot be accomplished with just one of the programs.

Contact a Convergent Science salesperson for licensing details and pricing information.

Contact sales

Numerical Simulations using FLOW-3D top

► CFD Engineer
  15 Jul, 2019

Come work in one of the best small cities in the US¹ for one of the best companies in New Mexico²! Flow Science is a growing tech company with deep roots looking for outstanding engineers with an interest or expertise in the aerospace, automotive, additive manufacturing, and consumer products industries.

Principal responsibilities and key requirements

CFD engineers work at the intersection of classical physics, numerical methods, and computer science. We spend our days using our expertise in these fields to solve complex real-world engineering problems, to teach others about applied CFD, and to guide our development teams to create new models that grow our capabilities and application areas. This challenging and dynamic role requires the following skills to be successful:

  • An engineering degree from an ABET or equivalently accredited university and some work experience
    • M.S. degree (mechanical, aerospace, or chemical engineering preferred) and engineering internship experience OR
    • B.S. degree (mechanical, aerospace, or chemical engineering preferred) and 2+ years of engineering work experience
  • Strong understanding of engineering fundamentals, particularly fluid mechanics, heat transfer, and solid mechanics
  • Excellent oral communication, technical writing, and interpersonal skills
  • Ability to comfortably navigate a diverse, multicultural environment
  • Excellent organizational skills
  • Common sense and an unending desire to learn
  • To comply with U.S. Government regulations, including the International Traffic in Arms Regulations (ITAR), you must be a U.S. citizen, lawful permanent resident of the U.S., protected individual as defined by 8 U.S.C. 1324b(a)(3), or otherwise eligible to obtain the required authorizations.

Preferred skills and experience

Exceptional CFD engineers usually draw heavily on the following skills and experience:

  • 2+ years of relevant industry or academic experience (e.g., additive manufacturing, consumer product processing, coating, analysis of complex fluids, propellant management design, slosh analysis, etc.)
  • Experience with CFD, FEA, or other numerical analysis
  • Experience with experimental setups and data analysis
  • Experience with 3D CAD
  • Programming experience (FORTRAN and Python)
  • Demonstrated initiative in work projects
  • EIT certification

Benefits

Flow Science offers an exceptional benefits package to full-time employees including medical, dental, and vision insurance, life and disability insurance, 401(k) and profit-sharing plans with generous employer matching, and an incentive compensation plan that offers a year-end bonus opportunity of up to 30% of base salary.

Contact

Still interested? Submit your resume and a cover letter to careers@flow3d.com. Paper copies may be submitted via mail (Attention: Human Resources, 683 Harkle Road, Santa Fe, NM 87505) or fax (505-982-5551). Not quite what you’re looking for? Check out our other openings on our Careers Page >

1 HuffPost listed Santa Fe, NM as one of the top 5 small cities in the US

2 Flow Science has been named one of the Best Places to Work in New Mexico by Albuquerque Business First.

► User Conference Proceedings Available
  11 Jul, 2019

The FLOW-3D European Users Conference 2019 was held June 3-5 at the Sheraton Diana Majestic in Milan, Italy. Engineers, researchers and scientists from some of Europe’s most renowned companies and institutions gathered to hone their simulation skills, explore new modeling approaches and learn about the latest software developments. This year’s conference featured metal casting and water & environmental application tracks, advanced training for workflow automation with a focus on optimization, in-depth technical presentations by FLOW-3D users, and the latest product developments presented by Flow Science’s senior technical staff. The conference was co-hosted by XC Engineering, the official distributor of FLOW-3D products in Italy and France.

Conference Proceedings Available!

We are pleased to announce that the conference proceedings for the FLOW-3D European Users Conference 2019 are now available for download. As always, our users have outdone themselves with the originality and quality of their presentations.

Advanced Training: Workflow Automation

Engineers need to be able to deliver project analyses faster and more efficiently than ever before. This is why FLOW-3D and FLOW-3D CAST have built-in options for workflow automation, a modular text-driven structure that allows for easy scripting, and batch postprocessing. In this advanced training, we will review the features that can help you save time and money through automation and optimization.

The Workflow Automation advanced training will take place the afternoon of June 3, from 14:00 – 18:00 at the Diana Majestic Sheraton. You can sign up for the training when you register for the conference.

Important Dates

  • May 24: Presentations Due
  • June 3: Advanced Training
  • June 3: Opening Reception
  • June 4: Tour of Milan
  • June 4: Conference Dinner

Conference Fees

  • Advanced Training – Automation: 300 €
  • Day 1 & Day 2 of the Conference: 300 €
  • Day 1 of the Conference: 200 €
  • Day 2 of the Conference: 200 €
  • Guest Fee (social events only): 50 €

Past conference presentations are available through our website.

Tour of Milan!

We invite you to see the sights of Milan! All conference attendees are invited to a complimentary city tour on Tuesday, June 4. The tour will take place after the conference on June 4 from 17:30 –  19:15. Immediately following the tour, we will convene for the conference dinner at Toscanino. Please sign up for the tour when you register for the conference. Thank you to our tour sponsor, Protesa SACMI.

Milan city tour

Tour Highlights

  • Milan Central Station, Pirelli Tower
  • Palazzo Lombardia
  • Porta Nuova Skyscrapers District
  • Indro Montanelli Park, Villa Belgiojoso Bonaparte, Natural History Museum, Planetarium
  • Via Montenapoleone Fashion District
  • Brera Art District
  • Sforza Castle
  • Via Dante
  • Santa Maria delle Grazie
  • Navigli
  • Basilica of Sant’Ambrogio
  • Stock Exchange
  • Duomo di Milano and Piazza

Opening Reception

The conference will commence with an Opening Reception on Monday, June 3 at 19:00. We invite all conference attendees and their guests for a welcome aperitif and appetizers. The reception will take place in the Gazebo in the conference hotel’s garden.

Conference Dinner

We are excited to announce that this year’s conference dinner will be held at Toscanino. Attendees will experience an excellent representation of the cuisine of Tuscany. The dinner will be held the evening of Tuesday, June 4. All conference attendees are invited to the conference dinner as part of their registration.

Conference dinner

Presenter Information

Each presenter will have a 30-minute speaking slot, including Q&A. All presentations will be distributed to the conference attendees and on our website after the conference. A full paper is not required for this conference. Please contact us if you have any questions about presenting at the conference. XC Engineering will sponsor this year’s Best Presentation Award.

Conference Hotel

The conference will be held at the Sheraton Diana Majestic, a historic hotel in the heart of Milan that serves as the perfect base for shopping, business or discovering the city’s rich history. The hotel is located at Viale Piave 42, 20129. If you wish to book accommodations at the hotel, please contact the Sheraton Diana Majestic directly through their website or call +39 02 20581.

Diana Majestic Sheraton

Nearby Hotels

There are many hotels near the Sheraton Diana Majestic. We’ve researched some of these possibilities and ranked our choices below, along with their distance from the conference hotel.

#1: Hotel Teco
Via Lazzaro Spallanzani 27, 20129 Milan (0.7 km)
TripAdvisor Review

 #2: Hotel Sanpi Milano
Via Lazzaro Palazzi 18 | Corso Buenos-Aires/Area Porta Venezia, 20124 Milan (0.65 km)
TripAdvisor Review

 #3: WORLDHOTEL Cristoforo Colombo
Corso Buenos Aires 3, 20124 Milan (0.35 km)
TripAdvisor Review

#4: Best Western Plus Hotel Galles
Piazza Lima 2, 20124 Milan (1.0 km)
TripAdvisor Review

#5: Hotel Manin
Via Daniele Manin 7, 20121 Milan (1.2 km)
TripAdvisor Review

#6: Hotel Cavour
Via Fatebenefratelli 21, 20121 Milan (1.2 km)
TripAdvisor Review

Milan

Milan is the second most populous city in Italy after Rome and is the country’s leading industrial city, with a multifaceted identity that offers attractions in the fields of art, commerce, design, education, fashion, finance, and tourism. The city is brimming with iconic art and architecture from Roman times to the Renaissance and beyond to the contemporary era. Famous symbols of the city include the Duomo, an Italian Gothic cathedral that took 600 years to complete and now stands as the largest church in Italy, and Sforza Castle, home to several Dukes and Duchesses of Milan, as well as artists including Leonardo da Vinci and Michelangelo. Today the city is recognized as the world’s fashion and design capital, thanks to several international events and fairs, including Milan Fashion Week and the Milan Furniture Fair, which are among the world’s largest in terms of revenue, visitors and growth.

Milan Underground

If you are coming from the airport to the conference hotel, you can take the Milan Underground. The stop for the hotel is P.ta Venezia.

Castello Sforzesco - in the heart of Milan city centre. Courtesy Shutterstock.
Modern skyscrapers and architecture (vertical gardens). Courtesy Shutterstock.

500th Anniversary of the Death of Leonardo da Vinci

2019 marks the 500th anniversary of Leonardo da Vinci’s death. A series of events are planned around the world and of course in Italy, particularly in Florence and Milan, the two cities where Da Vinci spent most of his time. Milan is preparing for this period with many events: rooms of the Sforza Castle containing Leonardo’s fresco, usually closed to visitors, will be open to the public, and Leonardo’s Codex and other art pieces, including tapestries and models, will be shown throughout the city.

Da Vinci’s Il Cavallo Case Study

‘Realizing Da Vinci’s Il Cavallo’ was a collaboration between XC Engineering and the Institute and Museum of the History of Science (IMSS). Using Da Vinci’s notes on the casting of Il Cavallo, collected in a 34-page handbook, the IMSS and XC Engineering were able to demonstrate that Il Cavallo, often referred to as “the horse that never was,” can be successfully cast as designed. Read the full case study >

More Information

Do you have questions about the conference? Please call or email Amanda Ruggles at +1 505-982-0088 or amanda@flow3d.com.

► Learn CFD and teach CFD better
    3 Jul, 2019

Receiving a high-quality CFD education is crucial in the world of civil, mechanical, computational and biochemical engineering. Accurate, versatile CFD software that offers state-of-the-art visualization and high-performance computing capabilities can enhance the learning experience of students and help them become CFD experts. This is why Flow Science offers Research and Teaching Assistance programs that provide free CFD software to students and teachers.

Visualization results of a centrifugal casting simulation in FLOW-3D CAST rendered using FlowSight.

Many academic researchers face the same problem: a lack of affordable, accessible, high-quality CFD software. Commercial software can be out of the budget of an academic program. And while open-source software tools are free, they require command-line expertise and a lot of troubleshooting, which takes time away from learning CFD.

At Flow Science we know that the quality of CFD education is an invaluable part of students’ success during their academic career and beyond. Our academic assistance program is designed to support the educational goals of participants and improve the quality of their research.

Flow Science offers two academic assistance programs in the US and Canada: the Research Assistance program and the Teaching Assistance program.

The Research Assistance Program

This program was created for individual researchers (students or teachers) at universities. A Research Assistance license is limited to a 4-month period, with the possibility of extensions. Researchers can request FLOW-3D, our multi-application CFD product, or FLOW-3D CAST, our dedicated metal casting product. We also include our state-of-the-art post-processing and visualization tool FlowSight with all our CFD software.

Many universities are currently using or have used our Research Assistance program. Researchers at Western Michigan University are using FLOW-3D CAST to estimate the erosion on various sand-casting systems for manufacturing automotive parts. The process involves complex turbulent flows and solidification. At the University of Ottawa, researchers are using FLOW-3D to estimate scour caused by tsunami-induced coastal inundation around structures. FLOW-3D is being used at the University of Cincinnati to determine the extent of lateral dispersion in the Ohio River. And researchers at the University of Pretoria used FLOW-3D to simulate the inkjet printability of inks through a piezoelectric nozzle.

FLOW-3D has also been used in cutting-edge research. At the University of California, San Diego, researchers used the software to model ways to mitigate congenital heart defects.

The Teaching Assistance Academic Program

This program is designed for teachers who want to integrate CFD simulation software into their courses. An instructor can request FLOW-3D or FLOW-3D CAST software. FlowSight will be included. The Teaching Assistance license is annual and renewable. We provide enough seats per class/department so that each student can fully explore the software.

Some of our FLOW-3D Teaching Assistance participants include California State University, Texas A&M University, the University of Michigan, National Technical University of Athens and the University of Wisconsin-Madison.

The Application Process and Getting Started

The application is quick and easy to fill out. Research Assistance applicants will find an optional section to upload a research proposal. A good proposal should contain the summary of your research and a description of how you want to use the software. If your research proposal is accepted, we promptly provide you with a license without any limitations. If your proposal is not accepted or your application does not include a research proposal, your license will have cell count and core count limitations. All licenses include access to online video tutorials and instructions to get you started.

The Teaching Assistance application is similar to the Research Assistance application. Applicants are also asked to provide course IDs, details about the number of students in the class, and the weight of the FLOW-3D component in the final student grade.

While we do not offer technical support as part of the academic assistance programs, we have developed a series of tutorials to help you familiarize yourself with the software interface, suggested workflows, and best practices.

Yes, you can apply for both programs!

Please contact me at adwaith@flow3d.com for any questions regarding these academic programs.

► Flow Science Earns Recognition as New Mexico Family-Friendly Workplace
    2 Jul, 2019

Status will give Flow Science an edge in the competition to attract the highest-quality applicants

Santa Fe, NM, July 2, 2019 — For the third year in a row, Flow Science, Inc. has earned distinction for its workplace policies from Family Friendly New Mexico, a statewide project developed to recognize companies that have adopted policies that give New Mexico businesses an edge in recruiting and retaining the best employees.

“As we grow the state’s economy, we have the opportunity to be a national leader in offering New Mexicans workplaces that help companies attract and keep the best workers,” said Giovanna Rossi, head of Family Friendly New Mexico. “Implementing family-friendly policies can be a simple, concrete investment a company can make to ensure it can compete for highly qualified employees. Studies have shown that costs associated with creating family-friendly benefits are more than made up for in improved productivity, employee morale and employee retention.”

Flow Science is committed to continuing to offer a competitive, best-in-class total compensation package for its full-time employees. Along with excellent benefits such as employer-paid health insurance, a generous employer match on employees’ 401(k) contributions and substantial paid time off policies that make us an employer of choice, Flow Science offers parental leave for the birth or adoption of a child, a wellness allowance for stress reducing and health enhancing activities, and flexibility in work arrangements.

“We want to attract, hire and keep the best employees and we believe our benefits and accommodations are what today’s most highly qualified employees around the nation are looking for,” said Aimee Abby, Human Resources Manager at Flow Science.

The New Mexico Task Force on Work Life Balance created the New Mexico Family Friendly Business Award to recognize and celebrate New Mexico businesses that have family friendly policies in place, including paid leave, health support, work schedules and economic support. More information is available at http://www.nmfamilyfriendlybusiness.com/.

► FLOW-3D Workshops: Water Civil Infrastructure
    6 Jun, 2019

Curious about FLOW-3D? Want to learn about how we go about modeling the most challenging free surface hydraulic applications? 

Our workshops are designed to deliver focused, hands-on, yet wide-ranging instruction that will leave you with a thorough understanding of how FLOW-3D is used in key water infrastructure industries. In the morning, you will explore, through hands-on examples, the hydraulics of typical dam and weir cases, municipal conveyance and wastewater problems, and river and environmental applications. In the afternoon, you will be introduced to more sophisticated physics models, including air entrainment, sediment and scour, thermal plumes and density flows, and particle dynamics. By the end of the day, you will have set up six models, learned the user interface and the steps common to three classes of hydraulic problems, and used the advanced post-processing tool FlowSight to analyze the results of your simulations. This one-day class is comprehensive yet accessible for engineers new to CFD methods.

All workshop registrants* will receive a 30-day license of FLOW-3D.

FLOW-3D workshop
A FLOW-3D workshop for water and environmental applications in Lyon, France. A special thank you to our host, Électricité de France.

Cancellation policy: For a full refund of the registration fee, attendees must cancel their registration by 5:00 pm MST one week prior to the date of the workshop. After that date, no refunds will be given.

Register for a Water & Environmental Workshop


2019 FLOW-3D Workshops

  • August 1
    Hatch
    Calgary, Alberta
  • August 20
    FreshWater Engineering
    Madison, WI
  • September 13
    Stantec
    New Orleans, LA
  • September 18
    IIHR—Hydroscience & Engineering
    Iowa City, IA

Workshop Details

  • Registration is limited to 12-15 attendees
  • Cost: $499 (private sector); $299 (government); $99 (academic)
  • 9:00 am – 4:00 pm
  • 30-day FLOW-3D license*
  • Bring your own laptop and mouse to follow along, or just watch
  • Lunch provided by Flow Science

*Workshop licenses only available to prospective or lapsed customers.

Host a workshop!

If you would like to host a workshop, we are happy to travel to your location. 

How does it work?

You provide a conference room for up to 15 attendees, a projector and Wi-Fi. Flow Science provides the training, workshop materials, lunch and licenses. As host, you also receive three workshop seats free of charge, which you can offer to your own engineers, to consultants, or to partnering companies.

Workshop Locations

August 1, 2019

Hatch
840 – 7th Avenue SW 
Suite 400 
Calgary, Alberta T2P 3G2

August 20, 2019

FreshWater Engineering
30 W. Mifflin St.
10th Floor
Madison, WI 53703

September 13, 2019

Stantec
1340 Poydras St.
Suite 1420
New Orleans, LA 70112

September 18, 2019

IIHR—Hydroscience & Engineering
The University of Iowa
100 C. Maxwell Stanley Hydraulics Lab
Iowa City, IA 52242

About the Instructors

Brian Fox, CFD Engineer

Brian Fox is a senior applications engineer with Flow Science who specializes in water and environmental modeling. Brian received an MS in Civil Engineering from Colorado State University with a focus on river hydraulics and sedimentation. He has over 10 years of combined experience working within private, public and academic sectors in water and environmental engineering applications. His experience includes using 1D, 2D and 3D hydraulic models for projects including fish passage, river restoration, bridge scour analysis, sediment transport modeling and analysis of hydraulic structures.

John Wendelbo, Director of Sales

John Wendelbo, Director of Sales, focuses on modeling challenging water and environmental problems. John graduated from Imperial College with an MEng in Aeronautics, and from Southampton University with an MSc in Maritime Engineering Science. John joined Flow Science in 2013.

About our workshops

Our workshops provide attendees with a valuable opportunity to learn about FLOW-3D and its powerful multiphysics modeling capabilities. These workshops are designed to cover the fundamentals of specific modeling simulations, provide hands-on learning, and allow attendees to test drive the software by building a model from start to finish. Additionally, each participant receives a free 30-day license and access to tutorial videos and practice examples.

FLOW-3D Workshop
A very successful FLOW-3D workshop for water and environmental applications in Bangkok, organized by our Thai partner DTA and hosted generously by King Mongkut's University of Technology Thonburi (KMUTT). Special thanks to Prof. Chaiyuth Chinnarasri.

Who should attend?

  • Practicing engineers working in the water resources, environmental, energy and civil engineering industries
  • Regulators and decision makers looking to better understand what state-of-the-art tools are available to the modeling community
  • All modelers working in the field of environmental hydraulics

What will you learn?

  • How to import geometry and set up free surface hydraulic models, including meshing and initial and boundary conditions.
  • How to add complexity by including sediment transport and scour, particles, scalars and turbulence.
  • How to use sophisticated visualization tools such as FlowSight to effectively analyze and convey simulation results.
  • Advanced topics, including air entrainment and bulking phenomena, shallow water and hybrid 3D/shallow water modeling, and chemistry.

You’ve completed the one-day workshop, now what?

We recognize that not everything will be absorbed in one day, and that you may want to try FLOW-3D on one of your own problems or compare CFD results against prior measurements from the field or the lab. After the workshop, your license will be extended for another month to use on your workstation. During this time you will have access to our technical staff to help you work through your specifics: we are here to help you at every step.

Past Workshops

EDF
Lyon, France

Northwest Hydraulic Consultants
Vancouver, BC

Stantec
Chicago, IL

AECOM
Morrisville, NC

SFDPW
San Francisco, CA

FERC
Portland, OR

Drexel University
Philadelphia, PA

 

WSP
Baltimore, MD

Wood Rodgers
Sacramento, CA

Moffatt & Nichol
San Diego, CA

Wood
Portland, OR

Northwest Hydraulic Consultants
Seattle, WA

Golder
Vancouver, BC

 

Parsons
Markham, ON

AECOM
Montreal, QC

Knight Piesold
Denver, CO

Alden Lab
Holden, MA

GEI
Portland, ME

O’Brien & Gere
East Norriton, PA

 
► FLOW-3D European Users Conference 2019 Speakers Announced
    7 May, 2019

SANTA FE, NM, May 7, 2019 - Flow Science, Inc. has announced the speakers for its FLOW-3D European Users Conference 2019, being held on June 3-5 at the Sheraton Diana Majestic in Milan, Italy. The conference will be co-hosted by XC Engineering, the official distributor of FLOW-3D products in Italy and France.

Users from the commercial and academic sectors will present original, innovative work featuring applications of FLOW-3D or FLOW-3D CAST in the fields of water & environmental engineering, automotive and aerospace design, biotechnology, additive manufacturing, and more. Topics include design optimization of a storm surge barrier; air valve effectiveness in a potable water pipeline; thermal management of internal combustion engines; analysis of aluminum casting design for automotive lightweighting; cryogenic tank sloshing; and microfluidic flow-cell design. The diverse lineup of presenters includes speakers from FCA, Roche Diagnostics GmbH, Federal-Mogul Nuremberg GmbH (a Tenneco Group company), ArianeGroup GmbH, Plastic Omnium, Arcadis, Mott MacDonald, and Otto von Guericke University Magdeburg. Flow Science, Inc. senior technical staff will also present on the latest features and developments in the FLOW-3D product suite. A complete list of speakers and topics is available at https://www.flow3d.com/speakers-at-the-flow-3d-european-users-conference-2019/.

This year’s conference offers two tracks: Water & Environmental, and Metal Casting. Advanced training in Workflow Automation and Optimization will take place on Monday, June 3. On Tuesday, June 4, conference attendees are invited to a complimentary tour of the city of Milan, sponsored by Protesa S.p.A.

More information about the conference, including online registration, is available at https://www.flow3d.com/flow-3d-european-users-conference-2019/.

Mentor Blog top

► Event: What’s New in Simcenter Flotherm and Simcenter Flotherm XT 2019.1
    9 Jul, 2019

A web seminar on the Simcenter Flotherm and Simcenter Flotherm XT 2019.1 software releases covers topics including faster reduced order models for transient thermal simulation (BCI-ROM), improved electro-thermal modeling accuracy, optimization linkages to HEEDS software, and more.

► On-demand Web Seminar: What's New in Simcenter Flomaster 2019.1
  27 Jun, 2019

In this web seminar you will learn about the new features in Simcenter Flomaster 2019.1.

► Event: Marine Craft Dynamic Motion with Simcenter Flomaster
  24 Jun, 2019

Come and learn about the new dynamic node elevation capabilities introduced in Simcenter Flomaster 2019.1 in marine applications.

► Blog Post: Article Roundup: Mentor’s New AI Capabilities, a High Schooler’s Engineering Crash Course, Easing Aerospace Electrical Compliance & the Future of Electronics Manufacturing
  22 Jun, 2019
This roundup covers the following articles:

  • Mentor Extends AI Footprint
  • How a High-Schooler Trying to Save Water Got a Crash Course in Engineering
  • AI and ML fuel Catapult and Calibre updates
  • Siemens’ Load Analyzer App Reduces Aerospace Electrical Compliance & Certification Risk
  • Fraunhofer Future Packaging Line at SMT Connect 2019: A Manufacturing Solution is Realized

Mentor Extends AI Footprint (SemiWiki): Mentor has expanded its artificial
► Technology Overview: Faster transient thermal simulation with Simcenter Flotherm BCI-ROM models
  12 Jun, 2019

Solve electronics cooling transient design problems up to 40,000 times faster by using Simcenter Flotherm-generated Boundary Condition Independent Reduced Order Models (BCI-ROMs). BCI-ROMs generated from full 3D CFD conduction models in Simcenter Flotherm release 2019.1 can help you rapidly explore the transient behavior of your electronics product. BCI-ROMs are mathematically guaranteed to respond correctly to any transient power variation and in any thermal environment or boundary condition. Because BCI-ROMs can be solved orders of magnitude faster, they make it feasible to rapidly analyze huge numbers of power loading conditions, thermal environments, and power control strategies. BCI-ROMs can be created for many application types, including vehicle power electronics, consumer electronics like smartphones, semiconductor packaging designs, and more.
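The speed-up comes from model order reduction: the detailed 3D conduction model is distilled into a small linear state-space system that can be integrated almost instantly for any input power profile. As a rough, generic illustration only (a toy three-state model with made-up coefficients, not Simcenter's actual BCI-ROM format or API), solving dx/dt = Ax + Bu(t) for an arbitrary power input u(t):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy reduced thermal model: dx/dt = A x + B u(t), T_rise = C x
    # (3 states standing in for millions of CFD cells; values are illustrative)
    A = np.array([[-0.50,  0.10,  0.00],
                  [ 0.10, -0.30,  0.05],
                  [ 0.00,  0.05, -0.20]])  # thermal coupling/decay between states
    B = np.array([1.0, 0.0, 0.0])          # heat input enters at the first state
    C = np.array([0.0, 0.0, 1.0])          # observe temperature rise at a monitor point

    def power(t):
        # Any transient power profile may be applied after the ROM is built
        return 5.0 if t < 10.0 else 1.0    # watts: a step-down at t = 10 s

    sol = solve_ivp(lambda t, x: A @ x + B * power(t),
                    t_span=(0.0, 60.0), y0=np.zeros(3), max_step=0.5)
    temperature_rise = C @ sol.y           # monitor-point response over sol.t
    print(f'rise at t = 60 s: {temperature_rise[-1]:.3f} K')

Because the reduced system is tiny, sweeping hundreds of power profiles or boundary conditions costs seconds rather than hours, which is the point of the approach.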

► Technology Overview: Thermal simulation using Simcenter Flotherm & Simcenter Flotherm XT during product design to optimize thermal performance
  12 Jun, 2019

George Chiriac explains how Simcenter Flotherm and Simcenter Flotherm XT help design engineers at Continental Automotive overcome challenges and improve and optimize products through simulation while meeting customer requirements.

Tecplot Blog top

► Learning Tecplot 360 Just Got Easier! 
  23 Apr, 2019

Tecplot 360 Basics Certification

Earning your Tecplot 360 Certification helps build engineering skills and shows your knowledge of this popular visualization and analysis tool. Listing your certification on your resume, CV and LinkedIn profile can help future employers match your skills to their job requirements.

To earn your certification, use a Tecplot 360 free trial or upgrade your licensed software; we recommend using the newest version of Tecplot 360.

The training materials include:

  • A video that walks you step-by-step through each skill you need to learn. You can watch the video on YouTube or download the MP4 to watch offline.
  • A training manual in PDF format with the instructions from the video, if you prefer to follow a written guide.
  • The datasets used in the video and manual, so you can follow along. You are welcome to use your own datasets if you prefer.

Click the button below for all instructions and materials you need to get certified.

Tecplot 360 Basics Certification

The training covers Tecplot 360 Basics:

  • Compatible Tecplot 360 data formats
  • Loading data
  • Working with zones
  • Slices and streamtraces
  • Pages and frames
  • Plot types
  • Derived objects and calculations
  • Image formats, animations and exporting

Once you complete the test and upload your plot, we will review your work and send you your Certificate! Note that certification is given at the sole discretion of Tecplot, Inc. and is based on evaluation of your submitted answers and plot.

 

More Ways to Learn Tecplot 360

Are you already Tecplot 360 Certified and looking to learn more advanced topics? Here are more ways to learn Tecplot 360.

Tecplot 360 Getting Started Guide

A complete set of documentation is included with your Tecplot 360 installation, and links can also be found on our website Documentation. Four tutorials that get progressively more advanced are in our Getting Started Manual. The tutorials include:

  1. External Flow – Using the Onera M6 wing model, this tutorial covers loading the data, producing a basic plot, slicing, streamtraces, isosurfaces, probing, and comparing simulated and experimental data (including normalizing the data).
  2. Understanding Volume Surfaces – The DuctFlow dataset is used in this tutorial as an example of how Tecplot 360 renders volume surfaces using Surfaces to Plot.
  3. Transient Data – A wind turbine dataset with 127 time steps helps you understand how transient (time-based) data is structured and how to produce animated contour plots, extract data over time for analysis, and calculate and visualize additional variables using the Tecplot 360 analysis tools.
  4. Finite Element Analysis – A transient FEA dataset of a connecting rod created with LSDYNA is used in this tutorial.

Datasets used in these examples are included with your installation of Tecplot 360, or they can be found in our Getting Started Bundle.

Getting Started Manual

Video Tutorials

Over the past few years, we have built up an extensive Video Library – 82 videos to date! Topics range from loading data to working with transient data, and everything in between. The videos are sorted by most current first, but there is a long list of categories to help you find the topic most interesting to you. Many of the videos have been transcribed if you prefer reading the tutorial as you work.

Tecplot 360 Video Library

Written Tutorials

Speaking of learners who prefer reading to watching videos, the Tecplot Blog contains a number of tutorials. The blog tutorials are in the “Tecplot Tip” category.

Browse our Tecplot Tips

Free Tecplot 360 Training

We offer one hour of free online training when you purchase a Tecplot 360 license, or when you download a Tecplot 360 Free Trial. The training can be tailored to meet your specific workflows, or we can walk you through the standard training modules – answering your specific questions along the way.

Download a Free Trial

Group Training – Onsite or Online

Last but not least, companies have found it very helpful to get all their engineers and scientists up-to-speed quickly on Tecplot 360. We can address your specific business challenges and cover designated topics. You provide the facilities and computers, we provide the expert instructor and in-depth materials. To find out more, please email us at support@tecplot.com or use the Contact Form.

For Tecplotters in Europe and Western Asia, our Tecplot Europe office holds Tecplot User Days for beginning through advanced users. These training sessions are free of charge! The schedule and registration forms are up on our Tecplot Europe Training page.

Tecplot Europe Training

The post Learning Tecplot 360 Just Got Easier!  appeared first on Tecplot.

► Python Multiprocessing Accelerates Your CFD Analysis Time
  17 Apr, 2019

PyTecplot and the Python multiprocessing module were the subject of our recent Webinar. PyTecplot already has a lot of great capability, and bringing in the Python multiprocessing toolkit allows you to accelerate your work and get it done even faster. This blog post answers some questions asked during the Webinar.

1. What is PyTecplot?

PyTecplot is a Python API for controlling Tecplot 360. It is not bundled with the Tecplot 360 installation: because it is a Python module, it must be installed separately into your Python environment. A 64-bit installation of Python 2.7, or Python 3.4 and newer, is required. All of this information is in our (very thorough) documentation.

PyTecplot Documentation
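PyTecplot is installed into your Python environment with pip (pip install pytecplot). A minimal batch-mode script looks something like this (the data file name is just a placeholder):

    import tecplot as tp

    tp.new_layout()                       # start from a clean state
    tp.data.load_tecplot('mydata.plt')    # placeholder: any Tecplot data file
    tp.export.save_png('mydata.png', width=1024)   # render the active frame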

2. What is Python multiprocessing?

Multiprocessing is a process-based Python “threading” interface. “Threading” is in quotes because it is not actually using multi-threading. It’s actually spawning separate processes. We encourage you to read more in the Python documentation, Python multiprocessing library.

In the Webinar we show a method for using the Python multiprocessing library in conjunction with PyTecplot to accelerate the generation of movies and images. The technique goes beyond image generation: you can extract information from your simulation data as well. We use a transient simulation of flow around a cylinder as the example, but have timings from several different cases.
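One way to structure such a script is sketched below (a minimal sketch following the webinar's general pattern; the file names and worker count are placeholders):

    import atexit
    import multiprocessing as mp

    def init_worker():
        # Each worker imports tecplot and so owns an independent session.
        import tecplot
        # Register session.stop so the session shuts down cleanly when the
        # worker exits (important on Linux; see question 5 below).
        atexit.register(tecplot.session.stop)

    def render(job):
        import tecplot as tp
        datafile, pngfile = job
        tp.new_layout()
        tp.data.load_tecplot(datafile)
        tp.export.save_png(pngfile, width=1024)
        return pngfile

    if __name__ == '__main__':
        # Placeholder file names: one data file per time step.
        jobs = [(f'step_{i:04d}.plt', f'frame_{i:04d}.png') for i in range(100)]
        pool = mp.Pool(processes=4, initializer=init_worker)
        pool.map(render, jobs)
        pool.close()   # let workers exit normally so atexit handlers run
        pool.join()

Each worker renders its share of the time steps independently, which is why the speed-up scales roughly with the number of processes until disk or license limits are reached.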

The recording and the scripts from the Webinar “Analyze Your Time-Dependent Data 6x Faster” can be found on our website.

Watch the Webinar

3. Is PyTecplot included in the package of Tecplot for CONVERGE?

Last year we partnered with Convergent Science, which makes a CFD code used heavily in internal combustion but applicable to many other areas. Under our partnership, if you buy CONVERGE, you get free access to Tecplot for CONVERGE. Tecplot for CONVERGE allows you to use PyTecplot, but only through the Tecplot 360 Graphical User Interface.

To have the capability of running PyTecplot in batch mode, as shown in the Webinar, you will need to upgrade to the full version of Tecplot 360.

Request a Quote

4. Does Tecplot 360 run well with other software like Star-CCM+?

Tecplot 360 does not have a direct loader for Star-CCM+. However, you can export from Star-CCM+ to CGNS, Ensight or Tecplot format, all of which can be read by Tecplot 360.

Tecplot 360 Compatible Solvers

5. When running PyTecplot in batch mode, is session.stop required to clean up the temporary files? Or can you just let the process exit?

Yes and no. We found that on Linux, the multiprocessing toolkit just terminates the process, resulting in a core dump. It is very important to call session.stop to avoid these core dump files.

6. What PyTecplot Resources Do You Have?

The post Python Multiprocessing Accelerates Your CFD Analysis Time appeared first on Tecplot.

► Predictive Ocean Model Helps Coastal Estuarine Research
    5 Apr, 2019

Jonathan Whiting is a member of the hydrodynamic modeling group at Pacific Northwest National Laboratory in Washington State. He has been a Tecplot 360 user since 2014.

PNNL and the Salish Sea Model

Pacific Northwest National Laboratory (PNNL) is a U.S. Department of Energy laboratory with a main campus in Richland, Washington. The PNNL mission is to advance the frontiers of knowledge by taking on some of the world’s greatest science and technology challenges. The lab has distinctive strengths in chemistry, earth sciences and data analytics and deploys them to improve America’s energy resiliency and to enhance national security.

Jonathan is part of the Coastal Sciences Division at PNNL’s Marine Sciences Laboratory. The hydrodynamic modeling group in Seattle, WA works primarily to promote both ecosystem management and the restoration of the Salish Sea and Puget Sound with the Salish Sea Model.

The Salish Sea Model is a predictive ocean modeling tool for coastal estuarine research, restoration planning, water-quality management, and climate change response assessment. It was initially created to evaluate the sensitivity of Puget Sound acidification to ocean and fresh water inputs and to reproduce hypoxia in the Puget Sound while examining its sensitivity to nutrient pollution, funded by the Washington State Department of Ecology. Now it is being applied to answer the most pressing environmental challenges in the greater Salish Sea region.

PNNL is currently in the first year of a three-year project to enhance the Salish Sea Model. The goals are to increase the model’s resolution and to make it operational, which means assuring the model runs on schedule and gets results that are continuously available to the public—including predictions a week or so ahead. This will allow for new applications such as the tracking of oil spills during response activities.

Jonathan has worked with the modeling team on several habitat restoration planning projects along the Snoqualmie and Skagit rivers in Washington’s Puget Sound region. Jonathan was responsible for post-processing model outputs into analytical and geospatial products to meet client expectations and to convey findings that aid project planning and stakeholder engagement.

The Challenge: Creating Consistent, High-Quality Visualization for Model Post-Processing

The hydrodynamics modeling group uses the Finite Volume Community Ocean Model (FVCOM) simulation code.

For the recent Skagit Delta Hydrodynamic Modeling project, a high-resolution triangular unstructured grid was created with 131,471 elements and 10 terrain-following sigma layers in the vertical plane. Post-processing was conducted on five time snapshots per scenario across 11 scenarios (including a baseline). Each file was about 55MB in uncompressed binary format.

The sheer quantity of plots was very challenging to handle, and it was important to generate clean plots that clearly conveyed results.

The Solution – Tecplot 360

Jonathan most often uses Tecplot 360 to generate top-down plots and videos that visualize parameters geospatially across an area. He then uses that visualization to convey meaningful project implications to his clients, who in turn use the products to inform program stakeholders and the public.

To handle the quantity of data Jonathan was working with, Tecplot 360 product manager Scott Fowler gave him a quick demonstration of Tecplot 360 and showed him how to use Chorus, the parametric design space analysis tool within Tecplot 360. Chorus allowed Jonathan to analyze a single dataset with multiple plots in a single view over time by using the matrix tool, easing the bulk generation of plots.

Tecplot support and development teams have been working closely with Jonathan, especially by adding new geospatial features to the software that enhance its automation and efficiency.

According to Jonathan, the key strengths in Tecplot’s software have been:

  • Ease of use
  • Availability of scripting to assist bulk processing
  • Variety of tools and features, such as georeferenced images

Using Tecplot 360 has allowed Jonathan to create professional plots that enhance the impact of their modeling work.

How Will Jonathan Use Tecplot In the Future?

Jonathan’s personal niche has become trajectory modeling, so he is also interested in using Tecplot to generate visuals associated with the movement of objects on the surface by using streamlines, velocity gradients, slices, and more. He intends to take a deeper dive into the vast capabilities of Chorus and PyTecplot in the future!


Tecplot 360’s latest geoscience-focused release, Tecplot 360 2018 R2, includes the popular FVCOM loader and has the ability to insert georeferenced images that put your data in context. Tecplot 360 will automatically position and scale your georeferenced Google Earth or Bing Maps images.

Learn more about how Tecplot 360 is used for geoscience research.

Try Tecplot 360 for Free

The post Predictive Ocean Model Helps Coastal Estuarine Research appeared first on Tecplot.

► Parallel SZL Output from SU2
    2 Apr, 2019

At the end of February 2019, I gave a presentation at the SIAM Conference on Computational Science and Engineering (CSE) in Spokane, Washington. I live in the Seattle area, and Spokane is reasonably close, so I decided to drive instead of fly. Unfortunately, the entire nation, including Washington state, was still in the grips of the dreaded “polar vortex.” The night before my drive to Spokane all of the mountain passes were closed due to heavy snowfall. They opened in time, but the drive was slippery and slow. I probably should have taken a flight instead! On the drive, I came up with this haiku…

Driving to Spokane
Snow whirlwinds on pavement
Must make conference!

Join the Tecplot Community

Stay up-to-date by subscribing to the TecIO Newsletter, events and product updates.

Subscribe to Tecplot

The Goal: Adding Parallel SZL Output to SU2

My presentation at the SIAM CSE conference was on the progress made in adding parallel SZL (SubZone Load-on-demand) file output to SU2. The SU2 suite is an open-source collection of C++ based software tools for performing Partial Differential Equation (PDE) analysis and solving PDE-constrained optimization problems. The toolset is designed with Computational Fluid Dynamics (CFD) and aerodynamic shape optimization in mind, but is extensible to treat arbitrary sets of governing equations such as potential flow, elasticity, electrodynamics, chemically-reacting flows, and many others. SU2 is under active development by individuals all around the world on GitHub and is released under an open-source license. For more details, visit SU2 on GitHub.

The Challenge: Building System Compatibility

We implemented parallel SZL output in SU2 using the TecIO-MPI library, available for free download from the TecIO page. In some CFD codes, such as NASA’s FUN3D code, each user site is required to download and link the TecIO library. However, in the case of SU2 we decided to include the obfuscated TecIO source code directly into the distribution of SU2. This makes it much easier for the user – they need only download and build SU2 and they have SZL file output available.

However, this did add some complications from our end.

The main complication is that SU2 is built using the GNU configure system whereas TecIO is built using CMake. We had to create new automake, autoconf, and m4 script files to seamlessly build TecIO as part of the SU2 build.

If you find yourself integrating TecIO source into a CFD code built with the GNU configure system, feel free to shoot me some questions – scottimlay@tecplot.com

Implementing Serial vs. Parallel TecIO

Once TecIO was building as part of the SU2 build, it was straightforward to get the serial version of SZL output working. SU2 already included an older version of TecIO, so we simply replaced those calls with the newer TecIO calls.

To get the parallel SZL output (using TecIO-MPI) working was a little more complicated. Specifically, it required knowing which nodes on each MPI rank were ghost nodes. Ghost nodes are nodes that are duplicated between partitions to facilitate the communication of solution data between MPI ranks. We only want the node to show up once in the SZL file, so we need to tell TecIO-MPI which nodes are the ghost nodes. In addition, CFD codes often utilize ghost cells (finite-element cells duplicated between MPI ranks) which must be supplied to TecIO-MPI. This information took a little effort to extract from the SU2 “output” framework.
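To make the ghost-node bookkeeping concrete, here is a toy sketch in Python (TecIO-MPI itself is a C/C++ library; this only illustrates the idea of flagging nodes duplicated across partitions so each is written to the file once):

    # Global node IDs held by two MPI ranks; the partition boundary nodes
    # (4 and 5) are duplicated on both ranks so they can exchange solution data.
    rank0_nodes = {1, 2, 3, 4, 5}
    rank1_nodes = {4, 5, 6, 7, 8}

    shared = rank0_nodes & rank1_nodes   # nodes living on the partition boundary

    # One common convention: the lowest rank owns a shared node, and every
    # higher rank reports it as a ghost so it appears only once in the output.
    rank1_ghosts = shared
    rank1_owned = rank1_nodes - rank1_ghosts
    print(sorted(rank1_ghosts), sorted(rank1_owned))   # [4, 5] [6, 7, 8]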

The first test case is the Common Research Model from the High-Lift Prediction Workshop.

How Well Does It Perform?

We now have a version of SU2 that is capable of writing SZL files in parallel while being run on an HPC system. The next obvious question: “How well does it perform?”

Test Case #1: Common Research Model (CRM) in High-Lift Configuration

The first test case is the Common Research Model from the High-Lift Prediction workshop. It was run with 3 grid refinement levels:

  • 10 million cells
  • 47.5 million cells
  • 118 million cells

These refinements allowed us to measure the effect of problem size on the overhead of parallel output. All three cases were run on 640 MPI Ranks on the NCSA Blue Waters supercomputer. The results are shown in the following table:

                            10M Cells   47.5M Cells   118M Cells
  Time for CFD Step         17.6 sec    70 sec        88 sec
  Time for Restart Write    6.1 sec     10.7 sec      31.4 sec
  Time for SZL File Write   43.9 sec    171 sec       216 sec

For comparison, we include the cost of advancing the solution a single CFD time step and the cost of writing an SU2 restart file. It should be noted that the SU2 restart file contains only the conservative field variables – no grid variables and no auxiliary variables – so far less writing is involved in creating the restart file. The cost of writing the SZL file is roughly 2.5 times the cost of a single time step, so if you write the SZL file infrequently (every 100 steps or so), this overhead is fairly small (about 2.5%).

Test Case #2: Inlet

The second test case is an inlet like you might find on a next-generation jet fighter. It aggressively compresses the flow to keep the inlet as short as possible.

The inlet was analyzed using 93 million tetrahedral cells and 38 million nodes. As with the CRM case, the inlet case was run on the NCSA Blue Waters computer using 640 MPI ranks.

SU2 takes 74.7 seconds to increment the inlet CFD solution by one time-step and 31 seconds to write a restart file. To write the SZL plot file requires 216 seconds – 2.9 times as long as a single CFD time step.

Availability

The parallel SZL file output is currently in the pull-request phase of SU2 development. Once it is accepted it will be available in the Develop branch on GitHub. On occasion (I’m told every six months to a year), the develop branch is merged into the master branch. If you are interested in trying the parallel SZL output from SU2, send me an email (scottimlay@tecplot.com) and I’ll let you know which branch to download.

Better yet, subscribe to our TecIO Newsletter and we will send you the updates.

Subscribe to Tecplot


Scott Imlay
Chief Technical Officer
Tecplot, Inc.

The post Parallel SZL Output from SU2 appeared first on Tecplot.

► Improving TecIO-MPI’s Parallel Output Performance
  20 Mar, 2019
TecIO, Tecplot’s input-output library, enables applications to write Tecplot binary files. Its parallel version, TecIO-MPI, enables MPI parallel applications to output Tecplot’s newer binary format, .szplt.

TecIO-MPI was first released in 2016. Since then, we’ve received feedback from some customers that its parallel performance for outputting unstructured-grid solution data needed improvement. So we embarked on an effort to understand and eliminate bottlenecks in TecIO-MPI’s execution.

Customer reports 15x speed-up in writing data from FUN3D when using the new TecIO-MPI library!
 
Learn more and download the TecIO Library

Understanding What Customers are Seeing

To understand what our customers were seeing, we needed to be able to run our software on hardware representative of what our customers were running on, namely, a supercomputer. The problem is that we don’t own one. We also needed parallel profiling software that would help us identify bottlenecks, or “hot spots,” in our code, including in the MPI inter-process communication. We made some progress in Amazon EC2 using open-source profiling software, but had greater success using Arm (formerly Allinea) Forge software at the National Center for Supercomputing Applications (NCSA).

NCSA has an industrial partners program that provides access to their iForge supercomputer and a wide array of open source and commercial software, including Arm Forge. iForge has over 2,000 CPU cores and runs IBM’s GPFS parallel file system, so it was a good platform to test our software. Arm Forge, specifically its MAP profiling tool, provided the ability to easily identify hot spots in our software, and to drill down through the layers of our source code to see exactly where the performance problems lay.

An additional application to NCSA also gave us access to their Blue Waters petascale supercomputer, which features about 400,000 CPU cores and the Lustre parallel file system [1]. This gave us the ability to scale our testing up to larger problems, and to test the performance on another popular parallel file system.

Arm MAP with Region of Time Selected

Performance Improvement Results

Using iForge hardware and Arm Forge software, we were able to identify two sources of performance problems in TecIO-MPI:

  • Excessive time spent in writing small chunks of data to disk.
  • Too much inter-process exchange of small chunks of data.

Consolidating these small writes and exchanges has led to an order-of-magnitude reduction in output time. Testing with three different computational fluid dynamics (CFD) flow solvers indicates output times, for structured or unstructured grids, roughly equal to the time required to compute a single solver iteration.
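The first bottleneck is easy to reproduce outside of TecIO. Here is a toy Python comparison on a local file system (not TecIO's actual code, just the same principle of consolidating many small writes into fewer large ones):

    import time

    chunks = [b'x' * 64 for _ in range(100_000)]   # many tiny pieces of data

    t0 = time.perf_counter()
    with open('small_writes.bin', 'wb', buffering=0) as f:
        for chunk in chunks:
            f.write(chunk)                 # one tiny unbuffered write per chunk
    t_small = time.perf_counter() - t0

    t0 = time.perf_counter()
    with open('one_write.bin', 'wb') as f:
        f.write(b''.join(chunks))          # consolidate in memory, write once
    t_big = time.perf_counter() - t0

    print(f'tiny writes: {t_small:.2f} s, consolidated: {t_big:.2f} s')

On a parallel file system like GPFS or Lustre, where each write may involve network round trips, the penalty for many small operations is even larger than on a local disk.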

We will continue to collect feedback from users with an eye to additional improvements as TecIO-MPI is implemented in additional solvers. We invite you to provide us with your own experience!

Take our TecIO Survey

How to Obtain TecIO Libraries

TecIO and TecIO-MPI, along with instructions in Tecplot’s Data Format Guide, are installed with every Tecplot 360 installation.

It is recommended, however, that you obtain and compile source for TecIO-MPI applications, because the various MPI implementations are not binary-compatible. Source for TecIO and TecIO-MPI, and the Data Format Guide, are all available via a My Tecplot account.

For more information and access to the TecIO Library, please visit:

TecIO Library

[1] This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.



By Dr. David E. Taflin
Senior Software Development Engineer | Tecplot, Inc.

Read Dave’s employee profile »

The post Improving TecIO-MPI’s Parallel Output Performance appeared first on Tecplot.

► Calculating a New Variable
  11 Mar, 2019

Data Alteration through Equations

Engineers using Tecplot 360 often need to create a new variable based on a numeric relationship among variables already loaded into Tecplot.

Calculating a new variable is simple. To start, load your data into Tecplot 360. In this example, we loaded the VortexShedding.plt data located in the Tecplot 360 examples folder.

Choose Alter -> Specify Equations from the Data menu.
Alternatively, click the equations icon.

You will see the Specify Equations dialog shown at right.

We will now calculate the difference in a variable between two zones. In the Zones to Alter list, click All.

Initialize the new variable T(K)Difference, by typing in the Equation(s) window:

{T(K)Difference} = 0

Click Compute

Now compute the difference in variable T(K) between zones 3 and 2 (that is, T(K) in zone 3 minus T(K) in zone 2) and assign it to T(K)Difference. You can do this for any two zones that have a similar structure.

Select the zones you want to receive the difference value. Type the following equation using the exact variable name from the Data Set Information dialog.

{T(K)Difference} = {T(K)}[3]-{T(K)}[2]

Click Compute

The new variable T(K)Difference is now available for plotting. Open the Data Set Information dialog from the Data menu and view the new variable T(K)Difference.

Note that changes made to the dataset in the Specify Equations dialog are not made to the original data file. You can save the changes by saving a layout file or by writing the new data to a file. Saving a layout file will keep your data file in its original state, but will use journaled commands to reapply the equations.

Learn more in Chapters 20 and 21 of the Tecplot 360 User Manual.
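If you need to repeat this alteration across many files, the same equations can be scripted with PyTecplot. A minimal sketch (the file path is a placeholder for wherever the examples folder lives on your machine):

    import tecplot as tp

    tp.new_layout()
    tp.data.load_tecplot('VortexShedding.plt')   # placeholder path to the dataset

    # Same two steps as the dialog: initialize the variable, then compute
    # the zone difference (applies to all zones by default).
    tp.data.operate.execute_equation('{T(K)Difference} = 0')
    tp.data.operate.execute_equation('{T(K)Difference} = {T(K)}[3] - {T(K)}[2]')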


This blog was originally published in 2013 and has been updated and expanded.

The post Calculating a New Variable appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► Hexagon’s race to the Smart Factory
  17 Jul, 2019

What if you could reimagine your product design and manufacturing processes? Start from scratch, rather than have to deal with legacy thinking, work processes and tools? Come up with new ways of working in all areas of your business?

That’s what Hexagon wants us all to do: imagine a Smart Factory where design, simulation, machining/making and validation are all connected. Each step generates and consumes data, with the ultimate goal of a closed-loop feedback system that aims to ensure that all designs are manufacturable. That all manufactured items meet quality standards. That the manufacturing line is designed to be as energy and material efficient as possible. And that the data on the line’s quality and efficiency is fed back into the design of the next product iteration.

Sound like a pipe dream? Perhaps, but Hexagon believes it’s putting together the pieces to help its clients move incrementally closer to this vision.

Hexagon gave us a glimpse into this on the HxGN LIVE show floor last month in Las Vegas, using an automobile hood as an example. The part was designed in CATIA and then manufactured out of titanium on a 3D printer. Spoiler: someone hadn’t done their homework, and the part sagged during fabrication where the weight of the metal pulled it out of shape. A Hexagon metrology scanner measured the part and, by comparing the as-is to the as-designed, found the problem. A Smart Factory app built on Hexagon’s Xalt IoT platform notified a human and made a slight tweak to the printer to ensure that the next part would be made differently to avoid the problem, thus closing the loop between design and manufacturing quality.

Read this quickly and you’re left with more questions than answers: How does the app know what to adjust? What if the flaw is so minor as to be OK? Do we need to take springback into account? Who makes car hoods out of titanium, anyway? Ignore all of that—it was a demonstration!

The strategic concept holds: as soon as we know we have a problem, we should act. Humans need to be alerted to the problem so they can figure out a fix. If we can pre-program the fix, we can shorten the time to a solution. And if we can identify the root causes of the problem, perhaps we can prevent it.

Central to Hexagon’s value proposition in the mechanical manufacturing world (we’ll get to process in a subsequent post) is simulation in all its forms: traditional CAE, manufacturing process simulation, scenario building and so on. A key part of that vision was fulfilled when Hexagon acquired MSC Software in 2017.

Many people still think of MSC as synonymous with Nastran, the solver, and Patran, the pre- and post-processor. Maybe a little Adams for the multi body dynamics enthusiasts. The portfolio today is a lot broader and includes linear, nonlinear, CFD, controls, materials, forming, fatigue and many more solvers, along with data management, workflow and other solutions, as shown in the slide I cribbed from MSC’s keynote presentation. (Missing is AMendate, acquired after HxGN LIVE.)

MSC packages everything in flavors that appeal to experts, novices and the in-between, and sells via direct and indirect channels. And, increasingly, Hexagon is using these solvers as the cornerstones of solution sets that address specific problems, such as autonomous vehicle development and additive manufacturing.

During his keynote, MSC CEO Paolo Guglielmini pointed out that MSC is working to improve each individual product and to support what he and his bosses at Hexagon consider essential initiatives like the Smart Factory. This slide, also from his keynote, gives the highlights:

His overall thesis: to reach the end-goal of a smart factory, design must be smarter at each step—whether of the product, its production process or its eventual support after sales. Mr. Guglielmini says that MSC and Hexagon want to extend the tools that engineers use, to give them as much info as possible to inform each design choice. And that extends beyond MSC’s legacy of replacing bend-and-break testing with virtual methods. From his slide, you can see that physics is definitely part of the equation, but that other aspects like design-to-manufacture, costing and compliance also factor in.

Speaking to each element in the slide above, from the upper right,

Mr. Guglielmini feels that a red-light/green-light worse-better simulation result isn’t sufficient. Users expect as accurate a simulation result as possible, as quickly as is reasonable. MSC continues to invest in building out (or acquiring) solvers with high accuracy.

Too, the world doesn’t work in only one physical dimension; the MSC CoSim Engine couples different solvers in a multi-physics framework. The first release connects models in Adams, Marc and scFLOW (CFD). Out any day now is Apex Framework, a front end that will allow chained simulations for all solvers, and coming is a second initiative that will allow true co-simulation, where multiple solvers talk to one another in time-step increments.

But it’s all pointless if the tools are too difficult to use. MSC continues to focus on improving usability and productivity across its products.

And, before you ask, MSC is also connecting MSC Apex to Adams, Marc and Actran, in addition to Nastran.

“Design to manufacture” is central to the Smart Factory concept, and something MSC believes sets it apart from other CAE suppliers. Tied to the next bubble, about costing, the combination of material definition via MaterialCenter, Hexagon Forming Technologies (FTI) sheetmetal costing, and MSC Simufact’s costing for other production processes leads Hexagon to believe that it can close the pre-production loops in the Smart Factory concept. MSC calls this “simulation at the point of design”— making design choices that take into account costing, manufacturability, and other criteria that haven’t typically been part of the simulation task.

The sustainability bubble ties back to the main theme of HxGN LIVE and to Hexagon CEO Ola Rollén’s keynote. From MSC’s more specific perspective, this gets back to material choices and trade-offs, and to knowing how a particular item is sourced. To that end, MSC is working with iPoint GmbH to incorporate materials management and compliance solutions, and with Purdue University on optimizing composite hybrid mold manufacturing.

The end-goal with all of this is improved productivity. Not just in the simulation group, as used to be the case, but in an enterprise setting. Mr. Guglielmini spoke of AI-powered simulations that take prior results into account to accelerate current runs, and of using and creating more reduced order modeling techniques so users can more quickly focus on only the most relevant scenarios and design options, speeding up the simulations.

Faster CAE, earlier CAE, decisions that take into account more types of information — all leading to more informed designs.

The bottom line? It’s true: we gather and then squander far too much data. We’re beyond mechanizing routine tasks, and can now take that automation to another level by connecting and using all of the tools available to us. If we know that the titanium hood never does work out, given our production capability, how can we fix it? Can we change to another material? A different production process or 3D printer? What are the cost, quality and regulatory or sustainability implications of any of these alternatives?

Simulation will always be an important mechanism for closing the loop between intention and reality. MSC continues to focus on its core but is expanding that to new use cases and technology types. The question is, are its customers ready? I’m not sure they are, but they will have to get ready quickly or risk losing to more agile competitors who are (or almost are) there.

One last thing: I don’t know why I never internalized this before HxGN LIVE, but Hexagon is a manufacturer. They make metrology devices and are building a new factory in Hongdao, China that will be a proof-point for their Smart Factory vision. (Its construction is being executed via Hexagon’s enterprise construction solution, HxGN SMART Build). I am told that Hongdao will be a Smart Factory showcase, demonstrating how data-driven closed-loops lead to better metrics across the board. It’s scheduled to open in something like 9 months and I’m looking forward to seeing Hexagon’s Smart Factory in action.

Note: Hexagon graciously covered some of the expenses associated with my participation in the event but did not in any way influence the content of this post. The cover image is courtesy of Hexagon’s photo team.

The post Hexagon’s race to the Smart Factory appeared first on Schnitger Corporation.

► Look to the moon, thank an engineer
  15 Jul, 2019

This week marks the 50th anniversary of the Apollo 11 landing on the moon. We all know about the incredible bravery of the astronauts to risk everything to go there and (perhaps not) come home–but amidst all the hoopla, spare a thought for the engineers who made it all possible. From designing the craft, to programming the navigation and communications systems, to figuring out if the launch fireball would blow up the spectators, to managing thousands of other details, engineers made the space program happen.

This very cool downloadable book, from NASA, gives one perspective on how humankind got to the moon, if you’re interested. And here, from the New York Times, is a wonderful picture gallery to commemorate the event.

The engineers of the space program inspired a whole generation of kids to become engineers, too. Yes, the Apollo program gave us Tang and Space Ice Cream but also, so much more. The Jet Propulsion Lab says that the computer mouse, water purification tech and small portable computers, among other things, are a direct result of problem solving for the space program.

So this week, tell a kid if Apollo influenced you to become an engineer, software programmer, mathematician, scientist or other techie type — and help them get through all of the myths and legends built up over 50 years to see this as the awesome feat of engineering that it also is.

Me? I’m in awe of the science, engineering and sheer “yes, we will” attitude that got humans to the moon. I’ve been so honored to meet several of the people involved and have watched just about every movie and documentary and read every book — and will do some more of that this week. Given the current “facts are optional” environment, I’m worried that not marking events like this will cause us to go backwards; we might retain the facts but forget the mechanisms to find new facts.

The cover image is from NASA, of astronaut Neil Armstrong and the rest of the crew heading to the launch of Apollo 11 on 16 July, 1969. There are so many amazing photos; I chose this one because it shows that many people were involved in Apollo, not only the astronauts.

The post Look to the moon, thank an engineer appeared first on Schnitger Corporation.

► AspenTech acquires IoT and AI companies
  12 Jul, 2019

Artificial intelligence, AI, means many different things, depending on the industry, use case and technological sophistication of the user. AspenTech makes chemical process design software as well as solutions that help process plants manage their assets more intelligently. Now, it’s getting into AI and data visualization as part of a bigger push into asset management and the Internet of Industrial Things.

The company announced two acquisitions today. The first is Mnubo, which makes purpose-built AI and analytics platforms that enable companies to assemble and deploy AI-driven IoT applications at enterprise scale. AspenTech believes Mnubo’s technology will “accelerate the realization of AspenTech’s vision for the next generation of asset optimization solutions that combine deep process expertise with AI and machine learning”.

AspenTech also made public that it acquired Sabisu, maker of an enterprise visualization and workflow solution, this past June.

The company says the combo of Sabisu and Mnubo will enable it to deploy AI-powered applications and to visualize and analyze vast quantities of data, and to embed these technologies into existing and future products. “By combining first principle engineering models and deep process expertise with AI capabilities, these solutions will enable the automation of knowledge and data-driven decision-making for continuous improvement across the design, operation and maintenance lifecycle of industrial assets”, according to the press release.

Why do this? CEO Antonio Pietri says, “our customers need to yield higher outputs and drive higher efficiencies with existing assets … AI offers a significant competitive advantage in managing operations to the limits of performance without compromising safety. By bringing the deep domain expertise of AspenTech together with Mnubo AI-driven IoT expertise and Sabisu visualization, we can deliver innovation that helps our customers drive greater value from their existing data at scale. The actionable insights from AI-powered applications will help AspenTech customers to achieve a truly smart enterprise.”

AspenTech is spending Canadian $102 million (about US$80 million) for Mnubo. That acquisition is expected to close within the next five business days. Financial details for the Sabisu acquisition were not disclosed.

The post AspenTech acquires IoT and AI companies appeared first on Schnitger Corporation.

► Autodesk invests in actual construction startup
  10 Jul, 2019

You know that Autodesk invests in tech startups. Did you know that they also invest in makers of actual, physical things? Apparently, they do!

Autodesk just announced that it has increased its investment in Factory_OS, a “volumetric modular construction start-up” that aims to “displace conventional construction practices for affordable multi-unit residential properties”. That seems to be PR-speak for making prefabricated homes, somewhere away from the construction site where factory-like automation can be used to build better, faster and more economically, and then trucking them to their permanent locations, where they are assembled.

Unlike other Autodesk investments that are purely technological, building new products for simulation, design, construction planning or execution, or more typical CAx/PLMish applications, this one seems to be about the physical buildings.

Factory_OS will use its new funds to expand its Factory Floor Learning Center to develop new techniques for industrialized construction and to build what Autodesk calls a “Rapid Response Factory”, where Factory_OS will ultimately fabricate housing as quickly as possible to meet demand after a natural disaster or other emergency.

Factory_OS says it already builds “well-designed, tech-ready multifamily homes 40% faster [that are] 20% less expensive than conventional housing … We do this by building the bulk of the home off-site, right down to the toilet paper holders. Then we ship it and assemble it on-site”.

In announcing this investment, Autodesk CEO Andrew Anagnost said that “Factory_OS is a pioneer that is revolutionizing the approach to modularized homebuilding and making the dream of affordable housing in cities, a reality. We’re honored to support their mission, in collaboration with Citi [the other investor in this round] of giving families safe and affordable homes to call their own. I have no doubt our continued collaboration will serve as a springboard to addressing the growing housing crisis nationwide.”

Autodesk’s press release didn’t say how much this round totaled, but I did find an SEC filing that said that Factory_OS raised just under $23 million in May/June.

Image courtesy of Factory_OS.

The post Autodesk invests in actual construction startup appeared first on Schnitger Corporation.

► AEC Quickie: NEMT sells, RIB buys
    8 Jul, 2019

You might remember that Nemetschek co-owned a company called DocuWare, which is focused on document management and the AEC workflows that surround project documents. The companies have been intertwined for at least a decade but now DocuWare is being acquired by Ricoh (yes, that Ricoh — further evidence that AEC acquisitions are hot hot hot) and, as part of that deal, Ricoh bought out Nemetschek’s 22% stake in DocuWare. The companies declined to specify how much Nemetschek stands to gain from the sale, except to say that it “will lead to a book profit.” That, plus anticipated revenue growth that trickles down to the bottom line, will lead to “an additional one-off rise in the EPS of about 40% compared to the previous year’s figure.”

Nemetschek says that “DocuWare solutions are successfully integrated in some of the brands of Nemetschek Group such as Nevaris and Crem Solutions. Following the sale of the interest, the close cooperation between DocuWare and the Nemetschek brands will continue”.

Bottom line? Execution and the strength of the partnership between Ricoh/DocuWare and Nemetschek. I wouldn’t read too much into the one-time profit; the real issue for customers is how the sale affects DocuWare’s 600-odd resellers who support joint customers, and how the integrated components are maintained and enhanced. It is in everyone’s best interest to continue the strong working relationship …

Staying in AEC, RIB Software, best known for its iTWO platform for the construction industry, announced that it is acquiring 70% of CCS in South Africa for $31.5 million, which the company reports is an 8.5 EBIT/DA multiple. For you multiple junkies, RIB says that CCS, which is privately held, “generat[ed] 13.6 Million+ USD ARR revenue (more than 18 Million USD revenue in total forecasted for 2019) with high profitable EBIT/DA margin of ~30% in a high potential and fast-growing market.”

CCS’ main product is Candy, a cost estimating and project control solution. I am periodically asked about Candy and, a couple of years ago, spoke with a few users who had only good things to say about the discipline that using it imposed on their businesses — and that it led to real project benefit. I am not familiar with CCS’ BuildSmart, but RIB says it is a “complete construction solution similar to an iTWO 4.0 concept for the African, Middle East and other select markets.”

And that’s what’s so interesting about this deal: RIB says that “50% of the company’s revenue comes out of Africa and around 30% out of Middle East”. (The remaining 20% seems to be opportunistic: from the U.K., Portugal, India, Australia and New Zealand.)

There’s such potential for PLMish solutions in Africa because of demographic shifts. Young populations, booming cities, but not much money so efficiency is paramount. According to its press release, “RIB wants to support Africa to develop cities and infrastructure for the Gen Z [Residents under 20, about 50% of the African population, according to RIB]. With the investment in CCS and through the partnership with the leading IT-Services provider in Africa, the EOH Group, we will provide comprehensive solutions to our clients in Africa like we do in Australia and other markets like German speaking or Spanish speaking markets.” The question becomes how many of CCS’ users can be migrated to/sold on iTWO.

So. Two very different deals that highlight the jockeying for position that’s going on in AEC. Both Nemetschek and RIB are riding the BIM wave that will only accelerate as new workers and their technology expectations enter an AEC workforce that has to cope with demand for more and better infrastructure.

The post AEC Quickie: NEMT sells, RIB buys appeared first on Schnitger Corporation.

► Schnitgercorp.com update!
    5 Jul, 2019

Hello — and thanks for checking in. We apologize if you experienced errors when using the website earlier today; we were working behind the scenes to make the site load faster, search more quickly and be more secure. We think we’ve squashed all bugs and put the gremlins back in their box but please let us know if you run into any issues!

The post Schnitgercorp.com update! appeared first on Schnitger Corporation.

