
CFD Blog Feeds

Another Fine Mesh

► This Week in CFD
  19 Apr, 2019
This week’s roundup of CFD news is full of even more things to read including the latest installment of a series comparing different CFD software and the latest from an NSF funded effort to create a new CFD software infrastructure. … Continue reading
► This Week in CFD
  12 Apr, 2019
Today’s post should be called This Week in Applied CFD because it’s virtually all applications of CFD or meshing or visualization. This is actually a good thing because use of CFD to obtain engineering data is what it’s all about. … Continue reading
► Flow Visualization Showcase at AIAA Aviation – Enter by 19 April
  11 Apr, 2019
Entrants are sought for the 3rd Flow Visualization Showcase to be held at AIAA Aviation on June 18th in Dallas. If you have a cool visualization and are presenting work at Aviation (or have presented at any AIAA conference since … Continue reading
► This Week in CFD
    5 Apr, 2019
This week’s CFD news includes a must-watch video of an LES simulation, a very cool application of CFD to downhill skateboarding, and a good overview of where simulation now fits into all aspects of a design process. Of course, there’s … Continue reading
► I’m Josh Dawson and This Is How I Mesh
    4 Apr, 2019
One afternoon on an excruciatingly hot summer day in Texas, as I was taking phone call after phone call working as a 401(k) specialist at Fidelity Investments – I decided to go back to school to get my Masters of … Continue reading
► This Week in CFD
  29 Mar, 2019
This week’s CFD news includes articles and videos that share the work and thoughts of three people in the CFD world who are worth getting to know. Shown here are CFD results for turbulence downstream of a nozzle exit with … Continue reading

F*** Yeah Fluid Dynamics

► Combustion is ultimately a chemical reaction, and like any...
  19 Apr, 2019


Combustion is ultimately a chemical reaction, and like any chemical reaction, it requires the right balance of ingredients. The only way to completely exhaust the reaction is to have the perfect amount of fuel (i.e. stuff to burn) and oxidizer (i.e. oxygen). When those ratios don’t match, the reaction can slow down or even appear to end, but that doesn’t mean a fire’s gone out.

Firefighters face one of the dangerous consequences of this situation in the form of backdrafts. When a fire has been burning in a sealed container and exhausted its oxygen supply, it can get extremely hot even if the flames seem to have died down. When oxygen is added back by opening a door or window, the fire can react explosively, as the Slow Mo Guys demonstrate above. The good news is that backdrafts are relatively rare and there are steps you can take to avoid them. (Image and video credit: The Slow Mo Guys)

► As waves fold over and break, they trap air, creating bubbles of...
  18 Apr, 2019


As waves fold over and break, they trap air, creating bubbles of many sizes. The smallest of these bubbles can be only a few microns across and persist for long times compared to larger bubbles. When they burst, they create tiny droplets that can carry sea salt up into the atmosphere to seed rain. Understanding how these bubbles form and how many there are of a given size is key to predicting both oceanic and atmospheric behaviors. Numerical simulations like the one featured in the video above reveal the dynamic collisions that create these tiny bubbles and help researchers learn how to model the tiniest bubbles so that future simulations can be faster. (Image and video credit: W. Chan et al.)

► When we think of resonance, we often think of it in simple...
  17 Apr, 2019


When we think of resonance, we often think of it in simple terms: hit the one right note, and the wine glass will shatter. But resonance isn’t always about a one-to-one ratio between a driving frequency and the resonating system. Especially in fluid dynamics, we often see responses that occur at other, related frequencies.

One of the simplest places to see this is with a droplet bouncing on a bath of fluid. Above you see a liquid metal droplet bouncing on a bath of the same metal. At low amplitude, the pool surface moves at the driving frequency and a droplet bounces simply upon that surface, with one bounce per oscillation. Increase the amplitude, though, and the droplet’s bounce changes. It bounces twice – one large bounce and one small bounce – in the time it takes for the pool surface to go through one cycle. This is called period doubling because the bouncing occurs at twice the driving frequency.

Turn the amplitude up further, and the system undergoes another change. Faraday waves form on the surface. They resonate at half the driving frequency, and a droplet’s bouncing will sync up with the waves. That means the droplet returns to a one-to-one bounce with the waves, but the waves themselves are no longer reacting at the driving frequency. It’s this kind of complexity that makes fluid systems fertile grounds for studying paths toward chaos. (Image and research credit: X. Zhao et al.)

► One of the deadliest features of some volcanic eruptions is the...
  16 Apr, 2019




One of the deadliest features of some volcanic eruptions is the pyroclastic flow, a current of hot gas and volcanic ash capable of moving at hundreds of kilometers an hour and covering tens of kilometers. Since volcanic particles have high static friction, it’s been something of a mystery how the flows can move so quickly. Using large-scale experiments (top), researchers are now digging into the details of these fast-moving flows.

What they found is that the two-phase flow results in a pressure gradient that tends to force gases downward. This creates a gas layer with very little friction near the bottom of the pyroclastic flow (bottom), essentially lubricating the entire flow with air. This helps explain why pyroclastic flows are so fast and long-lived despite their inherent friction and the roughness of the terrain over which they flow. (Image and research credit: G. Lube et al.; video credit: Nature; submitted by Kam-Yung Soh)

► The coalescence of two water droplets happens so quickly, it’s...
  15 Apr, 2019




The coalescence of two water droplets happens so quickly, it’s essentially impossible to see, even with high-speed cameras. For this reason, researchers have turned to simulating molecular dynamics – essentially building computer programs that model the actions of all the molecules contained in the water droplets. Viewed this way, the very first contact between drops comes from thermal fluctuations – the random jumping of molecules across the separating gap. Once the bridge starts to form, it continues to grow, driven by thermal forces and opposed by surface tension. Eventually, this thermal regime gives way to the more familiar hydrodynamic one, where the bridge is large enough for flow to drive its growth. (Image credits: experiment - S. Nagel et al.; simulation - S. Perumanath et al.; research credit: S. Perumanath et al.; submitted by Rohit P.)

► Antlion larvae dig sandpits to catch their prey, and, according...
  12 Apr, 2019




Antlion larvae dig sandpits to catch their prey, and, according to a new study, they rely on the physics of granular materials to do so. The antlion digs in a spiral pattern (bottom), beginning from the outside and working its way inward. As it digs, it ejects larger grains and triggers avalanches that cause large grains to fall inward. This leaves the walls of the final pit lined with small grains, which have a shallower angle of repose and will slip out from under any prey that wander in. The subsequent avalanche will carry the victim to the antlion lying in wait at the center of the pit. (Image credits: antlion larva - J. Numer; antlion digging - N. Franks et al.; research credit: N. Franks et al.; submitted by Kam-Yung Soh)

Symscape

► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long-extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD Water Flow Simulation over an Idealized Plesiosaur: Streamline Vectors (illustration only, not part of the study)

read more

► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs or a mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment, researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD Water Flow Simulation over a Parvancorina: Forward Direction (illustration only, not part of the study)

read more

► Wind Turbine Design According to Insects
  14 Jun, 2017

One of nature's smallest aerodynamic specialists - insects - has provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (license: CC BY-SA 2.5, André Karwath)

read more

► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2-hour marathon barrier came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The bold and obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

read more

► Wind Tunnel and CFD Reveal Best Cycling Tuck
  10 May, 2017

The Giro d'Italia 2017 is in full swing, so how about an extensive aerodynamic study of various cycling tuck positions? You got it, from members of the same team that brought us the study of pursuit vehicles reducing the drag on cyclists.

Chris Froome Tuck: Stage 8, Pau/Bagnères-de-Luchon, Tour de France, 2016

read more

► Active Aerodynamics on the Lamborghini Huracán Performante
    3 May, 2017

Early on in the dash to develop ever-faster racecars in the 1970s, aerodynamics, and specifically downforce, proved a revelation. Following quickly on from the initial passive downforce initiatives were active aerodynamic solutions. Providing downforce only when needed (i.e., cornering and braking) and then reverting to a low-drag configuration was an ideal protocol, but it was short-lived due to rule changes in most motor sports (including Formula 1), which banned active aerodynamics. A recent exception to the rule is the highly regulated Drag Reduction System now used in F1. However, road-legal cars are not governed by such regulations, and so we have the gloriously unregulated Lamborghini Huracán Performante.

Active Aerodynamics on the Lamborghini Huracán Performante

read more

CFD Online

► NACA4 airFoils generator
  20 Feb, 2019
https://github.com/mcavallerin/airFoil_tools


Generates 3D models for airfoils.
Attached thumbnail: airfoilWinger.png
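The linked repository is not reproduced here, but as a rough illustration of what a NACA 4-digit generator computes, here is a minimal Python sketch of the standard 4-digit thickness and camber formulas (the function and variable names are my own, not those of airFoil_tools):
Code:
import numpy as np

def naca4(code="2412", n=100):
    """Unit-chord NACA 4-digit section from the standard thickness and camber formulas."""
    m = int(code[0]) / 100.0    # maximum camber
    p = int(code[1]) / 10.0     # position of maximum camber
    t = int(code[2:]) / 100.0   # thickness-to-chord ratio

    x = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, n)))  # cosine clustering
    yt = 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                    + 0.2843 * x**3 - 0.1015 * x**4)

    if m > 0:
        yc = np.where(x < p, m / p**2 * (2 * p * x - x**2),
                      m / (1 - p)**2 * ((1 - 2 * p) + 2 * p * x - x**2))
        dyc = np.where(x < p, 2 * m / p**2 * (p - x),
                       2 * m / (1 - p)**2 * (p - x))
    else:
        yc = dyc = np.zeros_like(x)
    theta = np.arctan(dyc)

    upper = (x - yt * np.sin(theta), yc + yt * np.cos(theta))
    lower = (x + yt * np.sin(theta), yc - yt * np.cos(theta))
    return upper, lower

upper, lower = naca4("0012")
A 3D wing surface, as in the attached picture, can then be built by lofting such sections along the span.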
► Use gnuplot to plot graph of friction_coefficient for T3A Flat Plate case in OpenFOAM
  13 Feb, 2019
Hello,
I am new to OF and gnuplot. I am working on the T3A flat plate case from the OpenFOAM tutorials. I was struggling to plot the friction coefficient from simulation and experimental data using the default plot file (creatGraphs.plt) provided with the tutorial. I looked on the internet for a solution but remained unsuccessful. After trying for some time, I got the graph right, so I decided to share it here for anyone else.

The trick is that we have to edit the default plot file provided with the case. :) This is what the default file looks like:
Code:
#!/bin/sh
cd ${0%/*} || exit 1                        # Run from this directory

# Test if gnuplot exists on the system
command -v gnuplot >/dev/null 2>&1 || {
    echo "gnuplot not found - skipping graph creation" 1>&2
    exit 1
}

gnuplot<<GNUPLOT
    set term post enhanced color solid linewidth 2.0 20
    set out "graphs.eps"
    set encoding utf8
    set termoption dash
    set style increment user
    set style line 1 lt 1 linecolor rgb "blue"  linewidth 1.5
    set style line 11 lt 2 linecolor rgb "black" linewidth 1.5

    time = system("foamListTimes -case .. -latestTime")

    set xlabel "x"
    set ylabel "u'"
    set title "T3A - Flat Plate - turbulent intensity"
    plot [:1.5][:0.05] \
        "../postProcessing/kGraph/".time."/line_k.xy" \
        u (\$1-0.04):(1./5.4*sqrt(2./3.*\$2))title "kOmegaSSTLM" w l ls 1, \
        "exptData/T3A.dat" u (\$1/1000):(\$3/100) title "Exp T3A" w p ls 11

    set xlabel "Re_x"
    set ylabel "c_f"
    set title "T3A - Flat Plate - C_f"
    plot [:6e+5][0:0.01] \
        "../postProcessing/wallShearStressGraph/".time."/line_wallShearStress.xy" \
        u ((\$1-0.04)*5.4/1.5e-05):(-\$2/0.5/5.4**2) title "kOmegaSSTLM" w l, \
        "exptData/T3A.dat" u (\$1/1000*5.4/1.51e-05):2 title "Exp" w p ls 11
GNUPLOT

#------------------------------------------------------------------------------
After editing it, it should look like the following:
Code:
# #!/bin/sh
# cd ${0%/*} || exit 1                        # Run from this directory

# # Test if gnuplot exists on the system
# command -v gnuplot >/dev/null 2>&1 || {
    # echo "gnuplot not found - skipping graph creation" 1>&2
    # exit 1
# }

# gnuplot<<GNUPLOT
    set term post enhanced color solid linewidth 2.0 20
    set out "graphs2.eps"
    set encoding utf8
    set termoption dash
    set style increment user
    set style line 1 lt 1 linecolor rgb "blue"  linewidth 1.5
    set style line 11 lt 2 linecolor rgb "black" linewidth 1.5

    time = system("foamListTimes -case .. -latestTime")

    # set xlabel "x"
    # set ylabel "u'"
    # set title "T3A - Flat Plate - turbulent intensity"
    # plot [:1.5][:0.05] \
        # "../postProcessing/kGraph/".time."/line_k.xy" \
        # u (\$1-0.04):(1./5.4*sqrt(2./3.*\$2))title "kOmegaSSTLM" w l ls 1, \
        # "exptData/T3A.dat" u (\$1/1000):(\$3/100) title "Exp T3A" w p ls 11

    set xlabel "Re_x"
    set ylabel "c_f"
    set title "T3A - Flat Plate - C_f"
    plot [:6e+5][0:0.01] \
		"/home/purnp2/OpenFOAM/purnp2-v1812/run/T3A/postProcessing/wallShearStressGraph/269/line_wallShearStress.xy" \
        u (($1-0.04)*5.4/1.5e-05):(-$2/0.5/5.4**2) title "kOmegaSSTLM" w l, \
        "/home/purnp2/OpenFOAM/purnp2-v1812/run/T3A/validation/exptData/T3A.dat" u ($1/1000*5.4/1.51e-05):2 title "Exp" w p ls 11
# GNUPLOT

#------------------------------------------------------------------------------
Please notice the following changes:
1. The original shell wrapper lines are commented out.
2. The backslash (\) before each dollar sign ($), which gnuplot uses to refer to a column in a data file, is deleted.
3. The full path to the data files is used instead of a path relative to the current working directory.
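For anyone who prefers post-processing in Python instead of gnuplot, the same conversions the plot file performs (U = 5.4 m/s, nu = 1.5e-05 m2/s, leading edge at x = 0.04 m, c_f = -tau_x/(0.5 U^2)) can be sketched as below. The paths and the time directory (269) are placeholders from my case; adjust them for yours:
Code:
import numpy as np
import matplotlib.pyplot as plt

U = 5.4       # freestream velocity [m/s], as in the tutorial plot file
nu = 1.5e-5   # kinematic viscosity [m^2/s]
x0 = 0.04     # leading-edge offset [m]

# line_wallShearStress.xy: column 1 is x, column 2 is the streamwise kinematic wall shear stress
wss = np.loadtxt("postProcessing/wallShearStressGraph/269/line_wallShearStress.xy")
Re_x = (wss[:, 0] - x0) * U / nu
cf = -wss[:, 1] / (0.5 * U**2)

# T3A.dat: column 1 is x [mm], column 2 is the measured c_f
exp = np.loadtxt("validation/exptData/T3A.dat")
Re_exp = exp[:, 0] / 1000.0 * U / 1.51e-5

plt.plot(Re_x, cf, label="kOmegaSSTLM")
plt.plot(Re_exp, exp[:, 1], "o", label="Exp T3A")
plt.xlim(0, 6e5); plt.ylim(0, 0.01)
plt.xlabel("Re_x"); plt.ylabel("c_f")
plt.legend()
plt.savefig("cf_T3A.png")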
► A generalized thermal/dynamic wall function: Part 4
    7 Feb, 2019
In previous posts of this series I presented an elaboration of the Musker-Monkewitz analytical wall function that allowed extensions to non-equilibrium cases and to thermal (scalar) cases with, in theory, arbitrary Pr/Pr_t (Sc/Sc_t) ratios.

In the meanwhile, I worked on a rationalization and generalization of the framework, derivation of averaged production term for the TKE equation, etc.

While the new material is presented in a substantially different manner and will require a dedicated post (probably a simple link to the material posted elsewhere), a few details emerged that are still worth mentioning in this post series.

In particular, what is worth discussing here is the fit of the presented wall function (and, for that matter, of wall functions in general) to a particular turbulence model. Indeed, while wall functions are typically presented in a standalone fashion, without particular reference to the turbulence model in use (and, indeed, as done here too in the previous posts), it is important that their analytical profile actually fits, as closely as possible, the one expected from the turbulence model in use. This becomes of paramount importance when using a y+ insensitive formulation (as the presented one is intended to be); otherwise such insensitivity is not really achieved.

Thus, for example, using the Reichardt or the Spalding profile (which are well known y+ insensitive formulations) with a turbulence model that, when resolved to the wall, provides a different velocity profile, is not optimal and is not going to produce the expected results of insensitivity.

Things get particularly troublesome when going to the thermal (scalar) case with high Pr/Pr_t (Sc/Sc_t) ratios. Indeed, as this ratio ideally (or practically, depending on the specific formulation) multiplies the viscosity ratio underlying the wall function, even minute differences, not typically relevant for Pr/Pr_t (Sc/Sc_t) < 1 (e.g., the velocity case), are instead amplified.

Thus, for example, using for the temperature the well-known Kader formulation with the Jayatilleke term for the log part is not typically going to match the results of a given turbulence model at all Pr/Pr_t ratios. The same also happens for the presented Musker-Monkewitz wall function, which has its own peculiar dependence on the ratio Pr/Pr_t.

With this post I just want to present a fitting of the basic profile constant a in the Musker-Monkewitz wall function that can be used to match, approximately, the Spalart-Allmaras temperature/scalar profile. I also have similar fittings for other all-y+ models, but SA is relevant because its viscosity ratio profile is simple and can be integrated with a very simple routine (and is thus included for comparison in the attached one).

Just change, as usual, the file extensions from .txt to .m and launch comparewf (having musker.m in the same folder). The adjusted Musker wall function gets compared with the numerically integrated SA profile and the reference Kader wall function.

You can play, in comparewf (don't touch musker.m), with the Pr/Pr_t ratio and the non-equilibrium source term FT (but then note that the Kader profile does not include its effects) and see how the fit works relatively better than the Kader profile for SA.

In particular, the present fit for the constant a is:

a = a_0 \left[1 + c_1 \max\left(\frac{Pr}{Pr_t},\,1\right)^{c_2} + c_3\right]

where the constant values can be found in the attached file. Of course, this fit is just an attempt and should not be taken as etched in stone. In particular it is based on the SA profile using the von Karman constant vk = 0.4187.

Note also that, correcting a mistake in the previous posts, the suggested default value for the profile constant is the one of the original authors (of course), a0 = 10.306.
Attached files: comparewf.txt (2.0 KB), musker.txt (930 Bytes)
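For quick experimentation outside Matlab/Octave, the fit above is trivial to evaluate; here is a minimal Python sketch, where a0 = 10.306 is the value quoted in the post and c1, c2, c3 are placeholders (the calibrated values are in the attached files):
Code:
import numpy as np

def musker_a(pr_ratio, a0=10.306, c1=0.1, c2=0.5, c3=0.0):
    # Fit of the profile constant a vs Pr/Pr_t; c1, c2, c3 are placeholder
    # values here -- the calibrated ones are in the attached comparewf/musker files.
    return a0 * (1.0 + c1 * np.maximum(pr_ratio, 1.0) ** c2 + c3)

print(musker_a(1.0))   # Pr/Pr_t = 1: velocity-like case
print(musker_a(7.0))   # higher Pr/Pr_t, where the sensitivity matters most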
► A few thoughts about today's CPU era...
    6 Jan, 2019
So for whatever reason I went on a nostalgic thought process an hour or so ago and began reading about:
The nostalgic reason for this was that I had briefly gotten a chance to work with a couple of Intel Phi Co-processors a couple of years ago and never got the time to work on it. And I had gotten an AMD A10-7850K for myself as well and likewise never got the time to work on it either.


Intel Phi Co-processors KNC

So the Phi co-processors were available cheap, some 200-300€ per card, because Intel was getting rid of stock. They boasted a potential 64 GHz of cumulative CPU clock, of which perhaps 16 GHz was plausible to take advantage of, given that each card was a monster with 64 threads and 8 memory channels, but each of its 16 cores could only run at ~1.1 GHz.
  • The downside? It required porting code to it, even though it was x86_64 architecture and was running a small Linux-like OS within it, as if it were a CPU with Android on a USB stick, but it was a PCI-E card on a PCI-E slot....
  • The result:
    • Takes too long to make any code work with it and it was essentially something akin to a gaming console, i.e. expensive hardware designed for a specific purpose.
    • Would be plausible to use them, if they had done things right...
  • That said, that's how NVidia does its job with CUDA... but GPUs can crank number crunching all the way up to some 1000-4000 FPUs, so 64 threads sharing 16 or 8 FPUs was borderline nonsense...


AMD A10-7850K
This is what to me felt like the technology that could disrupt all others: 4 cores that could be used to manage 512 FPUs, all on the same die, not requiring memory offloading through PCI-E lanes... this was like a dream come true, the quintessential technology holy grail for high-performance computing, if there ever was one. Hypothetically this harbored 512 FPUs at 720 MHz, which would add up to ~368 GHz of cumulative potential CPU clock power, which, along with 4 x86_64 cores @3.7 GHz to shepherd them, would allow for a killing in HPC...

But the memory bottleneck of only having 2 channels at 2133 MHz was like having only some 150 FPUs to herd, when comparing to a GPU card with GDDR5 at 7 GHz...

However, even if that were the case, it wouldn't be all too bad, given that it would give a ratio of about 38 FPUs per core; compared to the 4-16 float arrays in AVX, the A10-7850K would still make a killing...

Unfortunately:
  1. It's not exactly easy to code for it, mostly because of the stack that needs to be installed...
  2. Which wouldn't be so bad, given that the competition is CUDA, which also relies on the same kind of installation hazards...
  3. But the thing that eventually held me back on ever doing anything with it was that Kaveri architecture had a bug that rendered it not supportable in AMD's ROCm development efforts: https://github.com/RadeonOpenCompute...ment-270193586
I still wish I could find the time and inspiration to try and figure out what I could still do with this APU... but a cost/benefit analysis says that it's not worth the effort :(


Intel Xeon Phi KNL
Knight's Landing... The Phi was somewhat inspired by D&D, in the sense that Knights took up their arms and went on an adventure, in search or hunt for a better home: https://en.wikipedia.org/wiki/Xeon_Phi
  1. Knights Ferry - began traveling by boat...
  2. Knights Corner - nearly there...
  3. Knights Landing - reached the hunting/fighting grounds...
  4. Knights Hill - conquered the hill... albeit it was canceled, because they didn't exactly conquer it...
  5. Knights Mill - began working on it... but it was mostly oriented towards deep learning...
KNL was essentially a nice CPU, in the sense that we didn't need to cross-compile and could instead focus on optimizing for this CPU. The pseudo-Level 4 cache, technically named MCDRAM: https://en.wikipedia.org/wiki/MCDRAM - was akin to having GPU-rated RAM (by which I mean akin to GDDR5) near the 64-72 cores that the CPU had...

The problem: 64-72 cores running at 1.1 GHz is pointless for FPU processing if you only have 64-72 of the bloody critters, ain't it? Compared to the countless FPUs on a GPGPU, this is peanuts...


Intel Skylake-SP
They finally learned their lessons with the KNL and gave the x86_64 architecture proper infrastructure for scaling, at least from my understanding of the "KNL vs Skylake-SP" document I mentioned at the start of this post.

They even invested in AVX512... 64 double-precision or 128 single-precision array vector FPUs (all crunched in a single clock cycle, instead of just one FPU per core in the common x86 architecture), which I guess run at 2.2 to 3GHz instead of 1.1GHz, so effectively making them 2 to 3 times faster than GPU FPUs.

I'm not even going to venture an estimate of how much potential CPU clock power these AVX512 units have compared to a GPU, for a very simple reason: they can only reach 6 memory channels at a maximum of 2666 MHz, which pales in comparison to the 7 GHz or more that exist nowadays in GDDR5/6 technology on GPU cards.


AMD EPYC
This made me laugh, once I saw the design architecture: https://www.anandtech.com/show/11544...f-the-decade/2
So the trick was fairly simple:
  1. Have 4 Ryzen CPUs connected to each other in an Infiniband-like connection between all 4 of them.
  2. Each Ryzen CPU has only 2 memory channels, but can have up to 8 cores and 2 threads per core...
  3. Has 2666 MHz RAM... being accessed through a total of 8 memory channels.
This is what the Knight's thingamabob should have been right from the start... this is the kind of technology that will allow extending to the next logical step: 3D CPU stacks, with liquid cooling running between them...

Either way, the EPYC CPUs are nearly equivalent to 4 mainstream-grade CPUs in each socket, connected via Infiniband, for roughly the size of a single credit card...


Playstation 4 and Xbox One
  • Octa-Core AMD x86-64 "Jaguar"-based CPU
  • AMD Radeon with a ton of shaders (~700 to ~2500 at around 1.2GHz), depending on the version...
  • 8-12 GB GDDR5 RAM, depending on the version, but mostly shared between CPU and GPU...
All in the same board... sharing GDDR5 RAM... this is like the holy grail of modern computing which could proliferate in some HPC environments such as CFD and FEM... and it is only being used for gaming. Really? Seriously??


What I expect in the near future
To me, the plan is simple, given that Moore's law gave out years ago, since it is hard to keep scaling down lithography, and we are now reaching the smallest possible limit at which a transistor can still hold a charge without sneezing...
  1. Specialization: we are already seeing this in several fronts:
    1. ASICs were created for the bitcoin mining thingamabob... a clear sign of the future, even though it's a pain in the butt to code for... since we are coding the actual hardware... but that's how GPUs appeared in the first place and the AI-oriented tech coming out on current CPUs is the same kind of thing, as well as AVX tech et al.
    2. ARM and RISC CPUs, where trimming down hardware specs can help make CPUs run cooler and with less power on our precious smartphones and tablets...
    3. You can even design your own RISC CPU nowadays: https://www.youtube.com/watch?v=jNnCok1H3-g
  2. x86_64 needs to go past its primordial soup design and go all out in integration:
    1. 3D stacking of core groups, with liquid cooling running between stacks, because heat extraction through copper alone is likely not enough.
    2. Intertwining GDDR RAM between those stacks.
    3. Memory channels should accumulate across core groups, akin to the AMD EPYC design.
    4. Essentially create a cluster within a single socket, which is essentially what an AMD EPYC nearly is...
► Install ANSYS 18 on ubuntu (x64)
  23 Oct, 2018
This is the step-by-step procedure that I followed to get ANSYS running on my machine. This procedure won't give any library issues.

The required packages from the installation guide (for RedHat/CentOS) are:
• libXp.x86_64
• xorg-x11-fonts-cyrillic.noarch
• xterm.x86_64
• openmotif.x86_64
• compat-libstdc++-33.x86_64
• libstdc++.x86_64
• libstdc++.i686
• gcc-c++.x86_64
• compat-libstdc++-33.i686
• libstdc++-devel.x86_64
• libstdc++-devel.i686
• compat-gcc-34.x86_64
• gtk2.i686
• libXxf86vm.i686
• libSM.i686
• libXt.i686
• xorg-x11-fonts-ISO8859-1-75dpi.noarch
• glibc-2.12-1.166.el6_7.1 (or greater)

1. Therefore I installed:
Code:
sudo apt install xterm lsb csh ssh rpm xfonts-base xfonts-100dpi xfonts-100dpi-transcoded xfonts-75dpi xfonts-75dpi-transcoded xfonts-cyrillic libmotif-common mesa-utils libxm4 libxt6 libxext6 libxi6 libx11-6 libsm6 libice6 libxxf86vm1 libpng12-0 libpng16-16 libtiff5 gcc g++ libstdc++6 libstdc++5 libstdc++-5-dev

2. Manually install libXp (not included in the standard repo); you can find it at:
HTML Code:
https://pkgs.org/download/libxp6

3. Update the database with:
Code:
sudo updatedb

4. Locate the following libs and create soft links based on your system. I did as follows:
Code:
sudo ln -sf /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 /usr/lib/libGL.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 /usr/lib/libGL.so.1
sudo ln -sf /usr/lib/x86_64-linux-gnu/libGLU.so.1 /usr/lib/libGLU.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXm.so.4 /usr/lib/libXm.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXm.so.4 /usr/lib/libXm.so.3
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXp.so.6 /usr/lib/libXp.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXt.so.6 /usr/lib/libXt.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXext.so.6 /usr/lib/libXext.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXi.so.6 /usr/lib/libXi.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libX11.so.6 /usr/lib/libX11.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libSM.so.6 /usr/lib/libSM.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libICE.so.6 /usr/lib/libICE.so
sudo ln -sf /lib/x86_64-linux-gnu/libgcc_s.so.1 /lib/libgcc.so
sudo ln -sf /lib/x86_64-linux-gnu/libc.so.6 /lib/libc.so
sudo ln -sf /lib/x86_64-linux-gnu/libc.so.6 /lib64/libc.so.6

5. Change the command interpreter for shell scripts:
Code:
sudo dpkg-reconfigure dash
Then answer "No" to the question.

6. To install ANSYS from DVD1, launch:
Code:
sudo ./INSTALL

7. Set the Linux environment variables for quick launch of each ANSYS component by editing the hidden .bashrc file in your Ubuntu home directory. To open it, type in the terminal: gedit ~/.bashrc
Then paste the following code:

# add environment variables:

#ANSYS

# Workbench
export ANSYS180_DIR=/ansys_inc/v180/ansys
alias wb2='/ansys_inc/v180/Framework/bin/Linux64/runwb2 -oglmesa'
export LD_LIBRARY_PATH=/usr/ansys_inc/v180/Framework/bin/Linux64/Mesa:$LD_LIBRARY_PATH
#export XLIB_SKIP_ARGB_VISUALS=1 #uncomment if you have trasparency issues in CFX pre/post or turbogrid
export LANG=en_US.UTF8

#CFX
export PATH=/ansys_inc/v180/CFX/bin:$PATH

#Turbogrid
export PATH=/ansys_inc/v180/TurboGrid/bin:$PATH

#FLUENT
export PATH=/ansys_inc/v180/fluent/bin:$PATH
export FLUENT_ARCH='lnamd64'

8. Type the following code:
source ~/.bashrc

9. To start ANSYS, use any of the following commands:

Ansys CFX Launcher: cfx5
CFX-Pre: cfx5pre
CFX-Solver Manager: cfx5solve
CFD-Post: cfdpost
TurboGrid: cfxtg
Fluent: fluent
ICEM CFD: icemcfd
Ansys APDL: ansys180 (command mode) or ansys180 -g (graphics mode)
Ansys Workbench: runwb2

To get help for any of these commands, use the -help option.

Note: The Linux version of ANSYS is installed by default in /usr/ansys_inc on Ubuntu. If you change the default installation location, please modify the path specified in ansyslmd.ini. Use vim to open ansyslmd.ini; it should read, for example: ANSYSLI_NOFLEX=1 and LICKEYFIL=/usr/ansys_inc/shared_files/licensing/license.dat

If the fonts are not displayed correctly in the GUI, run the following: sudo apt-get install xfonts-75dpi xfonts-100dpi xterm ttf-mscorefonts-installer

You do need to restart your system (it might be enough to restart the X server)!
► interesting threads
  19 Oct, 2018

curiosityFluids

► Normal Shock Calculator
  20 Feb, 2019

Here is a useful little tool for calculating the properties across a normal shock.
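The interactive calculator itself is embedded on the original page; for reference, a minimal Python sketch of the standard normal-shock relations for a calorically perfect gas (a generic implementation, not the calculator's own code) looks like:
Code:
import math

def normal_shock(M1, gamma=1.4):
    """Downstream Mach number and static ratios across a normal shock (M1 > 1)."""
    M2 = math.sqrt((1 + 0.5 * (gamma - 1) * M1**2) / (gamma * M1**2 - 0.5 * (gamma - 1)))
    p_ratio = 1 + 2 * gamma / (gamma + 1) * (M1**2 - 1)            # p2/p1
    rho_ratio = (gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)    # rho2/rho1
    T_ratio = p_ratio / rho_ratio                                  # T2/T1
    return {"M2": M2, "p2/p1": p_ratio, "rho2/rho1": rho_ratio, "T2/T1": T_ratio}

print(normal_shock(2.0))  # M2 ~ 0.577, p2/p1 = 4.5 for gamma = 1.4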

If you found this useful, and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions at www.stfsol.com for more information!

Disclaimer: This calculator is for educational purposes and is free to use. STF Solutions and curiosityFluids make no guarantee of the accuracy of the results, or of their suitability or outcome for any given purpose.

► A cfMesh workflow to speed up and improve your meshing
  19 Feb, 2019

Here, I’ll cover the basic workflow that I implement when I am using cfMesh.

First, when do I use cfMesh? I love cfMesh. I find it robust, easy to use (with this workflow), and it gives high-quality hex-dominant meshes for use with OpenFOAM. But there are a few cases where I do not use cfMesh. These are when I am meshing multiple regions (such as rotating zones or conjugate heat transfer) and when I have decided (for some reason) that I am going to do a tet-dominant mesh. With that said, both of those can be accomplished with cfMesh, but I prefer snappyHexMesh for multi-region meshing, and I prefer the built-in Netgen and gmsh modules of Salome if I am doing a tet-mesh.

What do you need to start?

  • For this workflow using cfMesh, you will obviously need a working installation of cfMesh. Luckily, it now comes preloaded as part of OFv18. Even if you prefer a different OpenFOAM flavor, you can use the OFv18 cfMesh to create the mesh, and the mesh will still work in the other flavors.
  • You will also need a working installation of Salome.

Geometry Preparation

Glider model obtained from GrabCad: https://grabcad.com/library/15m-glider-sailplane-1

Typically, I first start in a CAD program (it doesn’t matter which) and produce a 3D .step, .iges, or .brep representation of the fluid domain. This means that if you are doing an internal flow, you want a geometry file representing everywhere the fluid will go. For an external flow, you want a large open domain with your geometry subtracted from it. This is easily accomplished with a simple Boolean subtraction. You can do this in pretty much any CAD program. If you are like me and frequently just get the CAD file from a client, I usually load it into the CAD, draw my fluid domain, and perform the subtraction. My CAD software of choice is Onshape (primarily for cost reasons and its similarity to Solidworks). Alternatively, you can perform Boolean subtractions directly in Salome.

For an example, I’ll use the Glider geometry shown in the figure above. I downloaded this model from Grabcad, loaded it into Onshape, cleaned up some of the surfaces and subtracted the glider from a fluid domain:

FMS File Preparation

FMS is the recommended file format for using cfMesh, as it contains all of the information that the mesher needs. Even as a relatively heavy user of cfMesh, I never knew the answer to the question: HOW DO I PRODUCE A FULL FMS FILE? Yes, you can use the surfaceFeatureEdges script from cfMesh to create an FMS file from your stl or obj file, but these files don’t typically contain all of the patch information you want.

For a very long time, I used cfMesh with a very tedious workflow where I would convert my geometry to fms or ftr, mesh it using automatic patch naming from cfMesh, load it into ParaView, and manually see which patch was which. Needless to say, this was time-consuming and frustrating. In one project, I was modeling a diffuser with over 100 slots in it. surfaceFeatureEdges extracted all of the sides of the slots as separate patches. I then manually renamed them all... ugh. Luckily, I have learned my lesson, and I can share the trick with you: Salome.

cfMesh actually comes with two python scripts for Salome. These can be used to output an fms file that contains ALL of the information you need, and they also let you use the Salome GUI to name patches and refinement boundaries.

Selecting Patches

Back on topic: First load your STEP, IGES, or BREP file into the Geometry module of Salome. Now, we want to create and name our patches. To do this, use the menu New Entity -> Group -> Create Group. Make sure the geometry file is selected in the tree. A window will open where you can select points, edges, surfaces, or solids. We want surfaces.

Type the name of the patch you are going to select. Select the face in the viewer, and click Add. Finally, hit Apply. Repeat this for each patch. In most cases, you will need to hide some surfaces to see the surfaces of the geometry. In those cases, just select the faces that are in the way and hit “Hide Selected”.

Extract Edges

Before we move on to creating our surface mesh, we should extract our edges for cfMesh. This is done by executing the “ExtractFeatureEdges.py” python script. Select the geometry file in the tree, hit CTRL-t, then select the script. If you are running OF1806, it is located in OpenFOAM-v1806/modules/cfmesh/python/Salome.

Generate Surface Mesh

It seems tedious to generate a mesh… before generating a mesh. But rest assured, we are only creating a surface mesh. And it should run fast, unless you have a complicated geometry. Regardless, if you choose the default settings of either Netgen 1D-2D, or Mefisto with adaptive wire discretization, it should be fast and produce a good surface triangulation. I have had one slow case, but after tweaking the settings a bit, it wasn’t a problem. For this glider example, I used Mefisto meshing with wire discretization on the glider edges to control the grid size.

Dialog to select NETGEN 1D-2D defaults

After building the mesh (hitting the little gear button), we now just need to load our patches from the geometry module, and export to an FMS file.

Surface mesh of outer boundaries created by Salome

Surface mesh of glider created by Salome before exporting to cfMesh

Within the Mesh module, select your mesh, and then go to Mesh -> Create Groups from Geometry. Select your patches AND your featureEdges (as cfMesh needs these) and hit Apply and Close.

Create Groups from Geometry Dialog

Finally, we are ready to export to an FMS file. To do this, select your mesh in the tree. First, load the Salome module: hit CTRL-t and select the salomeTriSurf.py python script. This only loads the module. Now, in the python shell at the bottom of the screen, write triSurf().writeFms('FileName.fms') and hit enter. This only takes a few seconds and then writes the FMS file in your Salome directory. Copy it into your OpenFOAM case directory and you are ready to mesh with cfMesh.

Mesh Using cfMesh

From here on out, is basic cfMesh meshing. In your system/meshDict file you must select the FileName.fms that you created in the previous steps. I will not cover the meshDict options I used here, as the focus of this article was on the workflow described in the previous sections. But here are some images of the type of results you can expect from cfMesh:

Glider mesh obtained with cfMesh
Surface grid and crinkle-cut showing layers on fuselage
Surface mesh and crinkle-cut showing layers at the wing trailing edge
Crinkle cut showing mesh behind wing

As the general usage of cfMesh is covered in detail in the cfMesh manual, and in another post (the Ahmed Body), I will just give a couple of tips and tricks here. A specific Tips and Tricks post may follow.

Use the renameBoundary dict to simplify your case

In this workflow, I usually create surfaces for refinement within Salome, and export them. However, having a bunch of different patches can be a bit of a pain. To address this, I export the patches from Salome, select them for refinement within my meshDict, but also rename them in the renameBoundary sub-dict. If you have multiple patches renamed to the same thing, they all get grouped into a single patch. So you can have 10 different refinement patches that you created, but then group them all into a single patch afterwards, making case setup simpler.

Use the generateBoundaryLayers command

I have found that I get much better results if I first mesh without boundary layers, and then add them afterwards using the generateBoundaryLayers command. This is for a couple of reasons. First, this lets you look at the basic mesh and make sure that it is satisfactory. Your mesh without boundary layers should be as high quality as you can make it. Secondly, you can see how the layer addition affects the final mesh. Layer addition can be somewhat of an art, and doing it separately makes this much easier. If your base mesh has bad quality, it will be even worse after layering.

Use the crinkle-slice option in Paraview

When viewing your mesh, make sure to select the “crinkle-slice” option. If you just take a regular slice, (a) you aren’t seeing the actual cells, you are seeing a slice through them, and (b) the rendering adds some cross lines which don’t actually exist. You want to see the actual cells, without extra non-existent lines.

Use the improveMeshQuality command

In some cases where I have had bad mesh quality, I have improved it by running the improveMeshQuality command. It basically runs through some of the improvement algorithms that are executed during the actual meshing, but runs through them again. I have had some success with it, but haven’t used it extensively. Try your best to have a good mesh quality right from the get-go (duh).

Use checkMesh

This sounds obvious… but checkMesh works. It gives you important statistics on the quality of your mesh. In my experience, take very close note of the non-orthogonality. If it is high (>70 or so), you need to improve your mesh or introduce some special schemes to account for this. Regardless, you should know how orthogonal your mesh is.

Use .eMesh files for additional control

For some shapes, you may need extra refinement based on edges instead of faces. In this case, you can use the edgeMeshRefinement sub-dict and use an eMesh or vtk file defining the edges.

Good Luck!

If you are struggling with CFD, are interested in getting started, or are looking for consulting services, visit http://www.stfsol.com for information on CFD consulting services including training, development, private webinars and other simulation services.

► Sutherland’s Law
  16 Feb, 2019

When working on practical engineering applications, you often encounter problems where there are significant changes in temperature locally in the flow field. For instance, if you are doing a heat transfer calculation, you usually want to use the film properties (aka the properties adjacent to the wall). Doing this in any practical sense requires a model for viscosity and thermal conductivity.

For a pure, non-reacting gas, the viscosity depends only on temperature (Anderson 2006). Therefore, viscosity can be modelled using temperature alone. One of these models, and one that is built into software like OpenFOAM, is Sutherland’s law.

It is given by:

\mu=\mu_o\frac{T_o + C}{T+C}\left(\frac{T}{T_o}\right)^{3/2}

It is also often simplified (as it is in OpenFOAM) to:

\mu=\frac{C_1 T^{3/2}}{T+C}=\frac{A_s T^{3/2}}{T+T_s}

The right-hand side gives the OpenFOAM notation. For air, \mu_o=1.716\times 10^{-5}, T_o=273.15\ \mathrm{K}, C=T_s=110.4\ \mathrm{K}, and C_1=A_s=1.458\times10^{-6} (SI units).

The coefficients for several other gases can be found in Frank White’s Viscous Fluid Flow (2006).
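As a quick sanity check, the law is easy to evaluate directly; here is a short Python sketch using the OpenFOAM-style coefficients for air given above:
Code:
def sutherland_mu(T, As=1.458e-6, Ts=110.4):
    """Dynamic viscosity [kg/(m s)] from Sutherland's law (OpenFOAM notation)."""
    return As * T**1.5 / (T + Ts)

print(sutherland_mu(273.15))  # ~1.716e-5 kg/(m s), the reference value for air
print(sutherland_mu(300.0))   # ~1.85e-5 kg/(m s)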

When is it applicable?

This is an important question. If you are doing engineering or research, you should know the limits of these equations. According to Anderson (2006), “Sutherland’s law is accurate for air over a range of several thousand degrees and is certainly appropriate for hypersonic viscous-flow calculations”. The classic undergraduate textbook by Frank White states that it is “adequate over a wide range of temperatures”.

For air, another source (Rathakrishnan 2013) states that the relationship is valid from 0.01 to 100 atm, and between 0 and 3000K. And Frank White’s Viscous Fluid Flow mentions 2% error between 170K and 1900K for air.

What if you can’t find the coefficients?

If you can’t find a good reference listing the coefficients of the gas you are looking at, one option is to head over to the NIST Webbook, download 2 viscosity values at two different temperatures and solve for the 2 coefficients. Or curve-fit in Matlab using a series of data from NIST or some other source. Basically, something along those lines. This is explained in the undergraduate book by Munson (2014).

FYI that is a good idea anyway, if you want to quantify the error in your work or simulations.
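A minimal sketch of that fitting idea in Python (scipy in place of Matlab); the data below are synthetic placeholders that you would replace with values taken from the NIST WebBook for your gas:
Code:
import numpy as np
from scipy.optimize import curve_fit

def sutherland(T, As, Ts):
    return As * T**1.5 / (T + Ts)

# Placeholder (T [K], mu [kg/(m s)]) samples -- substitute real NIST data here
T_data = np.array([250.0, 300.0, 400.0, 600.0])
mu_data = sutherland(T_data, 1.458e-6, 110.4)   # synthetic "air" values for illustration

(As_fit, Ts_fit), _ = curve_fit(sutherland, T_data, mu_data, p0=[1e-6, 100.0])
print(As_fit, Ts_fit)   # recovers ~1.458e-6 and ~110.4 for the synthetic data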

References

Rathakrishnan, E. (2013). Theoretical aerodynamics. John Wiley & Sons.

Anderson Jr, J. D. (2006). Hypersonic and high-temperature gas dynamics. American Institute of Aeronautics and Astronautics.

Munson, B. R., Okiishi, T. H., Rothmayer, A. P., & Huebsch, W. W. (2014). Fundamentals of fluid mechanics. John Wiley & Sons.

White, F. M. (2009). Fluid mechanics. Boston, Mass: WCB/McGraw-Hill.

White, F. M., & Corfield, I. (2006). Viscous fluid flow (Vol. 3, pp. 433-434). New York: McGraw-Hill.

► Air Properties Calculator
  16 Feb, 2019

Here is a little calculator for calculating the properties of air. Enter the pressure (Pa) and Temperature (K), and the calculator should produce an estimate for the specific heat capacities, thermal conductivity, and density.

For information on Sutherland’s law, see the post on the topic.
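The embedded calculator's exact method isn't shown; a rough Python sketch of how such estimates can be produced (ideal-gas density, Sutherland viscosity, approximately constant specific heats, and conductivity from an assumed Prandtl number of about 0.71) is:
Code:
def air_properties(p, T):
    """Rough air property estimates; a simplified sketch, not the calculator's own code."""
    R = 287.05                    # specific gas constant [J/(kg K)]
    cp, cv = 1005.0, 718.0        # approximate specific heats [J/(kg K)]
    Pr = 0.71                     # assumed Prandtl number
    rho = p / (R * T)             # ideal gas law
    mu = 1.458e-6 * T**1.5 / (T + 110.4)   # Sutherland's law for air
    k = cp * mu / Pr              # thermal conductivity [W/(m K)]
    return {"rho": rho, "mu": mu, "k": k, "cp": cp, "cv": cv}

print(air_properties(101325.0, 288.15))   # rho ~ 1.23 kg/m^3, k ~ 0.025 W/(m K)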
If you found this useful, and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes to simpler targeted codes, scripts, macros and GUIs for a wide range of specific engineering purposes such as pipe sizing, pressure loss calculations, heat transfer calculations, 1D flow transients, optimization and more. Visit STF Solutions at www.stfsol.com for more information!

► High-level overview of meshing for OpenFOAM (and others)
  14 Feb, 2019

Here, I’ll give a high-level overview of my opinions on open-source meshing for OpenFOAM. This post should help point you in the direction of the mesher you could use to accomplish your simulation goals! But I won’t go into each mesher in detail here, as this would become a novel. This is just my opinion on which mesher I use in a few typical cases.

During my PhD we were very spoiled and used the software Pointwise to generate our meshes. I still love Pointwise, and if I ever decide to move away from open-source meshing that will be my choice. But today is not that day! For users of open-source software, including OpenFOAM, meshing is a constant struggle. Especially if you’re new. I have even known people who thought about using OpenFOAM and then gave up simply because they thought that blockMesh was the only way to make meshes in OpenFOAM. If that were the case – needless to say – countless applications would be impossible. Don’t give up! You can mesh pretty much anything given the tools that are out there!

In my experience the key meshing options for the user of OpenFOAM are:

  • blockMesh (All version of OF)
  • cfMesh (OF.org <=v4, foam-extend, OF.com V18+)
  • SnappyHexMesh (All versions of OF)
  • gmsh (independent)
  • Salome (independent)

Each of these meshing tools has its own advantages and disadvantages and has types of meshing where it performs best. I’ll try to break this down based on my experience. Of course, some people will disagree with my views, as meshing (somehow) can be a deeply personal adventure.

Structured 2D or Simple 3D Meshes

In this case, the answer is pretty simple: blockMesh

blockMesh has a slightly steep learning curve but all it takes is practice and reminding yourself of the right-hand-rule. If your geometry is easily represented by a reasonable number of points and curves – blockMesh is a good option. You would be surprised what you can mesh in blockMesh – if you have the time… but who has the time! That being said, blockMesh is the ideal OpenFOAM mesher for simple 2D and 3D geometries. Because it is essentially a “structured” mesher you easily achieve very good mesh quality, excellent orthogonality, and have fine control over the mesh in ways unstructured meshers do not provide. (NOTE: Structured vs Unstructured has become a blurry terminology, and in fact, all OF meshes are unstructured…. even if you use a blocking strategy… Ugh.).

A major advantage of using the native OpenFOAM meshers is that they can interface seamlessly with whatever scripting language you prefer. If your code can interface with the OS, it can interface with OpenFOAM – and by extension its meshers. I frequently use blockMesh to build a template. I then leave blanks where my control variables will go and then fill them in at run-time using either bash or python scripting! This makes parametric studies a breeze.
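As an illustration of that templating idea, here is a small Python sketch; the template file name and the @TOKEN@ placeholders are my own convention, not a blockMesh feature:
Code:
# Fill a blockMeshDict template for a small parametric study.
# "blockMeshDict.template" contains tokens such as @LENGTH@ and @NCELLS@.
import shutil
import subprocess
from pathlib import Path

params = [{"LENGTH": 1.0, "NCELLS": 50}, {"LENGTH": 2.0, "NCELLS": 100}]
template = Path("blockMeshDict.template").read_text()

for i, p in enumerate(params):
    case = Path(f"case_{i}")
    shutil.copytree("baseCase", case)          # baseCase holds the rest of the setup
    text = template
    for key, value in p.items():
        text = text.replace(f"@{key}@", str(value))
    (case / "system" / "blockMeshDict").write_text(text)
    subprocess.run(["blockMesh", "-case", str(case)], check=True)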

Moving structured mesh created in blockMesh

Tetrahedral (Tet-Dominant) Meshes

There are lots of limitations to using tet-dominant meshes in CFD. However, I won’t go into those here. There are many times when a tet-mesh is the most straightforward to produce. If you don’t have important boundary layer zones to resolve, ripping out a quick tet-mesh can bring huge time savings.

In these cases, I usually use either Salome’s built-in Netgen module or gmsh. cfMesh has a tet-mesh generator as well, but if I’m using cfMesh I am always using a hex-dominant mesh.

Key to the effective generation of tetrahedral dominant meshes is the ability to generate layers at the wall. This can be a challenge in each of these programs. So no matter what you use – prepare to be patient. I have found that the boundary layer generation in Salome is quite robust. However, it occasionally produces some very poor quality regions which you need to be careful of. I have not had much success with layer generation in gmsh, but if I’m being honest, I haven’t spent too long trying. I have other tools – and generally prefer hex-meshes anyway.

Salome-Netgen Mesh of the Drag Prediction Workshop geometry

Hex-Dominant Meshing

For hex-dominant meshing, your options are blockMesh (if you can create your geometry easily enough), cfMesh, or snappyHexMesh.

snappyHexMesh is a pretty nice tool, and is my mesher of choice for any multi-region meshing (see next section). But compared to cfMesh I find it much less robust. The key area where this is true is in layer generation. There are ways to get consistent layer generation in snappyHexMesh (i.e. providing a separate meshQualityDict where you turn off many of the parameters) but even then it can still be challenging. I have found that in almost all cases, I can achieve consistent, high-quality boundary layers using cfMesh. Not only this, but I have found the workflow using cfMesh to be a breeze. Especially if you use the supplied Salome scripts to produce your geometry file (separate post on this to follow).

snappyHexMesh grid of the Drag Prediction Workshop Geometry
cfMesh of the Ahmed Body

Multi-Region Meshing

Multi-region meshing is required for several different applications. The main ones I have encountered are rotating zones (for machinery and turbines), and conjugate heat transfer (for separate solid and fluid regions). For multi-region meshing in OpenFOAM, I almost always use snappyHexMesh. The primary reason for this is that it is extremely simple to include in your workflow. As an example, let’s consider the case of a simulation where you plan to have rotating and static zones separated by an arbitrary mesh interface (AMI). When using cfMesh, gmsh, or Salome you typically have to create both meshes, and then combine them using the mergeMeshes command. This can be very tedious. However, in some cases it can be worth it if you can achieve better mesh quality with one of these programs. However, you can definitely get good mesh quality with snappyHexMesh.

Multi-region rotating mesh created in snappyHexMesh

A sobering exercise

If you want a sobering exercise, google the Drag Prediction Workshop website, get the geometry, and try to mesh it using open source software and meet their gridding requirements. Here are some of the requirements for the COARSEST MESH:

  • Minimum of 8 cells across the trailing edge of the wing
  • Y+ < 1.0 (equivalent to dy=0.0006 mm)
  • 2 cells close to the wall with no growth
  • Growth rate <1.25 for the rest of the layers

The list goes on – and some of the example meshes will have 20+ layers.

Anyway, I hope someone finds this post useful to point them in the direction of a good meshing software for their application!

If you are struggling with CFD, are interested in getting started, or are looking for consulting services, visit www.stfsol.com for information on CFD consulting services including training, development, private webinars and other simulation services.

Happy Open-Source Gridding!

► Pipe Flow Pressure Drop Calculator
  13 Feb, 2019

Here is a simple tool to calculate the pressure drop along a pipe. It uses the Haaland equation for friction factor to approximate the Colebrook equation. If Re<2300 the flow is assumed to be laminar.
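For reference, here is a minimal Python sketch of that approach (Darcy-Weisbach pressure drop with the Haaland friction factor, and laminar f = 64/Re below Re = 2300); the default fluid properties and roughness are example values for water in commercial steel pipe, not the calculator's own defaults:
Code:
import math

def pressure_drop(Q, D, L, rho=998.0, mu=1.0e-3, eps=4.5e-5):
    """Pressure drop [Pa] along a circular pipe of diameter D [m] and length L [m]."""
    A = math.pi * D**2 / 4.0
    V = Q / A                       # mean velocity from volumetric flow rate Q [m^3/s]
    Re = rho * V * D / mu
    if Re < 2300.0:
        f = 64.0 / Re               # laminar
    else:
        f = (-1.8 * math.log10((eps / D / 3.7)**1.11 + 6.9 / Re))**-2  # Haaland
    return f * (L / D) * 0.5 * rho * V**2

print(pressure_drop(Q=0.002, D=0.05, L=10.0))   # 2 L/s of water through 10 m of 50 mm pipe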

If you find a mistake in the calculation, please let us know!

If you found this useful, and have the need for more, visit www.stfsol.com. One of STF Solutions’ specialties is providing our clients with custom software developed for their needs, ranging from custom CFD codes, to codes, scripts, macros and GUIs for a specific engineering purpose such as pipe sizing, pressure loss calculations and optimization. Visit STF Solutions at www.stfsol.com for more!

Hanley Innovations

► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse


Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only the STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator discs (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2D plots of Cp and other quantities along constant-coordinate lines on the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit http://www.hanleyinnovations.com/stallion3d.html or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017


Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.




More information about the software can be found at the following url:
http://www.hanleyinnovations.com/airfoildigitizerhelp.html

Thanks for reading.


► Your In-House CFD Capability
  15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company just at the push of a button? Stallion 3D gives you this very power using your MS Windows laptop or desktop computers. The software provides accurate CL, CD, & CM numbers directly from CAD geometries without the need for user grid generation and costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC. It utilizes the power that is hidden in your personal computer (64-bit & multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, RANS) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of a turbulent flow field.

Unlike other CFD software that requires you to purchase grid generation software (and spend days generating a grid), grid generation is automatic and is included within Stallion 3D. Results are often obtained within a few hours after opening the software.

 Do you need to analyze upwind and downwind sails? Do you need data for wings and ship stabilizers at 10, 40, 80, 120 degree angles and beyond? Do you need accurate lift, drag & temperature predictions for subsonic, transonic and supersonic flows? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

Tutorials, videos and more information about Stallion 3D version 4.0 can be found at:
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

► Avoid Testing Pitfalls
  24 Jan, 2017


The only way to know if your idea will work is to test it.  Rest assured, as a design engineer your ideas and designs will be tested over and over again, often in front of a crowd of people.

Stallion 3D helps you, as an aerodynamics design engineer, avoid the testing pitfalls that would otherwise keep you awake at night. An advantage of Stallion 3D is that it enables you to test your designs in the privacy of your laptop or desktop before your company actually builds a prototype.  As someone who uses Stallion 3D for consulting, I find it very exciting to see my designs flying the way they were simulated in the software. Stallion 3D will ensure that your creations are airworthy before they are tested in front of a crowd.

I developed Stallion 3D for engineers who have an innate love and aptitude for aerodynamics but who do not want to deal with the hassles of standard CFD programs.  Innovative technologies should always remove a few steps from an existing process to make the journey more efficient.  Stallion 3D lets you skip the painful step of grid (mesh) generation, reducing your workflow to just a few seconds to set up and run a 3D aerodynamics case.

Stallion 3D helps you to avoid the common testing pitfalls.
1. UAV instabilities and takeoff problems
2. Underwhelming range and endurance
3. Pitch-up instabilities
4. Incorrect control surface settings at launch and level flight
5. Not enough propulsive force (thrust) due to excess drag and weight.

Are the results of Stallion 3D accurate?  Please visit the following page to see the latest validations.
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.
► Flying Wing UAV: Design and Analysis
  15 Jan, 2017

3DFoil is a design and analysis software package for wings, hydrofoils, sails and other aerodynamic surfaces. It requires a computer running MS Windows 7, 8 or 10.

I wrote the 3DFoil software several years ago using a vortex lattice approach. The vortex lattice method in the code is based on vortex rings (as opposed to the horseshoe vortex approach).  The vortex ring method allows for wing twist (geometric and aerodynamic), so a designer can fashion the wing for drag reduction and prevent tip stall by optimizing the amount of washout.  The approach also allows sweep (backwards & forwards) and multiple dihedral/anhedral angles.
Another feature that I designed into 3DFoil is the capability to predict profile drag and stall. This is done by analyzing the wing cross sections with a linear-strength vortex panel method and an ordinary differential equation boundary layer solver.   The software utilizes the solution of the boundary layer solver to predict the locations of the transition and separation points.
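For readers curious about the numerics, the basic building block of a vortex-ring lattice is the Biot-Savart induced velocity of a straight vortex filament segment; each quadrilateral ring is just four such segments. The sketch below is a generic illustration of that building block (the function names and the small cutoff eps are mine, not taken from 3DFoil):

import numpy as np

def segment_induced_velocity(p, a, b, gamma, eps=1e-9):
    # Velocity induced at point p by a straight vortex filament from a to b
    # carrying circulation gamma (Biot-Savart law for a finite segment).
    r1, r2, r0 = p - a, p - b, b - a
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < eps:  # point lies (nearly) on the filament axis
        return np.zeros(3)
    k = gamma / (4.0 * np.pi * denom) * np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return k * cross

def ring_induced_velocity(p, corners, gamma):
    # Sum the four filament contributions of one quadrilateral vortex ring.
    return sum(segment_induced_velocity(p, corners[i], corners[(i + 1) % 4], gamma)
               for i in range(4))

In a full vortex-lattice solver these ring influences are assembled into an influence-coefficient matrix, and the ring circulations are found by enforcing flow tangency at each panel's control point.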

The following video shows how to use 3DFoil to design and analyze a flying wing UAV aircraft. 3DFoil's user interface is based on the multi-surface approach. In this method, the wing is designed using multiple tapered surfaces, where the designer can specify airfoil shapes, sweep, dihedral angles and twist. With this approach, the designer can see the contribution to the lift, drag and moments from each surface.  Towards the end of the video, I show how the multi-surface approach is used to design effective winglets by comparing the profile drag and induced drag generated by the winglet surfaces. The video also shows how to find the longitudinal and lateral static stability of the wing.



The following steps are used to design and analyze the wing in 3DFoil:
1. Input the dimensions and sweep of one half of the wing (half span).
2. Input the dimensions and sweep of the winglet.
3. Join the winglet and main wing.
4. Generate the full aircraft using the mirror image insert function.
5. Find the lift, drag and moments.
6. Compute longitudinal and lateral stability
7. Look at the contributions of the surfaces.
8. Verify that the winglets provide drag reduction.

More information about 3DFoil can be found at the following url: http://www.hanleyinnovations.com/3dfoil.html.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

► Corvette C7 Aerodynamics
    7 Jan, 2017

The CAD file for the Corvette C7 aerodynamics study in Stallion 3D version 4 was obtained from Mustafa Asan's revision on GrabCAD.  The file was converted from the STP format to the STL format required by Stallion 3D using OnShape.com.

Once the Corvette was imported into Stallion 3D, I applied ground effect and a speed of 75 miles per hour at zero angle of attack.  The flow setup took just seconds in Stallion 3D, and grid generation was completely automatic.  The software allows the user to choose a grid size setting, and I chose the option that produced a total of 345,552 cells in the computational domain.

I chose the Reynolds-Averaged Navier-Stokes (RANS) equations solver for this example.  In Stallion 3D, the RANS equations are solved along with the k-e turbulence model.  A wall function approach is used at the boundaries.

The results were obtained after 10,950 iterations on a quad core laptop computer running at 2.0 GHz under MS Windows 10.


The results for the Corvette C7 model  are summarized below:

Lift Coefficient:  0.227
Friction Drag Coefficient: 0.0124
Pressure Drag Coefficient: 0.413
Total Drag Coefficient: 0.426

Stallion 3D HIST Solver:  Reynolds Averaged Navier-Stokes Equations
Turbulence Model: k-e
Number of Cells: 345,552
Grid: Built-in automatic grid generation

Run time: 7 hours

The coefficients were computed based on a frontal area of 2.4 square meters.
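If you want to turn these coefficients into forces, only the dynamic pressure and the 2.4 m² reference area are needed. Here is a quick back-of-the-envelope check (assuming standard sea-level density, which may differ from the conditions used in the actual run):

rho = 1.225                  # air density, kg/m^3 (sea-level assumption)
V = 75.0 * 0.44704           # 75 mph converted to m/s
A = 2.4                      # frontal (reference) area, m^2
q = 0.5 * rho * V**2         # dynamic pressure, Pa

CL, CD = 0.227, 0.426        # coefficients reported above
lift = CL * q * A
drag = CD * q * A
print("q = %.0f Pa, lift = %.0f N, drag = %.0f N" % (q, lift, drag))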

The following are images of the same solution from different views in Stallion 3D.  The streamlines are all initiated near the ground plane 2 meters ahead of the car.

Top View



Side View


Bottom View


Stallion 3D utilizes a new technology (Hanley Innovations Surface Treatment, or HIST) that enables design engineers to quickly analyze their CAD models on an ordinary Windows PC.  We call this SameDayCFD. This unique technology is my original work and was not derived from any existing software codes.  More information about Stallion 3D can be found at http://www.hanleyinnovations.com/stallion3d.html.


Do not hesitate to contact us if you have any questions.  More information can be found at  http://www.hanleyinnovations.com

Thanks for reading.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.



CFD and others... top

► Not All Numerical Methods are Born Equal for LES
  15 Dec, 2018
Large eddy simulations (LES) are notoriously expensive for high Reynolds number problems because of the disparate length and time scales in the turbulent flow. Recent high-order CFD workshops have demonstrated the accuracy/efficiency advantage of high-order methods for LES.

The ideal numerical method for implicit LES (with no sub-grid scale models) should have very low dissipation AND dispersion errors over the resolvable range of wave numbers, but dissipative for non-resolvable high wave numbers. In this way, the simulation will resolve a wide turbulent spectrum, while damping out the non-resolvable small eddies to prevent energy pile-up, which can drive the simulation divergent.

We want to emphasize the equal importance of both numerical dissipation and dispersion, which can be generated from both the space and time discretizations. It is well-known that standard central finite difference (FD) schemes and energy-preserving schemes have no numerical dissipation in space. However, numerical dissipation can still be introduced by time integration, e.g., explicit Runge-Kutta schemes.     

We recently analysed and compared several 6th-order spatial schemes for LES: the standard central FD, the upwind-biased FD, the filtered compact difference (FCD), and the discontinuous Galerkin (DG) schemes, with the same time integration approach (a Runge-Kutta scheme) and the same time step.  The FCD schemes have an 8th order filter with two different filtering coefficients, 0.49 (weak) and 0.40 (strong). We first show the results for the linear wave equation with 36 degrees-of-freedom (DOFs) in Figure 1.  The initial condition is a Gaussian profile and a periodic boundary condition was used. The profile traversed the domain 200 times to highlight the difference.

Figure 1. Comparison of the Gaussian profiles for the DG, FD, and CD schemes

Note that the DG scheme gave the best performance, followed closely by the two FCD schemes, then the upwind-biased FD scheme, and finally the central FD scheme. The large dispersion error from the central FD scheme caused it to miss the peak, and also generate large errors elsewhere.
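The effect is easy to reproduce even without a 6th-order code. The sketch below is my own low-order illustration (not the schemes used in the study): it advects a Gaussian around a periodic domain with a 2nd-order central difference and a 1st-order upwind difference, both integrated with SSP RK3. The central scheme develops dispersive wiggles and phase error, while the upwind scheme smears the peak through numerical dissipation.

import numpy as np

def rhs(u, scheme, c, dx):
    # Spatial term -c du/dx on a periodic grid.
    if scheme == "central":   # 2nd-order central: non-dissipative, dispersive
        dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
    else:                     # 1st-order upwind: strongly dissipative
        dudx = (u - np.roll(u, 1)) / dx
    return -c * dudx

def ssp_rk3_step(u, dt, scheme, c, dx):
    # Third-order SSP Runge-Kutta step (Shu-Osher form).
    u1 = u + dt * rhs(u, scheme, c, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, scheme, c, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2, scheme, c, dx))

n, c, cfl, periods = 100, 1.0, 0.2, 5
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / c
steps = round(periods / (c * dt))

u0 = np.exp(-200.0 * (x - 0.5) ** 2)          # Gaussian initial profile
central, upwind = u0.copy(), u0.copy()
for _ in range(steps):
    central = ssp_rk3_step(central, dt, "central", c, dx)
    upwind = ssp_rk3_step(upwind, dt, "upwind", c, dx)

# After an integer number of periods the exact solution equals the initial profile.
print("central: peak %.3f, min %.3f (dispersive wiggles)" % (central.max(), central.min()))
print("upwind:  peak %.3f, min %.3f (dissipative smearing)" % (upwind.max(), upwind.min()))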

Finally simulation results with the viscous Burgers' equation are shown in Figure 2, which compares the energy spectrum computed with various schemes against that of the direct numerical simulation (DNS). 

Figure 2. Comparison of the energy spectrum

Note again that the worst performance is delivered by the central FD scheme with a significant high-wave number energy pile-up. Although the FCD scheme with the weak filter resolved the widest spectrum, the pile-up at high-wave numbers may cause robustness issues. Therefore, the best performers are the DG scheme and the FCD scheme with the strong filter. It is obvious that the upwind-biased FD scheme out-performed the central FD scheme since it resolved the same range of wave numbers without the energy pile-up. 


► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease in mesh generation, and load balancing on parallel architectures. DG and related methods such as SD and FR/CPR have received much attention because of their geometric flexibility and scalability. They have matured to become quite robust for a wide range of applications. 

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes at a transonic condition with a Reynolds number of more than 1M. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs), and costs about 1/3 the CPU time of the 2nd order simulation, which has 28.7M DOFs, using the commercial solver. Furthermore, the 3rd order simulation is much more accurate as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster to achieve a similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try for your problems, and you may be surprised. Academic and trial uses are completely free. Just visit hocfd.com to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
  17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to compute turbulence: 1) the Reynolds averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the Direct Numerical Simulations (DNS) approach, in which all scales are resolved; 3) the Large Eddy Simulation (LES) approach, in which large scales are computed, while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved widespread success in engineering design, some applications call for LES, e.g., flow at high angles of attack. The spatial filtering of a non-linear PDE results in a SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress to the rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a rate that is physically correct. Later an improved version called the dynamic Smagorinsky model was developed by Germano et al., and it demonstrated much better results.

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme does not contain numerical dissipation.  However, time integration can introduce dissipation. For example, a 2nd order central difference scheme is linearly stable with the SSP RK3 scheme (subject to a CFL condition), and does contain numerical dissipation. When this scheme is used to perform a LES, the simulation will blow up without a SGS model because of a lack of dissipation for eddies at high wave numbers. It is easy to conclude that the successful LES is because the SGS stress is properly modeled. A recent study with the Burgers' equation strongly disputes this conclusion. It was shown that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. Therefore, the role of the SGS model, in the above scenario, was to stabilize the simulation by adding numerical dissipation.

For numerical methods which have natural dissipation at high wave numbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, the SGS model can damage the solution quality because this extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature to support the use of implicit LES (ILES), where the SGS stress simply vanishes. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
    3 Jan, 2017
2016 is undoubtedly the most extraordinary year for small-odds events. Take sports, for example:
  • Leicester won the Premier League in England defying odds of 5000 to 1
  • Cubs won World Series after 108 years waiting
In politics, I do not believe many people truly believed Britain would exit the EU, and Trump would become the next US president.

On a personal level, I also experienced an equally extraordinary event: the coup in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016 in the historic city of Istanbul. A terror attack on the Istanbul International airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and decided to go to Istanbul to attend and support the conference. 

Given the extraordinary circumstances, the conference organizers did a fine job in pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions though three sessions were planned originally. We really enjoyed Istanbul with the beautiful natural attractions and friendly people. 

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was controlled by the rebels. However, the Turkish President managed to Facetime a private TV station, essentially turning around the event. Soon after, many people went to the bridge, the squares, and overpowered the rebels with bare fists.


A Tank outside my taxi



A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA banned all direct flights from Turkey. I was lucky enough to find a new flight, with a stop in Zurich...

In 2016, I lost a very good friend, and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific injury from tennis in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5 2016 to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion


To close, I wish all of you a very happy 2017!

  1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.



► The Linux Version of meshCurve is Now Ready for All to Download
  20 Apr, 2016
The 64-bit version for the Linux operating system is now ready for you to download. Because of the complexities associated with various libraries, we experienced a delay of slightly more than a month. Here is the link again.

Please let us know your experience, good or bad. Good luck!
► Announcing meshCurve: A CAD-free Low Order to High-Order Mesh Converter
  14 Mar, 2016
We are finally ready to release meshCurve to the world!

The description of meshCurve is provided in AIAA Paper No. 2015-2293. The primary developer is Jeremy Ims, who has been supported by NASA and NSF. Zhaowen Duan also made major contributions. By the way, Aerospace America also highlighted meshCurve in its 2015 annual review issue (on page 22). Many congratulations to Jeremy and Zhaowen on this major milestone!

The current version supports both the Mac OS X and Windows (64 bit) operating systems. The Linux version will be released soon.

Here is roughly how meshCurve works. The input is a linear mesh in the CGNS format. Then the user selects which boundary patches should be reconstructed to high-order. After that, geometrically important features are detected. The user can also manually select or delete features. Next the selected patches are reconstructed to add curvature. Finally the interior volume meshes are curved (if necessary). The output mesh is also stored in CGNS format.

We have tested the tool with meshes in the order of a million cells. But I still want to lower your expectation. So try it out yourself and let us know if you like it or hate it. Please do report bugs so that improvements can be made in the future.

Good luck!

Oh, did I mention the tool is completely free? Here is the meshCurve link again.






ANSYS Blog top

► How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon
  10 Oct, 2018

Illini EV Concept Team Photo at Shell Eco Marathon 2018

Weight is the enemy of all teams that design electric cars for the Shell Eco Marathon.

Reducing the weight of electric cars improves the vehicle’s acceleration and power efficiency. These performance improvements make all the difference come race day.

However, if the car’s weight is reduced too much, it could lead to safety concerns.

Illini EV Concept (Illini) is a Shell Eco Marathon team out of the University of Illinois. Team members use ANSYS academic research software to optimize the chassis of their electric car without compromising safety.

Where to Start When Reducing the Weight of Electric Cars?


Front bump composite failure under a load of 2000N.

The first hurdle of the Shell Eco Marathon is an initial efficiency contest. Only the best teams from this efficiency assessment even make it into the race.

Therefore, Illini concentrates on reducing the most weight in the shortest amount of time to ensure it makes it to the starting line.

Illini notes that its focus is on reducing the weight of its electric car’s chassis.

“The chassis is by far the heaviest component of our car, so ANSYS was used extensively to help design our first carbon fiber monocoque chassis,” says Richard Mauge, body and chassis leader for Illini.

“Several loading conditions were tested to ensure the chassis was stiff enough and the carbon fiber did not fail using the composite failure tool,” he adds.

Competition regulations ensure the safety of all team members. These regulations state that each team must prove that their car is safe under various conditions. Simulation is a great tool to prove a design is within safety tolerances.

“One of these tests included ensuring the bulkhead could withstand a 700 N load in all directions, per competition regulations,” says Mauge. If the teams’ electric car designs can’t survive this simulation come race day, then their cars are not racing.

Iterate and Optimize the Design of Electronic Cars with Simulation


Front bump deformation under a load of 2000N.

Simulations can do more than prove a design is safe. They can also help to optimize designs.

Illini uses what it learns from simulation to optimize the geometry of its electric car’s chassis.

The team found that its new design increases torsional rigidity by around 100 percent, even after a 15 percent decrease in weight compared to last year's model.

“Simulations ensure that the chassis is safe enough for our driver. It also proved that the chassis is lighter and stiffer than ever before. ANSYS composite analysis gave us the confidence to move forward with our radical chassis redesign,” notes Mauge.

The optimization story continues for Illini. It plans to explore easier and more cost-effective ways to manufacture carbon fiber parts. For instance, the team wants to replace the core of its parts with foam and increase the number of bonded pieces.

If team members just go with their gut on these hunches, they could find themselves scratching their heads when something goes wrong. However, with simulations, the team makes better informed decisions about its redesigns and manufacturing process.

To get started with simulation, try our free student download. For student teams that need to solve in-depth problems, check out our software sponsorship program.

The post How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon appeared first on ANSYS.

► Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers
    9 Oct, 2018

This engine intake simulation was post-processed using EnSight Enterprise. This allowed for the processing of a large data set to be shared among servers.

Simulation data sets have a funny habit of ballooning as engineers move through the development cycle. At some point, post-processing these data sets on a single machine becomes impractical.

Engineers can speed up post-processing by spatially or temporally decomposing large data sets so they can be post-processed across numerous servers.

The idea is to utilize the idle compute nodes you used to run the solver in parallel to now run the post-processing in parallel.

In ANSYS 19.2, EnSight Enterprise lets you spatially or temporally decompose data sets. EnSight Enterprise is an updated version of EnSight HPC.

Post-Processing Using Spatial Decomposition

EnSight is a client/server architecture. The client program takes care of the graphical user interface (GUI) and rendering operations, while the server program loads the data, creates parts, extracts features and calculates results.

If your model is too large to post-process on a single machine, you can utilize the spatially decomposed parallel operation to assign each spatial partition to its own EnSight Server. A good server-to-model ratio is one server for every 50 million elements.
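As a rough planning aid, that one-server-per-50-million-elements guideline can be written as a quick sizing calculation (a rule-of-thumb sketch only, not a substitute for benchmarking on your own hardware):

import math

def ensight_servers_needed(num_elements, elements_per_server=50_000_000):
    # Rule of thumb quoted above: roughly one EnSight Server per 50M elements.
    return max(1, math.ceil(num_elements / elements_per_server))

print(ensight_servers_needed(345_000_000))  # a 345M-element model -> 7 servers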

Each EnSight Server can be located on a separate compute node on any compute resource you’d like. This allows engineers to utilize the memory and processing power of heterogeneous high-performance computing (HPC) resources for data set post-processing.

The engineers effectively split the large data set up into pieces with each piece assigned to its own compute resource. This dramatically increases the data set sizes you can load and process.

Once you have loaded the model into EnSight Enterprise, there are no additional changes to your workflow, experience or operations.

Post-Processing Using Temporal Decomposition

Keep in mind that this decomposition concept can also be applied to transient data sets. In this case, the dataset is split up temporally rather than spatially. In this scenario, each server receives its own set of time steps.

A turbulence simulation created using EnSight Enterprise post-processing

EnSight Enterprise offers performance gains when the server operations outweigh the communication and rendering time of each time step. Since it’s hard to predict network communication or rendering workloads, you can’t easily create a guiding principle for the server-to-model ratio.

However, you might want to use a few servers when your model has more than 10 million elements and over a hundred time steps. This will help keep the processing load of each server to a moderate level.

How EnSight Speeds Up the Post-Processing of Large Simulation Data Sets

Another good tip ensures your data is post-processed optimally within EnSight Enterprise: engineers achieve the best performance gains by pre-decomposing the data and locating it locally to the compute resources they anticipate using. Ideally, this data should be in EnSight Case format.

To learn more, check out EnSight or register for the webinar Analyze, Visualize and Communicate Your Simulation Data with ANSYS EnSight.

The post Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers appeared first on ANSYS.

► Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing
    8 Oct, 2018

Your design team will make informed decisions about the products they create when they bring detailed simulations up front in the development cycle.

The 19.2 release of ANSYS Discovery AIM addresses the need for early simulation.

It does this by streamlining templates for physics-aware meshing and rapid results.

High-Fidelity Simulation Through Physics-Aware Meshing


Discovery AIM user interface with a solution fidelity slide bar (top left), area of interest marking tool (left, middle), manual mesh controls (bottom, center) and a switch to turn the mesh display on and off (right, top).

Analysts have likely told your design team about the importance of a quality mesh to achieve accurate simulation results.

Creating high quality meshes takes time and specialized training. Your design team doesn’t likely have the time or patience to learn this art.

To account for this, Discovery AIM automatically incorporates physics-aware meshing behind the scenes. In fact, your design team doesn’t even need to see the mesh creation process to complete the simulation.

This workflow employs several meshing best practices analysts typically use. The tool even accounts for areas that require mesh refinements based on the physics being assessed.

For instance, areas with a sliding contact gain a finer mesh so the sliding behavior can be accurately simulated. Additionally, areas near the walls of fluid-solid interfaces are also refined to ensure this interaction is properly captured. Physics-aware meshing ensures small features and areas of interests won’t get lost in your design team’s simulation.

The simplified meshing workflow also lets your design team choose their desired solution fidelity. This input will help the software balance the time the solver takes to compute results with the accuracy of the results.

Though physics-aware meshing can create the mesh under the hood of the simulation process, it still has tools allowing user-control of the mesh. This way, if your design team chooses to dig into the meshing details — or an analyst decides to step in — they can finely tune the mesh.

Capabilities like this further empower designers as techniques and knowledge traditionally known only by analysts are automated in an easy-to-use fashion.

Gain Rapid Results in Important Areas You Might Miss

The 19.2 release of Discovery AIM has seen improvements with its ability to enable your design team to explore simulation results.

Many analysts will know instinctively where to focus their post-processing, but without this experience, designers may miss areas of interest.

Discovery AIM enables the designer to interactively explore and identify these critical results. These initial results are rapidly displayed as contours, streamlines or field flow lines.


Field flow and streamlines for an electromagnetics simulation

Once your design team finds locations of interest within the results, they can create higher fidelity results to examine those areas in further detail. Designers can then save the results and revisit them when comparing design points or after changing simulation inputs.

To learn more about other changes to Discovery AIM — like the ability to directly access fluid results — watch the Discovery AIM 19.2 release recorded webinar or take it for a test drive.

The post Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing appeared first on ANSYS.

► Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer
    5 Oct, 2018

Traditional chemotherapy can often be blocked by a tumor’s stroma.

There are few illnesses as crafty as pancreatic cancer. It spreads like weeds and resists chemotherapy.

Pancreatic cancer is often asymptomatic, has a low survival rate and is often misdiagnosed as diabetes. And, this violent killer is almost always inoperable.

The pancreatic tumor’s resistance to chemotherapy comes from a shield of supporting connective tissue, or stroma, which it builds around itself.

Current treatments attempt to overcome this defense by increasing the dosage of intravenously administered chemotherapy. Sadly, this rarely works, and the high dosage is exceptionally hard on patients.

Nonetheless, doctors need a way to shrink these tumors so that they can surgically remove them without risking the numerous organs and vasculature around the pancreas.

“We say if you can’t get the drugs to the tumor from the blood, why not get it through the stroma directly?” asks William Daunch, CTO at Advanced Chemotherapy Technologies (ACT), an ANSYS Startup Program member. “We are developing a medical device that implants directly onto the pancreas. It passes drugs through the organ, across the stroma to the tumor using iontophoresis.”

By treating the tumor directly, doctors can theoretically shrink the tumor to an operable size with a smaller dose of chemotherapy. This should significantly reduce the effects of the drugs on the rest of the patient’s body.

How to Treat Pancreatic Cancer with a Little Electrochemistry


Simplified diagram of the iontophoresis used by ACT’s chemotherapy medical device.

Most of the drugs used to treat pancreatic cancer are charged. This means they are affected by electromotive forces.

ACT has created a medical device that takes advantage of the medication’s charge to beat the stroma’s defenses using electrochemistry and iontophoresis.

The device contains a reservoir with an electrode. The reservoir connects to tubes that connect to an infusion pump. This setup ensures that the reservoir is continuously filled. If the reservoir is full, the dosage doesn’t change.

The tubes and wires are all connected into a port that is surgically implanted into the patient’s abdomen.


A diagram of ACT’s chemotherapy medical device.

The circuit is completed by a metal panel on the back of the patient.

“When the infusion pump runs, and electricity is applied, the electromotive forces push the medication into the stroma’s tissue without a needle. The medication can pass up to 10 to 15 mm into the stroma’s tissue in about an hour. This is enough to get through the stroma and into the tumor,” says Daunch.

“Lab tests show that the medical device was highly effective in treating human pancreatic cancer cells within mice,” added Daunch. “With conventional infusion therapy, the tumors grew 700 percent and with the device working on natural diffusion alone the tumors grew 200 percent. However, when running the device with iontophoresis, the tumor shrank 40 percent. This could turn an inoperable tumor into an operable one.” Subsequent testing of a scaled-up device in canines demonstrated depth of penetration and the low systemic toxicity required for a human device.

Daunch notes that the Food and Drug Administration (FDA) took notice of these results. ACT's next steps are to develop a human clinical device and move on to human safety trials.

Simulation Optimized the Fluid Dynamics in the Pancreatic Cancer Chemotherapy Implant

Before these promising tests, ACT faced a few design challenges when coming up with their chemotherapy implant.

For example, “There was some electrolysis on the electrode in the reservoir. This created bubbles that would change the electrode’s impedance,” explains Daunch. “We needed a mechanism to sweep the bubbles from the surface.”

An added challenge is that ACT never knows exactly where doctors will place the device on the pancreas. As a result, the mechanism to sweep the bubbles needs to work from any orientation.


Simulations help ACT design their medical device so bubbles do not collect on the electrode.

“We used ANSYS Fluent and ANSYS Discovery Live to iterate a series of designs,” says Daunch. “Our design team modeled and validated our work very quickly. We also noticed that the bubbles didn’t need to leave the reservoir, just the electrode.”

“If we place the electrode on a protrusion in a bowl-shaped reservoir the bubbles move aside into a trough,” explains Daunch. “The fast fluid flow in the center of the electrode and the slower flow around it would push the bubbles off the electrode and keep them off until the bubbles floated to the top.”

As a result, the natural fluid flow within the redesigned reservoir was able to ensure the bubbles didn’t affect the electrode’s impedance.

To learn how your startup can use computational fluid dynamics (CFD) software to address your design challenges, please visit the ANSYS Startup Program.

The post Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer appeared first on ANSYS.

► Making Wireless Multigigabit Data Transfer Reliable with Simulation
    4 Oct, 2018

The demand for wireless communications with high data transfer rates is growing.

Consumers want wireless 4K video streams, virtual reality, cloud backups and docking. However, it’s a challenge to offer these data transfer hogs wirelessly.

Peraso aims to overcome this challenge with their W120 WiGig chipset. This device offers multigigabit data transfers, is as small as a thumb-drive and plugs into a USB 3.0 port.

The chipset uses the Wi-Fi Alliance’s new wireless networking standard, WiGig.

This standard adds a 60 GHz communication band to the 2.4 and 5 GHz bands used by traditional Wi-Fi. The result is higher data rates, lower latency and dynamic session transferring with multiband devices.

In theory, the W120 WiGig chipset could run some of the heaviest data transfer hogs on the market without a cord. Peraso’s challenge is to design a way for the chipset to dissipate all the heat it generates.

Peraso uses the multiphysics capabilities within the ANSYS Electronics portfolio to predict the Joule heating and the subsequent heat flow effects of the W120 WiGig chipset. This information helps them iterate their designs to better dissipate the heat.

How to Design High Speed Wireless Chips That Don’t Overheat

Systems designers know that asking for high-power transmitters in a compact and cost-effective enclosure translates into a thermal challenge. The W120 WiGig chipset is no different.


A cross section temperature map of the W120 WiGig chipset’s PCB. The map shows hot spots where air flow is constrained by narrow gaps between the PCB and enclosure.

The chipset includes active/passive components and two main chips that are mounted on a printed circuit board (PCB). The system reaches considerably high temperatures due to the Joule heating effect.

To dissipate this heat, design engineers include a large heat sink that connects only to the chips and a smaller one that connects only to the PCB. The system is also enclosed in a casing with limited openings.


Simulation of the air flow around the W120 WiGig chipset without an enclosure. Simulation was made using ANSYS Icepak.

Traditionally, optimizing this set up takes a lot of trial and error as measuring the air flow within the enclosure would be challenging.

Instead, Peraso uses ANSYS SIwave to simulate the Joule heating effects of the system. This heat map is transferred to ANSYS Icepak, which then simulates the current heat flow, orthotropic thermal conductivity, heat sources and other thermal effects.

This multiphysics simulation enables Peraso to predict the heat distribution and the temperature at every point of the W120 WiGig chipset.

From there, Peraso engineers iterate their designs until they reach their coolest setup.

This simulation-led design tactic helped Peraso optimize their system until they reached the heat transfer balance they needed. To learn how Peraso performed this iteration, read Cutting the Cords.

The post Making Wireless Multigigabit Data Transfer Reliable with Simulation appeared first on ANSYS.

► Designing 5G Cellular Base Station Antennas Using Parametric Studies
    3 Oct, 2018

There is only so much communication bandwidth available. This will make it difficult to handle the boost in cellular traffic expected from the 5G network using conventional cellular technologies.

In fact, cellular networks are already running out of bandwidth. This severely limits the number of users and data rates that can be accommodated by wireless systems.

One potential solution is to leverage beamforming antennas. These devices transmit different signals to different locations on the cellular network simultaneously over the same frequency.

Pivotal Commware is using ANSYS HFSS to design beamforming antennas for cellular base stations that are much more affordable than current technology.

How 5G Networks Will Send More Signals on Existing Bandwidths


A 28 GHz antenna for a cellular base station.

Traditionally, cellular technologies — 3G and 4G LTE — crammed more signals on the existing bandwidth by dividing the frequencies into small segments and splitting the signal time into smaller pulses.

The problem is, there is only so much you can do to chop up the bandwidth into segments.

Alternatively, Pivotal’s holographic beamforming (HBF) antennas are highly directional. This means they can split up the physical space a signal moves through.

This way, two cells in two locations can use the same frequency at the same time without interfering with each other.

Additionally, these HBF antennas use varactor (variable capacitors) and electronic components that are simpler and more affordable than existing beamforming antennas.

How to Design HBF Antennas for 5G Cellular Base Stations


A parametric study of Pivotal’s HBF designs allowed them to look at a large portion of their design space and optimize for C-SWaP and roll-off. This study looks at roll-off as a function of degrees from the centerline of the antenna.

Antenna design companies — like Pivotal — are always looking to design devices that optimize cost, size, weight and power (C-SWaP) and performance.

So, how was Pivotal able to account for C-SWaP and performance so thoroughly?

Traditionally, this was done by building prototypes, finding flaws, creating new designs and integrating manually.

Meeting a product launch with an optimized product using this manual method is grueling.

Pivotal instead uses ANSYS HFSS to simulate their 5G antennas digitally. This allows them to assess their HBF antennas and iterate their designs faster using parametric studies.

For instance, Pivotal wants to optimize their design for performance characteristics like roll-off. To do so they can plug in the parameter values, run simulations with these values and see how each parameter affects roll-off.

By setting up parametric studies, Pivotal assesses which parameters affect performance and C-SWaP the most. From there, they can weigh different trade-offs until they settle on an optimized design that accounts for all the factors they studied.
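The mechanics of such a sweep are straightforward to script. The sketch below is a generic illustration only; simulate_rolloff is a hypothetical stand-in for whatever evaluation (an HFSS run or a surrogate model) returns the performance metric for one design point, and the parameter names are invented for the example rather than taken from Pivotal's study.

from itertools import product

def simulate_rolloff(element_spacing, varactor_bias, substrate_thickness):
    # Hypothetical placeholder so the sweep runs; a real study would launch a solver here.
    return 10.0 * element_spacing + 0.5 * varactor_bias - 2.0 * substrate_thickness

sweep = {
    "element_spacing": [0.4, 0.5, 0.6],      # example values (wavelengths)
    "varactor_bias": [2.0, 4.0, 8.0],        # example values (volts)
    "substrate_thickness": [0.25, 0.50],     # example values (mm)
}

results = []
for values in product(*sweep.values()):
    point = dict(zip(sweep.keys(), values))
    results.append((point, simulate_rolloff(**point)))

# Rank the design points by the metric to see which parameters dominate the trade-off.
results.sort(key=lambda item: item[1])
print(results[0])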

To see how Pivotal set up their parametric studies and optimize their antenna designs, read 5G Antenna Technology for Smart Products.

The post Designing 5G Cellular Base Station Antennas Using Parametric Studies appeared first on ANSYS.

Convergent Science Blog top

► Your μ Matters: Understanding Turbulence Model Behavior
    6 Mar, 2019

I recently attended an internal Convergent Science advanced training course on turbulence modeling. One of the audience members asked one of my favorite modeling questions, and I’m happy to share it here. It’s the sort of question I sometimes find myself asking tentatively, worried I might have missed something obvious. The question is this:

Reynolds-Averaged Navier Stokes (RANS) turbulence models and Large-Eddy Simulation (LES) turbulence models have very different behavior. LES will become a direct numerical simulation (DNS) in the limit of infinitesimally fine grid, and it shows a wide range of turbulent length scales. RANS does not become a DNS, no matter how fine we make the grid. Rather, it shows grid-convergent behavior (i.e., the simulation results stop changing with finer and finer grids), and it removes small-scale turbulent content.

If I look at a RANS model or an LES turbulence model, the transport equations look very similar mathematically. How does the flow ‘know’ which is which?

There’s a clever, physically intuitive answer to this question, which motivates the development of additional hybrid models. But first we have to do a little bit of math.

Both RANS and LES take the approach of decomposing a turbulent flow into a component to be resolved and a component to be modeled. Let’s define the Reynolds decomposition of a flow variable ϕ as

$$\phi = \bar \phi \; + \;\phi',$$

where the overbar term represents a time/ensemble average and the prime term is the fluctuating term. This decomposition has the following properties:

$$\overline{\overline{\phi}} = \bar \phi \;\;{\rm{and}}\;\;\overline{\phi'} = 0.$$

Figure 1 Schematic of time-averaging a signal.

LES uses a different approach, which is a spatial filter. The filtering decomposition of ϕ is defined as

$$\phi  = \left\langle \phi  \right\rangle + \;\phi'',$$

where the term in the angled brackets is the filtered term and the double-prime term is the sub-grid term. In practice, this is often calculated using a box filter, a spatial average of everything inside, say, a single CFD cell. The spatial filter has different properties than the Reynolds decomposition,

$$\left\langle {\left\langle \phi  \right\rangle } \right\rangle \ne \left\langle \phi  \right\rangle \;\;{\rm{and}}\;\;\left\langle {\phi''} \right\rangle  \ne 0.$$

Figure 2 Example of spatial filtering. DNS at left, box filter at right. (https://pubweb.eng.utah.edu/~rstoll/LES/Lectures/Lecture04.pdf )
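A few lines of NumPy make the difference between the two decompositions concrete: averaging an already-averaged signal changes nothing and the fluctuation has zero mean, whereas box-filtering an already-filtered signal keeps smoothing it and the sub-grid part does not filter to zero. (This is a 1-D illustration of the properties above, not any particular LES code's filter.)

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
phi = np.sin(x) + 0.3 * rng.standard_normal(x.size)   # "mean flow" plus fluctuations

# Reynolds decomposition (the average here is a simple mean over the samples)
phi_bar = phi.mean()
phi_prime = phi - phi_bar
print(np.isclose(np.full_like(phi, phi_bar).mean(), phi_bar))  # True: bar(bar(phi)) = bar(phi)
print(np.isclose(phi_prime.mean(), 0.0))                       # True: bar(phi') = 0

# LES-style box filter: a periodic moving average over w neighboring cells
def box_filter(f, w=16):
    return np.convolve(np.tile(f, 3), np.ones(w) / w, mode="same")[f.size:2 * f.size]

phi_filt = box_filter(phi)
phi_sub = phi - phi_filt
print(np.allclose(box_filter(phi_filt), phi_filt))  # False: <<phi>> != <phi>
print(np.allclose(box_filter(phi_sub), 0.0))        # False: <phi''> != 0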

To derive RANS and LES turbulence models, we apply these decompositions to the Navier-Stokes equations. For simplicity, let’s consider only the incompressible momentum equation. The Reynolds-averaged momentum equation is written as

$$\frac{{\partial \overline {{u_i}} }}{{\partial t}} + \frac{{\partial \overline {{u_i}}\; \overline {{u_j}} }}{{\partial {x_j}}} = - \frac{1}{\rho }\frac{{\partial \overline P }}{{\partial {x_i}}} + \frac{1}{\rho }\frac{\partial }{{\partial {x_j}}}\left[ {\mu \left( {\frac{{\partial \overline {{u_i}} }}{{\partial {x_j}}} + \frac{{\partial \overline {{u_j}} }}{{\partial {x_i}}}} \right) - \frac{2}{3}\mu \frac{{\partial \overline {{u_k}} }}{{\partial {x_k}}}{\delta _{ij}}} \right] - \frac{1}{\rho }\frac{\partial }{{\partial {x_j}}}\left( {\rho \color{Red}{\overline {{{u'}_i}{{u'}_j}}} } \right).$$

This equation looks the same as the basic momentum transport equation, replacing each variable with the barred equivalent, with the exception of the term* in red. That’s where the RANS model will make a contribution.

The LES momentum equation, again neglecting Favre filtering, is written

$$\frac{{\partial \left\langle {{u_i}} \right\rangle }}{{\partial t}} + \frac{{\partial \left\langle {{u_i}} \right\rangle \left\langle {{u_j}} \right\rangle }}{{\partial {x_j}}} = - \frac{1}{\rho }\frac{{\partial \left\langle P \right\rangle }}{{\partial {x_i}}} + \frac{1}{\rho }\frac{{\partial \left\langle {{\sigma _{ij}}} \right\rangle }}{{\partial {x_j}}} - \frac{1}{\rho }\frac{\partial }{{\partial {x_j}}}\left( {\rho \color{Red}{\left\langle {{u_i}{u_j}} \right\rangle}}  - \rho \left\langle {{u_i}} \right\rangle \left\langle {{u_j}} \right\rangle  \right).$$

Once again, we have introduced a single unclosed term*, shown in red. As with RANS, this is where the LES model will exert its influence.

These terms are physically stress terms. In the RANS case, we call it the Reynolds stress.

$${\tau _{ij,RANS}} =  - \rho \overline {{{u'}_i}{{u'}_j}}.$$

In the LES case, we define a sub-grid stress as follows:

$${\tau _{ij,LES}} = \rho \left( {\left\langle {{{u}_i}{{u}_j}} \right\rangle  - \left\langle {{u_i}} \right\rangle \left\langle {{u_j}} \right\rangle } \right).$$

By convention, the same letter is used to denote these two subtly different terms. It’s common to apply one more assumption to both. Kolmogorov postulated that at sufficiently small scales, turbulence was statistically isotropic, with no preferential direction. He also postulated that turbulent motions were self-similar. The eddy viscosity approach invokes both concepts, treating

$${\tau _{ij,RANS}} = f\left( {{\mu _t},\overline V } \right)$$

and

$${\tau _{ij,LES}} = g\left( {{\mu _t},\overline V } \right),$$

where \(\overline V \) represents the vector of transported variables: mass, momentum, energy, and model-specific variables like turbulent kinetic energy. We have also introduced \({\mu _t}\), which we call the turbulent viscosity. Its effect is to dissipate kinetic energy in a similar fashion to molecular viscosity, hence the name.

If you skipped the math, here’s the takeaway. We have one unclosed term* each in the RANS and LES momentum equations, and in the eddy viscosity approach, we close it with what we call the turbulent viscosity \({\mu _t}\). Yet we know that RANS and LES have very different behavior. How does a CFD package like CONVERGE “know” whether that \({\mu _t}\) is supposed to behave like RANS or like LES? Of course the equations don’t “know”, and the solver doesn’t “know”. The behavior is constructed by the functional form of \({\mu _t}\).

How can the turbulent viscosity’s functional form construct its behavior? Dimensional analysis informs us what this term should look like. A dynamic viscosity has dimensions of density multiplied by length squared per time. If we’re looking to model the turbulent viscosity based on the flow physics, we should introduce dimensions of length and time. The key to the difference between RANS and LES behavior is in the way these dimensions are introduced.

Consider the standard k-ε model. It is a two-equation model, meaning it solves two additional transport equations. In this case, it transports turbulent kinetic energy (k) and the turbulent kinetic energy dissipation rate (ε). This model calculates the turbulent viscosity according to the local values of these two flow variables, along with density and a dimensionless model constant as

$${\mu _t} = {C_\mu }\rho \frac{{{k^2}}}{\varepsilon }.$$

Dimensionally, this makes sense. Turbulent kinetic energy is a specific energy with dimensions of length squared per time squared, and its dissipation rate has dimensions of length squared per time cubed. In a sufficiently well-resolved solution, all of these terms should limit to finite values, rather than limiting to zero or infinity. If so, the turbulent viscosity should limit to some finite value, and it does.

Figure 3 Example of a grid-converged RANS simulation: the ECN Spray A case, with a contour plot for illustration.

LES, in contrast, directly introduces units of length via the spatial filtering process. Consider the Smagorinsky model. This is a zero-equation model that calculates turbulent viscosity in a very different way. For the standard Smagorinsky model,

$${\mu _t} = \rho C_s^2{\Delta ^2}\sqrt {{S_{ij}}{S_{ij}}},$$

where \({C_s}\) is a dimensionless model constant, \({S_{ij}}\) is the filtered rate of strain tensor, and Δ is the grid spacing. Once again, the dimensions work out: density multiplied by length squared multiplied by inverse time. But what do the limits look like? The rate of strain is some physical quantity that will not limit to infinity. In the limit of infinitesimal grid size, the turbulent viscosity must limit to zero! The model becomes completely inactive, and the equations solved are the unfiltered Navier-Stokes equations. We are left with a direct numerical simulation.
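The contrast in limiting behavior can be seen with nothing more than the two formulas above. In the toy calculation below (representative magnitudes only, with the commonly quoted constants C_mu = 0.09 and C_s = 0.17), the k-ε viscosity is indifferent to the grid spacing, while the Smagorinsky viscosity falls off with Δ² and heads to zero as the mesh is refined:

rho = 1.2              # density, kg/m^3
k = 1.0                # turbulent kinetic energy, m^2/s^2 (representative value)
eps = 10.0             # dissipation rate, m^2/s^3 (representative value)
strain_mag = 100.0     # sqrt(S_ij S_ij), 1/s (representative value)
C_mu, C_s = 0.09, 0.17 # commonly quoted model constants

for delta in (1e-2, 1e-3, 1e-4):                     # grid spacing, m
    mut_rans = C_mu * rho * k**2 / eps               # k-eps: no explicit grid dependence
    mut_les = rho * (C_s * delta)**2 * strain_mag    # Smagorinsky: scales with delta^2
    print("dx = %.0e m: mu_t(k-eps) = %.2e, mu_t(Smag) = %.2e" % (delta, mut_rans, mut_les))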

When I was a first-year engineering student, discussion of dimensional analysis and limiting behaviors seemed pro forma and almost archaic. Real engineers in the real world just use computers to solve everything, don’t they? Yes and no. Even those of us in the computational analysis world can derive real understanding, and real predictive power, from considering the functional form of the terms in the equations we’re solving. It can even help us design models with behavior we can prescribe a priori.

Detached Eddy Simulation (DES) is a hybrid model, taking advantage of the similarity of functional forms of the turbulent viscosities in RANS and LES. DES adopts RANS-like behavior near the wall, where we know an LES can be very computationally expensive. DES adopts LES behavior far from the wall, where LES is more computationally tractable and unsteady turbulent motions are more often important.

The math behind this switching behavior is beyond the scope of a blog post. In effect, DES solves the Navier-Stokes equations with some effective \({\mu _{t,DES}}\) such that \({\mu _{t,DES}} \approx {\mu _{t,RANS}}\) near the wall and \({\mu _{t,DES}} \approx {\mu _{t,LES}}\) far from the wall, with \({\mu _{t,RANS}}\) and \({\mu _{t,LES}}\) selected and tuned so that they are compatible in the transition region. Our understanding of the derivation and characteristics of the RANS and LES turbulence models allows us to hybridize them into something new.

Figure 4 DES simulation over a backward facing step with CONVERGE
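A caricature of that switching, not any particular production DES formulation, is the length-scale limiter idea from the original detached-eddy proposal: the RANS length scale (the wall distance) is overridden by a grid-proportional scale away from the wall, which pushes the eddy viscosity toward its LES-like value there.

def des_length_scale(wall_distance, delta, c_des=0.65):
    # Illustrative DES-style limiter: keep the RANS wall distance near the wall,
    # switch to a grid-proportional (LES-like) scale away from it.
    # c_des ~ 0.65 is the commonly quoted calibration; this is a sketch only.
    return min(wall_distance, c_des * delta)

# Near the wall the wall distance wins (RANS branch); far from it the grid scale wins (LES branch).
for d in (1e-4, 1e-3, 1e-2, 1e-1):
    print(d, des_length_scale(d, delta=0.01))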

*This term is a symmetric second-order tensor, so it has six scalar components. In some approaches (e.g., Reynolds Stress models), we might transport these terms separately, but the eddy viscosity approach treats this unknown tensor as a scalar times a known tensor.

► What’s Knockin&#8217; in Europe?
  29 Jan, 2019

The Convergent Science GmbH team is based in Linz, Austria and provides support to our European clients and collaborators alike as they tackle the hard problems. One of the most interesting and challenging problems in the design of high efficiency modern spark-ignited (SI) internal combustion engines is the prediction of knock and the development of knock-mitigation strategies. At the 2018 European CONVERGE User Conference (EUC), several speakers presented recent work on engine knock.

This winter, when I cold-started my car, I heard a loud knocking noise. Usually, though, knocking is more prevalent in engines that operate near the edge of the stability range. The first step of knocking is spontaneous secondary ignition (autoignition) of the end-gases ahead of the flame front. When the pressure waves from this autoignition hit the walls of the combustion chamber, they often make a knocking noise and damage the engine. Knock is challenging to simulate because you must correctly calculate critical local conditions and simultaneously track the pressure waves that are traveling rapidly across the combustion chamber.

To enable you to easily model these conditions, CONVERGE offers autonomous meshing, full-cycle simulation, and flexible boundary conditions. Adaptive Mesh Refinement allows you to add cells and spend computational time on areas where the knock-relevant parameters (such as local pressure difference, heat release rate, and species mass fraction of radicals that indicate autoignition) are rapidly changing. CONVERGE can predict autoignition with surrogate fuels, changing physical engine parameters, and a spectrum of operating conditions.

EUC keynote speaker Vincenzo Bevilacqua from Porsche Engineering presented an intriguing new approach (re-defining the knock index) to evaluate the factors that may contribute to knock and to identify a clear knock limit. In another study, researchers from Politecnico di Torino investigated the feasibility of water injection as a knock mitigation strategy. In yet another study, Max Mally and his colleagues from VKA RWTH Aachen University used RANS to successfully reproduce combustion and knock with a spark-timing sweep approach at various exhaust gas recirculation (EGR) percentages. You can see in the figure below that they were able to capture the moving pressure waves.


The rapid propagation of the pressure waves across the combustion chamber functions much like a detonation. Source: Mally, M., Gunterh, M., and Pischinger, S., “Numerical Study of Knock Inhibition with Cooled Exhaust Gas Recirculation,” CONVERGE User Conference-Europe, Bologna, Italy, March 19-23, 2018.

Advancing the spark, using lean burn, turbo-charging, or running at a high compression ratio can increase the likelihood of knock. However, each cycle in an SI engine is unique, and thus autoignition is not a consistent phenomenon. When simulating an SI engine, it is critical to simulate multiple cycles to identify the limits of the operating conditions at which knock is likely to occur. (Fortunately, CONVERGE can easily run multi-cycle simulations!)

Knock is one of the limiting factors in engine design because many of the techniques that improve the thermal efficiency and enable downsizing of the engine increase the likelihood of knock. Here at Convergent Science, we encourage you to solve the hard problems. Go on, knock it out of the park.


► 2018: CONVERGE-ING ON A DECADE
  17 Dec, 2018

Convergent Science thrived in 2018, with many successes, rapid growth, and consistent innovation. We celebrated the tenth anniversary of the commercial release of CONVERGE. The Convergent Science employee count surpassed 100, and our India office tripled in size. We formed new partnerships and collaborations and continued to bring CONVERGE to new application areas. Simultaneously, we endeavored to increase the prominence of CONVERGE in internal combustion applications and grew our market share.

Our dedicated team at Convergent Science ensures that CONVERGE stays on the cutting-edge of CFD software—implementing new models, enhancing CONVERGE features, increasing simulation speed and accuracy—while also offering exceptional support and customer service to our clients.

New Application Successes

Increasingly, clients are using CONVERGE for new applications and great strides are being made in these fields. Technical presentations and papers on gerotor pumps, blood pumps, reciprocating compressors, scroll compressors, and screw machines this year reflected CONVERGE’s increased use in the pumps and compressors markets. Research projects using CONVERGE to model gas turbine combustion, lean blow-out, ignition, and relight are going strong. In the field of aftertreatment, new acceleration techniques have been implemented in CONVERGE to enable users to accurately predict urea deposits in Urea/SCR aftertreatment systems while keeping pace with rapid prototyping schedules. In addition, we were thrilled to see the first paper using CONVERGE for offshore wind turbine modeling published this year, as part of a collaborative effort with the University of Massachusetts Amherst.

CONVERGE Featured at SAE, DOE Merit Review, and ASME ICEF

CONVERGE’s broad use in the automotive industry was showcased at the Society of Automotive Engineers World Congress Experience (SAE WCX18), with more than 30 papers presenting CONVERGE results. Convergent Science cultivates collaboration with industry, academic, and research institutions, and the benefit of these collaborations was prominently displayed at SAE WCX18. Organizations such as General Motors, Caterpillar, Ford, Jaguar Land Rover, Isuzu Motors, John Deere, Renault, Aramco Research Center, Argonne National Laboratory, King Abdullah University of Science and Technology (KAUST), Saudi Aramco, and the University of Oxford all authored papers describing CONVERGE results. These papers spanned a wide array of topics, including fuel injection, chemical mechanisms, HCCI, GCI, water injection, LES, spray/wall interaction, abnormal combustion, machine learning, soot modeling, and aftertreatment systems.

At the 2018 DOE Merit Review, CONVERGE was featured in 17 of the advanced vehicle technologies projects that were reviewed by the U.S. Department of Energy. The broad range of topics of the projects is a testament to the versatility and broad applicability of CONVERGE. The research for these projects was conducted at Argonne National Laboratory, Lawrence Livermore National Laboratory, Oak Ridge National Laboratory, Sandia National Laboratories, the Department of Energy, National Renewable Energy Laboratory, and the University of Michigan.

CONVERGE was once again well represented at the ASME Internal Combustion Engine Fall Technical Conference (ICEF). At ASME ICEF 2018, 18 papers included CONVERGE results, with topics ranging from ignition systems and injection strategies to emissions modeling and predicting cycle-to-cycle variation. I was honored to have the opportunity to further my cause of defending the IC engine in a keynote presentation.

New Partnerships and Collaborations

At Convergent Science, we take pride in fostering partnerships and collaborations with companies and institutions to spark innovation and bring our best software to the CFD community. This year, we renewed our partnership with Roush Yates Engines, who had a fantastic 2018 season, achieving the company’s 350th win and winning the Monster Energy NASCAR Cup Series Championship. We formed a new partnership with Tecplot and integrated their industry-leading visualization software into CONVERGE Studio. In addition, we entered into new partnerships with the National Center for Supercomputing Applications and two Dassault Systèmes subsidiaries, Spatial Corp. and Abaqus. These partnerships improve the usability and applicability of CONVERGE and help CONVERGE reach new markets.

CONVERGE in Italy

We had a great showing of CONVERGE users at our second European CONVERGE User Conference held this year in Bologna, Italy. Attendees shared their latest research using CONVERGE for a host of different applications, from modeling liquid film boiling and mitigating engine knock to developing turbulent combustion models and simulating premixed burners with LES. For one of our networking events, we rented out the Ferrari Museum in Maranello, where we were treated to a tour of the museum and ate dinner surrounded by cars we wished we owned. We also enjoyed traditional Bolognese cuisine at the Osteria de’ Poeti and a reception at the Garganelli Restaurant. 

Turning 10 at the U.S. CONVERGE User Conference

It seemed only fitting to celebrate ten years of CONVERGE back where it all started in Madison, Wisconsin. During the fifth annual North American User Conference, we commemorated CONVERGE’s tenth birthday with a festive evening at the historic Orpheum Theater in downtown Madison. During the celebration, we heard from Jamie McNaughton of Roush Yates Engines, who discussed the game-changing impact of CFD on creating winning racing engines. Physics Girl Dianna Cowern entertained us with her live physics demonstrations and her unquenchable enthusiasm for all things science. I concluded the evening with a brief presentation (which you can check out below) reflecting on the past decade of CONVERGE and looking forward to the future. We were incredibly grateful to be able to celebrate the successes of CONVERGE with our users who have made these past ten years possible.

In addition to our 10-year CONVERGE celebration, we hosted our third trivia match at the Convergent Science World Headquarters. At the beautiful Madison Club, we heard a fascinating round of presentations on topics including gas turbine modeling, offshore fluid-structural dynamics, machine learning, and a wide range of IC engine applications.

Convergent Science India

The Convergent Science India office in Pune celebrated its one-year anniversary in August. The office has transformed in the span of the last year and a half. The employee count more than tripled—from two employees at the end of 2017 to seven at the end of 2018. Five servers are now up and running and the office is fully staffed. We’re thrilled with the service and support our Pune office has been able to offer our clients all around India.

CONVERGE 3.0 Coming Soon

CONVERGE 3.0 is slated to be released soon, and we truly believe this new version of CONVERGE will once again change the CFD game. In 3.0, you can look forward to our new boundary layer mesh and inlaid mesh features, which will allow greater meshing flexibility for accurate results at less computational cost. Our new partnership with Spatial Corp. will enable CONVERGE users to directly import CAD files into CONVERGE Studio, greatly streamlining our Studio users’ workflow. We’ve also focused a lot of our attention this year on enhancing our chemistry tools to be more efficient, robust, and applicable to an even greater range of flow and combustion problems. We’ve added new 0D and 1D reactors (including a perfectly stirred reactor, a 0D HCCI engine, and plug flow reactors) along with RON and MON estimators, and we’ve improved our 1D laminar flame solver. Additionally, we enhanced our mechanism reduction capability by targeting both ignition delay and laminar flame speed. But perhaps the most anticipated aspect of CONVERGE 3.0 is the scaling: 3.0 demonstrates dramatically superior parallelization compared to 2.4 and shows significant speedup even on thousands of cores.

Looking Ahead

2019 promises to be an exciting year. With the upcoming release of CONVERGE 3.0, we’re looking forward to growing CONVERGE’s presence in new application areas, continuing our work on pumps and compressors, and expanding our presence in aftertreatment and gas turbine markets. We will continue working hard to expand the usage of CONVERGE in the European, Asian, and Indian automotive markets. Above all, we look forward to more innovation, more collaboration, and continuing to provide unparalleled support to our clients. Want to join us? Check out our website to find out how CONVERGE can help you solve the hard problems.


Kelly looks back on the past decade of CONVERGE during the 10-Year Celebration at the 2018 CONVERGE User Conference-North America. The video Kelly references in his presentation is a video tribute to CONVERGE that was played earlier in the evening, Turning 10: A CONVERGE History.
► Harness the Power of CONVERGE + GT-SUITE with Unlimited Parallelization
    5 Nov, 2018

Imagine that you are modeling an engine. Engines are complex machines, and accurately modeling an engine is not an easy undertaking. Capturing in-cylinder dynamics, intake and exhaust system characteristics, complicated boundary conditions, and much more creates a problem that often takes multiple software suites to solve.

Convergent Science has a solution: CONVERGE Lite—and we’ve just introduced a new licensing option.

CONVERGE Lite is a reduced version of CONVERGE that comes free of charge with every GT-SUITE license. Gamma Technologies, the developer of GT-SUITE, and Convergent Science combined forces to allow users of GT-SUITE to leverage the power of CONVERGE.

CONVERGE LITE + GT-SUITE OVERVIEW

GT-SUITE is an industry-leading CAE system simulation tool that combines 1D physics modeling, such as fluid flow, thermal analysis, and mechanics, with 3D multi-body dynamics and 3D finite element thermal and structural analysis. GT-SUITE is a great tool for a wide variety of system simulations, including vehicles, engines, transmissions, general powertrains, hydraulics, and more.

Let’s think again about modeling an engine. GT-SUITE is ideal for the primary workflow of engine design. But what if you want to model 3D mixing in an engine intake manifold to track the cylinder-to-cylinder distribution of recirculated exhaust gas? Or simulate complex 3D flow through a throttle body to find the optimal design to maximize power? In these scenarios, 1D modeling is not sufficient on its own.

Visualization of flow through an optimized throttle body generated using data from a CONVERGE Lite + GT-SUITE coupled simulation.

In this type of situation where 3D flow analysis is critical, GT-SUITE users can invoke CONVERGE Lite to obtain detailed 3D analysis at no extra charge. CONVERGE Lite is fully integrated into GT-SUITE and is known for being user friendly. One of the biggest advantages of CONVERGE Lite is that it allows GT-SUITE users access to CONVERGE’s powerful autonomous meshing. With automatic mesh generation, fixed mesh embedding, and Adaptive Mesh Refinement, CONVERGE Lite eliminates user meshing time and allows for efficient grid refinement. In addition, CONVERGE Lite comes with automatic CFD species setup and automatic setup of fluid properties to match the properties in the GT-SUITE model. And as if that weren’t enough, recently CONVERGE Lite has been enhanced to include a license for Tecplot for CONVERGE, an advanced 3D post-processing software.

LICENSING

You can run CONVERGE Lite in serial for free if you have a GT-SUITE license. If you want to run CONVERGE Lite in parallel, you can purchase parallel licenses from Convergent Science. We have just introduced a new low-cost option for running CONVERGE Lite in parallel. For one flat fee, you can obtain a license from Convergent Science to run CONVERGE Lite on an unlimited number of cores. Even though CONVERGE Lite contains many features to enhance efficiency, 3D simulations can be computationally expensive. This new option is a great way to affordably speed up your integrated GT-SUITE + CONVERGE Lite simulations.

CONVERGE Lite is a robust tool, but it does not contain all of the features of the full CONVERGE solver. For example, if you want to take advantage of advanced physical models, like combustion, spray, or volume of fluid, or you want to simulate moving walls, such as pistons or poppet valves, a full CONVERGE license is required. With both a full CONVERGE license and a GT-SUITE license, you can also take advantage of CONVERGE’s detailed chemistry solver, multiphase flow modeling, and other powerful features while performing advanced CONVERGE + GT-SUITE coupled simulations.

The combined power of CONVERGE and GT-SUITE opens the door to a whole array of advanced simulations, like engine cylinder coupling, exhaust aftertreatment coupling, or fluid-structure interaction coupling, that cannot be accomplished with just one of the programs.

Contact a Convergent Science salesperson for licensing details and pricing information.

Contact sales

► Resolving Turbulence-Chemistry Interactions with LES and Detailed Chemistry
  30 Oct, 2018

One of the more controversial subjects we talk about here at Convergent Science is the role of turbulence-chemistry interaction (TCI) when using our SAGE detailed chemistry solver.

What is TCI?

TCI is used to describe two separate but linked processes: enhanced mixing of momentum, energy, and species due to turbulence, and the commutation error in the reaction rate evaluation. A good turbulence model should always account for the enhanced mixing due to turbulence.

The commutation error is more difficult to address. In an LES simulation, the commutation error is the difference between evaluating the reaction rates using the spatially filtered quantities and evaluating them using the unfiltered quantities and then filtering the resulting reaction rates (the latter is exact; the former is an approximation). It is usually convenient to evaluate the reaction rates from the averaged or filtered values, which unfortunately introduces this error. For LES, the commutation error decreases as the cell size is reduced[1], and thus, with sufficient grid resolution, the commutation error becomes negligible.
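In symbols (a sketch using standard LES filtering notation, not taken from the white paper), with \(\dot{\omega}\) the reaction rate, \(\phi\) the thermochemical state, and an overbar denoting the spatial filter, the commutation error in a species source term is

\[
\varepsilon_{comm} = \overline{\dot{\omega}(\phi)} \;-\; \dot{\omega}(\bar{\phi}),
\]

where the first term (filtering the rates computed from the unfiltered fields) is exact but unavailable, and the second term (computing the rates directly from the filtered fields) is what a solver can actually evaluate. As the filter width shrinks, \(\bar{\phi} \to \phi\) and the difference vanishes.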

In this blog post, we briefly describe a study that demonstrates that with sufficient grid resolution, CONVERGE CFD (with LES and detailed chemistry) can resolve the enhanced mixing due to turbulence without explicitly assigning a sub-grid model for the commutation error. For more details, please see the accompanying white paper.

Simulation Strategy

We simulate a canonical turbulent partially premixed flame, Sandia Flame D. We leverage Adaptive Mesh Refinement (AMR) and adaptive zoning as acceleration strategies to speed up the computationally expensive LES simulations. Figure 1 shows the fine resolution around a subsection of the flame due to AMR, which allows us to get good resolution when and where we need it.

Figure 1: Small subsection of the instantaneous distributions of temperature, velocity, mixture fraction, mass fractions of CO2 and CO, and SGS velocity at the y = 0 plane from the LES case with a minimum grid size of 0.25 mm.

Conclusion In Brief

We first conduct grid convergence studies and find that a 0.25 mm minimum grid size is sufficient to resolve most of the velocity and species fluctuations.

Then, we demonstrate that the commutation error becomes smaller and we resolve more velocity and species fluctuations as we use finer meshes. With the finest mesh, we match not only the mean and RMS to the experimental values, but also the conditional mean and the shape of the joint probability distribution function.

Finally, we take on the challenge of accurately predicting non-equilibrium combustion processes. These processes (i.e., extinction and reignition) are dependent on two factors:

  1. An accurate mechanism for the range of conditions simulated and
  2. A good LES solver with sufficient grid resolution.

We compare thousands of data points from experiments to the equivalent points from the LES to determine that CONVERGE correctly predicts the extinction and reignition trends.

So what?

The SAGE detailed chemistry solver with LES has demonstrated success in a host of applications[2,3,4,5,6], including gas turbines and internal combustion engines.

We show in this white paper that when you resolve most of the velocity and species fluctuations and significantly reduce the commutation error, you can predict mixing-controlled turbulent combustion without a model for the commutation error in the reaction rates.

CONVERGE contains multiple acceleration strategies that make SAGE detailed chemistry + LES a reasonable strategy as far as computational costs go. Ready to dive more in-depth? Our TCI white paper is waiting for you!


[1] Davidson, L., “Fluid mechanics, turbulent flow and turbulence modeling,” Chalmers University, 2018. www.tfd.chalmers.se/~lada/postscript_files/solids-and-fluids_turbulent-flow_turbulence-modelling.pdf

[2] Drennan, S.A., and Kumar, G., “Demonstration of an Automatic Meshing Approach for Simulation of a Liquid Fueled Gas Turbine with Detailed Chemistry,” 50th AIAA/ASME/SAE/ASEE Joint Propulsion Conference, AIAA 2014-3628, Cleveland, OH, United States, July 28-30, 2014. DOI:10.2514/6.2014-3628

[3] Kumar, G., and Drennan, S., “A CFD Investigation of Multiple Burner Ignition and Flame Propagation with Detailed Chemistry and Automatic Meshing,” 52nd AIAA/SAE/ASEE Joint Propulsion Conference, Propulsion and Energy Forum, AIAA 2016-4561, Salt Lake City, UT, United States, July 25-27, 2016. DOI:10.2514/6.2016-4561

[4] Yang, S., Wang, X., Yang, V., Sun, W., and Huo, H., “Comparison of Flamelet/Progress-Variable and Finite-Rate Chemistry LES Models in a Preconditioning Scheme,” 55th AIAA Aerospace Sciences Meeting, AIAA SciTech Forum, AIAA 2017-0605, Grapevine, TX, United States, January 9-13, 2017. DOI:10.2514/6.2017-0605

[5] Pei, Y., Som, S., Pomraning, E., Senecal, P.K., Skeen, S.A., Manin, J., Pickett, L.M., “Large Eddy Simulation of a Reacting Spray Flame with Multiple Realizations under Compression Ignition Engine Conditions,” Combustion and Flame, 162, 4442-4455, 2015. DOI:10.1016/j.combustflame.2015.08.010

[6] Liu, S., Kumar, G., Wang, M., and Pomraning, E., “Towards Accurate Temperature and Species Mass Fraction Predictions for Sandia Flame-D using Detailed Chemistry and Adaptive Mesh Refinement,” 2018 AIAA Aerospace Sciences Meeting, AIAA SciTech Forum, AIAA 2018-1428. DOI:10.2514/6.2018-1428.

► CONVERGE Workflow Tips
  20 Aug, 2018

As a general purpose CFD solver, CONVERGE is robust out of the box. Autonomous meshing technology built into the solver eliminates the meshing bottleneck that has traditionally bogged down CFD workflows. Despite this advantage, however, performing computational fluid dynamics analyses is still a complex task. Challenges in pre-processing and post-processing can slow your workflow. To streamline the simulation process, CONVERGE CFD software includes a wide array of tools, utilities, and documentation as well as support from highly trained engineers with every license.

Pre-Processing

  • Although you do not have to create a volume mesh, your surface geometry must be watertight and meet several quality standards related to triangulation and normal vector orientation. CONVERGE Studio includes several native surface repair tools to quickly detect, show, and resolve these issues. With an additional license for the Polygonica toolkit, you can leverage powerful surface repair capabilities from within CONVERGE Studio.
  • For engine simulations, a popular acceleration technique is to use a sector (an axisymmetric geometry representing a portion of the model) instead of the full geometry. In CONVERGE, the make_surface utility allows you to quickly create a properly prepared sector geometry based on the piston bowl profile and just a few more geometry inputs. CONVERGE Studio includes a graphical version of this tool.
  • With any CFD software, the multitude of input parameters to control the complex physical models can be overwhelming. In CONVERGE CFD, we provide several checks to help you validate your case setup configuration before beginning a simulation. In CONVERGE, run the check_inputs utility to write information about missing or improperly configured parameters to the terminal. In CONVERGE Studio, you can use the Validate buttons throughout the application to validate input parameters incrementally as you configure the case. Additionally, the Final Validation tool examines the geometry and case setup parameters and provides suggestions for anything that may need to be revised.
  • A staple of the CONVERGE feature set is the ease with which you can simulate complex moving geometries. One requirement is that boundaries cannot intersect during the simulation. There are several ways to verify that your setup meets this requirement. Running CONVERGE in no hydrodynamic solver mode does not solve the spray, combustion, and transport equations. Instead, this type of simulation checks surface motion and grid creation. In CONVERGE Studio, use the Animation tab of the View Options dock to preview boundary motion and check for triangle intersections at each step of the motion. 
  • Many complex engine, pump, compressor, and other machinery simulations employ the sealing feature to prevent flow between regions at various times during a simulation. To test your seal setup, run the CONVERGE sealing test utility by supplying the check-sealing argument after your CONVERGE executable. This command uses a simplified test with only a single level of cells and most options (including AMR, embedding, sources, mapping, events, etc.) automatically turned off.
  • Full multi-cylinder simulations provide accurate predictions for fluid-solid heat transfer, intake and exhaust flow, and other important engine design parameters. Setting up the multiple cylinder geometries and timing can be a frustrating exercise in bookkeeping. The Multi-cylinder wizard in CONVERGE Studio makes this process painless. The wizard is a step-by-step tool that guides you through the process of configuring cylinder phase lag, copying geometry components for additional cylinders, and setting up timing of events such as spark ignition. After your configuration is complete, the wizard provides a quick reference sheet that catalogs the salient details for each cylinder. 
  • Because surface triangles cannot intersect during a CONVERGE simulation, valves (e.g., intake and exhaust valves in an IC engine) must be set to a minimum lift value very close to the valve seats but not technically closed. CONVERGE Studio includes a tool to automatically and quickly move the valves to this position based on profiles of intake and exhaust valve motion.
  • In compressor simulations, the working fluid is often far from an ideal gas. In addition to multiple equation of state models in CONVERGE, you can directly supply custom fluid properties for the working fluid. CONVERGE reads properties such as viscosity, conductivity, and compressibility as a function of temperature from supplied tabular data, obviating the need to link CONVERGE with a third-party properties library.
  • As CONVERGE is a very robust tool, you can use it for many different types of simulations: compressible or incompressible flow, multiphase flow, transient or steady-state, moving geometry, non-Newtonian fluids, and much more. Each of these regimes and scenarios requires you to configure relevant parameters. CONVERGE Studio includes a full suite of example cases across a range of these regimes including IC engines, compressors, gas turbines, and more. It is as simple as clicking File > Load Example Case to open an example case with Convergent Science-recommended default parameters for the given simulation type. You can use the example cases as starting points for your own simulations or run them as-is while you learn to use CONVERGE. 

Post-Processing

  • The geometry triangulation for a CONVERGE simulation may differ from that for a finite element analysis (FEA) simulation because the FEA geometry may have higher resolution in areas most relevant to the heat transfer analysis. CONVERGE includes an HTC mapper utility that maps near-wall heat transfer data from the CONVERGE simulation output to the triangulation of the FEA surface. That way, you can iterate between the two simulation approaches to understand and optimize designs.
  • CONVERGE Studio includes a powerful Line Plotting module to create two-dimensional plots. In addition to providing a high level of plot customization, the module is designed to plot some of the two-dimensional *.out files unique to CONVERGE. Also, you can use the Line Plotting module to monitor simulation properties such as mass flow rate convergence in a steady-state simulation. 
  • One of the post-processing tools available in CONVERGE Studio is the Engine performance calculator. This tool automatically calculates engine work and other relevant engine design parameters for 360 degree or 720 degree ranges from CONVERGE output and the engine parameters in your case setup. The results are collated in a table so that you can easily export them to a spreadsheet.

Documentation

  • Several case setup tutorial video series on the Convergent Science YouTube channel provide step-by-step walkthroughs of full case setups. Refer to these for information on surface preparation, case setup, simulation, and post-processing of some basic CONVERGE example cases.
  • On our CFD Online support forum, you can interact with other CONVERGE CFD users and our knowledgeable and approachable support team for assistance.

Performing CFD analyses can be difficult due to the number of unknowns, uncertainty of boundary conditions, and complexity of flows. CONVERGE CFD helps you by removing the necessity of meshing and giving you auxiliary tools to simplify your workflow.

Numerical Simulations using FLOW-3D top

► FLOW-3D Workshop at EDF
  12 Apr, 2019
FLOW-3D EDF Workshop, Lyon, France

This special workshop hosted by our partners at EDF will take place on June 18 – 19, 2019 in Lyon, France. The workshop will be followed on June 20 by an optional free tour of the Grand’Maison Dam, France’s largest pumped-storage hydropower plant, located in Allemond. Registration for the workshop includes a 30-day FLOW-3D license*.

Led by FLOW-3D water & civil modeling expert, John Wendelbo, this one and a half day workshop will provide attendees with an in-depth exposure to FLOW-3D modeling capabilities, as well as hands-on instruction for setting up and running free surface simulations. The workshop will focus on how FLOW-3D can be used in complex hydraulic analysis for common application types for municipal stormwater, dams, river and environmental, and wastewater treatment applications.

Baffle drop structure FLOW-3D analysis. Courtesy Wade Trim, NEORSD

During the workshop, beyond free surface modeling, you will be introduced to more sophisticated physics models, including air entrainment, sediment and scour, thermal plumes, density flows, moving objects and particle dynamics. By the end of the workshop, you will have set up eight models from scratch and absorbed the user interface and the steps that are common to important classes of hydraulic problems. You will also use the advanced post-processing tool FlowSight to analyze the results of your simulations. This one-and-a-half-day workshop is comprehensive yet accessible to engineers new to 3D CFD modeling.

Register Online

REGISTRATION CLOSES MAY 15, 2019. THE OPTIONAL DAM TOUR IS LIMITED TO 28 PARTICIPANTS. 

In order to access EDF’s facilities, all attendees must provide their ID one month ahead of the workshop.

  • There is no extra charge for the tour; transportation and lunch will be provided. The tour is limited to 28 participants.

Cancellation policy: For a full refund of the registration fee, attendees must cancel their registration by 5:00 pm MST one month prior to the date of the workshop. After that date, no refunds will be made.

About EDF

Électricité de France S.A. is a French electric utility company, largely owned by the French state. Headquartered in Paris, with €71.2 billion in revenues in 2016, EDF operates a diverse portfolio of 120+ gigawatts of generation capacity in Europe, South America, North America, Asia, the Middle East and Africa. (Source: Wikipedia)

Workshop Overview

A detailed schedule with the workshop topics is available below.

June 18, 2019

10:30 – 12:00 Registration
12:00 – 13:30 Lunch
13:30 – 18:00 Workshop

June 19, 2019

9:00 – 12:30 Workshop
12:30 – 14:00 Lunch
14:00 – 18:00 Workshop

June 20, 2019

8:00 – 19:00 Tour of the Grand’Maison Dam 

On the tour, you must wear closed-toe shoes. Transportation and lunch will be provided.

Workshop Location

EDF Immeuble de Thier
154 Avenue de Thier
69003  Lyon FRANCE

Workshop Fees

*This offer only applies to prospective or lapsed customers.

  • Academics 99€ 
  • Commercial 299€ 

Workshop Schedule

June 18, 2019

  • 10:30 – 12:00: Registration
  • 12:00 – 13:30: Lunch, provided by Flow Science

Session 1 – FLOW-3D Applications and Workflow

  • 13:30 – 14:00: Review of FLOW-3D applications for water civil infrastructure 
  • 14:00 – 14:30: Free surface hydraulics – FLOW-3D‘s workflow and modeling platform
  • 14:30 – 14:50: Overview of auxiliary physical models

Session 2 – Free Surface Hydraulics Model Setup

  • 15:10 – 15:55: Hands-on A-Z model setup of a fish passage 
  • 15:55 – 16:25: Hands-on post-processing results demonstration
  • 16:25 – 17:10: Hands-on A-Z model setup of a baffle drop structure

June 19, 2019

Session 1 – Hydraulic Controls

  • 9:00 – 9:30: Meshing, boundary conditions and initial condition topics
  • 9:30 – 10:15: Hands-on A-Z model setup of a flume with moving gates
  • 10:15 – 10:45: Weirs, spillways, hydraulic controls – Review of accuracy and best practices

Session 2 – Sediment Scour

  • 11:00 – 11:30: Sediment scour in FLOW-3D – Theory, practice and validation
  • 11:30 – 12:00: Hands-on A-Z model setup of a diamond pier scour model
  • 12:00 – 13:30: Lunch, provided by Flow Science

June 19, 2019 (Cont.)

Session 3 – Dispersed Phase

  • 13:30 – 14:00: Hands-on A-Z model setup of a hydrodynamic separator
  • 14:00 – 14:30: Review of air entrainment modeling options
  • 14:30 – 15:00: Hands-on A-Z model setup of a plunging jet

Session 4 – Wastewater Treatment Topics

  • 15:15 – 16:00: Hands-on A-Z model setup of a chlorine reactor tank
  • 16:00 – 16:45: Hands-on A-Z model setup of a secondary clarifier tank

Session 5 – Numerics, Meshing and Modeling Strategies

  • 16:45 – 17:15: Meshing, physical model and general modeling considerations
  • 17:15 – 17:30: Numerical options in FLOW-3D, common mistakes and common fixes
  • 17:30 – 18:00: Questions and remarks

June 20, 2019

Tour of the Grand’Maison Dam

  • 8:00: Depart Lyon
  • 11:00 – 12:30: First stop of the tour
  • 12:40 – 13:40: Lunch
  • 14:00 – 16:00: Second stop of the tour
  • 16:30 – 19:00: Return to Lyon

Who should attend?

  • Practicing engineers working in the water resources, environmental, energy and civil engineering industries
  • Regulators and decision makers looking to better understand what state-of-the-art tools are available to the modeling community
  • All modelers working in the field of environmental hydraulics
  • Civil engineering students

Participants will learn

  • How to import geometry and set up free-surface hydraulic models, including meshing and initial and boundary conditions.
  • How to add complexity by including sediment transport and scour, particles, scalars and turbulence.
  • How to use sophisticated visualization tools such as FlowSight™ to effectively analyze and convey simulation results.
  • Advanced topics, including air entrainment and bulking phenomena, shallow water, hybrid 3D/shallow water modeling, and chemistry.

For more information about the workshop, please contact Amanda Ruggles.

► FLOW-3D v12.0 Training
    6 Apr, 2019

The FLOW-3D v12.0 online training course is a comprehensive training resource available for FLOW-3D users. This course features online on-demand videos that cover all aspects of the basic model setup process in FLOW-3D. Each section provides examples and explanations so users can confidently set up simulations on their own. We recommend that all new FLOW-3D users complete the entire course before starting work on their project-specific simulations. Existing users will also find the new training series valuable to learn about the improvements and new features available in the FLOW-3D v12.0 model setup process, and as a refresher on basic model setup topics. The course videos are organized and bookmarked to easily locate specific topics and segments, and they also provide a great resource that can be referenced at any time. This training course is available on the Users Site for customers with support.

FLOW-3D Training Modules

  • FLOW-3D GUI
  • Model Setup Tab
  • Global Settings
  • Physics Models
  • Fluid Properties
  • Geometry
  • Meshing
  • Boundary Conditions
  • Initial Conditions
  • Output Options
► FLOW-3D European Users Conference 2019
    4 Apr, 2019
FLOW-3D European Users Conference 2019

The FLOW-3D European Users Conference 2019 will be held June 3-5 at the Sheraton Diana Majestic in Milan, Italy. Join engineers, researchers and scientists from some of Europe’s most renowned companies and institutions to hone your simulation skills, explore new modeling approaches and learn about the latest software developments. This year’s conference features metal casting and water & environmental application tracks, advanced training for workflow automation with a focus on optimization, in-depth technical presentations by FLOW-3D users, and the latest product developments presented by Flow Science’s senior technical staff. The conference will be co-hosted by XC Engineering, the official distributor of FLOW-3D products in Italy and France.

We are pleased to announce that we have added a tour of Milan to the conference program. All conference attendees are invited to attend this complimentary tour, thanks to our sponsor, Protesa SACMI.

Confirmed Speakers

We are pleased to announce the following confirmed speakers and their topics for this year’s conference. Join these speakers for an exciting conference lineup by submitting your abstract today! Because of the holiday weekend, we will be accepting abstracts until Tuesday, April 23rd. Learn more >

Customer Case Studies

  • “Modelling of upper stage cold gas reaction control propulsion system,” Francesco De Rose, ArianeGroup GmbH 
  • “Numerical modeling of microfluidics cells,” Julien Bœuf, Roche Diagnostics GmbH
  • “Predicting phase change in cryogenic tanks during sloshing with CFD,” Philipp Behruzi, ArianeGroup GmbH 
  • “Simulation of fluid dynamics in cooling channels of ICE pistons using the Non-Inertial Reference Frame motion model,” Florian Wirth, Federal-Mogul Nuremberg GmbH
  • “The challenge to reduce powertrain component’s weight,” Claudio Mus, Endurance Overseas

Development Talks

  • “A peek into FLOW-3D future developments,” Michael Barkhudarov, Flow Science, Inc.
  • “Exothermic feeders for gravity casting,” Malte Leonhard, Flow Science Deutschland GmbH
  • “FLOW-3D v12.0 – Modernized interface, streamlined workflows and greater accuracy,” John Wendelbo, Flow Science, Inc.
  • “New developments for additive manufacturing and laser welding,” Raed Marwan, Flow Science Japan
  • “New frontiers in solidification modeling: FLOW-3D CAST v5.1,” Michael Barkhudarov, Flow Science, Inc.

Call for Abstracts

Share your experiences, present your success stories and obtain valuable feedback from the FLOW-3D user community and our senior technical staff. We welcome abstracts on all topics including those focused on the following applications:

  • Metal Casting
  • Additive Manufacturing
  • Civil & Municipal Hydraulics
  • Micro/Nano/Bio Fluidics
  • Aerospace
  • Automotive
  • General Applications

Abstracts should include a title, author(s) and a 200 word description. Please email your abstract to info@flow3d.com by Tuesday, April 23.

Registration and training fees will be waived for presenters. 

Speakers 2018 FLOW-3D European Users Conference

Past conference presentations are available through our website.

Presenter Information

Each presenter will have a 30 minute speaking slot, including Q & A. All presentations will be distributed to the conference attendees and on our website after the conference. A full paper is not required for this conference. Please contact us if you have any questions about presenting at the conference. XC Engineering will sponsor this year’s Best Presentation Award.

We strongly encourage presenters to attend both days of the conference. 

Advanced Training: Workflow Automation

Engineers need to be able to deliver project analyses faster and more efficiently than ever before. This is why FLOW-3D and FLOW-3D CAST have built-in options for workflow automation, a modular text-driven structure that allows for easy scripting, and batch postprocessing. In this advanced training, we will review the features that can help you save time and money through automation and optimization.

The Workflow Automation advanced training will take place on the afternoon of June 3, from 13:00 – 17:00, at the Sheraton Diana Majestic. You can sign up for the training when you register for the conference.

Important Dates

  • April 19: Abstracts Due
  • April 26: Abstracts Accepted
  • May 24: Presentations Due
  • June 3: Advanced Training
  • June 3: Opening Reception
  • June 4: Conference Dinner

Conference Fees

  • Advanced Training – Automation: 300 €
  • Day 1 & Day 2 of the Conference: 300 €
  • Day 1 of the Conference: 200 €
  • Day 2 of the Conference: 200 €
  • Guest Fee (social events only): 50 €

Tour of Milan!

We invite you to see the sights of Milan! All conference attendees are invited to a complimentary city tour on Tuesday, June 4. The tour will take place after the conference on June 4 from 17:30 –  19:15. Immediately following the tour, we will convene for the conference dinner at Toscanino. Please sign up for the tour when you register for the conference. Thank you to our tour sponsor, Protesa SACMI.

Milan city tour

Tour Highlights

  • Milan Central Station, Pirelli Tower
  • Palazzo Lombardia
  • Porta Nuova Skyscrapers District
  • Indro Montanelli Park, Villa Belgiojoso Bonaparte, Natural History Museum, Planetarium
  • Via Montenapoleone Fashion District
  • Brera Art District
  • Sforza Castle
  • Via Dante
  • Santa Maria delle Grazie
  • Navigli
  • Basilica of Sant’Ambrogio
  • Stock Exchange
  • Duomo di Milano and Piazza

Opening Reception

The conference will commence with an Opening Reception on Monday, June 3 at 18:00. We invite all conference attendees and their guests for a welcome aperitif and appetizers. The reception will take place in the Gazebo in the conference hotel’s garden.

Conference Dinner

We are excited to announce that this year’s conference dinner will be held at Toscanino. Attendees will experience an excellent representation of the cuisine of Tuscany. The dinner will be held the evening of Tuesday, June 4. All conference attendees are invited to the conference dinner as part of their registration.

Conference dinner

Conference Hotel

The conference will be held at the Sheraton Diana Majestic. The Sheraton Diana Majestic is a historic hotel located in the heart of Milan and is the perfect base for shopping, business or discovering the city’s rich history. The hotel is located at Viale Piave 42, 20129. If you wish to book accommodations at the hotel, please contact the Sheraton Diana Majestic directly through their website or call +39 02 20581.

Diana Majestic Sheraton

Other Hotels

There are many hotels near the Sheraton Diana Majestic. We’ve researched some of these possibilities and ranked our choices below, along with their distance from the conference hotel.

#1: Hotel Teco
Via Lazzaro Spallanzani 27, 20129 Milan (0.7 km)
TripAdvisor Review

 #2: Hotel Sanpi Milano
Via Lazzaro Palazzi 18 | Corso Buenos-Aires/Area Porta Venezia, 20124 Milan (0.65 km)
TripAdvisor Review

 #3: WORLDHOTEL Cristoforo Colombo
Corso Buenos Aires 3, 20124 Milan (0.35 km)
TripAdvisor Review

#4: Best Western Plus Hotel Galles
Piazza Lima 2, 20124 Milan (1.0 km)
TripAdvisor Review

#5: Hotel Manin
Via Daniele Manin 7, 20121 Milan (1.2 km)
TripAdvisor Review

#6: Hotel Cavour
Via Fatebenefratelli 21, 20121 Milan (1.2 km)
TripAdvisor Review

Milan

Milan is the second most populous city in Italy, after Rome, and is Italy’s leading industrial city, with a multifaceted identity that offers attractions in the fields of art, commerce, design, education, fashion, finance, and tourism. The city is brimming with iconic art and architecture from Roman times to the Renaissance and beyond to the contemporary era. Famous symbols of the city include the Duomo, an Italian Gothic cathedral that took 600 years to complete and now stands as the largest church in Italy, and Sforza Castle, home to several Dukes and Duchesses of Milan, as well as artists including Leonardo da Vinci and Michelangelo. Today the city is recognized as the world’s fashion and design capital, thanks to several international events and fairs, including Milan Fashion Week and the Milan Furniture Fair, which are among the world’s largest in terms of revenue, visitors and growth.

Milan Underground

If you are coming from the airport to the conference hotel, you can take the Milan Underground. The stop for the hotel is P.ta Venezia.

Castello Sforzesco - in the heart of Milan city centre. Courtesy Shutterstock.
Modern skyscrapers and architecture (vertical gardens). Courtesy Shutterstock.

500th Anniversary of the Death of Leonardo da Vinci

2019 marks the 500th anniversary of Leonardo da Vinci’s death. A series of events is planned around the world and of course in Italy, particularly in Florence and Milan, the two cities where Da Vinci spent most of his time. Milan is preparing for this period with many events: usually closed rooms of the Sforza Castle containing Leonardo’s fresco will be open to the public, and Leonardo’s Codex and other art pieces, including tapestries and models, will be shown throughout the city.

Da Vinci’s Il Cavallo Case Study

‘Realizing Da Vinci’s Il Cavallo’ was a collaboration between XC Engineering and the Institute and Museum of the History of Science (IMSS). Using Da Vinci’s notes on the casting of Il Cavallo, collected in a 34-page handbook, the IMSS and XC Engineering were able to demonstrate that Il Cavallo, often referred to as “the horse that never was,” can be successfully cast as designed. Read the full case study >

More Information

Do you have questions about the conference? Please call or email Amanda Ruggles at 1-505-982-0088 or amanda@flow3d.com.

► What’s New in FLOW-3D v12.0
  29 Mar, 2019

FLOW-3D v12.0 marks an important milestone in the design and functionality of the graphical user interface (GUI), which simplifies model setup and improves user workflows. This webinar introduces the new models, features and design of FLOW-3D v12.0. John Wendelbo, Director of Sales at Flow Science, presents the new features released in the GUI, reviews new modeling capabilities, and details accuracy and performance gains.

About the Presenter

John Wendelbo, Director of Sales

John Wendelbo, Director of Sales, focuses on modeling challenging water and environmental problems. John graduated from Imperial College with an MEng in Aeronautics, and from Southampton University with an MSc in Maritime Engineering Science. John joined Flow Science in 2013. Connect with John on LinkedIn.

► FLOW-3D v12.0 Release Features Modern GUI
  14 Mar, 2019

The latest version of Flow Science’s flagship CFD software features a modernized interface, streamlined workflows and greater accuracy.

SANTA FE, NM, March 14, 2019 — Flow Science, Inc. has announced a major release of their flagship CFD software, FLOW-3D. FLOW-3D v12.0 marks an important milestone in the design and functionality of the graphical user interface (GUI), which simplifies model setup and improves user workflows.

A state-of-the-art Immersed Boundary Method brings greater accuracy to FLOW-3D v12.0’s solution. Other featured developments include the Sludge Settling Model, the 2-Fluid 2-Temperature Model, and the Steady State Accelerator, which allows users to model their free surface flows even faster. An extensive description of the v12.0 release improvements is now available.

“FLOW-3D v12.0 includes new features and developments in both the solver and the user interface. But without a doubt, the rejuvenation of the user interface steals the show. The UI modernization couples a new look with numerous optimizations under the hood for a much-improved user experience,” said Amir Isfahani, CEO of Flow Science.

Isfahani further commented, “With version 12.0, we are laying the groundwork for a sophisticated user interface that will serve as the foundation for more application-specific CFD products in the pipeline. Stay tuned for those.”

A live webinar will introduce the new models, features and design of FLOW-3D v12.0. The webinar will present the new features released in the GUI, review new modeling capabilities, and detail accuracy and performance gains. The webinar will take place on April 4, 2019 at 1:00 pm EDT. Online registration is available.

► Immersed Boundary Method
  11 Mar, 2019

In FLOW-3D, a simple method is used to eliminate numerical boundary layers caused by fractional cell areas and volumes: any spatial velocity derivative that requires velocity values located within solid or void regions when computing the momentum advection term is set to zero, as illustrated in Figure 1. From the physical point of view, this method applies a free-slip (no-penetration) boundary condition to the advection at walls, thus suppressing the artificial boundary layer. Compared to the method without zero derivatives, the flow solutions are more in accord with real flows, e.g., uniform flow in an inclined duct [1, 2], especially when coarse grids are used.
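As a schematic one-dimensional illustration of that rule (our notation, not Flow Science's), a centered derivative used in the advection term is simply dropped whenever one of its stencil points falls inside a solid or void cell:

\[
\left.\frac{\partial u}{\partial x}\right|_i \approx \frac{u_{i+1}-u_{i-1}}{2\,\Delta x} \;\longrightarrow\; 0 \quad \text{if cell } i+1 \text{ or } i-1 \text{ lies in a solid or void region,}
\]

which is how the free-slip (no-penetration) treatment of advection at the wall arises.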

The loss of flux in the momentum equation is compensated by pressure. Therefore, in certain situations the portion of pressure compensating the flux loss may grow in time and, at some point, cause a numerical instability called “secular instability”, which is expressed as a monotonic growth of velocities. To prevent the instability from developing, an empirical technique [3] in the solver has been used to “correct” the flux at the location where possible instability may emerge. However, this approach does not resolve the flux loss at its source, and it can occasionally introduce nonphysical behavior in the solution, such as pressure oscillations.

A technique for approximating the advection term based on the immersed boundary method has been developed for FLOW-3D Version 12.0 to fundamentally resolve the issue and provide more accurate force predictions at walls. The report describes the methodology of the technique and illustrates its capabilities and advantages, as well as its limitations.

CITATION

Zongxian Liang, “Immersed Boundary Method for FLOW-3D,” Flow Science Report 12-18, October 2018, Copyright Flow Science

Mentor Blog top

► Technology Overview: Smart Design Series: Discovering Intelligent Free Surface Capability
  18 Apr, 2019

In this installment of the Smart Design Series, a library of videos developed to help you become familiar with Computational Fluid Dynamics (CFD), we take a closer look at free surface simulation with the help of Simcenter FLOEFD. Simcenter FLOEFD is an award-winning CAD-embedded fluid flow and heat transfer simulation solution that allows you to frontload CFD simulation during the design stage, where you can fully explore the design envelope. Free surface capability can be used to simulate filling, emptying and sloshing of liquids and non-Newtonian fluids. In this video we focus on an automotive surge tank. By simulating the behavior of the liquid, you can gain valuable insight into the performance of your model. A wide range of visualization tools, including cut plots and animations, are at your disposal to fully probe the model. For example, with the help of the Transient Explorer you can see the filling process, and by varying gravity or acceleration over time, you can imitate the effect of accelerated or decelerated movement. While this 4-minute video features Simcenter FLOEFD for NX, you can expect the same level of integration with CATIA V5, Creo and Solid Edge from the world’s most popular CFD program for the design engineer.

Watch this 4 minute presentation to learn more.

► Blog Post: Article Roundup: HLS for AI, an Interview with Dr. Marta Rencz, Qualcomm Achieves Faster Signoff DRC Convergence, and Smart Manufacturing
  12 Apr, 2019
This roundup covers: “High-level synthesis for AI: Part One,” “High-level synthesis for AI: Part Two,” “Interview with Dr. Marta Rencz, Mentor Graphics,” “How Qualcomm Got Faster Signoff DRC Convergence,” and “Why Smart Manufacturing?” The first, from Tech Design Forum, is a two-part article series on the role that high-level synthesis (HLS) plays in the design of computer vision systems.
► On-demand Web Seminar: Better Plant Cooling System Design Through Optimized System Simulation
  11 Apr, 2019

Optimizing thermal fluid system design using virtual prototyping and costing simulations.

► Technology Overview: Ease of Use of Command Center in Simcenter Flotherm
    5 Apr, 2019

Wendy Luiten explains the ease of use of Command Center in Simcenter Flotherm, all in just a minute!

► Technology Overview: Network results display for quick visualization and understanding of schematic results
  26 Mar, 2019
The network result display in the latest release of Simcenter Flomaster V9.2 allows users to quickly visualize and understand results on the schematic.
► Technology Overview: Rapid system level understanding with colorization tool
  26 Mar, 2019

The latest version of Simcenter Flomaster V9.2 features a new colorization tool displaying color and direction to assist rapid system-level understanding.

Tecplot Blog top

► Python Multiprocessing Accelerates Your CFD Analysis Time
  17 Apr, 2019

PyTecplot and the Python multiprocessing module were the subject of our recent Webinar. PyTecplot already has a lot of great capability, and bringing in the Python multiprocessing toolkit allows you to accelerate your work and get it done even faster. This blog answers some questions asked during the Webinar.

1. What is PyTecplot?

PyTecplot is a Python API for controlling Tecplot 360. It is a separate installation: even when Tecplot 360 is installed, the PyTecplot module must be installed into your Python environment. A 64-bit installation of Python 2.7 or Python 3.4 and newer is required. All of this information is in our (very thorough) documentation.

PyTecplot Documentation

2. What is Python multiprocessing?

Multiprocessing is a process-based Python “threading” interface. “Threading” is in quotes because it is not actually using multi-threading. It’s actually spawning separate processes. We encourage you to read more in the Python documentation, Python multiprocessing library.

In the Webinar we show you a method to use the Python Multiprocessing Library in conjunction with PyTecplot to accelerate the generation of movies and images. This technique can go beyond just the generation of images. You can extract information from your simulation data as well. The recent Webinar shows you how to use the multiprocessing toolkit in conjunction with PyTecplot. We use a transient simulation of flow around a cylinder as the example, but have timings from several different cases.
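As a rough sketch of that pattern (the file names, process count, and output choices below are placeholders of ours, not the webinar scripts): each worker process owns its own Tecplot 360 engine, renders its share of the timestep files to PNG images, and calls tp.session.stop() when it is done so that temporary files are cleaned up (see question 5 below).

    import multiprocessing
    import tecplot as tp

    def render_chunk(datafiles):
        # Each worker process runs its own independent Tecplot 360 engine.
        for datafile in datafiles:
            tp.new_layout()                                   # start from a clean layout
            tp.data.load_tecplot(datafile)                    # hypothetical per-timestep PLT file
            tp.export.save_png(datafile.replace('.plt', '.png'), width=1200)
        tp.session.stop()                                     # clean shutdown (important on Linux)

    if __name__ == '__main__':
        files = ['cylinder_t{:04d}.plt'.format(i) for i in range(200)]  # hypothetical file names
        nproc = 4
        chunks = [files[i::nproc] for i in range(nproc)]      # round-robin split of the work
        workers = [multiprocessing.Process(target=render_chunk, args=(chunk,)) for chunk in chunks]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

Handing each worker a chunk of files, rather than spawning a new process per image, keeps the Tecplot engine start-up cost to one per worker.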

The recording and the scripts from the Webinar “Analyze Your Time-Dependent Data 6x Faster” can be found on our website.

Watch the Webinar

3. Is PyTecplot included in the package of Tecplot for CONVERGE?

Last year we partnered with Convergent Science, which makes a CFD code that is used quite heavily in internal combustion but can also be applied to many other application areas. Under our partnership, if you buy CONVERGE, you get free access to Tecplot for CONVERGE. Tecplot for CONVERGE allows you to use PyTecplot, but only through the Tecplot 360 Graphical User Interface.

To have the capability of running PyTecplot in batch mode, as shown in the Webinar, you will need to upgrade to the full version of Tecplot 360.

Request a Quote

4. Does Tecplot 360 run well with other software like Star-CCM+?

Tecplot 360 does not have a direct loader for Star-CCM+. However, you can export from Star-CCM+ to CGNS, Ensight or Tecplot format, all of which can be read by Tecplot 360.

Tecplot 360 Compatible Solvers

5. When running PyTecplot in batch mode, is session.stop required to clean up the temporary files? Or can you just let the process exit?

Yes and no. We found that on Linux, the multiprocessing toolkit just terminates the process resulting in a core dump. It is very important to call session.stop to avoid these core dump files.

6. What PyTecplot Resources Do You Have?


► Predictive Ocean Model Helps Coastal Estuarine Research
    5 Apr, 2019

Jonathan Whiting is a member of the hydrodynamic modeling group at Pacific Northwest National Laboratory in Washington State. He has been a Tecplot 360 user since 2014.

PNNL and the Salish Sea Model

Pacific Northwest National Laboratory (PNNL) is a U.S. Department of Energy laboratory with a main campus in Richland, Washington. The PNNL mission is to advance the frontiers of knowledge by taking on some of the world’s greatest science and technology challenges. The lab has distinctive strengths in chemistry, earth sciences and data analytics and deploys them to improve America’s energy resiliency and to enhance national security.

Jonathan is part of the Coastal Sciences Division at PNNL’s Marine Sciences Laboratory. The hydrodynamic modeling group in Seattle, WA works primarily to promote both ecosystem management and the restoration of the Salish Sea and Puget Sound with the Salish Sea Model.

The Salish Sea Model is a predictive ocean modeling tool for coastal estuarine research, restoration planning, water-quality management, and climate change response assessment. Funded by the Washington State Department of Ecology, it was initially created to evaluate the sensitivity of Puget Sound acidification to ocean and fresh water inputs, and to reproduce hypoxia in Puget Sound while examining its sensitivity to nutrient pollution. Now it is being applied to answer the most pressing environmental challenges in the greater Salish Sea region.

PNNL is currently in the first year of a three-year project to enhance the Salish Sea Model. The goals are to increase the model’s resolution and to make it operational, which means assuring the model runs on schedule and gets results that are continuously available to the public—including predictions a week or so ahead. This will allow for new applications such as the tracking of oil spills during response activities.

Jonathan has worked with the modeling team on several habitat restoration planning projects along the Snoqualmie and Skagit rivers in Washington’s Puget Sound region. Jonathan was responsible for post-processing model outputs into analytical and geospatial products to meet client expectations and to convey findings that aid project planning and stakeholder engagement.

The Challenge: Creating Consistent, High-Quality Visualization for Model Post-Processing

The hydrodynamics modeling group uses the Finite Volume Community Ocean Model (FVCOM) simulation code.

For the recent Skagit Delta Hydrodynamic Modeling project, a high-resolution triangular unstructured grid was created with 131,471 elements and 10 terrain-following sigma layers in the vertical plane. Post-processing was conducted on five time snapshots per scenario across 11 scenarios (including a baseline). Each file was about 55MB in uncompressed binary format.

The sheer quantity of plots was very challenging to handle, and it was important to generate clean plots that clearly conveyed results.

The Solution – Tecplot 360

Jonathan most often uses Tecplot 360 to generate top-down plots and videos that visualize parameters geospatially across an area. He then uses that visualization to convey meaningful project implications to his clients, who in turn use the products to inform program stakeholders and the public.

To handle the quantity of data Jonathan was working with, Tecplot 360 product manager Scott Fowler gave him a quick demonstration of Tecplot 360 and showed him how to use Chorus, the parametric design space analysis tool within Tecplot 360. Chorus allowed Jonathan to analyze a single dataset with multiple plots in a single view over time by using the matrix tool, easing the bulk generation of plots.

Tecplot support and development teams have been working closely with Jonathan, especially by adding new geospatial features to the software that enhance its automation and efficiency.

According to Jonathan, the key strengths in Tecplot’s software have been:

  • Ease of use
  • Availability of scripting to assist bulk processing
  • Variety of tools and features, such as georeferenced images

Using Tecplot 360 has allowed Jonathan to create professional plots that enhance the impact of the group’s modeling work.

How Will Jonathan Use Tecplot In the Future?

Jonathan’s personal niche has become trajectory modeling, so he is also interested in using Tecplot to generate visuals associated with the movement of objects on the surface by using streamlines, velocity gradients, slices, and more. He intends to take a deeper dive into the vast capabilities of Chorus and PyTecplot in the future!


Tecplot 360’s latest geoscience-focused release, Tecplot 360 2018 R2, includes the popular FVCOM loader and has the ability to insert georeferenced images that put your data in context. Tecplot 360 will automatically position and scale your georeferenced Google Earth or Bing Maps images.

Learn more about how Tecplot 360 is used for geoscience research.

Try Tecplot 360 for Free

The post Predictive Ocean Model Helps Coastal Estuarine Research appeared first on Tecplot.

► Parallel SZL Output from SU2
    2 Apr, 2019

At the end of February 2019, I did a presentation at the SIAM Conference on Computer Science and Engineering (CSE) in Spokane Washington. I live in the Seattle area, and Spokane is reasonably close, so I decided to drive instead of fly. Unfortunately, the entire nation, including Washington state, was still in the grips of the dreaded “polar vortex.” The night before my drive to Spokane all of the mountain passes were closed due to heavy snowfall. They opened in time but the drive was slippery and slow. I probably should have taken a flight instead! On the drive, I came up with this Haiku…

Driving to Spokane
Snow whirlwinds on pavement
Must make conference!

Join the Tecplot Community

Stay up to date by subscribing to the TecIO Newsletter for events and product updates.

Subscribe to Tecplot

The Goal: Adding Parallel SZL Output to SU2

My presentation at the SIAM CSE conference was on the progress made adding parallel SZL (SubZone Load-on-demand) file output to SU2. The SU2 suite is an open-source collection of C++ based software tools for performing Partial Differential Equation (PDE) analysis and solving PDE-constrained optimization problems. The toolset is designed with Computational Fluid Dynamics (CFD) and aerodynamic shape optimization in mind, but is extensible to treat arbitrary sets of governing equations such as potential flow, elasticity, electrodynamics, chemically-reacting flows, and many others. SU2 is under active development by individuals all around the world on GitHub and is released under an open-source license. For more details, visit SU2 on Github.

The Challenge: Building System Compatibility

We implemented parallel SZL output in SU2 using the TecIO-MPI library, available for free download from the TecIO page. In some CFD codes, such as NASA’s FUN3D code, each user site is required to download and link the TecIO library. However, in the case of SU2 we decided to include the obfuscated TecIO source code directly into the distribution of SU2. This makes it much easier for the user – they need only download and build SU2 and they have SZL file output available.

However, this did add some complications on our end.

The main complication is that SU2 is built using the GNU configure system whereas TecIO is built using CMake. We had to create new automake, autoconf, and m4 script files to seamlessly build TecIO as part of the SU2 build.

If you find yourself integrating TecIO source into a CFD code built with the GNU configure system, feel free to shoot me some questions – scottimlay@tecplot.com

Implementing Serial vs. Parallel TecIO

Once TecIO was building as part of the SU2 build, it was straightforward to get the serial version of SZL output working. SU2 already included an older version of TecIO, so we simply replaced those calls with the newer TecIO calls.

To get the parallel SZL output (using TecIO-MPI) working was a little more complicated. Specifically, it required knowing which nodes on each MPI rank were ghost nodes. Ghost nodes are nodes that are duplicated between partitions to facilitate the communication of solution data between MPI ranks. We only want the node to show up once in the SZL file, so we need to tell TecIO-MPI which nodes are the ghost nodes. In addition, CFD codes often utilize ghost cells (finite-element cells duplicated between MPI ranks) which must be supplied to TecIO-MPI. This information took a little effort to extract from the SU2 “output” framework.
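TecIO-MPI’s actual interface is documented in Tecplot’s Data Format Guide; purely as an illustration of the bookkeeping involved, the sketch below (hypothetical names, and Python rather than the C++ used in SU2) shows how ghost nodes can be identified on a rank once the solver’s partitioning information is in hand:

    # Illustrative only: find the ghost nodes on one MPI rank, i.e. the local
    # nodes owned by another rank, which must not be written twice to the file.
    def ghost_node_indices(my_rank, local_global_ids, owning_rank):
        """Return local (zero-based) indices of nodes owned by another rank."""
        return [i for i, gid in enumerate(local_global_ids)
                if owning_rank[gid] != my_rank]

    # Example: rank 1 duplicates nodes 10 and 11, which rank 0 owns.
    owner = {10: 0, 11: 0, 12: 1, 13: 1}
    print(ghost_node_indices(1, [10, 11, 12, 13], owner))   # -> [0, 1]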

High-Lift Prediction Workshop

The first test case is the Common Research Model from the High-Lift Prediction Workshop.

How Well Does It Perform?

We now have a version of SU2 that is capable of writing SZL files in parallel while running on an HPC system. The next obvious question is: “How well does it perform?”

Test Case #1: Common Research Model (CRM) in High-Lift Configuration

The first test case is the Common Research Model from the High-Lift Prediction Workshop. It was run with three grid refinement levels:

  • 10 million cells
  • 47.5 million cells
  • 118 million cells

These refinements allowed us to measure the effect of problem size on the overhead of parallel output. All three cases were run on 640 MPI Ranks on the NCSA Blue Waters supercomputer. The results are shown in the following table:

                           10M Cells    47.5M Cells    118M Cells
  Time for CFD step         17.6 sec      70 sec         88 sec
  Time for restart write     6.1 sec      10.7 sec       31.4 sec
  Time for SZL file write   43.9 sec     171 sec        216 sec

For comparison, we include the cost of advancing the solution one CFD time step and the cost of writing an SU2 restart file. Note that the SU2 restart file contains only the conservative field variables – no grid variables and no auxiliary variables – so far less data is written when creating it. The cost of writing the SZL file is roughly 2.5 times the cost of a single time step, so if you write the SZL file infrequently (every 100 steps or so) the overhead is fairly small, about 2.5%.
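A quick check of that arithmetic, using the 10-million-cell column of the table above:

    cfd_step  = 17.6    # seconds per CFD time step (10M-cell case)
    szl_write = 43.9    # seconds per SZL file write (10M-cell case)

    print('SZL write vs. CFD step: {:.1f}x'.format(szl_write / cfd_step))   # ~2.5x
    print('Overhead when writing every 100 steps: {:.1f}%'.format(
        100 * szl_write / (100 * cfd_step)))                                # ~2.5%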

Test Case #2: Inlet

The second test case is an inlet like you might find on a next-generation jet fighter. It aggressively compresses the flow to keep the inlet as short as possible.

The inlet was analyzed using 93 million tetrahedral cells and 38 million nodes. As with the CRM case, the inlet case was run on the NCSA Blue Waters computer using 640 MPI ranks.

SU2 takes 74.7 seconds to increment the inlet CFD solution by one time-step and 31 seconds to write a restart file. To write the SZL plot file requires 216 seconds – 2.9 times as long as a single CFD time step.

Availability

The parallel SZL file output is currently in the pull-request phase of SU2 development. Once it is accepted it will be available in the Develop branch on GitHub. On occasion (I’m told every six months to a year), the develop branch is merged into the master branch. If you are interested in trying the parallel SZL output from SU2, send me an email (scottimlay@tecplot.com) and I’ll let you know which branch to download.

Better yet, subscribe to our TecIO Newsletter and we will send you the updates.

Subscribe to Tecplot


Scott Imlay
Chief Technical Officer
Tecplot, Inc.

The post Parallel SZL Output from SU2 appeared first on Tecplot.

► Improving TecIO-MPI’s Parallel Output Performance
  20 Mar, 2019
TecIO, Tecplot’s input-output library, enables applications to write Tecplot binary files. Its parallel version, TecIO-MPI, enables MPI parallel applications to output Tecplot’s newer binary format, .szplt.

TecIO-MPI was first released in 2016. Since then, we’ve received feedback from some customers that its parallel performance for outputting unstructured-grid solution data needed improvement. So we embarked on an effort to understand and eliminate bottlenecks in TecIO-MPI’s execution.

Customer reports 15x speed-up in writing data from FUN3D when using the new TecIO-MPI library!
 
Learn more and download the TecIO Library

Understanding What Customers are Seeing

To understand what our customers were seeing, we needed to be able to run our software on hardware representative of what our customers were running on, namely, a supercomputer. The problem is that we don’t own one. We also needed parallel profiling software that would help us identify bottlenecks, or “hot spots,” in our code, including in the MPI inter-process communication. We made some progress in Amazon EC2 using open-source profiling software, but had greater success using Arm (formerly Allinea) Forge software at the National Center for Supercomputing Applications (NCSA).

NCSA has an industrial partners program that provides access to their iForge supercomputer and a wide array of open source and commercial software, including Arm Forge. iForge has over 2,000 CPU cores and runs IBM’s GPFS parallel file system, so it was a good platform to test our software. Arm Forge, specifically its MAP profiling tool, provided the ability to easily identify hot spots in our software, and to drill down through the layers of our source code to see exactly where the performance problems lay.

An additional application to NCSA also gave us access to their Blue Waters petascale supercomputer, which features about 400,000 CPU cores and the Lustre parallel file system.[1] This gave us the ability to scale our testing up to larger problems, and to test the performance on another popular parallel file system.

Arm MAP with Region of Time Selected

Performance Improvement Results

Using iForge hardware and Arm Forge software, we were able to identify two sources of performance problems in TecIO-MPI:

  • Excessive time spent in writing small chunks of data to disk.
  • Too much inter-process exchange of small chunks of data.

Consolidating these small writes and exchanges into larger ones has led to an order-of-magnitude reduction in output time. Testing with three different computational fluid dynamics (CFD) flow solvers indicates output times, for structured or unstructured grids, roughly equal to the time required to compute a single solver iteration. A generic illustration of the idea appears below.
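Tecplot has not published the details of these changes; the sketch below is only a generic illustration of the first idea, consolidating many small writes into a few large ones, under assumed buffer sizes:

    # Generic illustration (not TecIO-MPI source): buffer small chunks in memory
    # and flush them in large blocks so the file system sees few, large writes.
    class BufferedWriter:
        def __init__(self, fileobj, threshold=8 * 1024 * 1024):
            self._f = fileobj
            self._buf = bytearray()
            self._threshold = threshold

        def write(self, chunk):
            self._buf.extend(chunk)
            if len(self._buf) >= self._threshold:
                self.flush()

        def flush(self):
            if self._buf:
                self._f.write(self._buf)
                self._buf.clear()

    with open('partition.bin', 'wb') as f:                   # hypothetical output file
        writer = BufferedWriter(f)
        for chunk in (b'\x00' * 512 for _ in range(10000)):  # many tiny chunks
            writer.write(chunk)
        writer.flush()                                       # write whatever remains

The same consolidation idea applies to the inter-process exchanges: gather many small messages into one larger message per destination rank before sending.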

We will continue to collect feedback from users with an eye to additional improvements as TecIO-MPI is implemented in additional solvers. We invite you to provide us with your own experience!

Take our TecIO Survey

How to Obtain TecIO Libraries

TecIO and TecIO-MPI, along with instructions in Tecplot’s Data Format Guide, are installed with every Tecplot 360 installation.

It is recommended, however, that you obtain and compile source for TecIO-MPI applications, because the various MPI implementations are not binary-compatible. Source for TecIO and TecIO-MPI, and the Data Format Guide, are all available via a My Tecplot account.

For more information and access to the TecIO Library, please visit:

TecIO Library

[1] This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.



By Dr. David E. Taflin
Senior Software Development Engineer | Tecplot, Inc.

Read Dave’s employee profile »

The post Improving TecIO-MPI’s Parallel Output Performance appeared first on Tecplot.

► Calculating a New Variable
  11 Mar, 2019

Data Alteration through Equations

Engineers using Tecplot 360 often need to create a new variable based on a numerical relationship among variables already loaded into Tecplot 360.

Calculating a new variable is simple. To start, load your data into Tecplot 360. In this example, we loaded the VortexShedding.plt data located in the Tecplot 360 examples folder.

Choose Alter -> Specify Equations from the Data menu, or click the equations icon on the toolbar.

The Specify Equations dialog will open.

We will now calculate the difference in a variable between two zones. In the Zones to Alter list, click All.

Initialize the new variable T(K)Difference by typing the following in the Equation(s) window:

{T(K)Difference} = 0

Click Compute

Now assign the difference in T(K) between zones 3 and 2 (that is, T(K) in zone 3 minus T(K) in zone 2) to T(K)Difference. You can do this for any two zones that have a similar structure.

Select the zones you want to receive the difference value. Type the following equation using the exact variable name from the Data Set Information dialog.

{T(K)Difference} = {T(K)}[3]-{T(K)}[2]

Click Compute

The new variable T(K)Difference is now available for plotting. Open the Data Set Information dialog from the Data menu and view the new variable T(K)Difference.

Note that changes made in the Specify Equations dialog are not written back to the original data file. You can save the changes by saving a layout file or by writing the new data to a file. Saving a layout file keeps your data file in its original state and uses journaled commands to reapply the equations when the layout is loaded.
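The same alteration can also be scripted with PyTecplot’s execute_equation call. A minimal sketch, assuming VortexShedding.plt has been copied to the working directory (the file normally lives in the Tecplot 360 examples folder):

    import tecplot as tp

    dataset = tp.data.load_tecplot('VortexShedding.plt')    # path assumed local

    # Initialize the new variable, then assign the zone-to-zone difference.
    tp.data.operate.execute_equation('{T(K)Difference} = 0')
    tp.data.operate.execute_equation('{T(K)Difference} = {T(K)}[3] - {T(K)}[2]')

    print([variable.name for variable in dataset.variables()])   # includes T(K)Difference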

Learn more in Chapters 20 and 21 of the Tecplot 360 User Manual.


This blog was originally published in 2013 and has been updated and expanded.

The post Calculating a New Variable appeared first on Tecplot.

► Simulating Fish Behavior to Protect Migratory Fish Runs
  14 Feb, 2019

Developing a method of simulating fish behavior can help protect and preserve migratory fish runs. One of our most-loved blogs, originally titled “Some Fishy Business for the Army Corps of Engineers,” was written in 2011. It has been updated and is re-posted here.

The U.S. Army Corps of Engineers’ (USACE) mission is to provide vital public engineering services in peace and war to strengthen our nation’s security, energize the economy, and reduce risks from disasters. Maintaining the nation’s inland waterways falls beneath that large umbrella, and in support of that mission USACE has been developing an exciting new way of protecting and preserving migratory fish runs.

Simulating Fish Behavior

Dr. Andy Goodwin, a USACE scientist, has developed a method of simulating fish behavior that allows him to evaluate proposed modifications to dams, fish ladders and related structures to predict their effect on fish survival.

His simulation computer code takes as input a computational fluid dynamics (CFD) simulation of water flow through the proposed structures plus fish initial positions and behavior parameters. It outputs the path each fish would be expected to swim over a specified period of time. From this output he can deduce whether the proposed modifications would have a favorable or unfavorable effect on fish survival rates.

This information enables his sponsors to select, from among a number of possibilities, those modifications that will have the most beneficial impact.

Sample output from Dr. Goodwin’s solver. See more on YouTube.

Tecplot 360 reads popular CFD file formats. Try it for Free »

Tecplot 360 Reads Many CFD Solution File Formats

Dr. Goodwin contracts with companies to produce the CFD results he needs. Due to the large number of CFD simulation codes in use today, he was finding it increasingly important to modify his code to read a wide variety of CFD solution file formats, including the n-faced polyhedral meshes used by two of the most popular CFD solvers.

After examining a number of options, he selected Tecplot 360 to do this work for him. This was a good fit because Tecplot 360 already reads the most popular CFD file formats, and has an extensible architecture that allows Dr. Goodwin’s solver to interact with any data Tecplot 360 can read. And I was the lucky Tecplot developer selected to do the interfacing work.

Developing Tecplot 360 Add-ons

The most interesting challenges I faced included dusting off and updating my Fortran. I learned Fortran back in the F77 days; Dr. Goodwin’s code is in Fortran 90/95.

I also needed to:

  • Locate the proper seams at which to interface his code with Tecplot 360.
  • Write the Fortran/C glue code required to allow his code to query Tecplot 360 for the CFD solution information it needs.
  • Implement in Tecplot 360 the ability to detect when a proposed (fish) path intersects a solid boundary or exits the solution domain, since his solver does this for the CFD data it can read.

Our Technical Support Team Can Help

I have been in the IT business for over 25 years, and you provided the best support of any vendor I have worked with to date.

– IT Administrator, College of Pharmacy & Health Sciences, Western New England University

Learn more about Tecplot Technical Support »

I subsequently wrote a CFD post-processing Tecplot add-on to perform some clean-up operations on legacy CFD data to make it suitable input for Dr. Goodwin’s simulations. This “fishy business” was challenging and interesting, and I have enjoyed seeing the results that Dr. Goodwin produced with Tecplot assistance.

And finally, a required disclaimer: Any statements, opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views or policies of the federal sponsors, and no official endorsement should be inferred.



By Dave Taflin
Developer | Tecplot, Inc.

Read Dave’s employee profile »

The post Simulating Fish Behavior to Protect Migratory Fish Runs appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► AVEVA says full steam ahead
  18 Apr, 2019

AVEVA issued one of its periodic trading updates — they don’t really say much but seem to be required to coincide with parent company Schneider Electric’s earnings releases. Schneider Electric is on a December year-end, quarterly pattern while AVEVA reports bi-annually with a March year-end. In all, a mishmash. I still need to listen to Schneider’s results, but AVEVA’s performance was positive:

AVEVA says it “delivered low double-digit revenue growth on a full year, pro forma IFRS 15 basis” in the March quarter and that “this growth included some benefit from upfront revenue recognition on multiyear rental contracts”. We’ll have to find out exactly what that means for this and future periods when we have real numbers.

The company also hinted to investors that, while operating margins improved over the prior year, it might not be as much of an improvement as some wanted. More sales means higher costs of sales, in commissions and other expenses, and the integration of Schneider Electric Software and AVEVA products plus filling the gaps all cost money, lowering margins.

So for a non-statement, pretty much “all systems go, nothing to see here”.

For its part, Schneider’s press release said that “AVEVA conclude[d] a remarkable first year and confirms the strategic rationale of the transaction” — but they’d say that in any event, right? — and that “AVEVA benefitted from sales in conjunction with Schneider Electric’s automation offers through a coordinated go-to-market approach. This performance highlights the good traction for its end-to-end digital solution from design and build to operation and maintenance for operators in hybrid and process end-markets.”

We’ll learn more when AVEVA itself announces results on May 29.

The post AVEVA says full steam ahead appeared first on Schnitger Corporation.

► Helping to rebuild Notre Dame
  16 Apr, 2019

If you’re like me, you wanted to tune out the destruction of Paris’ Cathedral of Notre Dame yesterday, but just couldn’t. It’s a place of spiritual significance to many, an iconic Paris tourist destination to others. To me, it’s also a marvel of engineering and architecture, showcasing human ingenuity for over 850 years. A museum housing incredible works of art. A super-cool, massive organ that’s played some of civilization’s most important works.

The President of France has said the Cathedral will be rebuilt, as it has been countless times in its history. And we can help. According to the US magazine, Fast Company, here’s how we can donate money to the effort:

Friends of Notre-Dame is the organization that had raised money for the restoration that was underway, which might have created the spark that led to the fire. Fast Company says that the U.S. branch is a 501(c)(3) charity, which means that all donations are tax-deductible for U.S. contributors. When I went to the site a few minutes ago, its online donation page was not secure, but the Paypal donation site is.

Fondation du Patrimoine is a French nonprofit that works to preserve sites throughout France and has established a special Notre Dame rebuilding fund. The site is in French and is secure for donors but does not appear to take Paypal. As of 10AM ET on Tuesday, 16 April, the site says it’s raised over 3.5 million euros.

Finally, the Basilica of the National Shrine of the Immaculate Conception here in the US has created a fund, too. Secure, no Paypal.

Money will be required to rebuild, that’s for sure. And it seems to be pouring in. But we’ll also need architects, engineers, tradespeople, artists — and software to do the design, manage the restoration, create safety mechanisms to prevent this from happening again … It’ll be a showcase of 21st century ingenuity to preserve what remains and replace what was lost. Will it be better? Too soon to tell, but it will be incredibly important to balance historical accuracy and modern techniques.

Quick update at 15:00 ET on 16 April: I just saw in LIDAR News’ blog that the Cathedral was laser scanned — so there’s data to start from in the restoration effort. W00t! Read more here.

And the Fondation has now recorded more than 5 million euros in donations, to add to the $700 million raised elsewhere, according to CNN.

If you’d like to donate to something that’s not getting the level of attention of Paris’ Notre Dame, one article I read this morning pointed out that three churches in the US burned in recent weeks in cases of malicious arson. You can donate to those restoration efforts at one of the churches’ GoFundMe pages.

We can’t save everything or everyone. But we can make our own little bit of difference.

The cover image is from Pixabay, clearly from before the fire.

The post Helping to rebuild Notre Dame appeared first on Schnitger Corporation.

► First steps to AI: data, man
  11 Apr, 2019

I was speaking to a group of AEC professionals a while back about the technologies they should be investigating. I covered everything from 0D to 6D, from conceptual sketches to advanced BIM, conceptual design to operations — but, as is often the case today, we got into an involved discussion on AI, artificial intelligence.

I said that I don’t see AI as commercially ready for most implementations. Yes, you and a handful of data scientists can scour your digital universe for data, enter it into a unifying system and start creating algorithms to search it for meaning. But wait just a little while, until the end of 2019 for Autodesk’s IQ and similar commercial off-the-shelf offerings, and things become simpler and much more accessible. I’ll write more about Autodesk’s Construction IQ next week, but for now, I’d like to keep this at a higher level.

Talking about AI always leads to data. What data do we need? How do we get it? Ensure its accuracy and timeliness? Guard against incorrect data? After all, if the point is to draw conclusions from the mountain of information out there, bad data can lead us all down a rabbit hole of misleading answers.

A few weeks ago, one frustrated person in the crowd said that her job relies on data that she often believes to be incorrect. In her case, it was GIS data–geographic information system data that combines spatial references with some sort of attribute, such as this water main is HERE–but really, it could be any data relevant to any job. And it could be incorrect for a million different reasons: It could simply be old, which means it may not represent current conditions. It could be incomplete, leaving out something that would lead to a different conclusion. It could be formatted poorly, using a synonym or acronym that the AI tool might not recognize. Or, of course, it could be maliciously incorrect, when someone tries to hide poor job performance or something else.

What to do?

I suggested that she gather that data anyway, but assign a confidence rating to it. That way, if nothing better ever surfaces, she can use it with full knowledge that it may be incorrect. In her specific case, that may mean an extra day of surveying to capture accurate positional data. That update can replace the questionable data. If GIS isn’t your thing, find another way to gather confirming data, perhaps putting another sensor on a machine to check on the first, or gathering sales data in another way.

The point is this: start gathering data. Figure out what your main questions are and how you would answer them. On a construction job site, it could be overall performance. That might lead to a deeper dive on what tasks usually fall behind schedule — who, how, why, what equipment, tasks immediately before and after. On an AEC design project, it might be more successful bidding: what jobs have we won, at what price/profit, with how many design changes. On a manufacturing line, it could be quality related: after how many items are we out of compliance? In what part of the process? Is it related to material, machine, human? In retail, it could be comparing store performance or theft rates. It all starts with the question.

AI will fundamentally change all our jobs by making it easier to find the one or ten pieces of information that need our attention. But to identify those, AI engines will plow through more data than we humans ever could, looking for connections.

As we wait for the commercial solutions to hit the shelves, start thinking about this. What one problem can you solve to help your business the most? What questions will get you to those answers? What data will you need to start answering those questions? Start gathering that data now (or at least, think about how you would gather that data) so that you’re ready when the tools hit the market. If you want to be an overachiever, use a test sample of data and apply human intelligence: can you answer the question yourself with that data? If you can, you’re on the right track.

Do you remember that awesome 1967 movie, The Graduate? A party guest told a young Dustin Hoffman that the future was “just one word … plastics”. Now that word is data. “There’s a great future in data. Think about it. Will you think about it?”

Yes, all of this talk about data leads to a discussion about formats, engines and so on. Not all data will be in the “right” form for whatever AI tools will be used. That’ll be a gnarly problem, for sure, but easier to solve than not having any data to start with.

So, GO! Stop overthinking the end-game and just start.

The post First steps to AI: data, man appeared first on Schnitger Corporation.

► Nemetschek ups rendering with Redshift
    9 Apr, 2019

The Nemetschek Group just announced that its Maxon subsidiary has acquired Redshift Rendering Technologies, maker of the Redshift rendering engine. Redshift (the tech) is a GPU-accelerated renderer that’s available as a plugin to Maxon’s own Cinema 4D as well as Autodesk Maya, 3DS Max, and other 3D applications. The company was founded in 2012 in Newport Beach, California by three techies who saw the potential of GPUs to dramatically speed up game engines, renderers and associated workflows.

Maxon CEO David McGavran said that “Rendering can be the most time consuming and demanding aspect of 3D content creation. Redshift’s speed and efficiency combined with Cinema 4D’s responsive workflow make it a perfect match for our solution portfolio. The combination of Cinema 4D and Redshift will bring an unprecedented accessibility, efficiency and reliability to 3D production and will save time and money for artists.”

From Nemetschek’s perspective, this is a natural buildout of its Maxon brand. The parent company acquired the outstanding 30% of Maxon last year and put in place new leadership, so “this acquisition is the next important step to fully capture the great growth potential for Maxon in the Media & Entertainment industry and Nemetschek’s core markets in the AEC industry,” said Patrik Heider, Spokesman and CFOO of the Nemetschek Group.

The purchase price was not announced.

The title image is from Redshift’s gallery and is titled “Nürburgring (Humster challenge)” by Redshift.

The post Nemetschek ups rendering with Redshift appeared first on Schnitger Corporation.

► Quickie: AVEVA adds asset performance assets
    5 Apr, 2019

Super-quick, as I’m about to board a flight: AVEVA has apparently decided its offering (the combined Schneider Electric Software and legacy AVEVA products) had a gap in asset performance management that could only be filled by acquisition.

To that end, the company today announced that it will acquire the software assets of MaxGrip, a maker of reliability-centered maintenance (RCM) solutions. MaxGrip will enhance AVEVA’s current offering by adding a templated approach to asset strategy decision-making for risk-based maintenance.

MaxGrip also brings with it a “rich library of asset fault codes and remediations” that will strengthen AVEVA’s predictive asset analytics and “accelerate the deployment of artificial intelligence for prescriptive maintenance”.

The deal is subject to approval from MaxGrip’s shareholders and no purchase price was announced (at least that I could find quickly).

The post Quickie: AVEVA adds asset performance assets appeared first on Schnitger Corporation.

► OnScale secures $10 million investment
    2 Apr, 2019

Yesterday, I wrote about the continuing buying spree in CAE. Then came the news that OnScale has secured $10 million in Series A funding led by Intel Capital and Gradient Ventures, Google’s investment fund, along with Thornton Tomasetti, Stage 2 Capital, Cultivation Capital and CampbellKlein.

OnScale says it will use the new investment to drive global expansion, respond to increasing demand, and accelerate development of its solutions for complex, real-world engineering applications — in other words, just what you’d expect. The release goes on to say that the company is focusing on improving OnScale’s user interface, expanding the breadth of physics solver capabilities, and forming partnerships with other software companies to provide seamless engineering workflows.

Even though OnScale is new, its solvers were developed and validated over the last 30 years at Thornton Tomasetti, a user company, and spun out in 2017. Today, OnScale offers on-demand CAE with multiphysics solvers that were originally architected for highly parallel mainframe computers, which, OnScale says, makes them perfect for today’s cloud-based, high-performance computing.

OnScale says that it is targeting the semiconductor and micro-electro-mechanical systems (MEMS), 5G mobile, biomedical, infrastructure safety, and autonomous vehicle markets.

One investor quoted in OnScale’s announcement, Dave Flanagan of Intel Capital, explains why CAE is so attractive: “As technology systems become more complex, next-generation computer aided engineering software will become integral to design and deployment. OnScale’s highly scalable CAE solution leverages the power of the cloud and advanced multiphysics to model highly complex systems, helping customers solve the toughest design challenges”.

Interestingly, Intel announced yesterday that it was investing in 14 tech companies, to a total of $117 million. The blanket statement said that these companies are “creating powerful artificial intelligence (AI) platforms; new ways to see and analyze materials for the built world and our bodies; more efficient and greener manufacturing technologies; and disruptive new approaches to chip design.” This is part of Intel Capital’s strategy to invest $300 million to $500 million every year to foster innovation. How cool is that?

Nothing left to buy or invest in? Au contraire. It would appear that Intel finds at least 30 per year.

The post OnScale secures $10 million investment appeared first on Schnitger Corporation.

