
CFD Blog Feeds

Another Fine Mesh

► Happy 24th Birthday, Pointwise
  10 Nov, 2018
I received a text message early this morning from a friend and I’m embarrassed to say it reminded me of something that had slipped my mind (which my lovely wife would blame on my recent concussion). What were you doing … Continue reading
► Where’s My This Week in CFD?
    9 Nov, 2018
The most recent edition of This Week in CFD was published on 28 September. Rest assured, there has been a lot of activity in the world of CFD in the intervening weeks. What there hasn’t been is any time to … Continue reading
► I’m Sebastian Desand and This Is How I Mesh
    1 Nov, 2018
I was born and raised in Stockholm, Sweden, the country most famous for Ikea, Abba and Ingrid Bergman. This is still where I live with my wife, two kids and two cats. I do travel quite often within Europe, but … Continue reading
► Don’t Miss the Mesh Effects Symposium at GMGW-2
  30 Oct, 2018
The 2nd AIAA Geometry and Mesh Generation Workshop (GMGW-2) is happening this January in San Diego prior to AIAA SciTech. One interesting component of the workshop is a mini-symposium on Mesh Effects on CFD Solutions that’s free for anyone attending … Continue reading
► I Should Be at the ASSESS Congress Today
  29 Oct, 2018
The ASSESS Congress 2018 is happening today outside Atlanta where an invitation-only group of engineering simulation leaders are gathered to “launch the simulation revolution.” But I’m not among them and that’s a problem.  ASSESS (Analysis, Simulation, and Systems Engineering Software … Continue reading
► The Latest CFD Meshing News From Pointwise
  12 Oct, 2018
If you are interested in mesh adaptation, high order meshing, structured grids, hybrid meshes, mesh automation, and meshing process simplification, those topics represent just the last month or so at Pointwise. Read on for the details including how you can … Continue reading

F*** Yeah Fluid Dynamics

► The current record for stone-skipping is about 88 skips. For...
  16 Nov, 2018


The current record for stone-skipping is about 88 skips. For most of us, that’s an unimaginably high number, but according to physicists, human throwers may top out around 300 or 350 skips. In the video above and the accompanying article, Wired reporter Robbie Gonzalez explores both the technique of a world-record-holding skip and the physics that enable it. 

The perfect skip requires many ingredients: a large, flat rock with good edges; a strong throw to spin the rock and hold it steady at the right angle of attack; and a good first contact with the right entry angle and force to set up the skips’ trajectory. The video is long, but it’s well worth a full watch. It gives you an inside look both at a master skipper and at the experts of skipping science. (Video and image credit: Wired; see also: Splash Lab, C. Clanet et al.; submitted by Kam-Yung Soh)

ETA: Wired’s embed code is acting up, so if you can’t see the stone skipping video here, just go to the article directly.


Heads up for those going to the APS DFD meeting! You can catch my talk Monday, Nov. 19th at 5:10PM in Room B206. I’ll be talking about how to use narrative devices to tell scientific stories. I’ll be around for the whole meeting, so feel free to come say hi!

► Not everything that flows is a fluid. And when viewed from above...
  15 Nov, 2018
Not everything that flows is a fluid. And when viewed from above, traffic, crowds, and even herds of sheep flow in patterns like those of a fluid. In particular, these conglomerations move like compressible fluids – ones that allow substantial changes in density as they flow. From above, each sheep is just a few pixels of white, but you can see which areas of the herd have the highest density by how white an area looks. The highest density regions also tend to be the slowest moving – not surprising in a crowd.

Now watch the gates. They act like choke points in the flow and, to some extent, like a nozzle in supersonic flow. As the sheep approach the gate, they’re in a dense, slow moving clump, but as they pass through it, the sheep speed up and spread out. This is exactly what happens in a supersonic nozzle. On the upstream end, flow in the nozzle is subsonic and dense. But once the flow hits the speed of sound at the narrowest point in the nozzle, the opening on the downstream side allows the flow to spread out and speed up past Mach 1.  (Video credit: MuzMuzTV*; submitted by Trent D.)

*Editor’s Note: I do my best to credit the original producers of any media featured on FYFD, but this is especially difficult with viral videos as there can be many copies, all of which are uncredited. I’ve made my best guess on this one, but if this is your video, please let me know so that I can credit you properly. Thanks!

► The latest FYFD/JFM video is out, and it’s all about the...
  14 Nov, 2018
The latest FYFD/JFM video is out, and it’s all about the interactions between structures and flows! We learn about plesiosaur-inspired underwater robots, how turbulence affects air-water interfaces, and how adding a tail can help hide an object in a flow. If you missed one of the previous episodes in this series, you can find them all here. (Image and video credit: T. Crawford and N. Sharp)

► Nectar-drinking species of hummingbirds and bats are both...
  13 Nov, 2018




Nectar-drinking species of hummingbirds and bats are both excellent at hovering – one of the toughest aerodynamic feats – but they each have their own ways of doing it. Hummingbirds (bottom) use a nearly horizontal stroke pattern that’s quite symmetric on both the up- and downstroke. To keep generating lift in the upstroke, they twist their wings strongly midway through the stroke. So although hummingbirds get most of their lift from the downstroke, they get quite a bit from the upstroke as well.

Bats, on the other hand, use an asymmetric wingbeat pattern when hovering. Bats flap in a diagonal stroke pattern, using a high angle of attack in the downstroke and an even higher one during the upstroke. They also retract their wings partially during the upstroke. This flapping pattern gives them weak lift during the upstroke, which they compensate for with a stronger downstroke. Compared to non-hovering bat species, nectar-drinking bats do get more lift during the upstroke, but they’re nowhere near as good as the hummingbirds. The bats compensate by having much larger wings compared to their body size. Bigger wings mean more lift.

In the end, the two types of hovering cost roughly the same amount of power per gram of body weight. That’s great news for engineers designing the next generation of flapping robots because it suggests two very different, but equally power-efficient, methods for hovering. (Image credit: Lentink Lab/Science News, source; research credit: R. Ingersoll et al.; via Science News; submitted by Kam-Yung Soh)

► With some help from Physics Girl and her friends, Grant...
  12 Nov, 2018


With some help from Physics Girl and her friends, Grant Sanderson at 3Blue1Brown has a nice video introduction to turbulence, complete with neat homemade laser-sheet illuminations of turbulent flows. Grant explains some of the basics of what turbulence is (and isn’t) and gives viewers a look at the equations that govern flow – as befits a mathematics channel! 

There’s also an introduction to Kolmogorov’s theorem, which, to date, has been one of the most successful theoretical approaches to understanding turbulence. It describes how energy is passed from large eddies in the flow to smaller ones, and it’s been tested extensively in the nearly 80 years since its first appearance. Just how well the theory holds, and what situations it breaks down in, are still topics of active research and debate. (Video and image credit: G. Sanderson/3Blue1Brown; submitted by Maria-Isabel C.)

► As someone who has played with her share of vortex cannons, I...
    9 Nov, 2018


As someone who has played with her share of vortex cannons, I can assure you that messing around with smoke generators and vortex rings is a lot of fun. And in this video, Dianna gives things a little twist: she makes the vortex cannon’s mouth a square instead of a circle.

Now, that doesn’t create a square vortex ring. (Vortex rings don’t really do 90-degree corners.) But it does make the vortex ring all neat and wobbly. Whenever you have two vortices near one another (or, in this case, two parts of a vortex line near one another), they interact. As Dianna shows with hurricanes, depending on the direction of rotation and their relative strength, nearby vortices can orbit one another or travel together in straight lines – or they can cause more complicated interactions, like in the case of the square-launched rings.  

I think there may also be some interesting effects here from vortex stretching, but that’s a topic for another day! (Video and image credit: D. Cowern/Physics Girl; see also: LIBLAB; submitted by Maria-Isabel C.)

Symscape

► Long-Necked Dinosaurs Succumb To CFD
  14 Jul, 2017

It turns out that Computational Fluid Dynamics (CFD) has a key role to play in determining the behavior of long-extinct creatures. In a previous post, we described a CFD study of Parvancorina, and now Pernille Troelsen at Liverpool John Moores University is using CFD for insights into how long-necked plesiosaurs might have swum and hunted.

CFD water flow simulation over an idealized plesiosaur: streamline vectors (illustration only, not part of the study)

read more

► CFD Provides Insight Into Mystery Fossils
  23 Jun, 2017

Fossilized imprints of Parvancorina from over 500 million years ago have puzzled paleontologists for decades. What makes it difficult to infer their behavior is that Parvancorina have none of the familiar features we might expect of animals, e.g., limbs or a mouth. In an attempt to shed some light on how Parvancorina might have interacted with their environment, researchers have enlisted the help of Computational Fluid Dynamics (CFD).

CFD water flow simulation over a Parvancorina: forward direction (illustration only, not part of the study)

read more

► Wind Turbine Design According to Insects
  14 Jun, 2017

One of nature's smallest aerodynamic specialists – insects – has provided a clue to more efficient and robust wind turbine design.

Dragonfly: Yellow-winged Darter (license: CC BY-SA 2.5, André Karwath)

read more

► Runners Discover Drafting
    1 Jun, 2017

The recent attempt to break the 2-hour marathon came very close at 2:00:24, with various aids that would be deemed illegal under current IAAF rules. The boldest and most obvious aerodynamic aid appeared to be a Tesla fitted with an oversized digital clock leading the runners by a few meters.

2 Hour Marathon Attempt

read more

► Wind Tunnel and CFD Reveal Best Cycling Tuck
  10 May, 2017

The Giro d'Italia 2017 is in full swing, so how about an extensive aerodynamic study of various cycling tuck positions? You got it, from members of the same team that brought us the study of pursuit vehicles reducing the drag on cyclists.

Chris Froome tuck: Stage 8, Pau/Bagnères-de-Luchon, Tour de France, 2016

read more

► Active Aerodynamics on the Lamborghini Huracán Performante
    3 May, 2017

Early on in the dash to develop ever faster racecars in the 1970s, aerodynamics, and specifically downforce, proved a revelation. Following quickly on from the initial passive downforce initiatives were active aerodynamic solutions. Providing downforce only when needed (i.e., when cornering and braking) and then reverting to a low-drag configuration was an ideal protocol, but it was short-lived due to rule changes in most motor sports (including Formula 1), which banned active aerodynamics. A recent exception to the rule is the highly regulated Drag Reduction System now used in F1. However, road-legal cars are not governed by such regulations, and so we have the gloriously unregulated Lamborghini Huracán Performante.

Active Aerodynamics on the Lamborghini Huracán Performante

read more

CFD Online

► Install ANSYS 18 on Ubuntu (x64)
  23 Oct, 2018
This is the step-by-step procedure that I followed to get ANSYS running on my machine. Following it should avoid library issues.

The required packages from the installation guide (for RedHat/CentOS) are:
• libXp.x86_64
• xorg-x11-fonts-cyrillic.noarch
• xterm.x86_64
• openmotif.x86_64
• compat-libstdc++-33.x86_64
• libstdc++.x86_64
• libstdc++.i686
• gcc-c++.x86_64
• compat-libstdc++-33.i686
• libstdc++-devel.x86_64
• libstdc++-devel.i686
• compat-gcc-34.x86_64
• gtk2.i686
• libXxf86vm.i686
• libSM.i686
• libXt.i686
• xorg-x11-fonts-ISO8859-1-75dpi.noarch
• glibc-2.12-1.166.el6_7.1 (or greater)

1. Therefore I installed:
Code:
sudo apt install xterm lsb csh ssh rpm xfonts-base xfonts-100dpi xfonts-100dpi-transcoded xfonts-75dpi xfonts-75dpi-transcoded xfonts-cyrillic libmotif-common mesa-utils libxm4 libxt6 libxext6 libxi6 libx11-6 libsm6 libice6 libxxf86vm1 libpng12-0 libpng16-16 libtiff5 gcc g++ libstdc++6 libstdc++5 libstdc++-5-dev

2. Manually install libXp (not included in the standard repositories); you can find it at:
HTML Code:
https://pkgs.org/download/libxp6

3. Update the locate database with:
Code:
sudo updatedb

4. Locate the following libs and create soft links based on your system. I did as follows:
Code:
sudo ln -sf /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 /usr/lib/libGL.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1 /usr/lib/libGL.so.1
sudo ln -sf /usr/lib/x86_64-linux-gnu/libGLU.so.1 /usr/lib/libGLU.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXm.so.4 /usr/lib/libXm.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXm.so.4 /usr/lib/libXm.so.3
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXp.so.6 /usr/lib/libXp.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXt.so.6 /usr/lib/libXt.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXext.so.6 /usr/lib/libXext.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libXi.so.6 /usr/lib/libXi.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libX11.so.6 /usr/lib/libX11.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libSM.so.6 /usr/lib/libSM.so
sudo ln -sf /usr/lib/x86_64-linux-gnu/libICE.so.6 /usr/lib/libICE.so
sudo ln -sf /lib/x86_64-linux-gnu/libgcc_s.so.1 /lib/libgcc.so
sudo ln -sf /lib/x86_64-linux-gnu/libc.so.6 /lib/libc.so
sudo ln -sf /lib/x86_64-linux-gnu/libc.so.6 /lib64/libc.so.6

5. Change the command interpreter for shell scripts:
Code:
sudo dpkg-reconfigure dash
Then answer "No" to the question (this makes /bin/sh point to bash instead of dash, which the ANSYS scripts expect).

6. Install ANSYS from DVD1 by launching:
Code:
sudo ./INSTALL

7. Set the environment variables needed to quick-start each ANSYS component. Edit the hidden .bashrc file in your Ubuntu home directory (for example, type gedit ~/.bashrc in a terminal) and paste the following code:

# add environment variables:

#ANSYS

# Workbench
export ANSYS180_DIR=/ansys_inc/v180/ansys
alias wb2='/ansys_inc/v180/Framework/bin/Linux64/runwb2 -oglmesa'
export LD_LIBRARY_PATH=/usr/ansys_inc/v180/Framework/bin/Linux64/Mesa:$LD_LIBRARY_PATH
#export XLIB_SKIP_ARGB_VISUALS=1 #uncomment if you have transparency issues in CFX pre/post or turbogrid
export LANG=en_US.UTF8

#CFX
export PATH=/ansys_inc/v180/CFX/bin:$PATH

#Turbogrid
export PATH=/ansys_inc/v180/TurboGrid/bin:$PATH

#FLUENT
export PATH=/ansys_inc/v180/fluent/bin:$PATH
export FLUENT_ARCH='lnamd64'

8. Reload the file:
source ~/.bashrc

9. To start ANSYS, use any of the following commands:

ANSYS CFX Launcher: CFX5
CFX-Pre: cfx5pre
CFX-Solver Manager: cfx5solve
CFD-Post: cfdpost
TurboGrid: cfxtg
Fluent: fluent
ICEM CFD: icemcfd
ANSYS APDL (command mode): ansys180
ANSYS APDL (graphics mode): ansys180 -g
ANSYS Workbench: runwb2

Append -help to any of these commands to get usage help.

Note: On Ubuntu, ANSYS is installed by default in /usr/ansys_inc. If you change the default installation location, please also modify the path specified in ansyslmd.ini. Open ansyslmd.ini (e.g., with vim); it contains entries such as: ANSYSLI_NOFLEX=1 LICKEYFIL=/usr/ansys_inc/shared_files/licensing/license.dat

If the fonts are not displayed correctly in the GUI, run the following: sudo apt-get install xfonts-75dpi xfonts-100dpi xterm ttf-mscorefonts-installer

You do need to restart your system (it might be enough to restart the X server)!
► Interesting threads
  19 Oct, 2018
► Extract a member function from a class by using autoPtr
  15 Oct, 2018
Quote:
Originally Posted by wyldckat
Greetings mkhm,
  1. I've moved your thread to the sub-forum for Programming: https://www.cfd-online.com/Forums/op...ng-development - given that your question is mostly regarding programming ;)
  2. If you notice that you've posted a new post or thread in the wrong place, please contact the forum moderators, as indicated in the forum rules page, and I quote:

    (Sorry for nitpicking, but since you posted that no one was helping you, I felt compelled to point out why possibly you were not getting any answers... :()
  3. It would be useful to have more details on what you want to do exactly, because it's possible that you are trying to get things to work with the wrong approach.
    • What I mean is that it depends if you are trying to do new reaction calculations or if you are trying to load existing results and process them accordingly, because each strategy requires different approaches.
  4. Knowing which solver you've used to simulate the case also helps, given that each solver can use a different mechanism for creating transport and reaction structures.
  5. And the last question is if this is something that you only want to calculate after the simulation is complete or if it's something you want to do while the simulation is running.
Best regards,
Bruno



Hi Bruno

Thanks for your reply. I am so excited that I finally got an answer to my question. Below are my replies to your questions/hints:

1. Thanks for moving the thread. Indeed, my question is more about C++ programming. But as the context was OpenFOAM, I thought I might get a reply faster by posting it in the post-processing forum of OpenFOAM. I was indeed wrong.

2. Thanks for mentioning that. It will probably help me get answers to my other threads, which did not get any attention.

3. I have to develop some post-processing tools. As I am running steady-state simulations, I can also think of writing a tool that would be executed at the final step of my simulations. I tried to take the sensitivity analysis case of OpenFOAM 1806 and compile it for OpenFOAM 4.x, where I have my modified solvers (my thread about that function object: https://www.cfd-online.com/Forums/op...-properly.html ). However, this seems to be very complicated, as this function object depends on many other classes not available in OpenFOAM 4.x. So I started to think about simplifying the work and going step by step. For instance, writing a post-processing tool where only some useful variables available through virtual functions are extracted. For example, there is this ReactionProxy ( https://github.com/OpenFOAM/OpenFOAM...eactionProxy.C ) or even Reaction (https://cpp.openfoam.org/v6/classFoam_1_1Reaction.html), where the reaction rate coefficients kf (forward) and kr (reverse) could be accessed through a pointer. However, I cannot find the right way of writing an autoPtr to these classes to extract data. Let me explain my strategy:

The clone functions of ReactionProxy are:

[CODE]
//- Construct and return a clone
virtual autoPtr<Reaction<ReactionThermo>> clone() const;

//- Construct and return a clone with new speciesTable
virtual autoPtr<Reaction<ReactionThermo>> clone
(
    const speciesTable& species
) const;
[/CODE]

I'll write something like:

[CODE]
autoPtr<ReactionProxy<gasHThermoPhysics>> pReaction
(
    ReactionProxy<gasHThermoPhysics>::clone(species_)
);
[/CODE]

and for species_:

[CODE]
Info<< "Reading Fake dictionary\n" << endl;
IOdictionary Fake_
(
    IOobject
    (
        "Fake",
        runTime.timeName(),
        mesh,
        IOobject::MUST_READ,
        IOobject::NO_WRITE
    )
);

wordList speciesList = Fake_.lookup("species");
speciesTable species_ = speciesList;
[/CODE]

OR

[CODE]
autoPtr<ReactionProxy<gasHThermoPhysics>> pReaction
(
    ReactionProxy<gasHThermoPhysics>::clone()
);
[/CODE]

In the first case, I have the following error:


[CODE]
cannot call member function ‘Foam::autoPtr<Foam::Reaction<ReactionThermo> > Foam::ReactionProxy<ReactionThermo>::clone(const speciesTable&) const [with ReactionThermo = Foam::sutherlandTransport<Foam::species::thermo<Foam::janafThermo<Foam::perfectGas<Foam::specie> >, Foam::sensibleEnthalpy> >; Foam::speciesTable = Foam::hashedWordList]’ without object
    ReactionProxy<gasHThermoPhysics>::clone(species_)
[/CODE]


For me, species_ is an object to work on.

In the second case:

[CODE]
cannot call member function ‘Foam::autoPtr<Foam::Reaction<ReactionThermo> > Foam::ReactionProxy<ReactionThermo>::clone() const [with ReactionThermo = Foam::sutherlandTransport<Foam::species::thermo<Foam::janafThermo<Foam::perfectGas<Foam::specie> >, Foam::sensibleEnthalpy> >]’ without object
    ReactionProxy<gasHThermoPhysics>::clone()
[/CODE]


4. I took the chemFoam solver's Make folder and its createFields; the Utility_name.C does nothing for the moment.

5. Ideally, I want it to do the calculation after the simulation is complete. However, my purpose now is more about learning C++ by writing some simple tools and elaborating on them afterwards.


Thanks a lot for your time. Don't hesitate to ask if some points need further clarification.

Best regards,
Mary
► Diesel unsteady flamelet model using Ansys Fluent v16.2
  30 Sep, 2018
Dear all,

I am using the diesel unsteady flamelet model for my research work to model the chemical kinetics of n-dodecane/butanol blends. The aim of my study is to see how butanol blends affect the reduction of NOx and soot emissions, using the default Zeldovich and one-step soot models from ANSYS Fluent v16.2. I took help from engine simulations done in Workbench and Fluent tutorials from YouTube, and created our own mesh and Fluent case file too. We initially made a case for the well-known ECN Spray A experiments (https://ecn.sandia.gov/ecn-data-search/) to model the spray characterization of liquid and vapor penetration length, and we got a good match. We are facing problems with the following points in our engine simulations:

1. For better convergence of the continuity equation, we reduced the time-step size near the start of injection, after which combustion starts, but our simulations stop because of a writing error. We are running in parallel in order to reduce the overall time per case. The simulations are run only from inlet valve closing (IVC) to exhaust valve opening (EVO), i.e., the compression and expansion strokes. When I switch to serial mode the simulation runs fine, but it takes five times the overall time to complete one case, which is not worthwhile.

2. We are comparing our simulated pressure with experimental pressure data from an in-house test set-up; the engine piston bowl geometry is the same, and we maintain the same compression ratio in CFD too. We are getting a mass-loss problem in the simulation. We turned on the crevice model too, as our engine is a custom-made extended-top-head engine, so the mass loss will be very high and can be seen in the CFD too. Is it happening because of crevice loss, or something else?

3. The NOx output from the Zeldovich model is the ''volume-averaged mass fraction of pollutant NO''. What is its SI unit, and how do I convert it into a ppm level? The same question applies to the soot mass fraction: what is its SI unit?

4. I get a match between the CFD in-cylinder pressure and the experimental pressure data only when I keep an initial gauge pressure of -13000 Pa, which is wrong. What exactly is the solution for this? If we change it to a positive value, the cylinder pressure rises to 120 bar for the same case that matches with the negative gauge pressure.

5. If anyone has good experience with the scalar dissipation rate in the flamelet model, it would help me to understand the PDF generation and combustion, and how these parameters affect all the results. Most of the time I used the default values from Fluent, which may be wrong.

Pressure–velocity coupling is done using the PISO scheme, and the second-order upwind scheme is used for the other equations.

The in-cylinder temperature reaches 2200 K, consistent with the adiabatic temperature.

If anyone has done IC engine simulation using ANSYS Fluent in the past and can share a case file, it would be helpful for me to reverse-engineer my case and solve my problems.

For the spray simulation, I used the droplet injection particle type, but when I used the multicomponent particle type I had to use different KH-RT constants for the same case, or else my penetration length would increase. I don't have any explanation for this.

I hope I will get solutions to these problems soon.

you can contact me on my email: me12m1002@iith.ac.in

Thank you
► TestCase --> compressible::turbulentTemperatureCoupledBaffleMixed
  22 Sep, 2018
The following tutorial shows the usage of:
Code:
compressible::turbulentTemperatureCoupledBaffleMixed
for thin layers of material that cannot be resolved by the meshing process.
The test case is for solid regions only.

Compatible with OpenFOAM 5.x, 6.x and dev.


Code:
./run to select a 1D or 2D non-conformal mesh
./clean to reset the case folder
Attached Files
File Type: gz thermalBaffles_BC_CONDUCTION.tar.gz (52.1 KB, 55 views)
► Validation Cases Using Ansys Fluent
  18 Sep, 2018
Objective:

Steady state heat conduction with fixed temperature at both ends.

Problem Definition:

Two ends of a wall are subjected to constant temperatures T1 = 393 K and T2 = 323 K.
Wall length L = 0.2 m and thickness 0.001 m
Thermal conductivity, k = 1.2 W/m-K

Methodology:

1. Ansys Fluent was used for simulation
2. Initial condition of temperature 323 K was given to domain
3. Left wall temp = 393 K ; Right wall temp = 323 K
4. Top and bottom boundaries were taken to be adiabatic i.e. heat flux q = 0
5. Compared with analytical solution

Analytical Solution:

T(x) = ((T2-T1)/L)*x + T1
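
A quick way to check the Fluent result against this profile is a minimal sketch like the one below (assuming the values given above: T1 = 393 K, T2 = 323 K, L = 0.2 m; the sampling locations are arbitrary):

Code:
#include <cstdio>

int main()
{
    // Analytical 1D conduction profile: T(x) = ((T2 - T1)/L)*x + T1
    const double T1 = 393.0;   // left wall temperature [K]
    const double T2 = 323.0;   // right wall temperature [K]
    const double L  = 0.2;     // wall length [m]

    for (int i = 0; i <= 4; ++i)
    {
        double x = i*L/4.0;
        double T = (T2 - T1)/L*x + T1;
        printf("x = %.3f m   T = %.1f K\n", x, T);
    }
    return 0;
}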

Conclusion:

The numerical simulation carried out with Fluent shows good agreement with the analytical result.
Attached Images
File Type: jpg static_temp.jpg (56.9 KB, 17 views)

curiosityFluids

► Rayleigh–Bénard Convection Using buoyantBoussinesqPimpleFoam
  13 Jun, 2017

Here is an extremely simple simulation to set up that has a surprisingly beautiful output. In this post, we will simulate the classic Rayleigh–Bénard convection (see Wikipedia) in 3D using the buoyant solver buoyantBoussinesqPimpleFoam.

buoyantBoussinesqPimpleFoam is a solver for buoyant, turbulent, incompressible flows. The incompressible part of the solver comes from the fact that it uses the Boussinesq approximation for buoyancy which is only suitable for flows with small density changes (aka incompressible). A good source for the equations in this solver is this blog post.

Simulation Set-up

The basic set-up for this case is simple: a hot bottom surface, a cold top surface, either cyclic or zero-gradient sides, and some initial conditions.

For this example, I used the properties of water. I set the bottom plate at 373 K (don’t ask me why… I know it’s close to the boiling point of water) and the top plate at 273 K. For this case, we will not use any turbulence modeling and will simply use a laminar model (this simply means there is only molecular viscosity; no simplifications are applied to the equations).

Geometry and Mesh

The geometry is shown below. As the geometry is so simple… I will not go over the blockMesh set-up. The mesh discretization that I used was simpleGrading (i.e. no inflation), with 200x200x50 cells.

[Figure: labelled geometry]

Constant

For this case, we will simulate water. The transportProperties file should look like:

transportModel Newtonian;

// Laminar viscosity
nu [0 2 -1 0 0 0 0] 1e-06;

// Thermal expansion coefficient
beta [0 0 0 -1 0 0 0] 0.000214;

// Reference temperature
TRef [0 0 0 1 0 0 0] 300;

// Laminar Prandtl number
Pr [0 0 0 0 0 0 0] 7.56;

// Turbulent Prandtl number (not used)
Prt [0 0 0 0 0 0 0] 0.7;

I don’t typically delete unused entries from dictionaries. This makes using previous simulations as templates much easier. Therefore note that the turbulent Prandtl number is in the dictionary but it is not used.

How do we select the reference temperature TRef for buoyantBoussinesqPimpleFoam? Recall that when the Boussinesq buoyancy approximation is used, the solver does not solve for the density. It solves for the relative density using the linear function:

\frac{\rho}{\rho_0}=1-\beta \left(T-T_0\right)

Therefore, I think it makes sense that we should choose a temperature for T_{ref} that is somewhere in the range of the simulation. Thus I chose Tref=300. Somewhere in the middle!
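
As a quick sanity check on the “small density changes” assumption, here is a minimal sketch (using the water properties from the transportProperties file above: beta = 0.000214 1/K, TRef = 300 K) that evaluates the Boussinesq relative density across the 273–373 K range of this case; the deviation from 1 stays under about 2%:

#include <cstdio>
#include <cmath>

int main()
{
    // Boussinesq approximation: rho/rho0 = 1 - beta*(T - TRef)
    const double beta = 0.000214;  // thermal expansion coefficient [1/K]
    const double TRef = 300.0;     // reference temperature [K]

    for (double T = 273.0; T <= 373.0; T += 25.0)
    {
        double rhoRatio = 1.0 - beta*(T - TRef);
        printf("T = %.0f K   rho/rho0 = %.5f (%.2f %% deviation)\n",
               T, rhoRatio, 100.0*std::fabs(rhoRatio - 1.0));
    }
    return 0;
}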

And the turbulenceProperties file is:

simulationType laminar;

RAS
{
 RASModel laminar;

turbulence off;

printCoeffs off;
}

The g file tells the solver the acceleration due to gravity, as well as the direction:

dimensions [0 1 -2 0 0 0 0];
value (0 -9.81 0);

Boundary Conditions

In the “zero” folder, we need the following files: p, p_rgh, T, U, and alphat (this file needs to be present… however, it is not used given the laminar simulationType).

T:

dimensions [0 0 0 1 0 0 0];

internalField uniform 273;

boundaryField
{
 floor
 {
 type fixedValue;
 value uniform 373;
 }
 ceiling
 {
 type fixedValue;
 value uniform 273;
 }
 fixedWalls
 {
 type zeroGradient;
 }
}

p_rgh:

dimensions [0 2 -2 0 0 0 0];

internalField uniform 0;

boundaryField
{
floor
{
type fixedFluxPressure;
rho rhok;
value uniform 0;
}

ceiling
{
type fixedFluxPressure;
rho rhok;
value uniform 0;
}

fixedWalls
{
type fixedFluxPressure;
rho rhok;
value uniform 0;
}
}

p:

dimensions [0 2 -2 0 0 0 0];

internalField uniform 0;

boundaryField
{
 floor
 {
 type calculated;
 value $internalField;
 }

ceiling
 {
 type calculated;
 value $internalField;
 }

fixedWalls
 {
 type calculated;
 value $internalField;
 }
}

U:

dimensions [0 1 -1 0 0 0 0];

internalField uniform (0 0 0);

boundaryField
{
 floor
 {
 type noSlip;
 }

ceiling
 {
 type noSlip;
 }

fixedWalls
 {
 type noSlip;
 }
}

Simulation Results

The results (as always) are the best part. But especially for this case since they are so nice to look at! I have made a couple animations of temperature fields and contours. Enjoy.

[Animation: 3D Temperature Contours]

 

[Animation: Temperature Field – Slice Through xy Plane]

 

Conclusion

This case demonstrated the simple set up of a case using buoyantBoussinesqPimpleFoam. The case (Rayleigh-Bénard convection) was simulated in 3D on a fine grid.

Comments and questions are welcome! Keep in mind I set this case up very quickly.

 

► Time-Varying Cylinder Motion in Cross-flow: timeVaryingUniformFixedValue
  10 Jun, 2017

This post is a simple demonstration of the timeVaryingUniformFixedValue boundary condition. This boundary condition allows a Dirichlet-type boundary condition to be varied in time. To demonstrate, we will modify the oscillating cylinder case.

Set-Up

Instead of using the oscillating boundary condition for point displacement, we will have the cylinder do two things:

  • Move in a circular motion
  • Move in a sinusoidal decay motion

The basics of this boundary condition are extremely simple. Keep in mind that although (here) we are modifying the pointDisplacement boundary condition for the cylinder, the basics of this BC would be the same if you were doing a time-varying boundary condition for, say, pressure or velocity.

In the pointDisplacement file:

 cylinder
 {
 type timeVaryingUniformFixedValue;
 fileName "prescribedMotion";
 outOfBounds clamp;
 }

fileName points to the file where the time-varying boundary condition is defined. Here we used a file called prescribedMotion; however, you can name it whatever you want. The outOfBounds entry dictates what the simulation should do if the simulation time progresses outside of the time domain defined in the file.

The additional file containing the desired motion prescribedMotion is formatted in the following way:

(
( 0 (0 0 0))
( 0.005 (-0.0000308418795848531 0.00392682932795517 0))
( 0.01 (-0.0001233599085671 0.00785268976953207 0))
( 0.015 (-0.000277531259507496 0.0117766126774107 0))
( 0.02 (-0.00049331789293211 0.0156976298823283 0))
...
( 9.99 (-0.0001233599085671 -0.00785268976953189 0))
( 9.995 (-0.0000308418795848531 -0.00392682932795479 0))
( 10 (0 -3.06161699786838E-016 0))
)

The first column is the time in seconds, and the vector defines the point displacement. In the present tutorial, these points were calculated in libreOffice and then exported into the text file.  I arbitrarily made up the motions purely for the sake of making this blog post.

The circular motion was defined as:

x=0.25\cos\left(\pi t\right)-0.25 and y=0.25\sin\left(\pi t\right)

Decaying sinusoidal motion was:

y=\sin(\pi t) \exp\left(-t/2\right)
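
For reference, here is a minimal sketch that generates a prescribedMotion table in the format shown above (this assumes the circular motion defined here, with the 0.005 s increment and 10 s end time visible in the file excerpt; swap in the decaying-sinusoid expressions for the second case):

#include <cstdio>
#include <cmath>

int main()
{
    // Write ( time (dx dy dz) ) entries for the circular motion:
    // x = 0.25*cos(pi*t) - 0.25,  y = 0.25*sin(pi*t)
    const double pi   = 3.14159265358979323846;
    const double dt   = 0.005;   // time increment [s]
    const double tEnd = 10.0;    // end time [s]

    FILE* f = fopen("prescribedMotion", "w");
    fprintf(f, "(\n");
    for (int i = 0; i*dt <= tEnd + 1e-9; ++i)
    {
        double t = i*dt;
        double x = 0.25*cos(pi*t) - 0.25;
        double y = 0.25*sin(pi*t);
        fprintf(f, "( %g (%g %g 0))\n", t, x, y);
    }
    fprintf(f, ")\n");
    fclose(f);
    return 0;
}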

The rest of the set-up is identical to the set-up in the oscillating cylinder example. The solver pimpleDyMFoam is then run.

Results

Circular Motion


Sinusoidal Decay


Conclusions

This post demonstrated how a more complicated motion can be prescribed by using a little math and the timeVaryingUniformFixedValue boundary condition. Always like to hear questions and comments! Has anybody else done something like this?

 

► Equations for Steady 1D Isentropic Flow
    5 Dec, 2016

The equations used to describe steady 1D isentropic flow are derived from conservation of mass, momentum, and energy, as well as an equation of state (typically the ideal gas law).

These equations are typically described as ratios between the local static properties (p, T, \rho) and their stagnation property as a function of Mach number and the ratio of specific heats, \gamma. Recall that Mach number is the ratio between the velocity and the speed of sound, a.

These ratios are given here:

Temperature: T_o/T = \left(1+\frac{\gamma -1}{2} M^2\right)

Pressure: P_o/P = \left(1+\frac{\gamma -1}{2} M^2\right)^{\frac{\gamma}{\gamma-1}}

Density: \rho_o/\rho = \left(1+\frac{\gamma -1}{2} M^2\right)^{\frac{1}{\gamma-1}}

In addition to the relationships between static and stagnation properties, 1D nozzle flow offers an equation regarding the choked cross-sectional flow area (recall that the flow is choked when M=1.)

A/A^* = \frac{1}{M}\left(\left(\frac{2}{\gamma+1}\right)\left(1+\frac{\gamma -1}{2} M^2\right)\right)^{\frac{\gamma+1}{2\left(\gamma-1\right)}}
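
As a quick worked illustration of these relations, here is a minimal sketch (assuming air with \gamma = 1.4 and an arbitrary Mach number of 2) that evaluates the four ratios above:

#include <cstdio>
#include <cmath>

int main()
{
    const double gamma = 1.4;   // ratio of specific heats (air)
    const double M     = 2.0;   // Mach number to evaluate

    // Common factor: 1 + (gamma - 1)/2 * M^2
    double f = 1.0 + 0.5*(gamma - 1.0)*M*M;

    double T0_T     = f;
    double p0_p     = pow(f, gamma/(gamma - 1.0));
    double rho0_rho = pow(f, 1.0/(gamma - 1.0));
    double A_Astar  = (1.0/M)*pow((2.0/(gamma + 1.0))*f,
                                  (gamma + 1.0)/(2.0*(gamma - 1.0)));

    printf("M = %.2f: T0/T = %.4f, p0/p = %.4f, rho0/rho = %.4f, A/A* = %.4f\n",
           M, T0_T, p0_p, rho0_rho, A_Astar);
    return 0;
}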

Some excellent references for these equations are:

  • Gas Dynamics Vol. I – Zucrow and Hoffman – 1976
  • Gas Dynamics – John and Keith – 2nd Ed. – 2006

 

► Establishing Grid Convergence
  10 Sep, 2016

Establishing grid convergence is a necessity in any numerical study. It is essential to verify that the equations are being solved correctly and that the solution is insensitive to the grid resolution. Some great background and details can be found from NASA:

https://www.grc.nasa.gov/WWW/wind/valid/tutorial/spatconv.html

First, here is a summary of the equations and steps discussed here (in case you don’t want to read the whole example):

  1. Complete at least 3 simulations (Coarse, medium, fine) with a constant refinement ratio, r, between them (in our example we use r=2)
  2. Choose a parameter indicative of grid convergence. In most cases, this should be the parameter you are studying. ie if you are studying drag, you would use drag.
  3. Calculate the order of convergence, p, using:
    • p=\ln(\frac{(f_3-f_2)}{(f_2-f_1)}) / \ln(r)
  4. Perform a Richardson extrapolation to predict the value at h=0
    • f_{h=0}=f_{fine}+\frac{f_1-f_2}{r^p-1}
  5. Calculate grid convergence index (GCI) for the medium and fine refinement levels
    • GCI=\frac{F_s |e|}{r^p-1}
  6. Ensure that grids are in the asymptotic range of convergence by checking:
    • \frac{GCI_{2,3}}{r^p \times GCI_{1,2}} \approxeq 1

So what is a grid convergence study? Well, the gist of it is that you refine the mesh several times and compare the solutions to estimate the error from discretization. There are several strategies to do this. However, I have always been a fan of the following method: create a very fine grid and simulate the flow problem, then reduce the grid density twice, creating a medium grid and a coarse grid. To keep the process simple, the ratio of refinement should be the same for each step, i.e. if you reduce the grid spacing by 2 in each direction between the fine and medium grid, you should reduce it again by 2 between the medium and coarse grid. In the current example, I generated three grids for the cavity problem with a refinement ratio of 2:

  • Fine grid: 80 cells in each direction – (6400 cells)
  • Medium grid: 40 cells in each direction – (1600 cells)
  • Coarse grid: 20 cells in each direction – (400 cells)

Velocity contour plots are shown in the following figures:

We can see from the figures that the quality of the simulation improves as the grid is refined. However, the point of a grid convergence study is to quantify this improvement and to provide insight into the actual quality of the fine grid.

The accuracy of the fine grid is then examined by calculating the effective order of convergence, performing a Richardson extrapolation, and calculating the grid convergence index. As well, (as stated in the article from NASA), it is helpful to ensure that you are in the asymptotic range of convergence.

What are we examining?

It is very important at the start of a CFD study to know what you are going to do with the result. This is because different parameters will converge differently. For example, if you are studying a higher order parameter such as local wall friction, your grid requirements will probably be more strict than if you are studying an integral (and hence lower order) parameter such as coefficient of drag. You only need to ensure that the property or parameter that you are studying is grid independent. For example, if you were studying the pressure increase across a shockwave, you would not check that wall friction somewhere else in the simulation was converged (unless you were studying wall friction as well).  If you want to be able to analyze any property in a simulation, both high and low order, then you should do a very rigorous grid convergence study of primitive, integrated and derived variables.

In our test case, let’s pretend that in our research or engineering project we are interested in the centerline pressure and velocity. In particular, let’s say we are interested in the profiles of these variables along the centerline as well as their peak values. The centerline profiles of velocity and pressure are shown in the following figures:

Calculate the effective order of convergence

From our simulations, we have generated the following data for minimum pressure along the centerline, and maximum velocity along the centerline:

[Table: minimum centerline pressure and maximum centerline velocity for each grid]

The order of convergence, p, is calculated with the following equation:

p=\ln(\frac{(f_3-f_2)}{(f_2-f_1)}) / \ln(r)

where r is the ratio of refinement, and f1 to f3 are the results from each grid level.

Using the data we found, p = 1.84 for the minimum pressure and p=1.81 for the maximum velocity.

Perform Richardson extrapolation of the results

Once we have an effective order, p, we can do a Richardson extrapolation. This is an estimate of the true value of the parameter we are examining based on our order of convergence. The extrapolation can be performed with the following equation:

f_{h=0}=f_{fine}+\frac{f_1-f_2}{r^p-1}

recall that r is the refinement ratio and is h_2/h_1 which in this case is 2.

Using this equation we get the Richardson extrapolated results:

  • P_{min} at h=0  -> -0.029941
  • V_{max} at h=0  -> 0.2954332

The results are plotted here:

[Figure: Richardson extrapolation of maximum velocity]

[Figure: Richardson extrapolation of minimum pressure]

Calculate the Grid Convergence Index (GCI)

Grid convergence index is a standardized way to report grid convergence quality. It is calculated at refinement steps. Thus we will calculate a GCI for steps from grids 3 to 2, and from 2 to 1.

The equation to compute grid convergence index is:

GCI=\frac{F_s |e|}{r^p-1}

where e is the error between the two grids and F_s is an optional (but always recommended) safety factor.

Now we can calculate the grid convergence indices for the minimum pressure and maximum velocity.

Minimum pressure

  • GCI_{2,3} = 1.25 \times |\frac{-0.028836-(-0.025987)}{-0.028836}|/(2^{1.84}-1) \times 100 \% = 4.788 \%
  • GCI_{1,2} = 1.25 \times |\frac{-0.029632-(-0.028836)}{-0.029632}|/(2^{1.84}-1) \times 100 \% = 1.302 \%

Max velocity

  • GCI_{2,3} =1.25 \times | \frac{0.2892-0.27359}{0.2892}|/(2^{1.81}-1) \times 100 \% =2.69187 \%
  • GCI_{1,2} = 1.25 \times |\frac{0.29365-0.2892}{0.29365}|/(2^{1.81}-1) \times 100 \% = 0.7559 \%

Check that we are in the asymptotic range of convergence

It is also necessary to check that we are examining grid convergence within the asymptotic range of convergence. If we are not in the asymptotic range, this means that we are not asymptotically approaching a converged answer, and thus our solution is definitely not grid independent.

With three grids, this can be checked with the following relationship:

\frac{GCI_{2,3}}{r^p \times GCI_{1,2}} \approxeq 1

If we are in the asymptotic range then the left-hand side of the above equation should be approximately equal to 1.

In our example we get:

Minimum Pressure

1.0276 \approxeq 1

Maximum Velocity

1.0154 \approxeq 1
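
To make the above procedure easy to reuse, here is a minimal sketch that reproduces the minimum-pressure numbers worked out above (fine, medium and coarse values as listed, r = 2, Fs = 1.25); calling it with the maximum-velocity data gives the corresponding results:

#include <cstdio>
#include <cmath>

// f1 = fine, f2 = medium, f3 = coarse grid values; r = refinement ratio; Fs = safety factor
void gridConvergence(double f1, double f2, double f3, double r, double Fs)
{
    // Order of convergence: p = ln((f3 - f2)/(f2 - f1)) / ln(r)
    double p = log((f3 - f2)/(f2 - f1))/log(r);

    // Richardson extrapolation to h = 0
    double f0 = f1 + (f1 - f2)/(pow(r, p) - 1.0);

    // Grid convergence indices (in percent), as in the formulas above
    double gci23 = Fs*fabs((f2 - f3)/f2)/(pow(r, p) - 1.0)*100.0;
    double gci12 = Fs*fabs((f1 - f2)/f1)/(pow(r, p) - 1.0)*100.0;

    // Asymptotic range check: should be approximately 1
    double check = gci23/(pow(r, p)*gci12);

    printf("p = %.2f, f(h=0) = %.6f, GCI23 = %.3f %%, GCI12 = %.3f %%, check = %.4f\n",
           p, f0, gci23, gci12, check);
}

int main()
{
    // Minimum centerline pressure from the three cavity grids
    gridConvergence(-0.029632, -0.028836, -0.025987, 2.0, 1.25);
    return 0;
}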

Applying Richardson extrapolation to a range of data

As an alternative to choosing a single value like minimum pressure or maximum velocity, Richardson extrapolation can be applied to a range of data. For example, we can use the equation for Richardson extrapolation to estimate the entire profile of pressure and velocity along the centerline at h=0.

This is shown here:

Conclusions and Additional References

In this post, we used the cavity tutorial from OpenFOAM to do a simple grid convergence study. We established an order of convergence, performed Richardson extrapolation, calculated grid convergence indices (GCI) and checked for the asymptotic range of convergence.

As I said before, the NASA resource linked above is very helpful and covers a similar example. As well, the papers by Roache are excellent reading for anybody doing numerical analysis in fluids.

► The Ahmed Body
    7 Sep, 2016

The Ahmed body is a geometric shape first proposed by Ahmed and Ramm in 1984. The shape provides a model to study geometric effects on the wakes of ground vehicles (like cars).


In this post, I will use simpleFoam to simulate the Ahmed body at a Reynolds number of 10^6 using the k-omega SST turbulence model. The geometry was meshed using cfMesh which I will briefly discuss as well. Here is a breakdown of this post:

  1. Geometry Definition
  2. Meshing with cfMesh
  3. Boundary Conditions
  4. Results

The files for this case can be downloaded here:

Download Case Files

Note: I ran this case on my computer with 6 Intel i7 (3.2 GHz) cores and 32 GB of RAM. Throughout the simulation, about 20 GB of RAM was used.

Geometry Definition

STL Creation

The meshing utility cfMesh is similar to snappyHexMesh in that it depends on a geometry file of some type (.stl etc) to create the mesh. But it is different in that the entire domain must be part of the definition.

The Ahmed body geometry can be found at: http://www.cfd-online.com/Wiki/File:Ahmed.gif

For this simulation, I generated the geometry using SolidWorks. But this wasn’t for any particular reason other than that it was quick since I am familiar with it.

Preparation for Meshing

Once you have an STL file, you could go straight ahead to meshing it with cfMesh. However, some simple preparations to the STL geometry can improve the quality of the mesh created, and make setting up the case easier.

In particular, when you create the STL file in SolidWorks (or your 3D modeller of choice)  it contains no information about the boundaries and patches. As well, cfMesh works best if the geometry is defined using a .fms or .ftr file format.

Use the surfaceFeatureEdge utility to extract edge information and create a .ftr file. First, let’s extract edge and face information from our STL file. We also define an angle. This angle tells cfMesh that any angle change larger than this (in our case I chose 20 degrees) is a feature edge that must be matched.

surfaceFeatureEdge volume.stl volume.ftr -angle 20

After we run this, the new file volume.ftr contains a bunch of face and patch information. 13 surfaces with feature edges were extracted. The first 6 (volume_0 to volume_5) are the boundaries of the simulation (inlet, outlet, ground, front, back, and top).

mergeSurfacePatches volume.ftr ahmed -patchNames '(volume_6 volume_7 volume_8 volume_9 volume_10 volume_11 volume_12)'

After running this command, volume.ftr now contains 7 patches. We are now ready to move on to setting up cfMesh.

Meshing with cfMesh

Set up meshDict file in the system folder

Similar to snappyHexMesh and blockMesh, cfMesh uses a dictionary file to control its parameters: meshDict. In this dictionary file we will be modifying a few parameters.

Tell cfMesh what file is to be meshed:

surfaceFile "volume_transformed.ftr";

Set the default grid size:

maxCellSize 0.2;

Set up refinement zones:

We want to set up two refinement zones; a larger one to capture most of the flow further away from the body (including the far wake), and a smaller more refined one to capture the near wake and the flow very close to the Ahmed body.

objectRefinements
{
 box1
 {
    cellSize 25e-3;
    type box;
    centre (2.4 0 6.5);
    lengthX 1;
    lengthY 1;
    lengthZ 3.5;
 }

box2
 {
    cellSize 5e-3;
    type box;
    centre (2.4 0 7);
    lengthX 0.5;
    lengthY 1;
    lengthZ 2;
 }

}

Set up boundaries to be renamed:

renameBoundary
{
 newPatchNames
 {
 volume_0
 {
 newName ground;
 type wall;
 }
 volume_1
 {
 newName back;
 type wall;
 }
 volume_2
 {
 newName inlet;
 type patch;
 }
 volume_3
 {
 newName front;
 type wall;
 }
 volume_4
 {
 newName outlet;
 type wall;
 }
 volume_5
 {
 newName top;
 type patch;
 }
 ahmed
 {
 newName ahmed;
 type wall;
 }
}

Set up boundary layering:

We require boundary layers on both the Ahmed body and the volume_0 (ground) patch. Recall that the Ahmed body is surface-mounted!

boundaryLayers
{
 patchBoundaryLayers
 {
 ahmed
 {
 nLayers 10;
 thicknessRatio 1.1;
 maxFirstLayerThickness 5e-3;
 }
 volume_0 
 {
 nLayers 10;
 thicknessRatio 1.05;
 maxFirstLayerThickness 10e-3;
 }

 }
}

Run cfMesh

We want to create a hex-dominant grid. This means that the 3D grid will consist primarily of hex cells. To achieve this we will use the cartesianMesh utility from cfMesh.

The results are shown below. The final mesh consisted of approximately 16.9 million cells. The majority of the cells were hexahedra (approximately 99%).

 

Boundary Conditions for the Solver

For this case, we are going to run a steady-state RANS simulation using the kwSST model and the solver simpleFOAM. This is simply to demonstrate the running of the solver.

The boundary conditions used are summarized in the following table:

[Table: boundary conditions]

As you can see, I have used wall functions for the wall boundary conditions. This is due to the very small cells that would be required to resolve the boundary layer on the ground, as well as on the Ahmed body, which is at a Reynolds number of one million.

Simulation Results

The simulation took about a day and a half on 6 cores. Throughout the simulation, about 20 GB of RAM was used.

Cross-Section:

Streamlines and pressure on surface:


Vorticity surface in the near-wake


Conclusions

In this post, we meshed and simulated a surface-mounted Ahmed body at a Reynolds number of one million. We meshed it using the open-source meshing add-on cfMesh. We then solved it as a steady-state RANS simulation using the kwSST turbulence model, and the simpleFOAM solver.

The results gave some nice figures and a qualitatively correct result! And it was pretty fun. cfMesh was extremely easy to use and required much less user input than its OpenCFD counterpart snappyHexMesh.

Some references:

For more information on the Ahmed body:

http://www.cfd-online.com/Wiki/Ahmed_body

Some papers studying the Ahmed body:

-See the reference on the above CFD Online page!

 

As usual please comment and let me know what you think!

 

Cheers,

curiosityFluids


► Oscillating Cylinder in Laminar Crossflow – pimpleDyMFoam
  19 Jul, 2016

In this post I am going to simulate an oscillating cylinder in a cross-flow… just for fun… and to provide an additional tutorial case for those wishing to use some of the dynamic meshing features of OpenFOAM.

The case I am going to simulate is a cylinder in a Reynolds number 200 cross-flow (U=2 m/s, D=1 m, nu = 0.01 m^2/s), oscillating at a rate of 0.2 hz.

Tutorial Files

The tutorial files for this case can be downloaded from here:

Download Tutorial Files

Please let me know if the download does not work or if there is a problem running the tutorial files. Note: I ran this case in parallel on a relatively fast 6-core computer. It will take a long time if you try to run it as single core.

Mesh Generation

For this simulation, I built a simple two-dimensional O-grid around the cylinder, joined to a series of blocks making a rectangular domain. I built this using the native OpenFOAM meshing utility blockMesh (which I like a lot).

Fig: Grid

If you are wondering about the blockMesh set-up… I intend to do a blockMesh tutorial post… not that it’s all that necessary since the OpenFOAM manual covers it pretty well.

Case Set-up

dynamicMeshDict

When running a dynamic mesh case, a mesh-motion solver runs as part of the solution and solves for the new grid at each timestep. Several are available in OpenFOAM, and I am not really trying to do a full post on that right now. So I’ll just tell you that in this case I decided to use the displacementLaplacian solver.

Along with the solver one must define the coefficients that go with the solver. For most of the solvers this means setting the diffusivity for the Laplace solver. Since I am relatively new to these types of solutions I did what we all do… and turned to the great CFD online forum! A discussion in this post (http://www.cfd-online.com/Forums/openfoam-meshing/97299-diffusivity-dynamicmeshdict.htm) made me think that a good starting point would be the inverseDistance model for the mesh diffusivity.

In the end my dynamicMeshDict (which belongs in the constant folder) looked like:

dynamicFvMesh dynamicMotionSolverFvMesh;

motionSolverLibs ( "libfvMotionSolvers.so" );

solver displacementLaplacian;


displacementLaplacianCoeffs
{
      diffusivity inverseDistance 1(cylinder);
}

The solver produces the grid movements through time as the simulation is performed. Here is what the mesh looks like while it is moving! I added the stationary mesh and the moving mesh. Click on them to get a clearer picture 🙂

Boundary Conditions

The set-up for this case is simple: an inlet boundary (on the far left) where I specified the incoming velocity and pressure (both are uniform fixedValue). The top and bottom boundaries are slip boundaries (but could also be freestream depending on your preferred set-up style).

The cylinder itself requires some special treatment because of the moving mesh. It must obey the no-slip condition right? So do we set it as uniform (0 0 0) ? No! The cylinder is moving! Luckily there is a handy boundary condition for this in the U file:

 cylinder
 {
      type movingWallVelocity;
      value uniform (0 0 0);
 }

For a typical simulation using pimpleFoam a p and U file would be all that are required. However, since we are doing a moving mesh simulation there is another parameter that must be solved for and requires boundary conditions… pointDisplacement.

For the pointDisplacement boundary conditions, we know that all of the outer edges should NOT move. Therefore they are all fixed with a type of fixedValue and  a value of uniform (0 0 0).

The cylinder, however, is moving and requires a definition. In this simulation we are simulating an oscillating cylinder. Since we are using the displacement solver, the type is oscillatingDisplacement. We input an omega (rad/s; here 2π × 0.2 Hz ≈ 1.256 rad/s) and an amplitude (m) in the following way:

 cylinder
 {
 type oscillatingDisplacement;
 omega 1.256; 
 amplitude (0 0.5 0); // Max piston stroke
 value uniform (0 0 0);
 }

Results

Yay! Now its time to look at the results. Well since I am not doing this for any particular scientific study… let’s look at some pretty pictures!

Here is an animation of vorticity:

Wake of Oscillating Cylinder

Looks pretty nice! I personally have a big nerd love for vortex shedding…. I don’t know why.

Obviously if you intend to do any scientific or engineering work with this type of problem you would need to think very carefully about the grid resolution, diffusivity model, Reynolds number, oscillation frequency  etc. All of these were arbitrarily selected here to facilitate the blog post and to provide a nice tutorial example!

Conclusion

In this post I briefly covered the set-up of this type of dynamic meshing problem. The main difference for running a dynamic mesh case is that you require a dynamic mesh solver (you must specify in the dynamicMeshDict), and you also require boundary conditions for that solver.

Let me know if there are any problems with this blog post or with the tutorial files provided.

Hanley Innovations

► Hanley Innovations Upgrades Stallion 3D to Version 5.0
  18 Jul, 2017
The CAD for the King Air was obtained from Thingiverse


Stallion 3D is a 3D aerodynamics analysis software package developed by Dr. Patrick Hanley of Hanley Innovations in Ocala, FL. Starting with only an STL file, Stallion 3D is an all-in-one digital tool that rapidly validates conceptual and preliminary aerodynamic designs of aircraft, UAVs, hydrofoils and road vehicles.

  Version 5.0 has the following features:
  • Built-in automatic grid generation
  • Built-in 3D compressible Euler Solver for fast aerodynamics analysis.
  • Built-in 3D laminar Navier-Stokes solver
  • Built-in 3D Reynolds Averaged Navier-Stokes (RANS) solver
  • Multi-core flow solver processing on your Windows laptop or desktop using OpenMP
  • Inputs STL files for processing
  • Built-in wing/hydrofoil geometry creation tool
  • Enables stability derivative computation using quasi-steady rigid body rotation
  • Up to 100 actuator discs (RANS solver only) for simulating jets and prop wash
  • Reports the lift, drag and moment coefficients
  • Reports the lift, drag and moment magnitudes
  • Plots surface pressure, velocity, Mach number and temperatures
  • Produces 2-D plots of Cp and other quantities along constant-coordinate lines along the structure
The introductory price of Stallion 3D 5.0 is $3,495 for the yearly subscription or $8,000.  The software is also available in Lab and Class Packages.

 For more information, please visit http://www.hanleyinnovations.com/stallion3d.html or call us at (352) 261-3376.
► Airfoil Digitizer
  18 Jun, 2017


Airfoil Digitizer is a software package for extracting airfoil data files from images. The software accepts images in the jpg, gif, bmp, png and tiff formats. Airfoil data can be exported as AutoCAD DXF files (line entities), UIUC airfoil database format and Hanley Innovations VisualFoil Format.

The following tutorial shows how to use Airfoil Digitizer to obtain hard-to-find airfoil ordinates from pictures.




More information about the software can be found at the following url:
http://www.hanleyinnovations.com/airfoildigitizerhelp.html

Thanks for reading.


► Your In-House CFD Capability
  15 Feb, 2017

Have you ever wished for the power to solve your 3D aerodynamics analysis problems within your company at the push of a button?  Stallion 3D gives you this very power using your MS Windows laptop or desktop computers. The software provides accurate CL, CD, & CM numbers directly from CAD geometries without the need for user grid generation and costly cloud computing.

Stallion 3D v4 is the only MS Windows software that enables you to solve turbulent compressible flows on your PC.  It utilizes the power that is hidden in your personal computer (64-bit & multi-core technologies). The software simultaneously solves seven unsteady non-linear partial differential equations on your PC. Five of these equations (the Reynolds-averaged Navier-Stokes, RANS) ensure conservation of mass, momentum and energy for a compressible fluid. Two additional equations capture the dynamics of a turbulent flow field.

Unlike other CFD packages that require you to purchase separate grid generation software (and spend days generating a grid), Stallion 3D includes automatic grid generation. Results are often obtained within a few hours of opening the software.

Do you need to analyze upwind and downwind sails? Do you need data for wings and ship stabilizers at angles of 10, 40, 80, 120 degrees and beyond? Do you need accurate lift, drag and temperature predictions at subsonic, transonic and supersonic speeds? Stallion 3D can handle all flow speeds for any geometry, all on your ordinary PC.

Tutorials, videos and more information about Stallion 3D version 4.0 can be found at:
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

► Avoid Testing Pitfalls
  24 Jan, 2017


The only way to know if your idea will work is to test it. Rest assured, as a design engineer your ideas and designs will be tested over and over again, often in front of a crowd of people.

As an aerodynamics design engineer, you can use Stallion 3D to avoid the testing pitfalls that would otherwise keep you awake at night. One advantage of Stallion 3D is that it lets you test your designs in the privacy of your laptop or desktop before your company actually builds a prototype. As someone who uses Stallion 3D for consulting, I find it very exciting to see my designs flying the way they were simulated in the software. Stallion 3D helps ensure that your creations are airworthy before they are tested in front of a crowd.

I developed Stallion 3D for engineers who have an innate love and aptitude for aerodynamics but who do not want to deal with the hassles of standard CFD programs. Innovative technologies should always take a few steps out of an existing process to make the journey more efficient. Stallion 3D enables you to skip the painful step of grid (mesh) generation, reducing your workflow to just a few seconds to set up and run a 3D aerodynamics case.

Stallion 3D helps you to avoid the common testing pitfalls.
1. UAV instabilities and takeoff problems
2. Underwhelming range and endurance
3. Pitch-up instabilities
4. Incorrect control surface settings at launch and level flight
5. Not enough propulsive force (thrust) due to excess drag and weight.

Are the results of Stallion 3D accurate?  Please visit the following page to see the latest validations.
http://www.hanleyinnovations.com/stallion3d.html

If you have any questions about this article, please call me at (352) 261-3376 or visit http://www.hanleyinnovations.com.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.
► Flying Wing UAV: Design and Analysis
  15 Jan, 2017

3DFoil is design and analysis software for wings, hydrofoils, sails and other aerodynamic surfaces. It requires a computer running MS Windows 7, 8 or 10.

I wrote the 3DFoil software several years ago using a vortex lattice approach. The vortex lattice method in the code is based on vortex rings (as opposed to the horseshoe vortex approach). The vortex ring method allows for wing twist (geometric and aerodynamic), so a designer can fashion the wing for drag reduction and prevent tip stall by optimizing the amount of washout. The approach also allows sweep (backward and forward) and multiple dihedral/anhedral angles.
Another feature that I designed into 3DFoil is the capability to predict profile drag and stall. This is done by analyzing the wing cross sections with a linear-strength vortex panel method and an ordinary differential equation boundary layer solver. The software uses the boundary layer solution to predict the locations of the transition and separation points.
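
For readers who want to see the basic building block behind a vortex-ring lattice, the short Python sketch below (my own illustration, not code from 3DFoil) evaluates the Biot-Savart induced velocity of a straight vortex segment and sums four segments into one quadrilateral ring:

    import numpy as np

    def segment_velocity(p, a, b, gamma, eps=1e-10):
        """Velocity induced at point p by a straight vortex filament from a to b
        carrying circulation gamma (standard vortex-lattice Biot-Savart form)."""
        r1, r2 = p - a, p - b
        cross = np.cross(r1, r2)
        denom = np.dot(cross, cross)
        if denom < eps:                      # field point (nearly) on the filament
            return np.zeros(3)
        r0 = b - a
        k = np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
        return gamma / (4.0 * np.pi) * k * cross / denom

    def ring_velocity(p, corners, gamma):
        """Sum the contributions of the four edges of a quadrilateral vortex ring."""
        return sum(segment_velocity(p, corners[i], corners[(i + 1) % 4], gamma)
                   for i in range(4))

    # Example: unit-strength ring in the z = 0 plane, evaluated one unit above its center
    corners = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
    print(ring_velocity(np.array([0.5, 0.5, 1.0]), corners, 1.0))

A full vortex-lattice solver distributes such rings over the lifting surfaces and solves for the ring strengths that satisfy flow tangency at the panel control points.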

The following video shows how to use 3DFoil to design and analyze a flying wing UAV aircraft. 3DFoil's user interface is based on the multi-surface approach. In this method, the wing is designed using multiple tapered surfaces, for which the designer can specify airfoil shapes, sweep, dihedral angles and twist. With this approach, the designer can see the contribution of each surface to the lift, drag and moments. Towards the end of the video, I show how the multi-surface approach is used to design effective winglets by comparing the profile drag and induced drag generated by the winglet surfaces. The video also shows how to find the longitudinal and lateral static stability of the wing.



The following steps are used to design and analyze the wing in 3DFoil:
1. Input the dimensions and sweep of half of the wing (half span).
2. Input the dimensions and sweep of the winglet.
3. Join the winglet and main wing.
4. Generate the full aircraft using the mirror image insert function.
5. Find the lift, drag and moments.
6. Compute longitudinal and lateral stability.
7. Look at the contributions of the surfaces.
8. Verify that the winglets provide drag reduction.

More information about 3DFoil can be found at the following url: http://www.hanleyinnovations.com/3dfoil.html.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.

► Corvette C7 Aerodynamics
    7 Jan, 2017

The CAD file for the Corvette C7 aerodynamics study in Stallion 3D version 4 was obtained from Mustafa Asan's revision on GrabCAD. The file was converted from STP format to the STL format required by Stallion 3D using OnShape.com.

Once the Corvette was imported into Stallion 3D, I applied ground effect and a speed of 75 miles per hour at zero angle of attack. The flow setup took just seconds in Stallion 3D and grid generation was completely automatic. The software allows the user to choose a grid size setting, and I chose the option that produced a total of 345,552 cells in the computational domain.

I chose the Reynolds Averaged Navier-Stokes (RANS) equations solver for this example. In Stallion 3D, the RANS equations are solved along with the k-e turbulence model. A wall function approach is used at the boundaries.

The results were obtained after 10,950 iterations on a quad core laptop computer running at 2.0 GHz under MS Windows 10.


The results for the Corvette C7 model  are summarized below:

Lift Coefficient:  0.227
Friction Drag Coefficient: 0.0124
Pressure Drag Coefficient: 0.413
Total Drag Coefficient: 0.426

Stallion 3D HIST Solver:  Reynolds Averaged Navier-Stokes Equations
Turbulence Model: k-e
Number of Cells: 345,552
Grid: Built-in automatic grid generation

Run time: 7 hours

The coefficients were computed based on a frontal area of 2.4 square meters.
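
As a quick sanity check on what those coefficients mean in absolute terms, the dimensional forces follow from force = q * A * C, with q the dynamic pressure at the 75 mph test speed. The few lines of Python below are my own back-of-the-envelope conversion, assuming standard sea-level air density:

    rho = 1.225                # air density, kg/m^3 (sea-level assumption)
    v = 75 * 0.44704           # 75 mph in m/s
    area = 2.4                 # frontal area, m^2
    q = 0.5 * rho * v**2       # dynamic pressure, Pa

    for name, c in [("lift", 0.227), ("total drag", 0.426)]:
        print(name, round(q * area * c, 1), "N")   # force = q * A * C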

The following are images of the same solution from different views in Stallion 3D.  The streamlines are all initiated near the ground plane 2 meters ahead of the car.

Top View



Side View


Bottom View


Stallion 3D utilizes a new technology (Hanley Innovations Surface Treatment, or HIST) that enables design engineers to quickly analyze their CAD models on an ordinary Windows PC. We call this SameDayCFD. This unique technology is my original work and was not derived from any existing software codes. More information about Stallion 3D can be found at http://www.hanleyinnovations.com/stallion3d.html.

Do not hesitate to contact us if you have any questions. More information can be found at http://www.hanleyinnovations.com.

Thanks for reading.

About Patrick Hanley, Ph.D.
Dr. Patrick Hanley is the owner of Hanley Innovations. He received his Ph.D. in fluid dynamics from the Massachusetts Institute of Technology (MIT) Department of Aeronautics and Astronautics (Course XVI). Dr. Hanley is the author of Stallion 3D, MultiSurface Aerodynamics, MultiElement Airfoils, VisualFoil and the booklet Aerodynamics in Plain English.



CFD and others... top

► Are High-Order CFD Solvers Ready for Industrial LES?
    1 Jan, 2018
The potential of high-order methods (order > 2nd) is higher accuracy at lower cost than low order methods (1st or 2nd order). This potential has been conclusively demonstrated for benchmark scale-resolving simulations (such as large eddy simulation, or LES) by multiple international workshops on high-order CFD methods.

For industrial LES, in addition to accuracy and efficiency, there are several other important factors to consider:

  • Ability to handle complex geometries, and ease of mesh generation
  • Robustness for a wide variety of flow problems
  • Scalability on supercomputers
For general-purpose industry applications, methods capable of handling unstructured meshes are preferred because of the ease of mesh generation and load balancing on parallel architectures. Discontinuous Galerkin (DG) and related methods such as spectral difference (SD) and flux reconstruction/correction procedure via reconstruction (FR/CPR) have received much attention because of their geometric flexibility and scalability. They have matured to become quite robust for a wide range of applications.

Our own research effort has led to the development of a high-order solver based on the FR/CPR method called hpMusic. We recently performed a benchmark LES comparison between hpMusic and a leading commercial solver, on the same family of hybrid meshes, at a transonic condition with a Reynolds number of more than 1M. The 3rd order hpMusic simulation has 9.6M degrees of freedom (DOFs) and costs about 1/3 the CPU time of the 2nd order simulation with the commercial solver, which has 28.7M DOFs. Furthermore, the 3rd order simulation is much more accurate, as shown in Figure 1. It is estimated that hpMusic would be an order of magnitude faster to achieve a similar accuracy. This study will be presented at AIAA's SciTech 2018 conference next week.

(a) hpMusic 3rd Order, 9.6M DOFs
(b) Commercial Solver, 2nd Order, 28.7M DOFs
Figure 1. Comparison of Q-criterion and Schlieren  

I certainly believe high-order solvers are ready for industrial LES. In fact, the commercial version of our high-order solver, hoMusic (pronounced hi-o-music), has been announced by hoCFD LLC (disclaimer: I am the company founder). Give it a try for your problems, and you may be surprised. Academic and trial uses are completely free. Just visit hocfd.com to download the solver. A GUI has been developed to simplify problem setup. Your thoughts and comments are highly welcome.

Happy 2018!     

► Sub-grid Scale (SGS) Stress Models in Large Eddy Simulation
  17 Nov, 2017
The simulation of turbulent flow has been a considerable challenge for many decades. There are three main approaches to compute turbulence: 1) the Reynolds averaged Navier-Stokes (RANS) approach, in which all turbulence scales are modeled; 2) the Direct Numerical Simulations (DNS) approach, in which all scales are resolved; 3) the Large Eddy Simulation (LES) approach, in which large scales are computed, while the small scales are modeled. I really like the following picture comparing DNS, LES and RANS.

DNS (left), LES (middle) and RANS (right) predictions of a turbulent jet. - A. Maries, University of Pittsburgh

Although the RANS approach has achieved widespread success in engineering design, some applications call for LES, e.g., flow at high angles of attack. The spatial filtering of a non-linear PDE results in an SGS term, which needs to be modeled based on the resolved field. The earliest SGS model was the Smagorinsky model, which relates the SGS stress to the resolved rate-of-strain tensor. The purpose of the SGS model is to dissipate energy at a rate that is physically correct. Later, an improved version called the dynamic Smagorinsky model was developed by Germano et al. and demonstrated much better results.
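
For reference, the classical Smagorinsky closure can be written in a few lines: the deviatoric SGS stress is modeled as -2*nu_t*S_ij, with an eddy viscosity built from the filter width and the resolved strain rate. The Python fragment below is a textbook-style sketch (constants and notation are standard, not tied to any particular solver):

    import numpy as np

    def smagorinsky_nut(grad_u, delta, cs=0.17):
        """Eddy viscosity nu_t = (Cs * delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij),
        where S is the resolved rate-of-strain tensor and delta the filter width."""
        s = 0.5 * (grad_u + grad_u.T)          # resolved rate-of-strain tensor
        s_mag = np.sqrt(2.0 * np.sum(s * s))   # strain-rate magnitude
        return (cs * delta) ** 2 * s_mag

    # Example: simple shear du/dy = 100 1/s on a 1 mm filter width
    grad_u = np.array([[0., 100., 0.], [0., 0., 0.], [0., 0., 0.]])
    print(smagorinsky_nut(grad_u, delta=1e-3))

The dynamic version mentioned above replaces the fixed Cs with a coefficient computed on the fly from a test filter.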

In CFD, physics and numerics are often intertwined very tightly, and one may draw erroneous conclusions if not careful. Personally, I believe the debate regarding SGS models can offer some valuable lessons regarding physics vs numerics.

It is well known that a central finite difference scheme does not contain numerical dissipation. However, time integration can introduce dissipation. For example, a 2nd order central difference scheme is linearly stable with the SSP RK3 scheme (subject to a CFL condition), and the combination does contain numerical dissipation. When this scheme is used to perform an LES, the simulation will blow up without an SGS model because of a lack of dissipation for eddies at high wavenumbers. It is easy to conclude that the LES succeeds because the SGS stress is properly modeled. A recent study with the Burgers equation strongly disputes this conclusion. It was shown that the SGS stress from the Smagorinsky model does not correlate well with the physical SGS stress. Therefore, the role of the SGS model, in the above scenario, was to stabilize the simulation by adding numerical dissipation.

For numerical methods that have natural dissipation at high wavenumbers, such as the DG, SD or FR/CPR methods, or methods with spatial filtering, the SGS model can damage the solution quality because this extra dissipation is not needed for stability. For such methods, there is overwhelming evidence in the literature to support the use of implicit LES (ILES), where the SGS stress simply vanishes. In effect, the numerical dissipation in these methods serves as the SGS model. Personally, I would prefer to call such simulations coarse DNS, i.e., DNS on coarse meshes which do not resolve all scales.

I understand this topic may be controversial. Please do leave a comment if you agree or disagree. I want to emphasize that I support physics-based SGS models.
► 2016: What a Year!
    3 Jan, 2017
2016 was undoubtedly the most extraordinary year for small-odds events. Take sports, for example:
  • Leicester won the Premier League in England defying odds of 5000 to 1
  • The Cubs won the World Series after a 108-year wait
In politics, I do not believe many people truly believed Britain would exit the EU, and Trump would become the next US president.

From a personal level, I also experienced an equally extraordinary event: the coup in Turkey.

The 9th International Conference on CFD (ICCFD9) took place on July 11-15, 2016 in the historic city of Istanbul. A terror attack on the Istanbul International airport occurred less than two weeks before ICCFD9 was to start. We were informed that ICCFD9 would still take place although many attendees cancelled their trips. We figured that two terror attacks at the same place within a month were quite unlikely, and decided to go to Istanbul to attend and support the conference. 

Given the extraordinary circumstances, the conference organizers did a fine job in pulling the conference through. More than half of the attendees withdrew their papers. Backup papers were used to form two parallel sessions though three sessions were planned originally. We really enjoyed Istanbul with the beautiful natural attractions and friendly people. 

Then on Friday evening, 12 hours before we were supposed to depart Istanbul, a military coup broke out. The government TV station was controlled by the rebels. However, the Turkish President managed to FaceTime a private TV station, essentially turning the event around. Soon after, many people went to the bridges and squares and overpowered the rebels with bare fists.


A Tank outside my taxi



A beautiful night in Zurich

The trip back to the US was complicated by the fact that the FAA banned all direct flights from Turkey. I was lucky enough to find a new flight, with a stop in Zurich...

In 2016, I lost a very good friend and CFD pioneer, Professor Jaw-Yen Yang. He suffered a horrific injury playing tennis in early 2015. Many of his friends and colleagues gathered in Taipei on December 3-5, 2016 to remember him.

This is a CFD blog after all, and so it is important to show at least one CFD picture. In a validation simulation [1] with our high-order solver, hpMusic, we achieved remarkable agreement with experimental heat transfer for a high-pressure turbine configuration. Here is a flow picture.

Computational Schlieren and iso-surfaces of Q-criterion


To close, I wish all of you a very happy 2017!

  1. Laskowski GM, Kopriva J, Michelassi V, Shankaran S, Paliath U, Bhaskaran R, Wang Q, Talnikar C, Wang ZJ, Jia F. Future directions of high fidelity CFD for aerothermal turbomachinery research, analysis and design, AIAA-2016-3322.



► The Linux Version of meshCurve is Now Ready for All to Download
  20 Apr, 2016
The 64-bit version for the Linux operating system is now ready for you to download. Because of the complexities associated with various libraries, we experienced a delay of slightly more than a month. Here is the link again.

Please let us know your experience, good or bad. Good luck!
► Announcing meshCurve: A CAD-free Low Order to High-Order Mesh Converter
  14 Mar, 2016
We are finally ready to release meshCurve to the world!

The description of meshCurve is provided in AIAA Paper No. 2015-2293. The primary developer is Jeremy Ims, who has been supported by NASA and NSF. Zhaowen Duan also made major contributions. By the way, Aerospace America also highlighted meshCurve in its 2015 annual review issue (on page 22). Many congratulations to Jeremy and Zhaowen on this major milestone!

The current version supports both the Mac OS X and Windows (64 bit) operating systems. The Linux version will be released soon.

Here is roughly how meshCurve works. The input is a linear mesh in the CGNS format. Then the user selects which boundary patches should be reconstructed to high-order. After that, geometrically important features are detected. The user can also manually select or delete features. Next the selected patches are reconstructed to add curvature. Finally the interior volume meshes are curved (if necessary). The output mesh is also stored in CGNS format.

We have tested the tool with meshes on the order of a million cells. But I still want to lower your expectations. So try it out yourself and let us know if you like it or hate it. Please do report bugs so that improvements can be made in the future.

Good luck!

Oh, did I mention the tool is completely free? Here is the meshCurve link again.






► An Update on the International Workshops on High-Order CFD Methods
    9 Sep, 2015
The most recent workshop, the 3rd International Workshop on High-Order CFD Methods, took place on January 3-4, 2015 just before the 53rd AIAA Aerospace Sciences Meeting at the Gaylord Palms Resort and Convention Center in Kissimmee, Florida (Orlando). The workshop was co-chaired by H.T. Huynh of NASA Glenn Research Center and Norbert Kroll of DLR, and sponsored by NASA,  AIAA, DLR and the Army Research Office.

Participants came from all over the world, including students, researchers and practitioners from academia, industry and government labs. A wide variety of methods were covered by the attendees. The final agenda and other details from the Workshop are contained on the following NASA web site:

https://www.grc.nasa.gov/hiocfd/

There is still much unfinished business, including high-order mesh generation, robust error estimation and hp-adaptation, and efficient solution methods on extreme-scale parallel computers. Please mark your calendar for the 4th Workshop, which will take place on the breathtaking Greek island of Crete on the 3rd and 4th of June 2016, just before the ECCOMAS / 6th European Conference on CFD (ECFD VI). ECCOMAS will feature a dedicated minisymposium, during which selected participants will be able to present their results. The Workshop cases and other details are contained here:

http://how4.cenaero.be
 
Hope to see many of you in Greece!

ANSYS Blog top

► How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon
  10 Oct, 2018
Illini EV Concept Team Photo at Shell Eco Marathon 2018

Weight is the enemy of all teams that design electric cars for the Shell Eco Marathon.

Reducing the weight of electric cars improves the vehicle’s acceleration and power efficiency. These performance improvements make all the difference come race day.

However, if the car’s weight is reduced too much, it could lead to safety concerns.

Illini EV Concept (Illini) is a Shell Eco Marathon team out of the University of Illinois. Team members use ANSYS academic research software to optimize the chassis of their electric car without compromising safety.

Where to Start When Reducing the Weight of Electric Cars?

Front bump composite failure under a load of 2000N.

The first hurdle of the Shell Eco Marathon is an initial efficiency contest. Only the best teams from this efficiency assessment even make it into the race.

Therefore, Illini concentrates on reducing the most weight in the shortest amount of time to ensure it makes it to the starting line.

Illini notes that its focus is on reducing the weight of its electric car’s chassis.

“The chassis is by far the heaviest component of our car, so ANSYS was used extensively to help design our first carbon fiber monocoque chassis,” says Richard Mauge, body and chassis leader for Illini.

“Several loading conditions were tested to ensure the chassis was stiff enough and the carbon fiber did not fail using the composite failure tool,” he adds.

Competition regulations ensure the safety of all team members. These regulations state that each team must prove that their car is safe under various conditions. Simulation is a great tool to prove a design is within safety tolerances.

“One of these tests included ensuring the bulkhead could withstand a 700 N load in all directions, per competition regulations,” says Mauge. If the teams’ electric car designs can’t survive this simulation come race day, then their cars are not racing.

Iterate and Optimize the Design of Electronic Cars with Simulation

Front bump deformation under a load of 2000N.

Simulations can do more than prove a design is safe. They can also help to optimize designs.

Illini uses what it learns from simulation to optimize the geometry of its electric car’s chassis.

The team found that its new design increases torsional rigidity by around 100 percent, even after a 15 percent decrease in weight compared to last year's model.

“Simulations ensure that the chassis is safe enough for our driver. It also proved that the chassis is lighter and stiffer than ever before. ANSYS composite analysis gave us the confidence to move forward with our radical chassis redesign,” notes Mauge.

The optimization story continues for Illini. The team plans to explore easier and more cost-effective ways to manufacture carbon fiber parts. For instance, the team wants to replace the core of its parts with foam and increase the number of bonded pieces.

If team members just go with their gut on these hunches, they could find themselves scratching their heads when something goes wrong. However, with simulations, the team makes better informed decisions about its redesigns and manufacturing process.

To get started with simulation, try our free student download. For student teams that need to solve in-depth problems, check out our software sponsorship program.

The post How to Increase the Acceleration and Efficiency of Electric Cars for the Shell Eco Marathon appeared first on ANSYS.

► Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers
    9 Oct, 2018
This engine intake simulation was post-processed using EnSight Enterprise. This allowed for the processing of a large data set to be shared among servers.

Simulation data sets have a funny habit of ballooning as engineers move through the development cycle. At some point, post-processing these data sets on a single machine becomes impractical.

Engineers can speed up post-processing by spatially or temporally decomposing large data sets so they can be post-processed across numerous servers.

The idea is to utilize the idle compute nodes you used to run the solver in parallel to now run the post-processing in parallel.

In ANSYS 19.2 EnSight Enterprise, you can spatially or temporally decompose data sets. EnSight Enterprise is an updated version of EnSight HPC.

Post-Processing Using Spatial Decomposition

EnSight uses a client/server architecture. The client program takes care of the graphical user interface (GUI) and rendering operations, while the server program loads the data, creates parts, extracts features and calculates results.

If your model is too large to post-process on a single machine, you can use the spatially decomposed parallel operation to assign each spatial partition to its own EnSight Server. A good server-to-model ratio is one server for every 50 million elements.
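
As a quick illustration of that rule of thumb (my own arithmetic, not an EnSight setting):

    import math

    def recommended_servers(num_elements, elements_per_server=50e6):
        """One EnSight Server per roughly 50 million elements, rounded up."""
        return max(1, math.ceil(num_elements / elements_per_server))

    print(recommended_servers(180e6))   # a 180-million-element model -> 4 servers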

Each EnSight Server can be located on a separate compute node on any compute resource you’d like. This allows engineers to utilize the memory and processing power of heterogeneous high-performance computing (HPC) resources for data set post-processing.

The engineers effectively split the large data set up into pieces with each piece assigned to its own compute resource. This dramatically increases the data set sizes you can load and process.

Once you have loaded the model into EnSight Enterprise, there are no additional changes to your workflow, experience or operations.

Post-Processing Using Temporal Decomposition

Keep in mind that this decomposition concept can also be applied to transient data sets. In this case, the dataset is split up temporally rather than spatially. In this scenario, each server receives its own set of time steps.

A turbulence simulation created using EnSight Enterprise post-processing

EnSight Enterprise offers performance gains when the server operations outweigh the communication and rendering time of each time step. Since it’s hard to predict network communication or rendering workloads, you can’t easily create a guiding principle for the server-to-model ratio.

However, you might want to use a few servers when your model has more than 10 million elements and over a hundred time steps. This will help keep the processing load of each server to a moderate level.

How EnSight Speeds Up the Post-Processing of Large Simulation Data Sets

Here is another tip to ensure optimal post-processing within EnSight Enterprise: engineers achieve the best performance gains by pre-decomposing the data and storing it locally on the compute resources they anticipate using. Ideally, this data should be in EnSight Case format.

To learn more, check out EnSight or register for the webinar Analyze, Visualize and Communicate Your Simulation Data with ANSYS EnSight.

The post Post-Processing Large Simulation Data Sets Quickly Over Multiple Servers appeared first on ANSYS.

► Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing
    8 Oct, 2018

Your design team will make informed decisions about the products they create when they bring detailed simulations up front in the development cycle.

The 19.2 release of ANSYS Discovery AIM addresses this need for early simulation.

It does this by streamlining templates for physics-aware meshing and rapid results.

High-Fidelity Simulation Through Physics-Aware Meshing

Discovery AIM user interface with a solution fidelity slide bar (top left), area of interest marking tool (left, middle), manual mesh controls (bottom, center) and a switch to turn the mesh display on and off (right, top).

Analysts have likely told your design team about the importance of a quality mesh to achieve accurate simulation results.

Creating high quality meshes takes time and specialized training. Your design team likely doesn't have the time or patience to learn this art.

To account for this, Discovery AIM automatically incorporates physics-aware meshing behind the scenes. In fact, your design team doesn’t even need to see the mesh creation process to complete the simulation.

This workflow employs several meshing best practices analysts typically use. The tool even accounts for areas that require mesh refinements based on the physics being assessed.

For instance, areas with a sliding contact gain a finer mesh so the sliding behavior can be accurately simulated. Additionally, areas near the walls of fluid-solid interfaces are also refined to ensure this interaction is properly captured. Physics-aware meshing ensures small features and areas of interest won't get lost in your design team's simulation.

The simplified meshing workflow also lets your design team choose their desired solution fidelity. This input will help the software balance the time the solver takes to compute results with the accuracy of the results.

Though physics-aware meshing can create the mesh under the hood of the simulation process, it still has tools allowing user-control of the mesh. This way, if your design team chooses to dig into the meshing details — or an analyst decides to step in — they can finely tune the mesh.

Capabilities like this further empower designers as techniques and knowledge traditionally known only by analysts are automated in an easy-to-use fashion.

Gain Rapid Results in Important Areas You Might Miss

The 19.2 release of Discovery AIM has seen improvements with its ability to enable your design team to explore simulation results.

Many analysts will know instinctively where to focus their post-processing, but without this experience, designers may miss areas of interest.

Discovery AIM enables the designer to interactively explore and identify these critical results. These initial results are rapidly displayed as contours, streamlines or field flow lines.

Field flow and streamlines for an electromagnetics simulation

Once your design team finds locations of interest within the results, they can create higher fidelity results to examine those areas of interest in further detail. Designers can then save the results and revisit them when comparing design points or after changing simulation inputs.

To learn more about other changes to Discovery AIM — like the ability to directly access fluid results — watch the Discovery AIM 19.2 release recorded webinar or take it for a test drive.

The post Discovery AIM Offers Design Teams Rapid Results and Physics-Aware Meshing appeared first on ANSYS.

► Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer
    5 Oct, 2018
Traditional chemotherapy can often be blocked by a tumor’s stroma.

There are few illnesses as crafty as pancreatic cancer. It spreads like weeds and resists chemotherapy.

Pancreatic cancer is often asymptomatic, has a low survival rate and is often misdiagnosed as diabetes. And, this violent killer is almost always inoperable.

The pancreatic tumor’s resistance to chemotherapy comes from a shield of supporting connective tissue, or stroma, which it builds around itself.

Current treatments attempt to overcome this defense by increasing the dosage of intravenously administered chemotherapy. Sadly, this rarely works, and the high dosage is exceptionally hard on patients.

Nonetheless, doctors need a way to shrink these tumors so that they can surgically remove them without risking the numerous organs and vasculature around the pancreas.

“We say if you can’t get the drugs to the tumor from the blood, why not get it through the stroma directly?” asks William Daunch, CTO at Advanced Chemotherapy Technologies (ACT), an ANSYS Startup Program member. “We are developing a medical device that implants directly onto the pancreas. It passes drugs through the organ, across the stroma to the tumor using iontophoresis.”

By treating the tumor directly, doctors can theoretically shrink the tumor to an operable size with a smaller dose of chemotherapy. This should significantly reduce the effects of the drugs on the rest of the patient’s body.

How to Treat Pancreatic Cancer with a Little Electrochemistry

Simplified diagram of the iontophoresis used by ACT’s chemotherapy medical device.

Most of the drugs used to treat pancreatic cancer are charged. This means they are affected by electromotive forces.

ACT has created a medical device that takes advantage of the medication’s charge to beat the stroma’s defenses using electrochemistry and iontophoresis.

The device contains a reservoir with an electrode. The reservoir connects to tubes that connect to an infusion pump. This setup ensures that the reservoir is continuously filled. If the reservoir is full, the dosage doesn’t change.

The tubes and wires are all connected into a port that is surgically implanted into the patient’s abdomen.

A diagram of ACT’s chemotherapy medical device.

The circuit is completed by a metal panel on the back of the patient.

“When the infusion pump runs, and electricity is applied, the electromotive forces push the medication into the stroma’s tissue without a needle. The medication can pass up to 10 to 15 mm into the stroma’s tissue in about an hour. This is enough to get through the stroma and into the tumor,” says Daunch.

“Lab tests show that the medical device was highly effective in treating human pancreatic cancer cells within mice,” added Daunch. “With conventional infusion therapy, the tumors grew 700 percent and with the device working on natural diffusion alone the tumors grew 200 percent. However, when running the device with iontophoresis, the tumor shrank 40 percent. This could turn an inoperable tumor into an operable one.” Subsequent testing of a scaled-up device in canines demonstrated depth of penetration and the low systemic toxicity required for a human device.

Daunch notes that the Food and Drug Administration (FDA) took notice of these results. ACT's next steps are to develop a human clinical device and move on to human safety trials.

Simulation Optimized the Fluid Dynamics in the Pancreatic Cancer Chemotherapy Implant

Before these promising tests, ACT faced a few design challenges when coming up with their chemotherapy implant.

For example, “There was some electrolysis on the electrode in the reservoir. This created bubbles that would change the electrode’s impedance,” explains Daunch. “We needed a mechanism to sweep the bubbles from the surface.”

An added challenge is that ACT never knows exactly where doctors will place the device on the pancreas. As a result, the mechanism to sweep the bubbles needs to work from any orientation.

Simulations help ACT design their medical device so bubbles do not collect on the electrode.

“We used ANSYS Fluent and ANSYS Discovery Live to iterate a series of designs,” says Daunch. “Our design team modeled and validated our work very quickly. We also noticed that the bubbles didn’t need to leave the reservoir, just the electrode.”

“If we place the electrode on a protrusion in a bowl-shaped reservoir the bubbles move aside into a trough,” explains Daunch. “The fast fluid flow in the center of the electrode and the slower flow around it would push the bubbles off the electrode and keep them off until the bubbles floated to the top.”

As a result, the natural fluid flow within the redesigned reservoir was able to ensure the bubbles didn’t affect the electrode’s impedance.

To learn how your startup can use computational fluid dynamics (CFD) software to address your design challenges, please visit the ANSYS Startup Program.

The post Simulation Optimizes a Chemotherapy Implant to Treat Pancreatic Cancer appeared first on ANSYS.

► Making Wireless Multigigabit Data Transfer Reliable with Simulation
    4 Oct, 2018

The demand for wireless communications with high data transfer rates is growing.

Consumers want wireless 4K video streams, virtual reality, cloud backups and docking. However, it’s a challenge to offer these data transfer hogs wirelessly.

Peraso aims to overcome this challenge with their W120 WiGig chipset. This device offers multigigabit data transfers, is as small as a thumb-drive and plugs into a USB 3.0 port.

The chipset uses the Wi-Fi Alliance’s new wireless networking standard, WiGig.

This standard adds a 60 GHz communication band to the 2.4 and 5 GHz bands used by traditional Wi-Fi. The result is higher data rates, lower latency and dynamic session transferring with multiband devices.

In theory, the W120 WiGig chipset could run some of the heaviest data transfer hogs on the market without a cord. Peraso’s challenge is to design a way for the chipset to dissipate all the heat it generates.

Peraso uses the multiphysics capabilities within the ANSYS Electronics portfolio to predict the Joule heating and the subsequent heat flow effects of the W120 WiGig chipset. This information helps them iterate their designs to better dissipate the heat.

How to Design High Speed Wireless Chips That Don’t Overheat

Systems designers know that asking for high-power transmitters in a compact and cost-effective enclosure translates into a thermal challenge. The W120 WiGig chipset is no different.

A cross section temperature map of the W120 WiGig chipset’s PCB. The map shows hot spots where air flow is constrained by narrow gaps between the PCB and enclosure.

The chipset includes active/passive components and two main chips that are mounted on a printed circuit board (PCB). The system reaches considerably high temperatures due to the Joule heating effect.

To dissipate this heat, design engineers include a large heat sink that connects only to the chips and a smaller one that connects only to the PCB. The system is also enclosed in a casing with limited openings.

Simulation of the air flow around the W120 WiGig chipset without an enclosure. Simulation was made using ANSYS Icepak.

Traditionally, optimizing this set up takes a lot of trial and error as measuring the air flow within the enclosure would be challenging.

Instead, Peraso uses ANSYS SIwave to simulate the Joule heating effects of the system. This heat map is transferred to ANSYS Icepak, which then simulates the current heat flow, orthotropic thermal conductivity, heat sources and other thermal effects.

This multiphysics simulation enables Peraso to predict the heat distribution and the temperature at every point of the W120 WiGig chipset.

From there, Peraso engineers iterate their designs until they reach their coolest setup.

This simulation-led design tactic helps Peraso optimize their system until they reach the heat transfer balance they need. To learn how Peraso performed this iteration, read Cutting the Cords.

The post Making Wireless Multigigabit Data Transfer Reliable with Simulation appeared first on ANSYS.

► Designing 5G Cellular Base Station Antennas Using Parametric Studies
    3 Oct, 2018

There is only so much communication bandwidth available. This will make it difficult to handle the boost in cellular traffic expected from the 5G network using conventional cellular technologies.

In fact, cellular networks are already running out of bandwidth. This severely limits the number of users and data rates that can be accommodated by wireless systems.

One potential solution is to leverage beamforming antennas. These devices transmit different signals to different locations on the cellular network simultaneously over the same frequency.

Pivotal Commware is using ANSYS HFSS to design beamforming antennas for cellular base stations that are much more affordable than current technology.

How 5G Networks Will Send More Signals on Existing Bandwidths

A 28 GHz antenna for a cellular base station.

Traditionally, cellular technologies — 3G and 4G LTE — crammed more signals on the existing bandwidth by dividing the frequencies into small segments and splitting the signal time into smaller pulses.

The problem is, there is only so much you can do to chop up the bandwidth into segments.

Alternatively, Pivotal’s holographic beamforming (HBF) antennas are highly directional. This means they can split up the physical space a signal moves through.

This way, two cells in two locations can use the same frequency at the same time without interfering with each other.

Additionally, these HBF antennas use varactors (variable capacitors) and electronic components that are simpler and more affordable than existing beamforming antennas.

How to Design HBF Antennas for 5G Cellular Base Stations

A parametric study of Pivotal’s HBF designs allowed them to look at a large portion of their design space and optimize for C-SWaP and roll-off. This study looks at roll-off as a function of degrees from the centerline of the antenna.

Antenna design companies — like Pivotal — are always looking to design devices that optimize cost, size, weight and power (C-SWaP) and performance.

So, how was Pivotal able to account for C-SWaP and performance so thoroughly?

Traditionally, this was done by building prototypes, finding flaws, creating new designs and integrating manually.

Meeting a product launch with an optimized product using this manual method is grueling.

Pivotal instead uses ANSYS HFSS to simulate their 5G antennas digitally. This allows them to assess their HBF antennas and iterate their designs faster using parametric studies.

For instance, Pivotal wants to optimize their design for performance characteristics like roll-off. To do so they can plug in the parameter values, run simulations with these values and see how each parameter affects roll-off.

By setting up parametric studies, Pivotal assesses which parameters affect performance and C-SWaP the most. From there, they can weigh different trade-offs until they settle on an optimized design that accounts for all the factors they studied.

To see how Pivotal set up their parametric studies and optimized their antenna designs, read 5G Antenna Technology for Smart Products.

The post Designing 5G Cellular Base Station Antennas Using Parametric Studies appeared first on ANSYS.

Convergent Science Blog top

► Harness the Power of CONVERGE + GT-SUITE with Unlimited Parallelization
    5 Nov, 2018

Imagine that you are modeling an engine. Engines are complex machines, and accurately modeling an engine is not an easy undertaking. Capturing in-cylinder dynamics, intake and exhaust system characteristics, complicated boundary conditions, and much more creates a problem that often takes multiple software suites to solve.

Convergent Science has a solution: CONVERGE Lite—and we’ve just introduced a new licensing option.

CONVERGE Lite is a reduced version of CONVERGE that comes free of charge with every GT-SUITE license. Gamma Technologies, the developer of GT-SUITE, and Convergent Science combined forces to allow users of GT-SUITE to leverage the power of CONVERGE.

CONVERGE LITE + GT-SUITE OVERVIEW

GT-SUITE is an industry-leading CAE system simulation tool that combines 1D physics modeling, such as fluid flow, thermal analysis, and mechanics, with 3D multi-body dynamics and 3D finite element thermal and structural analysis. GT-SUITE is a great tool for a wide variety of system simulations, including vehicles, engines, transmission, general powertrains, hydraulics, and more.

Let’s think again about modeling an engine. GT-SUITE is ideal for the primary workflow of engine design. But, what if you want to model 3D mixing in an intake engine manifold to track the cylinder-to-cylinder distribution of recirculated exhaust gas? Or simulate complex 3D flow through a throttle body to find the optimal design to maximize power? In these scenarios, 1D modeling is not sufficient on its own.

Visualization of flow through an optimized throttle body generated using data from a CONVERGE Lite + GT-SUITE coupled simulation.

In this type of situation where 3D flow analysis is critical, GT-SUITE users can invoke CONVERGE Lite to obtain detailed 3D analysis at no extra charge. CONVERGE Lite is fully integrated into GT-SUITE and is known for being user friendly. One of the biggest advantages of CONVERGE Lite is that it allows GT-SUITE users access to CONVERGE’s powerful autonomous meshing. With automatic mesh generation, fixed mesh embedding, and Adaptive Mesh Refinement, CONVERGE Lite eliminates user meshing time and allows for efficient grid refinement. In addition, CONVERGE Lite comes with automatic CFD species setup and automatic setup of fluid properties to match the properties in the GT-SUITE model. And as if that weren’t enough, recently CONVERGE Lite has been enhanced to include a license for Tecplot for CONVERGE, an advanced 3D post-processing software.

LICENSING

You can run CONVERGE Lite in serial for free if you have a GT-SUITE license. If you want to run CONVERGE Lite in parallel, you can purchase parallel licenses from Convergent Science. We have just introduced a new low-cost option for running CONVERGE Lite in parallel. For one flat fee, you can obtain a license from Convergent Science to run CONVERGE Lite on an unlimited number of cores. Even though CONVERGE Lite contains many features to enhance efficiency, 3D simulations can be computationally expensive. This new option is a great way to affordably speed up your integrated GT-SUITE + CONVERGE Lite simulations.

CONVERGE Lite is a robust tool, but it does not contain all of the features of the full CONVERGE solver. For example, if you want to take advantage of advanced physical models, like combustion, spray, or volume of fluid, or you want to simulate moving walls, such as pistons or poppet valves, a full CONVERGE license is required. With both a full CONVERGE license and a GT-SUITE license, you can also take advantage of CONVERGE’s detailed chemistry solver, multiphase flow modeling, and other powerful features while performing advanced CONVERGE + GT-SUITE coupled simulations.

The combined power of CONVERGE and GT-SUITE opens the door to a whole array of advanced simulations, like engine cylinder coupling, exhaust aftertreatment coupling, or fluid-structure interaction coupling, that cannot be accomplished with just one of the programs.

Contact a Convergent Science salesperson for licensing details and pricing information.

Contact sales

► Resolving Turbulence-Chemistry Interactions with LES and Detailed Chemistry
  30 Oct, 2018

One of the more controversial subjects we talk about here at Convergent Science is the role of turbulence-chemistry interaction (TCI) when using our SAGE detailed chemistry solver.

What is TCI?

TCI is used to describe two separate but linked processes: enhanced mixing in momentum, energy, and species due to turbulence and the commutation error in the reaction rate evaluation. A good turbulence model should always account for the enhanced mixing due to turbulence.

The commutation error is more difficult to address. In an LES simulation, the commutation error is the difference between evaluating the reaction rates using the spatially filtered quantities and evaluating them using the un-filtered quantities and then filtering the resulting reaction rates (the latter is exact; the former is an approximation). It is usually convenient to use the averaged or filtered values to evaluate the reaction rates, which unfortunately means more error. For LES, the commutation error decreases as the cell size is reduced[1], and thus, with sufficient grid resolution, the commutation error becomes negligible.
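
In symbols (standard LES notation, paraphrasing the description above rather than quoting the white paper), the commutation error in the filtered reaction source term can be written as

    \epsilon_{\mathrm{comm}} = \dot{\omega}\bigl(\widetilde{Y}_k, \widetilde{T}\bigr) - \widetilde{\dot{\omega}(Y_k, T)},

where the tilde denotes the (Favre) filter, Y_k the species mass fractions and T the temperature; the first term uses the convenient filtered quantities, while the second is the exact filtered reaction rate.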

In this blog post, we briefly describe a study that demonstrates that with sufficient grid resolution, CONVERGE CFD (with LES and detailed chemistry) can resolve the enhanced mixing due to turbulence without explicitly assigning a sub-grid model for the commutation error. For more details, please see the accompanying white paper.

Simulation Strategy

We simulate a canonical turbulent partially premixed flame, Sandia Flame D. We leverage Adaptive Mesh Refinement (AMR) and adaptive zoning as acceleration strategies to speed up the computationally expensive LES simulations. Figure 1 shows the fine resolution around a subsection of the flame due to AMR, which allows us to get good resolution when and where we need it.

Figure 1: Small subsection of the instantaneous distributions of temperature, velocity, mixture fraction, mass fractions of CO2 and CO, and SGS velocity at the y = 0 plane from the LES case with minimum grid size 0.25 mm.

Conclusion In Brief

We first conduct grid convergence studies and find that a 0.25 mm minimum grid size is sufficient to resolve most of the velocity and species fluctuations.

Then, we demonstrate that the commutation error becomes smaller and we resolve more velocity and species fluctuations as we use finer meshes. With the finest mesh, we match not only the mean and RMS to the experimental values, but also the conditional mean and the shape of the joint probability distribution function.

Finally, we take on the challenge of accurately predicting non-equilibrium combustion processes. These processes (i.e., extinction and reignition) depend on two factors:

  1. An accurate mechanism for the range of conditions simulated and
  2. A good LES solver with sufficient grid resolution.

We compare thousands of data points from experiments to the equivalent points from the LES to determine that CONVERGE correctly predicts the extinction and reignition trends.

So what?

The SAGE detailed chemistry solver with LES has demonstrated success in a host of applications[2,3,4,5,6], including gas turbines and internal combustion engines.

We show in this white paper that when you resolve most of the velocity and species fluctuations and significantly reduce the commutation error, you can predict mixing-controlled turbulent combustion without a model for the commutation error in the reaction rates.

CONVERGE contains multiple acceleration strategies that make SAGE detailed chemistry + LES a reasonable strategy as far as computational costs go. Ready to dive more in-depth? Our TCI white paper is waiting for you!


[1] Davidson, L., “Fluid mechanics, turbulent flow and turbulence modeling,” Chalmers University, 2018. www.tfd.chalmers.se/~lada/postscript_files/solids-and-fluids_turbulent-flow_turbulence-modelling.pdf

[2] Drennan, S.A., and Kumar, G., “Demonstration of an Automatic Meshing Approach for Simulation of a Liquid Fueled Gas Turbine with Detailed Chemistry,” 50th AIAA/ASME/SAE/ASEE Joint PropulsionConference, AIAA 2014-3628, Cleveland, OH, United States, July 28-30, 2014. DOI:10.2514/6.2014-3628

[3] Kumar, G., and Drennan, S., “A CFD Investigation of Multiple Burner Ignition and Flame Propagation with Detailed Chemistry and Automatic Meshing,” 52nd AIAA/SAE/ASEE Joint Propulsion Conference, Propulsion and Energy Forum, AIAA 2016-4561, Salt Lake City, UT, United States, July 25-27, 2016. DOI:10.2514/6.2016-4561

[4] Yang, S., Wang, X., Yang, V., Sun, W., and Huo, H., “Comparison of Flamelet/Progress-Variable and Finite-Rate Chemistry LES Models in a Preconditioning Scheme,” 55th AIAA Aerospace Sciences Meeting, AIAA SciTech Forum, AIAA 2017-0605, Grapevine, TX, United States, January 9-13, 2017. https://doi.org/10.2514/6.2017-0605

[5] Pei, Y., Som, S., Pomraning, E., Senecal, P.K., Skeen, S.A., Manin, J., Pickett, L.M., “Large Eddy Simulation of a Reacting Spray Flame with Multiple Realizations under Compression Ignition Engine Conditions,” Combustion and Flame, 162, 4442-4455, 2015. DOI:10.1016/j.combustflame.2015.08.010

[6] Liu, S., Kumar, G., Wang, M., and Pomraning, E., “Towards Accurate Temperature and Species Mass Fraction Predictions for Sandia Flame-D using Detailed Chemistry and Adaptive Mesh Refinement,” 2018 AIAA Aerospace Sciences Meeting, AIAA SciTech Forum, AIAA 2018-1428. DOI:10.2514/6.2018-1428.

► CONVERGE Workflow Tips
  20 Aug, 2018

As a general purpose CFD solver, CONVERGE is robust out of the box. Autonomous meshing technology built into the solver eliminates the meshing bottleneck that has traditionally bogged down CFD workflows. Despite this advantage, however, performing computational fluid dynamics analyses is still a complex task. Challenges in pre-processing and post-processing can slow your workflow. To streamline the simulation process, CONVERGE CFD software includes a wide array of tools, utilities, and documentation as well as support from highly trained engineers with every license.

Pre-Processing

  • Although you do not have to create a volume mesh, your surface geometry must be watertight and meet several quality standards related to triangulation and normal vector orientation. CONVERGE Studio includes several native surface repair tools to quickly detect, show, and resolve these issues. With an additional license for the Polygonica toolkit, you can leverage powerful surface repair capabilities from within CONVERGE Studio.
  • For engine simulations, a popular acceleration technique is to use a sector (an axisymmetric geometry representing a portion of the model) instead of the full geometry. In CONVERGE, the make_surface utility allows you to quickly create a properly prepared sector geometry based on the piston bowl profile and just a few more geometry inputs. CONVERGE Studio includes a graphical version of this tool.
  • With any CFD software, the multitude of input parameters to control the complex physical models can be overwhelming. In CONVERGE CFD, we provide several checks to help you validate your case setup configuration before beginning a simulation. In CONVERGE, run the check_inputs utility to write information about missing or improperly configured parameters to the terminal. In CONVERGE Studio, you can use the Validate buttons throughout the application to validate input parameters incrementally as you configure the case. Additionally, the Final Validation tool examines the geometry and case setup parameters and provides suggestions for anything that may need to be revised.
  • A staple of the CONVERGE feature set is the ease with which you can simulate complex moving geometries. One requirement is that boundaries cannot intersect during the simulation. There are several ways to verify that your setup meets this requirement. Running CONVERGE in no hydrodynamic solver mode does not solve the spray, combustion, and transport equations. Instead, this type of simulation checks surface motion and grid creation. In CONVERGE Studio, use the Animation tab of the View Options dock to preview boundary motion and check for triangle intersections at each step of the motion. 
  • Many complex engine, pump, compressor, and other machinery simulations employ the sealing feature to prevent flow between regions at various times during a simulation. To test your seal setup, run the CONVERGE sealing test utility by supplying the check-sealing argument after your CONVERGE executable. This command uses a simplified test with only a single level of cells and most options (including AMR, embedding, sources, mapping, events, etc.) automatically turned off.
  • Full multi-cylinder simulations provide accurate predictions for fluid-solid heat transfer, intake and exhaust flow, and other important engine design parameters. Setting up the multiple cylinder geometries and timing can be a frustrating exercise in bookkeeping. The Multi-cylinder wizard in CONVERGE Studio makes this process painless. The wizard is a step-by-step tool that guides you through the process of configuring cylinder phase lag, copying geometry components for additional cylinders, and setting up timing of events such as spark ignition. After your configuration is complete, the wizard provides a quick reference sheet that catalogs the salient details for each cylinder. 
  • Because surface triangles cannot intersect during a CONVERGE simulation, valves (e.g., intake and exhaust valves in an IC engine) must be set to a minimum lift value very close to the valve seats but not technically closed. CONVERGE Studio includes a tool to automatically and quickly move the valves to this position based on profiles of intake and exhaust valve motion.
  • In compressor simulations, the working fluid is often far from an ideal gas. In addition to multiple equation of state models in CONVERGE, you can directly supply custom fluid properties for the working fluid. CONVERGE reads properties such as viscosity, conductivity, and compressibility as a function of temperature from supplied tabular data, obviating the need to link CONVERGE with a third-party properties library.
  • As CONVERGE is a very robust tool, you can use it for many different types of simulations: compressible or incompressible flow, multiphase flow, transient or steady-state, moving geometry, non-Newtonian fluids, and much more. Each of these regimes and scenarios requires you to configure relevant parameters. CONVERGE Studio includes a full suite of example cases across a range of these regimes including IC engines, compressors, gas turbines, and more. It is as simple as clicking File > Load Example Case to open an example case with Convergent Science-recommended default parameters for the given simulation type. You can use the example cases as starting points for your own simulations or run them as-is while you learn to use CONVERGE. 

Post-Processing

  • The geometry triangulation for a CONVERGE simulation may differ from that for a finite element analysis (FEA) simulation because the FEA geometry may have higher resolution in areas most relevant to the heat transfer analysis. CONVERGE includes an HTC mapper utility that maps near-wall heat transfer data from the CONVERGE simulation output to the triangulation of the FEA surface. That way, you can iterate between the two simulation approaches to understand and optimize designs.
  • CONVERGE Studio includes a powerful Line Plotting module to create two-dimensional plots. In addition to providing a high level of plot customization, the module is designed to plot some of the two-dimensional *.out files unique to CONVERGE. Also, you can use the Line Plotting module to monitor simulation properties such as mass flow rate convergence in a steady-state simulation. 
  • One of the post-processing tools available in CONVERGE Studio is the Engine performance calculator. This tool automatically calculates engine work and other relevant engine design parameters for 360 degree or 720 degree ranges from CONVERGE output and the engine parameters in your case setup. The results are collated in a table so that you can easily export them to a spreadsheet.

Documentation

  • Several case setup tutorial video series on the Convergent Science YouTube channel provide step-by-step walkthroughs of full case setups. Refer to these for information on surface preparation, case setup, simulation, and post-processing of some basic CONVERGE example cases.
  • On our CFD Online support forum, you can interact with other CONVERGE CFD users and our knowledgeable and approachable support team for assistance.

Performing CFD analyses can be difficult due to the number of unknowns, uncertainty of boundary conditions, and complexity of flows. CONVERGE CFD helps you by removing the necessity of meshing and giving you auxiliary tools to simplify your workflow.

► Machine Learning for Automotive Engine Design
    7 Aug, 2018

In the last five years, “machine learning” has become a veritable buzzword. From applications as diverse as traffic forecasting and the virtual assistant on your smartphone to genome sequencing, researchers employ machine learning across a broad array of fields to improve predictions based on big datasets.

Beyond adding convenience to everyday life, machine learning can contribute to technology development as well. In a recent collaboration between Argonne National Laboratory, Aramco, and Convergent Science, Moiz et al. applied machine learning techniques to automotive engine research, enhancing computational fluid dynamics (CFD) studies performed in CONVERGE CFD [1]. Machine learning leverages existing datasets to predict and optimize new designs with improved performance, higher efficiency, and reduced emissions. In light of market competition and increasingly strict emissions requirements, the union of machine learning and engine CFD is a promising development.

Machine Learning Overview

At a very basic level, machine learning means leveraging data to make accurate predictions. An example of this that we encounter every day is targeted advertising. Marketers use machine learning to take information about our demographics and interests and provide relevant product recommendations. More often than not, these recommendations are startlingly accurate.

The first step in developing a machine learning model is to collect large datasets. Next, the machine learning model applies computational statistics to the data, detecting relationships between inputs and outputs. This process is known as training the model. To evaluate the accuracy of the model, scientists often supply the model with a test dataset that was not included in the training data and examine how well it predicts the known outputs. The better the model performs on this held-out data, the more confidence you can place in its predictions for new cases. Many machine learning algorithms exist (decision trees, support vector machines, neural networks, etc.), some of which have been in development for decades.
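To make this workflow concrete, here is a minimal sketch of the train-and-test loop in Python using scikit-learn. The inputs and the response below are synthetic placeholders that stand in for a DoE of CFD results; it illustrates the concept only and is not the code used in the study.

    # Minimal train/test sketch (illustrative only; synthetic stand-in for CFD data)
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(500, 5))                   # 500 designs, 5 input parameters
    y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2 - 0.5 * X[:, 2]   # placeholder "CFD" response

    # Hold out data the model never sees during training to measure predictive accuracy
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)                                 # "training the model"

    print("R^2 on held-out test data:", r2_score(y_test, model.predict(X_test)))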

CFD Applications

A popular optimization technique for engine designers is the genetic algorithm (GA). CONVERGE includes such a tool, CONGO, which takes a “survival of the fittest” approach to optimizing a design. That is, the method pits individuals (designs) against each other in a population, with a set of user-defined parameters that vary between individuals. Each individual carries its own values of the parameters to optimize, such as combustion phasing, combustion shape design, etc. The goal of a GA study is to optimize a result such as indicated specific fuel consumption while staying within constraints such as emissions or peak cylinder pressure.

By definition, a genetic algorithm study runs over many successive generations, and the designs in one generation must be evaluated before the next can begin. Because most engine CFD simulations require between a day and a week to produce individual results, a full GA optimization can stretch to months, far longer than engine researchers can typically afford. To address this, Moiz et al. combined machine learning with genetic algorithm optimization to quickly develop gasoline compression ignition (GCI) engine designs. The engine analyzed in the work uses a low-octane gasoline fuel in partially premixed compression ignition.

First, the scientists ran a large (2048 individual CONVERGE simulations) space-filling design of experiments (DoE) to create a training dataset. Since the DoE can be defined all at once, the simulations ran concurrently. With the advent of large HPC clusters like the Mira supercomputer at Argonne National Laboratory, the entire DoE of CFD simulations ran in a few days. The authors also investigated smaller subsets of the training dataset to see whether a less expensive DoE would be sufficient, and found that the learning curves remained promising down to a sample size of 300.
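As an aside, a space-filling DoE of this kind can be generated with a Latin hypercube sampler; the short Python sketch below shows the idea. The parameter count and bounds are hypothetical placeholders, not the values used by Moiz et al.

    # Space-filling DoE via Latin hypercube sampling (illustrative placeholders only)
    import numpy as np
    from scipy.stats import qmc

    n_samples = 300      # the reduced sample size found adequate in the learning-curve check
    n_params = 5         # hypothetical number of design parameters

    sampler = qmc.LatinHypercube(d=n_params, seed=0)
    unit_samples = sampler.random(n=n_samples)                # points in [0, 1)^d

    lower = np.array([-10.0,  500.0, 0.0, 0.5, 300.0])        # hypothetical lower bounds
    upper = np.array([ 10.0, 2000.0, 0.5, 3.0, 400.0])        # hypothetical upper bounds
    designs = qmc.scale(unit_samples, lower, upper)           # each row is one CFD case to run

    print(designs.shape)  # (300, 5) independent cases that can run concurrently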

An emerging combustion technology like GCI has ample room for optimization to maximize efficiency and minimize emissions, and computational studies are ideal for this task. In the current work, the authors employed a machine learning genetic algorithm (ML GA) approach to shorten the design cycle for optimizing a GCI engine and overcome the obstacles described above. The general procedure is as follows:

  1. Ran over two thousand high-fidelity CFD simulations in CONVERGE to create a large dataset on which to train the machine learning model,
  2. Trained and tested the machine learning model on the CFD data,
  3. Used the trained machine learning model as an emulator of the design space within a genetic algorithm to optimize the engine design.

The machine learning (ML) GA procedure offers a clear speed advantage over a traditional GA optimization. First, engineers can run the initial CFD simulations in parallel, generating the seed data very quickly. Second, the ML GA emulator can evaluate an individual design in a few seconds, whereas a high-fidelity CFD simulation can take around 12 hours on 128 processors.
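The sketch below illustrates the emulator-driven loop in Python: a cheap surrogate stands in for the CFD solver inside a simple genetic algorithm. It is a toy example, not the CONGO tool or the authors' implementation, and the surrogate is a placeholder analytic function rather than a trained model.

    # Toy ML GA loop: a surrogate replaces CFD as the GA objective (illustrative only)
    import numpy as np

    rng = np.random.default_rng(1)
    n_params, pop_size, n_gen = 5, 64, 100
    lower, upper = np.zeros(n_params), np.ones(n_params)

    def surrogate(x):
        # Stand-in for model.predict(x); lower is better (e.g., fuel consumption)
        return np.sum((x - 0.3) ** 2)

    pop = rng.uniform(lower, upper, size=(pop_size, n_params))
    for gen in range(n_gen):
        fitness = np.array([surrogate(ind) for ind in pop])   # seconds, not CPU-hours
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # "survival of the fittest"
        # Crossover: blend randomly chosen pairs of parents
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        w = rng.uniform(size=(pop_size, 1))
        children = w * parents[idx[:, 0]] + (1.0 - w) * parents[idx[:, 1]]
        # Mutation: small random perturbation, clipped to the design-space bounds
        children += rng.normal(0.0, 0.02, size=children.shape)
        pop = np.clip(children, lower, upper)

    best = pop[np.argmin([surrogate(ind) for ind in pop])]
    print("best design found by the emulator-driven GA:", np.round(best, 3))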

In a GA optimization with CFD results as the objective function, sequentially running the CFD simulations is a bottleneck in the process. The ML GA approach, however, reduces the time significantly, allowing a full optimization in approximately a day. An additional benefit of this technique is that engineers can use the initial space-filling DoE datasets for future design space interrogation or uncertainty analyses.

Machine learning is a powerful tool that is becoming ubiquitous in software applications. It is only natural that, when combined with CFD, ML GA methods help designers more rapidly optimize engine efficiency and performance.

References

[1] Moiz, A., Pal, P., Probst, D., Pei, Y., Zhang, Y., Som, S., and Kodavasal, J., “A Machine Learning-Genetic Algorithm (ML-GA) Approach for Rapid Optimization Using High-Performance Computing,” SAE Paper 2018-01-0190, 2018. DOI:10.4271/2018-01-0190

► CONVERGE for Compressors: Proven Tools, New Application
  27 Jun, 2018

Computational fluid dynamics tools such as CONVERGE CFD offer the ability to analyze and optimize compressors without the difficulty and expense (both time and money) of generating and testing physical prototypes.

With CONVERGE, several core technologies make your compressor simulation workflow easier, faster, and more accurate.

AMR Strategy

A staple of the robust feature set in CONVERGE is Adaptive Mesh Refinement (AMR). This feature refines and coarsens the mesh on the fly in response to criteria you specify before starting the simulation. AMR helps maintain resolution in the tight gaps between the moving parts in a compressor. In this way, you can trust CONVERGE to automatically capture relevant flow features.

For compressor simulations, AMR is particularly useful for resolving flow structures around valves. Because the clearances around the valves are small, CONVERGE automatically increases mesh resolution there in response to large gradients in velocity, temperature, and other quantities of interest.

Additionally, you can modify the sub-grid scale (SGS) parameter for fine-grain control of the AMR algorithm sensitivity. As shown in the video below, AMR allows you to accurately resolve the jets of fluid traveling through the valve in a reciprocating compressor.

A grid convergence study further demonstrates the advantages of AMR. In this study, we successively refine the grid until quantities of interest reach a converged value (in this example, and as shown in Figures 1 and 2 below, for discharge valve lift and cylinder pressure). One way to perform a grid convergence study is to reduce the size of the base grid (and thus increase the cell count) for successive runs. A better option is to modify the AMR embedding scale and CONVERGE will create finer grids in the vicinity of high gradients, reaching a converged solution faster and with fewer total cells. Table 1 below compares cell count and wall clock time for the base grid and AMR grid refinement studies shown in Figures 1 and 2. Both the finest base grid and the finest AMR level result in a converged solution, but the simulation with AMR takes less time and uses fewer total cells than the simulation with the finest base grid.

Figures 1 and 2: Discharge valve lift and cylinder pressure compared between refined base grid and increased AMR embed scale
Grid           Cell count   Wall clock time (hrs)
Base grid 1       285,614    0.69
Base grid 2     1,431,153    6.76
Base grid 3     7,577,619   15.80
No AMR            285,600    0.78
AMR level 2       670,359    2.16
AMR level 3     2,138,322    9.55
Table 1: Cell count and wall clock time for base grid and AMR convergence study
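For readers who want to quantify convergence beyond the visual comparison, the standard approach is Richardson extrapolation and the Grid Convergence Index. The Python sketch below shows that calculation on placeholder numbers; the values are not the results behind Table 1.

    # Observed order of convergence and GCI (placeholder values, not Table 1 data)
    import math

    r = 2.0                                              # grid refinement ratio
    f_coarse, f_medium, f_fine = 1.052, 1.021, 1.008     # a monitored quantity on 3 grids

    # Observed order of convergence and extrapolated grid-independent value
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)

    # Grid Convergence Index for the fine grid, with safety factor 1.25
    gci_fine = 1.25 * abs((f_medium - f_fine) / f_fine) / (r ** p - 1.0)

    print(f"observed order p = {p:.2f}")
    print(f"extrapolated value = {f_exact:.4f}")
    print(f"GCI (fine grid) = {100.0 * gci_fine:.2f}%")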

Reed Valve Deformation (FSI)

To further increase the accuracy of compressor calculations, CONVERGE includes fluid-structure interaction (FSI) modeling. This capability allows you to model the interactions between the bulk flow and reed valves (e.g., in reciprocating compressors). This way, you can accurately resolve the physical behavior within the compressor machinery to predict failure points.

The reciprocating compressor shown in the video above employs the 1D clamped beam model in CONVERGE to predict the fluid-structure interaction. Notice how the valve deforms realistically in response to the flow through the valve.

Custom Fluid Properties

In many cases, the working fluid within compressor machinery is far from an ideal gas. In CONVERGE, you can select from several different equation of state models to accurately represent the physical properties of your working fluid. Beyond the ideal gas law, CONVERGE includes cubic models such as Redlich-Kwong and Peng-Robinson to suit your application.
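To give a feel for what a cubic equation of state involves, the sketch below evaluates the Peng-Robinson model for CO2 in Python and compares the resulting density with the ideal gas law. It uses standard textbook critical constants and is purely illustrative; it is not CONVERGE's implementation.

    # Illustrative Peng-Robinson density for CO2 (not CONVERGE's implementation)
    import numpy as np

    R = 8.314462618                            # J/(mol K)
    Tc, Pc, omega = 304.13, 7.377e6, 0.2239    # CO2 critical point and acentric factor
    M = 0.04401                                # molar mass, kg/mol

    def pr_density(T, P):
        """Density (kg/m^3) from the Peng-Robinson equation of state."""
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
        alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
        a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
        b = 0.07780 * R * Tc / Pc
        A, B = a * P / (R * T) ** 2, b * P / (R * T)
        # Cubic in the compressibility factor Z
        coeffs = [1.0, -(1.0 - B), A - 3.0 * B ** 2 - 2.0 * B, -(A * B - B ** 2 - B ** 3)]
        Z = max(z.real for z in np.roots(coeffs) if abs(z.imag) < 1e-10)
        return P * M / (Z * R * T)

    T, P = 320.0, 10.0e6                       # supercritical CO2: 320 K, 10 MPa
    print("Peng-Robinson density:", round(pr_density(T, P), 1), "kg/m^3")
    print("Ideal gas density:    ", round(P * M / (R * T), 1), "kg/m^3")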

Also, you can directly supply custom fluid properties for the working fluid. Instead of linking CONVERGE with a third-party properties library, you can provide tabular data files that contain the fluid properties. These custom properties include viscosity, conductivity, compressibility, and more as a function of temperature.

For many applications, such as with air as the working fluid, the ideal gas law is an appropriate choice for the equation of state (as shown in Figures 3 – 6 below).

Figures 3 to 6: Examples in which the ideal gas law works well for air

Figures 7 – 10 below compare various fluid properties of supercritical CO2 calculated via several different methods. In these examples, the tabular fluid properties match very closely with NIST data. The Peng-Robinson equation of state model provides the next-best match.

Figures 7 to 10: Comparisons of various EOS and tabular data to NIST data

CONVERGE offers several technologies that address the difficulties of compressor CFD while making your workflow easier and more accurate. Want to learn more about integrating CONVERGE into your simulation workflow? Get in touch with us here.

► Return of an Old Friend: One Engineer’s Thoughts on Tecplot 360
  31 May, 2018

You may have seen the press release: starting May 31, a version of the Tecplot 360 flow visualization software will be packaged with CONVERGE. No corporate details here–this is an engineer’s viewpoint. I am a longtime Tecplot user, having worked extensively with nearly every version since 2008 R1. I’m not a trainer, so I won’t try to teach you how to use Tecplot (if you’d like to see a CONVERGE-focused introduction to Tecplot 360, Tecplot Product Manager Scott Fowler gave a webinar earlier this year). Rather, I’ll tell you what I like about it as a CFD research engineer, and what you might like too. The brief version: Tecplot for CONVERGE is a user-friendly tool that works well and makes sense.

To me, the most important characteristic of any tool is usability. If I can’t figure out how to make it work, it’s of no use to me. My introduction to Tecplot involved no formal training and no user guide (although I’m sure there was one available). I was working off of nothing except for my fellow graduate students and a willingness to experiment. It turned out that this was enough!

Tecplot’s user interface is approachable and unintimidating. The workflow is logical and smooth. Some software packages give the impression that they were designed by a GUI team with no engineering knowledge; some packages look like they were written by engineers with no clue about interface design. Tecplot bridges that gap. The answer to “How do I…” is usually logical and straightforward, and Tecplot feels like it was designed from the ground up by a team that had extensive experience with CFD.

Let me give you an example. Suppose I’m loading a complex 3D flowfield. When I first load that dataset, I don’t know exactly how I want to visualize it. I will probably pan, zoom, and rotate through a wide variety of views, trying to figure out the best perspective from which to visualize my flow. As with many packages, in Tecplot I can do this with just the mouse, without resorting to menu buttons to change modes. The difference is, once I find a view I like, Tecplot gives me precise control over the camera location and direction. If I alter the view and don’t like the change, I can revert the view setup. If I want to compare several different datasets, I don’t have to fiddle around with the mouse controls to get approximately the perspective I want. I can copy the center of rotation and spherical angles and get precisely the right view, with a minimum of fuss. As my understanding of my dataset grows from general familiarity to exacting detail, Tecplot offers me increasingly exacting controls.

I like Tecplot’s approach to data and file structuring. In my workflow, all data at a certain simulation time is written to a single <casename>_<filenumber>_<time>.plt file. Because of this one-to-one relationship, I can see at a glance how many datasets I have available for my case (rather than having different files for different variables). When I load my .plt file, data are structured by zone. Each zone might have different variables (e.g., a fluid zone versus a parcel zone), and I can display them differently. I can extract sub-zones (e.g., a slice from a fluid zone) and display those separately. If I loaded multiple data files, I can animate zones over time. If I write out a new .plt file, I can write specific zones to that file. This is especially helpful for, say, preserving interesting cross-sections of a very large volumetric dataset.

Importantly, Tecplot for CONVERGE retains nearly the entire feature set of a stand-alone Tecplot 360 installation. It is not limited in cell count, processor count, plot details, data alteration, or most other functional details. The chief restriction is in file formatting: Tecplot for CONVERGE is limited to output files that have been converted using a new post_convert executable. This post_convert utility will be released with Tecplot for CONVERGE and will be included with all future CONVERGE Studio and CONVERGE solver packages. Make sure to select the “Tecplot for CONVERGE or Tecplot 360” option in post_convert when converting output files. If you opt to purchase a full Tecplot 360 license, it can of course read these specially formatted files as well.

Tecplot offers powerful data alteration and calculation tools, as well as a robust scripting capability. Once you tell Tecplot which variable names correspond to which state variables, Tecplot can calculate on demand many derived quantities commonly used in CFD. No more trying to remember the functional form of the Q-criterion, because it’s built in. If the quantity you need is not included in this hundred-odd set, you can directly specify whatever calculation you wish to perform. Further, all of this can be scripted in a macro. You can record macros through the GUI (which journals all of the actions you’ve performed) and play them back, or you can write a plaintext macro directly.
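For reference, the Q-criterion itself is just Q = 0.5*(||Ω||² − ||S||²), where S and Ω are the symmetric and antisymmetric parts of the velocity-gradient tensor. The short Python sketch below spells out that definition; it is the standard formula, not Tecplot's internal code.

    # Standard Q-criterion definition (not Tecplot's code; Tecplot computes it for you)
    import numpy as np

    def q_criterion(grad_u):
        """Q = 0.5*(||Omega||^2 - ||S||^2) for a 3x3 velocity-gradient tensor du_i/dx_j."""
        S = 0.5 * (grad_u + grad_u.T)      # strain-rate tensor (symmetric part)
        Omega = 0.5 * (grad_u - grad_u.T)  # rotation-rate tensor (antisymmetric part)
        return 0.5 * (np.sum(Omega * Omega) - np.sum(S * S))

    # A gradient tensor dominated by rotation gives Q > 0 (a vortex core)
    grad_u = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 0.0]])
    print(q_criterion(grad_u))  # 1.0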

Finally, Tecplot makes attractive images and animations! Much like the viewport commands, Tecplot adopts sensible defaults but gives you exacting control when you want it. I can add, adjust, and remove control points on the contour plot color map, banded or continuous, and drop in contour lines at specific values. I can plot spray parcels by various shapes and in various colors and sizes (including by spray variable values). Tecplot’s flexibility allows me to make a plot in twenty seconds that looks pretty good, spend twenty minutes making it look exactly right, or anywhere in between.

I love Tecplot, and I am very happy that Convergent Science is partnering with the Tecplot team. Not every engineer prefers the same tool, of course. CONVERGE will continue to support a wide range of flow visualization and analysis packages through the post_convert utility. But no matter your present visualization tool of choice, I encourage you to take Tecplot for CONVERGE out for a spin. I think you’ll like what you see.

Numerical Simulations using FLOW-3D top

► Bugs and Baffles
  13 Nov, 2018

This blog was contributed by Colton Paul, an intern in the Graphical Users Interface Group.

Getting Into the Flow

Having just graduated out of state at NYU Shanghai and missing the desert something terrible, I found myself back in New Mexico asking the usual question: “What now?” During college my interests jumped around from computer science to physics to math and back again, but I accepted that I would have to settle for only one of those interests at a time. Luckily, I got the chance to meet up with a few Flow Science engineers who convinced me otherwise by describing a computational physics company right here in Santa Fe. I applied for a GUI internship the next morning.

Starting my internship at Flow Science, I was faced with a coding language and libraries that I wasn’t very familiar with, but after a couple weeks I started to get the hang of it. To ease me into our development technologies, my initial task was to take code that I wrote as part of the interview process and build a GUI to interact with it. This would be the first of many examples where I would need to ask myself, “Did I write my code such that it could be extended and reused?”

Colton Paul, Flow Science GUI Intern
Colton (front) reacclimates to Santa Fe’s elevation on a hike with Nathan LeBlanc (the crazy one in back wearing shorts!), CFD Engineer.

At the same time, I was given a copy of the whole UI code repository to play around in, break, and explore. And when I say “explore,” I mean it. I still find large areas of the code that I haven’t seen before. The code’s size and complexity were daunting and vastly different from college projects, where I could easily familiarize myself with every aspect of the repository. FLOW-3D has been a work in progress for decades and compelled me to change my mindset from coding something that works well now to creating something that can still work many years from now. As I started being assigned small tasks in the code, the moving pieces began to come together. I went from needing an hour to change a label to being able to confidently solve some interesting problems in the software. Before long, I was integrating into the workflow of the team and getting into a weekly rhythm.

A Week in the Life of a Flow Science GUI Intern

Mondays kick off with a warm cup of coffee and some of License Administrator Joyce Jensen’s delicious baked goods to fuel my morning. The GUI group gets together to discuss questions that came up during the previous week and what we need to accomplish to stay on schedule. The meetings are more akin to conversations than instructions, and because we’re a small team, everyone’s able to voice their opinions on designs and plans.

I continue working on various tasks in the GUI. Some of them are simple fixes, but others are truly puzzles, and solving them is as satisfying as placing the last piece in a jigsaw. Of course, for every solution there are three better ones, which is why it’s nice to have mentors nudge me toward them. The four senior members of the team alternate mentoring me and my fellow intern, Danya Al-Rawi. Each has a slightly different style of teaching and working, so by the end of the internship we’ll have picked up some unique tips from everyone.

My favorite problems extend beyond software engineering; they’re the ones that require knowledge of the physics that FLOW-3D simulates and why there’s so much communication needed throughout the company. Calling senior CFD engineer Dan Milano and stopping by VP of Product Development John Ditter’s desk is well ingrained into my weekly routine. “How does our physics engine simulate boundary conditions in cylindrical coordinates?” “Which of these options is more intuitive for users simulating valves?” “We have a bet on how mesh blocks work. Please settle it for us!” Question by question, I start to understand what FLOW-3D is capable of and see the bigger picture behind some of my small tasks.

Although one learns a lot through practice, the team makes a point to stay sharp on software engineering theory. My favorite part of the week is getting together for a “Lunch and Learn” where someone on the team presents a software engineering concept, and everyone eats and discusses. Being out of school, it’s a nice change to get some free lectures.

Once I’ve shut down my computer, the next steps depend on the day. Almost every week after work some of the Flow Science crew hit the field for some casual soccer, which serves as an important reminder of both Santa Fe’s elevation and how necessary it is for me to run a bit. Fridays on the other hand are for pulling out the Wii U and letting the day’s frustrations out on Super Smash Bros or hanging out on the patio when the weather demands it.

Moving Forward

I’m just over halfway through the internship, but I can confidently say that I’m a better software engineer than I was two months ago. I’ve appreciated the opportunity to work in this environment and the saintly patience required of my mentors and coworkers to address my endless questions. I’m looking forward to discovering what the rest of the internship will bring.

► Angles of Inclination
  24 Oct, 2018

In her intern blog, Allyce Jackman discusses applying her Mech Eng degree to CFD.

Going from an academic to a professional setting is a daunting yet exciting transition. After studying a broad range of topics in mechanical engineering there’s both excitement and fear at all the possible options to pursue. I wanted to stay in Santa Fe and had heard amazing things about Flow Science, so I decided to put my hat in the ring for a CFD internship. After a long interview with topics ranging from volcanoes to sales I already had an itch for more. To my delight, I had scored an internship.

Allyce Jackman, Flow Science Intern
Ally along with her mentor, John Wendelbo on National Intern Day.

Discovering my role

When I began as a CFD intern at Flow Science I wasn’t entirely sure what my role would entail. However, I was soon given a general multiphysics track with an emphasis in coating applications. Due to the broad nature of the internship I had to decide my own approach to tackling coating simulations which began with reading paper after paper on coating technology, experiments, and general theory. I often found myself in a sort of chaotic dance between researching and simulating on three different computers. I quickly got to the point where I had so many tabs open that the top of my browser resembled a deck of cards. To say that I was knee-deep in coating research would be appropriate, and I still feel that I haven’t even scratched the surface of what there is to learn.

Wrapping my head around coating

Have you ever thought about how photos get that glossy finish? Or how that primer got in every nook and orifice of your car? Or how all those labels on packaged foods were printed? I can tell you that I certainly hadn’t, and when beginning my journey into coating simulations I had no idea how far this topic could reach. After reading a mountain of research papers, which were more academic in nature, I realized I needed to take a step back and get a better idea of what coating looks like in a manufacturing setting. I watched some videos of large-scale coating machines in action and realized that there is so much to consider when identifying parameters that could be implemented for simulation. A typical roll coating machine will have 5 to 6 rollers, all with different purposes, made of different material, and sometimes running at different speeds. There are rollers for application, metering, drying, and winding. The nature of each roller in conjunction with the nature of a given fluid will determine whether your coating will come out even or riddled with defects, and there are endless configurations.

Getting down to business

When setting up a coating simulation using CFD it is important to identify what results you’re after and what scales they exist on. For instance, when using a large machine (about the size of a smart car) to produce a very thin coating (thinner than a sheet of paper!) you must decide whether to model at the macro scale, to incorporate machine parameters, or at the micro scale, to capture the micron-sized coating thickness. Typically, you will be interested in the micro scale, in which case you have to correctly identify which area of your domain to isolate.

For instance, I set up a validation study in an attempt to capture the ribbing defect resulting from a high capillary number in forward roll coating. My domain was a 25 mm² block at the meniscus of the two rolls. This block contained 20,000,000 cells and was expected to take 16 days for a 1 second simulation. I let it run over a long weekend, and when I returned, the picture that emerged after a long render of a significant number of data points was a beautifully ribbed coating thickness. I celebrated in the early morning office silence, as I had correctly chosen the area that would show exactly what I was after.

However, this success on the first try was by no means the norm, and due to my initial lack of knowledge of CFD I needed to elicit a good amount of help from the brilliant minds that fuel Flow Science. In doing this I discovered just how genuinely interested everyone is in CFD and its usefulness in solving problems. I quickly learned how exciting it can be to correctly simulate a problem in FLOW-3D and further validate the accuracy of the program. The staff at Flow Science are obviously highly intelligent, but perhaps less obvious is their humble and inclusive nature. When I made my final presentation to showcase what I had done over the course of my internship, it was quite obvious that everyone wanted me to succeed, which made this daunting task a lot more approachable. It was so interesting looking back on all the material I had created and realizing how my process had improved compared with my early models, which were either overly complicated or missing important information. Participating in this internship at Flow Science not only helped my technical knowledge, but gave me a lot of insight into how to work efficiently and organize my work to make for a compelling presentation.

Editor’s note: After completing a very successful internship, Ally will join the Flow Science Sales team as a full-time CFD Engineer in November.

► Metal Casting and Mountain Climbing
  26 Sep, 2018

This blog was contributed by Ajit D’Brass, CFD Intern at Flow Science.

Month One – Heading West

June was slowly creeping towards triple-digit heat, the AC in my car needed a recharge, and I had just graduated from college. I never thought I’d leave the great State of Texas for its enchanting next-door neighbor, New Mexico. But ready for change, I jumped at the opportunity to test my mettle as an engineering intern at Flow Science.

On my way to Santa Fe, my mind wandered to the possibilities of the future. My knowledge of Flow Science was limited to my interviews and a terse web search. A connection to LANL, very suave simulations, and a software company with a highly-educated crew; high science in the high desert.

Ajit D'Brass, Flow Science Intern
Ajit and his mentor, Karthik Ramaswamy, a CFD Engineer at Flow Science.

On my first day I met with John Wendelbo (Director of Sales). He caught me up on the various teams in the company and introduced me to the staff. Everyone extended an inviting greeting and the office felt casual. The staff is a diverse group of people who are working on complex real-world CFD problems. As I was settling into my new workstation, I heard a small cheer from the Free Surface Café.  To my surprise, lunch breaks were accompanied by a fervent viewing of the 2018 World Cup. WHAT!? CFD and Soccer. I immediately felt that I had made the right decision.

My first weeks at Flow Science were spent improving my CAD skills and learning how to set up models using FLOW-3D CAST. It was an immediate and challenging check of my knowledge of fluid dynamics and practical heat transfer topics. To aid my journey, I was encouraged to reach out to my coworkers. During this preliminary stage of my internship (aka confusion, disillusion, inferiority, and frustration) having the support, sales, and development teams at my disposal proved an immense benefit. I’m truly grateful for their approachable nature and clear advice.

Wrapping up the month, Amir Isfahani (CEO) hosted an open house for the local tech community and gave a live Q&A with the Mayor of Santa Fe, Alan Webber. The whole vibe of the evening was positive. There was also a visible sense of pride in the innovation throughout the company’s history.

Month Two – Mountains and Metal

I joined the weekly soccer game that the staff plays on Tuesdays. Apart from being good exercise, it was a great way to get to know everyone outside of the office. While the games are only semi-competitive, there are top-level disagreements over probable goals. A good engineer can make a good argument!

Northern New Mexico houses some of the most breathtaking landscapes that I’ve experienced. Driving north through town I always spot the Sangre de Cristo Mountains looming on the horizon. I was invited to hike to Lake Catherine in the Pecos Wilderness. I’ve never really trekked but felt game to get outside the city. The reality of the 13.1-mile hike was immediately realized by my protesting legs and lungs. Although, the reward of the view…

Ajit treks to Lake Catherine
Ajit (left) arrives at Lake Catherine with his fellow trekkers, Karthik (center) and Paree (right).

Meanwhile, my modeling skills were getting sharp. John gave me the go-ahead to start creating my own projects based on the objectives of the internship program. My focus was designing full riser and gating systems for gravity pour castings. I also started to review research papers on alloy solidification. Simulation of casting takes into account many different physics models and getting these in harmony is amazing work.

Finally, I started getting a feel for the town’s meandering streets…

Month Three – Monsoons

It started raining every evening here in Santa Fe. The nights grew cool and brisk. I accompanied another hiking group from the office to Wheeler Peak (the highest point in New Mexico, at 13,159’). It proved to be another physically and mentally challenging trek, once again reminding me of how hard work and dedication pay off. Plus, the drive from Santa Fe to Taos is stunning!

I was now feeling confident as a FLOW-3D CAST user. I’ve had some breakthroughs on understanding how simulations are derived, how users can connect with this program, and how to efficiently set up simulations for objective-based analysis.

The capstone deliverable of the internship program was a presentation in front of the company. I’ve worked with tight deadlines before, but never in a CFD environment. What I found was that communicating one’s understanding is as important as one’s comprehension. The sales team here communicates with clarity and accuracy, and I hope to get the knack for it.

The last two weeks I spent time building meaningful simulations to go with a clear PowerPoint presentation. I received a ton of feedback in the form of “dry-runs” and critiques. It’s nice to know that this is something that requires practice and repetition. While there is a slight improvised sensibility, knowing the cues that create a meaningful narrative flow is imperative.

It’s been a great experience here, and I have nothing but gratitude for the people of Flow Science. They have opened their hearts and minds to me. The knowledge I’ve acquired will inform both my professional and personal life to come.

Editor’s note: After taking a well-deserved break to go “wrangle some details in Texas,” Ajit will join the Flow Science Sales team as a full-time CFD Engineer in October.

► Mitigating Total Dissolved Gas at Boundary Dam
  17 Sep, 2018

This article was contributed by Nikou Jalayeri, Water Resources Engineer at HATCH

The Boundary Dam is located on the Pend Oreille River in northeastern Washington. The project consists of a 340 ft. high concrete arch dam, seven low level sluiceway outlets, two high level overflow spillways (Spillway 1 and Spillway 2), and an approximately 1,003 MW authorized capacity powerhouse. The spillway and sluiceway discharge at the Boundary Hydroelectric Development have been shown to produce high total dissolved gas (TDG) concentrations in the tailwater of the spillway and the river reach downstream. Studies were commissioned to determine modifications to the project’s spillway structures to help mitigate this gas production. Resolution of many of the hydraulic design issues for the study relied heavily on the results of numerical hydraulic models. These modifications were constructed and tested in the field. The CFD model that was developed in support of these studies was used to simulate flows through a number of the project’s seven sluice gates and two overflow spillways. This model was also used to simulate the entry and movement of these flows through the project’s downstream plunge pool and powerhouse area.

FLOW-3D model spillway roughness elements
Figure 1. 3-D View of Spillway 1 Roughness Elements

FLOW-3D was selected for the analysis given its ability to simulate free falling jets, and its unique algorithm for simulating air entrainment by turbulence at the free surface. These capabilities make the program very well suited for simulating the varied and complex flow conditions in the project tailrace. The FLOW-3D models developed for the Boundary Dam study have primarily been used to develop an understanding of the governing hydraulic and hydrodynamic processes driving gas exchange in the tailrace of the existing project under spill conditions. In addition, these models have been used to develop the designs of structural TDG mitigation alternatives (including estimation of the hydraulic loads expected on proposed appurtenances) and, in combination with the TDG predictive model, to predict the TDG performance of proposed TDG mitigation alternatives.

Boundary Dam Hatch FLOW-3D
Figure 2. 3D View of the Unmodified Spillway 1 Jet: 10,000 cfs Flow (left), 13,000 cfs Flow (right)

To do so, representative air bubbles were released on the spillway in the model and tracked as they were entrained into the plunge pool and tailrace, circulated within the plunge pool, and eventually exhausted at the surface.  The model tracked the pressure- and time-histories associated with each of these representative air bubbles.  This data was then used as input to a TDG predictive tool to help predict total dissolved gas production in the tailrace. The overall predictive performance was successfully calibrated and validated to actual prototype (field) TDG data.  TDG predictions were made for the project using a two-step process:  the CFD model was first applied to assess the plunge pool hydraulics and flow patterns, and then the hydraulic output of the CFD model was imported into the Plunge Pool Gas Transfer (PPGT) model, which was developed using Excel.

The model was first run to simulate flow conditions for the existing or base case scenario with flows of 10,000, 13,000, and 20,000 cfs through each of the Project Spillways. The simulated hydraulic conditions for this test were analyzed. Bubble particles were then added to this model, the run was re-started, and the particles were tracked until they were able to reach the surface, and exhaust back into the atmosphere.

Following the base case runs, various CFD simulations were conducted to assess the hydraulic conditions that would result from the introduction of Roughness Elements (REs) on the downstream end of the spillway chute.  The introduction of these REs helps to break up the jet at the end of the chute more quickly and efficiently, accelerating boundary layer growth and resulting in the formation of small “packets” of water entering the plunge pool rather than coherent streams/jets.  This accelerated breakup of the jet will help to reduce overall plunge depths, and reduce gas transfer. Given concerns for potential cavitation damage on the spillway chute floor and on the REs themselves, additional runs were undertaken to test the effect on flow conditions at the REs if a ramp were to be installed immediately upstream of the roughness elements. The Spillway 1 RE geometry is presented in Figure 1.

Modified spillway FLOW-3D design
Figure 3. 3D View of the Modified Spillway 1 Jet: 10,000 cfs Flow (left), 13,000 cfs Flow (right)

The final model results were used to help assess the impact that the addition of these modifications would have on TDG levels downstream of the project under a range of flows.  CFD runs were made with identical flow releases through the spillways under both existing and modified conditions; bubble histories were extracted from the CFD results and input to the TDG predictive spreadsheet model. The results showed that the proposed RE configuration for Spillway 1 is effective at reducing TDG production, but appears to deliver the greatest TDG reduction when operating at a flow of approximately 10,000 cfs. For higher flows, the ability of the roughness elements to break up the jet appears to be reduced, since the jet begins to override the roughness elements. This results in the formation of a more competent jet core that is able to penetrate the plunge pool to a greater depth. Figures 2 and 3 illustrate the difference between the baseline (existing) case and the modified Spillway 1 for flows of 10,000 cfs and 13,000 cfs, respectively.

► Developer
    6 Sep, 2018

Flow Science is searching for a CFD Developer to join its Research and Development Department. The work of a Developer focuses on research, development, implementation and documentation of new additions and modifications in the numerical methods and computational physics in our flagship CFD software, FLOW-3D. Continuous, cutting-edge research and development remains the cornerstone of Flow Science’s business model in order to meet the increasing demands of FLOW-3D’s commercial and academic user base and maintain its competitive advantage in the CFD industry.

This position will be focused on FLOW-3D’s civil, hydraulics, water & environmental and coastal engineering markets. Applicants with experience in model development in these areas in academia or industry are encouraged to apply.

Experience, skills and knowledge

  • Good understanding and proven experience in numerical methods used in CFD, including heat and mass transfer, and free surface modeling
  • Good understanding of software development process for large projects and an excellent command of modern FORTRAN
  • Experience in hydraulics, water and environmental engineering, and/or coastal engineering CFD modeling: depth-averaged shallow water equations, waves, sediment transport and scour, multi-phase flows, turbulence and air entrainment, and reaction kinetics
  • Experience in implicit and iterative solvers for incompressible flows
  • Experience in developing and debugging large computational codes
  • Good understanding and proven experience in high performance programming using OpenMP and MPI

Education

A PhD in the discipline of civil engineering, mechanical engineering, applied mathematics or physics with focus on CFD is required.

Attributes

The ideal candidate for this position will have excellent oral and written communication skills, excellent interpersonal skills, and the ability to work both independently and as part of a team.

Principal duties

  • Research, development, and implementation of new additions and modifications in numerical methods and computational physics to Flow Science’s flagship CFD software, FLOW-3D.
  • Ensuring that the new developments correspond to the company’s general development goals and are completed in a timely manner.
  • Ensuring that the newly developed algorithms are robust, efficient and of high quality, with a consistent programming style throughout and readable, clear code in the newly added or modified sections.
  • Documenting the new additions and modifications by writing technical notes and amending the User Manual.
  • Maintaining the accuracy, stability and efficiency of the main solver program, including debugging and finding solutions for issues with the existing and newly added numerical and physical models.
  • Providing assistance to users, primarily through the Company’s support staff, but also directly when necessary.

Benefits

Flow Science offers an exceptional benefits package to full-time employees including employer paid medical, dental, vision coverage, life and disability insurances, 401(k) and profit sharing plans with generous employer matching, and an incentive compensation plan that offers year-end bonus opportunity.

Contact

Resumes may be submitted via mail (Flow Science, Attention: Human Resources, 683 Harkle Road, Santa Fe, NM 87505), fax (505-982-5551) or e-mail (careers@flow3d.com).

Applicants must be available to attend an onsite interview in Santa Fe, New Mexico.

Learn more about careers at Flow Science >

► Exploring Additive Manufacturing and Microfluidics
  29 Aug, 2018

By Ruendy Castillo, Summer Intern at Flow Science

“I’ve never heard of it.” That is what my answer would have been before participating in this internship if I’d been asked what additive manufacturing or microfluidics meant. And truly, I had never heard of such terms before. I still remember on my first day when my supervisor, Paree Allu, started referring to Additive Manufacturing as ‘AM’. I just kept staring at him thinking about the fact that up until that point, I had only been applying the term ‘AM’ to a time convention or a transmitting signal on the radio.

Working at Flow Science meant learning something new every day. Whether it was something simple like connecting to metal casting companies over LinkedIn or something complex, such as understanding parameters in a software simulation; I was able to expand my knowledge and professional skills in a real job setting.

Ruendy Castillo, CFD intern

I am currently working to obtain a Bachelor of Science in Civil Engineering at the University of New Mexico (UNM). You might ask, why would a civil engineer want to work as a computational fluid dynamics (CFD) intern? But then, why not? The level of knowledge gained could not have been greater, and let’s say, knowledge really takes you places. This year I was able to be part of the Science, Technology, Engineering, and Mathematics Talent Expansion Program (STEP) funded by the National Science Foundation. This program provides UNM students an eight-week summer internship with a company/agency or a faculty member at the University. I was also fortunate to be part of the National Migrant Scholars Internship (NMSI) Initiative operated by the Migrant Student Services at Michigan State University. This program identifies and recruits College Assistance Migrant Program students from across the nation, and then matches them with experiential learning opportunities preparing them for future careers.

My internship at Flow Science was split into two sectors: Additive Manufacturing and Microfluidics.

Additive Manufacturing: Adding a Layer of Complexity

Additive Manufacturing, or AM, is a process by which three-dimensional objects are built using a layer-upon-layer approach. Some additive manufacturing processes include directed energy deposition, binder jetting and laser powder bed fusion. FLOW-3D is CFD software that simulates additive manufacturing processes, helping users understand the underlying physics at the micro and meso scales. Using experimental data such as molten pool dimensions, it is possible to calibrate FLOW-3D models, and once calibrated, the software allows users to develop process windows for different kinds of alloys, minimizing the number of experimental runs.

Paree and I worked together on calibrating FLOW-3D models using experimental data for laser powder bed fusion processes from the National Institute of Standards and Technology (NIST). The NIST experimental data consisted of a series of laser scan tracks made on a bare surface of IN625, a nickel-based superalloy. Once we calibrated the models using the experimental data from NIST, our objective was to compare how FLOW-3D did against test data for newer process parameters. Daily, I would set up and run these simulations in the software, then analyze and record the results to capture any trends in our data. A total of 40 simulations were run for the calibration and test studies, each taking around 3-4 hours. At the end of the internship, I had an opportunity to give a presentation on our findings to the entire company. Not only were we close in matching the simulation results to the experimental data for the test cases, but we also uncovered useful trends in the data that showed how laser power and scan speed can affect melt pool length, depth and width in laser powder bed fusion processes.

The company also sells FLOW-3D CAST, which is a specialized version of FLOW-3D for metal casting simulations. I had an opportunity to research die casting companies in the Michigan area since our company was heading there to give a presentation at a North American Die Casting Association workshop. I would study the company websites, find contact information and call to see if they would be interested in meeting Paree to talk about how FLOW-3D CAST could bring great value to their company. I still remember how my first call went horribly wrong. Even if it sounds funny, I had to mentally prepare myself. I also wrote a script with the exact words that I was going to use when calling the companies. Let me tell you, I was nervous. I was also able to connect with over 300 users on LinkedIn that specialize in Additive Manufacturing, which for me, sounds ‘kinda’ impressive. Through this exercise, I learned how to communicate technical knowledge effectively over the phone and email since communicating is an important skill for engineers to master.

Spinning into Microfluidics

The second part of my internship focused on the research of pneumatic pumping in centrifugal microfluidic platforms. At this point, I started working with Adwaith Gupta, a CFD Engineer at Flow Science. Adwaith is part of the marketing team and oversees the microfluidics industry. Essentially, the same objectives applied as in my AM work: validating FLOW-3D results with experimental data.

For over 40 years, the centrifugal microfluidic platform, otherwise known as the compact disc (CD), has been a research topic in both academia and industry. Currently, CD microfluidics is emerging as an advanced system for lab-on-a-chip (or lab-on-a-CD) applications primarily geared towards the biomedical industry. Many applications use CD microfluidics, such as rapid diagnostics using immunoassays and plasma separation from blood. Most CDs rely either on special surface treatments applied to the CD surface or on active forces (generated by magnetic or electric devices). These approaches to strictly controlling the behavior of liquid inside these miniature CDs can be cumbersome because of the extra chemicals (surface treatments) or the extra devices required. Inherently, CDs allow uni-directional flow of liquid due to rotational forces, limiting the real estate for designing complex immunoassays while maintaining the compactness of the disc. To overcome all these limitations of traditional CDs, the researchers presented an alternate technique for centrifugal microfluidics that uses pneumatic compression.

The research consisted of a specially designed fluidic manifold with different compartments. Initially the compartments contain air, but as the rotational frequency of the CD increases, the liquid compresses this trapped air. When the disc slows down, the stored pneumatic energy (the energy contained in the compressed air) is released, pumping fluid back toward the center of the CD and thus overcoming the uni-directional fluid movement. (Wow, I feel amazed for having the ability to explain this process!) Let me remind you that these are very tiny (a few centimeters in diameter) discs with the ability to miniaturize the many applications normally done in a lab, therefore it’s difficult at times to obtain exact data.
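To put rough numbers on the mechanism, the back-of-envelope Python sketch below estimates the centrifugal pressure rise across a liquid column and the resulting compression of a trapped air pocket. The dimensions and spin speed are invented for illustration and are not taken from the paper.

    # Back-of-envelope pneumatic pumping estimate (illustrative numbers only)
    import math

    rho = 1000.0                  # liquid density, kg/m^3 (water-like)
    rpm = 3000.0                  # disc rotational speed
    omega = 2.0 * math.pi * rpm / 60.0
    r1, r2 = 0.010, 0.050         # inner/outer radii of the liquid column, m

    # Centrifugal pressure rise across the liquid column
    dp = 0.5 * rho * omega ** 2 * (r2 ** 2 - r1 ** 2)

    # Isothermal compression of the trapped air pocket (Boyle's law)
    p_atm = 101325.0
    V1 = 10e-9                    # initial air volume: 10 microliters, in m^3
    V2 = V1 * p_atm / (p_atm + dp)

    print(f"centrifugal pressure rise: {dp / 1000.0:.1f} kPa")
    print(f"air pocket compressed from {V1 * 1e9:.1f} uL to {V2 * 1e9:.1f} uL")
    # When the disc slows, this overpressure is released and pushes liquid back
    # toward the center of the CD.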

I set up the simulations, recorded the simulation data, and compared it to the experimental data. This project was presented at the end of the internship to the company. While working with Adwaith, I also had the opportunity to complete a list of about 200 companies around the world that work in the microfluidics industry. The goal of this list is to reach out to these companies and showcase the applicability of FLOW-3D to their work.

The Takeaways

Certainly, it was a very intense summer. The level of work and commitment at this internship was very high. But, even when it was expected that I put in all my possible time, the willingness to help me was tremendous. As I remember Paree telling me during my interview, “We don’t want you to fail in this program,” and that motivated me to work even harder. During the coming year, I will be part of El Puente Research Fellowship. This program supports and promotes undergraduate research to prepare students for graduate level education over the course of two semesters. This internship gave me confidence for my future work. I feel more prepared, and now, the only thing left to say is thank you!

Mentor Blog top

► Technology Overview: Mercury Racing Intercooler Filter Using Simcenter FLOEFD
  14 Nov, 2018

Hiro Yukioka, Senior Engineer/Technical Specialist from Mercury Marine, discusses how Simcenter FLOEFD aids the engineering process of designing an intercooler filter.

► Technology Overview: Smart Design Series: Gaining Insight Into Intake Manifold Designs
  13 Nov, 2018

Welcome to the Smart Design Series, a library of videos developed to help you become familiar with how Computational Fluid Dynamics (CFD) can be utilized by the design engineer. The series will illustrate how your CAD models can be easily analyzed and dissected to give insight into the workings of their behavior with Simcenter FLOEFD, the award-winning CAD-embedded CFD solution for the design engineer. In this installment of the series, we will review an intake manifold design. The model is quickly prepared for analysis and analyzed. A parametric study is then set up to find how the angle of the throttle valve will affect the flow rate at the exit and then we’ll compare the results between different scenarios. We will also set up a transient study to see the changes in pressure over time when each exit has a different pressure profile versus time. We should point out that while this video features Siemens NX, you can expect the same level of integration with Creo, CATIA V5 and Solid Edge. Watch this 5-minute video now.

► Blog Post: Calling all Engineering Students in Europe
    8 Nov, 2018
If you’re an engineering student and can get to Prague on your own steam then read on! The Siemens Simcenter Conference, the world’s premier engineering simulation and test event, is taking place December 3rd – 5th in Prague, Czech Republic. And to thank the next generation of engineers for their hard work, Siemens would like to invite 8 university students to attend the conference. How
► Training course: Solid Edge Electrical Design On-Demand Training Library
    2 Nov, 2018

These Solid Edge Electrical courses include interactive videos, written course materials, knowledge checks and hands-on labs through a Virtual Lab platform.

► Blog Post: Article Roundup: Thermal Simulation for Autonomous Cars, Cloud-Based Chip Design, Reliability Verification, Wally Rhines on PCB Design & SMTP in Embedded Devices
    1 Nov, 2018

  • Thermal Simulation Software Aims to Improve Design of Autonomous Cars
  • Is Cloud Computing Suitable For Chip Design?
  • Beyond DRC and LVS, why Reliability Verification is used by Foundries
  • Wally Rhines Talks About the Future of PCB and System Design
  • Using an SMTP client

Thermal Simulation Software Aims to Improve Design of Autonomous Cars – Design News: The thermal behavior of an autonomous electric vehicle

► On-demand Web Seminar: What’s New in Simcenter Flomaster Software V9.1
  25 Oct, 2018

Find out what’s new in Simcenter Flomaster Software V9.1.

Tecplot Blog top

► Tecplot 360 2018 R2 Helps Geoscientists Analyze Simulated Data
  17 Oct, 2018

Load netCDF data with FVCOM loader, import georeferenced images and shapefiles, compute vertical transects

BELLEVUE, WA (October 17, 2018) – Tecplot, Inc. has announced the general availability of Tecplot 360 2018 Release 2.

This release will benefit geoscientists who work with results from numerical models such as FVCOM (Finite-Volume Community Ocean Model), ROMS (Regional Ocean Modeling System), WRF (Weather Research Forecasting Model) and Telemac (Loaders for ROMS, WRF and Telemac are available upon request, contact support@tecplot.com). Highlights that will benefit general Tecplot customers include new colormaps, handy Python scripts, updated Excel add-in, and a CONVERGE output file loader. See all of what is new.

“In our research [of the Geoscience market], we’ve found that most geoscientists spend an inordinate amount of time writing and modifying scripts to evaluate their simulation output,” says Scott Fowler, Tecplot 360 Product Manager. “We’ve also found that many with 3D model results are missing important features in their data because they use tools that do not easily support viewing and exploring the full 3D data.”

Puget Sound in Tecplot 360

Georeferenced image of Puget Sound (Salish Sea) with vertical transect. Image created with Tecplot 360

Unlike script-based post-processing, Tecplot 360 allows geoscientists to quickly view and analyze full 3D models in an interactive user interface – without the need to write scripts. Tecplot 360 allows a quick and thorough exploration of XY, 2D and 3D results, which ensures that important information is not overlooked.

Automation can speed analysis, and Tecplot 360 has a rich Python API known as PyTecplot. The PyTecplot API can generate images and movies and perform advanced analysis. Python scripts may be recorded based on user actions, which accelerates understanding of the API.
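
As a quick illustration of the kind of automation described above, here is a minimal PyTecplot sketch that loads a data file and exports an image; the file names are placeholders, and the exact calls should be checked against the PyTecplot documentation for your version.

import tecplot as tp
from tecplot.constant import PlotType

# Load a Tecplot data file (placeholder file name)
dataset = tp.data.load_tecplot('salish_sea.plt')

# Switch the active frame to a 3D Cartesian plot and fit the data in view
frame = tp.active_frame()
frame.plot_type = PlotType.Cartesian3D
frame.plot().view.fit()

# Export the current frame as a PNG image
tp.export.save_png('salish_sea.png', width=1200)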

Unlike open-source solutions, Tecplot 360 comes with a responsive and knowledgeable Technical Support Team. Tecplot’s website reports that 96.4% of support inquiries receive a response within one business day, and 61.3% receive an immediate response.

Key features in this release for geoscientists are:

  • FVCOM loader for netCDF data.
  • ROMS, WRF, Telemac loaders available on request.
  • Georeferenced image import to help communicate the location of your data (see video).
  • Shapefile import to give additional context to the location of your data (see video).
  • Vertical transect (curved slices) computation for vertically projecting a 2D plane along a prescribed path (see video).
  • New colormaps for creating beautiful plots.
  • Load-on-Demand technology for loading huge datasets.

Download Tecplot 360

Tecplot 360 2018 Release 2 is available for download as Free Trial Software, or for customers through the MyTecplot Customer Portal.

About Tecplot, Inc.

Tecplot, an operating company of Toronto-based Constellation Software, Inc. (CSI), is the leading independent developer of visualization and analysis software for engineers and scientists. CSI is a public company listed on the Toronto Stock Exchange (TSX:CSU). CSI acquires, manages and builds software businesses that provide mission-critical solutions in specific vertical markets.

Tecplot visualization and analysis software allows customers using desktop computers and laptops to quickly analyze and understand (local or remote) information hidden in complex data, and communicate their results to others via professional images and animations. The company’s products are used by more than 47,000 technical professionals around the world.

Contact:
Margaret Connelly
Marketing Manager, Tecplot, Inc.
pr@tecplot.com
(425) 653-1200

 

The post Tecplot 360 2018 R2 Helps Geoscientists Analyze Simulated Data appeared first on Tecplot.

► Georeferenced Images in Tecplot 360
  17 Oct, 2018

 

Description

In this video, we introduce the use of georeferenced images in Tecplot 360. For datasets that represent a geographic region, a georeferenced image gives additional context to help communicate the location for your data.

We start with a dataset already open, which contains results from the Salish Sea model, courtesy of Pacific Northwest National Labs. The Salish Sea is situated along southwestern British Columbia and northwestern Washington State. Without a background map, the geography is difficult to ascertain when looking at this model.

To better understand our region, we will import a georeferenced image via Insert > Image/Georeferenced Image… Note that you can also use the image insertion tool on the toolbar.

World Files

A georeferenced image consists of an image file and a world file. Tecplot 360 supports JPG, PNG, and BMP image formats. The world file defines the coordinates for where the image should be placed. Tecplot 360 is unit agnostic, so make sure that your world file is in the same coordinate system as your data. This model uses UTM 10, so our world file is also UTM 10.
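
For reference, a world file is a plain six-line text file. A made-up example for an image in UTM coordinates might look like the following: line 1 is the pixel width in map units, lines 2 and 3 are rotation terms (usually zero), line 4 is the negative pixel height, and lines 5 and 6 are the map coordinates of the center of the upper-left pixel.

30.0
0.0
0.0
-30.0
500000.0
5350000.0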

Once the image is loaded it is placed in the plot with your data. In this case we’re in a 3D Cartesian view so the image is an actual 3D object which will rotate with the data. As we animate through time notice that the tide drops below our image. To adjust the Z-value of the image, right-click on the image and select Image details…. Note that you can also launch this dialog by double-clicking on the image. Use the slider or text field to adjust the image to a lower location – in this case we’ll use -15. Now you can animate and the surface temperature is no longer covered by the image.

Georeferenced images may also be used in 2D plots. In this case you likely no longer need the axes so you may want to disable the axes and increase the extents of the viewport to use the entirety of the frame. In 2D there is no Z adjustment as the image is always drawn behind the data.

This concludes the tutorial for georeferenced images in Tecplot 360.

Thank you for watching!

Try Tecplot 360 for Free

The post Georeferenced Images in Tecplot 360 appeared first on Tecplot.

► Converting Shapefiles to PLT Using PyTecplot
  17 Oct, 2018

Description

In this video, we introduce the use of shapefiles in Tecplot 360. For datasets that represent a geographic region, a shapefile can give additional context to help communicate the location of your data. In this video we’ll demonstrate the use of a Python script to convert a shapefile to Tecplot binary data format.

Here we are looking at pressure contours from a simulation of Hurricane Katrina. Without knowing it is Hurricane Katrina, it’s difficult to tell where we are in the world. A shapefile will help make the region we are looking at more obvious.

Prerequisites are that you have a 64-bit version of Python and the PyTecplot and pyshp (Pyshape) Python modules installed. From a command prompt execute the script as such:

shapefile_to_plt.py USA_adm1.shp USA_adm1.plt

The script will then prompt you for additional information.

  1. Convert to a single zone or one zone per shape. In this case we have a shapefile that represents the entire United States where each state is a separate shape. If we convert this file to one zone per shape, we will have one zone per state. This will allow us to turn on and off individual states and color them differently. Select option #2.
  2. Choose the variable names to use. Note that this script does not make any coordinate transformations. If your shapefile is in UTM or Stateplane, it’s likely that you’re displaying your data using X/Y variables – so you should select X/Y. If your shapefile is in longitude/latitude coordinates you should select lon/lat as the exported variable names. In this case our shapefile is in lon/lat and we are displaying our data in lon/lat as well, so select option #2.
  3. Finally, when choosing a separate zone per shape, the script will also prompt you for which shapefile record column to use to name the zones. It will display the column name and the first shape file entry. This selection is critical to identify each shape while in Tecplot 360. In this case we’ll choose option #5 which will give each shape the name of the state.
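
The shapefile_to_plt.py script handles these prompts for you. For readers curious about what the core conversion involves, here is a simplified sketch of the same idea using the pyshp and PyTecplot modules; it writes one ordered zone per shape, applies no coordinate transformation, and is an illustration rather than the shipped script.

import shapefile  # the pyshp module
import tecplot as tp

shp_in = 'USA_adm1.shp'    # illustrative input shapefile
plt_out = 'USA_adm1.plt'   # illustrative output PLT file

sf = shapefile.Reader(shp_in)
dataset = tp.active_frame().create_dataset('Shapefile', ['lon', 'lat'])

# One ordered zone per shape, so each shape can be toggled and colored individually
for i, shp in enumerate(sf.shapes()):
    lon = [p[0] for p in shp.points]
    lat = [p[1] for p in shp.points]
    zone = dataset.add_ordered_zone('Shape {}'.format(i + 1), shape=(len(lon),))
    zone.values('lon')[:] = lon
    zone.values('lat')[:] = lat

tp.data.save_tecplot_plt(plt_out, dataset=dataset)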

Now that we’ve converted the shapefile to Tecplot PLT format we can append it to the dataset.

  • File > Load Data
  • Browse to the PLT file
  • Append data to active frame
  • Match lon/lat to XLONG and XLAT respectively

Now that the shapefile data is loaded we will activate the Mesh layer so we can see the shapefile information. This presents a much clearer picture of the region and the path of the hurricane. If we fit the data to the screen, we can see the entire United States. Because we imported each state as a separate zone, we can deactivate any zones that are outside of our region of interest.

This concludes the tutorial on using shapefiles in Tecplot 360.

Thank you for watching!

Try Tecplot 360 for Free

The post Converting Shapefiles to PLT Using PyTecplot appeared first on Tecplot.

► Calculating Average Over Time
  17 Oct, 2018

Description

Computing an average over time is an important method in CFD to understand overall trends in turbulent flow fields. In ocean science, a time average can be used to remove the effects of seasonality to understand long term trends. Thankfully, this can be accomplished fairly easily in Tecplot 360.

In this video we’ll start with a vertical transect extracted from an FVCOM solution. This transect shows the salinity along a prescribed path through time. The data has a common grid throughout time, so we can use simple equations to compute the average.

We will duplicate the first zone in our transect and rename the zone to “Transect Time Average.”

Then we can use the Specify Equations dialog to compute the average as such:

  1. Select the Transect Time Average zone
  2. Enter an equation summing the salinity in each zone, and dividing by the number of zones:
    {salinity} = ({salinity}[1] + {salinity}[2] + … + {salinity}[n]) / n

Clearly, this becomes cumbersome for a large number of timesteps and multiple variables. It is better done with a Python script, which not only handles a large number of zones easily but can also compute the time average for many variables.

To run this script we must first enable PyTecplot Connections via the Scripting menu. Then, from a command prompt we execute the script. This will prompt us for which Strand we want to average. The strand number can be found in the Dataset Information dialog. A strand is simply an integer which identifies a collection of zones through time.

Once we enter the strand number the script will handle the zone duplication and execution of the formulas to average the results.
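
As a rough idea of what such a script can look like, the sketch below uses PyTecplot in connected mode to duplicate the first zone of a strand and average one variable over all zones in that strand. The strand number and variable name are assumptions for illustration; the script used in the video is more general and handles every variable.

import tecplot as tp

tp.session.connect()  # requires PyTecplot Connections to be enabled in Tecplot 360
dataset = tp.active_frame().dataset

strand = 1          # assumed strand to average (see the Dataset Information dialog)
var = 'salinity'    # assumed variable name

# Zones belonging to the chosen strand, in time order
zones = [z for z in dataset.zones() if z.strand == strand]

# Duplicate the first zone to hold the time average
avg = dataset.copy_zones(zones[0])[0]
avg.name = 'Transect Time Average'

# Sum the variable over all timesteps, then divide by the number of zones
terms = ' + '.join('{{{v}}}[{i}]'.format(v=var, i=z.index + 1) for z in zones)
tp.data.operate.execute_equation(
    '{{{v}}} = ({t}) / {n}'.format(v=var, t=terms, n=len(zones)),
    zones=[avg])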

When the script is finished we’ll simply copy the original frame and activate the Time Average zone to view our results.

This concludes the tutorial for computing an average over time in Tecplot 360 using PyTecplot.

Thank you for watching!

Try Tecplot 360 for Free

The post Calculating Average Over Time appeared first on Tecplot.

► Computing a Vertical Transect in Tecplot 360
  17 Oct, 2018

Description

In this video we’ll show you how to compute a vertical transect from FVCOM data using PyTecplot, our Python API (in Tecplot 360).

A vertical transect is a 2D plane that follows a prescribed path through the data and is projected vertically from the surface to the bottom.

We start with an FVCOM dataset already loaded. This dataset is of the Salish Sea, extending from British Columbia and down into Washington State.

Vertical Transect

Vertical Transect of the Salish Sea using Tecplot 360.
Related Webinar

Define the Vertical Transect Path

First we need to define the path of the transect. To do this select the poly-line geometry tool. Using the geometry tool, click on locations to define the path of the transect.

Here we’ll define a path from the Strait of Juan de Fuca through the San Juan and Gulf Islands, and up to the mouth of the Fraser River.

Extract the Points

Then right-click on the geometry to extract the points. This will create a new zone called “Extracted Points” which defines the XY coordinates to use for the transect.

Check the Grid Density

It is important to ensure that there are enough XY points to effectively capture the density of the grid. By turning on Scatter for the Extracted Points and turning on Mesh for the solution data, we can see that the density of XY points is sufficient for our grid. Too many points are better than too few, as you want to ensure you don’t skip over too many cells; this will result in a good, continuous transect.

Invoke PyTecplot

Now that we have the XY points defined by the Extracted Points zone, we can invoke the PyTecplot script. To do this we must first allow PyTecplot Connections via Scripting -> PyTecplot Connections. Now you can run the VerticalTransect.py script from a command prompt. Using “python -O” will run the script in optimized mode, which will improve performance.

This script will connect to Tecplot 360, find the zone called Extracted Points and use the XY points to define a vertical surface zone through the volume, which defines the shape of the transect. The solution variables are then interpolated from the solution data onto the transect zone at each time step. This dataset has 24 timesteps with approximately 140,000 elements per timestep – this script should take about 15 seconds to run.
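
For orientation, the opening of such a script looks roughly like the sketch below: it connects to the running Tecplot 360 session, locates the Extracted Points zone, and reads the XY coordinates that define the transect path. The variable names are assumptions, and the interpolation onto the transect surface (the bulk of VerticalTransect.py) is omitted here.

import tecplot as tp

tp.session.connect()  # PyTecplot Connections must be enabled in Tecplot 360
dataset = tp.active_frame().dataset

# The zone created by right-clicking the polyline geometry and extracting points
path_zone = dataset.zone('Extracted Points')

# XY coordinates defining the transect path (variable names assumed to be X and Y)
path_x = list(path_zone.values('X')[:])
path_y = list(path_zone.values('Y')[:])

print('Transect path has {} points'.format(len(path_x)))
# ...build the vertical surface zone and interpolate the solution variables onto it...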

Animate Through Time

Now that we have the result, we can animate through time with the two frames linked together.

If you have a specific transect path, you can easily modify the script to supply a specific set of XY points.

This concludes the tutorial for computing a vertical transect in Tecplot 360.

Thank you for watching!

Try Tecplot 360 for Free

The post Computing a Vertical Transect in Tecplot 360 appeared first on Tecplot.

► New TecIO API for Tecplot 360 .szplt Output
  26 Sep, 2018

TecIO is the library supplied with Tecplot 360 that enables third-party applications to output solution data directly to Tecplot binary-formatted files.

It exists as two separate libraries. The original serial version of TecIO outputs either .plt or the newer .szplt format. The newer parallel version, TecIO-MPI, outputs only .szplt.

Both versions are also available in source form, which enables their use on some otherwise unsupported platforms. And in the case of TecIO-MPI, it enables support for a variety of MPI implementations.

In its serial form, TecIO has been around for a long time, and it’s starting to show its age. A few examples:

  • As zone types and features have been added, the routine to create zones (now called TECZNE142) has accumulated parameters to accommodate the new features. It now has 21 parameters—a bit ungainly.
  • Only 10 files may be output simultaneously, and you have to do some bookkeeping to keep track of which file you’re currently outputting to.
  • Solution data must be output in sequential zone-variable order.
  • Integer data types are not supported.
  • 32-bit integer parameters limit the size of the zones you can output, a limitation that will become important for some customers in the next few years.

New TecIO API

To address these and other issues, Tecplot 360 2017 R2 introduced a new API (application programming interface) to TecIO, both serial and parallel versions. This new API offers many advantages over the legacy API:

  • Separate routines for creating each zone type take only the parameters required for that zone type.
  • The number of files you can output simultaneously is limited only by the underlying operating system.
  • You can output zone variables out of order—you are not required to output all of your X variables before you can start outputting your Y variables, etc.
  • You can output zones that have 8-, 16-, and 32-bit integer data.
  • 64-bit integer parameters allow outputting zones whose index limits would overflow 32-bit integers.

You can see the new API at work and compare it with the old API in TecIO’s C++ example flushpartitioned.cpp. You’ll find lines such as the following, which show the old and new side-by-side:

#if defined OLD_API
returnValue = TECDAT142(&pNCells[ptn - 1], p[ptn - 1], &dIsDouble);
#else
returnValue = tecZoneVarWriteFloatValues(fileHandle, zone, 4,
ptn, pNCells[ptn - 1], p[ptn - 1]);
#endif

Fortran 90 programmers might prefer to look at the example rewriteszl.F90, which shows both reading and writing a .szplt file using TecIO’s Fortran 90 interfaces.

Please note that this new API in its current form outputs only the newer .szplt format, which is not readable by Tecplot Focus and does not (currently) support polyhedral zones. The legacy API will continue to be available until those limitations are addressed. But if .szplt works for you, please consider using the new API for your Tecplot binary file output.

As always, please contact our excellent technical support team if you have questions.

Happy Tecplotting!

Learn More and Download the TecIO Library


Dave Taflin
David E. Taflin, Ph.D.
Senior Software Engineer
Tecplot, Inc.

The post New TecIO API for Tecplot 360 .szplt Output appeared first on Tecplot.

Schnitger Corporation, CAE Market top

► AU 2018 goes convergent
  15 Nov, 2018

For the first time in years, I had to leave AU before the bitter end. It’s for a work commitment, so 100% necessary, but it means that this writeup doesn’t cover what will happen in the AEC and manufacturing keynotes, which took place after I left. I’ll watch the replays when I’m back in the office, and I urge you to watch them too, if you’re interested. Even though I didn’t stay until the bitter end, I did spend nearly 2 days with Autodesk, Autodeskers and customers. Lots of customers.

Pending any awesome acquisitions or other stuff that might have been announced after I left, here are my top takeaways:
  1. Autodeskers uniformly told me that the last year has been tough, but they feel things are going in the right direction. Layoffs are never easy on anyone and the after-effects linger long after the reduction actually happens. The hangover from earlier this year doesn’t seem to be fully cured, yet people are growing into their new roles, it’s becoming clear whom to call for what kind of question — and the job now is to move forward.
  2. Customers are, as usual, happy, not happy and everywhere in between. Some love subscriptions, others don’t. Some love what Fusion/Revit/Inventor/BIM 360/Forge lets them do, others see only the gaps. AU is 11,000 people of varying skill and needs; the “genius bar”-like setup in the expo area was mobbed as users and developers nerded out together.
  3. Autodesk CEO Andrew Anagnost gave a keynote last year that painted a gloomy picture for workers in industries where automation is set to replace routine tasks —and therefore, eliminate the need for the humans who do those jobs. This year, he continued that theme but painted a bit more hopeful a picture. Rote jobs may disappear but new jobs are being created, for data scientists, cloud architects, BIM managers and more. Mr. Anagnost urged attendees to focus on their skill sets and said that Autodesk was committed to helping its customers skill up where needed, including changing accreditations from being product-specific to being role-specific (so from Revit expert to BIM manager), and to putting at least some of the classes and tests on the e-learning platform, Coursera.
  4. Mr. Anagnost also held a Q&A session with media and analysts. He told us that he sees Autodesk products being completely reinvented in 10 years, as the convergence of AEC products and processes merges with those of discrete manufacturing. AEC is about jobs of 1, where many things are custom. Manufacturing is about making many of the same object, with some variants. He believes (and is likely correct) that both will settle on semi-customized or configured: you’ll have a car built to your specifications from a limited menu and buy a house that’s not completely unique. Both will see manufacturing efficiencies, increased purchasing power and all of those good financial things, but will require far more data integration than is currently the case. Autodesk is working to connect these silos.
  5. And that affects Autodesk’s interest in IoT. Mr. Anagnost doesn’t see Autodesk building an IoT platform — he sees data and the decisions made from it being Autodesk’s strength. “Sensor data is just data”, nothing more or less. Where it comes from matters only in that it needs to be secure, legitimate and so on, but after that it’s just fodder for calculations.
  6. The Q&A was overwhelmingly AEC-focused. I tossed Mr. Anagnost a softball question to see if I could get comments on Fusion, Inventor, AutoCAD or anything else manufacturing-related. Nope. The message was that generative design applies broadly to all sorts of problems and that AEC and manufacturing will converge. The expo area was much more balanced, with a lot of cool Autodesk R&D projects in manufacturing from additive to generative and more.

I think that was emblematic of what I saw and experienced at AU this year. For years, Autodesk focused on makers, mostly manufacturing customers and their needs. Under Mr. Anagnost, the company is shifting back a bit towards its AEC strengths, with the full intention of roping manufacturing into that overall message. But don’t misconstrue: Fusion, Inventor, NEI Nastran, Delcam et al and all of the Forge-enabled manufacturing apps are still going strong; they’re just not being granted the same visibility.

Note: Autodesk graciously covered some of the expenses associated with my participation in the event but did not in any way influence the content of this post. The cover image is of Andrew Anagnost delivering his keynote.

The post AU 2018 goes convergent appeared first on Schnitger Corporation.

► Bentley makes AIworx official, announces second deal
  13 Nov, 2018

Bentley’s acquisitions just keep rolling. At last month’s Year in Infrastructure conference, the company announced one in pedestrian simulation, LEGION, and Agency9 to complement its existing digital cities offering. Another one was hinted at: AIworx, which makes machine learning and internet of things (IoT) technologies. At YII, Bentley CEO Greg Bentley said that AIworx’ apps for “smart, connected systems machines, including instrumentation, sensors and communication systems” will improve how infrastructure assets are designed and operated. To recap, Mr. Bentley used storm drains as an example: sensors in the wastewater system can predict flooding, which could be a sign of bad design — so feeding back that information can improve the next design iteration. This idea has great potential not only for monitoring and reacting, but also for shaping next-gen designs.

Today’s announcements make it official. AIworx co-founder Andre Villemaire, said, “The biggest opportunities [AIworx has] worked on have to do with improving infrastructure asset performance on an industrial scale, by way of the data from connected machines, instrumentation, sensors, and communications systems—and we’re excited to dedicate ourselves to that advancement. Now, by incorporating our tools into Bentley’s services for digital twins, we enable infrastructure operators to multiply the potential benefits of machine learning and IoT.”

Francois Valois, Bentley’s VP of portfolio development adds, “Our new colleagues from AIworx have already been delivering on this potential, and now, leveraging the analytics visibility, which Bentley’s digital twin cloud services uniquely provide, these advancements from going digital will accelerate exponentially!”

But that’s not all. Bentley also announced that it has acquired ACE enterprise Slovakia, maker of technology that connects enterprise resource planning (ERP), enterprise asset management (EAM), and geographical information systems (GIS). Why do this? Because in order to effectively develop a maintenance strategy, the operator needs to know exactly which asset is affected (via GIS location), what needs to be done (EAM) and how that ties into the overall corporate environment of spare parts and capable, skilled workers. Imagine a rail line; not knowing exactly which of a possible hundred switches is starting to fail could be an expensive exercise in trial-and-error testing. Knowing which switch lets a crew pull the correct replacement parts and spawn a task to order additional spares.

According to Bentley’s materials, ACE enterprise is already a Bentley technology partner and the ACE Enterprise Platform underpins the Bentley AssetWise connector to SAP ERP and SAP HANA.

So, what’s the underlying strategy? Connecting systems is where asset performance often falls down. Making the correct decision about whether to struggle on before replacing a pump requires as-is information about the pump and its maintenance history, real-time operating data, an understanding of the economics of the alternatives … That’s not stored in any one system (and it likely shouldn’t be). And that, in turn, means that the connections between data and systems are becoming more critical. By bringing this all in-house, Bentley gets closer and closer to helping customers realize a full vision for asset optimization at a much broader, enterprise level.

No terms were disclosed for either deal — but I imagine the acquired companies were relatively small.

The post Bentley makes AIworx official, announces second deal appeared first on Schnitger Corporation.

► Altair + Datawatch = interesting opportunities
    6 Nov, 2018

Yesterday, Altair announced that it wants to acquire/merge with Datawatch, prompting a lot of “who?” and “why?”. Since I hadn’t heard of Datawatch, I did a bit of research — here’s what I learned and why I think this combination is an interesting idea, with lots of potential upside.

First, who’s Datawatch? Datawatch solutions help clients gather, sanitize, process, analyze and visualize data. Datawatch’s 14,000 clients come from financial services, healthcare, retail and other industries — but not, typically, manufacturing. Its main products are Monarch and Swarm for data prep, Angoss for predictive analytics, and Panopticon for real-time visualization and analysis. Some products are cloud-based, others on-prem, but the main business model seems to be perpetual sales.

During a conference call with investors, Altair and Datawatch said that Datawatch had revenue of $36 million in 2017, and that sales for the 12 months through June 2018 were around $40 million — in other words, it’s growing right now. But it’s been a lumpy path, with revenue growth over the last five years ranging from -14% to +19%, for a five-year compound annual growth rate (CAGR) of 9%.

Altair CEO Jim Scapa characterized Datawatch as a “data preparation, data science and real-time digital analytics company with a long and strong market presence, and well-established, best-in-class products used by customers, including 93 of the Fortune 100.” He went on to say that Datawatch’s “technology is highly relevant and applicable to almost any company in any vertical market today. Bringing Datawatch into Altair should result in a powerful offering consistent with our vision to transform product design and decision-making by applying simulation, data science and optimization throughout product life cycles.”

And here’s the vision bit, from Mr. Scapa: “We see a convergence of simulation with the application of machine learning technology to live and historical sensor data as essential to creating better products, marketing them efficiently and optimizing their in-service performance. [With Datawatch,] Altair will be able to provide a broad-solution offering, under a compelling licensing model, to meet all of their digitalization needs.” I’ve bolded a couple of key bits for further examination.

First, the convergence of machine learning and simulation. In a product context, machine learning is about looking for patterns: Knowing what happened in the last 1,000 duty cycles, when will that rotor fail? Reconstructing the last n accidents, why did that frame component buckle? It’s backwards-looking, crunching through mountains of data to figure out what’s relevant and find the lessons buried deep. Simulation takes that baseline and asks, “what if”: what if we changed the design of the rotor? and applies physics to try to arrive at an answer. They’re completely different approaches to the same end-result: better performance, safer cars and so on.

And that’s why the second element is so important: the commercial model. Just as it took decades for manufacturers to build simulation programs alongside design and engineering, it will take time for them to figure out what data they have, what they need and how to use it to meet their business goals. Adding a machine learning capability to Altair’s HyperWorks scheme will make it accessible, low-risk and sandbox-y, so that people can experiment at will.

Mr. Scapa laid out three potential use cases in describing Datawatch. The first is data preparation: “Every company has data all over the place – business, marketing, engineering – and often in different places and in many different forms. Datawatch’s foundational technology lets you bring that data in very, very easily. It has a huge number of tools – it is arguably the best data prep technology on the market.” So there is applicability in our world for customer requirements creation, as-used data to inform simulations and similar uses.

Machine learning was Mr. Scapa’s second case. “Angoss is an environment to set-up doing machine learning and we think that’s going to be relevant for anybody who is trying to apply machine learning algorithms. Whether you’re trying to do predictive analytics for predicting failure or we see actually even applying it to some of the core things that we do, some of the simulation things that we’re doing for crash or optimizing for crash and those sorts of things.”

Finally, he cited “real time data streaming and visualization, which [is] relevant to the IoT data that you’re bringing in. [Some Datawatch] customers have streaming data from marketing and other sources as well. Financial is where they’ve played a lot, but there’s really not much like it in the market and we think it’s just going to be huge with all the data coming from IoT.”

Datawatch CEO Michael Morrison gave a little insight into the competitive environment, telling investors that Alteryx is the biggest competitor in data preparation and that he believes Panopticon has no real competitors. His most important point on the competition, however, was this: “We typically compete against building yourself. And so, when you get into the IoT space and real-time streaming data, and time series data, we’re very confident about how we stand.”

The companies say that there is very little overlap in their customer bases. That gives Altair the opportunity to perhaps package Datawatch’s products with its own HPC and other technologies — and, of course, where relevant, to move what appears to be a predominantly perpetual model to a more repeatable subscription revenue stream. That’s going to be a challenge, for sure, since the people Altair sells HyperWorks to are likely not the current, typical Datawatch customer. How that will happen is, for now, TBD.

Here’s the thing: investors don’t like the deal, sending the share price down 20% yesterday (when the markets were flat). And I get that: it’s big in dollar terms, brings unpredictable revenue and profitability, and will likely be a distraction to management. And it’s not SIMSOLID, another physics solver or something else that’s more typically Altair. But Altair knows how to integrate acquisitions; it has done dozens over the years with mostly the same management — I’m not worried about that. The concerns around revenue and profitability are real and valid — but this isn’t so big an add that Altair is betting the farm. Altair generates plenty of cash to cover its debt service and is conservative in how it spends money; if anything, I’d worry about underinvesting in the sales resources to take Datawatch products into long-time HyperWorks CAE customers.

The main concern, then, boils down to “it’s not CAE”. But is that so bad? Altair isn’t all CAE, even today.

Altair has always been in it for the long-term, and that’s what this acquisition is all about. Manufacturers, Altair’s traditional base, hear all the buzz about IoT and data and analytics, and they’re starting to wonder what this means for them. They want a low-risk way to try to use the data in historians and from sensors. Those projects, in sandboxes, may not all succeed but users will eventually hit upon the magic combination of data in and analyses out that yield a business benefit — and then Altair starts making real money from simulation customers applying Datawatch.

But Altair is NOT abandoning traditional CAE. At all. In fact, Mr. Scapa told investors, “I know, from the outside this might look like it’s far afield but it’s really becoming more and more relevant because of this convergence between high performance computing, simulation and data science –ultimately, they will be one. We’re just anticipating that and making decisions in that context. We’re continuing to look at lots of more traditional CAE and simulation. We just acquired SIMSOLID and we’re very active in this space, too.”

Still active in CAE. Check. Adding something new and interesting. Check. Is this a slam-dunk with no business challenges? No. But, overall, I like it.

The post Altair + Datawatch = interesting opportunities appeared first on Schnitger Corporation.

► Quickie: Altair to acquire Datawatch for analytics
    5 Nov, 2018

Dashing out the door to an all-day meeting and this pops into the inbox: Altair is merging with (really, acquiring) Datawatch, a company that makes data prep, data prediction, and high-volume data visualization technologies that Altair feels will enable it to address the anticipated need for big data analytics.

Altair’s James Scapa said in the press release, “Bringing Datawatch into Altair should result in a powerful offering consistent with our vision to transform product design and decision making by applying simulation, data science and optimization throughout product lifecycles. We see a convergence of simulation with the application of machine learning technology to live and historical sensor data as essential to creating better products, marketing them efficiently, and optimizing their in-service performance. Datawatch is a great team of people with best-in-class products, and we look forward to their joining us.”

Datawatch is a public company, hence the merger word. Altair will pay $13.10 per share in cash, for an equity value of $176 million. That’s a 35% premium to the closing price of Datawatch’s common stock on November 2, 2018. There’s all the usual stuff about approvals and so on — and there will be a conference call at 8:30AM today to discuss the whole thing. Bottom line: the two companies’ boards unanimously approved the deal and now Datawatch’s shareholders have to be convinced to sell.

More info here.

The post Quickie: Altair to acquire Datawatch for analytics appeared first on Schnitger Corporation.

► Eek – schnitgercorp.com not secure???
    3 Nov, 2018

You may have noticed that your browser throws up a little warning that Schnitger Corp’s website is not secure. That’s currently true because we’ve run into an IT-gremlin issue related to SSL certificates. We’re working on a fix and hope to turn that red warning into a happy green https soon.

But it actually doesn’t matter, since we never ask for any information about you that could be compromised in an insecure website. You can read all about our privacy policy here. We understand the concern, though, and will fix ASAP.

Thank you for visiting schnitgercorp.com.

 

The post Eek – schnitgercorp.com not secure??? appeared first on Schnitger Corporation.

► Nemetschek’s Build offerings contribute to strong Q3
    1 Nov, 2018

Nemetschek Group, the holding company for brands like Allplan, Graphisoft, VectorWorks and Bluebeam, announced results this week that show continued progress in digitizing the AEC world.

The details:

  • Group revenue for Q3 was €115 million, up 20% as reported and up 20% in constant currencies (cc). Organic revenue was up 17%
  • Software revenue was €52 million, up 14%
  • Recurring revenue (which includes maintenance and subscriptions) was €58 million, up 26%
  • Service revenue was €4 million, up 12%
  • Subscription revenue is grouped into the maintenance/recurring revenue line. It was €6 million, up 64% from a year ago as Nemetschek has been offering this as an alternative to perpetuals — but not forcing anyone to one mechanism or another
  • By segment, Design remains the largest as it reported revenue of €68 million, up 11%
  • Build reported the strongest revenue growth, with its Q3 total of €37 million up 35% from last year. Build includes Solibri and Bluebeam — see below
  • Manage reported revenue of €3.6 million, boosted by MCS Solutions, which Nemetschek acquired during the quarter. MCS makes property, facility and workplace management solutions and contributed €1.4 million in Q3. As a result, total reported growth in the Manage segment was 75%, while organic growth was 8%
  • Finally, Media & Entertainment reported revenue of €6 million, up 19%

Patrik Heider, CFOO of the Group, said of the results, “Our sustained fast pace of growth shows that our strategic priorities are the right ones. The acquisition of MCS Solutions represents a strategically important investment in the Manage segment. We have also maintained the growth dynamic in licenses and recurring revenues from subscriptions and service contracts. And even with our investments in growth, our profitability is still at a very high level. All of this provides an extremely solid basis for the final quarter of the year and beyond.”

Nemetschek also reiterated its 2018 guidance, with revenue of €447 million to €457 million. The company will host a capital markets day on November 13, and we should get further insight into its plans for 2019.

What’s interesting about Nemetschek is its drive to move outwards from BIM into construction and building operations. In general, design is done by an architect or engineering firm; construction by a build specialist — and the asset is then managed by someone completely different. Three sets of stakeholders and potential client sets. Nemetschek has been acquiring brands like Bluebeam and Solibri that bridge these islands. Bluebeam is a markup and collaboration solution that connects design with construction and with owners. Solibri is called a BIM model checker, but is really a set of validation, compliance control, design process coordination, review, analysis and code checking tools. Both help make sure that what is being built matches the plan and that all stakeholders understand both intent and execution. Nemetschek stands to grow simply by marketing these tools outside their native geographies (North America for Bluebeam and Northern Europe for Solibri).

But it’s more than that. Every study that’s published says that we cannot build homes, hospitals and schools fast enough to meet the demands created by population growth and urbanization. Tying together design and construction and using digital means as much as possible is the only way we have a hope of coping. There’s serious underlying demand for its tools, in addition to whatever Nemetschek fulfills by taking current products to new markets.

We’ll check in again after the CMD next month.

The post Nemetschek’s Build offerings contribute to strong Q3 appeared first on Schnitger Corporation.

