## CFD News and Announcements - Message Display

### [Optimal Solutions] Ideas for Addressing Multi Objective Problems and Achieving a Robust Design

Posted By: John Jenkins Date: Fri, 9 Oct 2009, 11:28 p.m.

Ideas for Addressing Multi Objective Problems and Achieving a Robust Design © By Rob Lewis, TotalSim UK

[Dr. Sculptor™ articles contain knowledge-based tips & tricks and tools to help you find better designs faster]

Normally when you optimize you have a few shape parameters and a single performance indicator, and you proceed to minimize or maximize that indicator by varying the parameters, subject to any parameter constraints. But what if you have more than one performance indicator, maybe two or three? In some problems the multiple performance indicators can't be combined into a single goal. Perhaps there is no expression or model that combines them. Perhaps you can get a more effective design by considering the elements that constitute overall performance separately rather than reducing them to a single measure.

So how do you proceed? One approach is to use a single design of experiments, just as you might for a single objective. For each experiment you evaluate the two or more performance measures by running the CFD, though a performance measure could equally be a structural calculation, a volume, drivability or even an aesthetic score. The result is a table of experiments, each with its shape parameters and performances. Next you create meta models, or surrogates, for the performance responses; common methods are radial basis functions, kriging, or polynomials. Having built a quick lookup model (the meta model) for each performance, you can evaluate a design almost instantly. Your baseline design has a value for each performance and so, assuming two performances for now, occupies a point on a simple plot with performance 1 on the x-axis and performance 2 on the y-axis.
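The DOE-plus-surrogate step can be sketched in a few lines. This is a minimal illustration, not the author's actual tooling: the Gaussian radial basis function, the toy two-parameter DOE and the quadratic "performance" responses are all assumptions standing in for real CFD results.

```python
import numpy as np

def fit_rbf(X, y, eps=1.0):
    """Fit a Gaussian RBF interpolant through the DOE points (X, y)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * d) ** 2)          # kernel matrix between DOE points
    w = np.linalg.solve(Phi, y)            # interpolation weights
    def predict(x):
        r = np.linalg.norm(X - np.asarray(x), axis=-1)
        return float(np.exp(-(eps * r) ** 2) @ w)
    return predict

# Toy DOE table: two shape parameters, two performance measures per experiment.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 2))
perf1 = X[:, 0] ** 2 + X[:, 1] ** 2            # stand-in for CFD measure 1
perf2 = (X[:, 0] - 1.0) ** 2 + X[:, 1] ** 2    # stand-in for CFD measure 2

model1 = fit_rbf(X, perf1)   # quick-lookup meta model for performance 1
model2 = fit_rbf(X, perf2)   # quick-lookup meta model for performance 2
baseline = (model1((0.0, 0.0)), model2((0.0, 0.0)))
print(baseline)              # the baseline point on the perf/perf plot
```

Any design within the limits can now be scored in microseconds rather than CFD hours, which is what makes the scatter step below affordable.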

Your task now is to see how much area around the baseline in this performance/performance xy plot can be 'captured' by the parameterization of your design. The two meta models let you do this simply and quickly. A random design is defined, i.e. a random set of design parameters within the design limits. The design is evaluated for performances 1 and 2 using the two meta models, and the point is plotted on the performance/performance graph. Repeating this a few hundred times quickly forms a scatter of potential designs around the baseline, defining an area or region that has been 'captured'. The points are only potential designs because they are predictions, or projections, from the meta models. The performance/performance plot illustrates the trade-off between the two performances and shows how much gain can be made away from the baseline.
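The random-sampling step can be sketched as follows. The two analytic functions stand in for fitted meta models (an assumption for illustration), and the parameter limits are arbitrary.

```python
import random

# Stand-ins for two fitted meta models (illustrative assumptions).
def perf1(p):   # e.g. predicted drag-like measure
    return (p[0] - 0.3) ** 2 + (p[1] + 0.2) ** 2

def perf2(p):   # e.g. a competing predicted measure
    return (p[0] + 0.4) ** 2 + (p[1] - 0.5) ** 2

random.seed(1)
lo, hi = -1.0, 1.0            # design-parameter limits
cloud = []
for _ in range(500):          # a few hundred random candidate designs
    p = (random.uniform(lo, hi), random.uniform(lo, hi))
    cloud.append((perf1(p), perf2(p)))   # a point on the perf/perf plot

baseline = (perf1((0.0, 0.0)), perf2((0.0, 0.0)))
print(len(cloud), baseline)
```

Plotting `cloud` against `baseline` gives the captured region described above; the meta models make 500 evaluations essentially free.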

For the last step you select a few interesting, good candidates from your cloud and run a CFD or other analysis to confirm that the meta models' projections hold up. Your trade-off plots will normally have an interesting shape and will lead you to test perhaps two or three designs physically. These candidates will have different performance properties, each an optimal trade-off in its own right, but between them they offer a robust set of solutions.

The method outlined above extends easily to 3D, with the 2D scatter of potential designs becoming a 3D cloud. Beyond 3D the challenge becomes 'how do we visualize this 4D(+) performance space?'. For these types of problems tools like Parvis can be quite powerful: http://home.subnet.at/flo/mv/parvis/.

Stepping back, the process took a point in the performance/performance space representing the baseline, and the design parameterization inflated that point to an area. The smarter the parameterization, the more inflation we get. But bigger isn't really what we want: we are looking for a win-win (an optimal compromise), a design that performs well on both performances. Sometimes the performance/performance plot clearly shows we can have a design that performs well on all counts. Other times we are left with a trade-off between gain on one performance and loss on another. Attaining the win-win is often luck, but smart parameterization, or an additional attempt at parameterization, can get us closer to it.
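One common way to pick "interesting and good" candidates from the cloud, assuming both performances are to be minimized, is to keep only the non-dominated (Pareto) points: designs for which no other design is at least as good on both measures. The small cloud here is illustrative.

```python
def pareto_front(cloud):
    """Return the non-dominated points of a list of (perf1, perf2) pairs,
    treating both performances as minimized."""
    front = []
    for a in cloud:
        dominated = any(b[0] <= a[0] and b[1] <= a[1] and b != a
                        for b in cloud)
        if not dominated:
            front.append(a)
    return front

# Illustrative scatter of five candidate designs.
cloud = [(1.0, 5.0), (2.0, 2.0), (3.0, 3.0), (5.0, 1.0), (4.0, 4.0)]
print(pareto_front(cloud))   # the two or three designs worth a CFD re-check
```

The surviving points are exactly the trade-off candidates worth confirming with a real CFD run; the dominated ones can never be the best compromise.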

Finally, let's look at robustness and how we might use clever parameterization and performance measures to achieve it. What do we mean by robustness? In simple terms, a design that performs, and continues to perform, if the environment changes or if the design shape is altered slightly. We may have found a 5% drag saving on a vehicle, but the design isn't robust if, due to slight manufacturing tolerances and fluctuations in the operating environment, the drag ends up only 3% better than baseline. To approach this problem we have to think of the control parameters as more than just design or geometry parameters. We can, for example, introduce parameters for robustness: a wind angle, a part deflection due to loading, or the position of certain components due to tolerances. We end up with a set of input parameters that partly define the geometry of the component and partly define the variability of the environment it operates in. We then proceed as before: define a design of experiments, run the experiments, extract the performance measures for each experiment, and create a meta model that can quickly take design and environment parameters and return performance measures. To measure a truer performance of a design we would then create an 'integrated performance measure', taking the performance measures and integrating them over a range of environment variables.

So for example, suppose we are reducing the drag of a car bumper/fender. We have three shape parameters that define the bumper and one additional parameter that controls the ride height of the car (high ground clearance or low), and we measure drag, so just one performance measure. Searching for the lowest drag at one ride height is fairly simple, but our goal is a bumper shape that works at a variety of ride heights. To proceed we create a function, using the meta model, that takes in the three design variables for the bumper shape and calculates the drag at, say, ten ride heights. These drag values could simply be averaged, or perhaps weighted around the average vehicle ride height so that values at very high or very low ride heights count only lightly. We then have a function measuring the performance of a design over a range of operating or environmental conditions, and it is this function that we minimize. The approach extends to multi-objective work too, where each objective may itself be an integrated performance measure over a range of environment parameters.
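The integrated measure could be sketched like this. `drag_model`, the nominal ride height and the Gaussian weighting are illustrative assumptions standing in for a fitted meta model, not the author's actual setup.

```python
import math

# Hypothetical surrogate: predicted drag from 3 bumper shape parameters
# plus ride height (all coefficients are illustrative assumptions).
def drag_model(b1, b2, b3, ride_height):
    return (0.30 + 0.02 * (b1 - 0.5) ** 2 + 0.01 * b2 ** 2
            + 0.015 * b3 ** 2 + 0.03 * (ride_height - 0.12) ** 2)

def integrated_drag(b1, b2, b3, heights, nominal=0.12, sigma=0.03):
    """Weighted average drag over a range of ride heights, with extreme
    heights weighted only lightly (Gaussian weights around the nominal)."""
    w = [math.exp(-0.5 * ((h - nominal) / sigma) ** 2) for h in heights]
    d = [drag_model(b1, b2, b3, h) for h in heights]
    return sum(wi * di for wi, di in zip(w, d)) / sum(w)

heights = [0.08 + 0.01 * i for i in range(10)]   # ten ride heights (metres)
print(integrated_drag(0.5, 0.0, 0.0, heights))   # objective to minimize
```

An optimizer would now vary `b1, b2, b3` to minimize `integrated_drag`, yielding a bumper that works across the ride-height range rather than at one operating point.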

Other approaches to robustness propose that we don't optimize the drag at a single design point, but instead optimize the average value over a weighted cloud around that design point. These approaches tend to find high plateaus rather than very high peaks. Again, this approach can also be integrated over environment parameters, and even made multi-objective.
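A sketch of that cloud-averaged objective, with a toy stand-in for the drag surrogate and an assumed tolerance spread:

```python
import random

def drag(p):
    """Illustrative stand-in for a drag meta model (an assumption)."""
    return 0.30 + 0.05 * (p[0] ** 2 + p[1] ** 2)

def robust_drag(p, n=200, sigma=0.02, seed=0):
    """Average drag over a Gaussian cloud of perturbed designs around p,
    so the optimizer prefers high plateaus over sharp, fragile peaks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):   # perturb the design within assumed tolerances
        q = (p[0] + rng.gauss(0, sigma), p[1] + rng.gauss(0, sigma))
        total += drag(q)
    return total / n

print(robust_drag((0.0, 0.0)), drag((0.0, 0.0)))
```

A design sitting on a narrow optimum scores worse under `robust_drag` than under `drag`, which is exactly the plateau-seeking behaviour described above.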

In summary, optimization can sometimes oversimplify the objectives in the drive to boil everything down to a single performance indicator, costing the designer insight and a more rounded product performance. Optimization can also leave the design balanced precariously on two pins: a pin for manufacture and a pin for operating environment. Successful optimization needs to look for healthier, more rounded designs, and perhaps for designs that can be the starting point for the next evolution of designs rather than a quick win.

For information about our next CAE Optimization Webinar on this topic email webinarinfo@gosculptor.com with "Webinar Info" in the Subject line.

Rob Lewis has been Manager of TotalSim Ltd (www.totalsimulation.co.uk), a UK-based CFD/aero consultancy, since 2007. Prior to his current position, Rob was Manager of Advantage CFD, a race car CFD consultancy. From 1995 to 1998, Rob worked for Fluent Europe.

Dr. Mark Landon is the Chief Technology Officer of Optimal Solutions Software, LLC. He has over 20 years of experience in computer aided geometry, computer graphics, engineering analysis software, and design optimization.

If you have a question for Dr. Sculptor™ or a design problem you would like him to look at, contact him at: DrSculptor@OptimalSolutions.us