Optimization in Practice with MATLAB for Engineering Students and Professionals (2015)

PART 5


MORE ADVANCED TOPICS IN OPTIMIZATION

16


Design Optimization Under Uncertainty

16.1 Overview

The previous chapters presented deterministic optimization methods. These methods do not take into account the inherent uncertainties typically present in the design process, in the system being designed, or in the models describing the system behavior. Uncertainties emanate from myriad sources, including imperfect manufacturing processes and material properties, fluctuating loading conditions, over-simplified engineering models, and uncertain operating environments. All of these may have a direct impact on the system performance in its use or in the market place. To obtain a reliable and robust design (these terms are defined later), these uncertainties must be considered as part of the design process.

In the past, empirical safety factors were often used to guard against engineering design failure [1]. Safety factors resulted in overly conservative designs, increasing the probability that businesses may lose their competitive edge in terms of cost and performance. More recently, design optimization methods under uncertainty have gained broad recognition. These approaches explicitly consider uncertainties of various forms, and search for designs that are insensitive to variations or uncertainty to a significant extent.

This chapter introduces the concept of design optimization under uncertainty, and discusses the pertinent popular approaches available. Section 16.2 defines a generic example that is used throughout the chapter to facilitate the presentation of the material. The next section (Sec. 16.3) defines five generic components/STEPS involved in design under uncertainty. These STEPS are then presented in the following five sections: (1) uncertainty types identification (Sec. 16.4), (2) uncertainty quantification (Sec. 16.5), (3) uncertainty propagation (Sec. 16.6), (4) formulation of optimization under uncertainty (Sec. 16.7), and (5) results analysis (Sec. 16.8). Section 16.9 briefly discusses other popular methods. A chapter summary is provided in Sec. 16.10.

16.2 Chapter Example

This section defines a generic example that is used throughout the chapter to illustrate the various concepts involved in design under uncertainty. Consider the optimization of the two bar truss shown in Fig. 16.1. The objective is to identify the optimal position of node P from the left, b, and the optimal cross sectional areas for the bars, a1 and a2, such that the squared displacement at node P is minimized. The loads W1 and W2 are applied as shown in Fig. 16.1. The deterministic optimization problem statement for the truss example is given as follows.

Figure 16.1. Two-Bar Truss Example

(16.1)

subject to

(16.2)

(16.3)

(16.4)

(16.5)

(16.6)

(16.7)

The variables u1 and u2 are the horizontal and vertical deflections at node P, respectively; Si is the normal stress induced in each bar; Smax is the maximum allowable stress; θ is the angle between the horizontal and Bar 1 (defined by h1); and β is the angle between the horizontal and Bar 2 (defined by h2). The volume of the truss is required to be equal to 4,000 in3 (defined by h3). The above constraints are presented in a normalized form. The fixed parameters for this problem are the Young’s Modulus, E = 29 × 10^3 ksi; the truss height, L = 60 ft; the maximum allowable stress, Smax = 350 ksi; and the loads, W1 = 100 kips and W2 = 1,000 kips. Additionally, ai min = 0.8 in2 and ai max = 3.0 in2 for i = (1, 2); bmin = 30 ft; and bmax = 90 ft. The expressions for the stresses are given as

(16.8)

(16.9)

The deterministic optimum of this problem (using the initial guess [(a1min + a1max)/2, (a2min + a2max)/2, (bmin + bmax)/2, 0.1, 0.1]) is: J = 187.5 in2, a1 = 2.55 in2, a2 = 1.5 in2, b = 3600 in.

Discussion: The above example is used to illustrate why the consideration of uncertainty is important.

The deterministic problem presented does not take into account the various uncertainties that are present in the problem. For example, the cross sectional areas of the bars and the applied loads are subject to random variations. The material properties, such as the Young’s modulus and the maximum allowable stress, could be uncertain. The engineering model we have assumed for the analysis may also be over-simplified.

Given these uncertainties, optimizing the truss in a deterministic fashion as shown above may result in design failure due to a possible stress constraint violation (Eq. 16.2) under uncertainty. The design might not be reliable. In addition, the deterministically optimized design may not consistently perform as intended under uncertainty (the objective function is viewed as a measure of the performance of the design). Since deterministic optimization does not consider uncertainties, a small variation in the problem parameters due to any of the uncertainties mentioned above may lead to significant performance deterioration, or even catastrophic failure. The resulting design performance will not be robust. The goal of design optimization under uncertainty is to explicitly consider the influence of uncertainty, and to improve the design’s reliability and robustness in view of these uncertainties.

16.3 Generic Components/STEPS of Design Under Uncertainty

This chapter presents five generic steps involved in design optimization under uncertainty, as described in Fig. 16.2. At every step discussed, the two bar truss example will be used to quantitatively illustrate the concepts presented. The popular approaches used for each of these steps will be discussed, and pertinent references for theoretical and practical details will be provided.

Figure 16.2. Overview of Design Optimization Under Uncertainty

Given the design objectives, variables, and constraints, a typical design optimization problem under uncertainty involves the following five steps.

1. STEP 1: Identifying Types of Uncertainty: What types of uncertainties exist in the problem (e.g., uncertainties in parameters, in design variables, or in the model itself)? We will study this issue in Sec. 16.4 with examples.

2. STEP 2: Uncertainty Quantification: How do we model the uncertainty mathematically (e.g., using probability theory)? How can the uncertainty be defined in terms of a set of parameters (e.g., mean and standard deviation)? This step is discussed in more detail in Sec. 16.5 with examples.

3. STEP 3: Uncertainty Propagation: How do we propagate the uncertainty in the models (e.g., given the uncertainty parameters in the cross sectional areas in the truss problem, how do we calculate the uncertainty parameters of the stress)? Sec. 16.6 covers this topic in detail with examples.

4. STEP 4: Development of an Optimization Formulation: How do we incorporate the uncertainty into the optimization problem? We will study this issue in Sec. 16.7 with examples.

5. STEP 5: Analyzing Results: How do we interpret the results? What are the tradeoffs involved? Sec. 16.8 presents more details concerning this topic with examples.

The reader should keep in mind that each of the above steps is an evolving area of active research. What follows is an introductory presentation of the popular techniques available in the literature, illustrated with the two bar truss example.

16.4 STEP 1: Identifying Types of Uncertainty

There can be various sources of uncertainty, and differing classifications of these sources. In the context of modeling, uncertainties in systems can be classified into two types: aleatory and epistemic uncertainties. Aleatory uncertainty refers to the inherent variability that exists in physical processes. The word “aleatory,” derived from the Latin “alea” [2], refers to the rolling of dice. Essentially, it refers to uncertainty introduced by the intrinsic randomness of a system or phenomenon. It is sometimes also referred to as parametric uncertainty (i.e., uncertainty in the parameters of the design). This kind of uncertainty is typically not reducible. Aleatory uncertainties are relatively well understood.

Example: In the two bar truss example, uncertainties in the parameters, such as the cross sectional areas of the bars and the material properties, are examples of aleatory uncertainties. Truss bars are typically batch-produced. The cross sectional areas of the bars across a production batch usually vary due to manufacturing process variations.

Assume a sample of 100 bars is collected from a production batch, and the cross sectional areas of the members are measured, as given in Table 16.1. In deterministic optimization, the tolerances in the areas are not considered. In most design optimization methods under uncertainty, a statistical measure of the variation in the data (Table 16.1) is used in the optimization (e.g., mean and standard deviation). The data in Table 16.1 is also provided on the book website (www.cambridge.org/Messac).

Table 16.1. Sample Set of One Hundred Cross-Sectional Areas for a Batch-Produced Truss Bar (in2)



0.943  0.966  0.903  0.978  1.023  1.142  1.115  1.121  0.893  0.959
1.229  0.857  1.066  1.167  1.088  1.116  0.947  0.906  0.948  1.110
0.879  0.929  1.133  1.072  0.940  0.988  0.966  0.816  0.867  1.192
1.026  1.015  1.063  0.751  0.819  1.033  1.142  1.148  0.921  1.024
0.917  0.956  0.935  1.071  0.937  1.094  1.006  1.071  0.958  1.054
0.998  0.960  0.921  0.969  0.955  0.931  0.971  0.820  0.871  1.050
1.037  1.060  1.060  0.853  1.028  0.978  0.954  1.072  0.959  0.880
1.096  1.099  0.997  1.074  0.925  1.136  1.017  0.919  0.849  1.047
1.038  1.028  1.161  1.013  0.910  1.053  0.998  1.029  1.159  0.929
0.849  0.875  0.966  0.951  1.011  0.947  0.989  1.045  1.035  0.802



The mean of the observed set of values in Table 16.1 is 0.995 in2, and the standard deviation is 0.096 in2. The MATLAB commands mean and std can be used to find the mean and standard deviation of a data set, as shown below.

% Using the data given in Table 16.1, define the
% variable "set" as a vector of 100 elements.
m = mean(set);
s = std(set);

As we will see later, the mean and standard deviation of uncertain quantities are used in optimization under uncertainty.

Epistemic uncertainty [3] arises because of the following related factors: (1) lack of knowledge of the quantity, environment, and/or physical process being modeled, (2) insufficient data, (3) over-simplification of complex coupled physical phenomena, and (4) lack of knowledge of the possible failure modes of the design. The word “epistemic,” derived from the Greek “episteme,” means knowledge [2]. Thus, it refers to uncertainty introduced by the lack of knowledge of a system or phenomenon. Epistemic uncertainty is also commonly referred to as modeling uncertainty, and is usually more difficult to model than aleatory uncertainty. This kind of uncertainty can be reduced by developing a better understanding of the involved phenomena (e.g., by conducting more experiments).

Example: In the two bar truss problem, epistemic uncertainty could arise from the fact that we have included only the compressive failure mode for the bars. We ignored other possible failure modes (e.g., buckling). In other words, we have over-simplified the failure analysis for the problem. This is typically viewed as a modeling or epistemic uncertainty. In a complex design problem, such as aircraft design, the designer might not even be fully aware of the modes of failure that have been ignored, which might adversely impact the reliability and robustness. For the truss problem, the modeling uncertainty in the stresses in the bars, Si {i=1,2}, can be represented by considering a multiplicative term, Si*, and an additive term, as shown below.

(16.10)

(16.11)

The additive and multiplicative terms could be functions of the design variables with problem-specific definitions. The additive term could be the higher order terms of an expansion, such as a Taylor series. In keeping with the scope of this introductory chapter, we will henceforth set the additive term to zero and Si* = 1.

Engineering design often involves both types of uncertainties. As a result, it is sometimes challenging to classify a source of uncertainty (at the modeling stage) as exclusively aleatory or epistemic. In recent years, interesting models and methodologies have been developed to address systems with mixed aleatory-epistemic uncertainties (the simultaneous presence of both) [4, 5, 6]. The report by Eldred and Swiler [5] provides a helpful review of these methods and the corresponding benchmark results. Once the uncertainties in a problem have been identified, the next step is to quantify those uncertainties.

16.5 STEP 2: Uncertainty Quantification

Thus far, we have not defined how to model uncertainty in a mathematical form. The question of interest in this section is: how do we parameterize uncertainty? That is, how do we describe uncertainty using a set of parameters? These uncertainty parameters will become part of the optimization problem in various forms. Quantifying aleatory uncertainties will be discussed. A detailed discussion of epistemic uncertainties is beyond the scope of this introductory chapter (see Ref. [7]).

This section presents uncertainty quantification methods that are based on the availability of data for a particular problem. What is meant by “data”? If a quantity of interest is uncertain or random, data refers to a set of possible measured outcomes or samples of the quantity. This data may be available from a manufacturer’s catalog or from recorded information from past history. Given that data may or may not always be available, uncertainty quantification in two different cases will be studied, as follows.

16.5.1 Sufficient Data Available: Probability Theory

To mathematically represent a random quantity, probability theory can be used when sufficient data is available. The reader is assumed to have a basic knowledge of probability theory [8].

In probability theory, a random variable is associated with the outcome of an uncertain event. For example, the cross sectional areas of the truss bars, a1 and a2 (in Fig. 16.1), are random variables, since they are batch-produced and have associated tolerances. In this chapter, random variables are denoted by upper case letters, A1 and A2. A probability density function (PDF), denoted by fX(x), is used to compute the probability that a continuous random variable X lies between two limits x1 and x2, as given below.

P(x1 ≤ X ≤ x2) = ∫_{x1}^{x2} fX(x) dx      (16.12)

There are several standard PDF types that are used in engineering applications, such as the Uniform, Normal, Weibull, and Lognormal distributions [8]. Normal distributions are often used for design optimization problems. The two quantities, mean and standard deviation, define the bell shape of the normal distribution depicted in Fig. 16.3. The shaded area in Fig. 16.3 represents the probability that a random variable lies within one standard deviation of the mean value.
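For a normal random variable, this one-standard-deviation probability can be verified numerically. The short sketch below uses normcdf (assuming the Statistics Toolbox is available); the mean and standard deviation values are arbitrary, since the result is the same for any normal distribution.

% Probability that a normal random variable lies within
% one standard deviation of its mean (Eq. 16.12 with a normal PDF)
mu = 0; sigma = 1;   % any values give the same answer
p = normcdf(mu + sigma, mu, sigma) - normcdf(mu - sigma, mu, sigma)
% p = 0.6827, i.e., approximately 68.3%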

When sufficient data is available, techniques such as histograms or probability plots may be used to explore the underlying distribution [9]. While more than one standard distribution may “fit” a data set, the underlying physical quantity may suggest a particular distribution. For example, normal or lognormal distributions are used to represent physical dimensions and material properties. The Weibull distribution is commonly used in reliability problems to model quantities such as time to failure. Statistical tests, such as the Chi-square test or the Kolmogorov-Smirnov test, can be used to determine the goodness of a PDF fit [9].

Figure 16.3. Normal Distribution

Once a PDF is fit to a data set, the uncertainty parameters that are of interest are usually the first two moments of the PDF, namely, mean and standard deviation.

Example: In the two bar truss problem, the cross sectional areas could be modeled as normal random variables. Material properties are usually given as lognormal variables, since most of them are positive quantities. For the given data set of cross sectional areas (Table 16.1) for the two bar truss problem, find the underlying distribution using the two MATLAB commands: histfit and normplot. The histfit command plots the histogram of the given set, while the normplot command plots the normal probability plot for the given data.

% Using the data given in Table 16.1, define the
% variable "set" as a vector of 100 elements.
histfit(set);
figure(2) % Open a new figure
normplot(set);

Figure 16.4 presents the pertinent plots for the data set given in Table 16.1. Note that a data set is considered normally distributed if its normal probability plot is approximately linear, as is the case in Fig. 16.4. Therefore, the cross sectional area can be assumed to be normally distributed. The shape of the histogram further validates the normality assumption.

Once the assumed distribution of the data is tested and deemed satisfactory, the uncertainty parameters of interest are the mean and the standard deviation of the PDF. For the given data for the cross sectional areas, the mean and the standard deviation are 0.9947 and 0.0961 in2, respectively.

Figure 16.4. Finding the Underlying Distribution of the Given Cross-Sectional Area Data Set

Thus far, we discussed the case where the data set for the cross sectional areas was provided to us. How can we quantify uncertainty when sufficient data is not available? That is the next topic of our discussion.

16.5.2 Insufficient Data: Non-Probabilistic Methods

As previously discussed, probability theory can be used for uncertainty quantification when sufficient data is available. However, the data sufficiency requirement is not always satisfied (e.g., during early conceptual design, when sufficient data is not available). Gunawan and Papalambros [10] propose the notion that insufficient data has become a major bottleneck in engineering analysis involving uncertainty. For these cases, the use of evidence theory [11, 12, 13], possibility theory [14, 15, 16, 17, 18], Bayes theory [19, 20, 21], and imprecise probabilities is an emerging trend [22, 17, 23]. Evidence theory uses fuzzy measures called plausibility and belief to measure the likelihood of events. These fuzzy measures are, respectively, the upper and lower bounds on the probability of an event. We note that there is some controversy associated with these non-probabilistic approaches.

In recent years, other methods have also evolved to address insufficient data scenarios. A combination of evidence theory and the Bayesian approach was suggested by Zhou and Mourelatos [24] to deal with insufficient data. Wang et al. [25] presented a new paradigm in system reliability prediction that allows the use of evolving, insufficient, and subjective data sets. To deal with these data sets, a combination of probability encoding methods and a Bayesian updating mechanism was used.

How to quantify the basic uncertainties (i.e., uncertainties in design variables and design parameters) has been discussed. The next step is to compute the uncertainty parameters of functions of random variables (i.e., the constraints and objectives in the optimization problem).

16.6 STEP 3: Uncertainty Propagation

The goal of uncertainty propagation is to compute the uncertainty characteristics of a function of random variables (known as a random function), given the uncertainty characteristics of the variables (known as input variables)present in those functions (see Ref. [26]). The uncertainty characteristics of interest could be the moments of the function, or the probability of failure of the function.

The random function of interest may be a given linear or nonlinear function of the constituent random variables, or may be a black-box function with no explicit/provided functional form. In some cases, where the random function is given, it may be possible to analytically compute the moments of the function. When this is not possible, other methods to propagate the input uncertainties must be employed. This section introduces some popular uncertainty propagation methods in the literature. From an optimization perspective, uncertainty propagation is an important step that is required for objective and constraint function formulation.

This section introduces four popular approaches for uncertainty propagation: (i) sampling methods, (ii) analytical methods, such as the First Order Reliability Method (FORM) and the Second Order Reliability Method (SORM), (iii) polynomial approximation using Taylor series, and (iv) advanced methods, such as stochastic expansion. Illustrative examples are provided for discussion.

Example: Let’s revisit the two bar truss problem. Once the uncertainties in the variables have been quantified (e.g., cross sectional areas), we then need to calculate the uncertainty parameters of the quantities that are functions of the areas (e.g., the compressive stresses, S1 and S2). These estimated parameters are needed for the optimization process.

16.6.1 Sampling Methods

Sampling methods generate a set of sample points for the input variables as per their uncertainty distributions. At each generated sample, the values of the random functions are computed, and a set of sample points of the function values is subsequently generated. This generated function sample set can then be used to compute statistics of interest for the random function. An illustrative example is provided shortly, after the important practical issues are discussed. More details regarding sampling methods in optimization can be found in [27].

The first issue of importance in sampling methods is how to distribute the sample set for the input random variables. Should they be uniformly distributed (uniform sampling), randomly distributed (Monte Carlo sampling), or should we concentrate the samples in a desired region of importance (importance sampling)? Other popular sampling techniques include stratified sampling and Latin Hypercube sampling. Monte Carlo and Latin Hypercube sampling are very commonly used in the optimization community. The selection of a sampling scheme depends on the level of computational resources available, the acceptable error in the estimated parameter, and the nature of the data. The focus here will be on Monte Carlo sampling techniques.

The second issue of importance in sampling methods is how many input samples are needed (e.g., are 10,000 samples sufficient, or are 10^6 needed?). This number depends on the accuracy required in the quantity being estimated. How are these sampling considerations important in the present context of optimization under uncertainty?

For this case, the goal of the sampling method is to estimate failure probabilities and/or moments of the random function. The failure probability can be estimated from a Monte Carlo simulation as follows. Generate N input random samples, and compute the corresponding constraint values. Then, identify the Nf instances out of N that violate constraint feasibility. The probability of failure can then be estimated as Pf ≈ Nf/N.

The failure probability in engineering problems of interest may be as low as 10^-6. To observe at least one failure in a Monte Carlo simulation for such a case, the sample set should contain at least 10^6 samples. The number of samples, N, should be chosen to be at least one order of magnitude higher than 10^6. Several problems at the end of the chapter are provided to illustrate the issue of the number of input samples.

The advantages of sampling methods are that (i) they are relatively accurate, and (ii) the pertinent (sampling) errors are usually quantifiable. However, sampling methods, especially when used in optimization, can be computationally expensive. Many mathematical computational software packages have built-in functions to generate random numbers using standard distributions. As shown in the following example, MATLAB has built-in functions to generate normal random variables.

Example: Assume for the truss problem that the cross sectional area, A1, of the truss is normally distributed with a mean value of 1.6 in2 and a standard deviation of 0.05 in2. Use the MATLAB normrnd function to generate 10,000 instances of the variable A1.

% To generate a vector of 10,000 x 1
% normal random variables
A1 = normrnd(1.6, 0.05, 10000, 1);

At each generated instance, compute the value of S1 (using Eq. 16.8). Assume for this particular case that b and L are deterministic. In addition, assume that θ = 30° and β = 60°, which yields a closed-form expression for S1 in terms of A1 alone. Using this expression, obtain a set of 10,000 stress values, which is used to compute the uncertainty parameters of the stress. The generated data set can be used to evaluate the probability of failure of the constraint S1 < Smax.

From the MATLAB results, the probability of failure ranges from approximately 0.92 to 0.94. If the program is run multiple times, the resulting estimated probability of failure may change from run to run because of the random nature of the simulation. The amount of variation in the failure probability value depends on the number of simulation samples considered. As the number of simulation samples increases, the variation in the failure probability from run to run decreases. This point is further illustrated through the problems at the end of the chapter.
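The estimate above can be reproduced with a few lines of MATLAB. Since the closed form of Eq. 16.8 is not repeated here, the stress evaluation is left as a placeholder function handle, stressS1 (a hypothetical name); everything else follows the Pf ≈ Nf/N recipe described earlier.

% Monte Carlo estimate of the failure probability of S1 < Smax.
% stressS1 is a placeholder: substitute the closed-form expression
% for S1 from Eq. 16.8 (with theta = 30 deg and beta = 60 deg).
N = 10000;                       % number of Monte Carlo samples
A1 = normrnd(1.6, 0.05, N, 1);   % sampled cross sectional areas (in2)
Smax = 350;                      % maximum allowable stress (ksi)
S1 = stressS1(A1);               % stress at each sampled area
Nf = sum(S1 >= Smax);            % number of infeasible samples
Pf = Nf / N                      % estimated failure probability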

The high failure probability obtained above is a function of the input design variable uncertainties. Uncertainty-based optimization helps find input uncertainty parameters that yield acceptable failure probabilities in a systematic manner.

16.6.2 First-Order and Second-Order Reliability Methods (FORM and SORM)

The First Order Reliability Method (FORM) and the Second Order Reliability Method (SORM) are more popularly used as major components of the Reliability-Based Design Optimization (RBDO) architecture (see Rozvany and Maute in Ref. [28]). However, they can also be leveraged for uncertainty propagation. Zaman et al. [29] reported that if the uncertainty described by intervals can be converted to a probabilistic format, well established probabilistic methods of uncertainty propagation, such as Monte Carlo methods [30] and optimization-based methods (FORM and SORM), can be readily used. The application of probabilistic methods for uncertainty propagation avoids the computational expense of interval analysis, as it allows for the treatment of aleatory and epistemic uncertainties together without nesting [29]. Du [31] also proposed a unified uncertainty analysis method based on the FORM, where aleatory uncertainty and epistemic uncertainty were modeled using probability theory and evidence theory, respectively. Further discussion of the FORM and the SORM can be found in Sec. 16.7.1.

16.6.3 Polynomial Approximation Using Taylor Series

Using this approach, the mean and standard deviation of the random function are approximated by first building a polynomial approximation of the function, followed by finding the moments of this polynomial approximation. A linear or quadratic polynomial approximation of the random function is constructed using a Taylor series expansion about a point of interest, usually the mean value vector.

Unlike sampling methods, this approach is not commonly used to estimate probabilities of failure because of its limited accuracy, especially for non-normal cases. It is more commonly used in robust design optimization formulations. The mean and the variance of a function of random variables, g(X), can be approximated using the first-order Taylor series [9], given as

μg ≈ g(μX)      (16.13)

σg² ≈ Σi Σj (∂g/∂Xi)(∂g/∂Xj) Cov(Xi, Xj)      (16.14)

where nx is the number of elements in the input random vector X; μX and σX denote the vectors of the means and standard deviations, respectively, of X; the partial derivatives are evaluated at μX; and Cov(Xi,Xj) denotes the covariance between the variables Xi and Xj, with i, j = {1, ..., nx}. Covariance is a measure of correlation between two sets of random variables [8].

The standard deviation of g can then be computed as the square root of the variance, σg = √(σg²). If the design variables are assumed to be statistically independent, the covariance between them is zero, and the variance expression given in Eq. 16.14 reduces to

σg² ≈ Σi (∂g/∂Xi)² (σXi)²      (16.15)

Using the above method to estimate moments is quite simple, as it uses a linear approximation of the random function to approximate its moments. In the case of black-box functions, where it is not possible to compute the analytical partial derivatives in the above equations, finite differences may be used instead. A careful choice of step size for the finite differences is necessary. This issue is further explored in Chapter 7.
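The following sketch illustrates Eqs. 16.13 and 16.15 in MATLAB for a hypothetical two-variable function, using central finite differences for the partial derivatives. The function g and the input statistics are made up for illustration, and the inputs are assumed independent.

% First-order Taylor estimate of the mean and standard deviation
% of g(X) = X1^2 + 3*X2 (hypothetical), with independent inputs.
g = @(x) x(1)^2 + 3*x(2);
mu = [1; 2];                 % input means
sigma = [0.1; 0.05];         % input standard deviations
h = 1e-6;                    % finite difference step size
grad = zeros(2, 1);
for i = 1:2
    e = zeros(2, 1); e(i) = h;
    grad(i) = (g(mu + e) - g(mu - e)) / (2*h);  % central difference
end
mu_g = g(mu)                                % Eq. 16.13: mu_g = 7
sigma_g = sqrt(sum((grad .* sigma).^2))     % Eq. 16.15: sigma_g = 0.25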

Example: Consider the expression for the stress, S1, from the previous subsection. Assuming that A1 is the input random variable, with a mean and a standard deviation of μA1 = 1 in2 and σA1 = 0.1 in2, respectively, the mean and standard deviation of S1 can be estimated as

(16.16)

(16.17)

16.6.4 Advanced Methods Overview

The previous subsection reviewed the Taylor series method, which approximates the moments of a random function by taking the moments of a polynomial approximation. The first-order Taylor series approximation is commonly used, since it is simple to implement and requires only gradient information. However, the method can yield erroneous results in some cases [32], and must be applied carefully. A first-order Taylor series approximation builds a linear approximation of the random function to compute its approximate moments. The approximation is valid only in the neighborhood of the linearization point, and may not be suitable for highly nonlinear functions.

One could also use a second-order Taylor series approximation of the mean and the variance of a random function, which may yield higher accuracy when compared to the first-order approximation, at the cost of higher computational expense.

Another method to compute moments is to approximate the distribution with respect to which the moments are computed [33]. Use of advanced methods, such as polynomial chaos [34] for uncertainty propagation, is also popular.

16.6.5 An Important Note on Uncertainty: Analysis vs. Optimization

Thus far, we have discussed the details of uncertainty quantification and propagation. These two steps together are commonly referred to as uncertainty analysis or, in some cases, probabilistic analysis or stochastic analysis. Note that uncertainty analysis alone is a complex and computationally expensive task, and may be the end goal of some engineering studies. These studies are performed to estimate the reliability or performance robustness of a system, and may involve complex physical phenomena modeled by computationally expensive simulation tools.

Design optimization problems under uncertainty can be viewed as one level above uncertainty analysis, where each iteration of the optimization usually requires uncertainty analysis of the objective and constraint functions. Optimization adds another layer of complexity to these problems.

Why then optimize these problems? As was discussed with the truss example previously, each set of input design variables yields a particular failure probability of the design. Ideally, the design will have the least possible constraint violation. Given a set of constraints in the problem, optimization helps automate the search for the least constraint violation for all constraints, while simultaneously minimizing the objectives. Without optimization, uncertainty analysis alone will merely allow us to explore the system’s probabilistic behavior, without the means to favorably impact it.

16.7 STEP 4: Embedding Uncertainty into an Optimization Framework

This step considers how the optimization problem under uncertainty is posed mathematically, and how the concepts studied thus far can be used in an optimization framework. Probabilistic methods, such as robust design optimization (RDO) and reliability-based design optimization (RBDO) techniques, are popular approaches that include the effects of uncertainty in an optimization framework. First, a brief overview of the pertinent issues is provided; illustrative examples are given later.

There are two primary challenges that designers face in optimization problems under uncertainty. The first challenge is to ensure that the design does not fail under uncertainty. This involves controlling the constraints’ failure probabilities by repeating the constraint uncertainty analysis for different values of the design variables, as illustrated in Fig. 16.5. One of the primary differences between RBDO and RDO techniques is the approach used for the uncertainty analysis of the constraints.

Figure 16.5. Uncertainty Problems in Engineering Design

The second challenge in optimization problems under uncertainty is to maintain robust performance of the system design. Consider the design of an air conditioning system that is required to maintain an ambient temperature of 27°C in a room. Several conditions are uncertain in this design (e.g., the outside temperature). In spite of the uncertainty, the air conditioning system must have the ability to maintain the desired ambient temperature in the room (i.e., the design should remain optimal). Moreover, it is required that the temperature of the room not fluctuate significantly under uncertainty. In other words, the performance of the design must be robust. A robust design is one that is minimally sensitive to uncertain conditions. The second important difference between the RBDO and RDO approaches is that RDO methods typically consider robustness metrics as one of the objectives to be minimized, while RBDO approaches typically do not.

The above two challenges (maintaining feasibility and robustness) are generally conflicting. Because of the computational complexity involved, most current research in this area tends to focus either (1) on rigorous constraint formulations (RBDO approaches) alone, with a simplified problem architecture, or (2) on a rigorous problem architecture alone, with a simplified constraint structure.

Example: The two bar truss problem is used to explore the above discussion in concrete terms. What is the goal of the optimization problem under uncertainty? Recall that in the deterministic formulation, the goal was to minimize the squared nodal displacement. In the uncertainty problem, this quantity is now random and has its own PDF, which is not a normal distribution. We use the uncertainty propagation tools previously discussed to determine the approximate mean and standard deviation of the squared displacement. More pertinent quantitative details are discussed in Sec. 16.7.3.

The objective of the robust problem can then be posed in terms of minimizing the mean of the squared displacement (optimality), or the standard deviation of the squared displacement (robustness), or both. In addition, the constraints must be suitably posed. A constraint can be posed in terms of the probability of constraint violation (RBDO approach), or by shifting the constraint boundary by a suitable amount (RDO approach).

We examine the RBDO methods first, followed by the RDO method.

16.7.1 Reliability-Based Design Optimization (RBDO)

The development in this field is primarily derived from the structural reliability community [9]. In this field of research, significant progress has been made over the last decade. This section provides a high level summary of the ideas involved in this area of research. Interested readers are directed to the references for details.

The RBDO approach emphasizes high reliability of the design by ensuring a desired probability of constraint satisfaction. The mean of a desired performance metric is usually used as the objective function for RBDO problems. A general formulation for the constraint g(X) ≤ 0 in the RBDO approach can be given as

P[g(X) > 0] ≤ 1 − R      (16.18)

where R is the desired reliability of the constraint. The probability of failure in Eq. 16.18 is given by the following integral.

Pf = P[g(X) > 0] = ∫ ... ∫_{g(x)>0} fX(x1, x2, ..., xn) dx1 dx2 ... dxn      (16.19)

where fX(x1, x2, ..., xn) is the joint probability density function of the n random variables {X1, ..., Xn}. In practice, the joint probability density function of the design variables is almost impossible to obtain. Even if it can be obtained, evaluating the multiple integral in Eq. 16.19 is difficult. Analytical approximations of the integral that yield the probability of failure are typically used: the First-Order Reliability Method (FORM) and the Second-Order Reliability Method (SORM) [9].

The inequality constraint g(X) ≤ 0 is usually called the limit state equation. The limit state equation can be a linear or a nonlinear function of the design variables. The FORM method can be used when the limit state equation is a linear function of uncorrelated normal random variables, or is represented as a linear approximation of equivalent normal variables. The SORM method estimates the probability of failure using a second order approximation of the limit state.
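For the special case of a linear limit state with independent normal variables, FORM is exact, and the failure probability follows directly from the reliability index. The sketch below illustrates this special case; the limit state coefficients and input statistics are hypothetical.

% FORM for a linear limit state g(X) = a0 + a'*X with independent
% normal inputs (exact in this special case). Failure: g(X) > 0.
a0 = -5; a = [1; 2];                   % hypothetical limit state coefficients
mu = [1; 1]; sigma = [0.3; 0.4];       % hypothetical input statistics
mu_g = a0 + a' * mu;                   % mean of g
sigma_g = sqrt(sum((a .* sigma).^2));  % standard deviation of g
beta = -mu_g / sigma_g;                % reliability index
Pf = normcdf(-beta)                    % probability of failure, P(g > 0)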

The concept of the Most Probable Point (MPP) is used to approximate the multiple integral [35]. The MPP is usually a point in the design space that is at the minimum distance from the origin (closest to failure), after performing the requisite coordinate transformations. In the case of nonlinear constraints, the computation of the MPP distance from the origin is an optimization problem. The overall RBDO problem is then a nested optimization problem, where the computation of the probability of failure at each iteration is itself an optimization problem. Several computationally efficient techniques, such as sequential methods [36] and hybrid methods [37], have been developed for efficient implementation of the nested (“double loop”) RBDO formulations.

It is important to note that there exist multiple optimization architectures within RBDO (e.g., “double loop” and “single loop” approaches). The more traditional approach is the nested, or “double loop,” approach discussed above. In this approach, each iteration of the design optimization process requires a loop of iterations of the reliability analysis. Two popular “double loop” approaches are the reliability index approach (RIA) [38] and the performance measure approach (PMA) [39, 37]. These approaches apply FORM to perform the reliability analysis, which requires an inner nonlinear constrained optimization loop. When the constraints are active, the two approaches yield similar results. However, in the literature, PMA has been reported to be more efficient and stable than RIA [39, 37]. When the reliability analysis (comprising the inner loop) searches for the MPP, the overall “double loop” computation can become prohibitive, particularly if the concerned function evaluation is computationally expensive [40, 41, 42].

In order to reduce the computational burden of RBDO, several approximate RBDO approaches have been developed that decouple the double loop problem. These approaches are popularly known as “single loop” approaches. A list of “single loop” approaches is summarized in the paper by Nguyen et al. [43]. One of the major “single loop” approaches uses the Karush-Kuhn-Tucker (KKT) optimality conditions to approximate the solution of the inner-loop optimization [44]. The inner loop is replaced by a deterministic constraint, which transforms a double-loop RBDO problem into an equivalent single-loop optimization problem. More advanced single loop approaches have been proposed in recent years [45, 43].

16.7.2 Use of Approximation Methods Under Uncertainty

Incorporating reliability evaluations within an optimization framework can be computationally prohibitive. This problem is compounded in the presence of computationally expensive limit states, where each evaluation of the function could take hours or possibly days. To alleviate this computational burden, a popular approach is to use approximation methods, such as response surfaces and Kriging, in optimization problems under uncertainty. The approximation could be done at the optimization level, at the uncertainty propagation level, or both. More details can be found in Ref. [46].

Example: The two bar truss problem considered thus far has a simple failure surface, which is the normal stress equation. Instead, if the stresses were computed using a finite element analysis, the computational expense would increase significantly. At each iteration of the optimization problem, an uncertainty propagation for the finite element code would be performed, which, in itself, is a computationally expensive task. Approximation methods can significantly reduce these computational costs.

16.7.3 Robust Design Optimization (RDO)

As previously mentioned, the focus in RDO methods is to minimize the effects of uncertainty on product performance [7, 47, 49, 50, 51, 52, 53, 54, 55]. While the constraint handling in RDO problems is typically simpler than that in RBDO problems, RDO has been used in problems in multidisciplinary and multiobjective settings. The challenges in these problems and the pertinent formulation methods are discussed in this section. An illustrative example is provided after the theoretical developments are discussed.

To reduce the computational burden associated with probabilistic constraints, a simpler approach known as the moment matching approach is widely used in RDO [56, 48]. The moment matching approach employs moments of the constraints and objectives in the optimization framework, unlike the computationally expensive probabilistic formulation in the RBDO approach. For inequality constraints, a worst-case formulation is usually used, where the constraints are shifted so that the worst case uncertainty still results in a feasible design [48] (see Fig. 16.6).

Figure 16.6. Inequality Constraints: Deterministic vs. Robust
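The sketch below shows how such a shifted (worst-case) inequality constraint might be set up for use with fmincon; the constraint, its moments, and the shift factor k = 3 are all hypothetical.

% Worst-case shift of an inequality constraint g(x) <= 0:
% replace g by mu_g(x) + k*sigma_g(x) <= 0 (see Fig. 16.6).
k = 3;                                   % shift, in standard deviations
mu_g    = @(x) x(1) + x(2) - 10;         % hypothetical mean of g
sigma_g = @(x) sqrt(0.1^2 + 0.2^2);      % hypothetical std of g (constant)
robust_c = @(x) mu_g(x) + k*sigma_g(x);  % shifted constraint value
% Pass to fmincon as the inequality part of a nonlinear constraint:
nonlcon = @(x) deal(robust_c(x), []);    % c(x) <= 0, no equality part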

Equality constraints need careful consideration when formulated under uncertainty because of the strictness associated with their feasibility. An equality constraint can be classified into one of two types in a robust problem [57, 58]: (1) Type I: those that are always exactly satisfied (e.g., laws of nature, such as static and dynamic equilibrium), and (2) Type II: those that are not always exactly satisfied (e.g., designer-imposed dimensional constraints, such as Eq. 16.5).

Classification of constraints into the above two types depends on the nature of the design variables present in them, and/or the designer’s preferences [58]. Some of the existing equality constraint formulations are [58]: (1) Type I constraints: eliminate the constraint by substituting for a dependent variable, or satisfy the constraint at its mean value, and (2) Type II constraints: satisfy the constraint as closely as possible using the approximate moment matching method [58], or satisfy it exactly at its mean value (see Fig. 16.7). An illustrative example is presented shortly.

Figure 16.7. Formulation for Type II Constraints

The objectives of the RDO problem are usually to optimize the mean of the objective function, to minimize the standard deviation of the objective function, or both. The standard deviation of the objective function can be estimated using a first order Taylor series expansion. The RDO mathematical formulation is presented using the two bar truss example.

Example: Recall the deterministic truss formulation presented earlier. Using the discussion provided in this section regarding inequality and equality constraints, the objective function, and the design variables, the following RDO formulation can be obtained (explained in detail next).

(16.20)

subject to

(16.21)

(16.22)

(16.23)

(16.24)

1. Design variables: Assume that a1, a2, and b are uncertain normal random variables. The corresponding random design variables are denoted by A1, A2, and B. Assume σai = 0.005 in2 and σb = 6 in.

2. Inequality constraints: The inequality constraints, involving S1 and S2, and the design variable bounds are shifted by three respective standard deviations. This makes the constraints more conservative by shrinking the feasible design space, as illustrated in Fig. 16.6. The mean and the standard deviation of the inequality constraints can be computed using the uncertainty propagation methods discussed in Sec. 16.6.

3. Equality constraints: Constraints h1 and h2 of the deterministic problem are connectivity constraints at node P, which must be satisfied. These are Type I constraints. Eliminate the constraints h1 and h2 by substituting for θ and β. Constraint h3 is a dimensional constraint, which restricts the structural volume of the truss to be equal to 4,000 in3. This is a Type II constraint. We could either satisfy the constraint exactly at its mean value, or use the approximate moment matching method. We chose the first option because of its simplicity.

4. Objectives: Consider minimization of the mean and the standard deviation of the squared nodal displacement, μJ and σJ, which can be calculated using a first order Taylor series expansion.

5. Solutions: Note that the deterministic single objective problem (Eq. 16.1) has now become multiobjective under uncertainty (Eq. 16.20). Solve this multiobjective problem and obtain its Pareto set (see Fig. 16.8(a)). The above formulation presents an interesting tradeoff situation, since a Pareto set of robust solutions is now available to choose from, as opposed to a single deterministic solution. Each of the Pareto solutions represents a different tradeoff between the μJ and σJ objectives; a sketch of one way to generate such Pareto points is given after this list.

6. Observation: Obtaining the partial derivatives for the above expressions for the first order Taylor series approximation can be tedious. The MATLAB Symbolic Toolbox was used in this case to ease the burden. In addition, note that the objectives are of different magnitudes, and scaling may be needed.
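As a simple illustration of generating Pareto points for a bi-objective robust problem, the sketch below sweeps an ε-constraint scalarization: minimize μJ subject to σJ ≤ ε for several values of ε. The objective and robustness functions here are hypothetical stand-ins; the frontier in Fig. 16.8 was generated with the normalized normal constraint method of Chapter 17.

% epsilon-constraint sweep for min {mu_J, sigma_J}: minimize mu_J
% subject to sigma_J <= epsLim. Hypothetical functions for illustration.
mu_J    = @(x) (x(1) - 2)^2 + (x(2) - 1)^2;  % hypothetical mean objective
sigma_J = @(x) 0.5*x(1)^2 + 0.1;             % hypothetical robustness metric
opts = optimoptions('fmincon', 'Display', 'off');
pareto = [];
for epsLim = 0.2:0.2:1.0
    nonlcon = @(x) deal(sigma_J(x) - epsLim, []);  % c(x) <= 0, no ceq
    xs = fmincon(mu_J, [1; 1], [],[],[],[],[],[], nonlcon, opts);
    pareto = [pareto; mu_J(xs), sigma_J(xs)];      % one Pareto point per epsLim
end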

Figure 16.8. Two-Bar Truss Results

Next is the final step of optimization under uncertainty, as shown in Fig. 16.2. Once we have the solutions, how do we interpret them?

16.8 STEP 5: How to Analyze the Results

This section presents a series of issues that are important in analyzing the results. These include: (1) mean performance and robustness tradeoff, (2) deterministic vs. robust solutions, (3) constraint tradeoffs, (4) final design choice, and (5) multiobjective problems under uncertainty: the decision making problem.

16.8.1 Mean Performance and Robustness Trade-off

The mean performance and the variation in performance are usually conflicting, and a tradeoff decision must be made. Several researchers have studied how to model this tradeoff [7, 47] using multiobjective formulations. Constraint satisfaction also affects these tradeoff decisions.

Example: Figure 16.8(a) presents the Pareto frontier for the two bar truss RDO problem, which reflects the tradeoff between the mean performance and the robustness objectives. The Pareto frontier provided in Fig. 16.8(a) has been generated using the normalized normal constraint method [59], which is described in detail in Chapter 17. Use of the weighted sum method for this problem cannot generate the complete representation of the Pareto frontier.

Next, we compare the robust solutions with the deterministic solution.

16.8.2 Deterministic vs. Robust Solutions

Deterministic optimization does not consider uncertainty explicitly, and is not a good choice for the final design when uncertainty is important. If the deterministic design were chosen as the final design, a small change in the design variable values would likely push the constraints into the infeasible region, or yield a higher standard deviation for the objective function. Instead, uncertainty-based methods take the uncertainty into account to minimize constraint violations.

Example: Assume that the deterministic optimum is used for the truss design. Now assume that the design variables are uncertain, with σai = 0.005 in2 and σb = 6 in. Take the deterministic optimum to be the mean design variable vector, and use the above standard deviation values for the design variables. Compute the constraint violation for the S1 constraint using a Monte Carlo simulation. Since the deterministic solution did not take uncertainty into account, the failure probability for the S1 constraint is 0.53, which indicates a constraint violation.

When the violation of the S1 constraint about the robust optima is computed, the failure probabilities range from 0.0008 to 0.0025. This is one of the primary advantages of optimization under uncertainty: better constraint feasibility is obtained by accounting for the uncertainties beforehand.

Now examine how the robustness of the objective function compares for the deterministic vs. RDO cases. As above, use the deterministic optimum for the mean design variable values and the above prescribed standard deviation values. If the robustness metric for this scenario, σJ, is computed, a value of σJ = 3.8 in2 is obtained. This implies that if the truss is designed deterministically, and there are uncertainties in the design variables as given above, there will be significant variations in the deflection of node P. However, if the robust formulation is used, the output σJ is explicitly minimized, thereby ensuring a robust design. Note that the least σJ for the RDO formulation along the Pareto frontier is approximately 2.5 in2, which is more desirable than the deterministic scenario (see Fig. 16.8(a)).

The next issue of interest is the tradeoffs that arise between constraint satisfaction and objective minimization.

16.8.3 Constraint Trade-offs

The RDO formulation usually formulates the inequality constraints by shifting them (i.e., making them stricter than does the deterministic formulation). While doing so can reduce the constraint violations under uncertainty, it also usually makes the RDO mean objective worse than the deterministic objective. The higher the desired constraint satisfaction, the worse the RDO mean objective. Similarly, for equality constraints, the constraint satisfaction under uncertainty can be increased only if a deteriorated mean performance is acceptable. The approximate moment matching method is generally more suitable for exploring equality constraint tradeoffs. Engineering problems typically display this kind of tradeoff between constraint satisfaction and objective minimization.

Example: For the two bar truss RDO formulation, note that as the inequality constraint shift is increased from three to six standard deviations, the estimated failure probability for the S1 constraint is reduced to zero along the Pareto frontier. However, this improved failure probability also results in the worsening of μJ and/or σJ, as shown by the hollow circles in Fig. 16.8(b). As the shift increases, the Pareto frontier moves into the northeast region of the design space. This is an interesting tradeoff in optimization problems under uncertainty.

16.8.4 Final Design Choice

The various tradeoff studies and trends regarding constraint satisfaction and robustness metrics have been presented. Now that a set of candidate solutions has been generated, how do we choose one desirable design? This final choice entails some subjectivity, and is usually made after an extensive design space exploration has been conducted. Visualization techniques can help in understanding the tradeoffs graphically [60, 61].

16.8.5 Multiobjective Problems Under Uncertainty: Decision-Making Problem

The problem formulations discussed thus far considered only a single deterministic objective function. Design problems are often multiobjective in nature, and can be challenging to model in the robust domain. Managing the tradeoffs among constraint satisfaction, robustness, and the mean performance objective of a single objective problem can be a complicated task by itself. If there are multiple design objectives, the tradeoff scenario can be very challenging. Readers interested in this topic are referred to [7, 47].

16.9 Other Popular Methods

This section introduces two other popular approaches used for robust design and optimization under uncertainty: (1) The Taguchi Method, and (2) Stochastic Programming.

16.9.1 Taguchi’s Robust Design Methods

Taguchi defines robustness as “the state where the product or process design is minimally sensitive to factors causing variability, at the lowest cost” [62]. Taguchi’s product design approach consists of three stages: system design, parameter design, and tolerance design. System design is the conceptual design stage. Parameter design, also known as robust design, enhances the robustness of the design by identifying factors that reduce the design’s sensitivity to noise. Tolerance design is the phase where appropriate tolerances for parameter values are specified. Taguchi methods use metrics, such as the signal-to-noise ratio and quality loss functions, to perform parameter design. However, within the basic Taguchi method, constraints are not incorporated. Further enhancements of the Taguchi method can be found in these references [63, 64].

16.9.2 Stochastic Programming

Stochastic programming and similar techniques are used extensively in the fields of mathematics, finance, and artificial intelligence. Many of the ideas and issues that were studied in this chapter provide the building blocks for these approaches as well. Interested readers are pointed to the following references on stochastic programming [65, 66, 67, 68].

16.10 Summary

This chapter provided an introductory presentation of the issues involved in design optimization under uncertainty. This area is an active field of research, and this chapter provides a summary of the popular methods available, with references to more detailed information for interested readers. Optimization under uncertainty is a process that can be defined as including five main steps, as defined in Fig. 16.2. We explained the basics of each of the five steps involved, and provided examples to illustrate the discussion. Using a truss example, optimization problems under uncertainty were presented as decision making problems, where a tradeoff must be made among multiple conflicting issues of interest.

16.11 Problems

Intermediate Problems

16.1 Derive the equations for the stresses, S1 and S2, for the truss problem, as shown in Eqs. 16.8 and 16.9.

16.2 Using MATLAB, duplicate the deterministic two bar truss results presented in Sec. 16.2.

16.3 Duplicate the results shown in Fig. 16.4 using the histfit and normplot commands in MATLAB.

16.4 Consider the results of Sec. 16.6.1.

(1) Duplicate the results of the example discussion of Sec. 16.6.1.

(2) Now let the number of samples be 10. Estimate the failure probability for this case. Run your program 10 different times and observe the results. Do all 10 runs match? Explain.

(3) Increase the number of samples to 1,000, 10,000, and 1,000,000. For each case, run your program 10 times and record the probability of failure values. As the number of samples increases, report your observations regarding the failure probability values of the multiple runs.

16.5 Explore the histfit command in MATLAB. Understand what the X-axis and Y-axis values of the plot represent.

(1) For each number of samples given in Problem 16.4, plot the histogram of the stress values and label your plot. Observe the change in the shape and position of the histogram as you increase the number of samples. Explain your observations.

(2) Where does the maximum allowable stress, Smax, lie on the plot? Show the failure region on the plot.

(3) Using the histogram plot and the Smax value, comment on why the failure probability obtained in this case appears sensible.

16.6 Repeat Parts (2) through (3) of Problem 16.4 with a mean A1 of 1.8 in2, and the standard deviation given in Sec. 16.6.1. How do the values of the failure probabilities change when compared to Problem 16.4?

16.7 Our objective is to minimize the failure probability obtained in Sec. 16.6.1. Set up an optimization problem that uses the mean of A1 as a design variable. Assume that the standard deviation of A1 is as prescribed in Sec. 16.6.1. Solve the optimization problem to find the value of the mean of A1 that yields the least failure probability for the S1 constraint. (Hint: Each iteration of your objective function requires one Monte Carlo simulation. Use a sample size of 100,000.)

16.8 Repeat Problem 16.5 using the assumptions in Problem 16.6.

(1) Observe the change in the shape and position of the histogram when compared to those obtained in Problem 16.5. What do you observe? (Hint: Pay special attention to the tails of the histogram, and compare them with the maximum allowable stress value.)

(2) Identify the failure regions on the plot. Justify the failure probability values for this case using the histogram plot.

16.9 Consider the stress S2 given in Eq. 16.9. Assume that A2 is normally distributed with a mean of 1 in2 and a standard deviation of 0.1 in2. Assume that θ = 60° and β = 30°. Derive the expression for S2 based on the data given above. Repeat Parts (2) through (3) of Problem 16.4.

16.10 In the previous problem, the failure probability of S2 was very high. What can be done to reduce its value? Which parameters can you change to reduce the failure probability? Discuss how optimization can help in this context.

16.11 Set up an optimization problem that uses the mean of A2 as a design variable. Assume that the standard deviation of A2 is as prescribed in Problem 16.9. Solve the optimization problem to find the value of the mean of A2 that yields the least failure probability for the S2 constraint. (Hint: Each iteration of your objective function requires one Monte Carlo simulation. Use a sample size of 100,000.)

16.12 Consider the equation h = A1l1 + A2l2, which is part of the equality constraint of the truss problem; here l1 and l2 denote the lengths of Bars 1 and 2. This expression denotes the structural volume of the truss. Assume that A1 and A2 are normal random variables with mean values of 2.4 and 1.5 in2, and standard deviations of 0.1 and 0.1 in2, respectively. Assume that b = L. Using a Monte Carlo simulation, estimate the mean and standard deviation of the structural volume. (A sketch is provided below.)
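
A minimal sketch under the stated assumptions follows; l1 and l2 are placeholders for the bar lengths (in inches) at b = L, computed from the geometry of Fig. 16.1.

    % Monte Carlo sketch for Problem 16.12. l1 and l2 (bar lengths at
    % b = L, in inches) are placeholders from the truss geometry.
    n = 1e5;
    A1 = normrnd(2.4, 0.1, n, 1);    % samples of A1 (in^2)
    A2 = normrnd(1.5, 0.1, n, 1);    % samples of A2 (in^2)
    h = A1*l1 + A2*l2;               % structural volume samples (in^3)
    muH  = mean(h)                   % estimated mean of the volume
    sigH = std(h)                    % estimated standard deviation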

16.13 Using the histfit command, plot the histogram of the structural volume from the previous problem.

(1) In the constraint shown in Eq. 16.5, observe that the structural volume is restricted to be 4,000 in3. Show this value on your histogram plot. Comment on the failure region of the constraint.

(2) How does the failure region of the equality constraint differ from that of the inequality constraint studied in the earlier examples?

(3) What can we possibly change in the given data to reduce the failure region for the equality constraint? How can optimization help in this context?

16.14 Consider a linear function P = 2X1 + 3X2 + 4X3, where X1, X2, and X3 have means of 0.1, 0.4, and 1, respectively, and standard deviations of 0.01, 0.01, and 0.01, respectively. Estimate the moments of P using a first order Taylor series approximation.
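
For a linear function the first order Taylor series moments are exact: the mean of P is P evaluated at the means, and, assuming the Xi are independent (an assumption made in this sketch), the variance is the sum of the squared coefficients times the variances.

    % First order moment estimates for Problem 16.14 (exact here,
    % since P is linear). X1, X2, X3 are assumed independent.
    c   = [2 3 4];                   % coefficients of P
    mu  = [0.1 0.4 1];               % means of X1, X2, X3
    sig = [0.01 0.01 0.01];          % standard deviations
    muP  = c*mu'                     % mean of P
    sigP = sqrt((c.^2)*(sig.^2)')    % standard deviation of P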

16.15 Consider a nonlinear function, f = 5X1^2 - X2X3 + cos(X4), where X1, X2, X3, and X4 have means of 1, 1, 1, and 0, and standard deviations of 0.1, 0.1, 0.05, and 0.1, respectively. Estimate the moments of f using a first order Taylor series approximation.
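
For a nonlinear function, the first order approximation evaluates f and its gradient at the means: the mean of f is approximately f(mu), and the variance is approximately the sum of the squared partial derivatives times the input variances, again assuming independent inputs.

    % First order moment estimates for Problem 16.15, assuming the Xi
    % are independent. The gradient is evaluated at the means.
    mu  = [1 1 1 0];                       % means of X1, ..., X4
    sig = [0.1 0.1 0.05 0.1];              % standard deviations
    f   = @(x) 5*x(1)^2 - x(2)*x(3) + cos(x(4));
    g   = [10*mu(1), -mu(3), -mu(2), -sin(mu(4))];  % gradient of f at mu
    muF  = f(mu)                           % approximate mean of f
    sigF = sqrt((g.^2)*(sig.^2)')          % approximate standard deviation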

16.16 Duplicate the results discussed in Sec. 16.6.3 for the truss problem. Derive the expressions shown in Eqs. 16.16 and 16.17.

16.17 We are interested in estimating the mean and standard deviation of the stress S2 of the truss problem. In Eq. 16.9, assume that A2 is normally distributed with a mean of 1 in2 and a standard deviation of 0.1 in2. Assume that θ = 60° and β = 30°. Estimate the mean and standard deviation of S2 using a first order Taylor series expansion.

16.18 Consider the equation h = A1l1 + A2l2, which is part of the equality constraint of the truss problem; here l1 and l2 denote the lengths of Bars 1 and 2, and h denotes the structural volume of the truss. Assume that A1 and A2 are normal random variables with mean values of 2.4 and 1.5 in2, and standard deviations of 0.1 and 0.1 in2, respectively. Assume that b = L. Find the mean and standard deviation of h using a first order Taylor series approximation.

16.19 Read the following paper and prepare a 2-page summary of its key contributions. The paper reference is: Messac, A., and Ismail-Yahaya, A., "Multiobjective Robust Design Using Physical Programming," Structural and Multidisciplinary Optimization, Vol. 23, No. 5, 2002, pp. 357-371.

16.20 Implement in MATLAB the RDO formulation for the two-bar truss problem shown in Eqs. 16.20 through 16.24. Using this MATLAB code, generate the results shown in Sec. 16.8.2 for the deterministic case.

16.21 Duplicate the deterministic case results shown in Sec. 16.8.3.

16.22 Duplicate the deterministic case results shown in Sec. 16.8.1.

BIBLIOGRAPHY OF CHAPTER 16

[1] J. N. Siddall. Probabilistic Engineering Design: Principles and Applications. Mechanical Engineering Series. CRC Press, 1983.

[2] A. D. Kiureghian and O. Ditlevsen. Aleatory or epistemic? Does it matter? Structural Safety, 31(2):105-112, 2009.

[3] H. Agarwal, J. E. Renaud, E. L. Preston, and D. Padmanabhan. Uncertainty quantification using evidence theory in multidisciplinary design optimization. Reliability Engineering and System Safety, 85(1-3):281-294, 2004.

[4] R. W. Youngblood, V. A. Mousseau, D. L. Kelly, and T. N. Dinh. Risk-informed safety margin characterization (RISMC): Integrated treatment of aleatory and epistemic uncertainty in safety analysis. In The 8th International Topical Meeting on Nuclear Thermal-Hydraulics, Operation and Safety (NUTHOS-8), Shanghai, China, October 2010.

[5] M. S. Eldred and L. P. Swiler. Efficient algorithms for mixed aleatory-epistemic uncertainty quantification with application to radiation-hardened electronics - Part I: Algorithms and benchmark results. Technical Report SAND2009-5805, Sandia National Laboratories, Albuquerque, NM and Livermore, CA, September 2009.

[6] A. Urbina, S. Mahadevan, and T. L. Paez. An approximate epistemic uncertainty analysis approach in the presence of epistemic and aleatory uncertainties. Reliability Engineering and System Safety, 77(3):229-238, 2002.

[7] R. C. Smith. Uncertainty Quantification: Theory, Implementation, and Applications. SIAM, 2013.

[8] S. M. Ross. A First Course in Probability. Prentice Hall, Upper Saddle River, NJ, 5 edition, 1998.

[9] A. Haldar and S. Mahadevan. Probability, Reliability, and Statistical Methods in Engineering Design. John Wiley and Sons, Inc, New York, 2000.

[10] S. Gunawan and P. Y. Papalambros. A Bayesian approach to reliability-based optimization with incomplete information. ASME Journal of Mechanical Design, 128(4):909-918, 2006.

[11] K. Sentz and S. Ferson. Combination of evidence in Dempster-Shafer theory. Technical Report SAND2002-0835, Sandia National Laboratories, Setauket, New York, April 2002.

[12] H. R. Bae, R. V. Grandhi, and R. A. Canfield. Uncertainty quantification of structural response using evidence theory. AIAA Journal, 41(10):2062-2068, 2003.

[13] J. C. Helton, J. D. Johnson, W. L. Oberkampf, and C. J. Sallaberry. Sensitivity analysis in conjunction with evidence theory representations of epistemic uncertainty. Reliability Engineering and System Safety, 91(10-11):1414-1434, 2006.

[14] L. V. Utkin and S. V. Gurov. A general formal approach for fuzzy reliability analysis in the possibility context. Fuzzy Sets and Systems, 83:203-213, 1996.

[15] X. G. Bai and S. Asgarpoor. Fuzzy-based approaches to substation reliability evaluation. Electric Power Systems Research, 69(2-3):197-204, 2004.

[16] L. Du, K. K. Choi, B. D. Youn, and D. Gorsich. Possibility-based design optimization method for design problems with both statistical and fuzzy input data. ASME Journal of Mechanical Design, 128(4):928-935, 2006.

[17] J. Zhou and Z. P. Mourelatos. A sequential algorithm for possibility-based design optimization. ASME Journal of Mechanical Design, 130(1), 2008.

[18] B. D. Youn, K. K. Choi, and D. Gorsich. Integration of possibility-based optimization to robust design for epistemic uncertainty. ASME Journal of Mechanical Design, 129(4):876-882, 2008.

[19] F. P. N. Coolen and M. J. Newby. Bayesian reliability analysis with imprecise prior probabilities. Reliability Engineering and System Safety, 43(1):75-85, 1994.

[20] H. Z. Huang, M. J. Zuo, and Z. Sun. Bayesian reliability analysis for fuzzy lifetime data. Fuzzy Sets and Systems, 157(12):1674-1686, 2006.

[21] B. D. Youn and P. Wang. Bayesian reliability-based design optimization using eigenvector dimension reduction method. Structural and Multidisciplinary Optimization, 36(2):107-123, 2008.

[22] Z. P. Mourelatos and J. Zhou. A design optimization method using evidence theory. ASME Journal of Mechanical Design, 128(4):901-908, 2006.

[23] J. M. Aughenbaugh and C. J. J. Paredis. The value of using imprecise probabilities in engineering design. ASME Journal of Mechanical Design, 128(4):969-979, 2006.

[24] J. Zhou and Z. P. Mourelatos. Design under uncertainty using a combination of evidence theory and a Bayesian approach. SAE International Journal of Materials and Manufacturing, 1(1):122-135, April 2009.

[25] P. Wang, B. D. Youn, Z. Xi, and A. Kloess. Bayesian reliability analysis with evolving, insufficient, and subjective data sets. ASME Journal of Mechanical Design, 131(11):111008-11, 2009.

[26] M. Grigoriu. Stochastic Systems: Uncertainty Quantification and Propagation. Springer, 2012.

[27] T. W. Simpson, D. Lin, and W. Chen. Sampling strategies for computer experiments: Design and analysis. International Journal of Reliability and Application, 2(3), 2001.

[28] G. I. Rozvany and K. Maute. Analytical and numerical solutions for a reliability-based benchmark example. Structural and Multidisciplinary Optimization, 43(6):745-753, 2011.

[29] K. Zaman, M. McDonald, and S. Mahadevan. Probabilistic framework for uncertainty propagation with both probabilistic and interval variables. ASME Journal of Mechanical Design, 133(2):021010-14, February 2011.

[30] C. P. Robert and G. Casella. Introducing Monte Carlo Methods. Springer-Verlag, 2010.

[31] X. Du. Unified uncertainty analysis by the first order reliability method. ASME Journal of Mechanical Design, 130(9):091401, 2008.

[32] G. W. Oehlert. A note on the Delta method. The American Statistician, 46(1):27-29, February 1992.

[33] R. N. Bhattacharya and R. R. Rao. Normal Approximation and Asymptotic Expansions. SIAM, 2010.

[34] D. Xiu and G. E. Karniadakis. Modeling uncertainty in flow simulations via generalized polynomial chaos. Journal of Computational Physics, 187(1):137-167, 2003.

[35] A. M. Hasofer and N. C. Lind. Exact and invariant second-moment code format. Journal of Engineering Mechanics Division, 100(1):111-121, 1974.

[36] X. Du and W. Chen. Sequential optimization and reliability assessment method for efficient probabilistic design. ASME Journal of Mechanical Design, 126(2):225-233, 2004.

[37] B. D. Youn, K. K. Choi, and Y. H. Park. Hybrid analysis method for reliability-based design optimization. ASME Journal of Mechanical Design, 125(2):221-232, 2003.

[38] I. Enevoldsen and J. D. Sorensen. Reliability-based optimization in structural engineering. Structural Safety, 15(3):169-196, 1994.

[39] J. Tu, K. K. Choi, and Y. H. Park. A new study on reliability-based design optimization. ASME Journal of Mechanical Design, 121(4):557-564, 1999.

[40] B. D. Youn, K. K. Choi, and L. Du. Enriched performance measure approach (PMA+) for reliability-based design optimization. AIAA Journal, 43(4):874-884, 2005.

[41] R. Yang and L. Gu. Experience with approximate reliability-based optimization methods II: An exhaust system problem. Structural and Multidisciplinary Optimization, 29(6):488-497, 2005.

[42] S. Shan and G. Wang. Reliable design space and complete single-loop reliability-based design optimization. Reliability Engineering and System Safety, 93(8):1218-1230, 2008.

[43] T. H. Nguyen, J. Song, and G. H. Paulino. Single-loop system reliability-based design optimization using matrix-based system reliability method: Theory and applications. ASME Journal of Mechanical Design, 132(1):011005-11, 2010.

[44] J. Liang, Z. P. Mourelatos, and J. Tu. A single-loop method for reliability-based design optimisation. International Journal of Product Development, 5(1-2):76-92, 2008.

[45] M. Ba-Abbad, E. Nikolaidis, and R. Kapania. New approach for system reliability-based design optimization. AIAA Journal, 44(5):1087-1096, 2006.

[46] R. Jin, X. Du, and W. Chen. The use of metamodeling techniques for optimization under uncertainty. Structural and Multidisciplinary Optimization, 25(2):99-116, 2003.

[47] A. Messac and A. Ismail-Yahaya. Multiobjective robust design using Physical Programming. Structural and Multidisciplinary Optimization, 23(5):357-371, 2002.

[48] X. Du and W. Chen. Towards a better understanding of modeling feasibility robustness in engineering design. ASME Journal of Mechanical Design, 122:385-394, December 2000.

[49] X. Gu, J. E. Renaud, S. M. Batill, R. M. Brach, and A. Budhiraja. Worst case propagated uncertainty of multidisciplinary systems in robust optimization. Structural Optimization, 20(3):190-213, 2000.

[50] S. Gunawan and S. Azarm. Multi-objective robust optimization using a sensitivity region concept. Structural and Multidisciplinary Optimization, 29(1):50-60, 2005.

[51] W. Chen, R. Garimella, and N. Michelena. Robust design for improved vehicle handling under a range of maneuver conditions. Engineering Optimization, 33(3):303-326, 2001.

[52] C. D. McAllister and T. W. Simpson. Multidisciplinary robust design optimization of an internal combustion engine. ASME Journal of Mechanical Design, 125(1):124-130, 2003.

[53] J. K. Allen, C. Seepersad, H. Choi, and F. Mistree. Robust design for multiscale and multidisciplinary applications. ASME Journal of Mechanical Design, 128(4):832-843, 2006.

[54] H. Liu, W. Chen, M. Kokkolaras, P. Y. Papalambros, and H. M. Kim. Probabilistic analytical target cascading: A moment matching formulation for multilevel optimization under uncertainty. ASME Journal of Mechanical Design, 128(4):991-1000, 2006.

[55] B. Dodson, P. Hammett, and R. Klerx. Probabilistic Design for Optimization and Robustness for Engineers. John Wiley and Sons, 2014.

[56] A. Parkinson, C. Sorensen, and N. Pourhassan. A general approach for robust optimal design. ASME Journal of Mechanical Design, 115(1):74-80, 1993.

[57] S. Rangavajhala, A. A. Mullur, and A. Messac. Equality constraints in multiobjective robust design optimization: Decision making problem. Journal of Optimization Theory and Applications, 140(2):315-337, 2009.

[58] S. Rangavajhala, A. A. Mullur, and A. Messac. The challenge of equality constraints in robust design optimization: Examination and new approach. Structural and Multidisciplinary Optimization, 34(5):381-401, 2007.

[59] A. Messac, A. Ismail-Yahaya, and C. A. Mattson. The normalized normal constraint method for generating the Pareto Frontier. Structural and Multidisciplinary Optimization, 25(2):86-98, 2003.

[60] G. Stump, T. W. Simpson, M. Yukish, and L. Bennett. Multidimensional visualization and its application to a design by shopping paradigm. In 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta, GA, USA, September 2002.

[61] S. Rangavajhala, A. A. Mullur, and A. Messac. Uncertainty visualization in multiobjective robust design optimization. In 2nd AIAA Multidisciplinary Design Optimization Specialist Conference, Newport, RI, USA, May 2006.

[62] G. Taguchi, S. Chowdhury, and S. Taguchi. Robust Engineering: Learn How to Boost Quality While Reducing Costs and Time to Market. McGraw-Hill, 1999.

[63] K. N. Otto and E. K. Antonsson. Extensions to the Taguchi method of product design. ASME Design Theory and Methodology, 115(1):5-13, 1993.

[64] K. L. Tsui. Robust design optimization for multiple characteristic problems. International Journal of Production Research, 37(2):433-445, 1999.

[65] J. R. Birge and F. Louveaux. Introduction to Stochastic Programming. Springer, 2011.

[66] I. M. Stancu-Minasian. Stochastic Programming with Multiple Objective Functions. Editura Academiei, 1984.

[67] D. Bertsimas and M. Sim. The price of robustness. Operations Research, 52(1):35-53, 2004.

[68] J. Mulvey, R. J. Vanderbei, and S. Zenios. Robust optimization of large scale systems. Operations Research, 43(2):264-281, 1995.