
PART 2


USING OPTIMIZATION—THE ROAD MAP

5


Introducing Linear and Nonlinear Programming

5.1 Overview

This chapter introduces important fundamental principles for the optimization of any system whose performance can be quantitatively defined in terms of quantities that can be controllably changed (i.e., design variables). In Sec. 5.2, we discuss the various classes of optimization problems. Specifically, we discuss optimization in terms of (i) linearity vs. nonlinearity, (ii) single vs. multiple objectives, (iii) discrete vs. continuous design variables, and (iv) constrained vs. unconstrained optimization. In Sec. 5.3, we discuss the important realization that, in most cases, problems are treated as if they involved a single objective. In Sec. 5.4, we present the solution approaches to optimization: analytical, numerical, graphical, and experimental. In the last section, Sec. 5.5, we introduce several of the software packages available for solving different classes of optimization problems.

5.2 Problem Classes

As we deal with a given optimization problem, it is important to know what class (or type) of problem it represents. This is because each class of optimization problem generally calls for a different type of solution approach, as well as a different class of software or algorithm. In this section, we define the major classes of optimization problems, part of which is presented in Fig. 5.1. A general optimization problem can be formulated using the following set of equations (Refs. [1, 2]).

min J(x)

(5.1)

subject to

g(x) ≤ 0

(5.2)

h(x) = 0

(5.3)

xl ≤ x ≤ xu

(5.4)

Figure 5.1. Classification of Optimization Methods

The function J(x) represents the objective function, which we would like to minimize. The idea is that as we minimize the objective function, the system or design will behave better (e.g., be cheaper, stronger, lighter, or faster). The function g(x) represents a vector of inequality constraints and the function h(x) represents a vector of equality constraints. These constraints make the design feasible (i.e., not unacceptable). For example, they might ensure that the mass is not negative or that a process is not prohibitively expensive. They are called behavioral constraints. The vector x represents the vector of design variables. These are the quantities that we can change in the design to improve its behavior. The constraints on the design variables are called side constraints. A set of design variables that fully satisfies all the constraints is called a feasible solution (even if it does not minimize the objective function). However, thus far, we have provided no information regarding the important properties of the above quantities. These properties define the class of the optimization problem. We will now be more specific.

Broadly speaking, optimization problems can be classified along seven major categories. Knowledge of, and appreciation for, these categories is important as they help us understand whether the problem at hand is simple or involved, and whether a given algorithm or software applies to our problem. Figure 5.1 provides a helpful presentation of the interaction between the first four. The categories are as follows:

1. Linear vs. Nonlinear

2. Constrained vs. Unconstrained

3. Discrete vs. Continuous

4. Single vs. Multiobjective

5. Single vs. Multiple Minima

6. Deterministic vs. Nondeterministic

7. Simple vs. Complex

These categories are discussed below.

(1) Linear vs. Nonlinear: Are the functions J(x), g(x), and h(x) linear or nonlinear functions of x? When the objective function and the constraints are all linear, the problem is called a linear programming (LP) problem. (As mentioned earlier, linear programming and linear optimization mean the same thing. The former is often used for historical reasons.) When either the objective function or any of the constraints is a nonlinear function of the design variables, the problem is called a nonlinear programming (NLP) problem (which means the same as a nonlinear optimization problem). Generally, LP problems are much easier to solve than NLP problems, particularly when the dimension of the problem is large (i.e., a large number of design variables). In addition, the more nonlinear the problem, the more difficult it may be to optimize. It is also more likely to present numerical difficulties. Fortunately, there is much that we can do to minimize these difficulties. As a general rule, solving a large LP problem is a more reliable process than solving a large NLP problem. There are many different ways to pose a linear programming problem, all of which are practically equivalent. One possible option is as follows.

min cᵀx

(5.5)

subject to

Ax ≤ b

(5.6)

Aeqx = beq

(5.7)

xl ≤ x ≤ xu

(5.8)

In the above equations, the quantities A and Aeq represent matrices of constant coefficients. The quantities c, b, beq, xl, and xu represent constant vectors. The important observation to make here is that the objective function, as well as the constraints, are all linear functions of the design variables x, which is what makes this a linear programming problem. As a final comparison, we examine the equality constraints in the linear and nonlinear programming formulations. Specifically, let us compare Eq. 5.3 and Eq. 5.7. In the former, h is a function of x, which may be linear or nonlinear. In the latter, we have a multiplication between a matrix, Aeq, and a vector, x, which is a linear operation (see Refs. [3, 4]).
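To make the distinction concrete: minimizing 2x1 + 3x2 subject to x1 + x2 ≤ 4 fits the linear form above and is therefore an LP, whereas minimizing x1² + 3x2 subject to the same constraint is an NLP, because the objective is then a nonlinear function of x1.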

(2) Constrained vs. Unconstrained: Does the optimization problem have constraints? When the optimization problem does have constraints, we call it a constrained optimization problem. When we do not have any constraints at all, the problem is an unconstrained optimization problem. Most practical problems involve constraints. Thus, unconstrained problems generally have more theoretical than practical value. Specifically, in the case of nonlinear programming, the problem statement would only involve Eq. 5.1; the relations 5.2 and 5.3 would not be part of the (unconstrained) optimization problem. However, for reasons that will be explained later, we must have constraints in the case of linear programming. Otherwise, the solution will not generally be finite. (We invite you to ponder why this is the case.) In other words, in the case of linear programming, at least one of the relations 5.6, 5.7, and 5.8 must be part of the linear programming problem. This discussion applies in the case of continuous design variables (why this limitation?). Continuity or discreteness of design variables is discussed next.

(3) Discrete vs. Continuous: Are any of the design variables discrete, or are they all continuous? If any design variable is discrete, we no longer have a continuous optimization problem. We note the following typical cases. (i) In the first, the design variables could be restricted to take only the values of 0 or 1. This case is called 0-1 programming or binary programming. (ii) In the second case, the design variables are restricted to take on only integer values. This case is called integer programming (see Refs. [5, 6]). (iii) In the third case, the design variables are restricted to take on only a given prescribed set of real values. This case is generally called discrete optimization. Often, we have a mixture of the above cases, where some variables may be 0-1 type, some may be integers, some may take on given/prescribed real numbers, while others might be continuous. Different methods are available to handle these different cases. Continuous optimization problems are usually much easier to deal with. It is worth noting that in some cases, the integer programming problem might allow for the design variable to take on any integer value, while in other cases, it might call for a restricted given set of integer values. We invite you to think of practical optimization cases that may involve the above different cases. One such example is the case of optimizing the diameter of a pipe that is available in ten different sizes. Another case may involve optimizing profit in the case where a loan package must be chosen among ten possibilities, each of which involves a different interest rate and a different origination fee (see Ref. [7]).

(4) Single vs. Multiobjective: In general, the design or optimization of practical systems involves tradeoffs among competing objectives. For example, to improve a car’s crashworthiness, we make it heavier; but, in the process, the fuel efficiency is worsened. Therefore, these two objectives (crashworthiness and fuel efficiency) compete against each other and require a compromise where we tradeoff one for the other as needed for an optimal situation. As we think of the above situation and of most practical problems, we realize that most optimization problems involve more than a single objective. Most are multiobjective in nature. There are many methods available to deal with both single objective and multiobjective problems. This is an important area of study for the practical application of optimization. In fact, we devote a whole chapter (Chapter 6) to this topic. As such, we will not discuss it here any further. We only note that this is an important classification of optimization problems.

(5) Single vs. Multiple Minima: Does the optimization problem have a single minimum/maximum (optimum), or multiple optimum values (i.e., is it unimodal or multimodal)? Solving optimization problems that have several optima is referred to as global optimization. This optimization case is much more difficult to handle than single-optimum optimization, as we will see later. Furthermore, the algorithms that are used to handle unimodal or multimodal problems are quite different. A significant class of engineering problems can be managed using single-optimum optimization algorithms, which is the main focus of this book. Finally, we note that single minimum search is referred to as local optimization, while multi-minima search is referred to as global optimization.

(6) Deterministic vs. Nondeterministic: In recent years, designers have realized that information is almost never exact. They began to understand that there is a high cost associated with low/tight tolerances: the more precisely we manufacture a part, the more costly it will be. We also started to understand that it is not necessary for every part in a design to have the same tolerance. In addition, there are certain aspects of the product over which we do not have direct control. Demand for a product, for example, is something that we can only estimate. The net result is that some aspects of the design can be represented by design variables that are deterministic, while other design variables might have to be treated as nondeterministic (e.g., probabilistic or stochastic). In this book, we explicitly focus on deterministic optimization, although many of the techniques discussed in this book can be used for nondeterministic optimization (see Ref. [8]).

(7) Simple vs. Complex Problem: Perhaps the most critical aspect of a problem at hand is to understand whether solving it will be a simple or a complex endeavor. Let us explain what we mean. A simple problem can be viewed as one that can be solved relatively easily by virtue of certain characteristics. This can be the case (i) because the model of the system is provided or readily created, (ii) because it only involves continuous variables, (iii) because it is not strongly nonlinear, (iv) because it is expected that local optimization will be sufficient, (v) because the computational model of the system behavior can run on a computer in seconds or minutes (not hours), (vi) because the number of design variables is not large, (vii) because all the models needed to describe the system behavior can run on a single computer, or (viii) because all the design variables are deterministic. An assessment of the above items gives us a sense of how simple or complex it will be to optimize the design or system. In practice, however, each of the above items only provides us with indications, and not absolutes, in terms of how easy a problem will be to solve.

In Fig. 5.1, we observe the interaction of the first four categories discussed above. The shaded area represents a case that should not be considered. Specifically, this case represents continuous unconstrained linear programming, which leads to a solution at ∞ or -∞. The linear programming chapter (Chapter 11) presents this scenario in detail. The chapters in which various cases are discussed in the book are also provided. Boxes 1 and 2 are covered by the bulk of this book and most introductory texts. Parts of Boxes 3 and 4 are also covered to offer the possibility of a more advanced treatment of practical optimization.

5.3 Single Objective Optimization—An Inclusive Notion

Most practical problems in design or in other technical areas involve conflicting considerations. For example, when you make a part as light as possible, it cannot also become as strong as possible. In this case, we are dealing with two conflicting objectives: minimize mass and maximize strength. In practice, we must settle for a compromise between the two objectives. One way to deal with this compromise (i.e., tradeoff) is through so-called multiobjective optimization methods. We present this important topic in greater detail in Chapter 6.

However, there are important reasons to learn about single objective optimization:

(i) As you will see, in practice, a multiobjective problem can be transformed to make it appear to be single objective. This seemingly single objective is actually some form of aggregation of the multiple objectives involved. This point will become clearer later.

(ii) In line with the above comment, we often treat computational optimization as single objective.

(iii) Finally, we note that some problems are truly single objective. The most glaring example is the case where one wishes to simply maximize one quantity: profit.

5.4 Solution Approaches: Analytical, Numerical, Experimental and Graphical

In the above section, we discussed the various categories, or classes, of optimization problems. We also discussed their associated implications in terms of overall difficulty. In this section, we discuss the various approaches that are available to us to solve optimization problems. Generally speaking, we can identify four broad solution approaches: analytical, numerical (algorithmic), experimental, and graphical. We briefly discuss the first three, followed by a more detailed presentation of the fourth (graphical) approach.

5.4.1 Analytical Optimization

The analytical approach is one that most high school and first-year engineering students are actually already familiar with. To use this approach, we must have a mathematical function available to us that we wish to optimize. This mathematical function is supposed to represent the performance of our design or system. By finding the conditions that optimize this function, we will have identified the conditions that optimize the performance of the design. We learned in calculus that if we differentiate this function, and we identify the point where the obtained derivative is zero, we will have identified an optimum point. We can then take the second derivative. If it is positive, we have a minimum. If it is negative, we have a maximum. In the case where we also have constraints in the problem, the situation is a bit more involved. It requires us to use Lagrange multipliers, which we present in Sec. 13.5.
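For example, to minimize f(x) = x² - 4x + 1, we set f′(x) = 2x - 4 = 0, which gives x* = 2. Since f″(x) = 2 > 0, this point is indeed a minimum, with f(x*) = -3.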

This approach was the most viable in the pre-computer age, when differential calculus rather than numerical computation was king. Unfortunately, for most practical problems, this approach is too complicated to be viable. However, while it is rarely practical, it does help us understand how to develop the more useful numerical algorithms upon which the whole modern field of computational optimization is based.

5.4.2 Numerical (or Algorithmic) Optimization

The numerical optimization approach is indeed the modern way to optimize. It fully exploits the power of a computer as a computational workhorse. It helps us explore endless possibilities that we would not be able to identify in any manual way. Basically, this approach uses algorithms to help us search the possible options. Stated differently, we start with one possible design (generally a bad one!). Based on some logic in an algorithm, we estimate a next design (which we hope will be better). We then keep iterating until we reach what we hope will be the optimal design. The better the algorithm, the more likely we are to reach the optimal design and reach it faster.

As we can see, this approach involves an iterative process. The advantage and power of this approach is two-fold. First, it uses an algorithm to determine what the next improved design should be. (There is no guess-work). Second, it uses the computer to perform the iterations. Since the computer does not get tired, we can perform a thorough exploration and generally obtain an optimal design in a reasonable amount of time.
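To make the iterative idea concrete, the following is a minimal sketch of one of the simplest such algorithms: steepest descent with a fixed step size, applied to a hypothetical quadratic objective. It illustrates the iterative principle only; it is not the algorithm used by MATLAB or by any particular solver discussed in this book.

f = @(x) x(1)^2 + 3*x(2)^2;    % hypothetical objective to be minimized
g = @(x) [2*x(1); 6*x(2)];     % gradient of the objective
x = [1; 1];                    % initial (generally poor) design
alpha = 0.1;                   % fixed step size
for k = 1:50
    x = x - alpha*g(x);        % step opposite the gradient (descent direction)
end
disp(x)                        % iterates approach the optimum [0; 0]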

Here, we note a key distinguishing feature of this book’s approach to learning optimization. Traditionally, when we study optimization, we spend most of our time learning how these search algorithms work. Unfortunately, we might become extremely knowledgeable as to how they work, while that knowledge may play little role in helping us become better designers in practice. The mere knowledge of the inner workings of these algorithms may or may not play a critical role in our design activities. Instead, we need to know how to use them in a computational design infrastructure. This is what is presented in the first 10 chapters of this book. The knowledge of these algorithms is, however, a useful addition to our advanced understanding of the topic of optimization. As such, we do provide a presentation of these topics in Chapters 11 through 13. In addition, we provide some advanced practical topics in Chapters 14 through 19.

The optimization process can be described more vividly using numerical and graphical illustrations. The following two examples illustrate how the minimization procedure is executed to obtain optimal solutions.

For our first example, let us consider the following optimization problem:

(5.9)

subject to

-1 ≤ x1 ≤ 1

(5.10)

-0.5 ≤ x2 ≤ 0.5

(5.11)

Figure 5.2 helps to illustrate how optimization is carried out numerically on an objective function, with two design variables satisfying the constraints in (5.10) and (5.11).

Figure 5.2. Numerical Illustration of Optimization

The function, f(x), generates the contour plot shown in the figure. Each point on the diagram represents an iteration, and the number in the brackets represents the value of the objective function for the given design variables. The optimal solution is defined as the smallest possible value of the objective function satisfying the specified constraints. As indicated on the diagram, the contour corresponding to f(x) = 1.00 forms the feasible boundary of the space possessing possible solutions (feasible region). The initial point, x = [1, 0.2], yields the initial design objective value of f(x) = 1.56, which is outside of the feasible region. The arrows show the step-by-step procedure carried out by the optimizer to obtain the minimized solution. The iterative process indicates that, even though the value of the objective function is being decreased, it is still possible to achieve lower objective function values satisfying the constraints. The iterations will continue until the objective function reaches its minimum value (the optimal solution). Executing the fmincon function in MATLAB to minimize this objective function requires 13 iterations. From examining this function, we observe that the optimal solution lies at x* = [0, 0], yielding f(x)* = 0. In practice, MATLAB, or any other optimization code, will generate the optimum with a finite number of accurate digits. The default is typically 4 to 6 digits. For the above example, MATLAB generated the optimal design values of x* = [-0.0341, -0.002], yielding f(x)* = 1.0337 × 10⁻⁶. Obtaining more accurate values for the design variables would require either scaling the objective function or changing some of the MATLAB default accuracy parameters, as discussed in the Numerical Essentials chapter (Chapter 7).
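Since the objective function of Eq. 5.9 is not reproduced here, the following sketch substitutes a simple hypothetical quadratic purely to show how a bound-constrained problem of this form is passed to fmincon; only the bounds (Eqs. 5.10 and 5.11) and the initial point are taken from the example above.

fun = @(x) x(1)^2 + x(2)^2;    % hypothetical stand-in for the objective of Eq. 5.9
x0  = [1; 0.2];                % initial point used in the text
LB  = [-1; -0.5];              % lower bounds (Eqs. 5.10 and 5.11)
UB  = [ 1;  0.5];              % upper bounds (Eqs. 5.10 and 5.11)
[xopt, fopt] = fmincon(fun, x0, [], [], [], [], LB, UB)
% for this stand-in objective, xopt is near [0; 0] and fopt is near 0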

5.4.3 Experimental Optimization

Experimental optimization, as it is called, can be viewed as a traditional trial and error approach to design. It essentially involves the following steps: (i) build a version of the physical system, (ii) evaluate its performance, and (iii) if we are satisfied with the current performance, we STOP; if not, we review what changes might be helpful, and go back to the first step, hopefully yielding an improved version.

This approach is practically obsolete. It can be very costly and time consuming, and it might not converge easily to a good solution. It might lead to a very sub-optimal solution. It may also require quite a bit of experience in order to work. This approach makes innovative new designs less likely. Fortunately, in this new highly competitive world where high performance computing is available, we can do much better using other approaches.

5.4.4 Graphical Optimization

It is also possible to illustrate the optimization process graphically. We now consider the following optimization problem:

min f(x) = x1 + 2x2

(5.12)

subject to

x1² + x2² ≤ 1

(5.13)

(5.14)

In this example, the given constraint is a circle whose perimeter forms the boundary of the feasible region. Figure 5.3 illustrates this problem.

Figure 5.3. Graphical Illustration of Optimization

In the diagram, each line corresponds to the value of the objective function during the minimization process. If the initial point, P, where x = [1, 1], is chosen, then f(x) = 3.00. It is clear that this line does not intersect the circle at any point, making all the points lying on this line infeasible. Going through a similar iterative process as in the first example, the objective function is sequentially reduced to find the optimal solution. Therefore, for the case when f(x) = 2.00, we can obtain possible solutions that fall between points a1 and a2. Even though these points are feasible solutions, it is still possible to obtain a smaller value for f(x) satisfying the constraints (recall the definition of the optimal solution). The minimization process will result in smaller and smaller values of the objective function, with feasible solutions bounded between the points a1a2, ..., e1e2. As indicated in the figure, a point, O*, will be reached, such that the value of the design variables at this point will result in the smallest possible value of the objective function, which also satisfies the constraint equations, Eqs. 5.13 and 5.14. This is the optimal solution. MATLAB generated x* = [-0.4472, -0.8944], yielding f(x)* = -2.2361 (≈ -√5), as the optimal values for this problem. Note that the optimal point falls on the line that is tangent to the circle. Further minimization is no longer required, as smaller values of the objective function would yield points that fall outside the feasible region.
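Assuming the objective and circle constraint as reconstructed in Eqs. 5.12 and 5.13, the following minimal sketch reproduces the optimum quoted above.

fun    = @(x) x(1) + 2*x(2);                   % objective of Eq. 5.12
circle = @(x) deal(x(1)^2 + x(2)^2 - 1, []);   % C(x) <= 0 of Eq. 5.13; no equalities
x0 = [1; 1];                                   % initial point P
[xopt, fopt] = fmincon(fun, x0, [], [], [], [], [], [], circle)
% xopt is approximately [-0.4472; -0.8944], and fopt is approximately -2.2361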

5.5 Software Options for Optimization

In this section, we discuss the various options available for solving optimization problems using computer software. As presented in Table 5.1, we define three main classes of optimization software. The first class involves stand-alone optimization software, where the primary focus is to solve various types of prescribed optimization problems. The second class involves design and/or analysis integration frameworks, where analysis codes from different engineering disciplines can be conveniently integrated and designs can be optimized. The third class of optimization software involves large-scale analysis codes that have optimization capabilities as one of their offered features, typically added in recent years with the growing popularity of optimization. Details of the above options are discussed next. For convenience, we respectively refer to these classes as (i) Software for Optimization as Stand-Alone (SO-SA), (ii) Software for Optimization Within Design Framework (SO-WDF), and (iii) Software for Optimization Within Analysis Package (SO-WAP).

Table 5.1. Broad Classification of Software for Optimization



Software for Optimization (SO)

| Stand-Alone (SO-SA) | Within Design Framework (SO-WDF) | Within Analysis Package (SO-WAP) | Discrete, Integer, or Mixed |
| MATLAB Toolbox | iSIGHT | GENESIS | XPRESS |
| NEOS Server | PHX ModelCenter | NASTRAN | CPLEX |
| DOT-VisualDOC | modeFRONTIER | ABAQUS | Excel and Quattro |
| NAG | Boss Quattro | Altair | NEOS Server |
| XPRESS | | ANSYS | LINDO/LINGO |
| NPSOL | | COMSOL | MINLP |
| GRG2 | | MS Excel | GAMS |
| LSSOL | | | GAMS WORLD |
| CPLEX | | | What'sBest! |
| BTN | | | RISKOptimizer |
| PhysPro | | | Business Spreadsheets |

Within this section, we first provide a table (Table 5.1) that lists many of the popular software for each of these classes in the first three columns. The last column lists those software that perform optimization for problems with discrete, integer, or mixed design variables. In Section 5.5.1, we discuss the MATLAB Optimization Toolbox. In Sections 5.5.2, 5.5.3 and 5.5.4, we discuss each of the three classes in sequence. The last column is discussed in Section 14.3.7, where the topic of discrete optimization is presented.

We also note that it may be useful to classify optimization software as being (i) small scale or large scale, (ii) easy or difficult to use, or (iii) particularly effective or less so. The choice of a particular optimization software will generally depend on pertinent user experience and on the problem under consideration.

5.5.1 MATLAB Optimization Code—fmincon and linprog

Nonlinear Optimization/Programming

Here, we describe the procedure for solving a constrained nonlinear optimization problem using MATLAB. The first step is to carefully examine how MATLAB defines the optimization problem, which is as follows

min f(x)

(5.15)

subject to

C(x) ≤ 0

(5.16)

Ceq(x) = 0

(5.17)

Ax ≤ b

(5.18)

Aeq x = beq

(5.19)

LB ≤ x ≤ UB

(5.20)

Each of the above variables is defined within the context of the following example. Figure 5.4 provides a top level view of the optimization algorithm. A more detailed presentation is provided in Sec. 7.3, and illustrated in Fig. 7.1. Note that if we wish to maximize a function f(x) using the MATLAB minimization routines, we simply minimize -f(x) instead.
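As a brief illustration of this last point, suppose a file fun.m returns a quantity f(x) to be maximized; one may simply hand the minimizer the negative of the function (a sketch, with the constraint arguments omitted for brevity):

[xopt, fopt] = fmincon(@(x) -fun(x), x0, [], [], [], [], LB, UB);
fmax = -fopt;    % the maximum of f(x) is the negative of the minimum found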

Figure 5.4. Optimization Flow Diagram

Let’s use the following example to further explain the optimization problem definition given by Eqs. 5.15-5.20.

Example 5.1

min f(x) = x1² + 3x2² - 2x1x2 - 15

(5.21)

subject to

-2x1 - 2x2 + 8 ≤ 0

(5.22)

-5 ≤ x1 ≤ 5,  -5 ≤ x2 ≤ 5

(5.23)

We would like to obtain the optimum values of the design variables x1 and x2 that minimize the objective function f(x) in Eq. 5.21, subject to the constraints in Eqs. 5.22 and 5.23 being satisfied.

The fmincon function

To solve the above problem, we use the fmincon function provided in MATLAB. The syntax of the function is as follows.

[xopt, fopt] = fmincon(‘fun’, x0, A, b, Aeq, beq, LB, UB,
‘nonlcon’)

where x0, A, b, Aeq, beq, LB, and UB are the input variables that need to be defined before calling fmincon. ‘fun’ is the name of the function file containing the definition of f(x), and ‘nonlcon’ is the name of the function file containing the nonlinear constraints. The variables xopt and fopt are the outputs of fmincon, where xopt is the optimum vector of variables [x1,x2] and fopt is the minimum value of the objective function f.

The Calling Function: main.m

Define the input variables: x0: This is the initial guess provided to fmincon. It is a vector of size equal to the number of variables in the optimization problem. In this case, we have two variables, x1 and x2. Let us define x0 = [1;1].

A, b, Aeq, and beq: These variables need to be defined only if the problem has linear constraints. In many cases, all constraints (linear and nonlinear) can be defined in the nonlcon.m file, so these variables can simply be defined as empty matrices. For example, A=[ ]. However, there are great numerical advantages to defining the linear constraints within Eqs. 5.18 and 5.19, rather than within Eqs. 5.16 and 5.17, respectively (see the short example following this list).

LB, UB: These are the vectors that contain the lower and upper bounds on the variables x1 and x2. From the above problem definition (Eq. 5.23), we define: LB = [-5;-5] and UB = [5;5]. If a problem does not have bounds, these variables can simply be declared as empty vectors. We recommend creating a main.m file that defines all the above variables and calls fmincon for the optimization.
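As an aside on the A and b entries above, note that the single constraint of this example, -2x1 - 2x2 + 8 ≤ 0 (Eq. 5.22), happens to be linear. It could therefore be rewritten as -2x1 - 2x2 ≤ -8 and supplied through Eq. 5.18 instead, in which case nonlcon.m would return empty C and Ceq:

A = [-2 -2];     % coefficients of x1 and x2 in the linear inequality
b = [-8];        % right-hand side of the linear inequality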

After defining the above quantities, we call the function fmincon.m. The function fmincon.m calls (i) nonlcon.m to evaluate the constraints and (ii) fun.m to evaluate the objective function as needed.
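A minimal main.m along these lines might read as follows (a sketch consistent with the definitions above):

% main.m -- define the inputs and call fmincon
x0  = [1; 1];             % initial guess
A   = [];  b   = [];      % no linear inequality constraints supplied here
Aeq = [];  beq = [];      % no linear equality constraints
LB  = [-5; -5];           % lower bounds (Eq. 5.23)
UB  = [ 5;  5];           % upper bounds (Eq. 5.23)
[xopt, fopt] = fmincon('fun', x0, A, b, Aeq, beq, LB, UB, 'nonlcon')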

Objective Function file: fun.m

This file should be saved in the same folder as the above main M-file. This function returns the value of the objective function at any given point x. The contents of fun.m are as follows:

function f = fun(x)
f = x(1)^2 + 3*x(2)^2 - 2*x(1)*x(2) - 15;

Nonlinear constraints file: nonlcon.m

This file should be saved in the same folder as the main and the function files. It returns the values of the inequality and equality constraints at any given point x. Note that all inequality constraints should first be written in the form g(x) ≤ 0, as shown in Eq. 5.22. The expression on the left-hand-side of the inequality is then defined in the constraint file nonlcon.m as shown below.

function [C,Ceq] = nonlcon(x)
C(1) = -2*x(1) - 2*x(2) + 8;
Ceq = [ ];

Note that C(1) is the first and only nonlinear inequality constraint in this problem. If the problem had more inequality constraints (say, n), they would be defined as C(2), C(3),...,C(n). Ceq is the vector of all equality constraints, which is empty in this problem.
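For illustration, a hypothetical problem with two nonlinear inequalities and one nonlinear equality would extend nonlcon.m along the following lines (the constraint expressions here are invented purely for the sketch):

function [C,Ceq] = nonlcon(x)
C(1) = x(1)^2 + x(2)^2 - 4;    % first inequality, g1(x) <= 0
C(2) = 1 - x(1)*x(2);          % second inequality, g2(x) <= 0
Ceq(1) = x(1) + x(2) - 2;      % one equality, h1(x) = 0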

Calling fmincon

Once all the variables and function files are defined, we can call the fmincon function as shown above, to obtain the output

xopt =
2.6667
1.3333
fopt =
-9.6667

Figure 5.5 provides a plot of the objective function and the feasible region. fmincon also allows the user to set a number of different optimization parameters, such as the maximum number of iterations and other termination criteria. It also allows the user to pass problem parameters to the fun.m and the nonlcon.m files. More help on these features can be obtained by typing ‘help fmincon’ at the MATLAB command prompt.

Figure 5.5. Nonlinear Optimization

Options

Optimization options can be set for fmincon using the command optimset. Some options apply to all algorithms, while others are relevant only to particular algorithms. You can use optimset to set or change the values of the options arguments. The options arguments include algorithm selection, information display settings, and serial or parallel gradient estimation. Please see the Optimization Options help in MATLAB for detailed information.
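For example, one might request iteration-by-iteration display, cap the number of iterations, and tighten the function tolerance, and then pass the resulting structure as the last argument to fmincon (a sketch using standard optimset options):

options = optimset('Display', 'iter', 'MaxIter', 200, 'TolFun', 1e-8);
[xopt, fopt] = fmincon('fun', x0, A, b, Aeq, beq, LB, UB, 'nonlcon', options);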

Display

Depending on the option selected for display, MATLAB can show different outputs in the Command Window. The information includes outputs at each iteration and provides technical exit messages. The exit messages can include hyperlinks. These hyperlinks bring up a window containing further information about the terms used in the exit messages.

Linear Optimization/Programming

Here, we describe the procedure for solving a linear programming problem using the MATLAB function linprog. The first step is to carefully examine the way MATLAB defines the optimization problem, which is as follows.

min fᵀx

(5.24)

subject to

Ax ≤ b

(5.25)

Aeq x = beq

(5.26)

LB ≤ x ≤ UB

(5.27)

where f is a so-called cost coefficient vector, A and Aeq are constant matrices, and b and beq are constant vectors. The quantities LB and UB are the lower and upper bounds on the design variables. This optimization statement is further explained within the context of the following example.

Example 5.2

min f(x) = 4x1 + 4x2

(5.28)

subject to

(5.29)

(5.30)

(5.31)

The linprog function

The basic syntax of the linprog function is as follows

[xopt,fopt] = linprog(f, A, b)

where xopt and fopt are the optimum values of the design variables and objective function, respectively. Before solving the linear program, we need to ensure that the objective function is of a minimization type, and that all the inequality constraints are written in the form a1x1 + a2x2 ≤ b1, where a1, a2, and b1 are constants (see Eqs. 5.29 and 5.30).
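For example, a constraint such as 5x1 + 3x2 ≥ 30 must first be multiplied by -1 and rewritten as -5x1 - 3x2 ≤ -30 before its coefficients are placed in A and b.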

Define the input variables
f: This is a vector containing the coefficients of the design variables in the objective function f. Thus, in the current problem,

f = [4;4]

A: This is a matrix in which every row corresponds to the coefficients of the design variables in each “≤” constraint. Therefore, A will have as many rows as we have inequality constraints, and as many columns as we have design variables.

A = [-5 -3; -3 -5]

Aeq and beq: These matrices do not exist in this example, so we write

Aeq = [ ]
beq = [ ]

b: This is a vector in which each element is the number that appears on the right-hand-side of each “≤” constraint. We need to make sure that the order of the constraints used to define A and b is the same.

b = [-30; -15]
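Collecting the definitions above, a minimal complete script for this example reads:

f = [4; 4];              % cost coefficients of the objective function
A = [-5 -3; -3 -5];      % coefficients of the "<=" constraints
b = [-30; -15];          % right-hand sides of the "<=" constraints
[xopt, fopt] = linprog(f, A, b)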

Calling linprog
Calling linprog using the above syntax yields

xopt =
1.8750
1.8750
fopt =
15.0000

Figure 5.6 illustrates the objective function (dashed line) being minimized and the feasible region defined by the constraints in Eqs. 5.29 and 5.30. The dashed lines represent constant values of the objective function. As this line moves downward, decreasing the value of the objective function, it reaches the optimum point. If it decreased any further, no part of the line would remain in the feasible region. Type “help linprog” at the MATLAB command prompt to explore other features and options.

Figure 5.6. Linear Optimization

5.5.2 Software for Optimization as Stand-Alone (SO-SA)

Stand-alone optimization software refers here to those software that are created primarily for optimization purposes. We note that the following is only a representative sample of the software options available. A more complete listing of available software packages is provided in [9]. As shown below, there are software packages that have the capability to solve different types of optimization problems. A representative list is provided below with associated descriptions.

Please note that the websites provided for some of the software options listed below, while active as of the writing of this chapter, are subject to change in the future. The purpose of the following discussion is to inform the readers of the broad set of software options available in the marketplace to perform optimization. In practice, the latest information can be readily obtained through an internet search.

1. Multipurpose Stand-Alone Optimization Software

a) MATLAB optimization toolboxes: MATLAB has two optimization toolboxes that can solve various types of optimization problems, such as linear, nonlinear, and multiobjective problems. General information regarding the MATLAB toolboxes was presented in Chapter 1.

b) NEOS server: The NEOS server is an environment to solve optimization problems over the internet. It is available at the website www.neos-server.org/neos, and can be used to solve a variety of optimization problems, such as linear, nonlinear, discrete, integer programming, and combinatorial problems. The NEOS server also offers global optimization algorithms. The user needs to provide the optimization problem in a specific format, which depends on the type of optimization problem being solved. The software details of the solver can be found in Ref. [10].

c) GENESIS, DOT, and VisualDOC: GENESIS is a structural optimization software package that also has finite element analysis capabilities. The DOT and VisualDOC programs are developed by Vanderplaats Research and Development, Inc., and are distributed as part of the GENESIS package. These software packages can solve nonlinear, multiobjective, and discrete optimization problems. Typical problem sizes are well over 100 design variables with thousands of constraints. Details are provided in [11]. Further information about these software tools can be found at the following website.

www.vrand.com

d) NAG library: Several implementations, such as C and Fortran, are available for the NAG library [12, 13]. It can be used to solve linear, quadratic, and nonlinear programming problems. Further information regarding the NAG Library can be found at the following website.

www.nag.com/optimization/availnaglib.asp

2. Packages for Specific Classes of Problems

a) Nonlinear constrained problems:

i. NPSOL: NPSOL is a software package used to solve constrained optimization problems. It is especially effective for nonlinear problems whose functions and gradients are expensive to evaluate. These algorithms are claimed to be numerically stable, and to provide global convergence. Details are available at the following website.

www.sbsi-sol-optimize.com/asp/sol_product_npsol.htm

ii. GRG2: The GRG2 software [14] is used for nonlinear programming problems. It can be used as a stand-alone system or as a subroutine. It can handle up to two hundred active constraints. Another implementation of GRG2, known as LSGRG2 [15], can solve large scale optimization problems with over 500 constraints.

b) Linear programming problems:

i. LSSOL: It is a software package for solving constrained linear programs. The software is claimed to be numerically stable, and is used to solve problems in statistics, economics, and finance. Details are available at the following website.

www.sbsi-sol-optimize.com/asp/sol_product_lssol.htm

ii. CPLEX: CPLEX is used to solve linear programming and integer programming problems. It employs both Simplex and interior point algorithms for linear programming. It can be used directly or as a subroutine. The software is claimed to be robust and reliable with a user-friendly interface. Details are available at the website:

www.ibm.com/software/commerce/optimization/cplex-optimizer

c) Unconstrained nonlinear problems:
The BTN software can be used to solve unconstrained nonlinear optimization problems, and is implemented in a parallel computing environment. This software is particularly suitable for large scale problems. Details are in [16].

d) Multiobjective problems:

i. NIMBUS: This software can be used to solve multiobjective problems. In the NIMBUS method, the user examines the values of the objective functions calculated at a current solution, and divides the objective functions into as many as five classes. The classes are functions whose values: (1) should be improved, (2) should be improved until some aspiration level is reached, (3) are satisfactory at the moment, (4) are allowed to worsen up to some bound, or (5) are allowed to change freely. A new subproblem is constructed and solved according to the classification and the associated information. Details are provided in [17]. Further information regarding NIMBUS can be found at the following website.

https://wwwnimbus.it.jyu.fi

ii. PhysPro: This software offers a unique and comprehensive formulation and solution to multiobjective problems. It embodies the Physical Programming method, which is briefly presented in Chapter 18, and fully in Refs. [18, 19]. For more information regarding PhysPro, please visit www.physpro.com (Ref. [20]).

Next, we discuss some examples of integration frameworks where multidisciplinary analysis and optimization can be performed.

5.5.3 Software for Optimization Within Design Framework (SO-WDF)

The following is a representative list of various integration frameworks and their optimization features.

1. iSIGHT: This software helps engineers manage the computer software required to execute simulation-based design processes, including commercial computer aided design/analysis software and Excel spreadsheets. iSIGHT enables the rapid integration of these programs and automates their execution to accelerate the evaluation of several design alternatives. iSIGHT has multiobjective optimization capabilities, and contains state-of-the-art multiobjective genetic algorithm routines. A toolkit is available in iSIGHT for users to integrate their own proprietary or obtained optimization algorithms. Through a standard procedure, these user algorithms can be made available through the iSIGHT optimization interface, including algorithm-specific tuning parameters. Details are available at the following website.

www.simulia.com

2. PHX ModelCenter: Phoenix Integration has developed the PHX ModelCenter, which can be used for process integration and design optimization. PHX ModelCenter is an environment for process integration to support the design team. It also enables visualizing multidimensional design spaces to seek the best designs. PHX ModelCenter allows the user to construct the design process as a series of linked applications with a simple interface. Once completed, the user will have organized analysis models that facilitate optimization and rapid design space exploration. Further details are available at the following website.

www.phoenix-int.com

3. LMS Samtech Boss Quattro: This is an applications manager that works in a multidisciplinary environment and allows the user to explore the design space using optimization. It allows interfacing with major computer aided design and analysis software, with in-house codes, and with management of data sheets such as MS-EXCEL. Boss Quattro is one of the software packages provided by SIEMENS. SIEMENS provides various software solutions that can be adapted to the applications at hand and to the industrial environment. Details are available at the following website.

www.plm.automation.siemens.com

4. modeFRONTIER: modeFRONTIER is a user-friendly and full-featured design environment that facilitates easy integration of several computer aided engineering (CAE) tools, whether commercial or in-house. It has the capability to integrate finite element structural analysis and computational fluid dynamics software. It also provides multiple optimization features, which include gradient based optimization, genetic algorithms, and multiobjective optimization. The modeFRONTIER software can be viewed as a wrapper around the CAE tool that performs optimization by modifying the input variables and monitoring the output variables. Details can be found at the following website.

www.esteco.com

Next, we discuss the third category of software that provides optimization features.

5.5.4 Software for Optimization Within Analysis Package (SO-WAP)

The following is a representative list of various analysis codes from different disciplines that have optimization routines.

1. Altair OptiStruct: Altair OptiStruct is a commercially available finite element-based software program for both structural analysis and design optimization. Altair OptiStruct is used to design, evaluate, and improve the performance of mechanical structures. Users can define a target and constraints such as maximum allowable deflection and stress, modal response, member sizing, and method of manufacture. Altair OptiStruct then solves and visualizes the structurally optimal and manufacturable design proposal based on the specifications. Details are available at the following website.

www.altairhyperworks.com

2. NASTRAN: MSC NASTRAN allows the user to perform the optimization of designs where the objectives/constraints can be responses from the finite element analysis, such as weight, stress, and displacement. Large scale optimization with hundreds of design variables and millions of responses can be handled in NASTRAN. The optimization module can also handle numerical issues, such as design variable linking (see Sec. 15.3.1). Advanced capabilities, such as approximation methods and robust optimization, are also available. For further details, see www.mscsoftware.com.

3. ANSYS: ANSYS offers design optimization features in its ANSYS Mechanical suite of products. ANSYS has integrated optimization tools, such as topological optimization and probabilistic design. The DesignXplorer software, a subset of ANSYS, allows users to study and quantify various structural and thermal analysis responses on parts and assemblies, and to subsequently perform optimization. It also offers design for six sigma capabilities. More details are available at the following website.

www.ansys.com

4. ABAQUS: ABAQUS provides optimization capabilities through relationships with independent software vendors; the disciplinary solution is directly provided by the respective software vendor. More details are available at the following website.

www.simulia.com

5. COMSOL: The optimization module in COMSOL provides optimization codes that are suitable for computationally intensive finite element analyses and multiphysics problems. The disciplines of the problems range from traditional engineering disciplines, such as structural mechanics and chemical engineering, to emerging technologies, such as bioengineering. The user can input objectives and constraints, which could be simple algebraic expressions, or an analysis model of any physical phenomena. The optimization module has solvers for linear and nonlinear constrained problems. Details are available at the following website.

www.comsol.com

6. Microsoft Excel Add-ins for optimization: Microsoft Excel has an optimization solver that can be included as an add-in [21]. The hybrid evolutionary solver, which is a combination of genetic algorithms and classical nonlinear optimization methods, enables the user to optimize models with any Excel functions. Details are available at the website: www.solver.com. In addition to the above solver, additional add-ins are available from various companies. A representative list follows.

a) What'sBest!: This add-in from LINDO Systems lets the user solve large scale optimization models by using Microsoft Excel. It provides solvers for linear, nonlinear, integer, and global optimization. It is claimed to be fast, reliable, and easy to use. Details are available at the following website.

www.lindo.com

b) RISKOptimizer: RISKOptimizer is an optimization add-in for Microsoft Excel. RISKOptimizer allows the optimization of Excel spreadsheet models that contain uncertain values. RISKOptimizer is claimed to find solutions quickly and is easy to use. Details are in [22].

c) Business Spreadsheets: Business Spreadsheets (formerly Excel Business Tools) provides purpose-built Microsoft Excel templates that can be applied to a range of financial analysis and business decision-making scenarios. The Portfolio Optimization template, which is one of the templates of Business Spreadsheets, evaluates the optimal capital weightings for a basket of investments that gives the highest return for the least risk. Details are available at the following website.

www.business-spreadsheets.com

5.6 Summary

The previous chapters provided the prerequisite knowledge for learning the theory of optimization and for implementing optimization through computational tools, followed by a philosophical introduction to the essence and the principal components of design optimization. This chapter built on that foundation by delving directly into the mathematical theory of optimization. More specifically, this chapter provided an introduction to the fundamental concepts of linear and nonlinear programming. An insightful classification of optimization problems was first provided, based on the function (non)linearity, the number of objectives, the presence of constraints, and the nature of the design variables. This was followed by an introduction to the major classes of approaches used to solve such optimization problems; namely, analytical optimization, numerical/algorithmic optimization, experimental optimization, and graphical optimization. The capabilities of MATLAB in solving optimization problems were discussed next, specifically focusing on the fmincon and the linprog functions. The chapter concluded with short descriptions of different currently available optimization-related software, including commercial software, open-source codes, multi-purpose packages, optimization frameworks, and simulation software that includes optimization modules.

5.7 Problems

Warm-up Problems

5.1 Formulate the following problem (by hand) in the generic optimization format shown in Eqs. (5.15) to (5.20) of the textbook.

(5.32)

subject to

(5.33)

(5.34)

(5.35)

(5.36)

(5.37)

(5.38)

5.2 Solve the following constrained optimization problem using MATLAB. Use the following as your starting points: (i) [1;1] and (ii) [1;6].

(5.39)

subject to

(5.40)

(5.41)

Answer the following questions:

(a)

Solve the above problem using MATLAB. Report the optimum value of x1 and x2, and the corresponding minimum value of f(x) for both starting points.

(b)

Does the minimum objective function value change when the starting point is changed?

(c)

What is the effect of the starting point on the optimum value of x2?

(d)

Explain in your own words any interesting features of this problem in view of the above questions.

5.3 Consider the following problem.

(5.42)

subject to

(5.43)

(5.44)

(5.45)

(a)

Solve the above problem using MATLAB. Report the optimum value of x1 and x2, and the corresponding minimum value of f(x).

(b)

Solve the above problem by removing the first constraint 2x1 + x2 ≥ 4. Report the optimum value of x1 and x2, and the corresponding minimum value of f(x).

(c)

Now, solve the same problem by removing all the constraints. Report the optimum value of x1 and x2, and the corresponding minimum value of f(x).

(d)

Create a contour plot similar to Fig. 1.11 (a) showing the contours of the objective function. Show on it the locations of the optima obtained in Parts (a) - (c).

5.4 The following is a linear programming problem.

(5.46)

subject to

(5.47)

(5.48)

(5.49)

(a)

Solve the above linear optimization problem using the linprog command in MATLAB.

(b)

Solve the above problem using the fmincon command in MATLAB. Compare the results with the results obtained in (a).

Intermediate Problems

5.5 Discuss the nature of the solution of a continuous linear programming problem that does not have constraints. Next, discuss the nature of the solution of a discrete linear programming problem that does not have constraints.

5.6 Prepare a one- to two-page review of the application of design optimization in a field of your choice (e.g., the aerospace industry, energy industry, biomedical/biotechnology industry, operations research (OR), or finance). The review should summarize the contribution of design optimization to the concerned field in the last few decades. A brief survey of some key literature and real-life implementations of optimization in that field should also be included. Clearly state all the references that you use in the review.

5.7 The water tank shown in Fig. 5.7 is supported by a vertical column and subjected to a wind load, w [23]. As the engineer, you are required to determine the best values for the thickness, t, and inner diameter, di, of the column in an effort to minimize the total mass of the design (total mass is given as the mass of the column plus the mass of the tank).

Figure 5.7. Structure of Water Tank

The following constants are given: the height of the column, H = 35 m; diameter of the tank, D = 15 m; wind pressure, w = 700 N/m²; gravity, g = 9.81 m/s²; average thickness of the tank wall, tavg = 0.015 m; unit weight of steel, γs = 80 kN/m³; allowable bending stress, σb = 165 MPa; and allowable deflection, Δ = 0.2 m.

You are provided with a simple function wtower.m on the book website (www.cambridge.org/Messac) that describes the loading condition shown in Fig. 5.7. This function allows you to enter values for the inner diameter and thickness (inputs), giving, in return, the corresponding values of the following (outputs):

1. Outer diameter of the column, do,

2. Allowable axial stress, σa (calculated using the critical buckling load with a factor of safety),

3. Surface area of the tank, As,

4. Deflection of the tank, δ,

5. Bending stress, fb, and

6. Axial stress, fa

Figure 5.8 illustrates the function wtower.m.

Figure 5.8. Input and Output for the Function wtower.m

The following constraints should be imposed on the water tank design:

1. do/t ≤ 92,

2. δ ≤ Δ,

3. fa/σa + fb/σb ≤ 1,

4. 0.7 ≤ di ≤ 2.0 m, and

5. 0.01 ≤ t ≤ 0.2 m

You are required to complete the following:

1. Enter the constants given above as a row vector y. That is

y = [H D w g tavg γs σb Δ]

2. Using the function wtower.m, with initial values for the inner diameter and thickness of the column being 1 m and 0.1 m, respectively, determine the optimal values for the design variables and design objective. (Note: Your objective function and constraints should be functions of the design variables, x, and the constants, y.)

3. If you minimize only the mass of the column, will the optimal values for the design variables and design objective change? Discuss the reasons for your results.

5.8 A welded cantilever beam subjected to a tip force is shown in Fig. 5.9. The volume of the weld holding the cantilever should be as small as possible while withstanding the applied tip force. As shown in Fig. 5.9, the weld has two segments, each of length l and height b. The cantilever beam, of length L, is subject to a tip force of F. This force, F, induces shear stresses in the welds, indicated by the components τt and τy. T corresponds to the turning effect (torque) of F.

Figure 5.9. Schematic of Cantilever Beam

1. Minimize the weld volume subject to a maximum shear stress limit in the weld. Find the optimal solution using MATLAB.

2. Make a plot of the design variable space, showing the optimal variable values. On the same plot, draw (i) the objective function contour that passes through the optimal design variables, and (ii) an objective function contour that is not feasible.

3. Indicate on your plot: the constraint functions, and the direction in which the objective function contours worsen.

Given the parameter values:

The expression for shear stress is given as:

BIBLIOGRAPHY OF CHAPTER 5

[1] H. W. Kuhn. Nonlinear Programming: A Historical View. Springer, 2014.

[2] L. T. Biegler. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes. SIAM, 2010.

[3] R. Bronson, G. B. Costa, and J. T. Saccoman. Linear Algebra: Algorithms, Applications, and Techniques. Elsevier Science, 2013.

[4] D. G. Luenberger and Y. Ye. Linear and Nonlinear Programming. Springer, 2008.

[5] J. K. Karlof. Integer Programming: Theory and Practice. CRC Press, 2005.

[6] D. S. Chen, R. G. Batson, and Y. Dang. Applied Integer Programming: Modeling and Solution. John Wiley and Sons, 2010.

[7] J. Hromkovic. Algorithmics for Hard Problems: Introduction to Combinatorial Optimization, Randomization, Approximation, and Heuristics. Springer, 2010.

[8] P. M. Pardalos, Q. P. Zheng, and A. Arulselvan. Deterministic global optimization. Wiley Encyclopedia of Operations Research and Management Science, 2010.

[9] J. J. Moré and S. J. Wright. Optimization Software Guide. Society for Industrial and Applied Mathematics, Philadelphia, 1993.

[10] M. C. Ferris, M. P. Mesnier, and J. J. Moré. NEOS and CONDOR: Solving optimization problems over the internet. ACM Transactions on Mathematical Software, 26(1):1-18, 2000.

[11] G. N. Vanderplaats. Multidiscipline Design Optimization. Vanderplaats Research and Development, Inc., 2007. www.vrand.com/BookOnOptimization.html.

[12] NAG Ltd. NAG C library manual. www.nag.com/numeric/CL/CLdescription.asp.

[13] NAG Ltd. NAG Fortran library manual. www.nag.com/numeric/fl/fldocumentation.asp.

[14] L. S. Lasdon, A. D. Waren, A. Jain, and M. Ratner. Design and testing of a generalized reduced gradient code for nonlinear programming. ACM Transactions on Mathematical Software, 4:34-50, 1978.

[15] S. Smith and L. S. Lasdon. Solving large sparse nonlinear programs using GRG. ORSA Journal on Computing, 4:1-15, 1992.

[16] S. G. Nash and A. Sofer. BTN: Software for parallel and unconstrained optimization. ACM Transactions on Mathematical Software, 18:414-448, 1992.

[17] K. Miettinen and M. M. Mäkelä. Synchronous approach in interactive multiobjective optimization. European Journal of Operational Research, 170(3):909-922, 2006.

[18] A. Messac. Physical Programming: Effective optimization for computational design. AIAA Journal, 34(1):149-158, 1996.

[19] A. Messac, S. Gupta, and B. Akbulut. Linear Physical Programming: A new approach to multiple objective optimization. Transactions on Operational Research, 8:39-59, 1996.

[20] PhysPro. www.physpro.com.

[21] D. Fylstra, L. S. Lasdon, J. Watson, and A. D. Waren. Design and use of the Microsoft Excel Solver. Interfaces, 28(5):29-55, 1998.

[22] W. L. Winston. Decision Making Under Uncertainty with RISKOptimizer: A Step-by-Step Guide Using Palisade's RISKOptimizer for Excel. Palisade Corporation, 1999.

[23] J. Arora. Introduction to Optimum Design. Academic Press, 3rd edition, 2011.