Optimization in Practice with MATLAB for Engineering Students and Professionals (2015)
PART 4
GOING DEEPER: INSIDE THE CODES AND THEORETICAL ASPECTS
13
Nonlinear Programming with Constraints
13.1 Overview
Practical engineering optimization problems often involve constraints. Since nonlinearities are pervasive in practical constraints and objective functions, it is necessary to explicitly study the methods used to solve nonlinear optimization problems with constraints [1, 2, 3].
In the previous chapter (Chapter 12), the methods used to solve nonlinear programming problems without constraints were discussed. The present chapter presents a representative set of methods used to solve nonlinear programming problems with constraints. These include: the elimination method, the penalty method, the Karush-Kuhn-Tucker (KKT) conditions that define optimality, sequential linear programming, and sequential quadratic programming.
13.2 Structure of Constrained Optimization
The general formulation of optimization problems has been defined as
(13.1) min f(x)
subject to
(13.2) g(x) ≤ 0

(13.3) h(x) = 0

(13.4) x_{l} ≤ x ≤ x_{u}
The function f(x) represents the objective function. The constraints include the inequality constraint function g(x), the equality constraint function h(x), and the side constraints. An optimization problem is classified as a constrained nonlinear programming problem when (i) it involves at least one constraint, and (ii) at least one of its functions (the objective function or a constraint function) is nonlinear. An example of a constrained nonlinear optimization problem follows.
Example:
(13.5) 
subject to
(13.6) 

(13.7) 
This example involves one nonlinear function: the inequality constraint g(x). The feasible region is depicted in Fig. 13.1. The inequality constraint, g(x), constrains the feasible region inside the circle. The equality constraint function, h(x), constrains the feasible region to the single straight line segment inside the circle. Therefore, the feasible region is the line segment between the two points, a and b, as given by the dashed line in the figure.
Figure 13.1. An Example of a Constrained Nonlinear Optimization Problem
This optimization problem can be solved using graphical approaches. The location of the minimum is point b. (In this chapter, more advanced numerical methods are introduced to solve constrained nonlinear optimization problems.)
Note that it is acceptable to include the side constraints (Eq. 13.4) in the set of behavioral inequality constraints (Eq. 13.2), so that no side constraints explicitly appear in the formulation. This inclusion of side constraints into the behavioral constraints is done in Sec. 13.5 in the development of the Karush-Kuhn-Tucker conditions. Note that, in software applications, this inclusion can have negative computational consequences. This is because optimization software treats these constraints differently for computational efficiency purposes. We combine them here only for theoretical convenience. In this case, the problem formulation takes the form
(13.8) min f(x)
subject to
(13.9) g(x) ≤ 0

(13.10) h(x) = 0
13.3 Elimination Method
If an optimization problem only has equality constraints, it can be solved as an unconstrained problem by eliminating variables. This class of problems is defined as
(13.11) min f(x), x = [x_{1}, x_{2}, ..., x_{N}]^{T}
subject to
(13.12) h_{j}(x) = 0, j = 1, ..., n
In the above problem definition, N represents the number of design variables, and n represents the number of equality constraints. Suppose N > n and the n equality constraints are mutually independent. In this case, the equality constraints can be solved, and the resulting expressions for n of the variables can be substituted into the objective function. This process reduces the number of variables from N to N − n. In addition, the constrained optimization problem becomes unconstrained. Methods used to solve unconstrained optimization problems can then be used to search for the optimum solution. The following example illustrates this elimination method for solving constrained nonlinear programming problems involving only equality constraints.
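To make the procedure concrete, the elimination method can be sketched numerically. The problem below (minimize x_{1}^{2} + 2x_{2}^{2} subject to x_{1} + x_{2} = 1) is a hypothetical example chosen for illustration, not one from this chapter, and Python with SciPy is used here in place of the book's MATLAB:

```python
from scipy.optimize import minimize_scalar

# Hypothetical problem: minimize f(x1, x2) = x1^2 + 2*x2^2
# subject to h(x) = x1 + x2 - 1 = 0.
# Eliminate x2 using the equality constraint: x2 = 1 - x1.
def f_reduced(x1):
    x2 = 1.0 - x1                    # substitution from h(x) = 0
    return x1**2 + 2.0 * x2**2       # objective in the remaining variable

res = minimize_scalar(f_reduced)     # unconstrained 1-D minimization
x1_star = res.x
x2_star = 1.0 - x1_star              # recover the eliminated variable
```

Here N = 2 and n = 1, so the reduced problem has a single variable; the analytical solution of this sketch is x_{1}^{*} = 2∕3, x_{2}^{*} = 1∕3.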
Example: Use the elimination method to solve the following optimization problem with equality constraints.
(13.13) 
subject to
(13.14) 

(13.15) 
The equality constraints can be simplified to
(13.16) 
(13.17) 
Substituting the above expressions for x_{2} and x_{3} into the objective function of the constrained optimization problem yields the following unconstrained problem:
(13.18) 
Solving the above unconstrained optimization problem, the minimum point is found to be x^{*} = 0.1667, and the minimum value of the objective function is −3.1667.
13.4 Penalty Methods
Penalty methods replace the original optimization problem with a sequence of subproblems in which a functional form of the constraints is added to the objective function. The idea of this approach is that, in the limit, the solutions of the subproblems will converge to the solution of the original constrained problem. Penalty methods are classified into two groups, depending on how they treat the inequality constraints: (i) interior point methods, which generate a sequence of feasible points; and (ii) exterior point methods, which generate a sequence of infeasible points.
Penalty methods reformulate the constrained optimization problem into an unconstrained optimization problem. The unconstrained problem has two parts: the objective function of the original problem, and a penalty term. The unconstrained problem is expressed as
(13.19) P(x,R) = f(x) + Ω(R, g(x), h(x))
In Eq. 13.19, Ω(R, g(x), h(x)) is the penalty term. It is a function of R and the constraints. In this function, R is called the penalty parameter; it may consist of more than one value. The penalty parameter and the constraint functions can take on different forms, and the penalty parameter can be updated using different rules.
If the stationary point of P(x,R) is infeasible, an exterior point method is being used. The updated parameter, R, forces the stationary point to move closer to the feasible region. In contrast, if the form of Ω forces the stationary point of the unconstrained function P(x,R) to be feasible, an interior point method is being used. The interior point method is also called the barrier method, as the penalty term forms a barrier along the boundaries of the feasible region.
There are different choices of penalty forms. A commonly used penalty form for equality constraints is the parabolic penalty, which is given by
(13.20) Ω = R[h(x)]^{2}
The parabolic penalty function is plotted in Fig. 13.2. The parabolic penalty term is 0 only at the point where h(x) = 0. As h(x) moves farther away from 0, the value of Ω becomes larger. Thus, the parabolic penalty term discourages both positive and negative violations of h(x).
Figure 13.2. Parabolic Penalty
Another useful penalty form is the logarithmic function. The logarithmic penalty form for inequality constraints is expressed as
(13.21) Ω = −R Σ_{i} ln(−g_{i}(x))
The logarithmic penalty form is usually used for interior point methods. The initial point is in the feasible region. As the point approaches the boundary of the feasible region and −g_{i}(x) tends to 0, the penalty term, Ω, approaches positive infinity, which keeps the iterate from crossing the feasibility boundary. As the penalty parameter R approaches 0, the function P(x,R) approaches the original objective function f(x).
Another useful penalty function is the Inverse Penalty, expressed as
(13.22) Ω = −R Σ_{i} 1∕g_{i}(x)
In principle, if one is interested in a penalty function that keeps an objective function within two boundaries, a potential penalty function of the form illustrated in Fig. 13.3 could be devised.
Figure 13.3. Generic Penalty Function for DoubleSided Boundaries
It is important to note that some of these penalty functions are fraught with complications that must be attended to. In the case of the logarithmic function, if the argument unintentionally becomes negative, a recovery plan must be devised. In the case of the inverse penalty, the situation similarly becomes problematic when the argument comes too close to zero. The published literature offers effective recovery mechanisms (see Ref. [4]).
The exterior point method and the interior point method are illustrated using the following two examples.
Example: Solve the following optimization problem with an equality constraint using the parabolic penalty form.
(13.23) 
subject to
(13.24) 

The penalty function is formed using the parabolic penalty term.
(13.25) 
The stationary point of the unconstrained function P(x,R) satisfies the following first-order conditions.
(13.26) 
(13.27) 
Then, the coordinates of the stationary point are given by
(13.28) 
As the value of R approaches positive infinity, the values of x_{1} and x_{2} become
(13.29) 
It can be verified that the minimum point, [2, 2]^{T}, satisfies the equality constraint.
The numerical implementation of the penalty method begins with a small value of R. As R increases, the stationary point of the unconstrained optimization function approaches the actual optimal solution of the constrained problem. Table 13.1 provides the optimal results of the unconstrained optimization problem as R increases. In other implementations of the penalty method, R may start from a very large value and be decreased to zero in order to reach the constrained optimum.
Table 13.1. Optimal Results for Different R Values

R          x_{1}^{*}   x_{2}^{*}   f^{*}     h(x)
10         2.0476      2.0476      1.9048    0.0952
100        2.0050      2.0050      1.9900    0.0100
1,000      2.0005      2.0005      1.9990    0.0010
10,000     2.0000      2.0000      1.9999    0.0000
100,000    2.0000      2.0000      2.0000    0.0000

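The exterior point iteration above can be sketched in code. The problem below (minimize (x_{1} − 1)^{2} + (x_{2} − 2)^{2} subject to x_{1} + x_{2} − 1 = 0, with constrained optimum at (0, 1)) is a hypothetical example, and Python with SciPy stands in for the book's MATLAB:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical problem: minimize f = (x1 - 1)^2 + (x2 - 2)^2
# subject to h(x) = x1 + x2 - 1 = 0 (constrained optimum at (0, 1)).
def penalized(x, R):
    f = (x[0] - 1.0)**2 + (x[1] - 2.0)**2
    h = x[0] + x[1] - 1.0
    return f + R * h**2              # parabolic (exterior) penalty term

x = np.array([0.0, 0.0])
for R in [1.0, 10.0, 100.0, 1e3, 1e4]:
    # warm-start each subproblem from the previous stationary point
    x = minimize(penalized, x, args=(R,)).x
```

Each intermediate point violates h(x) = 0 slightly from outside the feasible region, and the violation shrinks as R grows, mirroring the behavior shown in Table 13.1.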
Example: Solve the constrained optimization problem with inequality constraints using the logarithmic penalty form.
(13.30) 

(13.31) 

(13.32) 
The penalty function is formed using the logarithmic penalty term,
(13.33) 
The stationary point of P(x,R) should satisfy the following first-order conditions.
(13.34) 
(13.35) 
Then, the coordinates of the stationary point are given by
(13.36) 
(13.37) 
As the value of R approaches 0, the limits of x_{1} and x_{2} become
(13.38) 
(13.39) 
It can be verified that the minimum point, (1, 0), satisfies the inequality constraints.
The numerical implementation of this penalty method begins with a sufficiently large positive value of R. As R approaches zero, the stationary point of the unconstrained optimization function approaches the actual optimal solution of the constrained problem. Table 13.2 provides the optimal results of the unconstrained optimization problem as R approaches zero.
Table 13.2. Optimal Results for Different R Values

R          x_{1}^{*}   x_{2}^{*}   f^{*}    g_{1}(x)   g_{2}(x)
1          1.366       1.866       4.134    1.000      1.366
0.1        1.048       0.198       3.102    0.100      1.048
0.01       1.005       0.020       3.010    0.010      1.005
0.001      1.000       0.002       3.001    0.001      1.000
0.0001     1.000       0.000       3.000    0.000      1.000

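A minimal interior point (barrier) iteration, in the spirit of the example above, can be sketched as follows. The problem (minimize x_{1}^{2} + x_{2} subject to g(x) = 1 − x_{2} ≤ 0, with optimum at (0, 1)) is hypothetical, and Python with SciPy is used for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical problem: minimize f = x1^2 + x2
# subject to g(x) = 1 - x2 <= 0 (constrained optimum at (0, 1)).
def barrier(x, R):
    g = 1.0 - x[1]
    if g >= 0.0:                     # infeasible point: reject it
        return np.inf
    return x[0]**2 + x[1] - R * np.log(-g)   # logarithmic barrier term

x = np.array([0.5, 3.0])             # strictly feasible starting point
for R in [1.0, 0.1, 0.01, 1e-3, 1e-4]:
    # each subproblem stays inside the feasible region as R -> 0
    x = minimize(barrier, x, args=(R,), method='Nelder-Mead').x
```

For this sketch the stationary point of each subproblem is x_{2} = 1 + R, so the iterates approach the constraint boundary from inside as R → 0, mirroring the trend in Table 13.2.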
13.5 Karush-Kuhn-Tucker Conditions
For constrained nonlinear optimization, the Lagrangian function and the Lagrange multiplier are used to provide a strategy for finding and validating the optimum of a function subject to constraints. Consider the minimization of a nonlinear optimization problem subject to equality constraints.
(13.40) min f(x)
subject to
(13.41) h_{j}(x) = 0, j = 1, ..., n
The Lagrangian function converts the constrained problem into an unconstrained problem, as given by
(13.42) L(x,ν) = f(x) + Σ_{j=1}^{n} ν_{j} h_{j}(x)
The unspecified constants, ν_{j}, are the Lagrange multipliers. There are no sign restrictions on the values of ν_{j}. Suppose the minimum of the unconstrained problem L(x,ν) is x^{*}, and x^{*} satisfies h_{j}(x^{*}) = 0. Over all values of x that satisfy h_{j}(x) = 0, the minimum of L(x,ν) coincides with the minimum of f(x) subject to h_{j}(x) = 0. Then x^{*} minimizes the constrained optimization problem.
The following example shows us how to solve an optimization problem with equality constraints using the Lagrangian function.
Example: Minimize the following two-variable function with a single equality constraint.
(13.43) 
subject to
(13.44) 

The Lagrangian function of the problem is given by
(13.45) 
The gradient of L(x,ν) with respect to x is equal to zero at the minimum.
(13.46) 
(13.47) 
The Hessian matrix of L(x,ν) with respect to x is positive definite, implying that the optimal point is a minimum of the objective function.
From the above two gradient equations and the equality constraint function, the minimum point is found to be x_{1}^{*} = 0.8 and x_{2}^{*} = 1.6. The Lagrange multiplier is ν = −1.6. The minimum value of the objective function is 8.2.
The steps to solve constrained optimization problems with equality constraints using the Lagrangian function are:
1. Construct the Lagrangian function L(x,ν) using the objective function and the equality constraints.
2. Solve ∇_{x}L(x,ν) = 0 and h(x) = 0.
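For a quadratic objective with linear equality constraints, these two steps reduce to a linear system. The problem below (minimize x_{1}^{2} + x_{2}^{2} subject to x_{1} + x_{2} − 2 = 0) is a hypothetical illustration, written in Python rather than the book's MATLAB:

```python
import numpy as np

# Hypothetical problem: minimize f = x1^2 + x2^2
# subject to h(x) = x1 + x2 - 2 = 0.
# Step 1: L(x, v) = x1^2 + x2^2 + v*(x1 + x2 - 2)
# Step 2: grad_x L = 0 together with h = 0 is linear in (x1, x2, v):
#     2*x1      + v = 0
#          2*x2 + v = 0
#     x1 + x2       = 2
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 2.0])
x1, x2, v = np.linalg.solve(A, b)
```

The solution is x_{1} = x_{2} = 1 with ν = −2; since there are no sign restrictions on ν, the negative multiplier is acceptable.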
The same method can be used to solve optimization problems with inequality constraints. The Lagrangian function in that case is constructed as
(13.48) L(x,λ) = f(x) + Σ_{i=1}^{m} λ_{i} g_{i}(x)
However, there are sign restrictions on the Lagrange multipliers for inequality constraints: they should satisfy λ_{i} ≥ 0. Additionally, the following conditions should also be satisfied at the minimum.
(13.49) λ_{i} g_{i}(x) = 0
(13.50) g_{i}(x) ≤ 0
A new concept known as the active set should be introduced before the discussion of this method. The active set at any feasible point x comprises the equality constraints and the inequality constraints for which g_{i}(x) = 0.
An inequality constraint, at the minimum point x^{*} in the feasible region, can be classified into one of the following two cases.
Case I: The inequality constraint is inactive at the minimum point x^{*}, which implies g_{i}(x^{*}) < 0. Since the minimum point x^{*} satisfies λ_{i}^{*}g_{i}(x^{*}) = 0, the Lagrange multiplier is λ_{i}^{*} = 0. Then L(x^{*},λ) = f(x^{*}).
Case II: The inequality constraint is active at the minimum point x^{*}, which implies g_{i}(x^{*}) = 0. Then L(x^{*},λ) = f(x^{*}).
At the minimum point x^{*}, ∇_{x}L(x^{*},λ) = 0.
The following example explains the above process.
Example: Minimize the following two-variable function with a single inequality constraint.
(13.51) min f(x) = x_{1} + x_{2}
subject to
(13.52) g(x) = x_{1}^{2} + x_{2}^{2} − 1 ≤ 0

The Lagrangian function is given by
(13.53) L(x,λ) = x_{1} + x_{2} + λ(x_{1}^{2} + x_{2}^{2} − 1)
The gradient of L(x,λ) satisfies the following two equations:
(13.54) 1 + 2λx_{1} = 0
(13.55) 1 + 2λx_{2} = 0
The following conditions are also satisfied:
(13.56) λ(x_{1}^{2} + x_{2}^{2} − 1) = 0
(13.57) x_{1}^{2} + x_{2}^{2} − 1 ≤ 0
(13.58) λ ≥ 0
Since λ = 0 does not satisfy ∇_{x}L = 0, λ is greater than zero. From Eq. 13.56, it follows that (x_{1}^{2} + x_{2}^{2} − 1) = 0.
From Eqs. 13.54 and 13.55, it follows that x_{1} = x_{2}.
Since λ > 0, it follows from Eqs. 13.54 and 13.55 that x_{1} < 0 and x_{2} < 0.
Solving (x_{1}^{2} + x_{2}^{2} − 1) = 0, the minimum point is found to be located at (−√2∕2, −√2∕2). The optimal value of the objective function is −√2.
The Lagrangian function for an optimization problem with multiple constraints is expressed as
(13.59) L(x,λ,ν) = f(x) + Σ_{i=1}^{m} λ_{i} g_{i}(x) + Σ_{j=1}^{n} ν_{j} h_{j}(x)
At the feasible point x, if the gradients of the constraints in the active set are linearly independent, the Linear Independence Constraint Qualification (LICQ) holds.
Suppose there are two inequality constraints as given by
(13.60) 
(13.61) 
In that case, the only feasible point is (1, 0), as shown by Fig. 13.4. At (1, 0), the gradients of the two constraints are given by
(13.62) 
(13.63) 
Equations 13.62 and 13.63 are not linearly independent. Therefore, the LICQ does not hold at that point. When the LICQ holds, the Karush-Kuhn-Tucker (KKT) conditions are necessary for a solution to be a local minimum.
Figure 13.4. The LICQ Does Not Hold
Theorem (Karush-Kuhn-Tucker Conditions)
Suppose that x^{*} is a local minimum of f(x) subject to the constraints g_{i}(x) ≤ 0 (i = 1, ..., m) and h_{j}(x) = 0 (j = 1, ..., n). The objective function, f(x), and the constraints, g_{i}(x) and h_{j}(x), are continuously differentiable, and the LICQ holds at x^{*}. Then there are Lagrange multipliers λ^{*} and ν^{*} such that the following conditions are satisfied at (x^{*},λ^{*},ν^{*}).
(13.64) ∇f(x^{*}) + Σ_{i=1}^{m} λ_{i}^{*}∇g_{i}(x^{*}) + Σ_{j=1}^{n} ν_{j}^{*}∇h_{j}(x^{*}) = 0

(13.65) g_{i}(x^{*}) ≤ 0, i = 1, ..., m

(13.66) h_{j}(x^{*}) = 0, j = 1, ..., n

(13.67) λ_{i}^{*} ≥ 0, i = 1, ..., m

(13.68) λ_{i}^{*} g_{i}(x^{*}) = 0, i = 1, ..., m

(13.69) ν_{j}^{*} unrestricted in sign, j = 1, ..., n
The following example is used to illustrate how to use the KKT conditions to solve an optimization problem.
Example: Minimize the following two-variable function with two inequality constraints.
(13.70) 
subject to
(13.71) 

(13.72) 
This optimization problem can be solved using the graphical approach. The feasible region and the optimal point are plotted in Fig. 13.5.
Figure 13.5. Using the KKT Conditions to Solve an Optimization Problem
The Lagrangian function is formulated as
(13.73) 
This problem can be solved using the KKT conditions as follows.
(13.74) 
(13.75) 
(13.76) 
(13.77) 
(13.78) 
(13.79) 
(13.80) 
(13.81) 
Based on Eqs. 13.74 through 13.81, the four possible cases for different values of λ_{1} and λ_{2} are discussed below.
Case I: λ_{1} = 0 and λ_{2} = 0.
If λ_{1} = 0 and λ_{2} = 0, Eqs. 13.74 and 13.75 cannot be satisfied.
Case II: λ_{1} > 0 and λ_{2} = 0.
From Eqs. 13.74 and 13.75, it is found that x_{1} = x_{2}, and they should be negative.
From Eq. 13.78, x_{1} = x_{2} = −. However, they do not satisfy Eq. 13.81.
Case III: λ_{1} = 0 and λ_{2} > 0.
Equation 13.75 is not satisfied.
Case IV: λ_{1} > 0 and λ_{2} > 0.
From Eq. 13.79, it is found that x_{1} = −. From Eq. 13.75, x_{2} < 0. From Eq. 13.78, x_{2} = −. All the other equations are satisfied. The values of λ_{1} and λ_{2} are and (1 −).
Therefore, the minimum point is found to be (x_{1},x_{2}) = (−,−), using the KKT conditions.
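The KKT conditions can also be checked numerically at a candidate point. The problem below (minimize (x_{1} − 2)^{2} + (x_{2} − 2)^{2} subject to g(x) = x_{1} + x_{2} − 2 ≤ 0, with candidate x^{*} = (1, 1) and λ^{*} = 2) is a hypothetical Python example:

```python
import numpy as np

# Hypothetical problem: minimize f = (x1 - 2)^2 + (x2 - 2)^2
# subject to g(x) = x1 + x2 - 2 <= 0. Candidate: x* = (1, 1), lam = 2.
x = np.array([1.0, 1.0])
lam = 2.0

grad_f = 2.0 * (x - 2.0)             # gradient of the objective
grad_g = np.array([1.0, 1.0])        # gradient of the constraint
g = x[0] + x[1] - 2.0

stationarity  = np.allclose(grad_f + lam * grad_g, 0.0)  # grad_x L = 0
feasibility   = g <= 1e-12                               # g(x*) <= 0
dual_sign     = lam >= 0.0                               # lam >= 0
complementary = abs(lam * g) < 1e-12                     # lam * g(x*) = 0
kkt_holds = stationarity and feasibility and dual_sign and complementary
```

All four checks pass at (1, 1): the constraint is active there, so a strictly positive multiplier is consistent with the active-constraint classification (Case II) above.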
13.6 Sequential Linear Programming
In Chapter 11, we determined that certain algorithms can be used to efficiently and reliably solve linear programming problems. We note here that nonlinear optimization problems can be converted to approximate linear optimization problems. These converted problems are only partially equivalent to the original problem, and only in a certain neighborhood of the design space, near the operating point. Subsequently, in the neighborhood of the operating point, the problems can be solved using linear programming techniques.
The sequential linear programming method linearizes the objective function and constraints of an optimization problem, and expresses them as linear functions using Taylor series expansions. A Taylor series expansion at the point, x_{k}, is given by
(13.82) f(x) = f(x_{k}) + ∇f(x_{k})^{T}(x − x_{k}) + O(‖x − x_{k}‖^{2})
The higher order terms, O(‖x − x_{k}‖^{2}), are ignored and only the linear term is retained. The linearized form of f(x) at x_{k} is then given by
(13.83) f(x) ≈ f(x_{k}) + ∇f(x_{k})^{T}(x − x_{k})
The most direct use of sequential linear programming is to replace a nonlinear problem with a complete linearization of the constituent functions at a sequential set of points that are intended to lead to the solution. This method can also be applied to a linearly constrained problem with a nonlinear objective function. A linearly constrained nonlinear programming problem has the following form.
(13.84) min f(x)
subject to
(13.85) Ax ≤ b

(13.86) x ≥ 0
The objective function, f(x), is linearized at a feasible point, x_{k}, and the problem is reformulated as follows.
(13.87) min f(x_{k}) + ∇f(x_{k})^{T}(x − x_{k})
subject to
(13.88) Ax ≤ b

(13.89) x ≥ 0
Assuming the feasible region is bounded, the above problem will possess an optimal solution, x̂_{k}^{*}, at a feasible corner point. The optimal solution, x̂_{k}^{*}, is not guaranteed to be an improvement over the current point, x_{k}. Since the feasible region is a polyhedron and x̂_{k}^{*} is a corner point of the feasible region, any point on the line between x̂_{k}^{*} and x_{k} is feasible. Since the linearized objective satisfies f̃(x̂_{k}^{*}) < f̃(x_{k}) = f(x_{k}), the vector (x̂_{k}^{*} − x_{k}) is a descent direction. A line search along this descent direction can lead to an improvement in f(x). The formulated line search problem is stated as follows:
(13.90) min_{α} f(x_{k} + α(x̂_{k}^{*} − x_{k}))
subject to
(13.91) 0 ≤ α ≤ 1
The line search will find a feasible point, x_{k+1}, that satisfies f(x_{k+1}) < f(x_{k}). The new point, x_{k+1}, is used as the linearization point for the next linear approximation, and a new line search is then performed. After successive iterations of approximations and line searches, the process is expected to converge to the optimal point of the nonlinear problem. For nonlinear programming problems with linear constraints, the solution steps, using sequential linear programming, are:
1. Set an initial guess x_{0}. Specify the convergence tolerance ε > 0.
2. Calculate the gradient of f(x_{k}). If ‖∇f_{k}‖ ≤ ε, stop.
3. Approximate the original objective function at x_{k} using the Taylor expansion.
4. Solve the linearized problem to find the optimal solution, x̂_{k}^{*}.
5. Perform a line search for the original objective function between the points x_{k} and x̂_{k}^{*} to find an improved point x_{k+1}.
6. Set k = k + 1. Go to Step 2.
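The six steps above can be sketched for a hypothetical linearly constrained problem (minimize x_{1}^{2} + x_{2}^{2} subject to x_{1} + x_{2} ≥ 2 and 0 ≤ x_{i} ≤ 3, with optimum at (1, 1)). Python with SciPy is used in place of MATLAB, and a fixed iteration count replaces the gradient-based stopping test of Step 2:

```python
import numpy as np
from scipy.optimize import linprog, minimize_scalar

# Hypothetical problem: minimize f = x1^2 + x2^2 over the polyhedron
# x1 + x2 >= 2, 0 <= xi <= 3 (optimum at (1, 1), f* = 2).
x = np.array([2.0, 1.0])                      # feasible starting point
for k in range(500):
    grad = 2.0 * x                            # Step 3: linearize f at x_k
    # Step 4: LP over the (already linear) feasible region
    lp = linprog(c=grad, A_ub=[[-1.0, -1.0]], b_ub=[-2.0],
                 bounds=[(0.0, 3.0)] * 2)
    d = lp.x - x                              # direction toward the LP corner
    # Step 5: line search on the original f between x_k and the corner
    alpha = minimize_scalar(lambda a: np.sum((x + a * d)**2),
                            bounds=(0.0, 1.0), method='bounded').x
    x = x + alpha * d
```

Because the optimum (1, 1) is not a corner of the polyhedron, the iterates zigzag between LP corners and converge only gradually; this slow convergence near non-vertex solutions is a known limitation of corner-point linearization.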
A general nonlinear optimization problem can involve nonlinear constraints, as defined by
(13.92) min f(x)
subject to
(13.93) g_{i}(x) ≤ 0, i = 1, ..., m

(13.94) h_{j}(x) = 0, j = 1, ..., n

(13.95) x_{l} ≤ x ≤ x_{u}
Using the Taylor expansions for the objective function and the constraints at the point x_{k}, the linearized approximation problem is constructed as
(13.96) min f(x_{k}) + ∇f(x_{k})^{T}(x − x_{k})
subject to
(13.97) g_{i}(x_{k}) + ∇g_{i}(x_{k})^{T}(x − x_{k}) ≤ 0, i = 1, ..., m

(13.98) h_{j}(x_{k}) + ∇h_{j}(x_{k})^{T}(x − x_{k}) = 0, j = 1, ..., n

(13.99) x_{l} ≤ x ≤ x_{u}
Solving the above linear programming problem, a new point, x_{k+1}, is obtained in the feasible region of the linear constraints. A series of points can be generated through iterations. At each iteration, the solution to the previous linear approximate problem is used as the linearization point, and a new linear programming problem is constructed and solved. However, there is no assurance that the solution to the approximate linear programming problem lies within the feasible region of the original problem. In order to attain convergence to the true optimal solution of the nonlinear programming problem, at each iteration, an improvement in both the objective function and the constraint feasibility should be made. One way to achieve a satisfactory approximation of the linearization is to impose limits on the allowable increments in the variables, in order to keep the solution to the linear programming problem within a reasonably small neighborhood of the linearization point. The limits can be stated as
(13.100) −δ ≤ x − x_{k} ≤ δ, δ > 0
The steps to solve a general nonlinear programming problem using the sequential linear programming method are:
1. Set an initial guess x_{0}. Specify the convergence tolerance ε > 0.
2. Calculate the gradient of f(x_{k}). If ‖∇f_{k}‖ ≤ ε, stop.
3. Approximate the original nonlinear functions at x_{k} using the Taylor expansion.
4. Impose the increment limits −δ ≤ x − x_{k} ≤ δ, δ > 0.
5. Solve the linearized problem to find the next point, x_{k+1}.
6. Set k = k + 1. Go to Step 2.
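The move-limit variant can be sketched as follows, again on a hypothetical problem (minimize −2x_{1} − x_{2} subject to g(x) = x_{1}^{2} + x_{2}^{2} − 5 ≤ 0, with optimum at (2, 1)); Python with SciPy replaces MATLAB, and the move limit δ is shrunk geometrically in place of a formal convergence test:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical problem: minimize f = -2*x1 - x2
# subject to g(x) = x1^2 + x2^2 - 5 <= 0 (optimum at (2, 1)).
x = np.array([0.0, 0.0])
delta = 1.0                                   # Step 4: move limit
for k in range(50):
    g = x[0]**2 + x[1]**2 - 5.0
    grad_g = 2.0 * x                          # Step 3: linearize g at x_k
    # Step 5: LP in the increment d = x - x_k:
    #   min -2*d1 - d2  s.t.  grad_g . d <= -g,  |d_i| <= delta
    lp = linprog(c=[-2.0, -1.0],
                 A_ub=[grad_g.tolist()], b_ub=[-g],
                 bounds=[(-delta, delta)] * 2)
    x = x + lp.x
    delta *= 0.7                              # shrink the move limit
```

The shrinking move limit keeps each LP solution close to its linearization point, so the iterates settle onto the curved constraint boundary near (2, 1).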
The optimization problem in the following example is a quadratic problem. It can be solved using the sequential linear programming method without imposing increment limits.
Example: Solve the following minimization problem.
(13.101) 
subject to
(13.102) 

(13.103) 

(13.104) 

(13.105) 
The feasible region lies on the curve h_{1}(x) = 0 between the point (1, 4), determined by the linear bound 1 ≤ x_{1}, and the point (2, 3), determined by the constraint g_{1}(x) ≤ 0. The linearized approximation of the problem is constructed at the point (2, 4) as shown below.
(13.106) 
subject to
(13.107) 

(13.108) 

(13.109) 

(13.110) 
The solution to the approximate optimization problem is found to be (1.8889, 3.2222). At this point, the optimization can be relinearized and solved. After several iterations, the minimum solution to the original problem is found to be (2, 3). The estimated minimum value of the objective function is 2.
13.7 Sequential Quadratic Programming
Sequential quadratic programming (SQP) is a highly effective method to solve constrained optimization problems involving smooth nonlinear functions. This approach solves a series of quadratic subproblems.
A nonlinear optimization problem without inequality constraints is defined as
(13.111) min f(x)
subject to
(13.112) h_{j}(x) = 0, j = 1, ..., n
The Lagrangian function for the above problem is expressed as
(13.113) L(x,ν) = f(x) + Σ_{j=1}^{n} ν_{j} h_{j}(x)
The KKT conditions require that ∇L(x^{*},ν^{*}) = 0 at the optimal point. Applying the Newton method to this system of equations yields the update
(13.114) x_{k+1} = x_{k} + p_{k},  ν_{k+1} = q_{k}
In Equation 13.114, the steps p_{k} and q_{k} are obtained as the solution to the following linear system.
(13.115) [∇_{xx}^{2}L_{k}  ∇h_{k}; ∇h_{k}^{T}  0][p_{k}; q_{k}] = −[∇f_{k}; h_{k}]
Equation 13.115 can also be expressed as
(13.116) ∇_{xx}^{2}L_{k} p_{k} + ∇h_{k} q_{k} = −∇f_{k},  with  ∇h_{k}^{T} p_{k} = −h_{k}
Equation 13.116 represents the first-order optimality conditions for the following optimization problem.
(13.117) min_{p} (1∕2) p^{T}∇_{xx}^{2}L_{k} p + ∇f_{k}^{T} p
subject to
(13.118) h_{k} + ∇h_{k}^{T} p = 0
In Equation 13.114, q_{k} represents the Lagrange multiplier, ν_{k}, in the above formulation.
At each iteration, a quadratic problem is solved to obtain [p_{k},q_{k}]^{T}. These values are used to update [x_{k},ν_{k}] using Eq. 13.114.
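These iterations can be sketched numerically. The problem below (minimize x_{1}^{2} + x_{2}^{2} subject to h(x) = x_{1}^{2} + x_{2} − 1 = 0) is hypothetical, and the linear system follows the Newton-KKT structure described above; Python with NumPy is used for illustration:

```python
import numpy as np

# Hypothetical problem: minimize f = x1^2 + x2^2
# subject to h(x) = x1^2 + x2 - 1 = 0 (a minimum lies at (1/sqrt(2), 1/2)).
x = np.array([1.0, 0.0])
v = 0.0                                       # Lagrange multiplier estimate
for k in range(20):
    grad_f = 2.0 * x
    grad_h = np.array([2.0 * x[0], 1.0])
    h = x[0]**2 + x[1] - 1.0
    W = np.array([[2.0 + 2.0 * v, 0.0],       # Hessian of the Lagrangian
                  [0.0,           2.0]])
    # KKT system:  [W  grad_h; grad_h^T  0] [p; q] = -[grad_f; h]
    K = np.block([[W, grad_h.reshape(2, 1)],
                  [grad_h.reshape(1, 2), np.zeros((1, 1))]])
    sol = np.linalg.solve(K, -np.concatenate([grad_f, [h]]))
    x = x + sol[:2]                           # x_{k+1} = x_k + p_k
    v = sol[2]                                # nu_{k+1} = q_k
```

For this sketch the iterates converge rapidly to x^{*} = (1∕√2, 1∕2) with ν^{*} = −1.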
A nonlinear optimization problem with both equality and inequality constraints is defined as
(13.119) min f(x)
subject to
(13.120) h_{j}(x) = 0, j = 1, ..., n

(13.121) g_{i}(x) ≤ 0, i = 1, ..., m
The above optimization problem can be reformulated as
(13.122) min_{p} (1∕2) p^{T}∇_{xx}^{2}L_{k} p + ∇f_{k}^{T} p
subject to
(13.123) h_{j}(x_{k}) + ∇h_{j}(x_{k})^{T} p = 0, j = 1, ..., n

(13.124) g_{i}(x_{k}) + ∇g_{i}(x_{k})^{T} p ≤ 0, i = 1, ..., m
Example: Apply the SQP method to solve the following problem involving an equality constraint.
(13.125) 
subject to
(13.126) 

Considering the initial values of x and ν as x_{0} = [1,−1]^{T} and ν_{0} = 1, we get
(13.127) 
(13.128) 
(13.129) 
(13.130) 
(13.131) 
(13.132) 
(13.133) 
Using the first-order optimality conditions, the solution [p_{0},q_{0}]^{T} can be obtained from the following equation.
(13.134) 
From Eq. 13.134, the values of steps p_{0} and q_{0} are given by
(13.135) 
(13.136) 
The values of x and ν can then be updated for the next iteration as
(13.137) 
(13.138) 
The above equations represent the first iteration. In subsequent iterations, the values of x and ν are updated by the same procedure. The minimum point for this problem is found to be [0.6633,−0.7483]^{T}. The minimum function value is 0.1764.
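In practice, SQP is usually accessed through a library implementation. The snippet below solves a hypothetical problem (minimize (x_{1} − 1)^{2} + (x_{2} − 2.5)^{2} with one linear inequality and nonnegativity bounds, optimum at (1.4, 1.7)) using SciPy's SLSQP solver; in the book's MATLAB setting, fmincon with the 'sqp' algorithm plays the same role:

```python
from scipy.optimize import minimize

# Hypothetical problem: minimize f = (x1 - 1)^2 + (x2 - 2.5)^2
# subject to x1 - 2*x2 + 2 >= 0 (SciPy 'ineq' constraints use g >= 0)
# and bounds x1 >= 0, x2 >= 0. Optimum at (1.4, 1.7).
res = minimize(lambda x: (x[0] - 1.0)**2 + (x[1] - 2.5)**2,
               x0=[2.0, 0.0],
               method='SLSQP',
               constraints=[{'type': 'ineq',
                             'fun': lambda x: x[0] - 2.0 * x[1] + 2.0}],
               bounds=[(0.0, None), (0.0, None)])
```

Note that SciPy's 'ineq' convention is g(x) ≥ 0, the opposite of the g(x) ≤ 0 convention used in this chapter, so constraints must be negated when translating.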
13.8 Comparison of Computational Issues
In this chapter, the methods used to solve nonlinear programming problems with constraints were presented, including the elimination method, penalty methods, the sequential linear programming method, and the sequential quadratic programming method.
The elimination method is limited in its application. It can be useful for solving optimization problems involving only equality constraints. It requires solving systems of equations, making it challenging to convert this method into numerical algorithms. This method is not practical for solving large-scale problems.
Penalty methods solve a sequence of subproblems with updated penalty parameters. These methods may require a large number of iterations. If the constraint values span several orders of magnitude, penalty methods may face scaling issues.
The sequential linear programming method is efficient for optimization problems with mild nonlinearities. This method is not well suited for optimization problems involving highly nonlinear functions.
The sequential quadratic programming method is one of the most effective methods used to solve constrained nonlinear optimization problems. It can be leveraged to solve both small-scale and large-scale problems, as well as problems with significant nonlinearities. This method has been implemented in many optimization solvers and commercial software packages, and has wide applicability.
13.9 Summary
This chapter delved further into the theory and application of nonlinear programming (NLP), describing the characteristics of constrained NLP and introducing the major approaches used for solving constrained NLP problems. The structure of constrained NLP problems was first presented, the general formulation of which includes inequality and equality constraints. Basic methods for solving constrained NLP, such as elimination methods and penalty methods, were then introduced. This was followed by the introduction of the Karush-Kuhn-Tucker conditions, which are of paramount importance in NLP, both as a set of optimality criteria and as a means of solving simpler constrained NLP problems. Advanced methods for solving constrained NLP problems were also presented, specifically sequential linear programming and sequential quadratic programming. The chapter concluded with a comparative discussion of the computational capabilities and limitations of the different classes of constrained NLP methods.
13.10 Problems
Warmup Problems
13.1 Consider the following optimization problem:
(13.139) 
subject to
(13.140) 

1. Find all the points that satisfy the KKT conditions using Lagrange multipliers.
2. Use the variable elimination method to solve the problem (eliminate x_{1}). Compare the solutions obtained using the variable elimination method with those obtained in Question 1.
13.2 Use the Lagrange multiplier method to solve the following problem:
(13.141) 
subject to
(13.142) 

(13.143) 
Intermediate Problems
13.3 Consider the following optimization problem:
(13.144) 

subject to
(13.145) 

(13.146) 

(13.147) 

(13.148) 

(13.149) 

(13.150) 
1. Write down the Karush-Kuhn-Tucker (KKT) necessary conditions for this problem.
2. You are given that constraints g_{1} and g_{4} are active, and the other constraints are inactive. Given this information, simplify the KKT conditions found in Part 1.
3. Find all the possible KKT points using the simplified KKT conditions from Part 2.
13.4 Solve the constrained optimization problem below using the inverse penalty method.
(13.151) 
subject to
(13.152) 

1. Perform four unconstrained optimizations using the following values for the penalty parameter: R = 1, 0.1, 0.01, and 0.001.
2. For each R, prepare a table that provides the value of the penalty parameter, the values of the design variables, the objective function, and the constraint.
3. Can you guess where the constrained minimum might be?
13.5 Consider the following problem:
(13.153) 
subject to
(13.154) 

1. Solve the above optimization problem using the bracket operator penalty method.
2. Check your answer using fminsearch.
3. Plot contours of the objective function, and the path of the intermediate solutions as the optimal point is reached.
13.6 Let f(x) = 2x^{3} − 3x^{2} − 12x + 1. Answer the following questions:
(a) Determine the minimum of f(x) by hand. Report the optimal x and the function value at this point.
(b) This subproblem is a simple demonstration of how constrained optimization is sometimes performed. Let us add another component to the above function. The function now takes the form f(x) = 2x^{3} − 3x^{2} − 12x + 1 + R(x − 6)^{2}, where R is a constant. Plot this new function for R = 0, R = 1, R = 10, R = 100, and R = 1,000 for 0 ≤ x ≤ 8 on the same figure. Use different colors for each R. Keep your vertical axis limits between −100 and 800.
(c) By looking at the plot in Part (b), can you tell what the minimum of the new function is for the different values of R? Indicate these minima on the plot, and compare them with the minimum of the original function in Part (a). What is the minimum of the new function if R = ∞?
(d) Explain how this problem shows you one way to solve constrained optimization problems.
BIBLIOGRAPHY OF CHAPTER 13
[1] J. Johannes. Introduction to the Theory of Nonlinear Optimization. Springer, 3rd edition, 2007.
[2] I. Griva, S. G. Nash, and A. Sofer. Linear and Nonlinear Optimization. SIAM, 2nd edition, 2009.
[3] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty. Nonlinear Programming: Theory and Algorithms. John Wiley and Sons, 3rd edition, 2013.
[4] A. Ravindran, G. V. Reklaitis, and K. M. Ragsdell. Engineering Optimization: Methods and Applications. John Wiley and Sons, 2006.