
PART 3


USING OPTIMIZATION—PRACTICAL ESSENTIALS

9


Discrete Optimization Basics

9.1 Overview

Previous chapters focused on methods for solving optimization problems that involve continuous design variables. Real-life engineering design problems, however, often involve discrete choices. For example, the number of components in a product must be a positive integer. In such cases, the corresponding design variables must be restricted to a set of given discrete values. This type of optimization problem is called a discrete optimization problem.

Discrete optimization is broadly defined in Sec. 9.2. Section 9.3 describes how to use the exhaustive search approach to solve discrete optimization problems. Section 9.4 illustrates the relaxation approach. Section 9.5 introduces some advanced approaches to address discrete problems, which include genetic algorithms, simulated annealing, and the branch and bound method. The chapter concludes with a summary in Sec. 9.6. More advanced approaches for discrete optimization are presented in Chapter 14, entitled Discrete Optimization.

9.2 Defining Discrete Optimization

Discrete optimization can address many real-world problems and has been applied in a wide range of fields, such as industrial operations, transportation, and finance. A generic discrete optimization problem can be defined as

minimize  f(x, y, z)  (9.1)

subject to

g(x, y, z) ≤ 0  (9.2)

h(x, y, z) = 0  (9.3)

x ∈ Zᵐ  (9.4)

y ∈ Rⁿ  (9.5)

z ∈ A  (9.6)

where x, y, and z are the design variable vectors; f(x, y, z) is the objective function; g(x, y, z) and h(x, y, z) are the inequality constraints and equality constraints, respectively; Zᵐ is a set of given feasible integers; Rⁿ is a set of real numbers; and A stands for a combinatorial set resulting from the given feasible discrete values. Depending on the existence of x, y, and z, discrete optimization problems can be classified into the following five categories.

1. Pure integer programming problems: Only x exists. The design variables only take on integer values (see Refs. [1, 2]).

2. Mixed-integer programming problems: Both x and y exist. Some design variables take on integer values, while others are allowed to take on continuous values.

3. Discrete non-integer optimization problems: Only z exists. The design variables are allowed to be selected from a given set of discrete values.

4. Binary programming problems: Only x exists, and it can only take on a value of either 0 or 1. These problems are also called zero-one programming problems.

5. Combinatorial optimization problems: Only z exists. The possible feasible solutions are defined by a combinatorial set resulting from the given feasible discrete values.

For a purely finite discrete optimization problem (no continuous variables), all the feasible design variable combinations are known beforehand. Theoretically, using exhaustive search, all the possible designs can be evaluated to determine the optimum. In practice, this approach (exhaustive search, Sec. 9.3) only works for very small problems; for medium-size and industrial-scale problems, it becomes computationally prohibitive. Consider a binary programming problem that has 100 design variables. The number of its possible solutions is 2¹⁰⁰ ≈ 1.27 × 10³⁰. In Sec. 9.5, evolutionary algorithms and the branch and bound method are introduced. These methods are particularly effective in solving practical discrete optimization problems.

9.3 Exhaustive Search

Exhaustive search is a straightforward approach that can be leveraged to solve small-scale discrete problems. It enumerates all of the feasible candidates; the best solution among them is the optimum.

Exhaustive search is viable when solving optimization problems with a manageable number of combinations. The following example illustrates how to solve a combinatorial optimization problem using exhaustive search. Chapters 8 and 14 provide further pertinent information.

Example: Use exhaustive search to find the optimum of the following combinatorial optimization problem.

(9.7)

subject to

(9.8)

(9.9)

(9.10)

The number of feasible combinations of the three design variables is 3 × 3 × 2 = 18. These 18 combinations and their objective function values are listed in Table 9.1.

Table 9.1. Design Variable Combinations and Their Objective Function Values

Comparing the 18 objective function values in Table 9.1, the minimum of f(x) is found to be 37. The optimal solution is: x1 = 3, x2 = 2, and x3 = 4.
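This enumeration is easily automated. The following MATLAB sketch shows a generic exhaustive-search pattern for a three-variable problem; the candidate sets and the objective function below are hypothetical placeholders (the actual data of this example appears in Table 9.1), chosen only to produce the same 3 × 3 × 2 = 18 combinations.

% Generic exhaustive search over three discrete variables
X1 = [1 2 3]; X2 = [2 3 4]; X3 = [4 5];       % hypothetical candidate sets (3 x 3 x 2)
f  = @(x1, x2, x3) x1.^2 + 2*x2.^2 + x3.^2;   % hypothetical objective function

[G1, G2, G3] = ndgrid(X1, X2, X3);            % arrays holding all 18 combinations
F = f(G1, G2, G3);                            % evaluate every candidate at once
[fmin, idx] = min(F(:));                      % smallest objective value
xopt = [G1(idx), G2(idx), G3(idx)]            % the corresponding design
fmin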

9.4 Relaxation Approach

The relaxation approach assumes that all the design variables in a problem are continuous, and solves the problem using continuous optimization techniques. After the optimal solution to the relaxed continuous optimization problem is obtained, the optimum variable values are rounded off to the nearest feasible discrete solution.

Please note that the solution obtained by this approach can often be sub-optimal. Furthermore, this method does not guarantee that the solution is feasible.

Example: Consider the following linear discrete optimization problem.

minimize  f(x) = -2x1 + x2  (9.11)

subject to

6x1 - 4x2 ≤ 15  (9.12)

x1 + x2 ≤ 5  (9.13)

x1, x2 ≥ 0  (9.14)

x1 and x2 are integers  (9.15)

By ignoring the constraint in Eq. 9.15, the problem can be solved as a continuous optimization problem. In this example, MATLAB function linprog is used to solve the relaxed linear programming problem. The MATLAB code is given below.

clear
clc

% Objective function coefficients: f = -2*x1 + x2
f = [-2; 1];

% Linear inequality constraints: A*x <= B
A = [6 -4; 1 1]; B = [15; 5];

% No linear equality constraints
Aeq = []; Beq = [];

% Lower and upper bounds on the design variables
LB = [0 0]; UB = [];

% Solve the relaxed linear program
[x, fopt] = linprog(f, A, B, Aeq, Beq, LB, UB)

The relaxed optimal solution obtained is: x1 = 3.5 and x2 = 1.5. Rounding off to the nearest integers yields x1 = 4 and x2 = 2. However, this point does not satisfy the inequality constraint given as Eq. 9.13. If the continuous solution is instead rounded off to x1 = 3 and x2 = 1, the correct solution is obtained.
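A quick way to detect this situation is to substitute the rounded point back into the constraints. Using the matrices A and B defined in the code above:

% Feasibility check of the rounded candidate against A*x <= B
xr = [4; 2];                 % rounded from the relaxed optimum (3.5, 1.5)
feasible = all(A*xr <= B)    % returns logical 0 (false): the point is infeasible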

The following example illustrates how the relaxation approach could yield sub-optimal or infeasible solutions.

Example: Use the relaxation approach to solve the following discrete linear programming problem. Examine whether the solution is correct.

minimize  f(x) = -5x1 - x2  (9.16)

subject to

10x1 + x2 ≤ 20  (9.17)

x2 ≤ 2  (9.18)

x1, x2 ≥ 0  (9.19)

x1 and x2 are integers  (9.20)

Using the relaxation approach, the constraint in Eq. 9.20 is ignored. The relaxed optimization problem is solved using MATLAB function linprog. The MATLAB code is given below.

clear
clc

% Objective function coefficients: f = -5*x1 - x2
f = [-5; -1];

% Linear inequality constraints: A*x <= B
A = [10 1; 0 1]; B = [20; 2];

% No linear equality constraints
Aeq = []; Beq = [];

% Lower and upper bounds on the design variables
LB = [0 0]; UB = [];

% Solve the relaxed linear program
[x, fopt] = linprog(f, A, B, Aeq, Beq, LB, UB)

The optimal solution of the relaxed optimization problem is x1 = 1.8 and x2 = 2. Rounding off to the nearest integers yields the candidate x1 = 2 and x2 = 2.

The optimization problem can also be solved graphically, as seen in Fig. 9.1. From this figure, note that the rounded solution lies in the infeasible region of the design space. The actual optimal solution is x1 = 2 and x2 = 0. This example demonstrates that the relaxation approach can be misleading.

Figure 9.1. The Actual Optimum Solution
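As a cross-check (a sketch assuming MATLAB R2014a or later), the built-in mixed-integer solver intlinprog recovers the true integer optimum of this example directly; the intcon argument lists the variables that must take integer values.

% Solve the integer program of Eqs. 9.16 through 9.20 with intlinprog
f = [-5; -1]; A = [10 1; 0 1]; B = [20; 2];
intcon = [1 2];              % both x1 and x2 are integer variables
[x, fopt] = intlinprog(f, intcon, A, B, [], [], [0; 0], [])
% Expected result: x = [2; 0] with fopt = -10, as found graphically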

9.5 Advanced Options: Genetic Algorithms, Simulated Annealing, and Branch and Bound

9.5.1 Genetic Algorithms

Genetic Algorithms (GAs) are a family of computational algorithms inspired by the principles of evolution described in Darwin's theory. GAs were first conceived by J. H. Holland in 1975 [3]. Binary coded genetic algorithms are particularly popular choices for discrete optimization because of their ability to deal directly with discrete search spaces. An introductory discussion of GAs was presented in Chapter 8.
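As a concrete illustration (a minimal sketch, assuming the Global Optimization Toolbox is available), the MATLAB function ga accepts an IntCon argument that declares which design variables are integer. Applied to the first example of Sec. 9.4:

% Integer-constrained GA run on the problem of Eqs. 9.11 through 9.15
f = @(x) -2*x(1) + x(2);         % objective of Eq. 9.11
A = [6 -4; 1 1]; B = [15; 5];    % constraints of Eqs. 9.12 and 9.13
LB = [0 0]; UB = [5 5];          % finite bounds (implied by the constraints)
IntCon = [1 2];                  % x1 and x2 must be integers
[x, fopt] = ga(f, 2, A, B, [], [], LB, UB, [], IntCon)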

9.5.2 Simulated Annealing

As discussed in Sec. 9.2, discrete optimization problems can become unmanageable using combinatorial methods as the number of candidates increases exponentially. An effective algorithm that can be used to solve these problems is simulated annealing. The idea behind this algorithm is based on the manner in which liquids freeze or metals recrystallize during the process of annealing. The algorithm mimics the metallurgical process of annealing: heating a material and slowly lowering the temperature to decrease defects, thus minimizing the system energy [4]. At the beginning of this method, the initial state is similar to that of a thermodynamic system. At each iteration of the algorithm, a new point is randomly generated. The distance of the new point from the current point, or the extent of the search, is based on a probability distribution. The scale of the distribution is proportional to the temperature. The algorithm accepts all new points that lower the energy; but with a certain probability, it also accepts points that raise the energy. By accepting points that raise the energy, the algorithm avoids being trapped in local minima, and is capable of global exploration for better solutions. An annealing schedule is selected to systematically decrease the temperature as the algorithm proceeds. As the temperature decreases, the algorithm reduces the extent of its search to progressively converge to a minimum.
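The following is a minimal sketch of this procedure in MATLAB; the objective function, neighborhood move, and cooling factor below are illustrative assumptions, not a prescribed implementation.

% Minimal simulated annealing sketch for an integer search space
f = @(x) (x(1) - 3)^2 + (x(2) - 2)^2;   % hypothetical "energy" to minimize
x = [0 0]; fx = f(x);                   % initial design and its energy
T = 10;                                 % initial temperature
alpha = 0.95;                           % cooling factor (annealing schedule)

for iter = 1:500
    xNew = x + round(T*randn(1, 2));    % move scale proportional to temperature
    fNew = f(xNew);
    % Always accept improvements; accept worse points with probability exp(-df/T)
    if fNew < fx || rand < exp(-(fNew - fx)/T)
        x = xNew; fx = fNew;
    end
    T = alpha*T;                        % systematically lower the temperature
end
x, fx

The Global Optimization Toolbox function simulannealbnd provides a full implementation of this idea for bound-constrained problems.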

Simulated annealing has been used in various combinatorial optimization problems and has been particularly successful in circuit design problems. For more information on simulated annealing, refer to Chapter 19. More details and examples are also provided in [5].

9.5.3 Branch and Bound

The branch and bound method is a basic technique used to solve discrete programming problems. This method is based on the observation that the enumeration of integer solutions has an inherent tree structure. It enumerates candidate solutions systematically for a discrete optimization problem. Refer to Chapter 14 for more information on this method.

The procedure for the branch and bound method used to solve a linear integer programming problem is illustrated in Fig. 9.2 (see Refs. [1, 2]).

Figure 9.2. Branch and Bound Method Flowchart

The technical terms used in Fig. 9.2 are explained below.

Relaxed continuous LP: A relaxed continuous linear programming (LP) problem is formulated by ignoring the integer constraints. The resulting optimal solution may have some non-integer variable values. If the resulting LP solution has only integer values, the obtained solution is the integer optimal solution.

Ceil: The ceiling function, denoted ⌈x⌉, returns the smallest integer that is greater than or equal to x. For example, ⌈5.14⌉ = 6, ⌈10⌉ = 10, and ⌈-8.6⌉ = -8.

Floor: The floor function, denoted ⌊x⌋, returns the largest integer that is less than or equal to x. For example, ⌊5.14⌋ = 5, ⌊10⌋ = 10, and ⌊-8.6⌋ = -9.
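Both functions are built into MATLAB and can be checked directly at the command line:

ceil(5.14)    % returns 6
ceil(-8.6)    % returns -8
floor(5.14)   % returns 5
floor(-8.6)   % returns -9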

Add a ceil or a floor: For each design variable with a fractional part in the optimal result, two subproblems are created by imposing a ceil or a floor on that variable, respectively. The constraint xi ≥ ⌈xi*⌉ is added to the first subproblem; the second subproblem is formulated by adding the constraint xi ≤ ⌊xi*⌋, where xi* is the fractional value of xi in the relaxed optimum. The two subproblems are then solved as continuous problems. The solutions of the two subproblems are then examined for fractional parts, and the process is repeated.

Branching: The above process that adds a ceil or a floor is called branching. For a given variable, the branching process is repeated until the relaxed continuous problem with the additional constraints yields either an integer solution or an infeasible solution. The branching process is repeated for all the variables that have fractional solutions.

Bounds: The added ceil and floor constraints are called bounds on the variable values.

A basic implementation of the branch and bound method consists of the following steps (see Refs. [1, 2]); a MATLAB sketch follows the list. Note that the given procedure applies only to linear integer programming problems.

1. Formulate a relaxed continuous linear programming problem by ignoring the integer constraints. The relaxed problem consists of continuous variables only.

2. If the solution to the above relaxed continuous (linear optimization) problem only involves integers, it is the optimal solution. If the solution has non-integer variables, go to the next step.

3. Select one non-integer variable and generate two subproblems (two branches). For one of them, add a ceil constraint to the selected non-integer variable and, for the other branch, add a floor constraint to it.

4. If a branch has no feasible solutions, stop this branch. If the solution only has integers, it becomes a candidate for the final optimal solution. If the solution has non-integer values, go back to Step 3.

5. Once the branching process is completed for all the variables, compare all the integer solutions obtained by the different branches. The best solution is the final optimal solution.
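A minimal recursive sketch of these steps, built on linprog, is given below (save it as bnb.m; the function name, argument list, and integrality tolerance are assumptions for illustration, not the book's code).

function [xBest, fBest] = bnb(f, A, B, LB, UB, xBest, fBest)
% Branch and bound for: min f'*x subject to A*x <= B, LB <= x <= UB, x integer
[x, fval, exitflag] = linprog(f, A, B, [], [], LB, UB);
if exitflag ~= 1 || fval >= fBest
    return                               % infeasible branch, or cannot beat incumbent
end
i = find(abs(x - round(x)) > 1e-6, 1);   % first variable with a fractional value
if isempty(i)
    xBest = round(x); fBest = fval;      % all-integer solution: new candidate optimum
    return
end
UBf = UB; UBf(i) = floor(x(i));          % floor branch: x_i <= floor(x_i)
[xBest, fBest] = bnb(f, A, B, LB, UBf, xBest, fBest);
LBc = LB; LBc(i) = ceil(x(i));           % ceil branch: x_i >= ceil(x_i)
[xBest, fBest] = bnb(f, A, B, LBc, UB, xBest, fBest);
end

Applied to the first example of Sec. 9.4, the call [x, fopt] = bnb([-2; 1], [6 -4; 1 1], [15; 5], zeros(2,1), Inf(2,1), [], Inf) returns x = [3; 1] with fopt = -5, the correct integer optimum noted there.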

Example: Solve the following linear discrete optimization problem using the branch and bound method.

(9.21)

subject to

(9.22)

(9.23)

(9.24)

(9.25)

Figure 9.3 presents the subproblems and their solutions when the branching process begins with the variable x1. The gray-shaded boxes are the integer solutions obtained during this branching. Note the branching process after the relaxed optimum x1* = 3, x2* = 0.75, and f* = -5.25. The two further possible branches are x2 ≤ 0 and x2 ≥ 1. The first branch, x2 ≤ 0, is not feasible and is not solved further. The second branch, x2 ≥ 1, yields the optimal integer solution: x1* = 3, x2* = 1, and f* = -5.

Figure 9.3. Branch and Bound Method Example

9.6 Summary

In this chapter, a new class of optimization problems was introduced: the discrete optimization problem. Several simple approaches were illustrated with examples. These approaches can be readily applied to solve discrete optimization problems.

9.7 Problems

9.1 Formulate a discrete optimization problem based on your real-world engineering design experience. Solve it using the appropriate optimization method.

9.2 Use exhaustive search to find the optimum for the following discrete optimization problem.

(9.26)

subject to

(9.27)

(9.28)

9.3 Consider the two discrete optimization problems given in Sec. 9.4. Reproduce the results of the two problems using the MATLAB function linprog. Solve the second problem graphically, and show the actual optimum on your figure.

9.4 Reproduce the results for the example given in Sec. 9.5.3. Turn in your M-file and results.

9.5 Consider the linear discrete optimization problem given in Sec. 9.5.3. Apply the branch and bound method starting with x2.

BIBLIOGRAPHY OF CHAPTER 9

[1] J. K. Karlof. Integer Programming: Theory and Practice. CRC Press, 2005.

[2] D. S. Chen, R. G. Batson, and Y. Dang. Applied Integer Programming: Modeling and Solution. John Wiley and Sons, 2010.

[3] J. H. Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. MIT Press, 1992.

[4] E. Aarts and J. Korst. Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing. John Wiley and Sons, Inc., 1989.

[5] M. Tsuzuki and T. DeCastro Martins. Simulated Annealing: Strategies, Potential Uses and Advantages. Nova Science Publishers, 2014.