
# Optimization in Practice with MATLAB for Engineering Students and Professionals (2015)

### MORE ADVANCED TOPICS IN OPTIMIZATION

This part introduces carefully chosen advanced topics that are appropriate for graduate students or undergraduate students who might pursue study in the design optimization field. These topics will generally be of great interest to industry practitioners as well. While these topics are advanced, a mathematically advanced presentation is not provided. Instead, an introductory presentation is provided, with an eye toward practical usefulness.

Specifically, the topics presented, with the chapter numbers, are given below:

14. Discrete Optimization

15. Modeling Complex Systems: Surrogate Modeling and Design Space Reduction

16. Design Optimization Under Uncertainty

17. Methods for Pareto Frontier Generation/Representation

18. Physical Programming for Multiobjective Optimization

19. Evolutionary Algorithms

### Discrete Optimization

14.1 Overview

In most previous chapters, continuous optimization problems were considered where the design variables were assumed to be continuous; that is, design variables assume real values within given ranges. In many practical engineering problems, the acceptable values of the design variables do not form a continuous set. These problems are referred to as discrete optimization problems. For example, the number of rivets required in a riveted joint has to be an integer (such as 1, 2, 3). Another example is when the feasible region of the design variable is a set of given discrete numbers, such as {6.25, 6.95, 7.65}, which may be the available standardized sizes of nuts. The basics of discrete optimization were introduced in Chapter 9, where some pertinent elementary methods were presented. This chapter introduces more advanced approaches. The reader is advised to first review Chapter 9 as preparation for the current chapter.

This chapter is organized as follows. The next section (Sec. 14.2) provides the problem classes, examples, and definition (along with the notion of computational complexity of the solution algorithms). Section 14.3 discusses the basics of some popular techniques used to solve integer programming problems, with examples. The methods studied will be: the exhaustive search method (Sec. 14.3.1), the graphical method (Sec. 14.3.2), the relaxation method (Sec. 14.3.3), the branch and bound method (Sec. 14.3.4), and the cutting plane method (Sec. 14.3.5). Evolutionary algorithms (Sec. 14.3.6) and popular current software options (Sec. 14.3.7) are also discussed. The chapter concludes with a summary in Sec. 14.4.

14.2 Problem Classes, Examples and Definition

This section presents discrete optimization problem classes, problem examples, and problem definition. Computational complexity of the solution algorithms is also briefly addressed in connection with the discrete optimization problem definition.

14.2.1 Problem Classes

Given the idea of discreteness of the feasible design variable set, there are several categories of optimization problems. As explained in Chapter 9, the following terms are commonly encountered in the discrete optimization literature, and they define the main classes of discrete optimization problems (see Fig. 14.1).

Figure 14.1. Discrete Optimization Overview

1. Pure integer programming problems: When the design variables are allowed to assume only integer values. Example: the number of routes a traveling salesman can take (see Refs. [1, 2]).

2. Mixed-integer programming problems: When some design variables are allowed to assume only integer values, while others are allowed to assume continuous values.

3. Discrete non-integer optimization problems: When the design variables are allowed to assume only a given set of discrete, not necessarily integer, values. Example: standardized sizes of machine components.

4. Zero-one programming problems: When the design variables are allowed to assume values of either 0 or 1.

5. Combinatorial optimization problems: When the set of possible feasible solutions is defined by a combinatorial set resulting from the given feasible discrete values of the design variables.

Next, we discuss some of the popular discrete optimization problems that you may encounter in the literature and in practice.

14.2.2 Popular Example Problems

In this subsection, descriptions of a few example problems in discrete optimization are introduced. The sample problems presented below span a wide range of fields, such as mathematics, transportation, and finance.

Knapsack problem: This is a classical example of a combinatorial optimization problem. A set of items needs to be packed in a knapsack, or bag. Each item in the set has a cost and a merit associated with it. How can the items to be packed be optimally chosen such that the cost is minimized and the merit maximized?
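One common single-objective formulation maximizes the total merit subject to a budget on the total cost. A brute-force sketch follows (in Python here, though the book's examples use MATLAB; the item data below are hypothetical, not from the book):

```python
from itertools import product

# Hypothetical data: five items, each with a cost and a merit.
costs  = [12, 7, 11, 8, 9]
merits = [24, 13, 23, 15, 16]
capacity = 26  # budget on the total cost of the packed items

best_merit, best_pick = -1, None
# Enumerate all 2^5 = 32 subsets: x[i] = 1 means item i is packed.
for x in product([0, 1], repeat=len(costs)):
    cost  = sum(c * xi for c, xi in zip(costs, x))
    merit = sum(m * xi for m, xi in zip(merits, x))
    if cost <= capacity and merit > best_merit:
        best_merit, best_pick = merit, x

print(best_pick, best_merit)  # → (0, 1, 1, 1, 0) 51
```

With n items there are 2^n subsets, so this enumeration is viable only for small n; this is precisely the computational-complexity issue discussed in Sec. 14.2.3.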

Vehicle routing problem: The goal of this problem is to choose the route so as to minimize the total distribution cost of the goods to be delivered to a set of customers at different locations. The goods are assumed to be located in a central inventory.

Traveling salesman problem: A traveling salesman needs to plan a round-trip route. The salesman needs to visit a given number of cities, and each city exactly once. Each segment of the journey (from one city to another) has an associated cost. Which route yields the least expensive round-trip that visits each city exactly once?

Capital budgeting: This problem optimizes the allocation of capital budgets among different investment alternatives. Suppose a given amount of money is invested among four alternatives. Each investment alternative has a present value and a minimum required investment. How can the budgets be distributed among the alternatives so that the total present value is maximized?

The above discussion provides the reader with a basic idea of the various challenging discrete optimization problems commonly found in the literature. Some references in the area of discrete optimization contain a mathematically rigorous treatment of the subject, which is outside the scope of this introductory textbook. This chapter presents a basic treatment of the ideas involved, with illustrative examples.

14.2.3 Problem Definition and Computational Complexity

This section introduces a generic integer programming problem formulation. The notion of computational complexity of the solution algorithms is also briefly presented.

Problem Definition

A generic integer programming problem is given below.

 min f(x) (14.1)

subject to

 g(x) ≤ 0 (14.2)

 h(x) = 0 (14.3)

 x ∈ Z (14.4)

where x is the design variable vector; f(x), g(x), and h(x) are the objective function, inequality constraints, and equality constraints, respectively; and Z is the set of given feasible integers. Note that for a generic discrete optimization problem, the set of integers Z in Eq. 14.4 is replaced by a given set of real numbers, which are not necessarily integers.

Example: For illustration purposes, consider the linear integer programming problem shown below.

 min f = x1 - 2x2 (14.5)

subject to

 -4x1 + 6x2 ≤ 9 (14.6)

 x1 + x2 ≤ 4 (14.7)

 x1, x2 ≥ 0 (14.8)

 x1, x2 ∈ Z (14.9)

The feasible region of the above problem is shown in Fig. 14.2. The next section solves the above problem using some popular discrete optimization approaches.

Figure 14.2. Graphical Solution for Example 1

Before studying the solution approaches, we briefly examine the computational complexity of the algorithms that can solve discrete optimization problems.

Computational Complexity

Discrete optimization problems are known to be computationally complex. The study of complexity is related to a branch of mathematics and computing known as complexity theory. One of the issues of interest in complexity theory is to quantify the performance of computational algorithms. The computation time of an algorithm is usually presented using the O (known as the Big O) notation.

Example: Say we are interested in finding the gradient of a function f (x1,x2,...xn) using the finite difference method. The i-th component of the n × 1 gradient vector can be computed as

 ∂f∕∂xi ≈ [f(x1, ..., xi + δp, ..., xn) - f(x1, ..., xn)] ∕ δp (14.10)

where δp is the chosen step size. The above computation requires two function evaluations, one of which, f(x1, ..., xn), is common to all components. To compute all of the n components of the gradient vector, we therefore require n + 1 function evaluations. This finite difference-based gradient computation algorithm has a complexity of the order of O(n) for a problem size of n. Note that whether we use forward or backward difference (Chapter 7) to compute the gradient vector, we will have a complexity of the order of O(n).
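This evaluation-counting argument can be sketched in code (Python here, though the book's examples use MATLAB; the test function below is an arbitrary choice for illustration):

```python
def grad_forward(f, x, dp=1e-6):
    """Forward-difference gradient estimate; uses n + 1 evaluations of f."""
    n = len(x)
    count = 0

    f0 = f(x); count += 1          # baseline evaluation, shared by all components
    g = []
    for i in range(n):
        xp = list(x)
        xp[i] += dp                # perturb only the i-th design variable
        g.append((f(xp) - f0) / dp)
        count += 1
    return g, count

# Example: f(x1, x2, x3) = x1^2 + 3*x2 + x3; gradient at (1, 2, 5) is (2, 3, 1)
g, count = grad_forward(lambda x: x[0]**2 + 3*x[1] + x[2], [1.0, 2.0, 5.0])
print(count)  # → 4  (that is, n + 1 evaluations for n = 3)
```

The evaluation count grows linearly with n, which is exactly the O(n) behavior described above.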

An algorithm is said to be of polynomial time if its complexity is of the order of O(n^k) for a problem size of n, where k is a finite non-negative integer.

Another aspect of complexity theory is to categorize computational problems and algorithms into complexity classes. One important complexity class is known as the P class. The problems belonging to the P class are regarded as tractable, and easily solvable by algorithms in polynomial time. Unfortunately, several practical discrete optimization problems belong to the notoriously difficult complexity class of the so-called NP-Complete problems. A detailed presentation of complexity theory is beyond the scope of this chapter.

The methods employed to solve discrete optimization problems are studied next. Examples are provided where necessary.

14.3 Solution Approaches

An overview of discrete optimization solution approaches is provided in Fig. 14.1 (also see Sec. 14.2). Discrete problems can be linear (with linear constraints and a linear objective function) or nonlinear (with nonlinear constraints and/or a nonlinear objective function). From the formulation and solution approaches perspectives, discrete problems can be classified as pure-integer problems, mixed-integer problems, discrete non-integer problems, and zero-one problems. Nonlinear discrete problems are much more complicated to solve than linear discrete problems.

As reported in Fig. 14.1, although the broad solution approaches for various types of discrete problems are often the same, their specific implementation and complexity will vary. For example, variations of the branch and bound and cutting plane methods have been applied to linear, as well as nonlinear, discrete problems of several kinds. In this chapter, our goal is to introduce the basics of how these popular algorithms work. Linear integer programming problems will be considered for presentational simplicity. More rigorous treatment of the methods discussed in this section is available in [4, 5, 2].

14.3.1 Brute Force Method: Exhaustive Search

The most straightforward, and computationally expensive, approach to solving discrete problems is to perform an exhaustive search - where all the possible options are enumerated and evaluated. The optimal solution can then be readily selected from the enumerated solutions. This approach is a viable option only for small problems, as it can be computationally prohibitive for larger problems.

Example: Consider the following simple example. We have a set of 6 truss elements. The generic i-th element has a weight wi (see Table 14.1). We need to choose three elements that yield a minimum total weight. This problem can be viewed as a combinatorial optimization problem, and can be stated as

 min f = w1x1 + w2x2 + ... + w6x6 (14.11)

where

 x1 + x2 + ... + x6 = 3 (14.12)

 xi ∈ {0, 1} (14.13)

 i = 1, 2, ..., 6 (14.14)

There are 6C3 = 6!∕(3! 3!) = 20 ways of choosing three elements from six possibilities. Table 14.2 lists the 20 possible combinations, along with the corresponding total weight.

Table 14.1. Brute Force Method: Six Elements

| Element, i | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Weight, wi | 1.5 | 6.4 | 2.0 | 3.2 | 5.7 | 4.3 |

Table 14.2. Brute Force Method: Twenty Combinations

| Combination | Total weight | Combination | Total weight |
|---|---|---|---|
| 1,2,3 | 9.9 | 2,3,4 | 11.6 |
| 1,2,4 | 11.1 | 2,3,5 | 14.1 |
| 1,2,5 | 13.6 | 2,3,6 | 12.7 |
| 1,2,6 | 12.2 | 2,4,5 | 15.3 |
| **1,3,4** | **6.7** | 2,4,6 | 13.9 |
| 1,3,5 | 9.2 | 2,5,6 | 16.4 |
| 1,3,6 | 7.8 | 3,4,5 | 10.9 |
| 1,4,5 | 10.4 | 3,4,6 | 9.5 |
| 1,4,6 | 9.0 | 3,5,6 | 12.0 |
| 1,5,6 | 11.5 | 4,5,6 | 13.2 |

The row shown in bold in Table 14.2 is the optimal solution. Note that as the number of possibilities increases, such exhaustive enumeration becomes computationally expensive.
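This enumeration is easy to reproduce in code; a sketch using the weights of Table 14.1 (Python here, though the book's examples use MATLAB):

```python
from itertools import combinations

# Weights of the six truss elements (Table 14.1)
w = {1: 1.5, 2: 6.4, 3: 2.0, 4: 3.2, 5: 5.7, 6: 4.3}

# Enumerate all 6C3 = 20 ways of picking three elements; keep the lightest.
best = min(combinations(w, 3), key=lambda c: sum(w[i] for i in c))
print(best, round(sum(w[i] for i in best), 1))  # → (1, 3, 4) 6.7
```

The result matches the row highlighted in Table 14.2; for larger element sets, the number of combinations grows combinatorially and this approach quickly becomes impractical.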

The graphical method to solve discrete optimization problems is studied next.

14.3.2 Graphical Method

For simple problems where the objective function and the constraints can be visualized, the feasible region could be plotted to find the optimal discrete solution graphically. However, a graphical solution may not be a viable option for complex problems with a large number of design variables.

Example: For the example problem previously discussed (Eqs. 14.5 through 14.9), the feasible region can be plotted as shown in Fig. 14.2. The direction of decreasing objective function value is shown by the block arrow. The grey shaded region represents the feasible region as defined by the inequality constraints alone, and the diamonds represent feasible integer solutions. Upon inspection of Fig. 14.2, the optimal solution is x1 = 1, x2 = 2, which is the integer solution within the feasible design space with the least objective function value.

Once the feasible region is identified, one could also perform an exhaustive search, considering only the feasible design variable values, to find the optimum solution.
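Such a feasibility-restricted enumeration can be sketched as follows, assuming the example problem has the linear form consistent with the solutions quoted in this section (the exact coefficients below are an assumption here):

```python
# Enumerate the integer points of the feasible region, assuming the example
# takes the form  min f = x1 - 2*x2  subject to  -4*x1 + 6*x2 <= 9  and
# x1 + x2 <= 4, with x1, x2 >= 0 and integer (consistent with the optimum
# x1 = 1, x2 = 2 quoted in the text).
feasible = [(x1, x2)
            for x1 in range(5) for x2 in range(5)   # x1 + x2 <= 4 bounds both by 4
            if -4*x1 + 6*x2 <= 9 and x1 + x2 <= 4]

best = min(feasible, key=lambda p: p[0] - 2*p[1])
print(best, best[0] - 2*best[1])  # → (1, 2) -3
```

These integer points are exactly the diamonds plotted in Fig. 14.2, and the minimizer matches the graphical solution.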

The next subsection discusses a simple and commonly used approach to solve discrete problems.

14.3.3 Relaxation Approach: Solve as Continuous Problem

In this method, the discrete formulation is relaxed by treating the discrete variables as if they were continuous. The optimization problem is then solved using the continuous optimization techniques learned earlier. The real-valued optimum design variables obtained are then rounded off to the nearest feasible discrete solution.

While this technique is used quite often due to its ease of implementation, the user is warned that the solution obtained can often be sub-optimal. In addition, the previously discussed rounding of the optimal solution can result in constraint violations at the approximate discrete solution.

Example: Consider the linear discrete optimization problem presented in Eqs. 14.5 - 14.9. Now, solve it as a continuous optimization problem by ignoring the constraint in Eq. 14.9. The relaxed optimization problem then becomes

 min f = x1 - 2x2 (14.15)

subject to

 -4x1 + 6x2 ≤ 9 (14.16)

 x1 + x2 ≤ 4 (14.17)

 x1, x2 ≥ 0 (14.18)

Solving the above optimization problem using a continuous optimization algorithm, such as fmincon, yields the optimal solution x1 = 1.5, x2 = 2.5. Rounding off to the nearest integer would yield an integer solution of x1 = 2, x2 = 3. As seen in Fig. 14.2, this rounded solution lies in the infeasible region of the design space. If the continuous solution is instead rounded off to x1 = 1, x2 = 2, we obtain the correct solution. The relaxation approach for solving discrete problems must, therefore, be employed carefully.
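The rounding pitfall can be checked mechanically; the sketch below assumes the same constraint set as above (an assumption consistent with the solutions quoted in the text):

```python
import math

# Constraint check for the example problem, assuming the form
# -4*x1 + 6*x2 <= 9 and x1 + x2 <= 4, with x1, x2 >= 0 (consistent with the
# continuous optimum (1.5, 2.5) and integer optimum (1, 2) quoted in the text).
def is_feasible(x1, x2):
    return -4*x1 + 6*x2 <= 9 and x1 + x2 <= 4 and x1 >= 0 and x2 >= 0

x1c, x2c = 1.5, 2.5  # relaxed (continuous) optimum reported in the text

# Round each variable to the nearest integer (.5 rounds up, as in the text).
nearest = (math.floor(x1c + 0.5), math.floor(x2c + 0.5))
print(nearest, is_feasible(*nearest))  # → (2, 3) False  (rounded point infeasible)
print((1, 2), is_feasible(1, 2))       # → (1, 2) True   (the true integer optimum)
```

Note that Python's built-in `round` uses banker's rounding (`round(2.5)` gives 2), so the explicit `floor(x + 0.5)` form is used here to round halves upward.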

14.3.4 Branch and Bound Method

The branch and bound method involves a systematic enumeration of the candidate solutions for a discrete optimization problem. We explain the basics of a branch and bound algorithm in a series of steps that can be used to solve a linear integer programming problem by hand. Software implementations of the algorithm could be much more involved.

A basic implementation of the branch and bound method consists of the following steps.

1. Formulate a relaxed continuous linear programming (LP) problem by ignoring the integer constraints. The resulting optimal solution may have some non-integer design variable values. If the resulting LP solution has only integer values, the obtained solution is indeed the optimal solution.

2. Notation: Say xi* contains a decimal part. Define the ceiling function, denoted ⌈x⌉, which returns the smallest integer value that is greater than or equal to x. For example, ⌈5.134⌉ = 6, ⌈10⌉ = 10, and ⌈-8.6⌉ = -8. The second notation defined is the floor function, denoted ⌊x⌋, which returns the largest integer that is less than or equal to x. For example, ⌊5.134⌋ = 5, ⌊10⌋ = 10, and ⌊-8.6⌋ = -9.

3. For those design variables with decimal parts in the solution, two subproblems that impose bounds on the design variable values are created. This process is known as branching. The following additional constraint is added to the first subproblem: xi ≤ ⌊xi*⌋. The second subproblem is formulated by adding the constraint xi ≥ ⌈xi*⌉. The subproblems are then solved as continuous problems. The solutions of the two subproblems are then examined for fractional parts, and the branching process is repeated.

4. For a given variable, the branching process is usually repeated until the relaxed continuous problem with the additional constraints yields either an integer solution or an infeasible solution. The branching process is repeated for all the variables that have fractional solutions.

5. Once the branching process is completed for all the variables, the best solution among the integer solutions from all branches is considered the optimal solution. Note that the above algorithm discussion applies only to linear integer programming problems.

Example: Consider the following problem, and implement the branch and bound method.

 min f = x1 - 2x2 (14.19)

subject to

 -4x1 + 6x2 ≤ 9 (14.20)

 x1 + x2 ≤ 4 (14.21)

 x1, x2 ≥ 0 (14.22)

 x1, x2 ∈ Z (14.23)

Figure 14.3 provides the subproblems and their solutions when the branching is performed on the variable x1. The gray-shaded boxes are the integer solutions obtained during this branching. Note the branching process after the relaxed optimal solution x1* = 0.75, x2* = 2, f* = -3.25. The two further possible branches are x1 ≤ 0 and x1 ≥ 1. The first branch, x1 ≤ 0, is not feasible, as it violates the first constraint, and is not solved further. The second branch, x1 ≥ 1, conflicts with the earlier branching constraint, x1 ≤ 1. Note that this conflict yields x1 = 1 as the only possibility for the variable x1, yielding the optimal integer solution, x1* = 1, x2* = 2, f* = -3. Figure 14.4 reports the subproblems and the corresponding solutions when the branching is performed on x2. The gray-shaded box is the integer solution obtained (x1* = 1, x2* = 2, f* = -3), which is also the optimal integer solution for this problem.

Figure 14.3. Branch and Bound Method Example: Integer Bounds on x1

Figure 14.4. Branch and Bound Method Example: Integer Bounds on x2
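The branch and bound steps above can be sketched in code. The following is a minimal illustrative implementation (not the book's), applied to a two-variable problem whose coefficients are assumed here to match the optima quoted in this section; the LP relaxations are solved by naive vertex enumeration, which is viable only for tiny two-dimensional problems:

```python
import math
from itertools import combinations

def solve_lp(cons, c=(1.0, -2.0), tol=1e-9):
    """Minimize c.x over {x : a1*x1 + a2*x2 <= b for (a1, a2, b) in cons}
    by enumerating constraint-intersection vertices (2-D only).
    Returns (f, x) at the best feasible vertex, or None if infeasible."""
    best = None
    for (a1, a2, b1), (a3, a4, b2) in combinations(cons, 2):
        det = a1*a4 - a2*a3
        if abs(det) < tol:
            continue  # parallel constraints: no unique intersection
        x = ((b1*a4 - a2*b2) / det, (a1*b2 - b1*a3) / det)
        if all(p*x[0] + q*x[1] <= r + tol for p, q, r in cons):
            f = c[0]*x[0] + c[1]*x[1]
            if best is None or f < best[0]:
                best = (f, x)
    return best

def branch_and_bound(cons, incumbent=None, tol=1e-6):
    sol = solve_lp(cons)
    if sol is None:
        return incumbent                      # infeasible branch: prune
    f, x = sol
    if incumbent is not None and f >= incumbent[0] - tol:
        return incumbent                      # bound: cannot beat incumbent
    frac = [i for i in (0, 1) if abs(x[i] - round(x[i])) > tol]
    if not frac:
        return (f, (round(x[0]), round(x[1])))   # integer solution found
    i = frac[0]                               # branch on first fractional variable
    lo, hi = math.floor(x[i]), math.ceil(x[i])
    down = cons + [( 1.0 if i == 0 else 0.0,  1.0 if i == 1 else 0.0,  lo)]  # x_i <= floor
    up   = cons + [(-1.0 if i == 0 else 0.0, -1.0 if i == 1 else 0.0, -hi)]  # x_i >= ceil
    incumbent = branch_and_bound(down, incumbent)
    return branch_and_bound(up, incumbent)

# Assumed problem data:  min x1 - 2*x2  s.t.  -4*x1 + 6*x2 <= 9,  x1 + x2 <= 4,
# x1, x2 >= 0 (the last two written as -x1 <= 0, -x2 <= 0).
cons = [(-4.0, 6.0, 9.0), (1.0, 1.0, 4.0), (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
print(branch_and_bound(cons))  # → (-3.0, (1, 2))
```

The recursion reproduces the behavior described above: the relaxed optimum (1.5, 2.5) is branched, the x1 ≤ 0 subproblem dies off, and the best integer solution found is x* = (1, 2) with f* = -3.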

14.3.5 Cutting Plane Method

The basic concept of the cutting plane method is to add inequalities, also known as cuts, to the existing integer problem. The purpose of adding the inequality is to cut off non-integer solutions without eliminating integer points of the feasible region. The resulting formulation is then solved as a continuous problem, and the obtained solution is tested to determine whether it is an integer solution. If not, a new cut is added and the procedure is repeated until an optimal integer solution is found. There are different approaches for finding effective inequalities, or cuts, that are added to the initial set of constraints. These inequalities are usually taken from pre-defined families, one of which is the Gomory cut, which will be studied in this chapter.

Consider a linear integer programming problem. A Gomory cut is a linear inequality constraint that does not exclude any feasible integer solutions from the integer problem. In this chapter, we will explain the cutting plane method by using the Simplex method that was presented in Chapter 11. The following is a generic discussion of Gomory’s cutting plane method for linear integer programming problems.

1. Initial Simplex Tableau: Begin by relaxing the integer programming problem, ignoring the integer constraints on the design variables. The linear programming (LP) formulation can be written as

 min f = cᵀx (14.24)

subject to

 Ax ≤ b (14.25)

 x ≥ 0 (14.26)

 x ∈ Rⁿ (14.27)

where c is the vector of cost coefficients, A is the constraint coefficient matrix, and b is the right-hand-side vector.

Adding slack variables, denoted by the vector s, to the inequality constraints for the Simplex method, obtain the following formulation.

 min f = cᵀx (14.28)

subject to

 Ax + s = b (14.29)

 x ≥ 0 (14.30)

 s ≥ 0 (14.31)

The above relaxed linear programming formulation is solved using the Simplex method. If the resulting optimal solution consists of only integer values, the solution procedure can be stopped. If that is not the case, use the following procedure to further obtain integer solutions. A constraint is added to the formulation, known as the Gomory cut.

2. Generating the Gomory Cut: Examine the current basic variables in the Simplex tableau. Arbitrarily choose one basic variable with a fractional value, say xi. The following equation corresponds to the row of xi in the Simplex tableau.

 xi = bi - {ai1x1 + ... + aitxt} (14.32)

where t is the total number of variables: n original variables and m slack variables. Note that in the above equation, xi and bi are fractional.

Now separate each of the above coefficients into their respective integer and fractional parts. Let bi = bZ:i + bf:i, where bZ:i is an integer, and bf:i is a fraction such that 0 ≤ bf:i < 1. Similarly, we separate each of the coefficients, aij, j = {1, ..., t}, into an integer part, aZ:j, and a non-negative fractional part, af:j, yielding aij = aZ:j + af:j, with 0 ≤ af:j < 1.

Example: Consider the following equation.

 x2 = 5∕2 - (7∕3)x3 + (3∕4)x4 (14.33)

It can be re-written as

 x2 = (2 + 1∕2) - [(2 + 1∕3)x3 + (-1 + 1∕4)x4] (14.34)

Using the above nomenclature, rewrite Eq. 14.32 as follows.

 xi = bZ:i + bf:i - [{aZ:1 + af:1}x1 + ... + {aZ:t + af:t}xt] (14.35)

The above equation can be further written as

 bf:i - [af:1x1 + ... + af:txt] = xi - bZ:i + [aZ:1x1 + ... + aZ:txt] (14.36)

Note that the right-hand-side of the above equation is always an integer. Therefore, the left-hand-side should also be an integer.

 bf:i - [af:1x1 + ... + af:txt] ∈ Z (14.37)

Since af:i are non-negative fractions and x ≥ 0, the quantity [af:1x1 + ... + af:txt] is non-negative. In addition, note that 0 ≤ bf:i < 1. Therefore, the following relation holds.

 bf:i - [af:1x1 + ... + af:txt] ≤ bf:i < 1 (14.38)

From Eq. 14.37, note that the left-hand-side of the above equation should be an integer. This implies that bf:i - [af:1x1 + ... + af:txt] should either be zero or a negative integer. This condition yields the Gomory constraint, given as

 bf:i - [af:1x1 + ... + af:txt] ≤ 0 (14.39)

3. Solving the problem with the Gomory constraint:

Adding a slack variable to Eq. 14.39, we obtain

 bf:i - [af:1x1 + ... + af:txt] + st+1 = 0 (14.40)

The above constraint is then added to the linear programming problem from STEP 1. Note that the optimal solution obtained in STEP 1 (initial Simplex tableau) does not satisfy the Gomory constraint generated above. In this case, the basic Simplex method used in STEP 1 cannot be used to solve the new problem. The dual Simplex method is used instead to solve the new problem with the Gomory constraint.

4. Repeat: The process of generating the Gomory constraint and solving the new formulation with the dual Simplex method is repeated until integer solutions are obtained.

Note that the above-stated approach is a very simplified discussion of cutting plane methods. For a more rigorous mathematical discussion, see the chapter bibliography. Some shortcomings of this method are: (1) as the number of variables grows, the size and complexity of the problem also grow, since an additional slack variable and an additional constraint are added for each non-integer variable; (2) implementing this method on a computer using decimal representation can produce rounding errors as the algorithm proceeds, which may lead to a wrong integer optimal solution.
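The cut-generation step (Eq. 14.39) reduces to extracting fractional parts with the floor function; a minimal sketch follows (Python here, though the book's examples use MATLAB), using coefficient values that appear in the worked example's tableaux:

```python
import math

def gomory_cut(b, a):
    """Split b and each a_j into integer + fractional parts and return the
    coefficients (b_f, [a_f1, ...]) of the Gomory cut
    b_f - (a_f1*x1 + ... + a_ft*xt) <= 0   (Eq. 14.39)."""
    bf = b - math.floor(b)                   # 0 <= bf < 1
    af = [aj - math.floor(aj) for aj in a]   # non-negative fractional parts
    return bf, af

# Row of a basic variable with a fractional value, here
# x2 = 5/2 - (1/10*s1 + 2/5*s2), gives the cut 1/2 - (1/10*s1 + 2/5*s2) <= 0:
print(gomory_cut(2.5, [0.1, 0.4]))     # → (0.5, [0.1, 0.4])

# A row with a negative coefficient, x1 = 3/4 - (-1/4*s1 + 3/2*s3),
# illustrates that -1/4 = -1 + 3/4 yields the fractional part 3/4:
print(gomory_cut(0.75, [-0.25, 1.5]))  # → (0.75, [0.75, 0.5])
```

Note how the floor-based split handles negative coefficients automatically, which is the detail that is easiest to get wrong when generating cuts by hand.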

The cutting plane method is now illustrated with the help of the following example.

Cutting plane algorithm example:

The linear integer programming problem is given as follows.

 min f = x1 - 2x2 (14.41)

subject to

 -4x1 + 6x2 ≤ 9 (14.42)

 x1 + x2 ≤ 4 (14.43)

 x1, x2 ≥ 0 (14.44)

 x1, x2 ∈ Z (14.45)

1. Initial Simplex Tableau: Temporarily ignore the integer constraints on x1 and x2. The standard linear programming formulation for the Simplex method can be written as follows.

 min f = x1 - 2x2 (14.46)

subject to

 -4x1 + 6x2 + s1 = 9 (14.47)

 x1 + x2 + s2 = 4 (14.48)

 x1, x2, s1, s2 ≥ 0 (14.49)

The initial Simplex tableau can be written as shown in Table 14.3. The final Simplex tableau, which yields the continuous optimal solution x1* = 3∕2, x2* = 5∕2, f* = -7∕2, is shown in Table 14.4.

Table 14.3. Initial Simplex Tableau Before Adding Cutting Planes

| | x1 | x2 | s1 | s2 | b | bi∕aij |
|---|---|---|---|---|---|---|
| R1 | -4 | 6 | 1 | 0 | 9 | 3∕2 |
| R2 | 1 | 1 | 0 | 1 | 4 | 4 |
| f | 1 | -2 | 0 | 0 | f | |

Table 14.4. Final Simplex Tableau Before Adding Cutting Planes

| | x1 | x2 | s1 | s2 | b |
|---|---|---|---|---|---|
| R1 | 0 | 1 | 1∕10 | 2∕5 | 5∕2 |
| R2 | 1 | 0 | -1∕10 | 3∕5 | 3∕2 |
| f | 0 | 0 | 3∕10 | 1∕5 | f + 7∕2 |

2. Generating the Gomory Cut: If the above step yielded an integer solution, the integer problem would have been solved, and the solution procedure would end there. However, this is not the case in the example under consideration. A constraint is therefore added using the Gomory cut procedure. From the Simplex tableau in Table 14.4, arbitrarily choose a basic variable that has a fractional value, say x2 (from row R1). The equation in R1 can be written as

 x2 + (1∕10)s1 + (2∕5)s2 = 5∕2 (14.50)

or

 x2 = 5∕2 - [(1∕10)s1 + (2∕5)s2] (14.51)

Now rewrite the above equation by separating the integer and fractional parts.

 x2 = (2 + 1∕2) - [(0 + 1∕10)s1 + (0 + 2∕5)s2] (14.52)

 1∕2 - [(1∕10)s1 + (2∕5)s2] = x2 - 2 + [(0)s1 + (0)s2] (14.53)

Using the earlier discussion and Eq. 14.39, the Gomory cut for the above case can be written as follows.

 1∕2 - [(1∕10)s1 + (2∕5)s2] ≤ 0 (14.54)

 -(1∕10)s1 - (2∕5)s2 ≤ -1∕2 (14.55)

3. Solve the problem with the Gomory constraint (Iteration 1):

Including Eq. 14.55 in the Simplex tableau, the updated tableau can be written as shown in Table 14.5.

Table 14.5. Initial Simplex Tableau After Adding First Cutting Plane (Eq. 14.55)

| | x1 | x2 | s1 | s2 | s3 | b |
|---|---|---|---|---|---|---|
| R1 | 0 | 1 | 1∕10 | 2∕5 | 0 | 5∕2 |
| R2 | 1 | 0 | -1∕10 | 3∕5 | 0 | 3∕2 |
| R3 | 0 | 0 | -1∕10 | -2∕5 | 1 | -1∕2 |
| f | 0 | 0 | 3∕10 | 1∕5 | 0 | f + 7∕2 |
| ci∕(-ai) | N/A | N/A | 3 | 1∕2 | N/A | |

The linear programming problem in Table 14.5 will now be solved using the dual Simplex method. Select R3 from Table 14.5, as it has a negative b value. For each negative coefficient in R3, find the corresponding cost coefficient (entries of the row f), ci, and compute the following.

 min over j of [cj ∕ (-aij)], for all aij < 0 (14.56)

For example, the s1 column can be evaluated as

 (3∕10) ∕ (1∕10) = 3 (14.57)

Since the column for s2 yields the smallest ratio, we pivot on the element -2∕5 in R3 of Table 14.5. We then obtain the final Simplex tableau provided in Table 14.6. This is the final Simplex tableau, as all the elements in the b vector are positive.
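The dual Simplex ratio test can be verified numerically; the entries below are taken from row R3 and the cost row of Table 14.5:

```python
# Dual Simplex ratio test for the Gomory-cut row R3 of Table 14.5.
c = {'s1': 3/10, 's2': 1/5}     # cost-row (f) entries of the nonbasic columns
a = {'s1': -1/10, 's2': -2/5}   # R3 entries in the same columns

# Consider only columns with a negative entry in R3; pick the smallest ratio.
ratios = {j: c[j] / (-a[j]) for j in a if a[j] < 0}
pivot_col = min(ratios, key=ratios.get)
print(pivot_col)  # → s2
```

The ratios come out to 3 for s1 and 1∕2 for s2, so s2 is selected as the pivot column, in agreement with the last row of Table 14.5.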

Table 14.6. Final Simplex Tableau After Adding the First Cutting Plane (Eq. 14.55)

| | x1 | x2 | s1 | s2 | s3 | b |
|---|---|---|---|---|---|---|
| R1 | 0 | 1 | 0 | 0 | 1 | 2 |
| R2 | 1 | 0 | -1∕4 | 0 | 3∕2 | 3∕4 |
| R3 | 0 | 0 | 1∕4 | 1 | -5∕2 | 5∕4 |
| f | 0 | 0 | 1∕4 | 0 | 1∕2 | f + 13∕4 |

The optimal values from Table 14.6 are x1* = 3∕4, x2* = 2, f* = -13∕4. Since the optimal solution has fractional parts, another Gomory cutting plane will be generated from Table 14.6.

4. Solve the problem with the Gomory constraint (Iteration 2):

Choose x1 as the variable for which a Gomory constraint is generated, using row R2 of Table 14.6 (the row in which x1 is basic). Using the discussion provided earlier, we separate the integer and the fractional parts of the coefficients, as shown below.

 x1 = 3∕4 - [(-1∕4)s1 + (3∕2)s3] (14.58)

 x1 = (0 + 3∕4) - [(-1 + 3∕4)s1 + (1 + 1∕2)s3] (14.59)

 3∕4 - [(3∕4)s1 + (1∕2)s3] = x1 - 0 + [(-1)s1 + (1)s3] (14.60)

 3∕4 - [(3∕4)s1 + (1∕2)s3] ≤ 0 (14.61)

Or, the Gomory constraint can be written as

 -(3∕4)s1 - (1∕2)s3 ≤ -3∕4 (14.62)

The initial Simplex tableau for the above problem is provided in Table 14.7. Select row R4, which has the negative b value. Computing the ratio for the negative elements in this row leads to the choice of the s1 column. Pivoting on -3∕4 in row R4 of Table 14.7, we obtain the final Simplex tableau shown in Table 14.8, with optimal values of x1* = 1, x2* = 2, f* = -3.

Table 14.7. Initial Simplex Tableau After Adding Second Cutting Plane (Eq. 14.62)

| | x1 | x2 | s1 | s2 | s3 | s4 | b |
|---|---|---|---|---|---|---|---|
| R1 | 0 | 1 | 0 | 0 | 1 | 0 | 2 |
| R2 | 1 | 0 | -1∕4 | 0 | 3∕2 | 0 | 3∕4 |
| R3 | 0 | 0 | 1∕4 | 1 | -5∕2 | 0 | 5∕4 |
| R4 | 0 | 0 | -3∕4 | 0 | -1∕2 | 1 | -3∕4 |
| f | 0 | 0 | 1∕4 | 0 | 1∕2 | 0 | f + 13∕4 |
| ci∕(-ai) | N/A | N/A | 1∕3 | N/A | 1 | N/A | |

Table 14.8. Final Simplex Tableau After Adding Second Cutting Plane (Eq. 14.62)

| | x1 | x2 | s1 | s2 | s3 | s4 | b |
|---|---|---|---|---|---|---|---|
| R1 | 0 | 1 | 0 | 0 | 1 | 0 | 2 |
| R2 | 1 | 0 | 0 | 0 | 5∕3 | -1∕3 | 1 |
| R3 | 0 | 0 | 0 | 1 | -8∕3 | 1∕3 | 1 |
| R4 | 0 | 0 | 1 | 0 | 2∕3 | -4∕3 | 1 |
| f | 0 | 0 | 0 | 0 | 1∕3 | 1∕3 | f + 3 |

The above example concludes our discussion of the cutting plane method. We discuss other solution approaches next.

14.3.6 Evolutionary Algorithms

Evolutionary algorithms, and certain versions of swarm-based algorithms, are a popular choice for discrete optimization because of their ability to work directly with discrete search spaces. The application of evolutionary algorithms will be studied in Chapter 19.

14.3.7 Software Options for Discrete Optimization

This subsection presents some of the popular software tools available for solving discrete programming problems. Note that most of these software tools can solve more than just integer programming problems. For convenience, we repeat here, as Table 14.9, the table listing software options in Section 5.5. Here, we will focus on the last column of Table 14.9, where discrete optimization software is presented.

Table 14.9. Broad Classification of Software for Optimization—with Discrete Case

| Stand-Alone (SO-SA) | Within Design Framework (SO-WDF) | Within Analysis Package (SO-WAP) | Discrete, Integer, or Mixed |
|---|---|---|---|
| MATLAB Toolbox | iSIGHT | GENESIS | XPRESS |
| NEOS Server | PHX ModelCenter | NASTRAN | CPLEX |
| DOT-VisualDOC | modeFRONTIER | ABAQUS | Excel and Quattro |
| NAG | XPRESS | Altair | NEOS Server |
| NPSOL | LINDO/LINGO | ANSYS | MINLP |
| GRG2 | GAMS | COMSOL | GAMS WORLD |
| LSSOL | Boss Quattro | MS Excel | CPLEX |
| What'sBest! | BTN | RISKOptimizer | PhysPro |
| Busi. Spreadsh. | | | |

1. The XPRESS suite of optimization algorithms is distributed by Dash Optimization. The XPRESS MIP tool provides the capability to solve mixed-integer programming problems by using sophisticated implementations of the branch and bound and cutting plane algorithms. Further information about the XPRESS suite can be found at the following website.

www.fico.com/en/products/fico-xpress-optimization-suite

2. CPLEX solvers, created by CPLEX Optimization, Inc., are designed to handle problems with mixed-integer programming features. The solver uses a combination of algorithms and heuristics. Further information regarding the free CPLEX solver can be found at the following website.

openopt.org/cplex

3. Excel and Quattro Pro Solvers, developed by Frontline Systems, are available to solve small-scale integer programming problems using spreadsheets. The integer programming method employs a branch and bound technique, which uses a nonlinear programming tool known as GRG2.

4. The NEOS Server is a website hosted by Argonne National Laboratory, and is free to use. Several state-of-the-art optimization software packages, including those for integer programming, are available from the website www.neos-server.org/neos/solvers

5. The GAMS WORLD [11, 12] is a website hosted by the international GAMS community. It provides a large suite of source codes for solving mixed-integer nonlinear programming (MINLP) problems, which are an important and complex class of discrete optimization problems. The MINLP solver codes can be found at the following website: www.gamsworld.org/minlp/solvers.htm

The above subsection concludes our discussion on the solution approaches of discrete optimization.

14.4 Summary

This chapter introduced important aspects of discrete optimization. The treatment of the subject in this chapter is fairly simple when compared to more detailed references. Simple solution approaches that can be readily implemented for engineering discrete problems were introduced. Examples were provided to further illustrate how the algorithms work. A list of some popular software options was also provided, together with their pertinent characteristics.

14.5 Problems

14.1 Perform a literature survey and find at least two examples of discrete optimization problems that were not discussed in the chapter.

14.2 Solve the problem presented in Sec. 14.3.2 graphically. In addition, reproduce the details shown in Fig. 14.2. Clearly label your plot.

14.3 Solve the following integer problem graphically. Clearly label your plots.

 (14.63)

subject to

 (14.64) (14.65) (14.66) (14.67)

14.4 Solve the following integer problem graphically. Clearly label your plots.

 (14.68)

subject to

 (14.69) (14.70) (14.71) (14.72)

14.5 Duplicate the results for the integer programming problem presented in Sec. 14.3.4.

14.6 Solve the integer problem shown in Problem 14.3 using the branch and bound method. Clearly write down the subproblem statements, and the corresponding optimal solutions.

14.7 Solve the integer problem shown in Problem 14.4 using the branch and bound method. Write down the detailed solutions.

14.8 Duplicate the results for the cutting plane method example discussed in Sec. 14.3.5.

14.9 Solve the integer problem shown in Problem 14.3 using Gomory's cutting plane method. Clearly write down the subproblem statements, and the corresponding optimal solutions.

14.10 Solve the integer problem shown in Problem 14.4 using Gomory's cutting plane method. Write down the detailed solutions.

BIBLIOGRAPHY OF CHAPTER 14

[1] J. K. Karlof. Integer Programming: Theory and Practice. CRC Press, 2005.

[2] D. S. Chen, R. G. Batson, and Y. Dang. Applied Integer Programming: Modeling and Solution. John Wiley and Sons, 2010.

[3] K. H. Rosen. Discrete Mathematics and its Applications. McGraw-Hill Science, 7th edition, 2012.

[4] M. M. Syslo, N. Deo, and J. S. Kowalik. Discrete Optimization Algorithms: With Pascal Programs. Courier Dover Publications, 2006.

[5] S. S. Rao. Engineering Optimization: Theory and Practice. John Wiley and Sons, 4th edition, 2009.

[6] S. Chowdhury, W. Tong, A. Messac, and J. Zhang. A mixed-discrete particle swarm optimization algorithm with explicit diversity-preservation. Structural and Multidisciplinary Optimization, 47(3):367-388, 2013.

[7] Fair Isaac Corp. XPRESS optimization suite. www.dashoptimization.com/home/index.html.

[8] IBM. IBM ILOG CPLEX Optimization Studio. www.ilog.com/products/cplex.

[9] Frontline Systems, Inc. Frontline Solvers. www.solver.com/exceluse.htm.

[10] S. Smith and L. S. Lasdon. Solving sparse nonlinear programs using GRG. INFORMS Journal on Computing, 4(1):2-15, 1992.

[11] GAMS Development Corp. and GAMS Software GmbH. GAMS world. www.gamsworld.org.

[12] GAMS World. MINLP world. www.gamsworld.org/minlp/index.htm.
