Optimization in Practice with MATLAB for Engineering Students and Professionals (2015)
PART 4
GOING DEEPER: INSIDE THE CODES AND THEORETICAL ASPECTS
Part IV of the book explains what is inside the optimization codes and how they work. This knowledge makes it possible to use the optimization codes with more confidence and reliability. It also helps you know what to do when things do not work. The material presented is excellent preparation for further studies in optimization. With this more advanced knowledge, it becomes possible to understand such things as the code error messages. The material examined in this part of the book will reinforce and advance the knowledge of optimization learned previously.
Metaphorically, the material up to this point taught us how to drive the design from a bad state to an optimal state. We did so without understanding how the car we are driving works, how to fix it, how to design it, or how to actually build it. However, we had sufficient practical knowledge to drive from a bad to a good design. The following material will teach us how the car works, how to fix it, how to make it perform very well, and even prepare us to venture into the advanced world of designing cars (if that is our interest); that is, building optimization codes and algorithms. Interestingly, most of us want to use the power of optimization without having to become experts in the intricate details of the inner workings of the optimization code. This book provides the breadth of presentation to suit these diverse objectives. More advanced topics are presented beyond Part IV.
Specifically, Part IV presents three elementary topics:
11. Linear Programming
12. Nonlinear Programming with No Constraints
13. Nonlinear Programming with Constraints
11
Linear Programming
11.1 Overview
Linear programming (LP) is a technique for optimizing a linear objective function, subject to linear equality and linear inequality constraints. Linear programming is an important field of optimization for several reasons (Refs. [1, 2]). Many practical problems in engineering and operations research can be expressed as linear programming problems. Linear programming problems can be solved in an easy, fast, and reliable manner.
Linear programming was first used in the field of economics, and still remains popular in the areas of management, economics, finance, and engineering. During World War II, George Dantzig of the U.S. Air Force used LP techniques for planning problems. He invented the Simplex method, which is one of the most popular methods for solving LP problems.
Linear programming is a well developed subject in operations research, and several references that focus on linear programming alone are available [3, 4, 5, 6]. The reader is encouraged to consult these references for a more detailed discussion of linear programming.
In this chapter, the basics of linear programming problems will be studied. First, an introduction to the basic terminology and important concepts in LP problems will be presented in Sec. 11.2. In Sec. 11.3, the graphical approach for solving LP problems will be discussed, and four types of possible solutions for an LP problem will be introduced. Details on the MATLAB linear programming solver are presented in Sec. 11.4. Sections 11.5 and 11.6 discuss the various concepts involved in the Simplex method. In Sec. 11.7, the notion of duality and interior point methods are briefly discussed. Section 11.8 concludes the chapter with a summary.
11.2 Basics of Linear Programming
In this section, some basic terminology and definitions in linear programming will be discussed.
A generic linear programming problem consisting of linear equality and linear inequality constraints is given as
(11.1)    minimize   z = c_{1}x_{1} + c_{2}x_{2} + ... + c_{n}x_{n}
subject to
(11.2)    a_{11}x_{1} + a_{12}x_{2} + ... + a_{1n}x_{n} ≤ b_{1}
          ...
(11.3)    a_{m1}x_{1} + a_{m2}x_{2} + ... + a_{mn}x_{n} ≤ b_{m}
(11.4)    aeq_{11}x_{1} + aeq_{12}x_{2} + ... + aeq_{1n}x_{n} = beq_{1}
          ...
(11.5)    aeq_{p1}x_{1} + aeq_{p2}x_{2} + ... + aeq_{pn}x_{n} = beq_{p}
(11.6)    x_{i} ≥ x_{i_lb},   i = 1, ..., n
(11.7)    x_{i} ≤ x_{i_ub},   i = 1, ..., n
where z is the linear objective function to be minimized; c represents the vector of coefficients for each of the n design variables; A and b represent the matrix and vector of coefficients for the m inequality constraints, respectively; A_{eq} and b_{eq} represent the matrix and vector of coefficients for the p equality constraints, respectively; and x_{i_lb} and x_{i_ub} are the lower and upper bounds on the ith design variable, respectively. Note that the above problem is not defined in the so-called standard form, which is commonly used in some LP solvers. The standard form will be discussed later.
In a matrix notation, the above formulation can be written as
(11.8)    minimize   z = c^{T}x
subject to
(11.9)    Ax ≤ b
(11.10)   A_{eq}x = b_{eq}
(11.11)   x_{lb} ≤ x ≤ x_{ub}
where x_{lb} and x_{ub} are vectors of the lower and upper bounds on the design variables, respectively. Note that in some references, the objective function, z, is called the cost function, and the coefficients c_{1},...,c_{n} are known as cost coefficients. Some references pose the LP problem as a maximization problem, whereas the convention in the present chapter is minimization.
11.3 Graphical Solution Approach: Types of LP Solutions
The graphical approach is a simple technique for solving LP problems. The procedure involves plotting the contours of the objective function and the constraints. The feasible region of the constraints and the optimal solution are then identified graphically. Note that the graphical approach to solving LP problems becomes impractical as the number of design variables grows beyond two or three.
There are four possible types of solutions in a generic LP problem: (1) unique solution, (2) segment solution, (3) no solution, and (4) solution at infinity. These four types of solutions will be studied using examples.
11.3.1 The Unique Solution
Consider the following LP problem.
(11.12)   minimize   z = x_{1} - 2x_{2}
subject to
(11.13) 

(11.14) 

(11.15) 
Plot the constraints and the objective function contours, as shown in Fig. 11.1(a). There is a unique solution for this problem, where the objective function contour has the least value while remaining in the feasible region.
Figure 11.1. Types of LP Solutions
Here is an important point to keep in mind. If a constant value is added to the objective function in the above problem, for example, x_{1} - 2x_{2} + 5, and the constraints remain unchanged, how would it affect the optimal solution? The optimal values of the design variables will not change because of the added constant. This is because the additional constant in the objective function does not change the slope of the function contours. However, the optimal value of the objective function will change when compared to the original objective function.
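This invariance is easy to check numerically. Since an LP optimum lies at a vertex of the feasible polygon, it suffices to compare vertices. The sketch below uses a hypothetical set of vertices (an assumption for illustration, not the polygon of Fig. 11.1(a)) and the objective x_{1} - 2x_{2} mentioned above; adding the constant 5 shifts the optimal value but not the optimal point.

```python
# Hypothetical vertices of a feasible polygon (assumed for illustration).
vertices = [(0.0, 0.0), (4.0, 0.0), (0.5, 3.5), (0.0, 3.0)]

def argmin_vertex(obj):
    """Return the vertex minimizing obj, together with the minimum value."""
    best = min(vertices, key=obj)
    return best, obj(best)

f  = lambda x: x[0] - 2.0 * x[1]        # original objective
f5 = lambda x: x[0] - 2.0 * x[1] + 5.0  # same objective plus a constant

x_star,  z_star  = argmin_vertex(f)
x_star5, z_star5 = argmin_vertex(f5)

# Same minimizer; only the optimal value is shifted by the constant.
print(x_star, x_star5, z_star, z_star5)
```

The two calls return the same vertex; the optimal objective values differ by exactly the added constant.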
11.3.2 The Segment Solution
Next, consider an LP problem with infinitely many solutions that form a segment. Examine the following example:
(11.16) 
subject to
(11.17) 

(11.18) 

(11.19) 
From Figure 11.1(b), the slope of the objective function contours and the slope of the constraint boundary of 0.5x_{1} + x_{2} ≤ 10 are the same. Upon optimization, the optimal objective function contour coincides with this constraint boundary, resulting in infinitely many solutions along the segment shown in Fig. 11.1(b).
A discussion of the third type of LP solutions is presented next.
11.3.3 No Solution
In this case, we will study LP problems that do not have a feasible solution. Consider the following example.
(11.20) 
subject to
(11.21) 

(11.22) 

(11.23) 
Observe the feasible region of this problem from Fig. 11.1(c). The respective feasible regions of the inequality constraints do not intersect. There is no solution that satisfies all the constraints. Thus, there is no solution to this problem.
11.3.4 The Solution at Infinity
Consider the following problem.
(11.24) 
subject to
(11.25) 

(11.26) 
The above problem leads to a solution at infinity. The feasible region, as illustrated in Fig. 11.1(d), is not bounded to yield a finite optimum value. The solution for this problem lies at infinity.
An unconstrained linear programming problem has a solution at infinity. Such problems are rarely encountered in practice. In contrast, unconstrained nonlinear programming problems are fairly common in practice, since nonlinear optimization problems do not require the presence of a constraint in order to yield a finite optimum.
Thus far, four possible types of LP solutions have been described. The graphical approach to solving LP problems is feasible only for small scale problems. For large scale problems, software implementations of LP solution algorithms are used. However, note that graphical visualization can still be used to examine the solution obtained for a large scale LP problem.
11.4 Solving LP Problems Using MATLAB
In this section, the command linprog in MATLAB is revisited. The MATLAB LP solvers use variations of the numerical methods presented later in this chapter. Depending on the size of the problem, linprog employs different solution strategies, such as the Simplex method and interior point methods. The command allows for linear equality constraints, linear inequality constraints, and bounds on the design variables.
Different software codes follow their own respective formulations. Before using the solver, the problem is transformed into the formulation specified by the solver. The default problem formulation for linprog is given as
(11.27)   minimize   f^{T}x
subject to
(11.28)   Ax ≤ b
(11.29)   A_{eq}x = b_{eq}
(11.30)   x_{lb} ≤ x ≤ x_{ub}
In the above formulation, f is the vector of coefficients of the objective function; A and A_{eq} represent the matrices of the left-hand-side coefficients of the linear inequality and linear equality constraints, respectively; b and b_{eq} represent the vectors of the right-hand-side values of the linear inequality and equality constraints, respectively; and x_{lb} and x_{ub} are the lower and upper bounds on the design variables, respectively.
Example: Consider the following example to demonstrate the use of the linprog command.
(11.31) 
subject to
(11.32) 

(11.33) 

(11.34) 
The constraints can be rewritten as -4x_{1} + 6x_{2} ≤ 9 and x_{1} + x_{2} ≤ 4, according to the MATLAB standard formulation. The MATLAB code that solves the above problem is summarized below.
f = [-1; -2];      % Objective function coefficients
A = [-4 6; 1 1];   % LHS inequalities
b = [9; 4];        % RHS inequalities
Aeq = [];          % No equalities
beq = [];          % No equalities
lb = [0; 0];       % Lower bounds
ub = [];           % No upper bounds
x0 = [1; 1];       % Initial guess
x = linprog(f,A,b,Aeq,beq,lb,ub,x0)
The output generated by MATLAB is given below.
Warning: Large scale (interior point) method uses a
built-in starting point;
ignoring user-supplied X0.
> In linprog at 235
Optimization terminated.
x =
1.5000
2.5000
The warning displayed above informs the reader that the interior point algorithm used by the MATLAB solver does not need a starting point. The starting point provided has been ignored by the solver. These messages are warnings that often help the user understand the solver algorithm, and should not be mistaken for error messages.
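The reported optimum can be cross-checked by hand. At x = (1.5, 2.5), both inequality constraints are active; taking them as -4x_{1} + 6x_{2} ≤ 9 and x_{1} + x_{2} ≤ 4 (signs chosen to be consistent with the reported optimum), solving the 2×2 system of binding constraints reproduces the solver's answer. A plain-language sketch of that check:

```python
# Solve the 2x2 system of binding constraints by Cramer's rule:
#   -4*x1 + 6*x2 = 9
#    1*x1 + 1*x2 = 4
a11, a12, b1 = -4.0, 6.0, 9.0
a21, a22, b2 = 1.0, 1.0, 4.0

det = a11 * a22 - a12 * a21          # -4 - 6 = -10
x1 = (b1 * a22 - a12 * b2) / det     # (9 - 24) / -10 = 1.5
x2 = (a11 * b2 - b1 * a21) / det     # (-16 - 9) / -10 = 2.5

print(x1, x2)   # matches linprog's reported x
```

When an LP solver returns a solution, identifying which constraints are active and re-solving that square system is a quick sanity check.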
Thus far, we have studied the graphical approach to solving LP problems, followed by the use of the MATLAB software options. Next, we present a popular numerical technique for solving LP problems that forms the basis of many commercial solvers.
11.5 Simplex Method Basics
This section introduces the Simplex method. Before discussing the algorithm of the Simplex method, some terminology and basic definitions associated with the Simplex method are presented.
11.5.1 The Standard Form
In order to apply the Simplex method, the problem must be posed in standard form. The standard form of an LP problem for the Simplex method is given below.
(11.35)   minimize   z = c_{1}x_{1} + c_{2}x_{2} + ... + c_{n}x_{n}
subject to
(11.36)   a_{11}x_{1} + a_{12}x_{2} + ... + a_{1n}x_{n} = b_{1}
          ...
(11.37)   a_{m1}x_{1} + a_{m2}x_{2} + ... + a_{mn}x_{n} = b_{m}
(11.38)   x_{i} ≥ 0,   i = 1, ..., n
(11.39)   b_{i} ≥ 0,   i = 1, ..., m
(11.40)   i = 1, ..., n;   j = 1, ..., m
The standard formulation does not contain inequality constraints. In a matrix notation, the standard formulation can be written as follows.
(11.41)   minimize   z = cx
subject to
(11.42)   Ax = b
(11.43)   x ≥ 0
where c = [c_{1},...,c_{n}] is the row vector of cost coefficients for the objective function; A is an m × n matrix of the coefficients for the linear equality constraints; and b is the m × 1 vector of the right-hand-side values.
The feasible region of the standard LP problem is a convex polygon. The optimal solution of the LP problem lies at one of the vertices of the polygon. In the Simplex method, the solution process moves from one vertex of the polygon to the next along the boundary of the feasible region.
11.5.2 Transforming into Standard Form
In the standard definition of LP problems, there are no inequality constraints. All design variables have nonnegativity constraints. If a given problem is not in this standard form, certain operations are performed to transform the problem into the Standard Form. How the operations are performed for inequality constraints and for unbounded variables is discussed below.
Inequality Constraints
If the given problem formulation contains inequality constraints of the form g(x) ≤ 0, they are transformed into equality constraints by using slack variables. For constraints of the form g(x) ≤ 0, add a nonnegative slack variable, s_{1}, to the left-hand side of the constraint, yielding g(x) + s_{1} = 0. The variable s_{1} is called a slack variable because it represents the slack between the left-hand side and the right-hand side of the inequality.
Inequalities of the form g(x) ≥ 0 are transformed into equality constraints by using surplus variables. Subtract a nonnegative surplus variable, s_{2}, from the left-hand side of the constraint, yielding g(x) - s_{2} = 0. The variable s_{2} represents the surplus between the left-hand side and the right-hand side of the inequality.
The slack/surplus variables are unknowns, and will be determined as part of the LP solution process.
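The bookkeeping above can be sketched as a small routine. Each inequality row (coefficients, right-hand side, sense) gains one new column per constraint: +1 for a slack (≤) and -1 for a surplus (≥). The function name and data layout here are illustrative, not from the text.

```python
def to_equalities(rows):
    """rows: list of (coeffs, rhs, sense) with sense in {'<=', '>='}.
    Returns (A, b) for the equivalent equality system A x' = b, where
    x' appends one slack/surplus variable per inequality."""
    m = len(rows)
    A, b = [], []
    for i, (coeffs, rhs, sense) in enumerate(rows):
        extra = [0.0] * m
        extra[i] = 1.0 if sense == '<=' else -1.0   # slack vs. surplus
        A.append(list(coeffs) + extra)
        b.append(rhs)
    return A, b

# x1 + x2 <= 4  and  -x1 + x2 >= -3  become equalities in (x1, x2, s1, s2).
A, b = to_equalities([([1.0, 1.0], 4.0, '<='), ([-1.0, 1.0], -3.0, '>=')])
print(A, b)
```

Note that the standard form also requires nonnegative right-hand sides; rows with a negative b_{i} would additionally be multiplied through by -1, which this sketch does not do.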
Example: Consider the following linear programming problem.
(11.44) 

subject to
(11.45) 

(11.46) 

(11.47) 
The standard form for the above formulation can be given as
(11.48) 

subject to
(11.49) 

(11.50) 

(11.51) 
The standard form for the LP formulation requires the design variables to be nonnegative.
Unbounded Design Variables
In the standard form, the design variables should be nonnegative, x ≥ 0. However, in practice, the bounds on the design variable could also be of the form x ≤ 0. In some problems, the design variables may be indefinite (i.e., no bounds may be specified). In these cases, how are the design variables put into a standard form?
If a design variable, x_{i}, does not have bounds imposed in the problem, the following technique is used. Let x_{i} = s_{1} - s_{2}, where s_{1},s_{2} ≥ 0. In the standard form, the variable x_{i} is then replaced by s_{1} - s_{2}, and the additional constraints s_{1},s_{2} ≥ 0 are added to the problem. In other words, the unbounded design variable is rewritten as the difference between two nonnegative additional variables. The additional variables will be determined as part of the LP solution process.
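A quick way to see that the substitution x_{i} = s_{1} - s_{2} loses nothing: any real number splits into its positive and negative parts, both nonnegative. A minimal check (the helper name is ours):

```python
def split_free(x):
    """Split a free variable into nonnegative parts with x = s1 - s2."""
    s1 = max(x, 0.0)    # positive part
    s2 = max(-x, 0.0)   # negative part
    return s1, s2

for x in (-7.5, 0.0, 3.25):
    s1, s2 = split_free(x)
    assert s1 >= 0.0 and s2 >= 0.0 and s1 - s2 == x
print("all splits valid")
```

The split is not unique (adding the same positive amount to both parts also works); the LP solution process simply settles on some valid pair.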
Example: Consider the following formulation.
(11.52) 

subject to
(11.53) 

(11.54) 

(11.55) 
Note that the variable x_{3} is unbounded in the above formulation. Assume that x_{3} = s_{1} - s_{2} with s_{1},s_{2} ≥ 0. The standard formulation can be written as follows.
(11.56) 
subject to
(11.57) 

(11.58) 

(11.59) 
Next, we introduce the Gauss Jordan method, which forms an important component of the Simplex method.
11.5.3 Gauss Jordan Elimination
Note that the number of variables (including the n design variables and the p slack variables) is not necessarily equal to the number of equations, m. If the number of variables is equal to the number of equality constraints, then the solution is uniquely defined. In most LP problems, there exist more variables than equations. This results in an underdetermined system of equations with infinitely many feasible solutions for the equality constraint set. The optimization problem then lies in determining which feasible solution(s) minimize the objective function, while satisfying the nonnegativity constraints on the design variables.
Example: Consider the following underdetermined system of equations:
(11.60) 

(11.61) 

(11.62) 
The above set of equations has more variables than equations. Therefore, there are infinitely many solutions for this case.
To work efficiently with the constraint set of the LP problem, we reduce it to a special form. The set of equations in this special form is said to be in a canonical form. The original constraint set and the canonical form are equivalent (i.e., they have the same set of solutions). By transforming the LP constraint set into a canonical form (which is easier to solve), the solutions can be found more efficiently. To solve LP problems, we use a canonical form known as the reduced row echelon form. This approach of using the reduced row echelon form to solve a set of linear equations is known as Gauss Jordan elimination.
How can a given constraint set be reduced to a reduced row echelon form? A canonical form is usually defined with respect to a set of dependent, or basic, variables, which are expressed in terms of a set of independent, or nonbasic, variables. The choice of basic variables is arbitrary. A system of m equations and n variables is said to be in a reduced row echelon form with respect to a set of basic variables, x_{1},...,x_{m}, if every basic variable has a coefficient of one in only one equation, and a zero coefficient in all the other equations. A generic matrix-based representation of a set of equations in a reduced row echelon form with m basic variables and p nonbasic variables is given as
(11.63)   x_{b} + d x_{nb} = b
where d is the matrix of coefficients for the nonbasic variables; x_{b} is the set of basic variables; and x_{nb} is the set of nonbasic variables. The number of basic variables is equal to the number of equations.
The reduced row echelon form is illustrated with the help of the following example.
Example: The following equations are in a reduced row echelon form with respect to the variables x_{1},x_{2},x_{3}, and x_{4}.
(11.64) 

(11.65) 

(11.66) 

(11.67) 
Writing the above equation set in a matrix form allows us to obtain the following.
(11.68) 
Notice the matrix of coefficients on the lefthandside. Each row of the matrix represents one constraint. We note that the coefficients corresponding to the basic variables x_{1},x_{2},x_{3}, and x_{4} are equal to one in only one equation, and zero elsewhere.
The constraint set of an LP problem is not usually given in the reduced row echelon form. A series of row operations must be performed to transform it into that form. The details are discussed next.
11.5.4 Reducing to a Row Echelon Form
A pivot operation consists of a series of elementary row operations to make a particular variable a basic variable (i.e., reduce the coefficient of the variable to unity in only one equation, and to zeroes in the other equations). A particular variable to be made basic, x_{bi}, could exist in some or in all of the m equations. We need to decide which coefficient of the basic variable should be made one (i.e., in which equation). By definition, each equation is to have only one basic variable with unit coefficient. The choice as to which equation corresponds to which basic variable is arbitrary, or is based on algebraic convenience for the reduced row echelon form. There will be further restrictions on this issue when the Simplex method is discussed.
There are two types of elementary row operations that can be performed to reduce a set of equations into a reduced row echelon form: (1) Multiply both sides of an equation with the same nonzero number. (2) Replace one equation by a linear combination of other equations.
Example: Reduce the following set of equations into a row echelon form.
(11.69) 

(11.70) 

(11.71) 
Choose x_{1}, x_{2}, and x_{3} as basic variables. In order to obtain a row echelon form, perform operations such that the variables x_{1}, x_{2}, and x_{3} appear in only one equation with a unit coefficient, and do not appear in the other equations. Use the following representation, where the first four columns represent the coefficients of each variable, and the last column represents the righthandside of the equation.
(11.72) 
First, make x_{1} a basic variable. Choose x_{1} to have a unit coefficient in R_{1}, and zero coefficients in R_{2} and R_{3}. Replace R_{2} by R_{2} + R_{1} and R_{3} by R_{3} - R_{1} to obtain
(11.73) 
With the above transformations, x_{1} is made a basic variable (appears only in R_{1}). Now make x_{2} a basic variable. That is, make its coefficient one in R_{2} and zeroes in the other rows. First, divide R_{2} by 4 to obtain a unit coefficient in R_{2}.
(11.74) 
Now replace R_{1} by R_{1} - R_{2} and R_{3} by R_{3} - R_{2} to make the coefficients of x_{2} zeroes in the other equations to obtain
(11.75) 
Next, make x_{3} a basic variable by making its coefficient unity in R_{3} and zeroes in the other equations. Divide R_{3} by 3 to obtain the following.
(11.76) 
Replace R_{2} by R_{2} + 2R_{3} and R_{1} by R_{1} - R_{3} to make the coefficients of x_{3} zeroes in the other equations.
(11.77) 
The above set of equations is in the reduced row echelon form with respect to the basic variables x_{1}, x_{2}, and x_{3}. Note that the particular choice of making x_{1}, x_{2}, and x_{3} the basic variables is arbitrary.
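The row operations above are mechanical enough to automate. The sketch below pivots an augmented matrix [A | b] so that a chosen set of columns becomes basic, mirroring the hand calculation; the matrix entries are assumptions for illustration, not the textbook's Eqs. (11.69)-(11.71).

```python
def pivot(T, r, c):
    """Make column c basic, with its unit coefficient in row r."""
    p = T[r][c]
    T[r] = [v / p for v in T[r]]                      # scale to unit pivot
    for i in range(len(T)):
        if i != r and T[i][c] != 0.0:
            f = T[i][c]
            T[i] = [a - f * e for a, e in zip(T[i], T[r])]

def reduce_to_rref(T, basic_cols):
    """Reduced row echelon form with respect to the chosen basic columns."""
    for r, c in enumerate(basic_cols):
        pivot(T, r, c)

# Augmented matrix [A | b] for an illustrative 3-equation, 4-variable system.
T = [[1.0, 1.0, 2.0, 1.0, 5.0],
     [2.0, 1.0, 1.0, 1.0, 4.0],
     [1.0, 2.0, 1.0, 2.0, 6.0]]
reduce_to_rref(T, basic_cols=[0, 1, 2])

# Columns 0..2 now form an identity; the basic solution reads off column 4.
print([row[:3] for row in T], [row[4] for row in T])
```

Setting the nonbasic variable x_{4} = 0 then gives the basic solution directly from the last column, exactly as in the hand derivation.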
11.5.5 The Basic Solution
Thus far, our discussion has dealt with how to reduce a given set of equations into a canonical form. The next step is to find the solution for the set of equations in the canonical form. A basic solution is obtained from the canonical form by setting the nonbasic (or independent) variables to zero. A basic feasible solution is a basic solution in which the values of the basic variables are nonnegative. In other words, a basic feasible solution is feasible for the standard LP formulation, whereas a basic solution is not always feasible.
Example: Consider the following canonical form derived previously.
(11.78) 
Set the nonbasic variable, x_{4}, to zero. Then, obtain the basic solution by solving for x_{1}, x_{2}, and x_{3}. The basic solution can then be written as x_{1} = -1, x_{2} = 4, x_{3} = 1, and x_{4} = 0. Note that this basic solution is not a basic feasible solution for the standard LP formulation since x_{1} < 0.
The choice of the set of basic variables is not unique, and can be decided according to computational convenience. In the example considered above, x_{2}, x_{3}, and x_{4} could have been chosen as basic variables instead of x_{1}, x_{2}, and x_{3}. For a generic problem, any set of m variables (recall that there are m equations) from the possible n variables can be chosen as basic variables. This implies that the number of basic solutions for a generic standard LP problem with m constraints and n variables is given as
(11.79)   C_{m}^{n} = n! / (m!(n - m)!)
Example: Consider the following set of equations from the previous example.
(11.80) 

(11.81) 

(11.82) 
Here, m = 3 and n = 4. Therefore, there are C_{3}^{4} = 4!/(3!(4 - 3)!) = 4 basic solutions.
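The count in Eq. 11.79 is the binomial coefficient "n choose m": pick which m of the n variables are basic. For the example, the four candidate basic-variable sets can be enumerated directly:

```python
from itertools import combinations
from math import comb

m, n = 3, 4
choices = list(combinations(range(1, n + 1), m))   # candidate basic-variable sets
print(choices)
assert len(choices) == comb(n, m) == 4
```

Each tuple lists the indices of one possible set of basic variables; only some of the corresponding basic solutions will turn out to be feasible.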
As mentioned earlier, the feasible region of the standard LP problem is a convex polygon. Each vertex of this convex polygon corresponds to a basic feasible solution of the constraint set. The optimal solution of the LP problem,which is the basic feasible solution with the minimum objective function value, lies at one of the vertices of the polygon. In the Simplex method, the solution process moves from one vertex of the polygon to the next along the boundary of the feasible region. Now that the supporting mathematics of the Simplex method has been presented, we may proceed to discuss the Simplex algorithm.
11.6 Simplex Algorithm
11.6.1 Basic Algorithm
The optimal solution of the LP problem lies at one of the vertices of the feasible convex polygon. A cumbersome approach to solve an LP problem would be to list all the basic feasible solutions from which the optimal solution could be chosen. In the Simplex method, the solution process efficiently moves from one basic feasible solution to the next, while ensuring objective function reduction at each iteration (Ref. [7]).
Before discussing the details of the Simplex algorithm, please note the following. The standard LP problem was posed as a minimization problem. The rules of the following algorithm apply to minimization problems only. Some other textbooks may treat the standard LP problem as a maximization problem, and the rules of the algorithm would differ accordingly.
The Simplex method algorithm is presented in six generic steps:
1. Transformation into the Standard LP Problem
2. Formation of the Simplex Tableau
3. Choice of the Variable that Enters the Basis - Identify the Pivotal Column
4. Application of the Minimum Ratio Rule - Identify the Pivotal Row
5. Reduction to Canonical Form
6. Checking for Optimality
The presentation of these steps follows. The algorithm of the Simplex method is summarized in Fig. 11.2.
Figure 11.2. Simplex Algorithm
1. Transform into Standard LP Problem: Transform the given problem into the standard LP formulation by adding slack/surplus variables. Consider a generic formulation with n design variables, m inequality constraints, and m slack variables. The standard LP formulation is given as
(11.83)   minimize   f = c_{1}x_{1} + c_{2}x_{2} + ... + c_{n}x_{n}
subject to
(11.84)   a_{11}x_{1} + a_{12}x_{2} + ... + a_{1n}x_{n} + s_{1} = b_{1}
          ...
(11.85)   a_{m1}x_{1} + a_{m2}x_{2} + ... + a_{mn}x_{n} + s_{m} = b_{m}
(11.86)   x_{i} ≥ 0,   i = 1, ..., n
(11.87)   s_{j} ≥ 0,   j = 1, ..., m
Example: Consider the following LP problem.
(11.88)   minimize   f = x_{1} - 2x_{2}
subject to
(11.89)   x_{1} + x_{2} ≤ 4
(11.90)   -x_{1} + x_{2} ≤ 3
(11.91)   x_{1} ≥ 0,   x_{2} ≥ 0
The standard form for the above problem is given below.
(11.92)   minimize   f = x_{1} - 2x_{2}
subject to
(11.93)   x_{1} + x_{2} + s_{1} = 4
(11.94)   -x_{1} + x_{2} + s_{2} = 3
(11.95)   x_{1}, x_{2}, s_{1}, s_{2} ≥ 0
2. Form the Initial Simplex Tableau: List the constraints and the objective function coefficients in the form of a table, known as the Simplex tableau, shown below.
(11.96)
             x_{1}   ...  x_{n}    s_{1}  ...  s_{m}  |  b
   R_{1}     a_{11}  ...  a_{1n}     1    ...    0    |  b_{1}
    ...
   R_{m}     a_{m1}  ...  a_{mn}     0    ...    1    |  b_{m}
   R_{m+1}   c_{1}   ...  c_{n}      0    ...    0    |  f
Example: The Simplex tableau for the example is shown in Table 11.1.
The basic feasible solution for the initial Simplex tableau with s_{1} and s_{2} as basic variables is x_{1} = 0,x_{2} = 0,s_{1} = 4, and s_{2} = 3, and the function value is f = 0 (see Fig. 11.3).
Figure 11.3. Simplex Method Example
3. Choose the Variable that Enters the Basis - Identify the Pivotal Column: The above set of equations is in the reduced row echelon form with respect to the slack variables, s_{1},...,s_{m}. The Simplex algorithm begins with this initial basic feasible solution. By observing the coefficients of the objective function row in the initial Simplex tableau, the algorithm moves to the adjacent basic feasible solution that reduces the objective function. The other adjacent basic solution(s) with objective function value(s) higher than the current solution are ignored.
In order to move to the next basic feasible solution, the algorithm proceeds by making an existing basic variable a nonbasic variable. In addition, an existing nonbasic variable is made into a basic variable. How do we choose which nonbasic variable becomes a basic variable? Similarly, how do we choose which will be the next basic variable?
The nonbasic variable with the most negative coefficient in the objective function row is selected to become a basic variable in the next iteration. This choice is driven by our interest in minimizing the objective function value. The variable with the most negative coefficient has the potential to reduce the objective function value to the maximum extent, when compared to the other variables. In other words, this choice yields the maximum improvement in the objective value per unit increase of the variable. In contrast, a nonbasic variable's coefficient plays no active role in the minimization process, since the values of all nonbasic variables are set to zero in determining the basic feasible solutions. As the nonbasic variable enters the basis (in the next iteration), it will directly impact the function minimization process. Similarly, in a function maximization process, the nonbasic variable with the largest positive coefficient would be chosen to enter the basis in the next iteration.
According to the above approach, each iteration of the Simplex method makes a nonbasic variable a basic variable. The corresponding variable is then said to enter the basis. When none of the coefficients in the objective function row are negative at an iteration, the corresponding basic solution is the optimal solution. Therefore, the Simplex algorithm steps are repeated until no negative coefficient remains in the objective function row.
Example: Consider the initial Simplex tableau given in Table 11.1. Currently, the basic variables are s_{1} = 4 and s_{2} = 3, since they each appear with a unit coefficient in only one equation and have zero coefficients in the other equations. The nonbasic variables are x_{1} = 0 and x_{2} = 0, and the corresponding function value is f = 0. Observing the entries of the objective function row (R_{3}), the coefficient of x_{2} is negative (see the pivotal column identified in Table 11.2). The current basic variables, s_{1} and s_{2}, are not part of the objective function (see the corresponding coefficients in R_{3} in Table 11.2). If x_{2} is made a basic variable, the objective function value can be reduced from its current value. Therefore, the variable x_{2} enters the basis, as per Table 11.2. (Note that "enters the basis" means it will enter the basis in the next iteration.)
Table 11.1. Initial Simplex Tableau

           x_{1}   x_{2}   s_{1}   s_{2}  |  b
   R_{1}     1       1       1       0    |  4
   R_{2}    -1       1       0       1    |  3
   R_{3}     1      -2       0       0    |  f
Table 11.2. Simplex Method Example: Identifying the Pivotal Column

           x_{1}   x_{2}*  s_{1}   s_{2}  |  b
   R_{1}     1       1       1       0    |  4
   R_{2}    -1       1       0       1    |  3
   R_{3}     1      -2       0       0    |  f

   (* pivotal column: x_{2} has the most negative objective-row coefficient)
4. Minimum Ratio Rule - Identify the Pivotal Row: In the previous step, the variable entering the basis was identified. However, that variable may appear in all the equations in the constraint set. In other words, the previous step identified the column of interest, or the pivotal column. For a variable to be basic, recall that it must have a unit coefficient in one equation only, and zero coefficients in all other equations. Which equation will have the unit coefficient? In other words, how is the row of interest, or the pivotal row, determined?
The number of basic variables in a set of equations cannot be greater than the number of equations. Since one variable was added to the basic variable set in the previous step, an existing basic variable must be made nonbasic. In order to determine which variable is eliminated from the basic variable set, use the minimum ratio rule. The variable that is selected using this rule is said to be leaving the basis. To select the variable that leaves the basis, compute the following ratio for the selected column in the previous step corresponding to the variable that enters the basis, say x_{j}.
(11.97)   min over i of { b_{i} / a_{ij} :  a_{ij} > 0 }
The row that satisfies the above minimum ratio rule is then selected as the pivotal row.
In the previous step, the entering basic variable was chosen based on its potential to reduce the objective function value. However, the allowable reduction of the objective function depends on the condition that the constraints are not violated. The minimum ratio rule provides the largest possible increase in the entering variable for which no constraint is violated.
Special Cases:
(1) If all the coefficients a_{ij} of the column corresponding to the entering variable are negative or zero, the minimum ratio cannot be computed using Eq. 11.97. In these cases, the LP problem has an unbounded solution.
(2) If two or more rows have the same minimum ratio as computed from Eq. 11.97, any of these can be chosen as the pivotal row.
(3) When one or more of the basic variables have zero values, the solution is said to be degenerate. This can happen when the right-hand-side value b_{i} is zero and, consequently, the minimum ratio in Eq. 11.97 is zero. This usually implies that adding a new variable to the basic variable set may not reduce the objective function value. This may result in cycling, where the Simplex algorithm enters an infinite loop. Fortunately, this situation does not usually arise in most practical problems.
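The rule and its unbounded-case exception fit in a few lines. Given the entering (pivotal) column and the right-hand sides, the leaving row minimizes b_{i}/a_{ij} over rows with a_{ij} > 0; if no such row exists, the LP is unbounded. The function name below is ours:

```python
def min_ratio_row(col, rhs):
    """Return the pivotal row index per the minimum ratio rule,
    or None if every a_ij <= 0 (unbounded LP)."""
    ratios = [(b / a, i) for i, (a, b) in enumerate(zip(col, rhs)) if a > 0]
    return min(ratios)[1] if ratios else None

# Entering column for x2 in the example tableau: coefficients (1, 1) with
# right-hand sides (4, 3); the ratios are 4/1 and 3/1, so row 1 (R2) pivots.
assert min_ratio_row([1.0, 1.0], [4.0, 3.0]) == 1
# A column with no positive entry signals an unbounded problem.
assert min_ratio_row([-1.0, -2.0], [4.0, 3.0]) is None
print("minimum ratio rule OK")
```

A tie between rows (special case 2) is resolved here by Python's tuple ordering, i.e., the first of the tied rows wins, which is one of the admissible arbitrary choices.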
Example: After applying the minimum ratio rule, the pivotal row has been identified as shown in Table 11.3.
Table 11.3. Simplex Method Example: Identifying the Pivotal Row

           x_{1}   x_{2}   s_{1}   s_{2}  |  b  |  b_{i}/a_{ij}
   R_{1}     1       1       1       0    |  4  |  4/1 = 4
   R_{2}    -1       1       0       1    |  3  |  3/1 = 3   <- pivotal row
   R_{3}     1      -2       0       0    |  f  |
5. Reduce to Canonical Form: Once the pivotal row and the pivotal column have been chosen based on the above rules, the pivotal element can be identified. The constraint set is then transformed into a reduced row echelon form with respect to the newly identified incoming basic variable.
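The elementary row operations of this step amount to a standard pivot: scale the pivotal row, then eliminate the pivot column from every other row. A minimal Python sketch (an illustration, not the book's MATLAB code), operating on the same list-of-rows tableau layout with the objective row last and b in the last column:

```python
def pivot(tableau, pr, pc):
    """Scale the pivotal row so the pivotal element becomes 1, then
    subtract multiples of it from every other row so the pivot column
    has zeros elsewhere (canonical form for the entering variable)."""
    p = tableau[pr][pc]
    tableau[pr] = [v / p for v in tableau[pr]]
    for i in range(len(tableau)):
        if i != pr and tableau[i][pc] != 0:
            factor = tableau[i][pc]
            tableau[i] = [v - factor * w
                          for v, w in zip(tableau[i], tableau[pr])]

# Initial tableau (rows R1, R2, objective row; b in the last column)
T = [[1.0, 1.0, 1.0, 0.0, 4.0],
     [-1.0, 1.0, 0.0, 1.0, 3.0],
     [1.0, -2.0, 0.0, 0.0, 0.0]]
pivot(T, 1, 1)    # pivot on the second row, column of x2
print(T[0])       # → [2.0, 0.0, 1.0, -1.0, 1.0]
```

The pivot leaves a 1 in the pivotal position and zeros above and below it, which is exactly the reduced row echelon form described above.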
Example: By performing elementary row operations as discussed in Sec. 11.5.3, the initial Simplex tableau is reduced into the canonical form with respect to the newly added basic variable, x_{2}. The coefficient of x_{2} is reduced to one in R_{2}, and is reduced to zero in R_{1} and R_{3}. Table 11.4 provides the updated Simplex tableau in the canonical form with respect to x_{2}.
Table 11.4. Simplex Method Example: Tableau in Canonical Form

            x_{1}   x_{2}   s_{1}   s_{2}   b
    R_{1}     2       0       1      −1     1
    R_{2}    −1       1       0       1     3
    R_{3}    −1       0       0       2     f + 6
The basic solution for the above Simplex tableau is x_{1} = 0, x_{2} = 3, s_{1} = 1, and s_{2} = 0, while the function value is f = −6 (see Fig. 11.3). The variable x_{2} has entered the basis and s_{2} has left the basis. The value of x_{2} has increased from zero in the initial Simplex tableau to three in the current iteration, and the function value has been reduced from f = 0 to f = −6.
As shown in Fig. 11.3, the Simplex method moves from one basic feasible solution to an adjacent one with a reduced objective function value. The other adjacent basic feasible solution, at x_{1} = 4, x_{2} = 0, is ignored since its objective function value (f = 4) is higher than that of the initial basic feasible solution (f = 0).
6. Check for Optimality: If the coefficients in the objective function row are all nonnegative in the current Simplex tableau, the optimum has been reached and the algorithm can be terminated. If not, repeat the Simplex algorithm until the above termination criterion is met.
Example: In Table 11.4, the coefficient of x_{1} in the objective function row is negative. Therefore, choose x_{1} as the variable entering the basis. The minimum ratio rule in the last column indicates that R_{1} is the pivotal row (see Table 11.5).
Table 11.5. Simplex Method Example: The Second Iteration

            x_{1}   x_{2}   s_{1}   s_{2}   b         b_{i}∕a_{ij}
    R_{1}     2       0       1      −1     1         1∕2   (pivotal row)
    R_{2}    −1       1       0       1     3         –
    R_{3}    −1       0       0       2     f + 6

(The ratio is not computed for R_{2} since its coefficient in the pivot column is negative.)
By reducing Table 11.5 to a canonical form with respect to x_{1}, the Simplex tableau provided in Table 11.6 is obtained. The basic feasible solution for Table 11.6 is x_{1} = 1∕2, x_{2} = 7∕2, s_{1} = 0, and s_{2} = 0, and the function value is f = −13∕2. The coefficients in the objective function row in Table 11.6 are all nonnegative. Therefore, the current basic feasible solution is the optimal solution.
Table 11.6. Simplex Method Example: Final Simplex Tableau

            x_{1}   x_{2}   s_{1}   s_{2}   b
    R_{1}     1       0      1∕2    −1∕2    1∕2
    R_{2}     0       1      1∕2     1∕2    7∕2
    R_{3}     0       0      1∕2     3∕2    f + 13∕2
Figure 11.3 illustrates the progression of the iterations of the Simplex method. The solid black circles represent the four basic feasible solutions. The feasible region of the standard LP problem is a polygon which, in this case, is a quadrilateral. The hollow block arrows illustrate the progression of the Simplex method along the boundary of the feasible quadrilateral. The initial Simplex tableau has a basic feasible solution at x_{1} = 0,x_{2} = 0. Using the rules of the Simplex algorithm, the method proceeds to the adjacent vertex of the quadrilateral that reduces the objective function value.
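Putting together the entering-variable rule, the minimum ratio rule, and the pivot, the complete iteration can be sketched compactly. The following Python sketch (an illustration under the same tableau layout assumptions as before, not the book's MATLAB code) reproduces the iterations of the example; it assumes the problem is already in standard form with the slack variables forming the starting basis, and the example data are the problem as read off the tableaux above.

```python
def simplex(tableau):
    """Tableau Simplex for a minimization LP in standard form
    (objective row last, right-hand side in the last column)."""
    while True:
        obj = tableau[-1][:-1]
        pc = min(range(len(obj)), key=lambda j: obj[j])  # entering column
        if obj[pc] >= 0:             # all coefficients nonnegative: optimal
            return tableau
        ratios = [(tableau[i][-1] / tableau[i][pc], i)
                  for i in range(len(tableau) - 1) if tableau[i][pc] > 0]
        if not ratios:               # no positive entry: unbounded LP
            raise ValueError("unbounded LP")
        _, pr = min(ratios)          # minimum ratio rule picks the row
        p = tableau[pr][pc]
        tableau[pr] = [v / p for v in tableau[pr]]
        for i in range(len(tableau)):
            if i != pr and tableau[i][pc]:
                f = tableau[i][pc]
                tableau[i] = [v - f * w
                              for v, w in zip(tableau[i], tableau[pr])]

# min f = x1 - 2*x2  s.t.  x1 + x2 <= 4,  -x1 + x2 <= 3,  x1, x2 >= 0
T = simplex([[1.0, 1.0, 1.0, 0.0, 4.0],
             [-1.0, 1.0, 0.0, 1.0, 3.0],
             [1.0, -2.0, 0.0, 0.0, 0.0]])
print(T[-1][-1])   # → 6.5, i.e., f* = -13/2 at x* = (1/2, 7/2)
```

The objective row's last entry stores f + 13∕2 at termination, matching Table 11.6.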
The above example concludes our discussion of the Simplex algorithm. Next, we briefly examine how to deal with cases where the Simplex method cannot be applied.
11.6.2 Special Cases
The discussion of the Simplex method in the previous subsection assumes that the LP problem can be reduced into the standard formulation. In some cases, it will not be directly possible to do so. One requirement of the standard LP problem is that the variables must be nonnegative, and the right-hand-side constants for the constraints must be nonnegative. In some problems, obtaining a standard LP problem is not straightforward. In these cases, artificial variables are used. Variations of the Simplex method, known as the two-phase Simplex method and the dual Simplex method, are used. A detailed discussion of the theory and implementation of these methods can be found in [3, 8]. Next, we explore some advanced concepts in the area of linear programming.
11.7 Advanced Concepts
Two important topics will be studied briefly in this section: duality and interior point methods. While a detailed discussion of these topics is outside the scope of this chapter, the following discussion provides the reader with a basic understanding of the theory underlying these advanced concepts.
11.7.1 Duality
Duality is an important concept in linear programming problems. Each LP problem, known as the primal, has another corresponding LP problem, known as the dual, associated with it. The dual and the primal problems share some common features, but are arranged differently. How is this concept useful? As will be discussed later, the solutions of the primal and dual problems have interesting relationships. In some cases, solving a dual problem might be computationally more convenient than solving the primal problem. The concept of duality is of great importance in various numerical optimization algorithms and LP solvers. This section introduces the basic concept of duality in LP problems.
Consider the following matrix representation of the LP problem.
(11.98) 

subject to
(11.99) 

(11.100) 
For the above primal problem, the corresponding dual problem can be defined as follows.
(11.101) 
subject to
(11.102) 

(11.103) 
If the primal is a minimization problem, the dual will be a maximization problem, and vice versa.
Example: Consider the following LP problem.
(11.104) 

subject to
(11.105) 

(11.106) 

(11.107) 

(11.108) 
In the above problem, c is a 4 × 1 column vector given as c = [1,1,2, 4]^{T}; the left-hand-side coefficients form a 3 × 4 matrix
(11.109) 
and the right-hand-side values of the inequality constraints form a 3 × 1 column vector given as b = [1, 5, 3]^{T}.
The dual of the above problem is given as
(11.110) 
subject to
(11.111) 

(11.112) 

(11.113) 

(11.114) 

(11.115) 
Note the following relationships between the primal and the dual problem:
1. The primal problem has three inequality constraints, while the dual problem has three design variables.
2. The primal problem has four design variables, while the dual problem has four inequality constraints.
3. The primal is a minimization problem, while the dual is a maximization problem.
4. The primal inequalities are of the form ≤ 0, whereas the dual constraints are of the form ≥ 0.
Next, we examine the relationships between the primal and the dual problems.
11.7.2 PrimalDual Relationships
The feasible sets and the solutions of the primal and the dual problems are related. Duality theory presents the relationship between the two problems. The following are the important relationships between the primal and the dual problems.
1. The objective function value of the dual problem evaluated at any feasible solution provides a lower bound on the objective function value of the primal problem evaluated at any feasible solution. This is known as the weak duality theorem.
2. Equivalently, the objective function value of the primal problem evaluated at any feasible solution provides an upper bound on the objective function value of the dual problem evaluated at any feasible solution.
3. The dual of a dual problem is the primal problem.
4. If the primal problem has a feasible solution and the dual problem has a feasible solution, then there exist optimal solutions for which the objective function values of the primal and the dual are the same. This is known as the strong duality theorem.
5. If the primal problem is unbounded, the dual problem is infeasible.
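These relationships can be checked numerically on a small example. The primal and dual below are hypothetical (they are not the example of Sec. 11.7.1), chosen so that both optima are known by inspection; the Python sketch simply evaluates the two objectives at a handful of feasible points, which happen to include the optimal ones.

```python
# Hypothetical primal: min x1 + 2*x2  s.t.  x1 + x2 >= 1,  x1, x2 >= 0
# Its dual:            max lam        s.t.  lam <= 1, lam <= 2, lam >= 0
primal_feasible = [(1.0, 0.0), (0.0, 1.0), (2.0, 3.0), (0.5, 0.5)]
dual_feasible = [0.0, 0.3, 0.7, 1.0]

primal_vals = [x1 + 2.0 * x2 for x1, x2 in primal_feasible]
dual_vals = list(dual_feasible)       # the dual objective is simply lam

# Weak duality: every dual value is a lower bound on every primal value
assert max(dual_vals) <= min(primal_vals)
# Strong duality: the optima coincide, f* = w* = 1 (x* = (1, 0), lam* = 1)
print(min(primal_vals), max(dual_vals))   # → 1.0 1.0
```

The sampled minimum and maximum equal the true optima only because the optimal points (1, 0) and λ = 1 are included in the samples; weak duality, however, holds for any feasible points whatsoever.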
The second topic in this section, interior point methods, is introduced next.
11.7.3 Interior Point Methods
The Simplex algorithm moves along the boundary of the feasible region to find the optimal solution (see Fig. 11.3). For large-scale problems, the Simplex method can become cumbersome and computationally expensive. There exists another class of solution approaches for LP problems, known as interior point methods, which often converge faster than the Simplex method on large-scale problems. Unlike the Simplex method, which shifts from one vertex of the feasible polygon to another, interior point methods move through the interior of the feasible region based on predefined criteria, such as the feasibility of the constraints or the objective function value.
Some variations of the interior point method employ the concept of duality. An important component of the interior point method, as in most numerical optimization algorithms, is the determination of the search direction and the step size. The search direction ideas here are similar to those discussed in Chapter 12, such as the steepest descent direction and Newton's search direction. Comprehensive presentations of interior point methods can be found in Refs. [5] and [9].
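The central idea of many interior point methods, replacing the inequality constraints by a barrier term whose weight is gradually reduced, can be illustrated on a toy one-variable problem: minimize x subject to x ≥ 0. The barrier subproblem minimizes x − μ ln x, whose exact minimizer is x = μ; as μ → 0, the minimizers trace the "central path" through the interior toward the constrained optimum x* = 0. The Python sketch below is only an illustration of the barrier concept (it locates each subproblem minimizer by a grid search), not a practical interior point algorithm.

```python
import math

def central_path_point(mu):
    """Minimize the barrier function f(x) = x - mu*ln(x) over a grid of
    strictly interior points; the true minimizer is x = mu."""
    xs = [i / 10000.0 for i in range(1, 20001)]   # grid over (0, 2]
    return min(xs, key=lambda x: x - mu * math.log(x))

for mu in (1.0, 0.1, 0.01):
    print(mu, central_path_point(mu))   # x(mu) ≈ mu, heading toward x* = 0
```

Note that every iterate is strictly inside the feasible region; the boundary (and hence the constrained optimum) is only approached in the limit μ → 0, in contrast to the Simplex method, which sits exactly on the boundary at every step.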
11.7.4 Solution Sensitivity
In linear programming (LP), it is crucial to understand the impact of the coefficients in the objective and the constraint functions on the optimum solution. In other words, for LP problems, the optimum solution is highly sensitive to the slope (or gradient) of the objective function and the constraint functions. Therefore, a careful observation of the slopes of these functions provides important insight into the behavior of the LP problem even prior to optimization. Two important characteristics of the solution sensitivity are pointed out in this section. In order to discuss these characteristics, we revisit the LP problem defined by Eqs. 11.12 through 11.15 and illustrated in Fig. 11.4.
Figure 11.4. Sensitivity of Optimum to Slope of Linear Objective Function
The more readily evident characteristic is the direct proportionality of the function to the coefficients of the linear terms. For example, in Eq. 11.12, the objective function value increases at the same rate at which the variable x_{1} increases, and decreases at twice the rate at which the variable x_{2} increases. The impact of x_{2} on the objective function is double that of x_{1}. However, when we consider the relative slopes of the objective function and of the constraint functions, potentially more significant impacts become evident. A minor change in the slope (or in the linear term coefficients) can switch the optimum from one vertex of the feasible region (or feasible polytope) to an adjacent one. To explain this scenario, we refer to the objective function given by Eq. 11.12 as f_{A}, and consider an alternative objective function, f_{B}. These two objective functions are described below:
(11.116) 
(11.117) 
The directions in which the objective functions f_{A} and f_{B} decrease are provided in Fig. 11.4, represented by the dash-dot line and the dashed line, respectively. Note that, with the same feasible region, the optimum for f_{A} lies at the vertex x_{A}^{*} = , while the optimum for f_{B} lies at the vertex x_{B}^{*} = . The constraint given by Eq. 11.13 is the active constraint in both cases. Now, if we carefully compare the slopes of the objective functions (f_{A} and f_{B}) with that of the active constraint given by Eq. 11.13, we find that the slope of f_{A} is slightly greater than that of the constraint, while the slope of f_{B} is slightly smaller. Thus, as a result of a minor change in the slope of the objective function (with respect to the active constraint), the location of the optimum can change drastically.
In the context of practical applications of LP, the following comment can be made. Although, mathematically, the sensitivity of the objective function to the design variables remains the same across the design space for an LP problem, practically, the location of the optimum becomes more critical (or sensitive) to the slope of the objective function when this slope is close to that of an active constraint. In such scenarios, when formulating a practical optimization problem, a slight change in the objective function coefficients may lead to a substantial change in the location of the optimum. Extreme care should be exercised when formulating such LP problems.
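This vertex-switching behavior is easy to demonstrate numerically: since an LP optimum lies at a vertex of the feasible polygon, one can simply evaluate the objective at every vertex. The feasible region and the two objectives below are hypothetical (they are not those of Fig. 11.4 or Eqs. 11.116 and 11.117); the two objective slopes are chosen to straddle the slope of the active constraint x1 + x2 = 4.

```python
# Vertices of the feasible region x1 + x2 <= 4, x1 >= 0, x2 >= 0
vertices = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]

def argmin_vertex(c):
    """Return the vertex minimizing the linear objective c1*x1 + c2*x2."""
    return min(vertices, key=lambda v: c[0] * v[0] + c[1] * v[1])

f_A = (-1.1, -1.0)   # slope slightly steeper than the constraint's
f_B = (-0.9, -1.0)   # slope slightly shallower than the constraint's

print(argmin_vertex(f_A))   # → (4.0, 0.0)
print(argmin_vertex(f_B))   # → (0.0, 4.0)
```

A coefficient change of about 20 percent moves the optimum from one end of the constraint boundary to the other, exactly the drastic relocation described above.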
11.8 Summary
Linear Programming is a topic of great importance in optimization. This chapter introduced the basics of linear programming. Specifically, the four types of possible solutions for LP problems were discussed. The graphical solution approach and software options, such as linprog in MATLAB, were also discussed with examples. The theory behind the Simplex method and the Simplex algorithm were explored in detail with the help of examples. We provided a brief overview of the theory of duality and of interior point methods. With the basic understanding of the topic provided in this chapter, the reader can understand and further explore linear programming in more detail (Refs. [1, 2, 3, 4, 5, 6]).
11.9 Problems
11.1 Consider the LP problem given in Sec. 11.3.1. Reproduce the results shown in Fig. 11.1(a) using MATLAB. How does the optimal solution change if a constant is added to the objective function? Run your MATLAB code with five different starting points. Do you observe a change in the optimal results?
11.2 Consider the LP problem given in Sec. 11.3.2. Reproduce the results shown in Fig. 11.1(b) using MATLAB. Run your MATLAB code with five different starting points. Do you observe a change in the optimal results?
11.3 Solve the following optimization problem graphically.
(11.118) 
subject to
(11.119) 

(11.120) 

(11.121) 
Plot the objective function and the constraint equations. Identify the feasible design space and the optimal solution.
11.4 Transform the following problem into the standard LP formulation.
(11.122) 

subject to
(11.123) 

(11.124) 

(11.125) 

(11.126) 

(11.127) 
11.5 Consider the following set of equations.
(1) How many basic solutions are possible for this set of constraints?
(2) Transform them into the reduced row echelon form with respect to the basic variables x_{1}, x_{2}, and x_{3}.
(11.128) 

(11.129) 

(11.130) 
11.6 Solve the LP formulation given in Problem 11.4 using linprog.
11.7 Consider the following LP problem from Sec. 11.3.1. Solve it using the Simplex method.
(11.131) 

subject to
(11.132) 

(11.133) 

(11.134) 
11.8 Solve the following problem using the Simplex method. Verify the correctness of your solution using linprog.
(11.135) 
subject to
(11.136) 

(11.137) 

(11.138) 

(11.139) 
BIBLIOGRAPHY OF CHAPTER 11
[1] D. G. Luenberger and Y. Ye. Linear and Nonlinear Programming. Springer, 2008.
[2] S. I. Gass. Linear Programming: Methods and Applications. Dover, 5th edition, 2010.
[3] G. B. Dantzig and M. N. Thapa. Linear Programming 1: Introduction. Springer, 1997.
[4] V. Chvatal. Linear Programming. W. H. Freeman and Company, New York, 1983.
[5] S. J. Wright. Primal-Dual Interior-Point Methods. Society for Industrial and Applied Mathematics, Philadelphia, 1997.
[6] R. J. Vanderbei. Linear Programming: Foundations and Extensions. Springer, 2014.
[7] G. Hurlbert. Linear Optimization: The Simplex Workbook. Undergraduate Texts in Mathematics. Springer, New York, 2010.
[8] A. Ravindran, G. V. Reklaitis, and K. M. Ragsdell. Engineering Optimization: Methods and Applications. John Wiley and Sons, 2006.
[9] C. Roos, T. Terlaky, and J.-Ph. Vial. Interior Point Methods for Linear Optimization. Springer, 2nd edition, 2010.