Optimization in Practice with MATLAB for Engineering Students and Professionals (2015)

PART 5


MORE ADVANCED TOPICS IN OPTIMIZATION

19


Evolutionary Algorithms

19.1 Overview

This book has presented various algorithms and applications where the optimizer was primarily gradient-based (i.e., the search direction is governed by gradient and/or Hessian information). This chapter introduces an entirely different class of optimization algorithms called evolutionary algorithms (EA). Evolutionary algorithms imitate natural selection processes to develop powerful computational algorithms to select optimal solutions. Genetic algorithms (GA), simulated annealing (SA), ant colony optimization (ACO), particle swarm optimization (PSO), and tabu search (TS) are some of the popular techniques that fall under the umbrella of evolutionary algorithms.

The motivation for using biologically-inspired computational approaches stems from two key observations. First, mathematical optimization algorithms suffer from strong limitations when solving complex problems in engineering, computing, and other fields. The common challenges in these areas revolve around the lack of mathematical models that define the physical phenomena, discontinuous functions, and high nonlinearity. Second, many complex problems encountered in engineering already exist in nature in some relevant form. Optimization is inherent in nature, such as in the process of adaptation performed by biological organisms in order to survive. Engineers and scientists continue to explore the various efficient problem-solving techniques employed by nature to optimize natural systems.

The relative advantages and limitations of EAs vs. traditional optimization methods are as follows:

1.Traditional algorithms typically generate a single candidate optimum at each iteration that progresses toward the optimal solution. Evolutionary algorithms generate a population of points at each iteration. The best point in the population approaches an optimal solution.

2.Traditional algorithms calculate the candidate optimal point at the next iteration by a deterministic computation. EAs usually select the next population by a combination of operations that use random number generators.

3.Traditional algorithms require gradient and/or Hessian information to proceed, while EAs usually require only function values. As a result, EAs can solve a variety of optimization problems in which the objective function is not smooth and potentially discontinuous.

4.To their disadvantage, EAs often require more function evaluations than do gradient-based methods, particularly for single-objective optimization. The computation time associated with EAs is longer than that of the gradient-based methods.

5.Evolutionary algorithms are stochastic methods that typically involve random choices. Therefore, different runs of the same EA code may yield different optimal solutions.

6.Evolutionary algorithms do not have proofs of convergence to an optimal solution, unlike gradient-based methods, where at least a local optimum is guaranteed upon convergence. It has been observed in practice that evolutionary algorithms, if employed with careful settings, have the potential to yield globally optimal solutions. Because of the inherent randomness in the search process of EAs, a much larger solution space is generally explored when compared to traditional methods, which are limited in their search scope.

This chapter focuses on genetic algorithms (Sec. 19.2) because they are the most popular evolutionary algorithms used in the design optimization community. Using a simplified example, the basic concept of a genetic algorithm is explained. Multiobjective optimization with GAs is discussed in Sec. 19.3 using an example. A brief overview of other evolutionary algorithms is provided with pertinent references in Sec. 19.4. A summary of the chapter is provided in Sec. 19.5.

19.2 Genetic Algorithms

This section explains the basics of how a GA works. Practical software implementations are outside the scope of this introductory chapter. MATLAB provides the “Genetic Algorithm and Direct Search” solvers, which offer software implementations of optimization using GAs. Genetic algorithms have been used to solve a wide range of problems involving continuous and discrete variables. For example, the use of GAs is popular for the optimization of laminate composite structures [1], multiobjective optimization (presented later), and structural and design problems. A simplified version of a GA is demonstrated next using an example.

19.2.1 Basics of Genetic Algorithms

A genetic algorithm repeatedly modifies a set or population of solutions or individuals in the course of its entire run. At each step, the genetic algorithm selects individuals from the current population to be parents based on certain criteria (discussed shortly). The parents are then used to produce the next generation of individuals, called children. Over successive generations, the population “evolves” toward an optimal solution, or a set of Pareto optimal solutions (in the case of a multiobjective problem).

This section describes how GAs work by minimizing the function f(x) = x^2, 0 < x < 40. The objective function is also known as the fitness function in the GA literature. A similar example that further helps us to understand how GAs work is available in Ref. [2].

In its simplest form, a GA implementation involves the following tasks.

1.Encoding: Encoding is the method used to represent individuals in evolutionary algorithms. Typically, individuals are coded as fixed-length strings of 0’s and 1’s (i.e., binary numbers). This string is also known as a chromosome. Variable-length string encodings are also possible [3, 2].

Example: We could use a string length of 5 to code a number in binary (e.g., 10001). This string can be decoded into a base-10 decimal number as

1 × 2^4 + 0 × 2^3 + 0 × 2^2 + 0 × 2^1 + 1 × 2^0 = 16 + 1 = 17

(19.1)
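The decoding step above can be sketched in a few lines of code (shown here in Python for illustration; the chapter's own examples use MATLAB, and the function name decode is our own):

```python
def decode(bits):
    """Decode a binary string such as '10001' into a base-10 integer."""
    value = 0
    for bit in bits:
        value = 2 * value + int(bit)  # shift left, then add the next bit
    return value

print(decode("10001"))  # 1*2^4 + 0*2^3 + 0*2^2 + 0*2^1 + 1*2^0 = 17
```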

2.Initial population: The algorithm begins by generating a random initial population. Important initialization choices, such as the number of individuals in each population and the number of bits in the encoding, must be made by the user. These choices govern the performance of the GA. A detailed discussion can be found in [2].

Example: Assume six individuals in the population for the present example. Each individual (represented by a row in Fig. 19.1) is randomly generated by a series of five fair coin flips, heads=1 and tails=0. Note that five coin flips are needed because we chose to encode each individual using a 5-bit binary string. Say the initial population generated is 10101, 11001, 01001, 11101, 10111, and 10000 (Column 2 in Fig. 19.1). In the decimal system, the initial population is the following set of numbers {21, 25, 9, 29, 23, 16} (Column 3 in Fig. 19.1). Now evaluate the fitness function value (f(x) = x^2) at each of the individuals in the initial population. The fitness function value set is {441, 625, 81, 841, 529, 256} (Column 4 in Fig. 19.1).
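The initialization and fitness-evaluation steps above can be mimicked as follows (an illustrative Python sketch; the random seed is an arbitrary choice made only so the run is reproducible):

```python
import random

random.seed(3)  # fixed seed so the illustration is reproducible

def fitness(x):
    return x ** 2  # the chapter's example fitness function, f(x) = x^2

# Each individual is a 5-bit string built from five "coin flips" (heads=1, tails=0)
population = ["".join(random.choice("01") for _ in range(5)) for _ in range(6)]
decoded = [int(ind, 2) for ind in population]   # decimal values (Column 3)
scores = [fitness(x) for x in decoded]          # fitness values (Column 4)
print(population, decoded, scores)
```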

Figure 19.1. Basics of Genetic Algorithms

Now proceed to the next step.

3.Reproduction: A new generation, called the child generation, is created by reproduction from the previous generation, called the parent generation. The notion of “survival of the fittest” is usually used in genetic algorithms. There are three main mechanisms used to create a new generation. Different implementations of GAs use different combinations of the three ideas below.

a)Elitism: In this approach, the individuals with the best fitness values in the current generation are guaranteed to survive in the next generation.

Example: In the example considered earlier, of the six individuals in the parent population, {10101, 11001, 01001, 11101, 10111, 10000}, the third individual, 01001, had the lowest function value for the current minimization problem. This individual is considered elite, and will become part of the next generation.

b)Crossover: In this technique, some bits of the encoded string of one parent individual are exchanged with the corresponding bits of another parent individual. A series of random choices are made for this mechanism, which are explained with the following example.

Example: Assume that the elite individual, 01001, is part of the next generation. First, choose which individual is to be crossed over with which individual (e.g., Individual 1 with Individual 2, Individual 1 with Individual 3, or Individual 1 with Individual 4). This choice is usually made randomly. Assume that Individual 1, 10101, is crossed over with Individual 2, 11001; and Individual 4, 11101, is crossed over with Individual 5, 10111.

Next, decide how many bits in the individuals will be exchanged. For example, will it be 1, 2, 3, or 4 bits? This choice is also made randomly. Assume that 3 bits in the string will be exchanged for Individuals 1 and 2, and one bit will be exchanged for Individuals 4 and 5 (shown by the grey shaded regions in Rows 1, 2, 4, and 5 in Fig. 19.1).

The final choice to be made (again implemented using random numbers) is the positions of the bits. For example, exchange the first three bits or the last three bits. For this example, the first three bits will be exchanged for Individuals 1 and 2, and Bit 2 for Individuals 4 and 5 (shown by the grey shaded regions in Rows 1, 2, 4, and 5 in Fig. 19.1).

c)Mutation: Unlike the crossover operation (which requires two parents), mutation children are generated from a single parent by randomly reversing some bits from 0 to 1, or vice versa. In most GA implementations, a probability value for a mutation to occur is assumed.

Example: Make the random choice that Individual 6 goes through a mutation on Bits 1 and 3. As shown in Fig. 19.1, the bits are reversed for these two positions, leading to a new child (noted by the grey shaded regions in the last row in Fig. 19.1).

Column 7 in Fig. 19.1 presents the child population generated by the above three mechanisms. The decimal equivalents are reported in Column 8.
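The crossover and mutation choices described above can be expressed compactly (a Python sketch; note that positions here are 0-based, whereas the text counts bits from 1):

```python
def crossover(a, b, start, n):
    """Exchange n bits of parents a and b, beginning at 0-based position start."""
    child_a = a[:start] + b[start:start + n] + a[start + n:]
    child_b = b[:start] + a[start:start + n] + b[start + n:]
    return child_a, child_b

def mutate(ind, positions):
    """Reverse the bits of ind at the given 0-based positions."""
    bits = list(ind)
    for p in positions:
        bits[p] = "1" if bits[p] == "0" else "0"
    return "".join(bits)

# Individuals 4 and 5 exchange Bit 2; Individual 6 mutates Bits 1 and 3
print(crossover("11101", "10111", 1, 1))  # -> ('10101', '11111')
print(mutate("10000", [0, 2]))            # -> '00100'
```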

4.The function values of the new population that is generated are computed, as shown in Fig. 19.1. Using a combination of the above reproduction options, the algorithm proceeds further until a desired stopping criterion is achieved. Examples of stopping criteria include the number of generations, time limit, and function tolerance.

Example: Observe the function values of the child population presented in the last column of Fig. 19.1. The best individual in the child generation shows a decrease in the function value when compared to the parent generation.
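Putting the pieces together, a toy GA for minimizing f(x) = x^2 over 5-bit strings might look like the following. This is only a sketch: the selection scheme, mutation rate, and generation count are arbitrary illustrative choices, not the book's settings.

```python
import random

random.seed(1)
BITS, POP, GENS = 5, 6, 30

def fitness(ind):
    return int(ind, 2) ** 2  # minimize f(x) = x^2

def crossover(a, b):
    p = random.randint(1, BITS - 1)        # random single-point crossover
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(ind, rate=0.1):
    # flip each bit independently with probability `rate`
    return "".join(b if random.random() > rate else "10"[int(b)] for b in ind)

population = ["".join(random.choice("01") for _ in range(BITS)) for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness)
    children = [population[0]]                   # elitism: keep the best individual
    while len(children) < POP:
        a, b = random.sample(population[:4], 2)  # parents drawn from the fitter half
        c1, c2 = crossover(a, b)
        children += [mutate(c1), mutate(c2)]
    population = children[:POP]

best = min(population, key=fitness)
print(best, int(best, 2))  # the population should drift toward x = 0
```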

The options available in the MATLAB Genetic Algorithm and Direct Search solvers are specified within the Global Optimization Toolbox. The GA Solver will be used to demonstrate how a software implementation of a GA works.

19.2.2 Options in MATLAB

The MATLAB GA Solver, within the Global Optimization Toolbox, has software implementations of several GA capabilities. The ga command provides nonlinear constrained optimization capabilities, and the gamultiobj command allows for multiobjective optimization (we will study this later). In this subsection, we will solve an example problem using the traditional gradient-based solver in MATLAB (fmincon) and the genetic algorithm routine.

We note that the genetic algorithm by itself allows only for unconstrained optimization. Nonlinear constraints are usually incorporated using penalty schemes, which were discussed in Chapter 13. Consider the following constrained optimization problem.

min f(x1, x2) = x1^2 + 10x2^2 - 3x1x2

(19.2)

subject to

4 - 2x1 - x2 ≤ 0

(19.3)

-5 - x1 - x2 ≤ 0

(19.4)

-5 ≤ x1 ≤ 5, -5 ≤ x2 ≤ 5

(19.5)
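The penalty idea mentioned above (see Chapter 13) can be sketched as follows. The objective and constraints mirror the objfun.m and confun.m files of this section, while the penalty weight rho is an arbitrary illustrative choice:

```python
def objective(x1, x2):
    return x1**2 + 10 * x2**2 - 3 * x1 * x2

def constraints(x1, x2):
    # inequality constraints in g(x) <= 0 form, matching confun.m
    return [4 - 2 * x1 - x2, -5 - x1 - x2]

def penalized(x1, x2, rho=1000.0):
    # quadratic exterior penalty: add rho * max(0, g)^2 for each violated constraint
    penalty = sum(max(0.0, g) ** 2 for g in constraints(x1, x2))
    return objective(x1, x2) + rho * penalty

print(penalized(1.825, 0.3593))  # near-feasible point: penalty is essentially zero
print(penalized(0.0, 0.0))       # infeasible: g1 = 4 > 0, so heavily penalized
```

An unconstrained optimizer (such as a GA) applied to penalized therefore drives the search back toward the feasible region.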

Let us now discuss how this problem can be set up in MATLAB. There are two options possible: (1) we can invoke the ga command from the graphical user interface of the GA Solver (we will explore this option in the next section), or (2) we can call the ga command from an M-file. We will need to generate three M-files: a main file, an objective function file, and a constraint function file. Note that this file structure is similar to the one we used with fmincon. The main.m file contains the initializations, bounds, options, and the ga command. The confun.m file contains the nonlinear inequality and equality constraints. The objfun.m file contains the objective or the fitness function definition. The files are shown below.

1.Main file

clc
clear all
close all
global fcount % to count function evaluations
lb=[-5 -5]; %Lower bound
ub=[5 5]; %Upper bound
A = []; %LHS matrix for linear inequalities
b = []; %RHS vector for linear inequalities
Aeq = []; %LHS matrix for linear equalities
beq = []; %RHS vector for linear equalities
%If you do not specify options, defaults are used.
fcount = 0; %Initialize function count
nvars = 2; % Number of design variables

[x,fval] = ga(@objfun,nvars,A,b,Aeq,beq,lb,ub,@confun);
% More arguments are available; check MATLAB help

display(fcount)
display(x)

2.Objective function file

function f= objfun(x)
global fcount
fcount = fcount+1;
f= x(1)^2+10*x(2)^2-3*x(1)*x(2);

3.Constraint function file

function [c,ceq]= confun(x)
c(1) = 4-2*x(1)-x(2);
c(2) = -5 - x(1) - x(2);
ceq = [];

Some important observations that can be made from this example are:

1.Each run of the genetic algorithm may yield a slightly different result. This is to be expected, since the genetic algorithm is stochastic in nature and involves random operators. Multiple runs are required to build confidence in the solution. The solution for this problem is x1 = 1.825, x2 = 0.3593, and is generated correctly by the GA. Interestingly, solving this optimization problem using fmincon yields the same results. (Think about why that might be.)

2.GAs require a large number of function evaluations. For this example, ga requires approximately 4,000 function evaluations, while fmincon requires only 16 evaluations. Since this is a relatively simple problem, the issue of function evaluations may seem trivial. In computationally expensive models, however, the associated burden can become significant.

3.There are several settings within the ga command in MATLAB that can improve the computational performance of the ga.

Another popular application of GAs is to obtain Pareto optimal sets for multiobjective problems, which we discuss next.

19.3 Multiobjective Optimization Using Genetic Algorithms

Several approaches for solving multiobjective optimization problems were discussed in Chapter 6. Most of the previously studied methods involved the weight-based aggregation of the objectives into a single function. One of the most significant drawbacks of the weight-based methods is the need to specify appropriate weights, which is often a significant challenge.

The motivation for using evolutionary algorithms to solve multiobjective problems is two-fold: (1) EAs work with a population of candidate solutions and use the concept of non-domination, thereby allowing a series of Pareto solutions to be found in one converged run. This is in contrast to applying traditional techniques, where the Pareto solutions are found sequentially, one run at a time. (2) EAs are significantly less sensitive to the shape of the Pareto frontier (convex or concave) or to discontinuous Pareto fronts.

There are many implementations of multiobjective genetic algorithms (MOGA) available in the literature [4, 5, 6, 7, 8]. The first practical implementation of a MOGA, by Schaffer, was called the vector evaluated genetic algorithm (VEGA) [9]. A drawback of the VEGA approach is its bias toward some Pareto solutions. A so-called non-dominated sorting procedure [5, 10] was later implemented by several researchers to overcome the drawbacks of VEGA. A ranking procedure is adopted to rank individuals in a population. An individual, a, is said to dominate another individual, b, if a is strictly better than b in at least one objective, and a is no worse than b in all objectives. A distance measure is used to compare individuals with equal rank.
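The domination rule stated above translates directly into code (a minimal Python sketch for a minimization problem, where each solution is represented by its vector of objective values):

```python
def dominates(a, b):
    """True if solution a dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Objective vectors (f1, f2) for candidate solutions
print(dominates((1.0, 2.0), (2.0, 3.0)))  # True: better in both objectives
print(dominates((1.0, 3.0), (2.0, 2.0)))  # False: a trade-off, neither dominates
```

Non-dominated sorting repeatedly applies this test to split a population into ranked Pareto fronts.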

MATLAB provides a multiobjective optimization algorithm based on GAs called gamultiobj. An example is provided to illustrate how gamultiobj works.

19.3.1 Example

Recall the following optimization problem from the exercises of Chapter 6 (Problem 6.2). The Pareto frontier for this problem is non-convex, and cannot be generated using a weighted-sum method. This multiobjective problem can be solved using the MATLAB Genetic Algorithm Solver.

min {f1(x), f2(x)}, where f1(x) = sin(x), f2(x) = 1 - sin(x)^7

(19.6)

subject to

(19.7)

The gamultiobj command accepts only linear equality constraints, linear inequality constraints, and bounds on the design variables. We now explore the graphical user interface (GUI) of the MATLAB GA Solver, within the Global Optimization Toolbox. Figure 19.2 provides a screen shot of the optimization tool that has various options. This screen can be opened by typing optimtool('gamultiobj') in the Command Window. Alternatively, you can type optimtool in the Command Window, followed by choosing the gamultiobj solver option from the top left dropdown menu. Before you can use this tool, you need to create a file that contains the objective function objsin.m that defines the two objectives, as shown below.

function f = objsin(x)
f(1) = sin(x);
f(2) = 1 - sin(x)^7;

Figure 19.2. The MATLAB Genetic Algorithm and Direct Search Toolbox

In the GUI for the GA Solver, the user can provide the function handle for the fitness function, as shown in Fig. 19.2. Please keep in mind that the objsin.m file must be saved in the current directory in MATLAB. In Fig. 19.2, note the selections highlighted by boxes. There are several other options in the window. Based on these options, the solutions may change. For the latest help on the additional features, refer to MATLAB help.

Figure 19.3 depicts the Pareto frontier for this problem as generated by the MATLAB Multiobjective Genetic Algorithm Solver. When the Parallel Computing Toolbox is available, it is possible in the GUI for the GA Solver to set the option as “in parallel” for evaluating fitness and constraint functions. By using the built-in parallel computing capabilities or defining a custom parallel computing implementation of the optimization problem, it is possible to significantly decrease solution time.

Figure 19.3. Pareto Front for Multiobjective Genetic Algorithm Example

19.4 Other Evolutionary Algorithms

This section summarizes other major metaheuristic and/or non-gradient-based algorithms. Similar to genetic algorithms, some of these algorithms are also inspired by natural phenomena such as the social behavior of animals (e.g., particle swarm optimization (Sec. 19.4.4), ant colony optimization (Sec. 19.4.1), and predator-prey optimization [11, 12]). These algorithms are often jointly classified as evolutionary algorithms in the literature; however, not all of them mimic evolutionary phenomena (e.g., swarm-based algorithms). Thus, a more technically appropriate name for this class of algorithms may be “nature-inspired optimization algorithms.” There also exist metaheuristic algorithms that are inspired by human behavior and human-developed processes (e.g., tabu search (Sec. 19.4.3) and simulated annealing (Sec. 19.4.2)). Overall, these classes of algorithms are generally intended to solve highly nonlinear optimization problems, which involve non-convex, discontinuous, or multi-modal criteria functions. A brief introduction to four of these major metaheuristic algorithms is provided below.

19.4.1 Ant Colony Optimization

The ant colony optimization algorithm (ACO), introduced by Marco Dorigo in 1992 in his Ph.D. thesis, is a probabilistic technique for solving computational problems that can be reduced to finding good paths through graphs. It is inspired by the behavior of ants in finding paths from the colony to food. On their path in search of food, ants deposit a substance called pheromone that helps them smell or identify the path for later use. In a group of ants, each ant goes in search of food in a random direction. The ant that finds the shortest path to food returns to the colony in the shortest time. Deciding which path to take can be based on the amount of pheromone. A larger pheromone concentration along a path usually implies that a higher number of ants used the path, suggesting that the path is likely shorter. Longer paths are progressively abandoned. Subsequent ant searches then use the pheromone along the shortest path to direct their search. More details and examples are provided in [13].
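The pheromone feedback loop described above can be caricatured with two competing paths (an illustrative Python sketch; the evaporation rate and deposit rule are arbitrary choices, not Dorigo's original formulation):

```python
import random

random.seed(0)

# Two paths to food with different lengths; shorter paths accumulate pheromone
lengths = {"short": 1.0, "long": 3.0}
pheromone = {"short": 1.0, "long": 1.0}

for _ in range(200):
    total = sum(pheromone.values())
    # each ant picks a path with probability proportional to its pheromone level
    path = "short" if random.random() < pheromone["short"] / total else "long"
    for p in pheromone:
        pheromone[p] *= 0.95                 # evaporation on every path
    pheromone[path] += 1.0 / lengths[path]   # deposit, inversely proportional to length

print(max(pheromone, key=pheromone.get))
```

The positive feedback (larger deposits on the shorter path attract more ants, which deposit more pheromone) is what progressively abandons the longer path.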

19.4.2 Simulated Annealing

This method mimics the metallurgical process of annealing: heating a material and slowly lowering the temperature to decrease defects, thus minimizing the system energy. At each iteration of the simulated annealing algorithm, a new point is randomly generated. The distance of the new point from the current point, or the extent of the search, is based on a probability distribution with a scale proportional to the temperature. The algorithm accepts all the new points that lower the objective function; but also, with a certain probability, points that raise the objective function. By accepting points that raise the objective function, the algorithm minimizes the likelihood of being trapped in local minima, and is capable of greater global exploration. An annealing schedule is selected to systematically decrease the temperature as the algorithm proceeds. As the temperature decreases, the algorithm reduces the extent of its search to converge to a minimum. More details and examples are provided in [14].
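A bare-bones version of this procedure is sketched below (Python; the test function, cooling rate, and step-size rule are arbitrary illustrative choices):

```python
import math
import random

random.seed(0)

def f(x):
    return x * x + 10 * math.sin(x)  # a multimodal test function (illustrative)

x, T = 10.0, 10.0                    # starting point and initial temperature
for _ in range(5000):
    candidate = x + random.gauss(0.0, 1.0) * T / 10.0  # step scale shrinks with T
    delta = f(candidate) - f(x)
    # always accept improvements; accept uphill moves with probability exp(-delta/T)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = candidate
    T *= 0.999                       # annealing schedule: cool slowly

print(round(x, 2))  # the global minimum of f lies near x = -1.3
```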

19.4.3 Tabu Search

Tabu Search is a mathematical optimization method belonging to the class of local search techniques. Tabu Search enhances the performance of a local search method by using memory structures: once a potential solution has been determined, it is marked as “taboo” so that the algorithm does not visit that possibility repeatedly. More details and examples are provided in [15].
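The short-term memory idea can be illustrated with a one-dimensional integer search (a toy Python sketch; the neighborhood and tabu-list length are our own choices, not from Ref. [15]):

```python
def tabu_search(f, start, steps=50, tabu_size=5):
    """Greedy neighborhood search over the integers with a short-term tabu list."""
    current = best = start
    tabu = [start]
    for _ in range(steps):
        neighbors = [n for n in (current - 1, current + 1) if n not in tabu]
        if not neighbors:
            break
        current = min(neighbors, key=f)   # best non-tabu neighbor, even if worse
        tabu.append(current)
        tabu = tabu[-tabu_size:]          # forget the oldest moves
        if f(current) < f(best):
            best = current
    return best

print(tabu_search(lambda x: (x - 7) ** 2, start=0))  # finds the minimizer x = 7
```

Note that after reaching the minimum, the tabu list forces the search to keep moving and explore nearby points, while the best solution found so far is retained.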

19.4.4 Particle Swarm Optimization (PSO)

The Particle Swarm Optimization (PSO) algorithm imitates the dynamics of swarm behavior observed in nature (e.g., a flock of geese or a swarm of bees). In PSO, an initial set of randomly generated individuals is used, with each as a candidate solution. These individuals are also known as particles; hence the name particle swarm. Over a sequence of iterations, the population of particles searches for (or ideally converges to) the global optimum, where the motion of each particle in the design space is guided by a velocity update equation inspired by perceived swarm behavior. In this strategy, the locations of best fitness are remembered or recorded by each individual. An individual’s best solution or success is called the particle best, and this information is shared with the neighbors. A swarm is typically modeled by particles in a multidimensional space, where each particle has a position and a velocity at each iteration. More details and examples are provided in [16, 14, 17]. With the above basic overview of certain key evolutionary algorithms, we conclude this chapter.
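Before closing, the velocity update described above can be sketched in a few lines (Python; the coefficients W, C1, and C2 are common textbook values, and the quadratic fitness function is an arbitrary illustrative choice):

```python
import random

random.seed(0)

def f(x):
    return (x - 3.0) ** 2     # a simple one-dimensional fitness function

N, ITERS, W, C1, C2 = 10, 100, 0.7, 1.5, 1.5

pos = [random.uniform(-10, 10) for _ in range(N)]
vel = [0.0] * N
pbest = pos[:]                # each particle's best-known position
gbest = min(pos, key=f)       # best position found by the whole swarm

for _ in range(ITERS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        # velocity update: inertia + pull toward personal and global bests
        vel[i] = W * vel[i] + C1 * r1 * (pbest[i] - pos[i]) + C2 * r2 * (gbest - pos[i])
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=f)

print(round(gbest, 3))  # should approach the minimizer x = 3
```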

19.5 Summary

In this chapter, we presented the basics of popular evolutionary algorithms. The important differences between traditional optimization algorithms and evolutionary algorithms were discussed. We primarily focused on genetic algorithms because of their popularity in the design optimization community. We also illustrated the use of the MATLAB genetic algorithm tools for single objective and multiobjective optimization using examples. A brief review of other popular non-gradient-based algorithms was also presented.

19.6 Problems

19.1Using the MATLAB Genetic Algorithms Solver, reproduce the results shown in Fig. 19.3.

19.2Reproduce the results presented in Sec. 19.2.2 using both the ga and the fmincon commands. Perform the comparison for the number of function evaluations.

19.3Solve the following problem using the multiobjective optimization tool in the GA Solver in MATLAB.

(19.8)

where

(19.9)

(19.10)

subject to

(19.11)

Try solving the above problem using the weighted sum method (Use fmincon). Can you obtain a good representation of the Pareto frontier? Explain why.

19.4Recall that we solved the following problem earlier. Let us now employ GAs to solve this problem.

Figure 19.4. Sandwich Beam Designed with Vibrating Motor

A vibratory disturbance (at v Hz) is imparted from the motor onto the beam. The beam is of length, L, and width, b. The variables, d1 and d2, respectively, locate the contact of materials one and two, and two and three. The variable, d3, locates the top of the beam. The mass density, Young’s modulus, and cost per unit volume for materials one, two, and three, are respectively denoted by the triplets (ρ1, E1, c1), (ρ2, E2, c2), and (ρ3, E3, c3).

The overall objective is to design the preceding sandwich beam in such a way as to passively minimize the vibration of the beam that results from the disturbance (v = 10 Hz). Minimizing the vibration will require maximizing the fundamental frequency, Fr, of the beam. The optimal solution should be such that the fundamental frequency is maximized economically (i.e., at minimum cost, c). The following aggregate objective function is provided to you.

f = -50Fr^2 + 100C^2

(19.12)

In the design of the plant, the quantities of interest are as follows:

(19.13)

(19.14)

(19.15)

(19.16)

(19.17)

(19.18)

(19.19)

(19.20)

The constraints are given as follows.

(19.21)

(19.22)

(19.23)

(19.24)

(19.25)

(19.26)

(19.27)

(19.28)

(19.29)

The other constants are ρ1 = 100 kg/m^3, ρ2 = 2,770 kg/m^3, ρ3 = 7,780 kg/m^3, E1 = 1.6×10^9 Pa, E2 = 70×10^9 Pa, E3 = 200×10^9 Pa, c1 = 500 $/m^3, c2 = 1,500 $/m^3, and c3 = 800 $/m^3.

(1)Solve the above optimization problem using fmincon.

(2)Solve the optimization problem using the genetic algorithm solver in MATLAB. Run the program 20 times. Do you obtain the same answer for all the runs? Explain why or why not.

(3)You may have noticed that for some runs, ga terminates with an error that the nonlinear constraint file is not returning a real value. Determine why that happens by closely examining your code. How would you fix this issue? Subsequently, re-run the ga routine the number of times needed to ensure the issue has been fixed.

(4)As we studied in Chapter 7, we need to be careful about numerical scaling issues for the situations you encountered in the previous question. Examine whether the current problem has such scaling issues. To make our program more robust, fix the scaling issue as discussed in Chapter 7.

(5)How many function evaluations does the genetic algorithm need? How many function evaluations does fmincon need?

19.5Consider the following optimization problem. Note that the objective function is discontinuous.

(19.30)

where

subject to

(19.31)

(1)Plot this function to study the local/global optima for this problem.

(2)Use fmincon to minimize the function. Do you obtain the global optimum? Try different starting points.

(3)Now try the ga command. Do you obtain the global minima? Note that you need to change the “initial range” option to help the ga command.

(4)Compare the number of function evaluations for each optimization as required by the genetic algorithm and fmincon.

19.6A salesman has to travel through several cities (to sell his/her product) in such a way that the total traveling expense is minimized. In this case, the traveling expense is directly proportional to the distance traveled (Traveling Salesman Problem). Solve the Traveling Salesman Problem (TSP) using the MATLAB GA function, for a particular case where the number of cities is equal to five, as shown in Fig. 19.5. The locations of the cities are also given in the figure, and the salesman can start from any of the cities.

Figure 19.5. Traveling Salesman Problem: Locations of Cities

1.Formulate the optimization problem.

2.Determine the optimum route for the traveling salesman (using MATLAB GA).

3.If the salesman has to return to the starting city at the end of his journey, will the optimum route change? (Justify the answer through optimization)

BIBLIOGRAPHY OF CHAPTER 19

[1]Z. Gürdal, R. T. Haftka, and P. Hajela. Design and Optimization of Laminated Composite Materials. John Wiley and Sons, 1999.

[2]D. E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Publishing Company, Inc, 1989.

[3]N. N. Schraudolph and R. K. Belew. Dynamic parameter encoding for genetic algorithms. Machine Learning, 9(1):9-21, 1992.

[4]E. Zitzler and L. Thiele. Multiobjective Optimization Using Evolutionary Algorithms - A Comparative Case Study. Lecture Notes in Computer Science. Springer Berlin Heidelberg, 1998.

[5]N. Srinivas and K. Deb. Multi-objective function optimization using non-dominated sorting genetic algorithms. Evolutionary Computation, 2(3):221-248, 1994.

[6]C. M. Fonseca and P. J. Fleming. Genetic algorithms for multiobjective optimization: Formulation, discussion, and generalization. In Proceedings of the 5th International Conference on Genetic Algorithms, pages 416-423, Urbana-Champaign, IL, USA, June 1993.

[7]J. Horn, N. Nafpliotis, and D. E. Goldberg. A niched Pareto genetic algorithm for multi-objective optimization. In Proceedings of the First IEEE Conference on Evolutionary Computation, pages 82-87, Orlando, FL, USA, June 1994.

[8]C. A. Coello Coello, G. B. Lamont, and D. A. Van Veldhuizen. Evolutionary Algorithms for Solving Multi-objective Problems. Springer, 2nd edition, 2007.

[9]J. D. Schaffer. Multiple objective optimization with vector evaluated genetic algorithms. In Proceedings of the First International Conference on Genetic Algorithms, pages 93-100, Pittsburgh, PA, USA, July 1985.

[10]K. Deb, S. Agarwal, A. Pratap, and T. Meyarivan. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182-197, 2002.

[11]S. Chowdhury, G. S. Dulikravich, and R. J. Moral. Modified predator-prey algorithm for constrained and unconstrained multi-objective optimisation. International Journal of Mathematical Modelling and Numerical Optimisation, 1(1):1-38, 2009.

[12]G. Venter and J. Sobieszczanski-Sobieski. Particle swarm optimization. AIAA Journal, 41(8):1583-1589, 2003.

[13]L. N. De Castro and F. J. Von Zuben. Recent Developments in Biologically Inspired Computing. Idea Group Publishing, 2005.

[14]M. Tsuzuki and T. DeCastro Martins. Simulated Annealing: Strategies, Potential Uses and Advantages. Nova Science Publishers, 2014.

[15]F. Glover and M. Laguna. Tabu Search. Kluwer Academic Publishers, 1997.

[16]M. Clerc. Particle Swarm Optimization. ISTE Ltd, 2006.

[17]S. Chowdhury, W. Tong, A. Messac, and J. Zhang. A mixed-discrete particle swarm optimization algorithm with explicit diversity-preservation. Structural and Multidisciplinary Optimization, 47(3):367-388, 2013.