Multiobjective Optimization Using Genetic Algorithm

Multiobjective Optimization Using Genetic Algorithm
Md. Saddam Hossain Mukta 1, T.M. Rezwanul Islam 2 and Sadat Maruf Hasnayen 3
1,2,3 Department of Computer Science and Information Technology, Islamic University of Technology, Board Bazaar, Gazipur, Bangladesh

Abstract: In a multi-objective optimization problem (MOP), the objective vector can be scalarized into a single objective, but the resulting objective is highly sensitive to the chosen weight vector and demands prior knowledge of the underlying problem from the user. Moreover, for a multi-objective optimization problem one typically wants a set of Pareto-optimal points in the search space rather than a single point. Since a Genetic Algorithm (GA) works with a set of candidate solutions called a population, it is natural to adapt GA schemes to multi-objective optimization so that a number of solutions can be captured simultaneously. Although many techniques have been developed to solve such problems, namely VEGA, MOGA, NPGA, NSGA, etc., all of them have some shortcomings. This project proposal describes a new approach that subdivides the population with respect to each overlapping pair of objective functions and then merges the subpopulations through genetic operations.

Keywords: Genetic algorithm, Evolutionary Computation.

1. INTRODUCTION
Most optimization problems naturally involve several objectives, normally conflicting with one another. Problems with several objectives are called multi-objective or vector optimization problems, and were originally studied in the context of economics and operations research. Scientists and engineers soon realized, however, that such problems arise naturally in all areas of knowledge. Over the years, the work of a considerable number of operational researchers has produced an important body of techniques for dealing with multi-objective optimization problems (Miettinen, 1998).
However, only relatively recently did researchers realize the potential of evolutionary algorithms (EAs) in this area. The most notable such schemes are VEGA, MOGA, NPGA, NSGA and NSGA-II. Most of them succeed on many test suites for Evolutionary Multi-Objective Optimization (EMOO), but they also encounter difficulties, and recent research mainly aims at devising new approaches to obtaining Pareto-optimal solutions. This research proposal concentrates on one such new approach.

2. OBJECTIVE
The objective of this work is to devise a new evolutionary approach to multi-objective optimization that addresses the difficulties encountered by the existing schemes named above (VEGA, MOGA, NPGA, NSGA and NSGA-II) and that yields a set of Pareto-optimal solutions rather than a single point.

2.1 Multi-objective Optimization Problem
Most optimization problems naturally have several objectives to achieve, and these objectives normally conflict with each other.
Such problems are called multi-objective or vector optimization problems. Over the years, the work of a considerable number of operational researchers has produced an important body of techniques for dealing with them (Miettinen, 1998). We are interested in solving multi-objective optimization problems (MOPs) of the form:

Opt [ f1(x), f2(x), ..., fk(x) ]^T    (1.1)

subject to the m inequality constraints:

g_i(x) <= 0,  i = 1, 2, ..., m    (1.2)

and the p equality constraints:

h_i(x) = 0,  i = 1, 2, ..., p    (1.3)

Volume 1, Issue 3, September October 2012 Page 255

where k is the number of objective functions f_i : R^n -> R. We call x = [x1, x2, ..., xn]^T the vector of decision variables. We wish to determine, from among the set F of all vectors that satisfy (1.2) and (1.3), the particular vector x* = [x1*, x2*, ..., xn*] that yields the optimum values of all the objective functions.

2.2 Genetic Algorithm
The past decade has witnessed a flurry of interest within the financial industry in artificial-intelligence technologies, including neural networks, fuzzy systems, and genetic algorithms. In many ways genetic algorithms, and their extension genetic programming, offer an outstanding combination of flexibility, robustness, and simplicity. "Genetic algorithms are based on a biological metaphor: They view learning as a competition among a population of evolving candidate problem solutions. A 'fitness' function evaluates each solution to decide whether it will contribute to the next generation of solutions. Then, through operations analogous to gene transfer in sexual reproduction, the algorithm creates a new population of candidate solutions." A genetic algorithm evaluates and improves a population of possible solutions to a problem in a stepwise fashion. The program evolves by letting good solutions produce offspring while bad solutions die out. Over time, the individual solutions in the population become better and better, producing a final, best solution. The method uses terms derived from biology, such as generation, inheritance and mutation, to describe the manipulations the computer performs at each step of improvement; hence the name genetic algorithm.
The genetic algorithm is a probabilistic search algorithm that iteratively transforms a set (called a population) of mathematical objects (typically fixed-length binary strings), each with an associated fitness value, into a new population of offspring, using the Darwinian principle of natural selection together with operations patterned after naturally occurring genetic operations such as crossover (sexual recombination) and mutation. Virtually every technical discipline, from science and engineering to finance and economics, frequently encounters optimization problems. Although optimization techniques abound, they generally involve identifying the values of a sequence of explanatory parameters associated with the best performance of an underlying scalar-valued objective function. For example, in simple 3-D space this amounts to finding the (x, y) point associated with the optimal z value above or below the x-y plane, where the scalar z is a surface defined by the objective function f(x, y). Or it may involve estimating a large number of parameters of a more elaborate econometric model; for example, we might wish to estimate the coefficients of a generalized autoregressive conditional heteroskedastic (GARCH) model, in which the log-likelihood function is the objective to maximize. In each case the unknowns may be thought of as a parameter vector V, and the objective function z = f(V) as a transformation of a vector-valued input into a scalar-valued performance metric z. Optimization may take the form of a minimization or maximization procedure. Throughout this article, optimization refers to maximization without loss of generality, because maximizing f(V) is the same as minimizing -f(V). The preference for maximization is simply intuitive: genetic algorithms are based on evolutionary processes and Darwin's concept of natural selection.
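The iterative transformation just described can be sketched as a minimal single-objective GA in Python. This is an illustrative sketch, not the authors' implementation: the bit-string encoding, tournament selection, one-point crossover, and all parameter values are assumptions, and the fitness function used at the bottom (counting 1-bits, the classic OneMax toy problem) is only an example.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=50, generations=100,
                      crossover_rate=0.9, mutation_rate=0.01):
    """Maximize `fitness` over fixed-length bit strings (illustrative sketch)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]

        def select():
            # Tournament selection: the fitter of two random individuals survives.
            a, b = random.sample(scored, 2)
            return (a if a[0] >= b[0] else b)[1]

        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:  # one-point crossover
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                # Per-bit mutation: flip each bit with probability mutation_rate.
                offspring.append([bit ^ (random.random() < mutation_rate)
                                  for bit in child])
        pop = offspring[:pop_size]
    return max(pop, key=fitness)

# Example: maximize the number of 1-bits in a 16-bit string.
best = genetic_algorithm(fitness=sum)
```

As Section 2.2 notes, the same loop performs minimization if the fitness function is negated.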
In a GA context the objective function is usually referred to as a fitness function, and the phrase "survival of the fittest" implies a maximization procedure.

2.3 Sharing in MOO
Most experimental MOEAs incorporate phenotype-based sharing, using the distance between objective vectors for consistency. A sharing function [2] determines the degradation of an individual's fitness due to a neighbor at some distance dist. A sharing function sh is defined as a function of distance with the following properties: 0 <= sh(dist) <= 1 for all dist; sh(0) = 1; and sh(dist) -> 0 as dist -> infinity. Many sharing functions satisfy these conditions; one common choice is

sh(dist) = 1 - (dist / σ_share)^α,  if dist < σ_share
sh(dist) = 0,                       otherwise

The shared fitness of an individual x is given by eval'(x) = eval(x) / m(x), where m(x) is the niche count of the individual x:

m(x) = Σ_y sh(dist(x, y))

The sum over all y in the population includes the string x itself; consequently, if string x is all by itself in its own niche, its fitness value does not decrease (m(x) = 1). Otherwise, the fitness is decreased in proportion to the number and closeness of neighboring points. When many individuals occupy the same neighborhood they contribute to one another's niche counts, derating one another's fitness values. As a result, this technique limits the uncontrolled growth of particular species within a population. Sharing occurs only if both solutions are dominated, or both non-dominated, with respect to the comparison set. When a σ_share value is used in this way, the associated niche count is simply the number of vectors within σ_share in phenotype space, rather than a degradation value applied against unshared

fitness. The solution with the smaller niche count is selected for inclusion in the next generation. σ_share represents how close two individuals must be in order to decrease each other's fitness. This value commonly depends on the number of optima in the search space. As this number is generally unknown, and because the shape of PF_true within objective space is also unknown, σ_share's value is assigned using the method suggested by Fonseca (Fonseca and Fleming, 1998a):

N σ_share^(k-1) = [ Π_{i=1}^{k} (Δ_i + σ_share) - Π_{i=1}^{k} Δ_i ] / σ_share

where N is the number of individuals in the population, Δ_i is the difference between the maximum and minimum objective values in dimension i, and k is the number of distinct MOP objectives. As all variables but σ_share are known, it can easily be computed. For example, if k = 2, Δ_1 = Δ_2 = 1 and N = 50, the above equation simplifies to:

σ_share = (Δ_1 + Δ_2) / (N - 1) = 0.041

2.4 Pareto Optimality
When dealing with multi-objective optimization problems we normally look for trade-offs rather than single solutions; the notion of optimum is therefore different. In multi-objective optimization the notion of optimality must interrelate the relative values of the different criteria (if we want to compare apples with oranges, we must come up with a different definition of optimality). The most commonly adopted notion is the one originally proposed by Vilfredo Pareto, and we use the term Pareto optimum: for minimization, a point x* is Pareto optimal if there exists no feasible x such that

f_i(x) <= f_i(x*) for all i = 1, ..., k, and f_j(x) < f_j(x*) for at least one j.

3. REVIEW OF MOO APPROACHES
VEGA. David Schaffer (1985) proposed an approach called the Vector Evaluated Genetic Algorithm (VEGA), which differed from the simple genetic algorithm (GA) only in the way selection was performed. The selection operator was modified so that at each generation a number of subpopulations was generated by performing proportional selection according to each objective function in turn. Thus, for a problem with k objectives and a population of size M, k subpopulations of size M/k each would be generated.
These subpopulations were then shuffled together to obtain a new population of size M, to which the GA applied the crossover and mutation operators in the usual way. This approach proved well suited to some problems, but it still suffers from a "middling" effect.

MOGA. Fonseca and Fleming (1993) proposed the Multi-Objective Genetic Algorithm (MOGA), a scheme in which the rank of an individual corresponds to the number of individuals in the current population by which it is dominated. All non-dominated individuals are assigned rank 1, while dominated ones are penalized according to the population density of the corresponding region of the trade-off surface. Its main weakness is its dependence on the sharing factor (maintaining diversity is the main issue in evolutionary multi-objective optimization).

NPGA. Horn et al. (1994) proposed the Niched Pareto Genetic Algorithm, which uses a tournament selection scheme based on Pareto dominance. Two individuals are compared against a set of members of the population (typically 10% of the population size). When both competitors are either dominated or non-dominated (i.e., when there is a tie), the result of the tournament is decided through fitness sharing in the objective domain (a technique called equivalence-class sharing was used in this case) (Horn et al., 1994). Its main weakness is that, besides requiring a sharing factor, the approach also requires an additional parameter: the size of the tournament.

NSGA. The Non-dominated Sorting Genetic Algorithm (NSGA) was proposed by Srinivas and Deb (1994) and is based on several layers of classification of the individuals. Before selection is performed, the population is ranked on the basis of domination (Pareto ranking): all non-dominated individuals are classified into one category (with a dummy fitness value proportional to the population size).
To maintain the diversity of the population, these classified individuals share their dummy fitness values (in decision-variable space). This group of classified individuals is then removed from the population and another layer of non-dominated individuals is considered (i.e., the remainder of the population is re-classified). The process continues until all individuals in the population are classified. Since individuals in the first front have the maximum fitness value, they always get more copies than the rest of the population. Some researchers have reported, however, that NSGA has lower overall performance than MOGA, and it also seems to be more sensitive to the value of the sharing factor than MOGA (Coello, 1996; Veldhuizen, 1999). A successor to NSGA, NSGA-II, was proposed by Deb et al. and is more efficient than NSGA.

Recent approaches. Recently, several new evolutionary multi-objective optimization approaches have been developed, notably PAES and SPEA. The Pareto Archived Evolution

Strategy (PAES) was introduced by Knowles and Corne (2000a). This approach is very simple: it uses a (1+1) evolution strategy (i.e., a single parent that generates a single offspring) together with a historical archive that records all non-dominated solutions previously found (the archive is used as a comparison set, in a way analogous to the tournament competition in NPGA). The Strength Pareto Evolutionary Algorithm (SPEA) was introduced by Zitzler and Thiele (1999). This approach was conceived as a way of integrating different evolutionary multi-objective optimization techniques.

4. PROPOSED APPROACH
4.1 Proposed New Approach
The main idea behind the proposed approach is taken from VEGA and NSGA. In VEGA, the initial population of size M is first divided into k subpopulations (each of size M/k), each selected according to one of the k separate objectives. In our approach, the population is instead divided into k-1 subpopulations, each of size M/(k-1), where M and k stand for the same quantities as in VEGA. Suppose the objective functions are f1, f2, f3, ..., fk. The first subpopulation is created with respect to the performance on f1 and f2, the second with respect to f2 and f3, and so on; the (k-1)-th subpopulation is created from fk-1 and fk. Every subpopulation is then ranked and its fitness shared (a notion analogous to NSGA) to ensure the maintenance of population diversity and of non-dominated individuals. Enumerate these subpopulations as s11, s12, s13, ..., s1,k-1, each of size M/(k-1). In the next step, we create k-2 subpopulations from s11, s12, s13, ..., s1,k-1. The first of these (enumerated s21) is derived from elite members (non-dominated solutions with respect to the pairs f1, f2 and f2, f3) of subpopulations s11 and s12: we take two elite individuals from s11 and s12 respectively and apply crossover.
This procedure is iterated until M/(k-2) individuals fill the s21 subpopulation. The remaining subpopulations s22, s23, ..., s2,k-2 are created in the same way. In the next step, k-3 subpopulations are generated (each of size M/(k-3)). At every step, fitness is shared among the individuals of every subpopulation, and non-dominated individuals receive relatively high fitness. This iteration (ranking, fitness sharing, crossover and merging) stops when everything merges into a single population of size M; the iteration runs up to k-1 times if the total number of objectives is k. The overall process is illustrated in Figure 1.

Figure 1: Stepwise subdivision and merging of the subpopulations.

4.2 Underlying Philosophy
In this new approach, at the first step we select a subpopulation that thrives with respect to f1 and f2; the next subpopulation performs best for f2 and f3. If we take the elite individuals from these two subpopulations and apply crossover, the offspring may naturally achieve good performance with respect to f1, f2 and f3. After the overall iteration, the newly generated population (of size M) may perform well on all k objectives. After ranking and fitness sharing (according to non-domination), the last generation may contain the Pareto-optimal points that are the goal of our search.

4.3 Disadvantages of Some Prior Approaches
1) In VEGA, with k objective functions and a population of size M, each subpopulation has size M/k. In the next step the subpopulations are shuffled and the VEGA operators applied to them. After shuffling we never obtain fitter values for each objective separately; VEGA effectively averages them, whereas our aim is to obtain gradually fitter values, which our technique strictly pursues. This problem in VEGA is called middling performance. 2) NSGA has lower overall performance and seems to be more sensitive to the value of the sharing factor.
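The subdivision-and-merge idea of Section 4.1 can be sketched in Python. This is an interpretation under stated assumptions, not the paper's implementation: plain dominance filtering stands in for the ranking-and-fitness-sharing step, an arithmetic crossover replaces the 2-point crossover, the three objective functions and the round-robin initial split are illustrative, and all names (`pairwise_merge_step`, etc.) are ours.

```python
import random

def dominates(u, v):
    """True if objective vector u Pareto-dominates v (all objectives minimized)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(pop, objs):
    """Members of pop that no other member dominates w.r.t. the given objectives."""
    vals = [tuple(f(x) for f in objs) for x in pop]
    return [x for x, vx in zip(pop, vals)
            if not any(dominates(vy, vx) for vy in vals)]

def crossover(x, y):
    """Arithmetic crossover on real-valued decision vectors (illustrative)."""
    a = random.random()
    return [a * xi + (1 - a) * yi for xi, yi in zip(x, y)]

def pairwise_merge_step(subpops, objs, size):
    """Merge adjacent subpopulations by crossing over their elite members."""
    merged = []
    for i in range(len(subpops) - 1):
        elite_a = nondominated(subpops[i], objs[i:i + 2])        # elites w.r.t. (f_i, f_{i+1})
        elite_b = nondominated(subpops[i + 1], objs[i + 1:i + 3])  # elites w.r.t. (f_{i+1}, f_{i+2})
        child = []
        while len(child) < size:
            child.append(crossover(random.choice(elite_a), random.choice(elite_b)))
        merged.append(child)
    return merged

# Illustrative run with k = 3 hypothetical objectives over points in [-5, 10]^2.
objs = [lambda p: p[0] ** 2 + p[1] ** 2,
        lambda p: (p[0] - 5) ** 2 + (p[1] - 5) ** 2,
        lambda p: (p[0] + 5) ** 2 + (p[1] - 5) ** 2]
M = 60
k = len(objs)
pop = [[random.uniform(-5, 10), random.uniform(-5, 10)] for _ in range(M)]
# Plain round-robin split into k-1 subpopulations for brevity; the paper forms
# each subpopulation according to performance on its overlapping objective pair.
subpops = [pop[j::k - 1] for j in range(k - 1)]
while len(subpops) > 1:
    subpops = pairwise_merge_step(subpops, objs,
                                  size=M // max(len(subpops) - 1, 1))
final = nondominated(subpops[0], objs)  # candidate Pareto-optimal set
```

With k = 3 the loop performs a single merge step; for larger k it repeats until one population of size M remains, as described above.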
4.4 Strengths
This approach has some computational strengths:

i. The fitness measure in the first step can be done using a min-max formulation or a distance function (some non-linear fitness-measuring scheme should also be considered).
ii. After the first iteration, the NSGA procedure can be applied to achieve more accurate results (i.e., the approach can be embedded into the classification phase of NSGA).
iii. Some parameters, such as the population size (M) and the generation count (k-1), can be predicted.
iv. In future implementations, niche methods and crowding can easily be applied.
v. Parallel implementation is also possible.

5. IMPLEMENTATION
5.1 Algorithm
Initialize the population with random values
For i = 1 to MAXGENS
    Evaluate each subpopulation based on its objective functions
    Assign shared fitness within each subpopulation
    Rank each subpopulation by shared fitness value (best fit = highest rank)
    Apply 2-point crossover between consecutive subpopulations
    Merge step by step until one final population is obtained
End Loop
End

5.2 Test Functions
Among the many known MOEA test functions, we applied our approach to the following problem:

F = (f1(x, y), f2(x, y)), where -5 <= x, y <= 10,
f1(x, y) = x^2 + y^2,
f2(x, y) = (x - 5)^2 + (y - 5)^2

From the obtained results it is evident that this method allows the function to converge very quickly.

5.3 Results
Snapshots of different generations are given below; Function 1 and Function 2 represent f1 and f2 respectively.

6. CONCLUSION
Although a number of classical multi-objective optimization techniques exist, they require some a priori problem information. Since genetic algorithms use a population of points, they may be able to find multiple Pareto-optimal solutions simultaneously. Schaffer's Vector Evaluated Genetic Algorithm (VEGA) and Deb's Non-dominated Sorting Genetic Algorithm (NSGA) show

excellent results in many test cases, but they are still not free from shortcomings. This proposal presents a new approach to solving multi-objective optimization problems, and we hope the research, once carried out, will prove successful.

7. FUTURE PLAN
Our future plan is to measure the performance of the approach using tests such as different statistical tests, Error Ratio (ER), Two Set Coverage (CS), Generational Distance (GD), Maximum Pareto Front Error (ME), Average Pareto Front Error (AE), and Hyperarea and Ratio (H, HR).

REFERENCES
[1] David E. Goldberg, "Genetic Algorithms in Search, Optimization and Machine Learning", Pearson Education Asia Ltd, New Delhi, 2000.
[2] Z. Michalewicz, "Genetic Algorithms + Data Structures = Evolution Programs", 3rd edn., Springer-Verlag, Berlin Heidelberg New York, 1996.
[3] Carlos A. Coello Coello, David A. Van Veldhuizen and Gary B. Lamont, "Evolutionary Algorithms for Solving Multi-Objective Problems", Kluwer Academic Publishers, ISBN 0306467623, May 2002.
[4] R. Sarker, M. Mohammadian and X. Yao, "Evolutionary Optimization", Management and Operation Research Series, Kluwer Academic Publishers.
[5] N. Srinivas and K. Deb, "Multiobjective Optimization Using Non-Dominated Sorting Genetic Algorithm", Kanpur Genetic Algorithms Laboratory (KanGAL), Indian Institute of Technology (IIT), Kanpur, India.
[6] K. Deb, "Genetic Algorithms for Optimization", KanGAL Report No. 2001002, 2001.
[7] K. Deb, "Single and Multi-Objective Optimization Using Evolutionary Computation", Department of Mechanical Engineering, Kanpur Genetic Algorithms Laboratory (KanGAL), KanGAL Report No. 2004002, Indian Institute of Technology (IIT), Kanpur, India.
[8] P. K. Shukla and K. Deb, "On Finding Multiple Pareto-Optimal Solutions Using Classical and Evolutionary Generating Methods", KanGAL Report No. 2005006, August 2005.

AUTHORS
Md. Saddam Hossain Mukta obtained his M.Sc. degree in Computer Science from the University of Trento, Italy, where he received the Opera Universita scholarship, and earned a B.Sc. degree in Computer Science and Information Technology from Islamic University of Technology (IUT), Gazipur, Bangladesh in 2006. His research interests focus mainly on the Semantic Web, social computing, software engineering, HCI, image processing, web mining, and data and knowledge management. Currently he is a Lecturer in the Dept. of Computer Science and Engineering (CSE), Bangladesh University of Business and Technology (BUBT).

T.M. Rezwanul Islam obtained his B.Sc. degree in Computer Science and Information Technology from Islamic University of Technology (IUT), Gazipur, Bangladesh in 2011. He received the OIC (Organization of the Islamic Conference) scholarship for three years during his B.Sc. studies. His research interests focus mainly on AI, evolutionary computation, software engineering, HCI, image processing, web mining, ubiquitous computing, and cognitive and computational neuroscience. Currently he is a Lecturer in the Dept. of Computer Science and Engineering (CSE), Bangladesh University of Business and Technology (BUBT).

Sadat Maruf Hasnayen obtained his B.Sc. degree in Computer Science and Information Technology from Islamic University of Technology (IUT), Gazipur, Bangladesh in 2011. He received the OIC (Organization of the Islamic Conference) scholarship for three years during his B.Sc. studies. His research interests focus mainly on AI, evolutionary computation, and software engineering.