Applying Mechanism of Crowd in Evolutionary MAS for Multiobjective Optimisation

Marek Kisiel-Dorohinicki*   Krzysztof Socha†   Adam Gagatek‡

* Institute of Computer Science, University of Mining and Metallurgy, Kraków, Poland, e-mail: doroh@agh.edu.pl
† Free University of Brussels, Belgium, e-mail: socha@helios.iihe.ac.be
‡ Logica pdv GmbH, Hamburg, Germany, e-mail: gagateka@logica.com

Abstract

This work introduces a new evolutionary approach to searching for a global solution, in the Pareto sense, to a multiobjective optimisation problem. The novelty of the proposed method consists in the application of an evolutionary multi-agent system (EMAS) instead of classical evolutionary algorithms. Decentralisation of the evolution process in EMAS allows for intensive exploration of the search space, and the use of the mechanism of crowd allows for effective approximation of the whole Pareto frontier. The paper presents a description of the technique and reports preliminary experimental results.

Keywords: multiobjective optimisation, evolutionary computation, multi-agent systems.

1 Introduction

Although evolutionary computation (EC) and multi-agent systems (MAS) have gained a lot of interest during the last decade, many aspects of their functionality still remain open. The problems become even more complicated when considering systems that utilise both the evolutionary and the agent paradigm. Building and applying such systems may be a thorny task, but it often opens new possibilities for solving difficult kinds of problems. Also, as for other hybrid systems, one approach may help the other in attaining its own goals. This is the case when an evolutionary algorithm is used by an agent to aid the realisation of some of its tasks (e.g. connected with learning or reasoning [8]) or to support the coordination of some group (team) activity (e.g. planning [5]). An evolutionary multi-agent system (EMAS) is an example of the opposite case, where a multi-agent system helps evolutionary computation by providing mechanisms that allow for the decentralisation of the solving process (evolution). Thus EMAS may be considered a computational technique utilising a decentralised model of evolution that extends the classical evolutionary algorithm [2].

The key idea of EMAS is the incorporation of evolutionary processes into a MAS at the population level. This means that, besides interaction mechanisms typical of MAS (such as communication), agents are able to reproduce (generate new agents) and may die (be eliminated from the system). A decisive factor of an agent's activity is its fitness, expressed by the amount of a possessed non-renewable resource called life energy. Selection is realised in such a way that agents with high energy are more likely to reproduce, while a low level of energy increases the possibility of death.
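To make the life-energy idea concrete, here is a minimal sketch in Python (not the authors' implementation; the class name and the two thresholds are illustrative assumptions) of an agent that carries an inherited genotype and a store of life energy, with reproduction and death gated by the energy level:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A minimal EMAS agent: an inherited genotype plus a store of life energy."""
    genotype: list    # real-valued decision variables (a feasible solution)
    energy: float     # non-renewable life energy, gained and lost through actions

    def can_reproduce(self, reproduction_threshold: float = 1.5) -> bool:
        # A high energy level makes reproduction possible (threshold is assumed).
        return self.energy >= reproduction_threshold

    def should_die(self, death_threshold: float = 0.1) -> bool:
        # A low energy level leads to elimination from the system (threshold is assumed).
        return self.energy <= death_threshold
```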

[Figure 1: Evolutionary MAS — agents ag1–ag9, each carrying its own genotype, inhabiting a common environment]

Based on this model, a new evolutionary approach to searching for a global solution to a multiobjective optimisation problem may be proposed [7]. In this particular case each agent represents a feasible solution to the given optimisation problem. By means of communication, agents acquire information which allows them to determine the (non-)domination relation with respect to the others. Dominated agents then transfer a fixed amount of energy to their dominants. This way non-dominated agents should represent successive approximations of the Pareto set. Additionally, the introduction of the mechanism of crowd allows for a uniform sampling of the whole frontier. Below, a short description of these ideas and their implementation is presented. The experimental results show the influence of the crowding factor on the performance of the system applied to several test problems.

2 Evolutionary Multi-Agent Systems

While different forms of classical evolutionary computation use specific representations, variation operators, and selection schemes, they all employ a similar model of evolution: they work on a given number of data structures (population) and repeat the same cycle of processing (generation), consisting of the selection of parents and the production of offspring using mutation and recombination operators (a minimal sketch of this cycle is given after the list below). Yet this model of evolution is much simplified and lacks many important features observed in organic evolution [1], e.g.:

- dynamically changing environmental conditions,
- many criteria under consideration,
- neither global knowledge nor generational synchronisation assumed,
- co-evolution of species,
- an evolving genotype-phenotype mapping.
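For contrast with the decentralised model introduced next, the following is a minimal sketch of the classical generational cycle described above (fitness-proportional parent selection with user-supplied recombination and mutation operators; all names are illustrative assumptions, not code from the paper):

```python
import random

def generation(population, fitness, recombine, mutate):
    """One cycle of the classical model: evaluate the whole population, select
    parents fitness-proportionally, and produce the next generation by
    recombination and mutation. Illustrative sketch; the representation and the
    operators are supplied by the caller."""
    scores = [max(fitness(ind), 0.0) for ind in population]
    total = sum(scores) or 1.0

    def pick_parent():
        # Roulette-wheel selection over the globally known, synchronised population.
        r = random.uniform(0.0, total)
        acc = 0.0
        for ind, s in zip(population, scores):
            acc += s
            if acc >= r:
                return ind
        return population[-1]

    return [mutate(recombine(pick_parent(), pick_parent()))
            for _ in range(len(population))]
```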

At least some of these shortcomings may be avoided by utilising the idea of decentralised evolutionary computation, which may be realised as an evolutionary multi-agent system, described below.

Following neo-Darwinian paradigms, the two main components of the process of evolution are inheritance (with random changes of genetic information by means of mutation and recombination) and selection. They are realised by the phenomena of death and reproduction, which may be easily modelled as actions executed by agents (fig. 1):

- the action of death results in the elimination of the agent from the system,
- the action of reproduction is simply the production of a new agent from its parent(s).

Inheritance is accomplished by an appropriate definition of reproduction, which is similar to classical evolutionary algorithms. The set of parameters describing the basic behaviour of the agent is encoded in its genotype and is inherited from its parent(s) with the use of mutation and recombination (a minimal sketch of such a reproduction action appears at the end of this section). Besides this, the agent may possess some knowledge acquired during its life, which is not inherited. Both the inherited and the acquired information determine the behaviour of the agent in the system (its phenotype).

Selection is the most important and most difficult element of the model of evolution employed in EMAS. This is due to the assumed lack of global knowledge (which makes it impossible to evaluate all individuals at the same time) and the autonomy of agents (which means that reproduction proceeds asynchronously). In such a situation the selection mechanisms known from classical evolutionary computation cannot be used. The proposed principle of selection corresponds to its natural prototype and is based on the existence of a non-renewable resource called life energy. Energy is gained and lost when the agent executes actions in the environment. An increase in energy is a reward for good behaviour of the agent, a decrease a penalty for bad behaviour (which behaviour is considered good or bad depends on the particular problem to be solved). At the same time the level of energy determines which actions the agent is able to execute. In particular, a low energy level should increase the possibility of death and a high energy level should increase the possibility of reproduction. A more precise description of this model and its advantages may be found in [2, 7, and others]. In short, EMAS should enable the following:

- local selection allows for intensive exploration of the search space, similarly to parallel evolutionary algorithms,
- the way the phenotype (behaviour of the agent) is developed from the genotype (inherited information) depends on its interaction with the environment,
- self-adaptation of the population size is possible when appropriate selection mechanisms are used.

What is more, an explicitly defined living space facilitates implementation in a distributed computational environment.
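As a rough illustration of how the reproduction action can realise inheritance together with the energy bookkeeping, the sketch below reuses the hypothetical Agent class from the earlier sketch; the choice of uniform recombination, Gaussian mutation, and the energy share passed to the child are assumptions, not details given in the paper:

```python
import random

def reproduce(parent_a, parent_b, mutation_rate=0.1, sigma=0.1, child_share=0.25):
    """Reproduction as an agent action: the child's genotype is inherited through
    recombination and mutation, and each parent endows the child with a fraction
    of its life energy (child_share is an assumed parameter)."""
    # Inheritance: recombine the parents' genotypes gene by gene ...
    genotype = [a if random.random() < 0.5 else b
                for a, b in zip(parent_a.genotype, parent_b.genotype)]
    # ... and apply random changes (mutation).
    genotype = [g + random.gauss(0.0, sigma) if random.random() < mutation_rate else g
                for g in genotype]
    # The child starts with energy taken from its parents (the resource is non-renewable).
    endowment = child_share * (parent_a.energy + parent_b.energy)
    parent_a.energy -= child_share * parent_a.energy
    parent_b.energy -= child_share * parent_b.energy
    return Agent(genotype=genotype, energy=endowment)
```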

3 Evolutionary techniques of multiobjective optimisation

Decision making and many other tasks of human activity described by non-comparable factors may be mathematically formulated as multiobjective optimisation problems. The terms multiobjective or multicriteria indicate that the classical notion of optimality becomes ambiguous, since decisions which optimise one criterion need not optimise the others. The notion of Pareto-optimality is based on the (non-)domination of solutions (which corresponds to the weak order of vectors in the evaluation space) and in the general case leads to the selection of multiple alternatives.

The multiobjective optimisation problem may be described as follows. Let the input variables be represented by a real-valued vector:

  $\vec{x} = [x_1, x_2, \ldots, x_N]^T \in \mathbb{R}^N$   (1)

where $N$ gives the number of variables. Then a subset of $\mathbb{R}^N$ of all possible (feasible) decision alternatives (options) can be defined by a system of:

- inequalities (constraints): $g_k(\vec{x}) \ge 0$, $k = 1, 2, \ldots, K$,
- equalities (bounds): $h_l(\vec{x}) = 0$, $l = 1, 2, \ldots, L$,

and is denoted by $D$. The alternatives are evaluated by a system of $M$ functions (outcomes), denoted here by the vector $F = [f_1, f_2, \ldots, f_M]^T$:

  $f_m : \mathbb{R}^N \to \mathbb{R}, \quad m = 1, 2, \ldots, M$   (2)

The key issue of optimality in the Pareto sense is the relation of domination. Alternative $\vec{x}_a$ is dominated by $\vec{x}_b$ if and only if:

  $\forall m\; f_m(\vec{x}_a) \le f_m(\vec{x}_b) \quad \text{and} \quad \exists m\; f_m(\vec{x}_a) < f_m(\vec{x}_b)$   (3)

The relation of domination corresponds to the weak order of vectors in the evaluation space (given by the values of $F$). A solution to the multiobjective optimisation problem so defined (in the Pareto sense) means the determination of all non-dominated alternatives from $D$: the Pareto set, or Pareto frontier.

In the general case (i.e. when no particular class of objective and constraint functions is assumed) an effective approximation of the Pareto set is hard to obtain. For specific types of criteria and constraints (e.g. linear ones) some methods are known, but even in low-dimensional cases they require much computational effort. For complex problems, involving multimodal or discontinuous criteria, disjoint feasible spaces, noisy function evaluation, etc., an evolutionary approach (e.g. a genetic algorithm) may be applied; for a detailed survey of evolutionary multicriteria optimisation techniques see [4, 3].

4 Crowd in EMAS for Multiobjective Optimisation

As stated in the Introduction, this particular EMAS should search for a set of points which constitute an approximation of the Pareto frontier for a given multicriteria optimisation problem. The population of agents represents feasible solutions to the problem defined by a system of objective functions. The agents act according to the rules of EMAS operation described above. The most important element of the process is the realisation of the energetic reward/punishment mechanism, which should favour non-dominated agents. This is done via the domination energy transfer principle (in short: the domination principle), which forces dominated agents to give a fixed amount of their energy to the encountered dominants. This may happen when two agents inhabiting one place communicate with each other and obtain information about their quality with respect to each objective function. The flow of energy connected with the domination principle makes dominating agents more likely to reproduce, whereas dominated ones are more likely to die. This way, in successive generations, non-dominated agents should make up better approximations of the Pareto frontier.
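The domination principle can be sketched directly from relation (3). The following illustrative code (not the authors' implementation; the transferred amount is an assumed value, the paper only states that it is fixed) checks domination between two agents meeting at one place and moves energy from the dominated agent to its dominant:

```python
def dominates(f_a, f_b):
    """Relation (3): outcome vector f_a dominates f_b when f_b is nowhere better
    and f_a is strictly better on at least one objective (maximisation reading)."""
    return (all(fb <= fa for fa, fb in zip(f_a, f_b))
            and any(fb < fa for fa, fb in zip(f_a, f_b)))

def domination_energy_transfer(agent_a, agent_b, objectives, amount=0.2):
    """Domination principle between two agents meeting at the same place:
    the dominated agent gives a fixed amount of its life energy to the dominant."""
    f_a = [f(agent_a.genotype) for f in objectives]
    f_b = [f(agent_b.genotype) for f in objectives]
    if dominates(f_a, f_b):
        delta = min(amount, agent_b.energy)   # cannot give more than it owns
        agent_b.energy -= delta
        agent_a.energy += delta
    elif dominates(f_b, f_a):
        delta = min(amount, agent_a.energy)
        agent_a.energy -= delta
        agent_b.energy += delta
```

Note that this purely local exchange is what replaces global fitness comparison: non-dominated agents slowly accumulate the energy needed to reproduce without any agent ever ranking the whole population.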

The idea behind introducing the mechanism of crowd was to discourage agents from creating large clusters of similar solutions at some points of the Pareto frontier (this is quite similar to the ideas presented by De Jong [6, and others]). Instead, they should be distributed rather uniformly over the whole frontier. Also, in the case of problems for which the Pareto set consists of several disjoint parts, this mechanism should improve the ability of agents to cover a wide area of the search space and discover the other parts of the frontier.

The mechanism of crowd is controlled by a parameter called the crowding factor, which describes how agents representing similar solutions to the problem behave. A larger value of the crowding factor indicates less tolerance for similar solutions; for smaller values this tendency is weaker, up to its disappearance for a crowding factor equal to 0. This way the existence of a certain crowd in the solution space is simulated.
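The paper does not give a formula for the crowd mechanism, so the following is only one possible reading, sketched under stated assumptions: agents whose outcomes lie close together pay an energy penalty that grows with the crowding factor, and a factor of 0 disables the effect entirely. The similarity radius and the energy unit are illustrative parameters, not values from the paper.

```python
import math

def crowd_interaction(agent_a, agent_b, objectives, crowding_factor,
                      similarity_radius=0.1, energy_unit=0.05):
    """Hypothetical crowd mechanism: when two meeting agents represent similar
    solutions, both lose energy, and the loss scales with the crowding factor."""
    f_a = [f(agent_a.genotype) for f in objectives]
    f_b = [f(agent_b.genotype) for f in objectives]
    distance = math.dist(f_a, f_b)   # similarity measured in the evaluation space
    if crowding_factor > 0 and distance < similarity_radius:
        penalty = energy_unit * crowding_factor * (1.0 - distance / similarity_radius)
        agent_a.energy = max(0.0, agent_a.energy - penalty)
        agent_b.energy = max(0.0, agent_b.energy - penalty)
```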

5 Experimental Results

Several tests were performed on a number of different optimisation problems, and various system parameters were checked in order to establish the influence of the crowding factor on the system performance. It was established that there is a substantial relation between the factor and the system's operation.

The first observation is that small values of the crowding factor improve the system performance on almost every test problem. The agents find more points of the Pareto frontier than in the cases when the mechanism of crowd was disabled, i.e. when the crowding factor was equal to 0 (fig. 2).

[Figure 2: The influence of the crowding factor on the system performance in the case of a coherent Pareto frontier — number of Pareto-optimal solutions found and average minimal distance between solutions, plotted against the crowding factor]

The second effect concerns only those problems which have a fairly large number of disjoint parts of the Pareto frontier. In that case a rather high value of the crowding factor (relative to the distance between the separate parts of the Pareto frontier) allows the system to find these disjoint areas more efficiently (fig. 3).

[Figure 3: The influence of the crowding factor on the system performance in the case of a Pareto frontier consisting of several disjoint parts — number of Pareto-optimal solutions found and average minimal distance between solutions, plotted against the crowding factor]

As shown in fig. 2 and fig. 3, the crowding factor also influences the average minimal distance between the solutions (this value was computed over the whole set of non-dominated solutions). A lower value indicates that the gaps between the solutions found are smaller. If these gaps are smaller, then the distribution is more uniform and better reflects the real Pareto frontier, which in turn makes it possible to recognise disjoint areas of the Pareto frontier (fig. 4).
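The average minimal distance reported in figs. 2 and 3 can be computed as sketched below; this is a straightforward reading of the description in the text, and the space in which the distances are measured (decision or objective space) is an assumption, since the paper does not state it.

```python
import math

def average_minimal_distance(solutions):
    """For every non-dominated solution, take the distance to its nearest
    neighbour in the set, and average these minima. Lower values mean smaller
    gaps, i.e. a more uniform cover of the frontier. Each solution is a point
    given as a tuple of coordinates."""
    if len(solutions) < 2:
        return 0.0
    minima = []
    for i, p in enumerate(solutions):
        nearest = min(math.dist(p, q) for j, q in enumerate(solutions) if j != i)
        minima.append(nearest)
    return sum(minima) / len(minima)
```

For example, average_minimal_distance([(0.0, 1.0), (0.5, 0.6), (1.0, 0.2)]) returns the mean nearest-neighbour gap of three points on a front; evenly spaced points give a value close to the common spacing, while clustered points pull it down.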

[Figure 4: The crowding factor's role in determining discontinuity of the Pareto frontier — non-dominated outcomes plotted in the $(f_1, f_2)$ plane for the test problem with objectives $f_1(x,y,z,t) = -(x-2)^2 - (y+3)^2 - (z-5)^2 - (t-4)^2 + 5$ and $f_2(x,y,z,t) = \dfrac{\sin x + \sin y + \sin z + \sin t}{1 + (\frac{x}{10})^2 + (\frac{y}{10})^2 + (\frac{z}{10})^2 + (\frac{t}{10})^2}$]

6 Concluding remarks

The proposed idea of an evolutionary multi-agent system for multiobjective optimisation proved to work in a number of tests. It is still too early to compare this method with the various other heuristics supporting decision making known from the literature. Yet the preliminary results show significant advantages over other techniques regarding adaptation to a particular problem, which is mainly performed by the system itself.

In most of the problems investigated, the introduction of the mechanism of crowd improved the system performance with respect to the distribution of the solutions on the Pareto frontier. Of course, it is impossible to guarantee that some value of the crowding factor gives a perfectly uniform distribution, and hence absolute certainty about the shape of the Pareto frontier. Yet in most cases it allows the frontier to be estimated much better.

Further research should concern the effectiveness of the proposed approach, especially for difficult problems (many dimensions, multimodal or discontinuous criteria, etc.). Several extensions of the evolutionary process (such as aggregation), applied to EMAS in many other application domains, should also be considered.

References

[1] T. Bäck, U. Hammel, and H.-P. Schwefel. Evolutionary computation: Comments on the history and current state. IEEE Transactions on Evolutionary Computation, 1(1), 1997.

[2] K. Cetnarowicz, M. Kisiel-Dorohinicki, and E. Nawarecki. The application of evolution process in multi-agent world (MAW) to the prediction system. In M. Tokoro, editor, Proc. of the 2nd Int. Conf. on Multi-Agent Systems (ICMAS'96). AAAI Press, 1996.

[3] C. A. Coello Coello. An updated survey of evolutionary multiobjective optimization techniques: State of the art and future trends. In P. J. Angeline, Z. Michalewicz, M. Schoenauer, X. Yao, and A. Zalzala, editors, Proceedings of the Congress on Evolutionary Computation, volume 1. IEEE Press, 1999.

[4] C. M. Fonseca and P. J. Fleming. An overview of evolutionary algorithms in multiobjective optimization. Evolutionary Computation, 3(1):1-16, 1995.

[5] T. Haynes and S. Sen. Crossover operators for evolving a team. In J. R. Koza, K. Deb, M. Dorigo, D. B. Fogel, M. Garzon, H. Iba, and R. L. Riolo, editors, Genetic Programming 1997: Proceedings of the Second Annual Conference. Morgan Kaufmann Publishers, 1997.

[6] K. A. De Jong. An analysis of the behaviour of a class of genetic adaptive systems. PhD thesis, University of Michigan, 1975.

[7] M. Kisiel-Dorohinicki, G. Dobrowolski, and E. Nawarecki. Evolutionary multi-agent system in multiobjective optimisation. In M. Hamza, editor, Proc. of the IASTED Int. Symp.: Applied Informatics. IASTED/ACTA Press, 2001.

[8] J. Liu and H. Qin. Adaptation and learning in animated creatures. In W. L. Johnson and B. Hayes-Roth, editors, Proc. of the 1st Int. Conf. on Autonomous Agents (Agents'97). ACM Press, 1997.