Improving AI for simulated cars using Neuroevolution

Adam Pace
School of Computing and Mathematics, University of Derby, Derby, UK

Abstract: A lot of games rely on very rigid Artificial Intelligence techniques, such as finite-state machines, that grow overly complex as the game worlds themselves do. This often leads to very static AI behaviour that is identical across a number of characters. Through the use of Neural Networks, this paper aims to show how these rigid controllers can be replaced with a more intelligent solution that gives better visual behaviour and can provide greater variety.

Keywords: Neuroevolution, Genetic Algorithms, Neural Networks

I. INTRODUCTION

Creating believable Artificial Intelligence for games is becoming a more important task as games seek further realism; it is not only becoming more important but also much harder. As the complexity of our game environments increases, for example through improved physics or larger environments, the AI needed becomes more complex too, as it must factor in these new variables. A lot of games within the industry rely on relatively simple approaches to their AI, using Finite State Machines mixed with smoke-and-mirror techniques to create the illusion of intelligence. There are notable exceptions that use more advanced techniques, such as F.E.A.R., which utilised STRIPS planning [1], or the Halo series with its use of behaviour trees [2] to create varied behaviours for multiple characters. Still, unlike traditional AI, it is acceptable for game AI to cheat or use workarounds to create an illusion of greater intelligence for the player, as both of these titles do.

Even with a relatively simple domain consisting of a top-down view of a grid road system, the addition of realistic, wheel-driven physics means that creating AI to simply navigate from point A to point B already becomes more complex when the cars can only accelerate, brake or steer; trying to avoid collisions and add more varied behaviour becomes increasingly difficult and time consuming. What if we could get the machine to learn how to drive a car along a pre-defined path, react to different circumstances and to the physics in play in the environment, or even work out its own path as well? This would give us much more power to create believable AI that deals with complex environments. This is what we wish to investigate: how we can use Evolutionary Algorithms to evolve Artificial Neural Networks (ANNs) that will act as the drivers of these cars.

II. TECHNICAL BACKGROUND

A. Neural Networks

An Artificial Neural Network is a programmer's attempt at recreating a real brain and how it processes information, but in a much smaller, simplified way. Like a brain, a Neural Network is made up of a number of neurons and connections; the network takes a series of input values which are passed along the connections and through the network to create a set of outputs. Neural Networks are very good at pattern recognition and can be applied to a number of fields such as data processing, robotics and Artificial Intelligence [3].

Fig. 1. Simple feed-forward Neural Network design with just one hidden layer.

As Figure 1 shows, a NN consists of a layer of input neurons, any number of hidden layers containing any number of neurons (in this case just one layer of 3 neurons), and finally an output layer.
The network in the diagram is fully connected, meaning every node is connected to every node in the next layer, though this does not always have to be the case. The connections are all weighted: each neuron sums the values that each connection gives it, after each value has been modified by the connection's weight, then passes this sum through its activation function and on to all of its connected neurons. The output layer does exactly the same, but holds on to its value, which forms part of the set of outputs used by whatever controller is making use of the network. As mentioned, each of these neurons has an activation function that its summed input must go through before being passed on.
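To make the forward pass concrete, the following is a minimal sketch of a fully connected feed-forward network in Python. The layer sizes, the random weights and the tanh activation are illustrative assumptions rather than the exact network described later in this paper.

```python
import math
import random

def tanh(x):
    # Gradient-style activation: maps the summed input to the range (-1, 1).
    return math.tanh(x)

def feed_forward(inputs, layers):
    """Propagate `inputs` through `layers`, a list of weight matrices.

    layers[k][j] holds the incoming connection weights of neuron j in
    layer k (one weight per neuron in the previous layer).
    """
    values = inputs
    for weight_matrix in layers:
        values = [tanh(sum(w * v for w, v in zip(weights, values)))
                  for weights in weight_matrix]
    return values

# Example: 6 inputs -> one hidden layer of 3 neurons -> 2 outputs,
# with randomly initialised weights (an illustrative network only).
random.seed(0)
hidden = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(3)]
output = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
print(feed_forward([0.5, -0.2, 0.1, 0.9, 0.0, -0.7], [hidden, output]))
```

Each call returns one output vector per set of inputs, which is all a per-frame game controller needs.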

This activation function is meant to represent the firing mechanism of an actual neuron, and it can take one of two forms. A neuron can either respond or not respond, i.e. always fire either 0 or 1, or it can react with a gradient, where a stronger input causes a stronger signal to be fired, allowing any value between 0 and 1, for example, to be output. The gradient form has clear benefits for Artificial Neural Networks, as it indicates how much a controller should react rather than simply whether to react. A typical activation function is the normalized Sigmoid function [4], which outputs a value between 0 and 1; plotted, it forms a simple S shape, staying near 0 for strongly negative inputs, rising quickly around zero and levelling off near 1 for large positive inputs. This function has its downfalls for certain networks, in particular that it does not output negative values. The other most common function is the hyperbolic tangent, tanh [5], which outputs between -1 and 1 or can be translated to the 0 to 1 range.

One typical way a Neural Network is trained is by observing its performance against a set of data and feeding the error between its output and the known correct result back into the network to correct it. This is known as supervised learning. The other way, with which we have more experience and which forms the basis for this paper, is through the use of Evolutionary Algorithms.

B. Evolutionary Algorithms

Evolutionary Algorithms (EAs) are a subset of Evolutionary Computation that use selection and modification methods inspired by real biological concepts. What is now referred to as evolutionary algorithms emerged in the 1950s and 1960s; however, it was Holland's 1975 work [6] where the idea of Genetic Algorithms (GAs) originated. A Genetic Algorithm borrows from biological evolution to find solutions to a given problem through a number of techniques. A GA consists of a population of chromosomes, each of which is a potential solution to the problem being solved. In this instance we could use a chromosome to represent the weightings for the connections of a Neural Network, but chromosomes could represent any number of things. A GA operates over a number of cycles, referred to as generations, once again borrowing the biological terminology. At the beginning of each generation a new population of chromosomes is created. The way this is done can differ, but it essentially involves using chromosomes from the previous generation as parents for the new generation; how these parents are selected and how they create the offspring varies across techniques.

So evolutionary algorithms rely on modifying this ever-changing population of potential solutions, but how do they do this? A key concept is the fitness score: each individual solution is evaluated on how well it solves the problem through a fitness function and its performance graded. This fitness function might be as simple as grading how close to a target the individual manages to get, or it may be slightly more complex and take into account a number of different factors, such as how many collisions it incurred and how long it took. How fitness is calculated can have a very large impact on the resulting chromosomes, and the algorithm will often come up with individuals that, whilst scoring highly according to the given fitness function, do not behave exactly as the designer intends! This fitness is used to influence the selection of parents for the next generation.
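As an illustration of the kind of fitness function described above, the sketch below grades a single trial run on progress towards a target while penalising collisions and elapsed time. The specific weights and scales are illustrative assumptions, not the functions used later in this paper.

```python
def fitness(distance_to_target, collisions, time_taken,
            route_length=1000.0, time_limit=60.0):
    """Grade one trial run on a 0..1 scale (higher is better).

    The relative weighting of progress, collisions and time is an
    illustrative choice; tuning it changes what behaviour evolves.
    """
    progress = 1.0 - min(distance_to_target / route_length, 1.0)
    collision_penalty = 0.05 * collisions
    time_penalty = 0.2 * min(time_taken / time_limit, 1.0)
    return max(progress - collision_penalty - time_penalty, 0.0)

# Example: a car that got 80% of the way there with two collisions in 45 s.
print(fitness(distance_to_target=200.0, collisions=2, time_taken=45.0))
```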
A popular selection method is roulette selection. Unlike methods where, say, only the top 5 percent of individuals are chosen, this gives every chromosome a chance of being selected, with a probability proportionate to its fitness. A random number is generated between 0 and the total fitness of the population, then the population is iterated over, summing the fitnesses as it goes. The chromosome at which the running total becomes greater than the randomly generated value is the one selected. This is repeated until the new generation is created [7]. This technique is good in that weaker individuals that might still have something to offer get a chance of being selected, so some variety is kept. Some parents may also be preserved unchanged in the new generation; known as elitism, this was presented by De Jong [8] as a way to prevent the destruction of high-fitness individuals.

Once the parents are selected we need some way to make them reproduce. This is done through genetic operators, which are once again based on how reproduction works in biology. The main approach is crossover between two parent chromosomes. For crossover a point is randomly selected somewhere along the length of the chromosomes; all the values up to this point are taken from Parent A and all the values after it from Parent B to form Offspring A, and the opposite is done, using the same crossover point, for Offspring B. This is known as single-point crossover and is a simplification of how we are created as a combination of our parents; it is possible to use any number of crossover points, though single-point is the most common form. Crossover alone would not get us far, as we would just keep remixing the same values, so we also need to add variance to the population. This is done through mutation: for each new chromosome there is a small chance that each of its values is mutated by a small amount, often by perturbing the value with a little Gaussian noise. Through this we create more variance within the population and help discover new and potentially better solutions.
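A compact sketch of these operators, under the assumption that a chromosome is simply a list of floating-point network weights, might look like the following. The mutation rate, noise scale and elite count are illustrative parameters.

```python
import random

def roulette_select(population, fitnesses):
    # Fitness-proportionate selection: spin a "wheel" sized by total fitness.
    spin = random.uniform(0, sum(fitnesses))
    running = 0.0
    for chromosome, fit in zip(population, fitnesses):
        running += fit
        if running >= spin:
            return chromosome
    return population[-1]  # Guard against floating-point rounding.

def crossover(parent_a, parent_b):
    # Single-point crossover producing two offspring.
    point = random.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(chromosome, rate=0.05, scale=0.1):
    # Each gene has a small chance of being nudged by Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in chromosome]

def next_generation(population, fitnesses, elite=2):
    # Elitism: carry the best individuals over unchanged.
    ranked = [c for _, c in sorted(zip(fitnesses, population),
                                   key=lambda pair: pair[0], reverse=True)]
    new_pop = ranked[:elite]
    while len(new_pop) < len(population):
        a = roulette_select(population, fitnesses)
        b = roulette_select(population, fitnesses)
        child_a, child_b = crossover(a, b)
        new_pop.extend([mutate(child_a), mutate(child_b)])
    return new_pop[:len(population)]
```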

III. PREVIOUS WORK

Creating intelligent controllers for cars has been a popular topic among researchers, both for simulated and RC vehicles. A lot of work has been done in the racing domain, where the emphasis is on speed: either how many laps a controller can manage in a given time, or simply how fast a lap it can manage. Numerous competitions have been hosted with this in mind [9]. Work by Togelius and Lucas [10] explored different evolutionary controllers for simulated car racing, concluding that a sensor-based Neural Network gave the most promising results. Further work [11] explored these ideas in greater depth and managed to evolve generalised racing controllers. The controllers they used were equipped with sensors that could read the environment around the car and work out distances to walls or the edge of the track; in some tests these sensors themselves could be evolved alongside the network, allowing for cars with further-reaching sensors at the cost of fidelity, or differently angled sensors. The controllers were also given the speed, the waypoint angle and a bias in order to navigate. To evolve generalised controllers that could race well on tracks they had not yet encountered, the networks needed to be trained on a selection of different tracks. The interesting way this was approached was to slowly introduce new, harder tracks as the fittest individual gained proficiency on the tracks already in the training set (initially just one).

A good example of a game that actually puts this to use is Forza. Forza's Drivatar [12] AI system uses neural networks to control its AI drivers. Each driver uses a different NN, and as such each driver has slightly different skill levels and driving tendencies, such as how they take a particular corner or how aggressively they try to overtake.

IV. THE GOAL

As we have already discussed, we wish to use Neural Networks to improve our AI. We have a very simple domain: a top-down view of a section of streets laid out in a grid-like pattern, i.e. all corners are right angles. In previous work we created cars that simulated real physics in order to move, creating a force that drives the rear wheels to move the car forward, as well as calculating accurate angular velocity in order to steer. As such it becomes much harder for AI, as well as for players, to drive these cars, as they now have to take into account a number of new factors such as speed and resistance. Originally a Hierarchical Finite State Machine (HFSM) was used to control the navigation of these cars, and whilst it performed relatively well, managing to control 50 individual cars on the map at once whilst maintaining a relatively good frame rate, it certainly had a number of drawbacks. For one, it was already starting to get cumbersome with around 6 or 7 states, and it did not manage to avoid car-on-car collisions, only attempting to slow down if another car was too close in front, though with more work this could have been added. What was more of a problem was that every single car drove exactly the same way, and they could only really handle the exact physics they were built for. If the road resistances changed slightly or the cars' speed changed, the map turned into a mess of cars all over the place. The goal here is to swap out the HFSM the cars currently use for an Artificial Neural Network that we can hopefully train to be much better: for one, it should be able to deal with varying physics and provide more human-like variety.

V. RESULTS

A. Initial Evolution

We began our investigations small, aiming to evolve a car that could navigate a small section of our map; this section was rather short and comprised a single right turn and two left turns. All of our Neural Networks have just two outputs: the first relates to acceleration and braking, using positive and negative values respectively, and the second is steering, where a negative value means turn left and a positive value means turn right. These outputs are passed directly into the control functions, so they directly control how much to steer and accelerate. Taking the lead from Togelius and Lucas, we used a sensor-based approach for our controllers. The network we had most success with comprised 6 inputs and 4 hidden layers of 5 neurons each. Our 6 inputs were the angle difference from the car's rotation to the next waypoint, the angle to the waypoint after that, the car's speed, and three collision sensors. The three sensors measured the distance to a blocked tile up to 25 pixels away; two started at the front corners of the car, one on the left and one on the right, travelling out at 45 degrees, with the third travelling directly forward.
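To show how such a controller plugs into the game loop, here is a hedged sketch of building the six-input vector and applying the two outputs each frame. The car dictionary, the cast_sensor helper and the normalisation constants are hypothetical stand-ins for whatever the game engine actually provides; only the choice and meaning of the inputs and outputs follow the description above.

```python
import math

SENSOR_RANGE = 25.0   # pixels, matching the sensors described above
MAX_SPEED = 200.0     # hypothetical normalisation constant

def angle_to(position, target):
    # Absolute angle (radians) from the car's position to a waypoint.
    return math.atan2(target[1] - position[1], target[0] - position[0])

def angle_difference(a, b):
    # Signed smallest difference between two angles, in [-pi, pi].
    return (b - a + math.pi) % (2 * math.pi) - math.pi

def build_inputs(car, next_wp, following_wp, cast_sensor):
    """Assemble the six network inputs for one frame.

    `cast_sensor(car, angle)` is assumed to return the distance to the
    nearest blocked tile along that ray, capped at SENSOR_RANGE.
    """
    return [
        angle_difference(car["rotation"], angle_to(car["position"], next_wp)),
        angle_difference(car["rotation"], angle_to(car["position"], following_wp)),
        car["speed"] / MAX_SPEED,
        cast_sensor(car, -math.pi / 4) / SENSOR_RANGE,  # front-left, 45 degrees
        cast_sensor(car, 0.0) / SENSOR_RANGE,           # straight ahead
        cast_sensor(car, math.pi / 4) / SENSOR_RANGE,   # front-right, 45 degrees
    ]

def apply_outputs(car, outputs):
    # First output: positive accelerates, negative brakes.
    # Second output: negative steers left, positive steers right.
    car["throttle"] = outputs[0]
    car["steering"] = outputs[1]
```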
These inputs varied greatly until we settled upon this combination; the second waypoint angle was only added when cars kept failing at wider corners, through not being able to see far enough ahead and not reaching the waypoint quickly enough. It makes sense that the car should be able to see more than directly in front of it. Our controllers are given a path to the destination using an A* pathfinding algorithm, and the controller must work its way along the path, where every tile is a waypoint. The controller can count being next to the appropriate tile as having reached a waypoint, so travel does not need to be that precise. Waypoints can also be skipped altogether: reaching a later waypoint moves the progress along to that point, regardless of other points reached. This is particularly useful if the car loses control and misses a number of waypoints while coming round a corner too fast; by using later waypoints it still receives useful guidance rather than needing to turn around.

B. Perfecting the Fitness

After a number of failed attempts at using just the progress along the defined route as an indication of fitness, we realised we needed a much more specific function. Early attempts saw cars simply driving in one large circle, disregarding the roads, in order to reach the destination. We added collisions to the edges of the roads in order to force the car to stay on them, though we were aware that steps would need to be taken to prevent the usual wall-hugging behaviour often exhibited by AI-controlled robots trying to find their way. This is when we improved the fitness function to minimise collisions whilst en route to the destination. Foolishly, we ended up with cars that did not move at all, instantly being graded 0.5 for not having any collisions. We then changed to a two-part fitness function as used in [13]: if the car failed to reach the destination, its fitness was graded solely on how far along the route it managed to get; if it succeeded in reaching the target, it was instantly given a fitness of 0.5, with the other 0.5 based upon how many collisions it encountered. Using this fitness function we managed to evolve a range of cars that reached the target in our scenario without colliding with the walls at all, quite a difficult task in fact given our physics model and the tightness of some of the corners. However, they managed this at a snail's pace, never gaining much momentum, stopping all acceleration and turning hard into corners to drift round them, very slowly. It worked, but it was not the human-like behaviour we were hoping for. This led to another change to the way we graded the chromosomes, the first 50
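The two-part grading described above might be sketched as follows. The scaling of the progress term and the per-collision penalty are illustrative assumptions, since the text only states that reaching the target is worth 0.5 with the remainder depending on collisions.

```python
def two_part_fitness(reached_target, progress_fraction, collisions,
                     collision_penalty=0.05):
    """Two-part grading: progress only if the car failed, 0.5 plus a
    collision-dependent bonus if it reached the destination.

    progress_fraction is the proportion of the route completed (0..1);
    collision_penalty is an illustrative per-collision deduction.
    """
    if not reached_target:
        # Failed runs can never score more than 0.5 (assumed scaling).
        return 0.5 * progress_fraction
    collision_score = max(0.5 - collision_penalty * collisions, 0.0)
    return 0.5 + collision_score

# A collision-free run that reaches the target scores the maximum of 1.0.
print(two_part_fitness(True, 1.0, 0))   # 1.0
print(two_part_fitness(False, 0.6, 3))  # 0.3
```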

This was good progress, but we had made a specialist. Though there are only really two types of corner in the map, tight right-hand turns and wider left-hand turns, there is much more that needs to be taken into consideration. The car did not perform quite so well on other routes; some it managed okay, but it was not as good as on the route it was trained on. This was partly due to flaws with the inputs used and partly due to how it was trained.

C. Gradual Evolution

Togelius, Lucas and Nardi [10] took an interesting approach that proved successful in their work evolving control networks for simulated cars. They would evolve the population on a certain track and, when the population was deemed fit enough, introduce another track into the training sample. Our domain has a limited set of problems, in that we only really have the two different styles of corner, tight right turn and wide left turn, which reduces the need for different maps to train on. We do have different corners in the sense of corners at the end of a road or corners at a junction, which differ slightly too. Giving the network a reasonably long route to learn in one go was a big task to tackle at once, and it took quite a long time to produce anything of much worth. We decided to adapt the gradual learning approach and break the route down. To begin with, the networks would train on a simple section with just one corner; once they got good at this, another section was added to the route, taking it up to two corners, which covered both types of corner. This continued until we had covered the four sections that composed the initial route and we had a number of individuals that could tackle the whole route reasonably well. This led to some pretty good results: cars could get through the route with a minimal number of collisions, the best being around 5 individual collisions. It was, however, impossible in the current state to improve past this; the cars in fact seemed to rely on these few collisions to get round the corners, almost slamming into the opposite side of some of the tight ones.
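The gradual learning loop described above can be sketched as follows. The fitness threshold used to decide when a population is good enough to earn the next route section, and the helpers passed in as arguments, are illustrative assumptions rather than the exact procedure used here.

```python
def gradual_evolution(population, route_sections, evaluate_on_route,
                      evolve_one_generation, fitness_threshold=0.8,
                      max_generations=500):
    """Train on a growing route, one section at a time.

    route_sections: the sections that compose the full route.
    evaluate_on_route(chromosome, route) -> fitness in 0..1.
    evolve_one_generation(population, fitnesses) -> new population.
    """
    route = []
    for section in route_sections:
        route = route + [section]  # extend the training route
        for _ in range(max_generations):
            fitnesses = [evaluate_on_route(c, route) for c in population]
            if max(fitnesses) >= fitness_threshold:
                break  # population deemed fit enough; add the next section
            population = evolve_one_generation(population, fitnesses)
    return population
```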
D. Perfecting the Behaviour

It would have been possible to devise a fitness function that penalised collisions based on the level of impact; for example, a slight scrape would not be penalised as harshly as a high-velocity side-on collision. Although we never experimented with this, we felt that, whilst it might work in at least reducing how much the cars used the walls, it would be quite a slow process. Instead we decided to continue the gradual learning approach and treat collisions almost like training wheels used to get the cars going in the right direction. In fact we never wanted collisions in the first place, so this served that desire too. The plan was to remove collisions once the cars had become moderately good at completing the full route and, rather than grade them on the number of collisions encountered, score them on the amount of time they spent off-road. A further addition was to allow the Genetic Algorithm to also evolve the sensor lengths individually; in previous tests they had always been fixed at 25 pixels. Individual distances were allowed for each sensor, something once again presented in [10] with the sensor-based racing model. They allowed the sensor distances as well as angles to be evolved, and later let the networks decide how many wall sensors and how many car sensors they had. Their work required all of this sensory data as it was much less reliant upon a provided route, only being given a handful of waypoints along a racing track. We were simply curious what effect, if any, giving away some control of these sensors would have on the resulting chromosomes.

The results after these changes were quite good, in fact quite surprising in some regards. The sensor distances were the most interesting: the forward sensor seemed to stay relatively long, which makes sense, while the two side sensors became much shorter, with the right-hand one seemingly always slightly shorter still. Looking at this it makes sense; the car is trying to stick to the right-hand side of the road and the roads are pretty narrow. The sensors only ever really need to reach the edge of the road, and the shorter they are the better granularity they provide. The difference in side sensor length also allows for better positioning. From observing the networks in action, they seem to position themselves using these sensors; by having a shorter right sensor they sit further towards the right-hand side, which is what the A* path follows. These cars also had much improved cornering, being able to take the corners much tighter, especially now that they did not have a wall to help them round. We were able to evolve individuals that scored greater than 0.97 and only ever slightly drifted off the road with the rear tyres, a much better result than the previous chromosomes that were trained using collisions alone. The resulting chromosomes were solid AI controllers, at least on par with, if not better than, the previous HFSM implementation. The Neural Networks managed to push the car to its limits, navigating without the need to brake for corners; this was never possible with the HFSM implementation, which drove slower and needed to brake quite early on. The physics felt like a hindrance to those earlier controllers, whereas the Neural Network based ones took advantage of it wherever they could.

VI. FUTURE WORK

The controllers developed for this paper were quite basic in their behaviours, completing the task of navigating a route and not much else. In the future it would be good to enhance these behaviours to include things such as the avoidance of other cars, or to increase the challenge of the environment so that the cars need to brake to stand a chance of getting round corners. The behaviour exhibited was not quite as human-like as we would have wished either. It is possible that for future work these networks could be trained against a player model in order to learn more human-like tendencies. These could include braking, as we have mentioned, and much more cautious driving; real drivers do not hit a corner at full momentum or constantly alter their steering direction. In fact gradual evolution could be used here again, where already successful controllers are refined against a player model in order to try and eradicate some of the wilder behaviour exhibited.

VII. CONCLUSION

For a first exploration into implementing such a control system in this domain, we feel the results were definitely promising. The behaviour of the final chromosomes is arguably much better than the initial HFSM implementation, and by using a random assortment of different NNs we would be able to easily create a range of slightly different NPC behaviours with minimal effort. In fact, rather interesting behaviours appeared that we did not expect. For example, if the car's front sensor detected an obstacle too close, such as with the early route that ended in front of a wall, the car would begin to turn away to prevent itself from crashing. Obviously there are downsides to this approach; proper testing would need to be done to ensure that no unexpected behaviour was possible before adding it to a final game. Whilst these networks are computationally quick at processing, they do have a larger memory requirement. However, because the networks have no context other than the inputs passed in, it would be possible for one network to power as many cars as needed, potentially saving memory even when compared to FSMs or other rigid AI controllers. We feel we have also learnt a number of useful techniques for evolving Neural Networks which can be applied to other projects or future work in this domain. The use of gradual increases in complexity during training greatly helped both the generation of good chromosomes and the time it took to find them.

REFERENCES

[1] J. Orkin, "Three states and a plan: The A.I. of F.E.A.R."
[2] D. Isla, "Handling complexity in the Halo 2 AI."
[3] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 1st ed. New Jersey: Prentice-Hall, Inc.
[4] The MathWorks, Inc., "Hyperbolic tangent sigmoid transfer function - MATLAB tansig." [Online].
[5] The MathWorks, Inc., "Hyperbolic tangent - MATLAB tanh." [Online].
[6] J. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Tech. Rep.
[7] M. Mitchell, An Introduction to Genetic Algorithms (Complex Adaptive Systems), new ed. Prentice-Hall, Inc.
[8] K. De Jong, "Analysis of the behaviour of a class of genetic adaptive systems," University of Michigan, Tech. Rep.
[9] D. Loiacono, P. L. Lanzi, J. Togelius, and E. Onieva, "The 2009 simulated car racing championship," IEEE Transactions on Computational Intelligence and AI in Games, vol. 2, no. 1.
[10] J. Togelius and S. M. Lucas, "Evolving controllers for simulated car racing," in Proceedings of the 2005 IEEE Congress on Evolutionary Computation, vol. 2.
[11] J. Togelius, S. Lucas, and R. D. Nardi, "Computational intelligence in racing games," in Advanced Intelligent Paradigms in Computer Games. Springer Berlin, 2007.
[12] Microsoft Research, Applied Group, "Drivatar theory." [Online].
[13] T. Thompson and J. Levine, "Scaling-up behaviours in EvoTanks: Applying subsumption principles to artificial neural networks," in IEEE Symposium on Computational Intelligence and Games, 2008.
