Self Organising Neural Place Codes for Vision Based Robot Navigation

Kaustubh Chokshi, Stefan Wermter, Christo Panchev, Kevin Burn
Centre for Hybrid Intelligent Systems, The Informatics Centre
University of Sunderland, Sunderland, SR6 DD, United Kingdom

Abstract: Autonomous robots must be able to navigate independently within an environment. In the animal brain, so-called place cells respond to the environment the animal is in. In this paper we present a model of place cells based on Self Organising Maps. The aim of this paper is to show how image invariance can improve the performance of the neural place codes and make the model more robust to noise. The paper also demonstrates that localisation can be learned without a pre-defined map being given to the robot by humans, and that after training a robot can localise itself within a learned environment.

I. INTRODUCTION

Ideally, an autonomous robot should have a self-contained system which allows it to adapt and modify its behaviour in all situations it might face. The classical method is to pre-define the internal model of the robot relating it to the external world. With a pre-defined program the robot can navigate only within a highly controlled environment [1]. The external world, however, is very complex and therefore often unpredictable [2].

There are various ways to address the problem of localisation. The most common approach is to ignore localisation errors [3]. This has the advantage of being simple, but a major disadvantage is that it cannot be used as a global planning method. To overcome this problem, another technique updates the robot's location using bar codes and laser sensors placed in the environment, thus giving it an explicit location. This method is motivated purely by the "go until you get there" philosophy [3]. Another approach is to use topological maps, where symbolic information is used for localisation at certain points, for instance gateways [4], [5], [6]. With gateways as a navigational strategy the robot can change its direction, for instance at intersections of hallways; the gateways are therefore critical for localisation, path planning and map making. The main disadvantage is that it is very hard to find unique gateways. A further technique is to match raw sensor data to an a priori map [3]. The sensors used are usually distance sensors such as sonar, infrared or laser. The problem here is that sensor data rarely comes without noise, so this approach only works in highly restricted environments. It is used in certain industrial robots which operate in highly controlled environments, but it fails in dynamic environments. This problem can be overcome by generating small local maps and then integrating them into larger maps, which in turn motivates the need for good map building [3].

In this paper we present a model of place cells that learns to form landmarks for helping the robot to navigate. In order to generate an internal representation, we use vision to identify landmarks. This paper focuses on a type of landmark localisation which depends on extracting novel features from the environment in order to localise the robot. Landmarks can be defined as distinguishable input patterns; in other words, each category of input activates different output neurons. In our approach we have developed Self Organising Maps (SOMs) which classify the input images for localisation.
One of the major problems solved by the visual system in the cerebral cortex is the building of a representation of visual information which allows recognition to occur relatively independently of size, contrast, spatial frequency, position on the retina, angle of view, etc. Although neural networks generalise well, the type of generalisation they show naturally is to vectors which have a high correlation with what they have already learned [7]. If the image coming into the retina is shifted by just a few pixels, the same neuron in the SOM's map will not respond even though it is the same pattern; instead a neighbouring neuron responds to it. Therefore neurons for the same landmark become clustered on the SOM. The self organising map does not grow with the number of landmarks, so in order to represent more landmarks it is necessary to reduce the size of the clusters that are formed on the map. This creates the need to compute invariant pattern representations as input to the map. Invariant pattern representation is a form of preprocessing performed in the striate cortex (V1) of primates [8]. As our aim is to have smaller clusters of place codes on the self organising map, our focus is not on size-invariant landmarks, because landmarks seen from different distances would have different sizes in the visual field; we focus on transform-invariant processing. Our model is based on associative memory for template matching and template alignment. We demonstrate that neural localisation based on self organising place codes holds a lot of potential for learning robot navigation.
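The following small NumPy sketch (our own illustration, not code from the paper) makes the shift-sensitivity problem concrete: for a raw pixel vector, a view translated by only two pixels can be almost as far, in Euclidean terms, from the original as a completely unrelated view, so a different map neuron would win the SOM competition. The 17 x 27 toy image size follows the input dimensions used later, but as a single channel only.

```python
# Illustration of why raw pixel input is not shift invariant (an assumption-
# laden toy example, not the authors' data or code).
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((17, 27))                 # a toy single-channel "retinal" view
shifted = np.roll(image, shift=2, axis=1)    # the same view shifted by two pixels
unrelated = rng.random((17, 27))             # a completely different view

d_shift = np.linalg.norm(image - shifted)
d_other = np.linalg.norm(image - unrelated)
print(f"distance to shifted copy: {d_shift:.2f}, to unrelated view: {d_other:.2f}")
# For such random textures the two distances are of the same order, which is
# why an invariance stage is needed before the place-code SOM.
```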

II. ARCHITECTURE

Despite the progress made in the fields of AI and robotics, robots today remain inferior to humans and animals in terms of performance [9]. One main reason for this may be that robots do not possess the neural capabilities of the brain. Human and animal brains adapt well to diverse environments, whereas artificial neural networks are usually limited to a controlled environment and also lack the advantage of millions of neurons working in true parallelism. Our approach is to try to emulate the neural visual navigation functions present in human and animal brains. The main emphasis of this research is to build a robot with goal-based navigation, with the objective that it can learn and move autonomously. The central focus is on natural vision for navigation based on neural place codes. This section summarises the primary algorithm and the principles underlying our approach.

A. Overall Architecture of our Model

In our architecture there are various neural networks responsible for different aspects of navigation. Navigation is often based on sensor fusion: it cannot depend on just one sensor input [10], [11], and humans and animals use various senses to navigate [3], [12]. We primarily use vision as the global navigation strategy, while for reactive, local navigation behaviour we employ sonar and infrared. Our model of the navigation system therefore consists of several neural network units. We use SOMs for the visual landmarks, which enables the robot to generate its own landmarks from the most salient features in the visual field. A primitive visual landmark allows us to implement simple visually-based behaviour. Our approach is based on functional units, each of which uses a neural network. An overview of the different functional units can be seen in figure 1.

In figure 1, visual information derivation is the module responsible for getting the images from the robot's camera. The image preprocessing in the localisation module is responsible for normalising the images for the network. Our invariance processing (shown in more detail in figure 2), part of the localisation module, makes use of associative memory for transform invariance and of pattern completion for noise reduction. The invariance and pattern completion modules are based on a Multi-Layered Perceptron (MLP) (figure 2), the output of which forms the input to the SOM. Self-localisation and target representation are based on SOMs. Together these make up the localisation module, which is responsible for the localisation of the robot in the environment.

Goals or targets for navigation can be given by language, action, desire, etc. In our model we assume that the goal is triggered by language instructions. Once the language instruction is received, the recall of the goal is triggered and an internal representation of this goal is generated in the prefrontal cortex, or working memory, of the brain. As this whole process is not within the scope of this paper, the goal is given as an image of where the robot is desired to be. This image is then the target representation of the architecture.

Effective navigation depends on the representation of the world the robot is using [3]. In our architecture the world representation is part of the spatial representation module. This module provides the path planning module with the necessary information from the localisation module and the visual target module.
It maps both the current location and the location of the target into the same map, which enables the path planning module to choose the best pathway. There are various ways in which the robot can be instructed as to where its navigation target is; we are currently experimenting with how to translate the place code output and the target representation into a spatial representation. The path planning module provides output to the motors. This forms the global navigation strategy. The local navigation strategy is implemented using reactive behaviour. Both the global and the local navigation strategies are combined in the navigation strategy module, which is mostly responsible for choosing motor commands from either the local or the global behaviours. Accordingly, it takes the output from either the global or the local navigation strategy and transforms it into motor control commands.

B. Place Code Localisation based on Invariant Images

Hippocampal pyramidal cells called place cells have been discovered that fire when an animal is at a certain location in its environment. In our model, place cells can provide candidate locations to the path integrator, and place cells can localise the robot in a familiar environment. Self-localisation in animals or humans often refers to the internal model of the world outside. As seen in a water maze experiment [13], a rodent which was not given any landmarks could still reach its goal by forming its own internal representation of landmarks of the world outside. Humans and animals can create their own landmarks, depending on the firing of place cells; these cells change their firing patterns when prominent landmarks are removed from the environment. With this evidence from computational neuroscience [14], [15], [16], it is reasonable to assume that place cells could provide an efficient way of localisation using vision.

Self Organising Map (SOM) networks [17] learn to categorise input patterns and to associate them with different output neurons or sets of output neurons. Each neuron $j$ is connected to the input through a synaptic weight vector $w_j = [w_{j1} \ldots w_{jm}]^T$. At each iteration, the SOM [17] finds a winning neuron $v$ by minimising

$$v(x) = \arg\min_j \lVert x(t) - w_j \rVert, \quad j = 1, 2, \ldots, n \qquad (1)$$

where $x$ belongs to an $m$-dimensional input space and $\lVert \cdot \rVert$ is the Euclidean distance. The update of the synaptic weight vector follows

$$w_j(t+1) = w_j(t) + \alpha(t)\, h_{j,v(x)}(t)\, [x(t) - w_j(t)], \quad j = 1, 2, \ldots, n \qquad (2)$$

where $\alpha(t)$ is the learning rate and $h_{j,v(x)}(t)$ is the neighbourhood function centred on the winning neuron.
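The NumPy sketch below illustrates equations (1) and (2). The Gaussian neighbourhood, the exponential decay schedules and the 10 x 10 map size are our own assumptions for illustration and are not specified in the paper.

```python
# Minimal sketch of the SOM competition (Eq. 1) and weight update (Eq. 2).
import numpy as np

def som_step(x, W, grid, t, alpha0=0.5, sigma0=3.0, tau=1000.0):
    """One SOM iteration for input x (shape m) and weights W (shape n x m).

    grid holds the 2D map coordinate of each of the n neurons (shape n x 2).
    """
    # Eq. (1): the winner v(x) minimises the Euclidean distance ||x(t) - w_j||.
    v = int(np.argmin(np.linalg.norm(x - W, axis=1)))

    # Assumed exponentially decaying learning rate alpha(t) and neighbourhood width.
    alpha = alpha0 * np.exp(-t / tau)
    sigma = sigma0 * np.exp(-t / tau)

    # Gaussian neighbourhood h_{j,v(x)}(t) around the winner on the map grid.
    d2 = np.sum((grid - grid[v]) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigma ** 2))

    # Eq. (2): w_j(t+1) = w_j(t) + alpha(t) h_{j,v(x)}(t) [x(t) - w_j(t)].
    W += alpha * h[:, None] * (x - W)
    return v, W

# Usage: a 10 x 10 map trained on random 1377-dimensional vectors
# (17 x 27 x 3 flattened pixels), purely to show the shapes involved.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    side, m = 10, 17 * 27 * 3
    grid = np.array([(i, j) for i in range(side) for j in range(side)], float)
    W = rng.random((side * side, m))
    for t, x in enumerate(rng.random((200, m))):
        v, W = som_step(x, W, grid, t)
```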

Fig. 1. Overall architecture of the visual navigation strategy of the robot, showing the flow of information through the model: visual information derivation, localisation module, visual target, spatial representation, path planning, local navigation, navigation strategy and motor control.

Fig. 2. An overview of the localisation module in our model: retina, image pre-processing, autoassociative memory, place codes based on SOMs, visual target (goal) and spatial representation. The image pre-processing is responsible for getting the images from the robot camera and resizing them. The associative memory is responsible for image-invariant processing. Localisation of the robot is done by the place codes. The visual target represents the main goal for the navigation. The spatial representation takes the activations from both neural network regions and represents them in the environmental space.

This classification is based on features extracted from the environment by the network. Feature detectors are neurons that respond to correlated combinations of their inputs; these are the neurons that give us symbolic representations of the world outside. In our experiments, once we obtain symbolic representations of the features in the environment, we use them to localise the robot. The sparsification performed by competitive networks is very useful for preparing signals for presentation to pattern associators and autoassociators, since this representation increases the number of patterns that can be associated or stored in such networks [8], [18]. Although the algorithm is simple, its convergence and accuracy depend on the selection of the neighbourhood function, the topology of the output space, the scheme for decreasing the learning rate parameter, and the total number of neuronal units [19].

An important property of the self organising map is feature discovery. Each neuron in a self-organising map becomes activated by a set of consistently coactive input stimuli and gradually learns to respond to that cluster of coactive inputs. We can therefore think of self organising maps as performing feature discovery in the input space, where the features in the input stimuli are defined as consistently coactive inputs. Self organising maps thus show how feature analysers can be built without any external teacher [8]. This is a very important property for place cells, as the place cells have to respond to unique features or landmarks in the input space in order to localise the robot.

The removal of redundancy is thought to be a key aspect of how the visual system operates [8], [18]. Our hybrid model of neural networks also reduces the dimensionality of the input vector, in our case the pixels of the input image. The removal of redundancy is done by an invariance layer. An image is represented by the activation of neurons, which forms the input to the place codes based on the Self Organising Map.

III. EXPERIMENTS AND RESULTS

A. Experimental Setup

The experiments were conducted on a Khepera robot (figure 3). The robot was in a cage, which was divided into four parts: north, south, east and west. The cage was further divided into 1 cm x 1 cm grids. These grids were only used for the purpose of calculating the error of the place cells.

Fig. 3. The Khepera robot used for the experimentation.

All the landmarks were placed against the walls of the cage. Cubes and pyramids with different colour codes were spread randomly along the walls of the cage. The cage was divided into small squares of 1 cm, and each square represents a place code. Each cell was given a name depending on where it was located; for example, a cell in the southern hemisphere within the eastern block would have a name such as se1. The naming convention was simple: the first letter represents the hemisphere, the second letter the block, and the numbers the x and y co-ordinates. This information was purely for our own use, to test the results and to set up the experiments; it was not provided to the robot. For training purposes, 4 images were taken from each place code. For testing, there were 1 images from each place code in the environment.

Fig. 4. The cage in which the robot was placed, divided into north, south, east and west regions; the marked area shows the movement of the Khepera robot. The robot was allowed to move only in a limited space, as we did not want it to be too close to nor too far away from the landmarks. The area of movement of the robot was divided into smaller grids of 1 cm x 1 cm, giving an approximate position of the robot.

The stimuli used for training and testing our model in this paper were specially constructed to investigate the performance on the localisation problem using self organising maps. To train the network, a sequence of 2 images was presented to represent over 3 locations in the cage. At each presentation the winning neuron was selected and the weight vector of the winning neuron was updated along with the distance vector. The presentation of all the stimuli across all the landmarks constitutes one epoch of training. In this manner the networks were trained. The invariance and place code networks were trained in a sequential manner: first the invariance layer was trained and then the neural place code layer.

TABLE I
DIMENSIONS FOR OUR HYBRID NEURAL MODEL FOR LOCALISATION BASED ON MLP AND SOM

Layer                  Dimensions     No. of Connections
Input Layer            17 x 27 x 3    -
Invariant Layer 1      17 x 27 x 3    17 x 27 x 3
Invariant Layer 2      7              full connections
Invariant Layer 3      12             full connections
Invariant Layer 4      17 x 27 x 3    full connections
SOM Place Code Layer   x              17 x 27 x 3

B. Clustering of Place Codes

The basic property of a SOM network is to form clusters of related information, in our case landmarks. A cluster is a collection of neurons next to each other that represent the same landmark. Figures 5(a) and (b) show that when the robot was approaching the desired landmark, there were activations in the neighbouring neurons. This is due to the clustering of similar images around the landmark. Multiple similar images are represented by a single neuron, making the cluster smaller and richer in information; this is achieved with the invariance module. On the other hand, figure 5(d) shows a landmark which was at a distance from the locations represented in figures 5(a) and (b). The two landmarks (nw1 and ne2) that were presented to the robot at a distance from each other are mapped not only into different clusters but also distant from each other on the map. By their very definition, landmarks are features in the environment, and this is the reason behind the formation of these clusters by the SOM.
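As an illustration of how such clusters, and the cluster sizes compared later in figure 6, could be quantified, the sketch below labels each SOM neuron with the place code it wins most often and counts the neurons per place code. This is our own reconstruction of a plausible analysis, not the authors' code; `W` is assumed to hold the trained SOM weight vectors.

```python
# Assign each SOM neuron a place code label and measure cluster sizes.
import numpy as np
from collections import Counter, defaultdict

def cluster_sizes(images, labels, W):
    """images: (N, m) array; labels: N place code names; W: (n, m) trained SOM weights."""
    votes = defaultdict(Counter)
    for x, label in zip(images, labels):
        winner = int(np.argmin(np.linalg.norm(x - W, axis=1)))
        votes[winner][label] += 1
    # Each neuron is assigned the place code it responds to most often ...
    neuron_label = {j: c.most_common(1)[0][0] for j, c in votes.items()}
    # ... and the number of neurons per place code is the cluster size.
    return neuron_label, Counter(neuron_label.values())
```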
The landmarks that were chosen by the SOM were quite significant in the image and distinguished features from the rest of the environment and from other landmarks.

C. Reduction in Cluster Size

It was observed in [7] that the main cause of large clusters of place codes was the Self Organising Map trying to handle transform invariance by having the neighbouring neurons respond to the transformed patterns. With the use of associative memory for transform invariance, the size of the clusters is reduced: the Self Organising Map no longer represents the invariance, but rather the place codes. Images were collected at regular frame intervals, with approximately half a second between consecutive images. This causes a large amount of overlap and a large amount of transform invariance between images. The associative memory clustered the similar images and reduced the transform invariance, so the number of neurons per location could be reduced (figure 6); fewer neurons are required to represent the same location when there is a shift in the images. This also has the additional benefit that the Self Organising Map can now represent more place codes without actually growing or increasing the size of the map.
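The associative memory used for this invariance stage is described only at the architectural level here; the NumPy sketch below shows one minimal way such an MLP autoassociator could be realised, trained to reproduce its input so that shifted or noisy views of the same landmark are mapped towards a common pattern before being passed to the SOM. The single hidden layer, its size, the learning rate and the number of epochs are our own assumptions.

```python
# Minimal autoassociative MLP sketch (our assumption, not the authors' network).
import numpy as np

def train_autoassociator(X, hidden=128, epochs=200, lr=0.01, seed=0):
    """X: (N, m) training images with values scaled to [0, 1]. Returns (W1, W2)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W1 = rng.normal(0, 0.01, (m, hidden))
    W2 = rng.normal(0, 0.01, (hidden, m))
    for _ in range(epochs):
        H = np.tanh(X @ W1)          # hidden code
        Y = H @ W2                   # linear reconstruction of the input
        err = Y - X                  # gradient of the mean-squared error
        gW2 = H.T @ err / n
        gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

def reconstruct(x, W1, W2):
    """Pass one image through the autoassociator before it reaches the SOM."""
    return np.tanh(x @ W1) @ W2
```

In the architecture of figure 2 it is this autoassociator output, rather than the raw image, that forms the input vector to the place code SOM.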

Fig. 5. (a) Activation in the region of NE2. (b) Activation in the region of NE3. (c) Activation of both regions NE2 and NE3. (d) Activation of NW1. Neighbouring regions are place code neighbours of each other. There is also a clear overlap between the regions of (a) and (b); the reason for the overlap is that the robot was between the two locations and could see both prominent landmarks.

D. Performance of the Network

To test the performance of the network, we added white noise to the images, with mean deviation values up to 0.8 and a variance of 0.1. The effects of the noise on the images can be seen in figure 7. The aim of the neural network is to localise the robot in its environment. As the place cells are based on a SOM, a cluster of neurons is responsible for each place code in the environment, and the neuron responding to that place code gives more precise coordinates than its neighbouring neurons. There are various reasons why robust localisation is needed: with increasing noise the robot could easily become lost [7], and at such times an approximate coordinate helps the robot to localise itself. We therefore consider two cases of localisation under noise: first, the specific neuron responsible for the robot's location responding, and second, another neuron in the same cluster responding to the same place. In the latter case the localisation is less accurate.

As seen in figure 8, the clusters are more robust to noise than single neurons. As the amount of noise increases, the response of the neurons or clusters used for localisation becomes blurred. Nevertheless the network performance is quite good: up to moderate levels of noise the localisation error per neuron remains below 3%, and the error per cluster of place codes is lower still, below 2%.

The noise handling of the networks was further improved by adding an additional layer of associative memory below the place codes, which reduces the noise before the visual input is given to the neural place code layer. Figure 8 shows that the associative memory helps the place cells perform better, giving a lower error per neuron. As the noise level increased above 7%, the performance of the network with associative memory degraded; this was mainly observed at higher noise levels. With noise of 7% there was little difference in performance with or without the associative memory, and at noise levels above 8% the noise handling of the associative memory failed, reducing the place code performance below that of the network without this layer. At the same time, with 8% noise the performance of the SOM itself is essentially random, as can be expected. For our experiments we do not expect more than 3% noise, which would mostly be caused by interference in the wireless signal. Even if the noise level for a few frames is higher than 3%, the network can still localise itself from the following input frames coming from the robot's retina. Overall, with the use of the associative memory the localisation performance improved.
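The sketch below shows one way the noise test just described could be scored, reporting error per neuron (the exact winning neuron must match the clean-image winner) and error per cluster (any neuron labelled with the correct place code may win). A zero-mean Gaussian with standard deviation `sigma` stands in for the paper's mean-deviation/variance parameterisation, and `neuron_label` is the neuron-to-place-code assignment from the earlier cluster sketch; both are our assumptions.

```python
# Hedged sketch of the noise-robustness evaluation, not the authors' test code.
import numpy as np

def noise_test(images, labels, W, neuron_label, sigma, seed=0):
    """Return (error rate per neuron, error rate per cluster) at noise level sigma."""
    rng = np.random.default_rng(seed)
    # Winners on the clean images define the "correct" neuron for each test image.
    clean_winners = [int(np.argmin(np.linalg.norm(x - W, axis=1))) for x in images]
    neuron_err = cluster_err = 0
    for x, label, ref in zip(images, labels, clean_winners):
        noisy = np.clip(x + rng.normal(0.0, sigma, size=x.shape), 0.0, 1.0)
        winner = int(np.argmin(np.linalg.norm(noisy - W, axis=1)))
        neuron_err += winner != ref                         # exact neuron must match
        cluster_err += neuron_label.get(winner) != label    # any neuron with the right place code
    n = len(images)
    return neuron_err / n, cluster_err / n
```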

Fig. 6. Cluster sizes, i.e. the number of neurons per cluster representing a particular location of the robot, for each place code (NE, NW, SE and SW regions), before and after the use of the autoassociative memory in front of the place code map. There is a drastic reduction in the size of the clusters, which makes it possible to have more place codes on a map of the same size.

Fig. 7. Different noise levels added to the image: (a) the image without any noise; (b) the image with added noise of the stated mean deviation.

Fig. 8. Noise handling by the neural networks: percentage of localisation error against the mean deviation of the added noise, shown as error per neuron, error per cluster, and error per neuron with associative memory.

IV. CONCLUSIONS AND FUTURE WORK

In this paper we have described a place cell model based on a SOM for navigation. The model was successful in learning the locations of the landmarks even when tested with distorted images. Visual landmarks were associated with locations in a controlled environment. The model clusters neighbouring landmarks next to each other, and landmarks that are far away from each other in the environment are also relatively far from each other on the map. Rather than relying on predefined localisation algorithms as internal modules, our SOM architecture demonstrates that localisation can be learnt by a robust model based on external hints from the environment. In future the model will be extended and implemented on a PeopleBot robot so that it can localise itself within a corridor.

ACKNOWLEDGMENT

The authors would like to thank Dr. Harry Erwin for his help and insight into navigation and place cells, and Chris Rowan for his help in collecting data for training purposes.

REFERENCES

[1] U. Nehmzow, Mobile Robotics: A Practical Introduction. London: Springer-Verlag, 2000.
[2] A. Arleo and W. Gerstner, "Spatial cognition and neuro-mimetic navigation: A model of hippocampal place cell activity," Biological Cybernetics, Special Issue on Navigation in Biological and Artificial Systems, vol. 83, 2000.
[3] R. R. Murphy, Introduction to AI Robotics. London, England: The MIT Press, 2000.
[4] P. Bonasso and R. Murphy, eds., Artificial Intelligence and Mobile Robotics: Case Studies of Successful Robot Systems. London: The MIT Press / AAAI Press, 1998.
[5] S. Thrun, M. Beetz, M. Bennewitz, W. Burgard, A. Cremers, F. Dellaert, D. Fox, D. Hähnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz, "Probabilistic algorithms and the interactive museum tour-guide robot Minerva," International Journal of Robotics Research, vol. 19, no. 11, 2000.
[6] W. Burgard, A. Cremers, D. Fox, D. Hähnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun, "Experiences with an interactive museum tour-guide robot," Artificial Intelligence, vol. 114, no. 1-2, pp. 3-55, 1999.
[7] K. Chokshi, S. Wermter, and C. Weber, "Learning localisation based on landmarks using self organisation," in Proceedings of the International Conference on Artificial Neural Networks (ICANN 2003), 2003.
[8] E. Rolls and G. Deco, Computational Neuroscience of Vision. New York: Oxford University Press, 2002.
[9] J. Ng, R. Hirata, N. Mundhenk, E. Pichon, A. Tsui, T. Ventrice, P. Williams, and L. Itti, "Towards visually-guided neuromorphic robots: Beobots," in Proc. 9th Joint Symposium on Neural Computation (JSNC 2002), Pasadena, California, 2002.
[10] J. A. Castellanos, J. Neira, and J. Tardós, "Multisensor fusion for simultaneous localisation and map building," IEEE Transactions on Robotics and Automation, vol. 17, 2001.
[11] S. Martens, C. A. Gail, and P. Gaudiano, "Neural sensor fusion for spatial visualisation on a mobile robot," in Proceedings of SPIE, Sensor Fusion and Decentralised Control in Robotic Systems (P. S. Schenker and G. T. McKee, eds.).
[12] R. A. Brooks, Cambrian Intelligence: The Early History of the New AI. Cambridge, Massachusetts: The MIT Press, 1999.
[13] A. D. Redish, Beyond the Cognitive Map: From Place Cells to Episodic Memory. London: The MIT Press, 1999.
[14] A. Arleo, Spatial Learning and Navigation in Neuro-Mimetic Systems, Modeling the Rat Hippocampus. PhD thesis, Swiss Federal Institute of Technology Lausanne (EPFL), Switzerland, 2000.
[15] A. D. Redish and D. S. Touretzky, "Navigating with landmarks: computing goal locations from place codes."
[16] E. Rolls and A. Treves, Neural Networks and Brain Function. New York: Oxford University Press, 1998.
[17] T. Kohonen, Self-Organizing Maps. Springer Series in Information Sciences, Berlin, Germany: Springer-Verlag, 3rd ed., 2001.
[18] S. Wermter, J. Austin, and D. Willshaw, eds., Emergent Neural Computational Architectures Based on Neuroscience. Heidelberg: Springer, 2001.
[19] M. Haritopoulos, H. Yin, and N. M. Allinson, "Image denoising using self-organising map-based non-linear independent component analysis," Neural Networks, Special Issue on New Developments in Self Organising Maps, vol. 15, October-November 2002.


More information

Surveillance and Calibration Verification Using Autoassociative Neural Networks

Surveillance and Calibration Verification Using Autoassociative Neural Networks Surveillance and Calibration Verification Using Autoassociative Neural Networks Darryl J. Wrest, J. Wesley Hines, and Robert E. Uhrig* Department of Nuclear Engineering, University of Tennessee, Knoxville,

More information

Collaborative Multi-Robot Exploration

Collaborative Multi-Robot Exploration IEEE International Conference on Robotics and Automation (ICRA), 2 Collaborative Multi-Robot Exploration Wolfram Burgard y Mark Moors yy Dieter Fox z Reid Simmons z Sebastian Thrun z y Department of Computer

More information

State Estimation Techniques for 3D Visualizations of Web-based Teleoperated

State Estimation Techniques for 3D Visualizations of Web-based Teleoperated State Estimation Techniques for 3D Visualizations of Web-based Teleoperated Mobile Robots Dirk Schulz, Wolfram Burgard, Armin B. Cremers The World Wide Web provides a unique opportunity to connect robots

More information

Complex-valued neural networks fertilize electronics

Complex-valued neural networks fertilize electronics 1 Complex-valued neural networks fertilize electronics The complex-valued neural networks are the networks that deal with complexvalued information by using complex-valued parameters and variables. They

More information

Salient features make a search easy

Salient features make a search easy Chapter General discussion This thesis examined various aspects of haptic search. It consisted of three parts. In the first part, the saliency of movability and compliance were investigated. In the second

More information

ARTIFICIAL NEURAL NETWORKS FOR INTELLIGENT REAL TIME POWER QUALITY MONITORING SYSTEM

ARTIFICIAL NEURAL NETWORKS FOR INTELLIGENT REAL TIME POWER QUALITY MONITORING SYSTEM ARTIFICIAL NEURAL NETWORKS FOR INTELLIGENT REAL TIME POWER QUALITY MONITORING SYSTEM Ajith Abraham and Baikunth Nath Gippsland School of Computing & Information Technology Monash University, Churchill

More information

Lecture 4 Foundations and Cognitive Processes in Visual Perception From the Retina to the Visual Cortex

Lecture 4 Foundations and Cognitive Processes in Visual Perception From the Retina to the Visual Cortex Lecture 4 Foundations and Cognitive Processes in Visual Perception From the Retina to the Visual Cortex 1.Vision Science 2.Visual Performance 3.The Human Visual System 4.The Retina 5.The Visual Field and

More information

COMPARATIVE PERFORMANCE ANALYSIS OF HAND GESTURE RECOGNITION TECHNIQUES

COMPARATIVE PERFORMANCE ANALYSIS OF HAND GESTURE RECOGNITION TECHNIQUES International Journal of Advanced Research in Engineering and Technology (IJARET) Volume 9, Issue 3, May - June 2018, pp. 177 185, Article ID: IJARET_09_03_023 Available online at http://www.iaeme.com/ijaret/issues.asp?jtype=ijaret&vtype=9&itype=3

More information

Dimension Recognition and Geometry Reconstruction in Vectorization of Engineering Drawings

Dimension Recognition and Geometry Reconstruction in Vectorization of Engineering Drawings Dimension Recognition and Geometry Reconstruction in Vectorization of Engineering Drawings Feng Su 1, Jiqiang Song 1, Chiew-Lan Tai 2, and Shijie Cai 1 1 State Key Laboratory for Novel Software Technology,

More information

Student Attendance Monitoring System Via Face Detection and Recognition System

Student Attendance Monitoring System Via Face Detection and Recognition System IJSTE - International Journal of Science Technology & Engineering Volume 2 Issue 11 May 2016 ISSN (online): 2349-784X Student Attendance Monitoring System Via Face Detection and Recognition System Pinal

More information