Review Analysis of Pattern Recognition by Neural Network

Soni Chaturvedi, Electronics & Comm Engg, P.I.E.T, Nagpur, India (soni2569@gmail.com)
A. A. Khurshid, Electronics & Comm Engg, RCOEM, Nagpur, India (aakhurshid@gmail.com)
Meftah Boudjelal, Dept. of Computer Science, University of Mascara, Mascara, Algeria (meftahb@yahoo.fr)

Abstract: Pattern recognition assigns a label to a given input image on the basis of the class to which the input belongs. A pattern could be a fingerprint image, a handwritten cursive word, a human face, or a speech signal. In this paper we analyze the back propagation algorithm and the feed forward architecture used for recognizing patterns. We also implement the leaky integrate-and-fire (LIF) neuron model, which belongs to the category of spiking neural networks.

Keywords: Back Propagation Algorithm, Feed Forward Algorithm, LIF Model, Spiking Neural Network.

I. INTRODUCTION

Pattern recognition assigns a label to a given input image on the basis of the class to which the input belongs. A pattern could be a fingerprint image, a handwritten cursive word, a human face, or a speech signal. Given a pattern, its recognition/classification may consist of one of two tasks: 1) supervised classification (e.g., discriminant analysis), in which the input pattern is identified as a member of a predefined class; 2) unsupervised classification (e.g., clustering), in which the pattern is assigned to a hitherto unknown class. Thus, pattern recognition is a popular application that enables human-like perception to be acquired by machines.

Neural networks possess the capability of pattern recognition. Researchers have reported various neural network models capable of pattern recognition, models that have the function of self-organization and can learn to recognize patterns. Recognition proceeds in two stages: in the training stage (approximation), the neural network extracts the features of the input data [1]; in the recognizing stage (generalization), the network distinguishes the pattern of the input data by those features, and the result is greatly influenced by the hidden layers.

Neural-network learning can be specified as a function approximation problem, where the goal is to learn an unknown function (or a good approximation of it) from a set of input-output pairs. Every instance in any dataset used by machine learning algorithms is represented using the same set of features. The features may be continuous, real-coded, categorical, or binary. If instances are given with known labels (the corresponding correct outputs), the learning is called supervised, in contrast to unsupervised learning, where instances are unlabeled.

In this paper we consider a data set of alphabetic characters, a task for which various neural-network-based algorithms are in use. We analyze the back propagation algorithm and the feed forward architecture used for recognizing patterns, and we implement the leaky integrate-and-fire neuron model, which belongs to the category of spiking neural networks [1].

II. NEURAL BACKGROUND

A. Artificial Neural Network

A neural network is an interconnection of many small processing units called neurons (or neurodes). An artificial neural network is an adaptive mathematical model, or computational structure, designed to simulate a system of biological neurons transferring information.
The main characteristics of neural networks are that they can learn complex nonlinear input-output relationships, use sequential training procedures, and adapt themselves to the data [2]. An Artificial Neural Network (ANN), usually called simply a neural network (NN), is a mathematical or computational model inspired by the structure and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation (Figure 1). In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase [2].

Figure 1. Artificial Neural Network structure.
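To make the neuron model concrete, the following minimal sketch (in Python; not from the paper, and the weights, inputs, and sigmoid activation are illustrative assumptions) computes the output of one artificial neuron as a weighted linear combination of its inputs passed through an activation function:

```python
import math

def neuron_output(inputs, weights, bias):
    """Output of one artificial neuron: activation(w . x + b)."""
    # Weighted linear combination of the inputs.
    a = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Sigmoid activation squashes the sum into (0, 1).
    return 1.0 / (1.0 + math.exp(-a))

# Illustrative values only.
print(neuron_output([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```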

B. Spiking Neural Networks

Spiking Neural Networks (SNNs) fall into the third generation of neural network models, increasing the level of realism in a neural simulation. In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in an SNN do not fire at each propagation cycle (as happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential (an intrinsic quality of the neuron related to its membrane electrical charge) reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal [3].

A spiking neural network model has been used to identify characters in a character set. The network is a two-layered structure consisting of integrate-and-fire and active dendrite neurons, with both excitatory and inhibitory connections, and spike-time-dependent plasticity (STDP) is used for training. Most of the characters in a 48-character set are found to be recognized. Figures 2 and 3 show the results of character recognition along with the data set used [4].

Figure 2. Output when each character is presented individually.
Figure 3. Output when characters are presented in the following order: C, D, A, B, C, D [3].
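As a minimal illustration of this fire-and-propagate behaviour (a Python sketch; the threshold, reset-to-zero rule, and weight matrix are illustrative assumptions, not the trained network of [4]), each neuron accumulates potential and, on crossing its threshold, sends a weighted signal that raises or lowers the potentials of the other neurons:

```python
THRESHOLD = 1.0

def step(potentials, inputs, weights):
    """One propagation step: integrate inputs, fire, propagate spikes.

    potentials: current membrane potential of each neuron
    inputs:     external input to each neuron on this step
    weights:    weights[i][j] = synaptic weight from neuron i to neuron j
                (positive = excitatory, negative = inhibitory)
    """
    n = len(potentials)
    fired = []
    for i in range(n):
        potentials[i] += inputs[i]
        if potentials[i] >= THRESHOLD:
            fired.append(i)
            potentials[i] = 0.0  # reset after the spike
    # Each spike travels to the other neurons and shifts their potentials.
    for i in fired:
        for j in range(n):
            if j != i:
                potentials[j] += weights[i][j]
    return fired
```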
C. Integrate and Fire Model

The leaky integrate-and-fire neuron is probably the best-known example of a formal spiking neuron model [3]. All integrate-and-fire neurons can be stimulated either by an external current or by synaptic input from presynaptic neurons.

Figure 4. Schematic diagram of the integrate-and-fire model [3]. The basic circuit is the module inside the dashed circle on the right-hand side: a current I(t) charges the RC circuit, and the voltage u(t) across the capacitance is compared to a threshold; if u(t) reaches the threshold at some time t^(f), an output pulse is generated. On the left, a presynaptic spike is low-pass filtered at the synapse and generates an input current pulse.

The basic circuit of an integrate-and-fire model consists of a capacitor C in parallel with a resistor R, driven by a current I(t). The driving current can be split into two components, I(t) = I_R + I_C. The first component is the resistive current I_R, which passes through the linear resistor R and can be calculated from Ohm's law as I_R = u/R, where u is the voltage across the resistor.
The second component I_C charges the capacitor C. From the definition of the capacitance as C = q/u (where q is the charge and u the voltage), we find a capacitive current I_C = C du/dt. Thus

    I(t) = u(t)/R + C du/dt.   (1)

We multiply (1) by R and introduce the time constant τ_m = RC of the 'leaky integrator'. This yields the standard form

    τ_m du/dt = -u(t) + R I(t).   (2)

We refer to u as the membrane potential and to τ_m as the membrane time constant of the neuron.
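The following sketch (Python; the parameter values, threshold, and reset-to-zero rule are illustrative assumptions) integrates equation (2) with the forward Euler method and emits a spike whenever u(t) crosses the threshold:

```python
def simulate_lif(I, T_ms, dt=0.1, R=1.0, tau_m=10.0, threshold=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    Integrates tau_m * du/dt = -u(t) + R * I(t)   (equation 2)
    with forward Euler; fires and resets when u reaches the threshold.
    I is a function of time (ms) returning the input current.
    Returns the list of spike times.
    """
    u = 0.0
    spike_times = []
    for k in range(int(T_ms / dt)):
        t = k * dt
        u += dt / tau_m * (-u + R * I(t))  # Euler step of eq. (2)
        if u >= threshold:
            spike_times.append(t)          # action potential (spike)
            u = 0.0                        # reset membrane potential
    return spike_times

# Constant input current of 1.5 (illustrative) applied for 100 ms.
spikes = simulate_lif(lambda t: 1.5, T_ms=100.0)
print(len(spikes), "spikes; firing rate =", len(spikes) / 100.0, "spikes/ms")
```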
III. SYSTEM OVERVIEW

A. Backpropagation Algorithm for Pattern Recognition [4]

Backpropagation learning emerged as the most significant result in the field of artificial neural networks. Backpropagation learning involves propagating the error backwards from the output layer to the hidden layers in order to determine the updates for the weights leading to the units in a hidden layer. The error at the output layer itself is computed from the difference between the desired output and the actual output at each of the output units. The actual output for a given input training pattern is determined by computing the outputs of the units of each hidden layer in the forward pass of the input data. The error in the output is propagated backwards only to determine the weight updates.

The reliability of the neural network pattern recognition system is measured by presenting the network with hundreds of input vectors containing varying quantities of noise. A script file tests the network at various noise levels and then graphs the percentage of network errors versus noise. Noise with a mean of 0 and a standard deviation from 0 to 0.5 is added to the input vectors. At each noise level, 100 presentations of different noisy versions of each letter are made and the network's output is calculated. The output is then passed through the competitive transfer function so that exactly one of the 26 outputs (representing the letters of the alphabet) has a value of 1. The number of erroneous classifications is then accumulated and percentages are obtained. The example with the letter G is shown in Figure 4 [4].

Figure 5. Reliability for the network trained with and without noise [3].

The solid line on the graph shows the reliability of the network trained with and without noise; the dashed line shows the reliability of the same network when it had been trained only without noise. Thus, training the network on noisy input vectors greatly reduces its errors when it has to classify noisy vectors. The network made no errors for vectors with noise of mean 0.00 or 0.05; when noise of mean 0.2 was added, both networks began making errors. If higher accuracy is needed, the network can be trained for a longer time or retrained with more neurons in its hidden layer; the resolution of the input vectors can also be increased to a 10-by-14 grid [4].

Other typical problems of the back-propagation algorithm are the speed of convergence and the possibility of ending up in a local minimum of the error function. Today there are practical solutions that make back-propagation in multi-layer perceptrons the solution of choice for many machine learning tasks.
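A minimal sketch of this noise-robustness test (Python; recognize_letter and the clean letter vectors are stand-ins for the trained network and data set described above, and are assumptions of this sketch) adds Gaussian noise at increasing levels and records the error percentage at each level:

```python
import random

def evaluate_noise_robustness(recognize_letter, letters, noise_levels, trials=100):
    """Estimate classification error percentage versus input noise.

    recognize_letter(vector) -> predicted letter index (winner-take-all,
    i.e. the competitive output with value 1 among the 26 outputs).
    letters: list of (index, clean_input_vector) pairs.
    """
    error_pct = {}
    for sigma in noise_levels:
        errors = total = 0
        for _ in range(trials):
            for target, clean in letters:
                # Noise with mean 0 and standard deviation sigma.
                noisy = [x + random.gauss(0.0, sigma) for x in clean]
                if recognize_letter(noisy) != target:
                    errors += 1
                total += 1
        error_pct[sigma] = 100.0 * errors / total
    return error_pct

# e.g. evaluate_noise_robustness(net, letters, [0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
```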
B. Feed-forward Neural Networks for Pattern Recognition

A feed-forward network can be viewed as a graphical representation of a parametric function which takes a set of input values and maps them to a corresponding set of output values [2]. Figure 6 shows an example of a feed-forward network of a kind that is widely used in practical applications [2].

Figure 6. Feed-forward network.

Nodes in the figure represent inputs, outputs or 'hidden' variables, while the edges of the graph correspond to the adaptive parameters. We can write down the analytic function corresponding to this network as follows. The activation of hidden node j is obtained by first forming a weighted linear combination of the d input values x_i:

    a_j = sum_{i=1..d} w_ji x_i + b_j.   (3)

The value of hidden variable j is then obtained by transforming the linear sum in (3) using an activation function g:

    z_j = g(a_j).   (4)

Finally, the outputs of the network are obtained by forming linear combinations of the hidden variables:

    y_k = sum_j v_kj z_j + c_k.   (5)

The parameters w_ji and v_kj are called weights, while b_j and c_k are called biases; together they constitute the adaptive parameters of the network. There is a one-to-one correspondence between the variables and parameters in the analytic function and the nodes and edges, respectively, in the graph. In this network, information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), and to the output nodes. There are no cycles or loops in the network [2].
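A direct transcription of equations (3)-(5) as a forward pass (Python sketch; the tanh activation and the random illustrative weights are assumptions, not values from the paper):

```python
import math
import random

def forward(x, W, b, V, c):
    """Forward pass of a single-hidden-layer feed-forward network.

    a_j = sum_i W[j][i] * x[i] + b[j]   (3)
    z_j = g(a_j), here g = tanh         (4)
    y_k = sum_j V[k][j] * z[j] + c[k]   (5)
    """
    a = [sum(wji * xi for wji, xi in zip(Wj, x)) + bj for Wj, bj in zip(W, b)]
    z = [math.tanh(aj) for aj in a]
    return [sum(vkj * zj for vkj, zj in zip(Vk, z)) + ck for Vk, ck in zip(V, c)]

# Tiny network: 3 inputs, 2 hidden nodes, 1 output (random illustrative weights).
random.seed(0)
W = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
V = [[random.uniform(-1, 1) for _ in range(2)]]
print(forward([0.5, -0.1, 0.9], W, [0.0, 0.0], V, [0.0]))
```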
IV. IMPLEMENTING THE LIF NEURON MODEL FOR PATTERN RECOGNITION [6]

A leaky integrate-and-fire (LIF) neuron can be applied to solve nonlinear pattern recognition problems. The LIF neuron is stimulated for T ms with an input signal and fires when its membrane potential reaches a specific value, generating an action potential (spike) or a train of spikes. Given a set of input patterns belonging to K classes, each input pattern is transformed into an input signal, the spiking neuron is stimulated for T ms, and finally the firing rate is computed. After adjusting the synaptic weights of the neuron model, we expect that input patterns belonging to the same class generate almost the same firing rate, while input patterns belonging to different classes generate firing rates different enough to discriminate among the classes. When the input current signal changes, the response of the LIF neuron also changes, generating different firing rates.

The firing rate is computed as the number of spikes generated in an interval of duration T:

    fr = Fn/T,

where Fn is the number of spikes generated and T is the stimulation period.

The accuracy (classification rate) achieved with this method is computed as the number of input patterns correctly classified divided by the total number of tested input patterns [6].
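A sketch of this firing-rate classifier (Python; it reuses the simulate_lif function from the Section II.C sketch, and the per-class reference rates and the pattern-to-current encoding are illustrative assumptions, not the method of [6]):

```python
def firing_rate(pattern_to_current, pattern, T_ms=100.0):
    """fr = Fn / T: number of spikes over the stimulation period."""
    spikes = simulate_lif(pattern_to_current(pattern), T_ms)
    return len(spikes) / T_ms

def classify(pattern, class_rates, pattern_to_current):
    """Assign the class whose reference firing rate is closest."""
    fr = firing_rate(pattern_to_current, pattern)
    return min(class_rates, key=lambda k: abs(class_rates[k] - fr))

def accuracy(test_set, class_rates, pattern_to_current):
    """Classification rate: correctly classified / total tested."""
    correct = sum(1 for label, p in test_set
                  if classify(p, class_rates, pattern_to_current) == label)
    return correct / len(test_set)
```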

V. CONCLUSION AND FUTURE SCOPE

Various algorithms are used for pattern recognition. The back propagation method is based on the backward propagation of errors and is mainly affected by noise. A feed-forward network can be viewed as a graphical representation of a parametric function which takes a set of input values and maps them to a corresponding set of output values. Spiking neurons can be considered an alternative way to perform pattern recognition tasks: if a single neuron is capable of solving pattern recognition problems, several spiking neurons working together may further improve the experimental results. Input patterns belonging to the same class generate almost the same firing rate, while input patterns belonging to different classes generate firing rates different enough to discriminate among the classes.

However, the LIF model for pattern recognition needs to be re-analyzed when patterns of different classes are applied at the input simultaneously: in that case it may not produce correct firing rates, and hence patterns may not be detected correctly. This can be considered a limitation of the LIF model, one that future work may eliminate.

REFERENCES

[1] P. K. Patra, S. Vipsita, S. Mohapatra and S. K. Dash, "A Novel Approach for Pattern Recognition," International Journal of Computer Applications, Vol. 9(8), pp. 19-23, 2010.
[2] C. M. Bishop, "Pattern Recognition and Feed-forward Networks," in The MIT Encyclopedia of the Cognitive Sciences, R. A. Wilson and F. C. Keil (editors), MIT Press, 1999.
[3] W. Gerstner and W. M. Kistler, Spiking Neuron Models, Cambridge University Press, 2002.
[4] A. Gupta and L. N. Long, "Character Recognition using Spiking Neural Networks," Proc. of the IEEE International Joint Conference on Neural Networks, Orlando, FL, 2007.
[5] S. P. Kosbatwar, "Association for character recognition by Back-Propagation algorithm using Neural Network approach," International Journal of Computer Science & Engineering Survey (IJCSES), Vol. 3(1), pp. 127-134, 2012.
[6] R. A. Vazquez and A. Cachón, "Integrate and Fire Neurons and their Application in Pattern Recognition," Proc. of the 7th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE 2010), Tuxtla Gutiérrez, Chiapas, México, September 8-10, 2010.