
Chapter 3

Application of Multi Layer Perceptron (MLP) for Shower Size Prediction

3.1 Basic considerations of the ANN

Artificial Neural Networks (ANNs) are non-parametric prediction tools that can be used for a host of pattern classification problems [Haykin 2003], including face recognition. The design of the ANN here involves two aspects: the size of the hidden layer and the choice of activation functions. An MLP is constituted for this work with one hidden layer, together with the input and output layers. The size of the hidden layer is fixed not by any definite reasoning but by trial and error: several sizes of the hidden layer have been considered, and Table 3.1 shows the performance obtained during training while varying the size of the hidden layer. The case where the size of the hidden layer is 1.5 times that of the input layer is found to be computationally efficient (Table 3.1); its MSE convergence rate and learning ability are superior to those of the other cases. Hence, the size of the hidden layer of the ANNs considered is fixed at 1.5 times that of the input layer.

Table 3.1: Performance variation after 1000 epochs during training of an MLP with variation of the size of the hidden layer

Case  Size of hidden layer      MSE attained   Precision attained (%)
      (x input-layer size)
1     0.75                      1.2 x 10^-3    87.1
2     1.0                       0.56 x 10^-3   87.8
3     1.25                      0.8 x 10^-4    87.1
4     1.5                       0.3 x 10^-4    90.1
5     1.75                      0.6 x 10^-4    89.2
6     2.0                       0.7 x 10^-4    89.8
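
As an illustration only, the following minimal sketch (not the code used in this work) reproduces such a trial-and-error sweep of the hidden-layer size using scikit-learn's MLPRegressor; the input dimension, data arrays and random seed are placeholders, while the 1000-iteration budget mirrors Table 3.1.

```python
# Illustrative sweep of hidden-layer sizes (cf. Table 3.1); placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_inputs = 8                                    # assumed input-layer size
X = rng.random((500, n_inputs))                 # placeholder density inputs
y = rng.random(500)                             # placeholder shower sizes

for factor in (0.75, 1.0, 1.25, 1.5, 1.75, 2.0):
    hidden = max(1, round(factor * n_inputs))
    net = MLPRegressor(hidden_layer_sizes=(hidden,), activation="tanh",
                       solver="sgd", momentum=0.9, learning_rate="adaptive",
                       max_iter=1000)           # 1000 epochs, as in Table 3.1
    net.fit(X, y)
    mse = np.mean((net.predict(X) - y) ** 2)
    print(f"hidden = {factor:4.2f} x input -> training MSE {mse:.2e}")
```
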

The selection of the activation functions of the input, hidden and output layers plays an important role in the performance of the system. A common practice is to use the same type of activation function in all layers, but certain combinations and alternations of activation function types across the layers can be expected to provide better performance. Hence, in this work two types of MLP configurations are considered: the first constituted by the same activation function in all layers, and the other with varied combinations of activation functions in the different layers. Both configurations are trained with the gradient descent with adaptive learning rate and momentum back propagation (GDALMBP) algorithm as a measure of training performance standardization (Table 3.2). The outcome of the MLP blocks varies with the number of training sessions and the data used; MSE convergence and prediction precision are used to ascertain the performance of the MLP blocks. Table 3.2 shows some results derived during training. Cases 3 and 4 provide the best MSE convergence, but case 3 gives better processing speed. Hence, the combination shown in case 3 of Table 3.2 is taken as the standard configuration of the ANNs.

Table 3.2: Effect on average MSE convergence after 1000 epochs with variation of activation functions at the input, hidden and output layers

Case  Input layer   Hidden layer  Output layer  MSE (x 10^-4)
1     log-sigmoid   log-sigmoid   log-sigmoid   1.45
2     tan-sigmoid   tan-sigmoid   tan-sigmoid   1.32
3     tan-sigmoid   log-sigmoid   tan-sigmoid   1.05
4     log-sigmoid   tan-sigmoid   log-sigmoid   1.02
5     log-sigmoid   log-sigmoid   tan-sigmoid   1.15
6     log-sigmoid   tan-sigmoid   log-sigmoid   1.19
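
The two activation functions compared in Table 3.2 are the log-sigmoid, logsig(v) = 1/(1 + e^-v), and the tan-sigmoid, tansig(v) = tanh(v). A minimal numpy sketch of a forward pass with a different activation in each layer (the layer widths, weights and input here are placeholders, not the thesis values) could look like:

```python
# Minimal numpy sketch of mixing activations per layer (cf. Table 3.2).
import numpy as np

def logsig(v):
    """Log-sigmoid: 1 / (1 + e^-v)."""
    return 1.0 / (1.0 + np.exp(-v))

def tansig(v):
    """Tan-sigmoid: hyperbolic tangent."""
    return np.tanh(v)

def forward(x, weights, biases, acts):
    """One pass through the layers, each with its own activation."""
    for W, b, g in zip(weights, biases, acts):
        x = g(W @ x + b)
    return x

rng = np.random.default_rng(1)
sizes = [8, 12, 12, 1]                          # assumed layer widths
Ws = [rng.standard_normal((m, n)) for n, m in zip(sizes, sizes[1:])]
bs = [rng.standard_normal(m) for m in sizes[1:]]

# Case 3 of Table 3.2: tan-sigmoid / log-sigmoid / tan-sigmoid
out = forward(rng.standard_normal(8), Ws, bs, [tansig, logsig, tansig])
print(out)
```
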

3.1.1 Multi Layer Perceptron Based Learning

The fundamental unit of the ANN is the McCulloch-Pitts neuron (1943). The MLP is the product of several researchers: Frank Rosenblatt (1958), H. D. Block (1962) and M. L. Minsky with S. A. Papert (1988). Backpropagation, the training algorithm, was discovered independently by several researchers (Rumelhart et al. (1986), and also McClelland and Rumelhart (1988)). A simple perceptron, which is a single McCulloch-Pitts neuron trained by the perceptron algorithm, is given as

O_x = g([w] \cdot [x] + b)                                   (3.1)

where [x] is the input vector, [w] is the associated weight vector, b is a bias value and g(.) is the activation function. Such a setup, namely the perceptron, is able to classify only linearly separable data. An MLP, in contrast, consists of several layers of neurons. The output of an MLP with one hidden layer is given as

O_x = \sum_{i=1}^{N} \beta_i \, g([w]_i \cdot [x] + b_i)     (3.2)

where \beta_i is the weight between the i-th hidden neuron and the output. Such a setup may be depicted as in Figure 3.1.

Figure 3.1: Multi Layer Perceptron

The process of adjusting the weights and biases of a perceptron or MLP is known as training. The perceptron algorithm (for training simple perceptrons) consists of comparing the output of the perceptron with an associated target value. The most common training algorithm for MLPs is error backpropagation, which propagates the error correction backwards through each neuron in the network.
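
Equations (3.1) and (3.2) translate directly into code. The following numpy sketch uses placeholder weights and a log-sigmoid g; it illustrates the equations themselves, not the configuration adopted above.

```python
# Direct transcription of Eqs. (3.1)-(3.2) with placeholder parameters.
import numpy as np

def g(v):
    return 1.0 / (1.0 + np.exp(-v))             # log-sigmoid activation

def perceptron(x, w, b):
    """Eq. (3.1): a single McCulloch-Pitts neuron."""
    return g(np.dot(w, x) + b)

def mlp_output(x, W, b, beta):
    """Eq. (3.2): O_x = sum_i beta_i * g([w]_i . [x] + b_i)."""
    return sum(beta_i * g(np.dot(w_i, x) + b_i)
               for beta_i, w_i, b_i in zip(beta, W, b))

rng = np.random.default_rng(2)
x = rng.standard_normal(8)                      # input vector [x]
W = rng.standard_normal((12, 8))                # hidden weight vectors [w]_i
b = rng.standard_normal(12)                     # hidden biases b_i
beta = rng.standard_normal(12)                  # hidden-to-output weights
print(mlp_output(x, W, b, beta))
```
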

3.1.2 Application of Error Back Propagation for MLP Training

The MLP is trained using (error) Back Propagation (BP), by which the connecting weights between the layers are updated. The details of the training process are provided in Section 2.2. One cycle through the complete training set forms one epoch. The training process is repeated until the MSE meets the performance criterion, and the number of epochs elapsed is counted. The methods used for MLP training include: (i) Gradient Descent BP (GDBP), (ii) Gradient Descent with Momentum BP (GDMBP), (iii) Gradient Descent with Adaptive Learning Rate BP (GDALRBP) and (iv) Gradient Descent with Adaptive Learning Rate and Momentum BP (GDALMBP). A sketch of the corresponding weight-update rules is given below.
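
As a rough sketch of these variants (typical values of the learning rate, momentum and adaptation factors are assumed here; this is not the thesis implementation):

```python
# Sketch of the weight updates behind the four BP variants; eta, mu and
# the adaptation factors are assumed typical values, not thesis settings.

def gd_step(w, grad, eta=0.01):
    """(i) GDBP: plain gradient descent, w <- w - eta * dE/dw."""
    return w - eta * grad

def gdm_step(w, grad, velocity, eta=0.01, mu=0.9):
    """(ii) GDMBP: a momentum term mu carries over previous updates."""
    velocity = mu * velocity - eta * grad
    return w + velocity, velocity

def adapt_lr(eta, err, prev_err, grow=1.05, shrink=0.7):
    """(iii)/(iv) adaptive learning rate: grow eta while the error falls,
    shrink it when the error rises (combined with momentum in GDALMBP)."""
    return eta * (grow if err < prev_err else shrink)
```

In the MATLAB Neural Network Toolbox these four variants correspond to the training functions traingd, traingdm, traingda and traingdx respectively, the names under which they appear in Table 3.3.
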

3.2 Application of ANN for shower size prediction

Showers were generated according to a modified Nishimura-Kamata-Greisen (NKG) lateral distribution function [Hanna 1991], with primary energy in the range 10^10.5 to 10^20.5 eV and a Molière radius of 70 m. Their cores were evenly distributed within a circle of radius 50 m centered on the middle of the array; this restriction was adopted to avoid edge effects. A conceptual model of the core and detector locations used for the work is depicted in Figure 3.2.

Figure 3.2: Conceptual set-up used for simulation of density functions of EAS

The high-energy showers between 10^10.5 and 10^20.5 eV are simulated and density values calculated; the required shower sizes are then generated from the trained MLP.
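
The modified NKG form of [Hanna 1991] is not reproduced here; as an illustration only, the standard NKG lateral distribution with the 70 m Molière radius and the 50 m core circle mentioned above can be sampled as follows (the detector position, shower age and shower size are placeholders):

```python
# Illustrative sketch using the *standard* NKG form; the thesis uses a
# modified NKG function [Hanna 1991] whose exact form is not shown here.
import numpy as np
from scipy.special import gamma

R_MOLIERE = 70.0                 # Moliere radius in metres (from the text)

def nkg_density(r, shower_size, s=1.0, r_m=R_MOLIERE):
    """Particle density (m^-2) at core distance r for shower age s."""
    c = gamma(4.5 - s) / (2.0 * np.pi * gamma(s) * gamma(4.5 - 2.0 * s))
    x = r / r_m
    return (shower_size / r_m**2) * c * x**(s - 2.0) * (1.0 + x)**(s - 4.5)

rng = np.random.default_rng(3)
# Cores uniform within a 50 m circle centered on the array (from the text).
theta = rng.uniform(0.0, 2.0 * np.pi)
rho = 50.0 * np.sqrt(rng.uniform())     # sqrt gives uniform area density
core = np.array([rho * np.cos(theta), rho * np.sin(theta)])

detector = np.array([25.0, 0.0])        # placeholder detector position
r = np.linalg.norm(detector - core)
print(nkg_density(r, shower_size=1e6))  # placeholder shower size
```
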
3.2.1 Evaluation of the training process

The results derived for the four training methods are used to determine the most suitable method for training the ANN. Table 3.3 shows some of the results derived during training, from which it is seen that a three-layered ANN trained with traingdm provides the best success rate within 12000 epochs. This set-up is taken for testing and subsequent prediction of shower size.

Table 3.3: Results derived during training: ANN trained with traingd, traingdm, traingdx and traingda

Method     Epochs   Success rate (%)   Processing time (%)
traingd     5000    93.8                12
traingd    10000    92.2                17
traingd    12000    94.0                52
traingd    15000    93.9                33
traingd    20000    94.1                96
traingdm    5000    93.13               12
traingdm   10000    93.9                18
traingdm   12000    94.1                26
traingdm   15000    93.4                33
traingdm   20000    94.1                96
traingdx    5000    92.8                13
traingdx   10000    92.2                18
traingdx   12000    93.1                26
traingdx   15000    93.9                33
traingdx   20000    94.4                99
traingda    5000    93.1                12
traingda   10000    88.9                17
traingda   12000    91.2                26
traingda   15000    93.5                36
traingda   20000    89.1               100

3.2.2 Experimental results and discussion

At the end of the training, the selected MLP configuration is used to carry out predictions for a range of density values generated for the purpose. The results derived by the trained ANN after training it for about 20000 epochs are depicted in Figure 3.3. A χ²-distribution between the expected and ANN-predicted shower sizes with variation of the training sessions is shown in Figure 3.4. Similarly, the success rates derived with increasing ANN training sessions are shown in Figure 3.5. The results thus derived establish the usefulness of the ANN in predicting shower sizes over the range considered in this work.
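
A minimal sketch of the χ² comparison underlying Figure 3.4 (the expected and predicted arrays are placeholders, and any binning used in the thesis is not reproduced):

```python
# Hypothetical sketch of the chi-square comparison of Figure 3.4;
# the expected and predicted arrays are placeholders.
import numpy as np

def chi_square(expected, predicted):
    """Pearson chi-square between expected and ANN-predicted sizes."""
    expected = np.asarray(expected, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sum((predicted - expected) ** 2 / expected)

expected = np.array([1.0e5, 3.2e5, 1.0e6, 3.2e6])    # placeholder sizes
predicted = np.array([1.1e5, 3.0e5, 1.05e6, 3.3e6])  # placeholder outputs
print(chi_square(expected, predicted))
```
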

Figure 3.3: Expected versus ANN-generated results up to 20000 epochs

Figure 3.4: χ²-distribution (between expected and ANN-predicted shower sizes) with ANN training sessions

Figure 3.5: Variation of success rates achieved in shower size prediction with ANN training sessions

3.3 Conclusion

With the size of the hidden layer fixed at 1.5 times that of the input layer, it is found that a three-layered ANN trained with traingdm provides the best success rate within 12000 epochs. This set-up is taken for testing and subsequent prediction of shower size. The success rate derived establishes the ability of the ANN to handle a task like shower size prediction. The work can also be extended to predict the coordinates of the shower core using experimental densities, which is presented in the next chapter.