The Hamming Code Performance Analysis using RBF Neural Network

22-24 October, 2014, San Francisco, USA

Omid Haddadi, Zahra Abbasi, and Hossein Tootoonchy, Member, IAENG

Abstract: In this paper the development of a Hamming decoding model and its BER performance are investigated, including the error histogram, the target mean square error (MSE), the training state, the regression curves, and the impact of employing different numbers of neurons in an RBF neural network. The Hamming (15,11) code is used to develop the results and diagrams throughout this article. The results and simulations in this paper are generated with the Matlab Neural Network Toolbox 2013.

Index Terms: BER Performance, Hamming Code, Neural Network, RBFN.

I. INTRODUCTION

Whenever a transmitter broadcasts a signal over a long distance, the received message may differ from the original one for several reasons, including but not limited to noise, fading, and jamming. The impact of such an error can be as small as a misunderstood word in a telephone conversation or as large as losing the connection to a space station thousands of miles away. Because of the potentially catastrophic impact of such errors on communication, error detection and correction have always been a center of interest for scientists, engineers, and researchers in the field.

One type of error control coding scheme is linear block coding. In this method, extra bits are inserted into the symbol stream emitted by the source, both to support error detection and to correct transmission errors [1]. Channel coding reduces the probability of bit error (P_B) significantly, at the cost of bandwidth and added network complexity [2]. The first step in error detection and correction is error modeling and simulation, and one of the most accurate and reliable modeling and identification tools developed for this purpose is the Artificial Neural Network (ANN).
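The reduction in P_B that coding buys is measured against the uncoded baseline. As a point of reference, the bit error probability of uncoded, coherently demodulated BPSK over AWGN is P_B = Q(sqrt(2*Eb/N0)); a minimal Python sketch of this baseline (illustrative only, not the paper's Matlab code):

```python
from math import erfc, sqrt

def Q(x):
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def bpsk_pb(ebn0_db):
    """Bit error probability of uncoded coherent BPSK at Eb/N0 given in dB."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return Q(sqrt(2.0 * ebn0))

for snr_db in (0, 4, 8):
    print(f"Eb/N0 = {snr_db} dB -> Pb = {bpsk_pb(snr_db):.3e}")
```

This is the "theory" curve that coded schemes are compared against in the BER plots later in the paper.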
Artificial neural networks are circuits, computer algorithms, or mathematical representations of the massively connected sets of neurons found in biological networks, built to mimic neuron behavior. They have been shown to be useful as an alternative computing technology and have proven valuable in a variety of tasks such as pattern recognition, signal processing, estimation, and control; their ability to learn from examples has been particularly useful. Among the diverse set of neural network algorithms, the RBF method is adopted in this paper because of the advantages discussed in the subsequent sections.

Manuscript received July 12, 2014; revised August 9, 2014. Omid Haddadi, M.Sc., research assistant, Department of Electrical Engineering, California State University, Fullerton (phone: 310-349-7533; e-mail: Omid.haddadi@csu.fullerton.edu). Zahra Abbasi, B.Sc. in Radiation Technology, independent research assistant at Saddleback and Irvine Valley Colleges (phone: 949-616-4551; e-mail: zabbasi1@saddleback.edu). Hossein Tootoonchy, M.Sc., research assistant, Department of Electrical Engineering, California State University, Fullerton (phone: 949-616-6249; e-mail: Tootoonchy@csu.fullerton.edu).

In this paper, the Hamming (15,11) code is simulated via an RBF neural network, and the outputs, including the BER curve, the error histogram, and the target MSE, are discussed. The paper is organized as follows. Sections II and III review the Hamming code and neural networks, respectively. Section IV develops and discusses the simulation model. Finally, Sections V and VI present the results and the conclusion, respectively.

II. HAMMING CODE

A. A Brief Introduction to the Hamming Code

In the late 1940s, Claude Shannon was developing information theory and coding as a mathematical model for communication.
At the same time, Richard Hamming, a colleague of Shannon's at Bell Laboratories, found a need for error correction in his work on computers. Parity checking was already being used to detect errors in the calculations of the relay-based computers of the day, and Hamming realized that a more sophisticated pattern of parity checks could be used to correct a single error while also detecting double errors. In 1950, Hamming published what is now known as the Hamming code. The single-error-correcting binary Hamming codes, and their single-error-correcting, double-error-detecting extended versions, marked the beginning of coding theory. These codes remain important to this day, for theoretical and practical as well as historical reasons.

The Hamming codes are a class of block codes characterized by the structure (n,k) = (2^m - 1, 2^m - 1 - m), where m = 2, 3, .... Such a code transmits n bits for every k source bits and can correct all single errors, or detect all combinations of two or fewer errors, within a block. For transmission over a Gaussian channel using coherently demodulated BPSK, the channel symbol error probability can be expressed as

    p = Q( sqrt(2*Ec/N0) )                                              (1)

where Ec/N0 is the code symbol energy per noise spectral density and Q(x) is the complementary error function [2]. In this paper m = 4 is considered, so the (n,k) Hamming code is (15,11). For this code the generator matrix G and the parity check matrix H take the systematic forms G = [I_k | P] and H = [P^T | I_{n-k}], where P is the 11-by-4 parity submatrix. (The explicit G and H matrices were given as figures in the original and are not reproduced here.)

B. Syndrome and Error Detection

Let v = (v_0, v_1, ..., v_{n-1}) be a code word that was transmitted over a noisy channel, and let r = (r_0, r_1, ..., r_{n-1}) be the vector received at the output of the channel. Upon receiving r, the decoder must first determine whether r contains transmission errors, so it computes the following (n-k)-tuple, called the syndrome of r:

    s = r * H^T                                                         (2)

We have s = 0 if and only if r is a code word, in which case the receiver accepts r as the transmitted code word; s != 0 if and only if r is not a code word, in which case the presence of errors has been detected [3].

C. Error Correction

After the syndrome s is found, the coset leader (error pattern) e whose syndrome equals s is identified. The received vector r is then decoded into the code vector

    v = r + e                                                           (3)

III. NEURAL NETWORK AND RBF

A. Introduction to Neural Networks

Among the available computational intelligence techniques, Artificial Neural Networks (ANNs) attempt to mimic the behavior of biological neurons. Among the benefits of ANNs are the ability to process complex and interconnected data, input/output relationships, and nonlinear models, which are notoriously hard to simulate and develop otherwise. Neural networks are able to infer and learn complex relationships by generalizing from a limited amount of data through a process known as training. This concept comes from the fact that animals and humans are able to learn through observation.

Fig. 1: Neural network with one input layer, one hidden layer, and one output layer
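The encoding and syndrome-decoding steps of Section II can be sketched in Python (a minimal illustration under the systematic G = [I | P], H = [P^T | I] convention; the paper's own simulations use Matlab, and the particular column ordering below is an assumption, not the paper's matrices):

```python
import numpy as np

m = 4
n = 2 ** m - 1            # 15 total bits per block
k = n - m                 # 11 message bits

# Columns of P^T: the 11 nonzero 4-bit patterns of weight >= 2
# (the weight-1 patterns form the identity block of H).
PT = np.array([[(v >> j) & 1
                for v in range(1, 2 ** m) if bin(v).count("1") >= 2]
               for j in range(m)])                 # shape (4, 11)
G = np.hstack([np.eye(k, dtype=int), PT.T])       # generator, G = [I | P]
H = np.hstack([PT, np.eye(m, dtype=int)])         # parity check, H = [P^T | I]

def encode(msg):
    return msg @ G % 2

def decode(r):
    s = r @ H.T % 2                               # syndrome, eq. (2)
    if not s.any():
        return r[:k]                              # r is a code word
    # Single-error correction: the syndrome equals the column of H at the
    # error position, so flip that bit (v = r + e, eq. (3)).
    pos = next(i for i in range(n) if np.array_equal(H[:, i], s))
    r = r.copy()
    r[pos] ^= 1
    return r[:k]

msg = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0])
cw = encode(msg)
cw_err = cw.copy()
cw_err[6] ^= 1                                    # inject one bit error
decoded = decode(cw_err)
print("message recovered:", np.array_equal(decoded, msg))
```

Because every column of H is a distinct nonzero 4-tuple, any single-bit error maps to a unique syndrome, which is exactly the single-error-correcting property stated above.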
Every new situation is a training experience from which the agent gains knowledge that it can use when it is later confronted with new, unpredicted situations. Although the exact learning mechanism is still unknown, attempts to mimic the pattern have been successful. According to scientific observation, the brain consists of a very large number of interconnected cells called neurons, which are the critical information-processing units; these neurons respond to electrical impulses sent from other neurons. In 1943, McCulloch and Pitts developed a simple mathematical model of the neuron that has multiple inputs connected to the neuron's output through influencing factors known as weights. Fig. 2 shows the proposed model, and it is remarkable how such a simple model can solve many sophisticated problems. It was later shown that perceptrons can be grouped into layers, forming multilayer perceptrons. The input layer performs no computation; it merely distributes the inputs, through the weight factors, to the summations. For each neuron in the hidden layer, the weighted sum of the inputs is first calculated. Weights play an important role in the operation of neural networks: some inputs are more important than others, and their influence can be emphasized through the weights. A nonlinear transfer function, also known as an activation function, is then applied to produce the desired output:

    X_j = f( sum_k W_jk * x_k )                                         (4)
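Eq. (4) is a one-liner in code. A minimal sketch of a single McCulloch-Pitts-style neuron with a tanh activation (the weights and inputs below are arbitrary illustrative values, not taken from the paper):

```python
import numpy as np

def neuron(x, w, f=np.tanh):
    # X_j = f( sum_k W_jk * x_k ), eq. (4), for a single neuron j
    return f(np.dot(w, x))

x = np.array([0.5, -1.0, 0.25])   # inputs
w = np.array([0.8, 0.2, -0.5])    # weights
out = neuron(x, w)
print(out)                         # tanh of the weighted sum
```

A full layer is the same operation with a weight matrix instead of a weight vector.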

Transfer functions add the required nonlinearity to the system. Another important feature of ANNs is the ability to learn. Neural networks lack elements that store data explicitly; instead they exploit the weights, which encode the importance of each connection. Training the network means selecting these weights so that the error between the desired output and the network output is minimized. Two steps are involved: first, the feed-forward calculation through the weights and inputs; second, the comparison of the network output with the desired output. Once the transfer function outputs have been computed, the result is compared with the desired output, and the difference, the error, is used to adjust the weights, first in the last layer, then in the layer before it, and so on. This process continues until the error is at a minimum:

    min E = (1/2) * sum_n sum_i ( y_i(x_n) - Y_i(x_n) )^2
          = (1/2) * sum_n sum_i ( y_i(x_n) - f( sum_j w_ij * f( sum_k w_jk * x_kn ) ) )^2        (5)

where y_i(x_n) denotes the desired output for training sample x_n and Y_i(x_n) = f( sum_j w_ij * f( sum_k w_jk * x_kn ) ) is the network output. The gradient descent optimization and the updated output weights are found by differentiating the cost function given by equation (5). Because these derivatives involve the outputs of the other layers, the desired result is obtained with the chain rule: the errors are fed backward through the network, layer by layer, using the gradient descent algorithm, which is why this method is called back propagation.

Fig. 2: McCulloch-Pitts neuron model

During recent years, neural networks have been the center of attention for researchers and scientists, and new architectures and learning algorithms are developed all the time. Even though present neural networks do not achieve human-like performance, they offer interesting means for pattern recognition, a task that traditionally involves a large collection of very different mathematical tools (preprocessing, feature extraction, final recognition). In many cases it is difficult to say what kind of tool would best fit a particular problem.
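The training loop just described, a forward pass followed by error terms fed backward with gradient descent on the cost of eq. (5), can be sketched on a toy problem. The XOR target, network size, sigmoid activation, and learning rate below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy nonlinear target: XOR of two bits.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 8))    # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))    # hidden -> output weights
b2 = np.zeros(1)

for _ in range(20000):
    h = sig(X @ W1 + b1)             # forward pass, hidden layer
    y = sig(h @ W2 + b2)             # forward pass, output layer
    d2 = (y - Y) * y * (1 - y)       # output-layer error term
    d1 = (d2 @ W2.T) * h * (1 - h)   # error fed backward (chain rule)
    W2 -= 0.5 * h.T @ d2             # gradient descent updates
    b2 -= 0.5 * d2.sum(axis=0)
    W1 -= 0.5 * X.T @ d1
    b1 -= 0.5 * d1.sum(axis=0)

E = 0.5 * np.sum((sig(sig(X @ W1 + b1) @ W2 + b2) - Y) ** 2)  # eq. (5) cost
print(f"final cost E = {E:.4f}")
```

The two lines computing d2 and d1 are exactly the layer-by-layer backward error flow that gives back propagation its name.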
Neural networks make it possible to combine these steps, because they are able to extract features autonomously, and they are practical to use because they are nonparametric. It has also been reported that the accuracy of neural classifiers is better than that of traditional counterparts [4][5]. Selecting the proper learning algorithm is vital: with the right one, it is possible to train networks that cannot be trained with simple algorithms. For example, error back propagation (EBP) is one of the most widely used training algorithms and is well suited to networks with a large number of neurons; it is very efficient in learning, but at the cost of reduced generalization ability. In other words, the neural network may produce incorrect answers for patterns that were not introduced in the training sets [6][7].

IV. RBF HAMMING CODE MODEL

Different techniques have been developed to correct errors in received data. Instead of traditional error-correcting techniques, Artificial Neural Networks have been used because of their adaptive learning, self-organization, and real-time operation, and because they project what will most likely happen by analogy with the human brain. Many researchers have contributed to the field of channel decoding with artificial neural networks. L. G. Tallini and P. Cull proposed a scheme using ANN technology to decode Hamming codes and Reed-Muller codes [8]. S. E. El-Khamy used ANNs to decode block codes and compared the performance of soft-decision and hard-decision decoding [9]. Because of their parallel processing capability, ANNs are a promising technology for error correction to meet the needs of high-data-rate transmission. R. Annauth et al. proposed a scheme using the error back propagation (EBP) algorithm to decode Turbo codes [10].
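Before the model itself, the structure of an RBF network is worth sketching: Gaussian hidden units centered on training points, and a linear output layer whose weights are solved directly by least squares, which is what makes RBF training fast compared with iterative EBP. A minimal regression sketch (the sine target, spread value, and center selection are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def rbf_design(X, centers, spread=1.0):
    # phi[i, j] = exp(-||x_i - c_j||^2 / (2 * spread^2)), a Gaussian basis
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * spread ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, (200, 1))               # training inputs
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)  # noisy target

centers = X[::10]                             # every 10th sample as a center
Phi = rbf_design(X, centers)                  # hidden-layer outputs
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights

mse = np.mean((Phi @ w - y) ** 2)
print(f"training MSE: {mse:.5f}")
```

In the same spirit, Matlab's newrb used later in the paper grows the hidden layer one center at a time until a target MSE is reached.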
However, the performance results were rather poor, although the decoding complexity was reduced [11]. A Radial Basis Function (RBF) algorithm for artificial neural networks has also been simulated in Matlab for decoding block codes, with the simulator trained on all possible code words to detect and correct errors [12].

The model explained here is designed to encode and decode the (15,11) Hamming code presented in Section II. For a better result, it is advisable to select a large N, where N is the number of bits emitted from the transmitter through the channel; here N = 10^6 is assumed. The modulation used in this paper is BPSK, and the channel noise is Additive White Gaussian Noise (AWGN). From the viewpoint of codewords, the RBF decoder can be treated as a Single-Input Single-Output (SISO) model whose input and output each correspond to one codeword. Decoding Hamming codes with RBF technology involves two stages: first, training the RBF network; second, testing and validating it [11].

Training stage: Based on the principle of Minimum Mean Square Error (MMSE), known information is used to train the RBF network, and the weights of the network are changed during training to obtain the optimal output. The number of training samples strongly influences the performance of the RBF decoder; there should be neither too few nor too many. With too few samples, the weights of the RBF network would not be correct, leading to poor decoding performance. On the other hand, with too many samples the performance would not improve substantially while the training time would be prolonged. Based on the defaults of nftool in Matlab and our experiments, 70% of the data is allocated to the training stage. Figure 3 illustrates the training data and its regression plot.

Fig. 3: Training data and regression

For the testing and validation stages, similar portions of the information must be allocated; in this paper, 15% of the information is considered for each of them. Figure 4 shows the testing data and its regression plot, and Figure 5 depicts the full collection of data sets used to derive the simulation results.

Fig. 4: Test data and regression

Fig. 5: All data and regression

To create and train the RBF network, the Matlab function newrb is employed. Several of its parameters can be modified to achieve a better result in terms of performance and error reduction; Table 1 lists the parameter values used in the simulation, and Table 2 gives the mean square error obtained with different numbers of neurons.

Table 1: Parameters used in the RBF tool

    Parameter                        Value
    Desired minimum error            0
    Spread                           1
    Maximum number of neurons        7
    DisplayAt                        1
    Percentage of training data      70
    Percentage of testing data       15
    Percentage of validation data    15

Table 2: MSE for different numbers of neurons

    Number of neurons    MSE
    2                    6.24836e-05
    3                    3.03697e-05
    4                    1.96990e-07
    5                    1.71009e-07
    6                    1.43405e-07
    7                    3.28795e-08

V. RESULTS

In order to simulate the results, a proper training data set was used for the (15,11) Hamming code. The simulation was performed in the presence of additive white Gaussian noise (AWGN) and carried out entirely with the Matlab Neural Network Toolbox 2013. In order to achieve the desired minimum square error, the newrb function is employed.
This function increases the number of neurons in the hidden layer of the radial basis network until the desired MSE is achieved. After training the RBF network with a proper data set and running the algorithm, the following result was achieved. Fig. 6 shows the Bit Error Rate (BER) curve, which is indicative of the channel conversion accuracy at different SNRs. The simulation consists of four graphs: theory, symbol error rate, Hamming BER, and RBFN Hamming.

Fig. 6: SNR vs. BER graph for the RBFN (15,11) Hamming code

According to the simulated results, all graphs show a decreasing error rate with increasing SNR, although the decrease is not the same for the different algorithms. It has been shown in numerous papers and articles that the Hamming code produces better results than the uncoded system in terms of performance and error reduction, and the Hamming code simulation results in this paper support the same conclusion. Adding new layers and neurons increases the complexity of the network but does not necessarily improve the results. It is shown that using an optimized neural network algorithm along with the Hamming code can produce a much lower BER while maintaining network simplicity, a feature that makes network debugging and troubleshooting much easier than in complex networks.

REFERENCES

[1] H. Abdelbaki and E. Gelenbe, "Random neural network decoder for error correcting codes."
[2] B. Sklar, Digital Communications, vol. 2. Prentice Hall, NJ, 2001.
[3] J. Micolau, D. Rodriguez, and J. A. Vidal, "Hamming block codes," Jan. 2000.
[4] T. Sorsa, H. N. Koivo, and H. Koivisto, "Neural networks in process fault diagnosis," IEEE Trans. Syst., Man, Cybern., vol. 21, no. 4, pp. 815-825, 1991.
[5] S. R. Naidu, E. Zafiriou, and T. J. McAvoy, "Use of neural networks for sensor failure detection in a control system," IEEE Control Syst. Mag., vol. 10, no. 3, pp. 49-55, 1990.
[6] B. Wilamowski, "How not to be frustrated with neural networks," pp. 56-63, Dec. 2009.
[7] B. G. Lipták, Instrument Engineers' Handbook, Volume Two: Process Control and Optimization, vol. 2. CRC Press, 2005.
[8] L. G. Tallini and P. Cull, "Neural nets for decoding error-correcting codes," Ital. J. Pure Appl. Math., vol. 10, pp. 91-106, 2001.
[9] S. E. El-Khamy, E. A. Youssef, and H. M. Abdou, "Soft decision decoding of block codes using artificial neural network," Proc. IEEE Symp. Comput. Commun., pp. 234-240, 1995.
[10] R. Annauth and H. C. S. Rughooputh, "Neural network decoding of Turbo codes," Int. Jt. Conf. Neural Networks (IJCNN '99), vol. 5, pp. 3336-3341, 1999.
[11] X. Liu, Z. Chen, Z. Wang, and P. Cull, "Decoding of block Turbo codes with RBF networks," Int. Conf. Sensing, pp. 1986-1990, 2006.
[12] A. Haroon, "Decoding of error correcting codes using neural networks," 2012.