Application of Generalised Regression Neural Networks in Lossless Data Compression


R. LOGESWARAN, Centre for Multimedia Communications, Faculty of Engineering, Multimedia University, Cyberjaya, MALAYSIA

Abstract: - Neural networks are a popular technology that exploits massive parallelism and distributed storage and processing for speed and error tolerance. Most neural networks rely on linear, step or sigmoidal activation functions for decision making. The generalised regression neural network (GRNN) is a radial basis network (RBN) which uses the Gaussian activation function in its processing elements (PE). This paper proposes the use of the GRNN for lossless data compression, applied in the first stage of the lossless two-stage predictor-encoder scheme. Three different approaches using the GRNN are proposed, and batch training with different block sizes is applied to each approach. Two popular encoders, namely arithmetic coding and Huffman coding, are used in the second stage. The performance of the proposed single- and two-stage schemes is evaluated in terms of the compression ratios achieved for telemetry data test files of different sizes and distributions. It is shown that the compression performance of the GRNN schemes is better than that of existing implementations using the finite impulse response (FIR) and adaptive normalised least mean squares (NLMS) filters, as well as an implementation using a recurrent neural network.

Key-Words: - Lossless data compression, neural network, two-stage, predictor, encoder, radial basis.

1 Introduction

Inherent parallelism, distributed storage and processing, error tolerance, and the ability to learn, adapt and generalise input patterns are some of the characteristics of neural networks which give them an edge over classical technology. These characteristics are sought after in the area of data compression, especially when dealing with real-time applications and remote equipment, for which repair and maintenance are costly. Lossless data compression assumes importance when dealing with data that is sensitive to error, such as high definition medical images and satellite telemetry. In recent years, several types of classical and neural network schemes have been successfully applied to lossless data compression [1]-[3]. The neural schemes generally employ step (hard-limiter), linear and sigmoidal activation functions in the processing elements (PEs) of their hidden and output layers. An alternative approach is the use of a radial basis function in the decision nodes. Radial basis nodes produce identical outputs for inputs at equal distance from the centre, acting as a detector of the input; the most common of these functions is the Gaussian distribution. Radial basis networks (RBN) [4] can be designed in a fraction of the time taken to train standard feedforward networks [5], a significant advantage in real-time applications. Several implementation schemes of a particular RBN, the generalised regression neural network (GRNN), are proposed in this paper.

2 GRNN Models

The GRNN [6] is a two-layer feedforward network (by convention, the input layer is not counted), consisting of a radial basis hidden layer and a special linear output layer, as shown in Fig. 1, with the activation (transfer) functions shown in Fig. 2. The activation level of the hidden layer is determined by f_rb in (1), using the net input, N, in (2).
$f_{rb}(N) = e^{-N^2}$  (1)

$N = \lVert W - X \rVert \, b$  (2)

where f_rb(.) is the radial basis activation function, N is the net input to the activation function, W is the weight vector of the PE (containing weights w_1, w_2, ..., w_p), X is the input vector of the PE (containing inputs X_1, X_2, ..., X_p), and b is the bias of the PE.
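As a concrete illustration of (1) and (2), a minimal sketch of a single radial basis PE is given below (Python/NumPy; the weight vector, bias and inputs are hypothetical values chosen only for illustration):

```python
import numpy as np

def radial_basis_pe(x, w, b):
    """Activation of a single radial basis PE, following (1) and (2):
    the net input N is the distance between the weight vector and the
    input vector scaled by the bias b, and the output is exp(-N^2)."""
    n = np.linalg.norm(w - x) * b      # net input, eq. (2)
    return np.exp(-n ** 2)             # Gaussian activation, eq. (1)

# Hypothetical 5-input PE: the output approaches 1 when x is close to
# the stored centre w and falls towards 0 as x moves away from it.
w = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = 0.5
print(radial_basis_pe(np.array([1.1, 2.0, 2.9, 4.2, 5.0]), w, b))
print(radial_basis_pe(np.array([9.0, 9.0, 9.0, 9.0, 9.0]), w, b))
```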

The nodes in the output layer form a linear combination of the basis (kernel) functions computed by the hidden units, producing the overall output (decision) of the network at each iteration.

Fig. 1 : GRNN structure with p inputs (inputs X_1, X_2, ..., X_p with weights w_1, w_2, ..., w_p, a radial basis hidden layer and a special linear output layer)

Fig. 2 : Activation functions of the (a) radial basis (rb) hidden and (b) linear (lin) output layers of the GRNN

For lossless data compression, the well-known two-stage scheme shown in Fig. 3 [1] is chosen. In this scheme, the first stage predicts the current input, X_n. The residue, r_n, is generated by taking the difference between X_n and the predicted value, X̂_n. The encoder in the second stage compresses the residue before transmission. The GRNN is used in the first stage, whilst lossless arithmetic coding is used as the encoder. Three approaches to adapting the GRNN for the first stage are attempted, each with different processing and storage requirements. These approaches (models) are detailed below.

Fig. 3 : Two-stage compression scheme (input X_n, lossless predictor, residue r_n, lossless encoder, output Y_n)
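To make the data flow of Fig. 3 concrete, a minimal sketch of the first stage is given below (illustrative Python only, not the paper's implementation; the predictor is left as a generic callable and the second-stage encoder is merely indicated):

```python
import numpy as np

def first_stage_residues(x, predictor, p=5):
    """Residue generation of the two-stage scheme in Fig. 3:
    r_n = X_n - round(X_hat_n), where X_hat_n is produced by `predictor`
    from the p previous samples.  The residues would then be passed to
    the second-stage (arithmetic or Huffman) encoder."""
    x = np.asarray(x)
    r = x.copy()                          # the first p samples are sent as-is
    for n in range(p, len(x)):
        x_hat = predictor(x[n - p:n])     # first stage: predict X_n
        r[n] = x[n] - round(x_hat)        # integer residue keeps the scheme lossless
    return r

# Hypothetical example with a trivial "repeat the last sample" predictor.
data = np.array([10, 12, 13, 15, 14, 16, 18, 17])
print(first_stage_residues(data, predictor=lambda past: past[-1], p=1))
```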

2.1 GRNN Predictor (GRNNP)

The first approach is to use the GRNN as a predictor, shown in Fig. 4. The network is set up such that a number of past samples, say p, are used to predict the current sample. The p-th order GRNNP derives an approximation function of the input, F, during its training phase using a training window. The input values are then predicted using F, as shown in (3).

$\hat{X}_n = F_{GRNNP}(X_{n-1}, X_{n-2}, \ldots, X_{n-p})$  (3)

where F_GRNNP is the approximation function derived by the GRNNP, X̂_n is the predicted n-th value and X_n is the n-th input value.

Fig. 4 : GRNN Predictor (GRNNP) model (a chain of Z^{-1} delays supplies the p past samples to the GRNN, which outputs X̂_n)

2.2 GRNN Approximator (GRNNA)

To realise the usual application of the GRNN as a function approximator, the second approach uses samples taken at regular intervals throughout the data in designing the network. These samples are used as the target (expected) values. Fig. 5 shows the structure of the GRNNA.

Fig. 5 : GRNN Approximator (GRNNA) / GRNN Estimator (GRNNE) model (inputs X_i, X_{i-I}, ..., X_{i-I(p-1)}; j = final sample in the training window of the current block, I = size of an interval between samples, p = number of input nodes, n = number of the current sample, with j - pI + 1 < n < j)

Only the p inputs at intervals I of the training block are used in setting up the network. The approximation process for the remaining input data is simplified as (4).

$\hat{X}_n = F_{GRNNA}(n)$  (4)

where F_GRNNA is the approximation function of the GRNNA, X̂_n is the predicted n-th value and n is the number of the input currently being approximated.

The remaining input values are not used in the approximation process, except in the generation of the residues. This approach enables the GRNNA to use a larger range of the input for function approximation. For better approximation, the input should be preprocessed by a function analyser to identify the training samples that are most representative of the input pattern. The additional overheads incurred are costly in terms of time and processing power, and make this impractical for real-time applications.

2.3 GRNN Estimator (GRNNE)

The GRNNE is a compromise between the compression performance and the processing requirements of the GRNNA. Its structure is similar to that of the GRNNA; the difference is that samples are taken at intervals over a subset of the input block, namely the training window, and not over the entire block. The interval, I, is therefore of smaller magnitude than the one used for the GRNNA. This provides the design stage with more distributed samples for function approximation than the GRNNP, while not requiring that the entire input block be buffered as in the GRNNA implementation. The approximated function is projected across the input block. This scheme assumes that intra-block variation of the input pattern is small, so that the projected function performs well in estimating the input values.

3 Training Schemes

The GRNN is trained at the design stage, where only one set of input values is presented to the network at a time. For the GRNNP, this is done numerous times, with a different set of p consecutive input values from the training window each time. The trained GRNN is set up as an approximation function, F, of the input. Adjustment of the weights and biases is done in a way that minimises the mean absolute error (MAE) between the values approximated using F and the actual training sample values.

Training schemes may be varied in terms of the size of the data sets, S_B, presented to the network. In the single-block (SB) training scheme, the entire input file is treated as one continuous block. A training window of the first 500 samples is used for training, according to the different approaches. The remaining samples are then predicted, approximated or estimated, depending on the model used, and the residues are generated from the difference between these values and the actual values. To enable semi-adaptive training whilst minimising buffer requirements and processing time, a batch training method was also implemented, with S_B of 50, 500, 1000, 1500, 2000 and 2500 samples per block tested. In this method, a training window of 20% of the block size is used.

Identical first-stage networks and encoders exist at the transmitter and receiver ends. It is assumed that this criterion ensures that any losses incurred at the transmitter are incurred identically at the receiver, so that the value restored using the residue is identical to the source input data. The networks also need to be trained in the same manner to achieve losslessness. In order to maximise compression, the coefficients of the trained network are transmitted to set up the receiver network, followed by the residues of the first block. The magnitudes of the transmitted residues are significantly lower than those of the original input values. For subsequent blocks, the training data is predicted / approximated / estimated using the previous block's network and the residues are sent. Fig. 6 shows the general block-adaptive training and prediction stages of the input stream at the transmitter.
Fig. 6 : Batch-training process for the GRNN (the sample axis is divided into blocks of S_B samples; each block begins with a training window of 0.2 S_B samples, followed by the prediction process over the remainder of the block)

Training occurs at both ends. Since training takes an identical amount of time at both, the transmitter and receiver remain synchronised (disregarding the initial delay in setting up the first block, and assuming that the residue-generation processes take the same amount of time as the restoration processes, and that transmission time is constant and negligible). As such, both networks would theoretically operate at a constant rate, enabling real-time implementation.
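A compact sketch of this block-adaptive procedure at the transmitter is shown below (illustrative Python; train_grnn and predict are stand-ins for the chosen GRNN model's design and recall routines, and the exact handover between blocks is a simplifying assumption):

```python
import numpy as np

def block_adaptive_first_stage(x, train_grnn, predict, block_size=50, window_frac=0.2):
    """Block-adaptive first stage at the transmitter: every block of
    block_size samples starts with a training window (20% of the block).
    The network designed on the previous block predicts the current block;
    the first block is predicted by its own freshly designed network."""
    x = np.asarray(x, dtype=float)
    residues, model = [], None
    for start in range(0, len(x), block_size):
        block = x[start:start + block_size]
        window = block[:max(1, int(window_frac * len(block)))]
        current = train_grnn(window)                  # design on this block's window
        use = current if model is None else model     # previous block's network otherwise
        residues.append(block - np.round(predict(use, block)))
        model = current                               # becomes the predictor for the next block
    return np.concatenate(residues)                   # residues go to the second-stage encoder

# Toy usage with a stand-in "network": the mean of the training window.
data = np.arange(200.0)
res = block_adaptive_first_stage(
    data,
    train_grnn=lambda win: float(win.mean()),
    predict=lambda m, blk: np.full(len(blk), m),
)
print(res[:5])
```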

The networks are re-trained for each block, changing F. Initialisation of the hidden layer is done using unsupervised learning, through algorithms such as the k-means clustering algorithm [7]. Fast initialisation of the output layer may be done via optimised methods such as the Nguyen-Widrow algorithm [8]. Supervised backpropagation may then be used to fine-tune the two-layer network [9]. When random numbers are used for initialisation, the same seed must be used at both networks, so that the sequences of pseudo-random numbers generated are identical.

4 Configurations

Neural network configuration is done with respect to the topological and neurodynamic set-up. In the case of the GRNN, the number of layers and the activation functions are predetermined. Only one output PE is used, as only one value is approximated at each iteration of the algorithm. The weights are determined automatically during training. The bias allows the sensitivity of the node to be adjusted. It is set using (5) such that, for the range $-N_{SPREAD} \le N \le +N_{SPREAD}$, the output of the radial basis satisfies $0.5 \le f_{rb}(N) \le 1.0$. The constant N_SPREAD should be chosen to be larger than the spacing of adjacent input values, to allow the PE to respond strongly to overlapping active regions of the input space, allowing the network to function more smoothly and resulting in better generalisation of input vectors. However, N_SPREAD must be smaller than the input space so that each PE does not effectively respond to the same large area of input.

$b = \sqrt{-\log_e(0.5)} \, / \, N_{SPREAD}$  (5)

The GRNN has an equal number of hidden-layer PEs and input nodes. The optimum number of input nodes is determined empirically; Tables 1 and 2 give the results obtained, in terms of the compression ratio (CR), the ratio of the size of the original source file to that of the compressed file. The residues are transmitted / stored using a set number of bits, t, determined empirically. Should the magnitude of a residue exceed the t-bit representation, the actual input value is transmitted in the original s bits, preceded by a t-bit flag. The compressed file includes all the transmitted information, including network set-up information and flags.
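As an illustration of this residue-packing rule, a simplified bit-level sketch follows (Python; the choice of escape flag and the bit widths t and s are hypothetical, since the paper determines t empirically):

```python
def pack_residues(residues, originals, t=4, s=16):
    """Pack each residue into t bits (two's complement).  A residue that
    does not fit is replaced by a t-bit escape flag followed by the
    original sample value in its full s-bit representation."""
    flag = 1 << (t - 1)                                 # hypothetical escape codeword
    lo, hi = -(1 << (t - 1)) + 1, (1 << (t - 1)) - 1    # residues representable in t bits
    bits = []
    for r, x in zip(residues, originals):
        if lo <= r <= hi:
            bits.append(format(r & ((1 << t) - 1), f'0{t}b'))   # t-bit residue
        else:
            bits.append(format(flag, f'0{t}b'))                 # t-bit flag
            bits.append(format(x & ((1 << s) - 1), f'0{s}b'))   # original value in s bits
    return ''.join(bits)

# Hypothetical stream: the third residue (300) overflows 4 bits, so the
# original 16-bit sample value is sent instead, preceded by the flag.
print(pack_residues([2, -1, 300], [10, 12, 300]))
```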
Only a limited number of test files were used at this stage, so as not to over-train the networks and restrict their ability to generalise input patterns. The three files chosen were those that best represented the distribution of the six telemetry test data files used in the performance evaluation in the next section.

Table 1 : CR obtained by the GRNNP with different numbers of input nodes for the 3 test files Td3 (139,571 bytes), Td4 (55,365 bytes) and Td6 (184,774 bytes), and the average CR

Table 2 : Average CR obtained by the GRNNA and GRNNE for different numbers of input nodes

From the results in Table 1, it is observed that the average CR obtained for all the configurations of the GRNNP is the same (to two decimal places). Analysing the results for each test file, the GRNNP with 5 inputs achieved the highest CR for all the files, so this 5th-order predictor is chosen. In Table 2, the maximum CR for the GRNNA is achieved by the 2nd-order and 10th-order networks. The 10-input configuration (10 inputs, 10 hidden nodes and a single output PE) is chosen, as the additional nodes would be useful in producing a better approximation function. The 2-node combination performed well because part of the telemetry data contains a linear parameter that was well approximated by the function derived using 2 nodes. In the case of the GRNNE, the configuration that produced the best average CR is selected.

5 Performance Evaluation

Using the chosen GRNN configurations, the performance of the GRNN networks is evaluated for each of the training schemes. Results in terms of the CR achieved by the first stage alone, using only the GRNN with the residue optimisation, are given in Table 3. These values represent the average performance of the GRNN models over the six telemetry data files.

Simulation of the neural networks is done on a serial machine, and as such the processing times of the algorithms are unavailable.

Table 3 : Average CR achieved by the GRNN models (GRNNP, GRNNA, GRNNE) in the first stage for different block sizes (SB and 50 to 2500 samples per block)

From the results in Table 3, it is found that the best performance for all three models is achieved with the smallest block size of 50 samples per block. This is as expected, since the approximation function derived is a closer representation of the actual distribution across a smaller span of the input. An interesting result is that, although the approach used for the GRNNP is unconventional with regard to designing a GRNN, this network produced better results than the other two approaches in almost all the training schemes. On further investigation, it was realised that in order to implement this unconventional scheme, the simulation adapted the architecture of the GRNNP by changing the number of hidden-layer PEs. The number used was equal to the number of input patterns presented to the network during the design stage (which depends on the training-window size and p), rather than to the number of input nodes. With a larger network of more PEs, the GRNNP was able to better predict the input and produce smaller residues. It is also interesting that the GRNNA did not perform as well as expected; it is likely that the intervals used for sampling were unable to capture the significant patterns in the data. In general, as expected, the GRNNE achieved the lowest CR. It did, however, manage to outperform the other two networks for block sizes of 500 and 2500, and would be recommended in situations where buffer space is costly.

The GRNN is incorporated into the two-stage scheme by pairing the GRNN models with a popular encoder, namely arithmetic coding (AC). The results achieved by the new two-stage schemes are given in Table 4. Again, it is observed that the GRNNP maintains the best compression performance. The way the encoders handle the different residue patterns produced by the first stage influences the overall results [10]; for instance, the best performance for block size 2000 in the two-stage scheme uses the GRNNA, not the GRNNE. To illustrate the influence of the encoder on the two-stage scheme further, two-stage schemes of the GRNN with Huffman coding (HC) are evaluated. From the results in Table 5, it is noticeable that the performance of certain two-stage schemes involving the GRNNE degrades to a lower CR than the results of the single-stage scheme in Table 3. This indicates that it is possible for data expansion to occur by incorporating the second stage, as observed in the case of some files using the GRNNE-HC combination for S_B of 1000, 1500, 2000 and 2500.

Table 4 : Average CR achieved by the GRNN in the two-stage schemes with arithmetic coding (AC), for the different models (GRNNP, GRNNA, GRNNE) and block sizes

Table 5 : Average CR achieved by the GRNN in the two-stage schemes with Huffman coding (HC), for the different models (GRNNP, GRNNA, GRNNE) and block sizes

In order to provide a relative performance comparison, existing two-stage schemes may be used [1]. A fixed 5th-order finite impulse response (FIR) filter, using the coefficients described by (6), and an adaptive normalised least mean squares (NLMS) predictor are applied to the first stage.
$\hat{X}_n = 4x_{n-1} - 7x_{n-2} + 7x_{n-3} - 4x_{n-4} + x_{n-5}$  (6)
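For reference, a sketch of this fixed FIR baseline generating first-stage residues might look as follows (illustrative Python; only the fixed predictor of (6) is shown, whereas the NLMS predictor would adapt its coefficients sample by sample):

```python
import numpy as np

# Fixed 5th-order FIR predictor of (6): X_hat_n is a linear combination
# of the five previous samples x[n-1] ... x[n-5].
FIR_COEFFS = np.array([4.0, -7.0, 7.0, -4.0, 1.0])

def fir_residues(x):
    """First-stage residues r_n = x_n - round(X_hat_n) for the FIR baseline."""
    x = np.asarray(x, dtype=float)
    r = x.copy()                                   # first 5 samples are passed through
    for n in range(5, len(x)):
        x_hat = FIR_COEFFS @ x[n - 5:n][::-1]      # x[n-1], x[n-2], ..., x[n-5]
        r[n] = x[n] - round(x_hat)
    return r

print(fir_residues([1, 2, 3, 4, 5, 6, 7, 8]))      # a linear ramp is predicted exactly
```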

The compression performance of these linear predictors and of the GRNN models (for block size 50) in the single- and two-stage schemes on the test data files is given in Fig. 7. All three GRNN models perform significantly better than the linear models in both the single- and two-stage schemes. To compare against other neural networks, a 2nd-order multi-layer feedforward network (MLFN) and a 5th-order recurrent Elman network (EN) were also implemented, and their results are incorporated in Fig. 7. The MLFN was set up with a hidden linear node and a sigmoidal output node. The recurrent network was set up with the same activation functions, but with two hidden nodes. Training was undertaken with S_B of 1000 and 50 for the MLFN and EN, respectively. The GRNN combinations were found to achieve better results than the EN, although the well-known MLFN achieved the highest CR of all the schemes compared.

Fig. 7 : Best average CR achieved by the linear and neural models in the single- and two-stage (predictor, predictor-AC, predictor-HC) schemes, for the FIR, NLMS, GRNNP, GRNNA, GRNNE, MLFN and EN predictors

6 Conclusions

The GRNN generally requires more PEs than standard feedforward networks (such as the MLFN), but can be designed in a fraction of the time taken to train the other networks. This gives it an advantage in real-time applications. In order to minimise complexity and processing requirements, only small networks were implemented. All results quoted are simulation based, and are given in terms of the compression ratios achieved for a number of telemetry data files. Altogether, three different approaches to using the GRNN in nine schemes are discussed, in both single- and two-stage implementations.

The GRNNP functions as a predictor, using a number of past values to predict the current value. Iterative training is done by redesigning the network a number of times within a predefined training window (20% of the training block, or 500 samples in the SB scheme). This approach is slower than the others, but was capable of producing the highest compression performance with the telemetry data. The GRNNA functions as an approximator, sampling the entire training block at regular intervals. In order to improve the performance of this approach, more PEs and/or a function analyser are required to identify significant patterns in the inputs. A large buffer of size S_B is required, as the entire training block is buffered during the design stage; one-shot design, however, allows the network to be designed very quickly. The GRNNE estimates the input by projecting the approximation function derived using the training window across the training block. It uses a smaller buffer (0.2 S_B) and can be designed quickly. The performance of this approach was lower than that of the others, since the stretched approximation function was a weak representation of the input pattern of the entire block.

All three GRNN models were shown to achieve better compression performance than existing schemes using the FIR and NLMS linear filters, as well as certain neural network algorithms. It is also shown that the choice of second-stage encoder is important in the two-stage schemes, as data expansion may occur if the encoder chosen is unsuited to compressing certain patterns in the first-stage residue stream.

References:
[1] J.W. McCoy, N. Magotra and S. Stearns, Lossless Predictive Coding, IEEE Midwest Symposium on Circuits and Systems, 1995.
[2] R. Logeswaran and C. Eswaran, Lossless Data Compression using a Recurrent Neural Network, International Conference on Signal Processing Applications and Technology, November.
[3] R. Logeswaran and C. Eswaran, Neural Network Based Lossless Coding Schemes for Telemetry Data, Proceedings of the IEEE International Geoscience and Remote Sensing Symposium 1999, Vol. 4, 1999.
[4] T. Poggio and F. Girosi, Networks for Approximation and Learning, Proceedings of the IEEE, Vol. 78, No. 9, September 1990.
[5] S. Chen, C.F.N. Cowan and P.M. Grant, Orthogonal Least Squares Learning Algorithm for Radial Basis Function Networks, IEEE Transactions on Neural Networks, Vol. 2, No. 2, March 1991.
[6] P.D. Wasserman, Advanced Methods in Neural Computing, Van Nostrand Reinhold, New York.
[7] R.O. Duda and P.E. Hart, Pattern Classification and Scene Analysis, Wiley, New York.
[8] D. Nguyen and B. Widrow, Improving the Learning Speed of 2-layer Neural Networks by Choosing Initial Values of the Adaptive Weights, International Joint Conference on Neural Networks, Vol. 3, July 1990.
[9] D.E. Rumelhart, G.E. Hinton and R.J. Williams, Learning Internal Representations by Error Propagation, in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, MIT Press, Cambridge, Massachusetts.
[10] R. Logeswaran and C. Eswaran, Effect of Encoders on the Performance of Lossless Two-stage Data Compression, IEE Electronics Letters, Vol. 35, No. 18, September 1999.
