Development of Neural Networks for Noise Reduction


The International Arab Journal of Information Technology, Vol. 7, No. 3, July 2010

Lubna Badr, Faculty of Engineering, Philadelphia University, Jordan

Abstract: This paper describes the development of neural network models for noise reduction. The networks are used to enhance the performance of modeling captured signals by reducing the effect of noise. Both recurrent and multi-layer backpropagation neural network models are examined and compared with different training algorithms. The paper illustrates the effect of the training algorithm and the network architecture on neural network performance for a given application.

Keywords: Noise reduction, recurrent neural networks, multi-layer backpropagation.

Received January 3, 2009; accepted February 5, 2009.

1. Introduction

In physical systems, transmitted signals are usually corrupted, partially or sometimes almost completely, by additive noise from the transmitter, channel, and receiver. The approach investigated in this work is to treat noise reduction as an essential step in improving the estimation process of image reconstruction from the captured signal. Noise reduction is considered a continuous mapping of noisy input data to noise-free output data. The resulting enhanced signal can be applied to the holographic imaging process and improves the performance of the estimated model.

Artificial Neural Networks (ANNs) are finding increasing use in noise reduction problems [1, 2, 3, 4, 7, 8, 11, 13, 16, 17], and the main design goal of these Neural Networks (NNs) is to obtain a good approximation of some input-output mapping. In addition to obtaining a conventional approximation, NNs are expected to generalize from the given training data. Generalization means using the information the NN learned during the training phase to synthesize similar, but not identical, input-output mappings [12].

In this paper, two different NN architectures are employed: Recurrent Neural Networks (RNNs) and Multi-Layer Neural Networks (MLNNs). Both networks are trained with five training algorithms. The training functions used are: gradient descent backpropagation (traingd), gradient descent with momentum backpropagation (traingdm), gradient descent with adaptive learning rate backpropagation (traingda), gradient descent with momentum and adaptive learning rate backpropagation (traingdx), and Levenberg-Marquardt backpropagation (trainlm). The designed NNs are trained with input sequences that are assumed to be a composition of the desired signal plus additive white Gaussian noise. The networks are expected to learn the noisy training data with the corresponding desired output and generalize the model.

This research is an attempt to employ ANNs to enhance the measured corrupted signal and reduce the noise. The main contributions include the following:

- The input training sequences to the designed NNs are assumed to be a composition of the desired signal plus additive white Gaussian noise. This assumption speeds up the learning process and improves the approximation of the desired model [5].
- The development and comparison of NN architectures for use in noise reduction applications.
- A comparison of modeling performance using multi-layer and recurrent NNs.
- An examination of the relationship between training performance, training speed, and the training algorithm used for a given NN architecture.

2. Artificial Neural Networks

There are two main phases in the operation of an ANN: learning and testing. Learning is the process of adapting or modifying the NN weights in response to the training input patterns presented at the input layer. How the weights adapt in response to a learning example is controlled by a training algorithm. Testing is the application mode, in which the network processes a test input pattern presented at its input layer and creates a response at the output layer.

Designing an ANN for a given application requires determining the NN architecture, the optimal size of the network (the total number of layers, the number of hidden units in the middle layers, and the number of units in the input and output layers) in terms of accuracy on a test set, and the training algorithm used during the learning phase.
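To make these two phases concrete, the following is a minimal sketch of a train/test cycle in MATLAB using the classic Neural Network Toolbox API of the paper's era (newff, train, sim). The data, input range, layer sizes, and training parameters here are illustrative placeholders, not the paper's actual experimental values.

```matlab
% Minimal learning/testing sketch (classic MATLAB Neural Network Toolbox).
% All data and parameter values below are illustrative assumptions.
p = rand(1, 200);                        % hypothetical training inputs
t = sin(2*pi*(1:200)/50);                % hypothetical desired outputs

% Learning phase: the training function adapts the weights.
net = newff(minmax(p), [7 3 1], {'tansig','tansig','purelin'}, 'trainlm');
net.trainParam.epochs = 1000;            % maximum number of epochs
net.trainParam.goal   = 1e-3;            % assumed MSE goal
net = train(net, p, t);

% Testing phase: the trained network responds to new input patterns.
pTest = rand(1, 50);
yTest = sim(net, pTest);
```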

Two types of neural networks are used to perform the required extraction of knowledge from a noisy training set and achieve better signal enhancement: the RNN and the MLNN. The architectures of both networks are presented in the following sections.

2.1. Architecture

2.1.1. Recurrent Neural Network Architecture

The designed RNN is an Elman network. Elman networks are two-layer backpropagation networks with the addition of a feedback connection from the output of the hidden layer to its input. This feedback path allows Elman networks to learn to recognize and generate temporal patterns as well as spatial patterns [6]. A two-layer Elman network is shown in Figure 1.

Figure 1. The architecture of the Elman network [10].

The constructed Elman network has tansig neurons in its hidden (recurrent) layer and purelin neurons in its output layer, shown in Figures 2 and 3, respectively. The numbers of neurons in the hidden and output layers are 10 and 1, respectively. The hidden units and the output unit also have biases. These bias terms act like weights on connections from units whose output is always 1; the bias gives the network an extra variable, so a network with biases is expected to be more powerful than one without [10]. This combination of transfer functions is special in that two-layer networks with them can approximate any function (with a finite number of discontinuities) with arbitrary accuracy. The only requirement is that the hidden layer has enough neurons; more hidden neurons are needed as the complexity of the fitted function increases.

Note that the Elman network differs from conventional two-layer networks in that the first layer has a recurrent connection. The delay in this connection stores values from the previous time step, which can be used in the current time step. Thus, even if two Elman networks with the same weights and biases are given identical inputs at a given time step, their outputs can be different because of different feedback states [5].

Figure 2. Tansig transfer function.

Figure 3. Purelin transfer function.

The Elman network performs the following:

1. The input units receive the first input.
2. Both the input units and the context units (a group of units that receives feedback signals from the previous time step [8]) activate the hidden units.
3. The hidden units also feed back to activate the context units (copying the content of the hidden units).
4. The output unit is compared with a teacher input (the desired output), and backpropagation of the error is used to incrementally adjust the connection strengths.

The recurrent connections allow the network's hidden units to see their own previous output, so that subsequent behaviour can be shaped by previous responses; these recurrent connections are what give the network memory. The context units are also "hidden" in the sense that they interact exclusively with other nodes internal to the network, not with the outside world [7].

2.1.2. Multilayer Neural Network Architecture

The MLNN is designed with three layers, as shown in Figure 4. The feedforward network has two hidden layers of tansig neurons (f1 and f2) followed by an output layer of purelin neurons (f3). The numbers of neurons in the first and second hidden layers are 7 and 3, respectively. The hidden units and the output unit also have biases, which act like weights on connections from units whose output is always 1. Multiple layers of neurons with nonlinear transfer functions allow the network to learn both nonlinear and linear relationships between input and output vectors. The backpropagation model is multilayered since it has distinct layers. The neurons within each layer are connected to the neurons of the adjacent layers through directed edges; there are no connections among the neurons within the same layer.
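As a concrete companion to Sections 2.1.1 and 2.1.2, the sketch below constructs both architectures with the classic toolbox constructors newelm and newff. The input range and the Elman hidden-layer size of 10 are assumptions for illustration; the toolbox adds the bias terms automatically.

```matlab
% Sketch of the two architectures of Section 2.1 (classic MATLAB
% Neural Network Toolbox constructors). Input range and the Elman
% hidden size are illustrative assumptions.
inRange = [-1 1];                                   % assumed input range

% 2.1.1: Elman RNN with a recurrent tansig hidden layer, purelin output.
rnn  = newelm(inRange, [10 1], {'tansig','purelin'});

% 2.1.2: MLNN with tansig hidden layers of 7 and 3 neurons, purelin output.
mlnn = newff(inRange, [7 3 1], {'tansig','tansig','purelin'});
```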

Only the direction of information flow for the feedforward phase of operation is shown in Figure 4; during the backpropagation phase of learning, signals are sent in the reverse direction.

Figure 4. The architecture of the three-layer neural network [10].

The three-layer neural network performs the following. During the feedforward phase, the input unit receives an input signal and broadcasts it to each of the hidden units in the first hidden layer. Each of these hidden units computes its activation and sends its signal to the hidden units in the second layer. Each hidden unit in the second layer computes its activation and sends its signal to the output units. Finally, the output unit computes its activation to form the response of the net to the given input pattern. During the training phase, each output unit compares its computed activation with its target value to determine the error associated with that unit. The error is propagated from the output layer back to all units in the next lower layer and is used to update the weights between the output layer and the second hidden layer. The error computed in the second hidden layer is then distributed to all units in the previous layer and used to update the weights between the second and first hidden layers; the error computed in the first hidden layer is used to update the weights between the first hidden layer and the input layer.

2.2. Training Neural Networks

The neural networks are trained using the backpropagation algorithm. There are several variations of the backpropagation training algorithm; these variations form the basis of the test procedures used to evaluate the overall most effective way to model the system. The distinction between training and generalization accuracy lies in the test patterns adopted. Good training accuracy can be achieved by forming complex decision boundaries, which in turn requires a large network. Good generalization accuracy, however, requires not pushing too hard on training accuracy: overtraining may result in degraded generalization, which can occur if too many hidden units are used [12].

A number of network architectures were designed and tested with different noisy data samples. The aim was to obtain a good training process, to avoid the overtraining problem, and to reach a better Mean Square Error (MSE) goal during training. It has been shown [10] that adding random noise to the desired signal during training can improve the generalization of the network and can prevent the learning process from being trapped in a local minimum.

Assume x_k(t) denotes the k-th element of an input vector, and y_i(t) is the i-th output of the output layer. Let d_i(t) denote the desired response for output neuron i at time t, where t is the discrete time index. The error signal e_i(t) is defined as the difference between the target response d_i(t) and the actual response y_i(t):

e_i(t) = d_i(t) - y_i(t)    (1)

The aim of learning is to minimize a cost function based on the error signal e_i(t) with respect to the network parameters (weights), such that the actual response of each output neuron approaches the target response [6]. A commonly used criterion is the MSE, defined as the mean-square value of the sum of squared errors:

J = E[ \sum_i e_i^2(t) ]    (2)

  = E[ \sum_i ( d_i(t) - y_i(t) )^2 ]    (3)

where E is the statistical expectation operator and the summation runs over all neurons of the output layer.

Usually the adaptation of the weights is performed using the desired signal d_i(t) only. In [6] it is stated that a new signal d_i(t) + n_i(t) can be used as the desired signal for output neuron i instead of the original desired signal d_i(t), where n_i(t) is a noise term. This noise term is assumed to be white Gaussian noise, independent of both the input signals x_k(t) and the desired signals d_i(t). With the new desired signal, the MSE of equation (3) can be written as:

J = E[ \sum_i ( d_i(t) + n_i(t) - y_i(t) )^2 ]    (4)

It is shown in [6] that equation (4) is equal to:

J = E[ \sum_i ( y_i - E\{ d_i + n_i \mid x(t) \} )^2 ] + E[ \sum_i \mathrm{var}( (d_i + n_i) \mid x(t) ) ]    (5)

where \mid denotes conditioning and var is an abbreviation of variance. The second term on the right-hand side of equation (5) contributes to the total error J as learning progresses, but it does not affect the final values of the weights, because it is not a function of the network weights; the first term decides the optimal values of the weights [6].
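The noisy-desired-signal idea of equation (4) is easy to state in code: zero-mean white Gaussian noise, independent of the input, is added to the clean target before training. In this sketch the clean target d, the noise level sigma, and the network configuration are all illustrative assumptions.

```matlab
% Sketch of training with a noisy desired signal d(t) + n(t), eq. (4).
% The target, noise level, and network configuration are placeholders.
t      = 1:400;                          % discrete time index
d      = cos(2*pi*t/100);                % hypothetical clean desired signal
sigma  = 0.1;                            % assumed noise standard deviation
n      = sigma * randn(size(d));         % zero-mean white Gaussian noise
dNoisy = d + n;                          % new desired signal, eq. (4)

x   = d + 0.3*randn(size(d));            % hypothetical noisy input sequence
net = newff(minmax(x), [7 3 1], {'tansig','tansig','purelin'}, 'trainlm');
net = train(net, x, dNoisy);             % per eqs. (5)-(6), the added noise
                                         % does not change the optimal weights
```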

Since the noise is zero-mean and independent of both the desired and the input signals,

E\{ d_i + n_i \mid x(t) \} = E\{ d_i \mid x(t) \}    (6)

It is clear from equations (5) and (6) that the final weight values can be determined without the noise appearing in the equation. The training of the NN was made to follow the model described by the following equation for the holographic imaging process:

d(x) = A_r^2 + S^2 + 2 A_r S \cos\theta    (7)

where \theta is the phase difference between A_r and S, S is the signal reflected from the object under imaging, and A_r is the reference signal required by in-line holography.

3. Experimental Results

This section presents the experimental results obtained using the RNN and MLNN architectures on the signal recorded from a test object. The test object consists of two steel rods of .5 cm diameter; the separation between the two rods was 7 cm, and the distance Z_o between the object and recording planes was 90 cm. The object was covered by two different opaque materials, a sheet of paper and Styrofoam, and was illuminated with ultrasound waves using ultrasonic transmitting transducers [1]. The signals reflected from the object are recorded and then enhanced; that is, the neural network is used to increase the SNR of the holographic data and decrease environmental effects such as the relatively high background caused by the opaque material.

Designing a neural network architecture for a given application requires determining the optimal size of the network in terms of accuracy on a test set, usually by increasing its size until there is no longer a significant decrease in error. The analysis was performed with programs implemented in MATLAB. Both networks, the RNN and the MLNN, performed the required reduction of the noise in the captured signal; the MLNN performs better than the RNN in terms of both run time and MSE.

Tables 1 and 2 summarize the results of NN training by comparing the elapsed time, the number of epochs, and the MSE of the RNN and MLNN for the five training algorithms traingd, traingdm, traingda, trainlm, and traingdx [5]. All of these algorithms use the gradient of the performance function to determine how to adjust the weights to minimize the performance measure (i.e., the MSE). As can be seen in Tables 1 and 2, the MLNN trained with the trainlm backpropagation function yields the fastest implementation (90 epochs) with the best performance (MSE equal to 0.085). The trainlm function works according to the Levenberg-Marquardt optimization technique [5]. Figure 5 shows the MLNN performance during training with the trainlm algorithm.

Table 1. Elapsed time and epochs of the RNN and MLNN with five training algorithms.

Algorithm   RNN Elapsed Time (sec)   RNN Epochs   MLNN Elapsed Time (sec)   MLNN Epochs
Traingd     7.563                    800          4.047                     7000
Traingdm    8.38                     000          75.35                     0000
Traingda    8.656                    000          7.83                      000
Trainlm     4.78                     500          .484                      90
Traingdx    4.563                    000          .593                      3500

Table 2. Mean square error of the RNN and MLNN with five training algorithms.

Algorithm   RNN MSE      MLNN MSE
Traingd     0.05797      0.05354
Traingdm    0.05767      0.04463
Traingda    0.06874      0.05736
Trainlm     0.09         0.085
Traingdx    0.050799     0.06875

Extensive testing was performed to improve the MLNN performance. Table 3 shows the MLNN performance for two experimental cases: sheet-of-paper and Styrofoam isolating materials.
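The comparison reported in Tables 1 and 2 can be reproduced in outline with a loop such as the one below, which swaps the training function and records the elapsed time, epoch count, and final MSE. The data generation loosely follows the equation (7) model; the amplitudes, phase profile, noise level, and stopping criteria are arbitrary illustrative choices.

```matlab
% Sketch of the five-algorithm comparison behind Tables 1 and 2.
x     = linspace(0, 1, 500);
Ar    = 1.0;  S = 0.5;                       % assumed reference/object amplitudes
theta = 2*pi*20*x;                           % assumed phase-difference profile
d     = Ar^2 + S^2 + 2*Ar*S*cos(theta);      % desired model, eq. (7)
p     = d + 0.1*randn(size(d));              % captured signal: model plus noise

algs = {'traingd','traingdm','traingda','trainlm','traingdx'};
for k = 1:numel(algs)
    net = newff(minmax(p), [7 3 1], {'tansig','tansig','purelin'}, algs{k});
    net.trainParam.epochs = 10000;           % generous epoch limit
    net.trainParam.goal   = 1e-2;            % assumed MSE goal
    tic;
    [net, tr] = train(net, p, d);            % train toward the clean model
    fprintf('%-9s  time %8.3f s  epochs %6d  MSE %.5f\n', ...
            algs{k}, toc, tr.epoch(end), tr.perf(end));
end
```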
The designed MLNN was able to decrease the effect of the concealing medium and the noise in the captured signal. Figure 6 shows the behavior of the neural network output after applying the captured signal h(n) in the sheet-of-paper and Styrofoam cases, respectively.

Figure 5. Multilayer neural network (MLNN) performance during the training process with the trainlm training algorithm.

4. Conclusions

In this paper, two different neural networks have been compared for minimizing the effect of noise in the model of a concealed object and increasing the SNR of holographic data. Experimental results show that using a neural network to enhance the captured signal can improve the tracking of the model parameters. The RNN and MLNN architectures have been studied and tested to obtain the optimal architecture in terms of the number of hidden layers and the number of neurons in each layer. The results show that, during the pre-processing stage, the RNN and MLNN were able to enhance the recorded test signal and produce an output that follows the desired model with minimum MSE (0.085). The effect of adding white Gaussian noise to the desired signal when training the neural network with backpropagation has also been discussed. Both analytically and experimentally, it has been demonstrated that the additive noise improves the network's generalization on the tested patterns and the training trajectory. Similar results were obtained when training both the RNN and the MLNN.

Table 3. Multilayer neural network performance for two experimental cases: sheet-of-paper and Styrofoam isolating materials.

Training Algorithm   MSE (Sheet of Paper)   MSE (Styrofoam)
Traingd              0.05354                0.77533
Traingdm             0.04463                0.70649
Traingda             0.05736                0.5886
Trainlm              0.085                  0.49683
Traingdx             0.06875                0.64845

Figure 6. Multilayer neural network (MLNN) output after applying the captured signal h(n) in two cases: sheet-of-paper and Styrofoam opaque materials.

References

[1] Badr L. and Al-Azzo M., Burg-Neural Network Based Holographic Source Localization, WSEAS Transactions on Signal Processing, vol. 2, no. 4, 2006.
[2] Badr L. and Al-Azzo M., Modelling of Long Wavelength Detection of Objects Using Elman Network Modified Covariance Combination, International Arab Journal of Information Technology (IAJIT), vol. 5, no. 3, 2008.
[3] Brueckmann R., Scheidig A., and Gross H., Adaptive Noise Reduction and Voice Activity Detection for Improved Verbal Human-Robot Interaction Using Binaural Data, in Proceedings of the IEEE International Conference on Robotics and Automation, Italy, 2007.
[4] Wang C. and Principe J., Training Neural Networks with Additive Noise in the Desired Signal, IEEE Transactions on Neural Networks, vol. 10, no. 6, pp. 1511-1517, 1999.
[5] Demuth H. and Beale M., Neural Network Toolbox for Use with MATLAB, User's Guide, The MathWorks, Massachusetts, 2001.
[6] Dorronsoro J., López V., Cruz C., and Sigüenza J., Autoassociative Neural Networks and Noise Filtering, IEEE Transactions on Signal Processing, vol. 51, no. 5, pp. 1431-1438, 2003.
[7] Elman J., Finding Structure in Time, Cognitive Science, vol. 14, no. 2, pp. 179-211, 1990.
[8] Fausett L., Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, Prentice Hall International, New Jersey, 1994.
[9] Giles C., Lawrence S., and Tsoi A., Noisy Time Series Prediction Using a Recurrent Neural Network and Grammatical Inference, Machine Learning, vol. 44, no. 1-2, pp. 161-183, 2001.
[10] Hagan M., Demuth H., and Beale M., Neural Network Design, PWS Publishing Company and Thomson Asia, 2002.
[11] Khairnar D., Merchant S., and Desai U., Radar Signal Detection in Non-Gaussian Noise Using RBF Neural Network, Journal of Computers, vol. 3, no. 1, pp. 32-39, 2008.
[12] Kung S., Digital Neural Networks, Prentice Hall, 1993.
[13] Mastriani M. and Giraldez A., Neural Shrinkage for Wavelet-Based SAR Despeckling, International Journal of Intelligent Technology, vol. 1, no. 3, 2006.
[14] Parveen S. and Green P., Speech Enhancement with Missing Data Techniques Using Recurrent Neural Networks, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Canada, pp. 733-736, 2004.
[15] Radonja P., Neural Networks Based Model of a Highly Nonlinear Process, in Proceedings of the IX Telecommunications Forum TELFOR 2001, Belgrade, 2001.
[16] Yoshimura H., Shimizu T., Matumura T., Kimoto M., and Isu N., Adaptive Noise Reduction Filter for Speech Using Cascaded Sandglass-Type Neural Network, in Proceedings of the IEEE International Conference on Robotics and Automation, Italy, 2007.
[17] Zhang X., Thresholding Neural Network for Adaptive Noise Reduction, IEEE Transactions on Neural Networks, vol. 12, no. 3, pp. 567-584, 2001.

Lubna Badr received her BSc and MSc degrees in computer and control engineering from the University of Technology, Baghdad, in 1994 and 1996, respectively, and her PhD degree in computer engineering from the University of Technology, Baghdad, Iraq, in 1999. Her research interests include neural networks and fuzzy logic, knowledge acquisition systems, and embedded system design. She has one book and more than 5 publications in reputed journals and conferences.