Breast Cancer Detection using Recursive Least Square and Modified Radial Basis Functional Neural Network


M. R. Senapati (a), P. K. Routray (b), P. K. Dash (c)

(a) Department of Computer Science and Engineering, Gandhi Engineering College, Biju Patnaik University of Technology, India. manas_senapati@sify.com
(b) Department of Computer Science and Engineering, N M Institute of Engineering and Technology, Biju Patnaik University of Technology, India. pravat.routray@gmail.com
(c) S O A University, India. pkdash_india@yahoo.com

Abstract -- A new approach for classification is presented in this paper. The proposed technique, the Modified Radial Basis Functional Neural Network (MRBFNN), consists of assigning weights between the input layer and the hidden layer of the Radial Basis Functional Neural Network (RBFNN). The centers of the MRBFNN are initialized using Particle Swarm Optimization (PSO), the variances and centers are updated using back propagation, and both sets of weights are updated using Recursive Least Square (RLS). Our simulation is carried out on the Wisconsin Breast Cancer (WBC) data set. The results are compared with an RBFNN whose variances and centers are updated using back propagation and whose weights are updated using Recursive Least Square (RLS) and the Kalman Filter. It is found that the proposed method provides more accurate results and better classification.

Keywords: Radial Basis Functional Neural Networks (RBFNN), Wisconsin Breast Cancer (WBC), Pattern Recognition, Gradient Descent Method, Recursive Least Square, Kalman Filter.

1. INTRODUCTION

A Radial Basis Functional Neural Network (RBFNN) is trained to perform a mapping from an m-dimensional input space to an n-dimensional output space. RBFNNs can be used for discrete pattern classification, function approximation, signal processing, control, or any other application which requires a mapping from an input space to an output space. An RBFNN consists of the m-dimensional input x being passed directly to a hidden layer. Suppose there are c neurons in the hidden layer. Each of the c neurons in the hidden layer applies an activation function which is a function of the Euclidean distance between the input and an m-dimensional prototype vector. Each hidden neuron contains its own prototype vector as a parameter. The output of each hidden neuron is then weighted and passed to the output layer, so the outputs of the network consist of sums of the weighted hidden layer neurons. Figure 1 shows a schematic form of an RBFNN. It can be seen from this basic architecture that the design of an RBFNN requires several decisions, including the following:

1. How many neurons will reside in the hidden layer (i.e., what is the value of the integer c)?
2. What are the values of the prototypes (i.e., what are the values of the v vectors)?
3. What function will be used at the hidden units (i.e., what is the function g(·))?
4. What weights will be applied between the hidden layer and the output layer?

The performance of an RBFNN depends on the number and location (in the input space) of the centers, the shape of the RBFNN functions at the hidden neurons, and the method used for determining the network weights. Some researchers have trained RBFNN networks by selecting the centers randomly from the training data [1]. Some have used unsupervised procedures (such as the k-means algorithm) for selecting the RBFNN centers, while others have used supervised procedures for selecting the RBFNN centers [2]. Several training methods separate the tasks of prototype determination and weight optimization for classification and rule generation.
This trend probably arose because of the quick training that could result from the separation of the two tasks. In fact, one of the primary contributors to the popularity of RBFNN networks was probably their fast training times as compared to gradient descent training (including back propagation).

As shown in Figure 1, once the prototypes are fixed and the hidden layer function g(·) is known, the network is linear in the weight parameters w. At that point, training the network becomes a quick and easy task that can be solved via linear least squares. (This is similar to the popularity of the optimal interpolative net, which is due in large part to the efficient non-iterative learning algorithms that are available [3, 4].) Training methods that separate the tasks of prototype determination and weight optimization often do not use the input-output data from the training set for the selection of the prototypes. For instance, the random selection method and the k-means algorithm result in prototypes that are completely independent of the input-output data from the training set. Although this results in fast training, it clearly does not take full advantage of the information contained in the training set.

Fig. 1: Radial Basis Functional Network (an input layer of m neurons, a hidden layer of c units g(‖v - x‖), and an output layer of n neurons)

Gradient descent training of RBFNN networks has proven to be much more effective than more conventional methods [2]. However, gradient descent training can be computationally expensive. This paper extends the results of [2] and formulates a training method for RBFNNs based on Recursive Least Square. This new method proves to be quicker than gradient descent while still providing performance at the same level of effectiveness.

Training a neural network is, in general, a challenging nonlinear optimization problem. Various derivative-based methods have been used to train neural networks, including gradient descent [2], Kalman filtering [5, 6], and the well-known back-propagation [7]. Derivative-free methods, including genetic programming [8-10] and simulated annealing [11], have also been used to train neural networks. Derivative-free methods have the advantage that they do not require the derivative of the objective function with respect to the neural network parameters. They are more robust than derivative-based methods with respect to finding a global minimum and with respect to their applicability to a wide range of objective functions and neural network architectures. However, they typically tend to converge more slowly than derivative-based methods. Derivative-based methods have the advantage of fast convergence, but they tend to converge to local minima. In addition, due to their dependence on analytical derivatives, they are limited to specific objective functions and specific types of neural network architectures.

2. INTERPRETATION OF RADIAL BASIS FUNCTIONAL NEURAL NETWORK

The multi-layered feed-forward network (MFN) is the most widely used neural network model for pattern classification applications. This is because the topology of the MFN allows it to generate internal representations tailored to classify input regions that may be either disjoint or intersecting. The hidden layer nodes in the MFN can form hyperplanes to partition the input space into various regions, and the output nodes can select and combine the regions that belong to the same class. Back propagation (BP) is the most widely used training algorithm for MFNs. Recently, researchers have begun to examine the use of Radial Basis Function neural networks (RBFNN) for pattern recognition problems due to a number of drawbacks of BP-trained networks. Although a BP network produces decision surfaces that effectively separate training examples of different classes, this does not necessarily result in the most plausible or robust classifier.
The decision surfaces of BP networks may not take on any intuitive shapes, because regions of the input space not occupied by training data are classified arbitrarily rather than according to proximity to the training data. In addition, BP networks have no mechanism to detect that a case to be classified has fallen into a region with no training data. This is a serious drawback when the system operates within a wide range of conditions.

The RBFNN consists of an input layer made up of source nodes and a hidden layer of sufficiently high dimension. The output layer supplies the response of the network to the activation patterns applied to the input layer. The nodes within each layer are fully connected to the previous layer, as shown in Figure 1. The input variables are each assigned to a node in the input layer and pass directly to the hidden layer without weights. The hidden nodes, or units, contain the radial basis functions, represented by the bell-shaped curves in the hidden nodes of Fig. 1.

2.1 RBFNN Algorithm

This section describes how we used an RBFNN to classify the data sets. The RBFNN used here has an input layer, a hidden layer consisting of Gaussian node functions, an output layer, and a set of weights connecting the hidden layer and the output layer. We denote by x the input vector to the network, where x = (x_1, x_2, x_3, ..., x_D) and D is the embedding dimension. We call o the network output vector, where o = (o_1, o_2, o_3, ..., o_n)^T and n is the number of output nodes. We have P training patterns. The RBFNN classification problem is to approximate the mapping from the set of inputs

x = {x(1), x(2), ..., x(P)}    (1)

to the set of outputs

o = {o(1), o(2), o(3), ..., o(P)}.    (2)

For an input vector x(t), the output of the j-th output node produced by the RBFNN is given by

o_j(t) = Σ_{i=1}^{mtot} w_ji φ_i(t) = Σ_{i=1}^{mtot} w_ji exp(-‖x(t) - c_i‖² / σ_i²)    (3)

where c_i is the center of the i-th hidden node, σ_i is the width of the i-th center, and mtot is the total number of hidden nodes. Using vector notation, let φ(t) = (φ_1(t), φ_2(t), ..., φ_mtot(t)) and w_j = (w_j1, w_j2, ..., w_jmtot); the RBFNN output can then be written as o_j = w_j φ^T(t). The error of the network at the j-th output is calculated as e = d_j - o_j, where d_j is the desired output.

The RBFNN classifier contains four sets of parameters that have to be learned from the examples: the centers c_i(t), the number of centers mtot, the variances σ_i, and the weights w_ji. We denote all of the RBFNN's centers by C_whole. In our implementation of the RBFNN, classes do not share centers; each class's set of centers is trained with a separate PSO clustering run. Once the RBFNN centers are initialized by PSO, the weights are updated according to

w_j(t+1) = w_j(t) + η e φ_j(t)

The centers are then updated according to

c_j(t+1) = c_j(t) + η e w_j (x - c_j) / σ_j²    (4)

and the width associated with the k-th center is adjusted as

σ_k(t) = (1/N_a) Σ_j ‖c_k(t) - c_j(t)‖    (5)

where N_a is the number of neighboring centers over which the sum is taken.

There are several reasons for using an RBFNN in our classification problem. First, many neural networks require nonlinear optimization for training. Second, the internal representation of the training data in an RBFNN is intuitive: each RBFNN center approximates a cluster of training data vectors that are close to each other in Euclidean space. When a vector is input to the RBFNN, the center nearest to that vector becomes strongly activated, in turn activating certain output nodes. The hypothesis space implemented by these learning machines is constituted by functions of the form

f(x, w, v) = Σ_{k=1}^{m} w_k φ_k(x, v_k) + w_0    (6)

The nonlinear activation function φ_k expresses the similarity between any input pattern x and the center v_k by means of a distance measure. Each function φ_k defines a region in the input space (a receptive field) on which the neuron produces an appreciable activation value. In the common case where the Gaussian function is used, the center c_k of the function φ_k defines the prototype of input cluster k, and the variance σ_k the size of the covered region in the input space.

The rule extraction method for the RBFNN derives descriptions in the form of ellipsoids. Initially, a partition of the input space is made by assigning each input pattern to its closest RBFNN center according to the Euclidean distance function; a pattern assigned to its closest center is thereby assigned to the RBFNN node that gives the maximum activation value for that pattern. From these partitions the ellipsoids are constructed. Next, a class label is assigned to each center of the RBFNN units; the output value of the RBFNN network for each center is used to determine this class label. Then, for each node, an ellipsoid is constructed with the associated partition data. Once the ellipsoids are determined, they are translated into rules. This procedure generates one rule per node.
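To make equation (3) concrete, here is a minimal sketch of the Gaussian forward pass in Python/NumPy (standing in for the authors' MATLAB implementation); the function and array names are ours, not from the paper.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of equation (3).

    x       : (m,)        input vector x(t)
    centers : (mtot, m)   prototype vectors c_i
    widths  : (mtot,)     widths sigma_i
    weights : (n, mtot)   output weights w_ji
    """
    sq_dist = np.sum((centers - x) ** 2, axis=1)   # ||x - c_i||^2 per hidden node
    phi = np.exp(-sq_dist / widths ** 2)           # Gaussian activations phi_i
    o = weights @ phi                              # o_j = sum_i w_ji * phi_i
    return phi, o
```

The returned activation vector phi is the regressor that the weight, center, and width updates above (and the RLS update of Section 2.2) operate on.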
2.2 Modified Radial Basis Functional Neural Network

Fig. 2: Modified Radial Basis Functional Network (as Fig. 1, with an additional set of weights between the input layer of m neurons and the hidden layer)

The Modified Radial Basis Functional Neural Network is the same as the RBFNN, with the exception that weights are assigned between the neurons in the input layer and the neurons in the hidden layer (Fig. 2). The net input to the i-th neuron in the hidden layer is calculated as net_i = x_i w_i, and the output is given by equation (3), where i is the neuron number, x is the input to the network, and w_i is the weight between the input layer and the hidden layer. The centers are updated using equation (4) and the variances are updated using equation (5).
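The following sketch shows one reading of this modification, in which each input component is scaled by a trainable input-layer weight before the Gaussian layer; this vector form of net_i = x_i w_i is our interpretation, since the paper does not spell it out.

```python
import numpy as np

def mrbf_forward(x, w_in, centers, widths, w_out):
    """Forward pass of the MRBFNN (Section 2.2, one reading): a standard RBFNN
    whose input is first scaled component-wise by input-layer weights w_in."""
    net = w_in * x                            # weighted net input, shape (m,)
    sq_dist = np.sum((centers - net) ** 2, axis=1)
    phi = np.exp(-sq_dist / widths ** 2)      # Gaussian hidden units, eq. (3)
    return w_out @ phi                        # weighted sum at the output layer
```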
The weights between the input layer and the hidden layer, as well as those between the hidden layer and the output layer, of the classifier can be trained using the linear recursive least square (RLS) algorithm. RLS is employed here since it has a much faster rate of convergence compared to gradient search and least mean square (LMS) algorithms.

k(t) = P(t-1) φ^T(t) / (λ + φ(t) P(t-1) φ^T(t))    (7)

w_j(t) = w_j(t-1) + k(t) [d_j(t) - w_j(t-1) φ^T(t)]    (8)

P(t) = (1/λ) [P(t-1) - k(t) φ(t) P(t-1)]    (9)

where λ is a real number between 0 and 1, P(0) = a^-1 I, a is a small positive number, and w_j(0) = 0. A sketch of this update follows the step list below.

The computational steps involved in implementing the MRBFNN for classification are:

1. For each class c, initialize the centers using Particle Swarm Optimization: m_c = m_init (initialization);
2. Train the MRBFNN centers and spreads using error back propagation;
3. Train the MRBFNN weights (between the input layer and hidden layer, and between the hidden layer and output layer) using RLS;
4. Add new centers to the N_c classes with the highest output to get a new m, then go to step 2;
5. Use the RBFNN with the current m.

A fixed learning rate is used, and the centers and the weights are updated in every iteration, that is, with each new training input to the MRBFNN.
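A minimal sketch of the RLS update of equations (7)-(9), applied per training pattern to one output's weight vector; the forgetting factor lam and initializer a are free parameters (the paper reports no values), and the class name is ours. Per the paper, the same update is run for both sets of MRBFNN weights, with φ the corresponding regressor.

```python
import numpy as np

class RLSWeights:
    """Recursive least squares for one output's weight vector, eqs. (7)-(9)."""

    def __init__(self, dim, lam=0.99, a=0.01):
        self.lam = lam               # forgetting factor, 0 < lambda <= 1
        self.P = np.eye(dim) / a     # P(0) = a^-1 * I, a small and positive
        self.w = np.zeros(dim)       # w_j(0) = 0

    def update(self, phi, d):
        """phi: regressor (hidden activations); d: desired output d_j(t)."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)                        # eq. (7)
        self.w = self.w + k * (d - self.w @ phi)                  # eq. (8)
        self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam  # eq. (9)
        return self.w
```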
3. PARTICLE SWARM OPTIMIZATION

Particle Swarm Optimization (PSO) is a population-based stochastic search process modeled after the social behavior of a bird flock [12-15]. The algorithm maintains a population of particles, where each particle represents a potential solution to an optimization problem. In the context of PSO, a swarm refers to a number of potential solutions to the optimization problem, where each potential solution is referred to as a particle. The aim of PSO is to find the particle position that results in the best evaluation of a given fitness (objective) function. Each particle represents a position in N_d-dimensional space, and is "flown" through this multi-dimensional search space, adjusting its position toward both:

- the particle's best position found thus far, and
- the best position in the neighborhood of that particle.

Each particle i maintains the following information:

- x_i: the current position of the particle;
- v_i: the current velocity of the particle;
- y_i: the personal best position of the particle.

Using the above notation, a particle's position is adjusted according to

v_{i,k}(t+1) = w v_{i,k}(t) + c_1 r_{1,k}(t) (y_{i,k}(t) - x_{i,k}(t)) + c_2 r_{2,k}(t) (ŷ_k(t) - x_{i,k}(t))    (10)

x_i(t+1) = x_i(t) + v_i(t+1)    (11)

where w is the inertia weight, c_1 and c_2 are the acceleration constants, r_{1,k}(t), r_{2,k}(t) ~ U(0,1), and k = 1, ..., N_d. The velocity is thus calculated based on three contributions: (1) a fraction of the previous velocity, (2) the cognitive component, which is a function of the distance of the particle from its personal best position, and (3) the social component, which is a function of the distance of the particle from the best particle found thus far (i.e., the best of the personal bests). The personal best position of the particle is calculated as

y_i(t+1) = y_i(t)      if f(x_i(t+1)) >= f(y_i(t))
y_i(t+1) = x_i(t+1)    if f(x_i(t+1)) < f(y_i(t))    (12)

Two basic approaches to PSO exist, based on the interpretation of the neighborhood of particles. Equation (10) reflects the gbest version of PSO where, for each particle, the neighborhood is simply the entire swarm. The social component then causes particles to be drawn toward the best particle in the swarm.

In the lbest PSO model, the swarm is divided into overlapping neighborhoods, and the best particle of each neighborhood is determined. For the lbest PSO model, the social component of equation (10) changes to

c_2 r_{2,k}(t) (ŷ_{j,k}(t) - x_{i,k}(t))    (13)

where ŷ_j is the best particle in the neighborhood of the i-th particle. The PSO is usually executed with repeated application of equations (10) and (11) until a specified number of iterations has been exceeded. Alternatively, the algorithm can be terminated when the velocity updates are close to zero over a number of iterations.
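A compact sketch of the gbest PSO loop of equations (10)-(12) for a minimization problem; the inertia and acceleration values are common defaults from the PSO literature, not values reported in the paper.

```python
import numpy as np

def gbest_pso(f, n_particles, dim, iters, w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimize fitness f with gbest PSO, equations (10)-(12)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # particle positions x_i
    v = np.zeros((n_particles, dim))                # particle velocities v_i
    y = x.copy()                                    # personal bests y_i
    fy = np.apply_along_axis(f, 1, y)
    for _ in range(iters):
        g = y[np.argmin(fy)]                        # gbest: best personal best
        r1 = rng.random((n_particles, dim))         # r_1 ~ U(0, 1)
        r2 = rng.random((n_particles, dim))         # r_2 ~ U(0, 1)
        v = w * v + c1 * r1 * (y - x) + c2 * r2 * (g - x)    # eq. (10)
        x = x + v                                            # eq. (11)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < fy                          # eq. (12): keep improvements
        y[improved], fy[improved] = x[improved], fx[improved]
    return y[np.argmin(fy)]
```

For example, gbest_pso(lambda p: np.sum(p ** 2), 20, 5, 100) drives a 20-particle swarm toward the origin.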

3.1 PSO Clustering

In the context of clustering, a single particle represents N_c cluster centroid vectors. That is, each particle x_i is constructed as follows:

x_i = (m_i1, ..., m_ij, ..., m_iNc)    (14)

where m_ij refers to the j-th cluster centroid vector of the i-th particle in cluster C_ij. Therefore, a swarm represents a number of candidate clusterings for the current data vectors. Using the Euclidean distance

d(Z_p, m_j) = sqrt( Σ_{k=1}^{N_d} (Z_pk - m_jk)² )    (15)

where k subscripts the dimension, the fitness of a particle is easily measured as the quantization error

J_e = ( Σ_{j=1}^{N_c} [ Σ_{Z_p ∈ C_ij} d(Z_p, m_ij) / |C_ij| ] ) / N_c    (16)

where m_j = (Σ_{Z_p ∈ C_j} Z_p) / n_j, and n_j = |C_j| is the number of data vectors belonging to cluster C_j, i.e., the frequency of that cluster.

This section first presents the standard gbest PSO for clustering data into a given number of clusters, and then shows how the PSO algorithm can be used to improve the performance of the Radial Basis Functional Neural Network (RBFNN) for classification.

3.1.1 gbest PSO Clustering Algorithm

Using the standard gbest PSO, data vectors can be clustered as follows (a sketch follows this list):

1. Initialize each particle to contain N_c randomly selected cluster centroids.
2. For t = 1 to t_max do:
   a) for each particle i do:
   b) for each data vector Z_p:
      i) calculate the Euclidean distance d(Z_p, m_ij) to all cluster centroids C_ij;
      ii) assign Z_p to the cluster C_ij such that d(Z_p, m_ij) = min_{c=1,...,Nc} d(Z_p, m_ic);
      iii) calculate the fitness function using (16);
   c) update the global best and local best positions;
   d) update the cluster centroids using equations (10) and (11).

where t_max is the maximum number of iterations.
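The fitness of equations (14)-(16) can be sketched as follows; the function and variable names are ours, and combining it with the gbest_pso sketch above (with dim = N_c x N_d) reproduces the clustering loop of Section 3.1.1, with steps (b)-(d) folded into the fitness evaluation and the PSO update.

```python
import numpy as np

def quantization_error(particle, data, n_clusters):
    """Particle fitness, equations (14)-(16): the particle flattens the N_c
    centroid vectors (eq. 14); each data vector is assigned to its nearest
    centroid by Euclidean distance (eq. 15); the fitness is the average, over
    clusters, of the mean member-to-centroid distance (eq. 16)."""
    centroids = particle.reshape(n_clusters, -1)          # (N_c, N_d)
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)                     # nearest centroid
    err = 0.0
    for j in range(n_clusters):
        members = dists[labels == j, j]                   # d(Z_p, m_j), Z_p in C_j
        if members.size:                                  # |C_j| > 0
            err += members.mean()
    return err / n_clusters
```

For example, gbest_pso(lambda p: quantization_error(p, data, 2), 20, 2 * data.shape[1], 100) clusters data into two clusters.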
4. DISCUSSION

In order to evaluate the performance of the algorithm, we carried out a two-fold experiment on the WBC data set with the centers initialized by PSO. The results show that if both sets of weights of the MRBFNN are optimized using RLS, then the performance, i.e., the percentage of classification, is better than that obtained by optimizing the weights of the RBFNN using the Kalman Filter or RLS (Table 11). The algorithms associated with the extraction method were simulated using MATLAB v6.5.

4.1 Simulation Environment

We tested the algorithms of the previous sections on the Wisconsin Breast Cancer (WBC) data set by optimizing the weights of the RBFNN using RLS and the Kalman Filter; the weights of the MRBFNN were optimized using RLS.

4.2 WBC Dataset

The WBC training set contains 400 exemplars and the test set contains 299 exemplars, for a total of 699 exemplars. The input data were normalized by replacing each feature value x with x' = (x - μ_x) / σ_x, where μ_x and σ_x denote the sample mean and standard deviation of that feature over the entire data set. The networks are trained to respond with the target values y_ik = 1 and y_jk = 0, j ≠ i, when presented with an input vector x_k from the i-th category. MATLAB m-files were used to generate the simulation results presented in this section. The training algorithms were initialized with prototype vectors randomly selected from the input data on a two-fold basis and with the weight matrices initialized to random values.

4.3 Simulation Results

4.3.1 Tabular Data

The centers obtained from our simulation studies are shown in the tables below. Table 1, Table 5, and Table 8 show the centers for WBC obtained from the MRBFNN, the RBFNN using RLS, and the RBFNN using the Kalman Filter, respectively. Table 2 and Table 3 show the weights obtained from the MRBFNN, and Table 6 and Table 9 show the weights obtained using RLS and the Kalman Filter, respectively. Table 4, Table 7, and Table 10 show the variances obtained.
Table 1: Centers obtained from MRBFNN
.538 .538 .538 .538 .538 .538 .538 .538 .538

Table 2: Weights between input layer and hidden layer obtained from MRBFNN
4.4475 4.4475 4.4475 4.4475 4.4475 4.4475 4.4475 4.4475 4.4475

Table 3: Weights between hidden layer and output layer obtained from MRBFNN
3.4475 .3 3.4475 .3

Table 4: Variances obtained from MRBFNN
.5 .557 .3765

Fig. 5: Classification using MRBFNN and RLS (Modified RBF network with RLS on the WBC data set)
Fig. 6: Classification using RBFNN and RLS (RBF network with RLS on the WBC data set)
Fig. 7: Classification using RBFNN and Kalman Filter (RBF network with Kalman Filter on the WBC data set)

Rule for classification of WBC data sets using MRBFNN:
if (oo(r,1) >= .958 & oo(r,1) <= .54) & (oo(r,2) >= .35 & oo(r,2) <= .3999) then class Benign
if (oo(r,1) >= .56 & oo(r,1) <= .55) & (oo(r,2) >= .47 & oo(r,2) <= 5.484) then class Malignant

Table 5: Centers obtained from RBFNN and RLS
B: .35 .35 .35 .35 .35 .35 .35 .35 .35
C: .646 .646 .646 .646 .646 .646 .646 .646 .646

Rule for classification of WBC data sets using the Kalman Filter:
if (oo(r,1) >= .677 & oo(r,1) <= .79) & (oo(r,2) >= .587 & oo(r,2) <= .664) then class Benign
if (oo(r,1) >= .763 & oo(r,1) <= .338) & (oo(r,2) >= .665 & oo(r,2) <= .974) then class Malignant

Table 6: Weights obtained from RBFNN and RLS
.646 .646 5.748 5.748

Table 7: Variances obtained from RBFNN and RLS
7.86 8.643

Rule for classification of WBC data sets using RBFNN and RLS:
if (oo(r,1) >= 8.588 & oo(r,1) <= 8.9768) & (oo(r,2) >= 3.43 & oo(r,2) <= 3.866) then class Benign
if (oo(r,1) >= 8.9937 & oo(r,1) <= .668) & (oo(r,2) >= 3.885 & oo(r,2) <= 6.74) then class Malignant

Table 8: Centers obtained from RBFNN and Kalman Filter
.95 .95 .95 .95 .95 .95 .95 .95 .95
B: .9 .9 .9 .9 .9
C: .9 .9 .9 .9

Table 9: Weights obtained from RBFNN and Kalman Filter
.9849 .998 .95 .9

Table 10: Variances obtained from RBFNN and Kalman Filter
7.86 8.643

Table 11: Percentage of classification
Technique                  % Accuracy
MODIFIED RBFNN             98.4
RBFNN and RLS              97.388
RBFNN and Kalman Filter    96.435

Table 11 shows the percentage of classification of the respective techniques for the WBC data set.

5. CONCLUSION

An efficient pattern recognition and rule extraction technique using Recursive Least Square approximation and the Modified Radial Basis Functional Neural Network (MRBFNN) has been presented in this paper. Particle Swarm Optimization [12-15] is used to find the initial centers for the Wisconsin Breast Cancer (WBC) data. After the centers have been initialized, the centers and variances are updated using back propagation, and the weights of the MRBFNN are updated using Recursive Least Square approximation. The classification results given in Table 11 show the effectiveness of the MRBFNN: the trained network provides better classification than an RBFNN trained using the Kalman Filter or Recursive Least Square approximation. Further research could focus on the application of different training methods to the MRBFNN, and on applying this technique to larger problems to obtain experimental verification of the computational results.

6. ACKNOWLEDGEMENT

We would like to acknowledge the encouragement and support given by Prof. S. N. Dehury, Fakir Mohan University, Orissa, India. He has been very kind and patient in suggesting the outlines of this work.

7. REFERENCES

[1] D. Broomhead, D. Lowe, "Multivariable Functional Interpolation and Adaptive Networks," Complex Systems, vol. 2, 1988, pp. 321-355.

[2] N. Karayiannis, "Reformulated radial basis neural networks trained by gradient descent," IEEE Transactions on Neural Networks, vol. 10, no. 3, 1999, pp. 657-671.
[3] D. Simon, "Distributed fault tolerance in optimal interpolative nets," IEEE Transactions on Neural Networks, vol. 12, no. 6, 2001, pp. 1348-1357.
[4] S. Sin, R. deFigueiredo, "Efficient learning procedures for optimal interpolative nets," Neural Networks, vol. 6, no. 1, 1993, pp. 99-113.
[5] J. Sum, C.-S. Leung, G. H. Young, W.-K. Kan, "On the Kalman filtering method in neural-network training and pruning," IEEE Transactions on Neural Networks, vol. 10, no. 1, 1999, pp. 161-166.
[6] Y. Zhang, X. Li, "A fast U-D factorization-based learning algorithm with applications to nonlinear system modeling and identification," IEEE Transactions on Neural Networks, vol. 10, no. 4, 1999, pp. 930-938.
[7] R. J. Duro, J. S. Reyes, "Discrete-time backpropagation for training synaptic delay-based artificial neural networks," IEEE Transactions on Neural Networks, vol. 10, no. 4, 1999, pp. 779-789.
[8] S. Chen, Y. Wu, B. Luk, "Combined genetic algorithm optimization and regularized orthogonal least squares learning for radial basis function networks," IEEE Transactions on Neural Networks, vol. 10, no. 5, 1999, pp. 1239-1243.
[9] B. G. Kermani, M. W. White, H. T. Nagle, "Feature extraction by genetic algorithms for neural networks in breast cancer classification," IEEE, 1997, pp. 831-832.
[10] B. McKay, M. J. Willis, H. G. Hiden, G. A. Montague, G. W. Barton, "Identification of industrial processes using genetic programming," Proceedings of the International Conference on Identification in Engineering Systems, 1996, pp. 328-337.
[11] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, 1983, pp. 671-680.
[12] J. Kennedy, "Stereotyping: improving particle swarm performance with cluster analysis," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2000, pp. 1507-1512.
[13] J. Kennedy, R. C. Eberhart, Y. Shi, Swarm Intelligence, International Journal of Computer Research, pp. 434-45.
[14] X. Hu, Y. Shi, R. Eberhart, "Recent advances in particle swarm," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2004, pp. 90-97.
[15] M. R. Senapati, I. Vijaya, P. K. Dash, "Rule extraction from radial basis functional neural networks by using particle swarm optimization," Journal of Computer Science, vol. 3, no. 8, 2007, pp. 592-599.