Breast Cancer Detection using Recursive Least Square and Modified Radial Basis Functional Neural Network

M. R. Senapati (a), P. K. Routray (b), P. K. Dash (b)
(a) Department of Computer Science and Engineering, Gandhi Engineering College, Biju Patnaik University of Technology, India, manas_senapati@sify.com
(b) Department of Computer Science and Engineering, N M Institute of Engineering and Technology, Biju Patnaik University of Technology, India, pravat.routray@gmail.com
(b) S O A University, India, pkdash_india@yahoo.com

Abstract-- A new approach for classification is presented in this paper. The proposed technique, the Modified Radial Basis Functional Neural Network (MRBFNN), consists of assigning weights between the input layer and the hidden layer of a Radial Basis Functional Neural Network (RBFNN). The centers of the MRBFNN are initialized using Particle Swarm Optimization (PSO), the variances and centers are updated using back propagation, and both sets of weights are updated using Recursive Least Square (RLS). Our simulation is carried out on the Wisconsin Breast Cancer (WBC) data set. The results are compared with an RBFNN in which the variances and centers are updated using back propagation and the weights are updated using Recursive Least Square (RLS) and the Kalman Filter. The proposed method is found to provide more accurate results and better classification.

Keywords: Radial Basis Functional Neural Networks (RBFNN), Wisconsin Breast Cancer (WBC), Pattern Recognition, Gradient Descent Method, Recursive Least Square, Kalman Filter.

1. INTRODUCTION

A Radial Basis Functional Neural Network (RBFNN) is trained to perform a mapping from an m-dimensional input space to an n-dimensional output space. RBFNNs can be used for discrete pattern classification, function approximation, signal processing, control, or any other application which requires a mapping from an input space to an output space. An RBFNN consists of the m-dimensional input x being passed directly to a hidden layer. Suppose there are c neurons in the hidden layer.
Each of the c neurons in the hidden layer applies an activation function, which is a function of the Euclidean distance between the input and an m-dimensional prototype vector. Each hidden neuron contains its own prototype vector as a parameter. The output of each hidden neuron is then weighted and passed to the output layer.

Special Issue of IJCCT Vol. Issue, 3, 4; for International Conference [ICCT-], 3rd-5th December

The outputs of the network consist of sums of the weighted hidden layer neurons. Figure 1 shows a schematic form of an RBFNN network. It can be seen from the basic architecture that the design of an RBFNN requires several decisions, including the following:
1. How many neurons will reside in the hidden layer (i.e., what is the value of the integer c)?
2. What are the values of the prototypes (i.e., what are the values of the v vectors)?
3. What function will be used at the hidden units (i.e., what is the function g( ))?
4. What weights will be applied between the hidden layer and the output layer?
The performance of an RBFNN network depends on the number and location (in the input space) of the centers, the shape of the RBFNN functions at the hidden neurons, and the method used for determining the network weights. Some researchers have trained RBFNN networks by selecting the centers randomly from the training data [1]. Some have used unsupervised procedures (such as the k-means algorithm) for selecting the RBFNN centers, while others have used supervised procedures for selecting the RBFNN centers [2]. Several training methods separate the tasks of prototype determination and weight optimization for classification and rule generation. This trend probably arose because of the quick training that could result from the separation of the two tasks. In fact, one of the primary contributors to the popularity of RBFNN networks was probably their fast training times as compared to gradient descent training (including back propagation). As shown in Figure 1, it can be
seen that once the prototypes are fixed and the hidden layer function g( ) is known, the network is linear in the weight parameters w. At that point, training the network becomes a quick and easy task that can be solved via linear least squares. (This is similar to the popularity of the optimal interpolative net, which is due in large part to the efficient non-iterative learning algorithms that are available [3, 4].) Training methods that separate the tasks of prototype determination and weight optimization often do not use the input-output data from the training set for the selection of the prototypes. For instance, the random selection method and the k-means algorithm result in prototypes that are completely independent of the input-output data from the training set. Although this results in fast training, it clearly does not take full advantage of the information contained in the training set.

Fig. 1: Radial Basis Functional Network

Gradient descent training of RBFNN networks has proven to be much more effective than more conventional methods [2]. However, gradient descent training can be computationally expensive. This paper extends the results of [2] and formulates a training method for RBFNNs based on Recursive Least Square. This new method proves to be quicker than gradient descent while still providing performance at the same level of effectiveness. Training a neural network is, in general, a challenging nonlinear optimization problem. Various derivative-based methods have been used to train neural networks, including gradient descent [2], Kalman filtering [5, 6], and the well-known back propagation [7]. Derivative-free methods, including genetic programming [8-10] and simulated annealing [11], have also been used to train neural networks. Derivative-free methods have the advantage that they do not require the derivative of the objective function with respect to the neural network parameters. They are more robust than derivative-based methods with respect to finding a global minimum and with respect to their applicability to a wide range of objective functions and neural network architectures. However, they typically tend to converge more slowly than derivative-based methods. Derivative-based methods have the advantage of fast convergence, but they tend to converge to local minima. In addition, due to their dependence on analytical derivatives, they are limited to specific objective functions and specific types of neural network architectures.

2. INTERPRETATION OF RADIAL BASIS FUNCTIONAL NEURAL NETWORK

The multi-layered feed forward network (MFN) is the most widely used neural network model for pattern classification applications. This is because the topology of the MFN allows it to generate internal representations tailored to classify input regions that may be either disjointed or intersecting. The hidden layer nodes in the MFN can form hyperplanes to partition the input space into various regions, and the output nodes can select and combine the regions that belong to the same class. Back propagation (BP) is the most widely used training algorithm for MFNs. Recently, researchers have begun to examine the use of Radial Basis Function neural networks (RBFNN) for pattern recognition problems due to a number of drawbacks of BP-trained networks. Although a BP network produces decision surfaces that effectively separate training examples of different classes, this does not necessarily result in the most plausible or robust classifier. The decision surfaces of BP networks may not take on any intuitive shapes, because regions of the input space not occupied by training data are classified arbitrarily, not according to proximity to training data. In addition, BP networks have no mechanism to detect that a case to be classified has fallen into a region with no training data. This is a serious drawback, since the system operates within a wide range of conditions. The RBFNN consists of an input layer made up of source nodes and a hidden layer of a sufficiently high dimension.
The output layer supplies the response of the network to the activation patterns applied to the input layer. The nodes within each layer are fully connected to the previous layer, as shown in Figure 1. The input variables are each assigned to a node in the input layer and pass directly to the hidden layer without weights. The hidden nodes, or units, contain the radial basis functions and are represented by the bell-shaped curves in the hidden nodes, as shown in the figure.
2.1 RBFNN Algorithm

This section describes how we used an RBFNN network to classify the data sets. The RBFNN used here has an input layer, a hidden layer consisting of Gaussian node functions, an output layer, and a set of weights to connect the hidden layer and output layer. We denote x to be the input vector to the network, where x = (x1, x2, x3, ..., xD) and D is the embedding dimension. We call o the ANN output vector, where o = (o1, o2, o3, ..., on)T and n is the number of output nodes. We have P training patterns. The RBFNN classification problem is to approximate the mapping from the set of inputs,

x = {x(1), x(2), ..., x(P)},   (1)

to the set of outputs,

o = {o(1), o(2), o(3), ..., o(P)}.   (2)

For an input vector x(t), the output of the jth output node produced by an RBFNN is given by

oj(t) = SUM_{i=1..mtot} wij phi_i(t) = SUM_{i=1..mtot} wij exp( -||x(t) - ci||^2 / (2 sigma_i^2) )

where ci is the center of the ith hidden node, sigma_i is the width of the ith center, and mtot is the total number of hidden nodes. Using vector notation, let phi(t) = (phi_1(t), phi_2(t), ..., phi_mtot(t)) and wj = (w1j, w2j, ..., wmtot,j), so the RBFNN output can be written as oj = wj phiT(t). The cost function of the network for the jth output is then calculated as e = (dj - oj)^2, where dj is the desired output. The RBFNN classifier contains four sets of parameters that have to be learned from the examples: the centers ci(t), the number of centers mtot, the variances sigma_i, and the weights wij. We denote all the RBFNN's centers by Cwhole. In our implementation of RBFNN, classes do not share centers. Each of these sets of centers is trained with a separate PSO clustering run. Once the RBFNN centers are initialized by PSO, the weights are updated according to the following:

wij(t+1) = wij(t) + ej phi_i(t)   (3)

The centers are then updated according to the following:

cj(t+1) = cj(t) + e wj (x - cj) / sigma_j^2   (4)

The width associated with the kth center is adjusted as

sigma_k(t) = (1/Na) SUM_j ||ck(t) - cj(t)||   (5)

where the sum runs over the Na centers nearest to ck. There are several reasons for using an RBFNN in our classification problem. First, many neural networks require nonlinear optimization for training, whereas the RBFNN weights can be obtained by linear methods once the centers are fixed.
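The Gaussian hidden-layer response and the weighted output sum described above can be sketched in a few lines of NumPy. The array shapes and the function name here are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def rbf_forward(x, centers, sigmas, W):
    """Gaussian RBFNN forward pass.

    x       : (m,)        input vector
    centers : (mtot, m)   one prototype c_i per hidden node
    sigmas  : (mtot,)     width of each center
    W       : (mtot, n)   hidden-to-output weights w_ij
    Returns the output vector o (n,) and hidden activations phi (mtot,).
    """
    # phi_i = exp(-||x - c_i||^2 / (2 sigma_i^2))
    sq_dist = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-sq_dist / (2.0 * sigmas ** 2))
    # o_j = sum_i w_ij phi_i -- the network is linear in W once phi is fixed
    return phi @ W, phi
```

The linearity in W visible in the last line is exactly what makes the least-squares weight training discussed in the introduction possible.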
The second reason for employing an RBFNN classifier is that the internal representation of the training data in an RBFNN is intuitive. Each RBFNN center approximates a cluster of training data vectors that are close to each other in Euclidean space. When a vector is input to the RBFNN, the center near to that vector becomes strongly activated, in turn activating certain output nodes. The hypothesis space implemented by these learning machines is constituted by functions of the form

f(x, w, v) = SUM_{k=1..m} wk phi_k(x, vk) + w0   (6)

The nonlinear activation function phi_k expresses the similarity between any input pattern x and the center vk by means of a distance measure. Each function phi_k defines a region in the input space (receptive field) on which the neuron produces an appreciable activation value. In the common case when the Gaussian function is used, the center ck of the function phi_k defines the prototype of input cluster k, and the variance sigma_k the size of the covered region in the input space. The rule extraction method for RBFNN derives descriptions in the form of ellipsoids. Initially, a partition of the input space is made by assigning each input pattern to its closest RBFNN center according to the Euclidean distance function. When a pattern is assigned to its closest center, it is assigned to the RBFNN node that gives the maximum activation value for that pattern. From these partitions the ellipsoids are constructed. Next, a class label is assigned to each center of the RBFNN units; the output value of the RBFNN network for each center is used to determine this class label. Then, for each node, an ellipsoid is constructed from the associated partition data. Once the ellipsoids are determined, they are translated into rules. This procedure generates one rule per node.

Fig. 2: Modified Radial Basis Functional Neural Network

2.2 Modified Radial Basis Functional Neural Network

The Modified Radial Basis Functional Neural Network is the same as the RBFNN, with the exception that weights are assigned between the neurons in the input layer and the
neurons in the hidden layer (Fig. 2). The net input to neuron i in the hidden layer is calculated as neti = x * wi, and the output is given by the Gaussian node function of Section 2.1. Here i is the neuron number, x is the input to the network, and wi is the weight vector between the input layer and hidden neuron i. The centers are updated using equation (4) and the variances are updated using equation (5). The weights between the input layer and the hidden layer, as well as between the hidden layer and the output layer, of the classifier can be trained using the linear recursive least square (RLS) algorithm. RLS is employed here since it has a much faster rate of convergence compared to gradient search and least mean square (LMS) algorithms.

k(t) = P(t-1) phiT(t) / (lambda + phi(t) P(t-1) phiT(t))   (7)

wj(t) = wj(t-1) + k(t) [dj(t) - phi(t) wj(t-1)]   (8)

P(t) = (1/lambda) [P(t-1) - k(t) phi(t) P(t-1)]   (9)

where lambda is a real number between 0 and 1, P(0) = a^(-1) I, a is a small positive number, and wj(0) = 0. The computational steps involved in implementing the MRBFNN for classification are:
1. For each class c, initialize the centers using Particle Swarm Optimization, mc = minit (initialization);
2. Train the MRBFNN centers and spreads using error back propagation;
3. Train the MRBFNN weights (between input layer & hidden layer and hidden layer & output layer) using RLS;
4. Add enc centers to the Nc classes with the highest output, to get a new m, then go to step 2;
5. Use the RBFNN with the current m.
The learning rate of the RBFNN is small, and the centers and the weights are updated in every iteration, that is, with each new training input to the RBFNN.

3. PARTICLE SWARM OPTIMIZATION

Particle Swarm Optimization (PSO) is a population-based stochastic search process, modeled after the social behavior of a bird flock [12-15]. The algorithm maintains a population of particles, where each particle represents a potential solution to an optimization problem. In the context of PSO, a swarm refers to a number of potential solutions to the optimization problem, where each potential solution is referred to as a particle. The aim of PSO is to find the particle position that results in the best evaluation of a given fitness (objective) function.
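Before turning to PSO, the RLS recursion of equations (7)-(9) can be sketched as follows, for a single output node. The function signature and the initialization constants are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rls_update(w, P, phi, d, lam=1.0):
    """One recursive least square step for weights that enter the output linearly.

    w   : (c,)   current weight vector w_j
    P   : (c, c) inverse correlation matrix, P(0) = (1/a) I with a small
    phi : (c,)   regressor (hidden-layer outputs) at time t
    d   : float  desired output d_j(t)
    lam : float  forgetting factor, 0 < lam <= 1
    """
    k = P @ phi / (lam + phi @ P @ phi)   # gain vector, eq. (7)
    w = w + k * (d - phi @ w)             # weight update, eq. (8)
    P = (P - np.outer(k, phi) @ P) / lam  # covariance update, eq. (9)
    return w, P
```

With lam = 1 the recursion converges to the ordinary least-squares solution; lam < 1 discounts older samples, which is useful when the hidden-layer outputs drift as the centers are retrained.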
Each particle represents a position in Nd-dimensional space, and is flown through this multi-dimensional search space, adjusting its position towards both the particle's best position found thus far and the best position in the neighborhood of that particle. Each particle i maintains the following information:
xi: the current position of the particle;
vi: the current velocity of the particle;
yi: the personal best position of the particle.
Using the above notation, a particle's position is adjusted according to

vi,k(t+1) = w vi,k(t) + c1 r1,k(t) (yi,k(t) - xi,k(t)) + c2 r2,k(t) (y^k(t) - xi,k(t))   (10)

xi(t+1) = xi(t) + vi(t+1)   (11)

where w is the inertia weight, c1 and c2 are the acceleration constants, r1,k(t), r2,k(t) ~ U(0,1), and k = 1, ..., Nd. The velocity is thus calculated based on three contributions: 1) a fraction of the previous velocity, 2) the cognitive component, which is a function of the distance of the particle from its personal best position, and 3) the social component, which is a function of the distance of the particle from the best particle found thus far (i.e., the best of the personal bests). The personal best position of the particle is calculated as

yi(t+1) = yi(t)      if f(xi(t+1)) >= f(yi(t))
yi(t+1) = xi(t+1)    if f(xi(t+1)) < f(yi(t))   (12)

Two basic approaches to PSO exist, based on the interpretation of the neighborhood of particles. Equation (10) reflects the gbest version of PSO where, for each particle, the neighborhood is simply the entire swarm. The social component then causes particles to be drawn toward the best particle in the swarm. In the lbest PSO model, the swarm is divided into overlapping neighborhoods, and the best particle of each neighborhood is determined. For the lbest PSO model, the social component of equation (10) changes to

c2 r2,k(t) (y^j,k(t) - xi,k(t))   (13)

where y^j is the best particle in the neighborhood of the ith particle. The PSO is usually executed with repeated application of equations (10) and (11) until a specified number of iterations has been exceeded.
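The velocity, position, and personal-best updates above map directly onto code. Below is a minimal gbest sketch for a minimized fitness; the inertia and acceleration values are common defaults from the PSO literature, not values given in this paper:

```python
import numpy as np

def pso_step(x, v, y, f_y, f, rng, w=0.72, c1=1.49, c2=1.49):
    """One gbest PSO iteration (minimization).

    x, v, y : (n_particles, n_dims) positions, velocities, personal bests
    f_y     : (n_particles,) fitness of each personal best
    f       : vectorized fitness function, f(x) -> (n_particles,)
    """
    y_hat = y[np.argmin(f_y)]                              # global best
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (y - x) + c2 * r2 * (y_hat - x)  # velocity update
    x = x + v                                              # position update
    f_x = f(x)
    better = f_x < f_y                                     # personal-best rule
    y = np.where(better[:, None], x, y)
    f_y = np.where(better, f_x, f_y)
    return x, v, y, f_y
```

Because a personal best is only replaced when it strictly improves, the best fitness in the swarm can never get worse from one iteration to the next.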
Alternatively, the algorithm can be terminated when the velocity updates are close to zero over a number of iterations.

3.1 PSO Clustering

In the context of clustering, a single particle represents the Nc cluster centroid vectors. That is, each particle xi is constructed as follows:

xi = (mi1, ..., mij, ..., miNc)   (14)

where mij refers to the jth cluster centroid vector of the ith particle in cluster Cij. Therefore, a swarm represents a number of candidate clusterings for the current data vectors. The fitness of a particle is easily measured as the quantization error,

Je = [ SUM_{j=1..Nc} ( SUM_{Zp in Cij} d(Zp, mj) / |Cij| ) ] / Nc   (16)

where d is the Euclidean distance defined as

d(Zp, mj) = sqrt( SUM_{k=1..Nd} (Zpk - mjk)^2 )   (15)

where k subscripts the dimension, the centroid of cluster Cij is

mj = ( SUM_{Zp in Cij} Zp ) / nj

and nj = |Cij| is the number of data vectors belonging to cluster Cij, i.e., the frequency of that cluster. This section first presents the standard gbest PSO for clustering data into a given number of clusters, and then shows how the PSO algorithm can be used to improve the performance of the Radial Basis Functional Neural Network (RBFNN) for classification.

3.2 gbest PSO Clustering Algorithm

Using the standard gbest PSO, data vectors can be clustered as follows:
1. Initialize each particle to contain Nc randomly selected cluster centroids.
2. For t = 1 to tmax do
   a) For each particle i do
   b) For each data vector Zp
      i) calculate the Euclidean distance d(Zp, mij) to all cluster centroids Cij;
      ii) assign Zp to cluster Cij such that d(Zp, mij) = min over c = 1, ..., Nc of d(Zp, mic);
      iii) calculate the fitness function using equation (16);
   c) Update the global best and local best positions;
   d) Update the cluster centroids using equations (10) and (11);
where tmax is the maximum number of iterations.

4. DISCUSSION

In order to evaluate the performance of the algorithm, we carried out a two-fold experiment with the WBC data set, with the centers initialized by PSO. The results show that if both sets of weights of the MRBFNN are optimized using RLS, then the performance, i.e., the percentage of classification, is better than that obtained by optimizing the same using the Kalman Filter and RLS (Table 11). The algorithms associated with the extraction method were simulated using MATLAB v6.5.

4.1 Simulation Environment

We tested the algorithms of the previous sections on the Wisconsin breast cancer (WBC) data set by optimizing the weights of the RBFNN using RLS and the Kalman Filter. The weights of the MRBFNN are optimized using RLS.

4.2 WBC Dataset

The WBC training set contains 400 exemplars and the test set contains 299 exemplars, for a total of 699 exemplars. The input data were normalized by replacing each feature value x by x' = (x - mu_x) / sigma_x, where mu_x and sigma_x denote the sample mean and standard deviation of this feature over the entire data set. The networks are trained to respond with the target values yik = 1 and yjk = 0, j != i, when presented with an input vector xk from the ith category. MATLAB m-files were used to generate the simulation results presented in this section. The training algorithms were initialized with prototype vectors randomly selected from the input data on a two-fold basis, and with the weight matrices initialized to random values.

4.3 Simulation Results

4.3.1 Tabular Data

The results of the centers obtained from our simulation studies are shown in the tables. Table 1, Table 5, and Table 8 show the centers for WBC obtained from the MRBFNN, the RBFNN using RLS, and the Kalman Filter, respectively. Table 2 and Table 3 show the weights obtained from the MRBFNN, and Table 6 and Table 9 show the weights obtained from RLS and the Kalman Filter, respectively. Table 4, Table 7, and Table 10 show the variances obtained.
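Returning to the clustering fitness of Section 3.1, the nearest-centroid assignment and the quantization error can be sketched as below. The handling of empty clusters is our assumption, as the text does not specify it:

```python
import numpy as np

def quantization_error(Z, centroids):
    """PSO clustering fitness: mean over non-empty clusters of the
    average Euclidean distance of member vectors to their centroid.

    Z         : (P, Nd)  data vectors
    centroids : (Nc, Nd) one particle's centroid set
    """
    # d(Zp, mj) for all data/centroid pairs
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)  # nearest-centroid assignment
    per_cluster = [d[labels == j, j].mean()
                   for j in range(len(centroids)) if np.any(labels == j)]
    return float(np.mean(per_cluster))
```

In the gbest PSO clustering loop this value is computed once per particle per iteration, and the personal and global bests are updated from it.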
Table 1: Centers obtained from MRBFNN
.538 .538 .538 .538 .538 .538 .538 .538 .538

Fig 5: Classification using MRBFNN and RLS (scatter plot titled "MODIFIED RBF NETWORK WITH RLS ON WBC DATA SET", classes B and C)

Table 2: Weights between input layer and hidden layer obtained from MRBFNN
4.4475 4.4475 4.4475 4.4475 4.4475 4.4475 4.4475 4.4475 4.4475
.668 .668 .668 .668 .668

Fig 6: Classification using RBFNN and RLS (scatter plot titled "RBF NETWORK WITH RLS ON WBC DATA SET")

Table 3: Weights between hidden layer and output layer obtained from MRBFNN
3.4475 .3 3.4475 .3

Table 4: Variances obtained from MRBFNN
.5 .557 .3765

Fig 7: Classification using RBFNN and Kalman Filter (scatter plot titled "RBF NETWORK WITH KALMAN ON WBC DATA SET")

Rule for classification of WBC data sets using MRBFNN:
if (oo(r,1)>=.958 & oo(r,1)<=.54) & (oo(r,2)>=.35 & oo(r,2)<=.3999) then class Benign
if (oo(r,1)>=.56 & oo(r,1)<=.55) & (oo(r,2)>=.47 & oo(r,2)<=5.484) then class Malignant
Table 5: Centers obtained from RBFNN and RLS
B: .35 .35 .35 .35 .35 .35 .35 .35 .35
C: .646 .646 .646 .646 .646 .646 .646 .646 .646

Table 6: Weights obtained from RBFNN and RLS
.646 .646 5.748 5.748

Table 7: Variances obtained from RBFNN and RLS
7.86 8.643

Rule for classification of WBC data sets using RBFNN and RLS:
if (oo(r,1)>=8.588 & oo(r,1)<=8.9768) & (oo(r,2)>=3.43 & oo(r,2)<=3.866) then class Benign;
if (oo(r,1)>=8.9937 & oo(r,1)<=.668) & (oo(r,2)>=3.885 & oo(r,2)<=6.74) then class Malignant;

Table 8: Centers obtained from RBFNN and Kalman Filter
B: .95 .95 .95 .95 .95 .95 .95 .95 .95
C: .9 .9 .9 .9 .9 .9 .9 .9 .9

Table 9: Weights obtained from RBFNN and Kalman Filter
.9849 .998 .95 .9

Table 10: Variances obtained from RBFNN and Kalman Filter
7.86 8.643

Rule for classification of WBC data sets using the Kalman Filter:
if (oo(r,1)>=.677 & oo(r,1)<=.79) & (oo(r,2)>=.587 & oo(r,2)<=.664) then class Benign;
if (oo(r,1)>=.763 & oo(r,1)<=.338) & (oo(r,2)>=.665 & oo(r,2)<=.974) then class Malignant;

Table 11: Percentage of Classification
MODIFIED RBFNN: 98.4 % accuracy
RBFNN and RLS: 97.388 % accuracy
RBFNN and Kalman Filter: 96.435 % accuracy

Table 11 shows the percentage of classification of the respective techniques for the WBC data set.

5. CONCLUSION

An efficient pattern recognition and rule extraction technique using Recursive Least Square approximation and the Modified Radial Basis Functional Neural Network (MRBFNN) is presented in this paper. Particle Swarm Optimization [12-15] is used to find the initial centers for the Wisconsin Breast Cancer (WBC) data. After the centers have been initialized, back propagation is used to update the centers and the variances, and the weights of the MRBFNN are updated using Recursive Least Square approximation. The classification results given in Table 11 show the effectiveness of the MRBFNN. The trained network is capable of providing better classification in comparison to training the RBFNN network using the Kalman Filter and Recursive Least Square approximation. Further research could focus on the application of different training methods to train the MRBFNN.
Applying this technique to larger problems, to obtain experimental verification of the computational results, can be included as future work.

6. ACKNOWLEDGEMENT

We would like to acknowledge the encouragement and support given by Prof. S. N. Dehury, Fakir Mohan University, Orissa, India. He has been very kind and patient while suggesting the outlines of this work.

7. REFERENCES

[1] D. Broomhead, D. Lowe, "Multivariable Functional Interpolation and Adaptive Networks", Complex Systems, vol. 2, 1988, pp. 321-355.
[2] N. Karayiannis, "Reformulated radial basis neural networks trained by gradient descent", IEEE Transactions on Neural Networks, vol. 10, issue 3, 1999, pp. 657-671.
[3] D. Simon, "Distributed fault tolerance in optimal interpolative nets", IEEE Transactions on Neural Networks, vol. 12, issue 6, 2001, pp. 1348-1357.
[4] S. Sin, R. DeFigueiredo, "Efficient learning procedures for optimal interpolative nets", Neural Networks, vol. 6, issue 1, 1993, pp. 99-113.
[5] J. Sum, C.-S. Leung, G. H. Young, W.-K. Kan, "On the Kalman Filtering Method in Neural-Network Training and Pruning", IEEE Transactions on Neural Networks, vol. 10, issue 1, 1999, pp. 161-166.
[6] Y. Zhang, X. Li, "A fast U-D factorization-based learning algorithm with applications to nonlinear system modeling and identification", IEEE Transactions on Neural Networks, vol. 10, issue 4, 1999, pp. 930-938.
[7] R. J. Duro, J. S. Reyes, "Discrete-time back propagation for training synaptic delay-based artificial neural networks", IEEE Transactions on Neural Networks, vol. 10, issue 4, 1999, pp. 779-789.
[8] S. Chen, Y. Wu, B. Luk, "Combined genetic algorithm optimization and regularized orthogonal least squares learning for radial basis function networks", IEEE Transactions on Neural Networks, vol. 10, issue 5, 1999, pp. 1239-1243.
[9] B. G. Kermani, M. W. White, H. T. Nagle, "Feature Extraction by Genetic Algorithms for Neural Networks in Breast Cancer Classification", IEEE, 1997, pp. 831-832.
[10] B. McKay, M. J. Willis, H. G. Hiden, G. A. Montague, G. W. Barton, "Identification of industrial processes using genetic programming", Proceedings of the International Conference on Identification in Engineering Systems, March 1996, pp. 328-337.
[11] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, "Optimization by simulated annealing", Science, vol. 220, no. 4598, 1983, pp. 671-680.
[12] J. Kennedy, "Stereotyping: improving particle swarm performance with cluster analysis", in Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2000, pp. 1507-1512.
[13] J. Kennedy, R. C. Eberhart, Y. Shi, Swarm Intelligence, International Journal of Computer Research, pp. 434-45.
[14] X. Hu, Y. Shi, R. Eberhart, "Recent advances in particle swarm", in Proceedings of the IEEE Congress on Evolutionary Computation (CEC), 2004, pp. 90-97.
[15] M. R. Senapati, I. Vijaya, P. K. Dash, "Rule Extraction from Radial Basis Functional Neural Networks by using Particle Swarm Optimization", Journal of Computer Science, Science Publications, vol. 3, issue 8, 2007, pp. 592-599.