THE INCREDIBLE SHRINKING NEURAL NETWORK: NEW PERSPECTIVES ON LEARNING REPRESENTATIONS THROUGH THE LENS OF PRUNING


Nikolas Wolfe, Aditya Sharma & Bhiksha Raj
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, USA
{nwolfe, bhiksha}@cs.cmu.edu, adityasharma@cmu.edu

Lukas Drude
Universitat Paderborn
drude@nt.upb.de

ABSTRACT

How much can pruning algorithms teach us about the fundamentals of learning representations in neural networks? A lot, it turns out. Neural network model compression has become a topic of great interest in recent years, and many different techniques have been proposed to address this problem. In general, this is motivated by the idea that smaller models typically lead to better generalization. At the same time, the decision of what to prune and when to prune necessarily forces us to confront our assumptions about how neural networks actually learn to represent patterns in data. In this work we set out to test several long-held hypotheses about neural network learning representations and numerical approaches to pruning. To accomplish this we first reviewed the historical literature and derived a novel algorithm to prune whole neurons, as opposed to the traditional method of pruning weights, from optimally trained networks using a second-order Taylor method. We then set about testing the performance of our algorithm and analyzing the quality of the decisions it made. As a baseline for comparison we used a first-order Taylor method based on the Skeletonization algorithm, and an exhaustive brute-force serial pruning algorithm. Our proposed algorithm worked well compared to the first-order method, but not nearly as well as the brute-force method. Our error analysis led us to question the validity of many widely-held assumptions behind pruning algorithms in general and the trade-offs we often make in the interest of reducing computational complexity. We discovered that there is a straightforward way, however expensive, to serially prune 40-70% of the neurons in a trained network with minimal effect on the learning representation and without any re-training.

1 INTRODUCTION

In this work we propose and evaluate a novel algorithm for pruning whole neurons from a trained neural network without any re-training, and examine its performance compared to two simpler methods. We then analyze the kinds of errors made by our algorithm and use this as a stepping-off point for an investigation into the fundamental nature of learning representations in neural networks. Our results corroborate an insightful though largely forgotten observation by Mozer & Smolensky (1989a) concerning the nature of neural network learning. This observation is best summarized in a quotation from Segee & Carter (1991) on the notion of fault tolerance in multilayer perceptron networks:

Contrary to the belief widely held, multilayer networks are not inherently fault tolerant. In fact, the loss of a single weight is frequently sufficient to completely disrupt a learned function approximation. Furthermore, having a large number of weights does not seem to improve fault tolerance. [Emphasis added]

Essentially, Mozer & Smolensky (1989b) observed that during training, neural networks do not distribute the learning representation evenly or equitably across hidden units. What actually happens is that a few elite neurons learn an approximation of the input-output function, and the remaining units must learn a complex interdependence function which cancels out their respective influence on the network output. Furthermore, assuming enough units exist to learn the function in question, increasing the number of parameters does not increase the richness or robustness of the learned approximation, but rather simply increases the likelihood of overfitting and the number of noisy parameters to be canceled out during training. This is evinced by the fact that in many cases, multiple neurons can be removed from a network with no re-training and with negligible impact on the quality of the output approximation. In other words, there are few bipartisan units in a trained network. A unit is typically either part of the (possibly overfit) input-output function approximation, or it is part of an elaborate noise-cancellation task force. Assuming this is the case, most of the compute time spent training a neural network is likely occupied by this arguably wasteful procedure of silencing superfluous parameters, and pruning can be viewed as a necessary procedure to trim the fat.

We observed copious evidence of this phenomenon in our experiments, and it motivates our decision to evaluate the pruning algorithms in this study on the simple criterion of their ability to trim neurons without any re-training. If we were to employ re-training as part of our evaluation criteria, we would arguably not be evaluating the quality of our algorithm's pruning decisions per se, but rather the ability of back-propagation-trained networks to recover from faults caused by non-ideal pruning decisions, as suggested by the conclusions of Segee & Carter (1991) and Mozer & Smolensky (1989a). Moreover, as Fahlman & Lebiere (1989) discuss, due to the herd effect and moving-target phenomena in back-propagation learning, the remaining units in a network will simply shift course to account for whatever error signal is re-introduced as a result of a bad pruning decision or network fault. So long as there are enough critical parameters to learn the function in question, a network can typically recover from faults with additional training. This limits the conclusions we can draw about the quality of our pruning criteria when we employ re-training.

In terms of removing units without re-training, what we discovered is that predicting the behavior of a network when a unit is pruned is very difficult, and most of the approximation techniques put forth in existing pruning algorithms do not fare well at all when compared to a brute-force search. To begin our discussion of how we arrived at our algorithm and set up our experiments, we review the existing literature.

2 LITERATURE REVIEW

Pruning algorithms, as comprehensively surveyed by Reed (1993), are a useful set of heuristics designed to identify and remove elements from a neural network which are either redundant or do not significantly contribute to the output of the network. This is motivated by the observed tendency of neural networks to overfit to the idiosyncrasies of their training data given too many trainable parameters or too few input patterns from which to generalize, as stated by Chauvin (1990).
Network architecture design and hyperparameter selection are inherently difficult tasks, typically approached using a few well-known rules of thumb, e.g. various weight initialization procedures, choosing the width and number of layers, different activation functions, learning rates, momentum, etc. Some of this black art appears unavoidable. For problems which cannot be solved using linear threshold units alone, Baum & Haussler (1989) demonstrate that there is no way to precisely determine the appropriate size of a neural network a priori given any random set of training instances. Using too few neurons seems to inhibit learning, so in practice it is common to over-parameterize networks initially, using a large number of hidden units and weights, and then prune or compress them afterwards if necessary. Of course, as the old saying goes, there is more than one way to skin a neural network.

2.1 NON-PRUNING BASED GENERALIZATION & COMPRESSION TECHNIQUES

The generalization behavior of neural networks has been well studied, and apart from pruning algorithms many heuristics have been used to avoid overfitting, such as dropout (Srivastava et al. 2014), maxout (Goodfellow et al. 2013), and cascade correlation (Fahlman & Lebiere 1989), among others. Of course, while cascade correlation specifically tries to construct minimal networks, many techniques which improve network generalization do not explicitly attempt to reduce the total number of parameters or the memory footprint of a trained network per se. Model compression often has benefits with respect to generalization performance and the portability of neural networks to memory-constrained or embedded environments. Without explicitly removing parameters from the network, weight quantization allows for a reduction in the number of bytes used to represent each weight parameter, as investigated by Balzer et al. (1991), Dundar & Rose (1994), and Hoehfeld & Fahlman (1992). A recently proposed method for compressing recurrent neural networks (Prabhavalkar et al. 2016) uses the singular values of a trained weight matrix as basis vectors from which to derive a compressed hidden layer. Øland & Raj (2015) successfully implemented network compression through weight quantization with an encoding step, while others such as Han et al. (2016) have tried to expand on this by adding weight-pruning as a step preceding quantization and encoding. In summary, there are many different ways to improve network generalization by altering the training procedure, the objective error function, or by using compressed representations of the network parameters. But these are not, strictly speaking, techniques to reduce the number of parameters in a network. For this we must employ some form of pruning criterion.

2.2 PRUNING TECHNIQUES

If we wanted to continually shrink a neural network down to minimum size, the most straightforward, brute-force way to do it is to individually switch each element off and measure the increase in total error on the training set. We then pick the element which has the least impact on the total error, and remove it. Rinse and repeat. This is extremely computationally expensive for a reasonably large neural network and training set. Alternatively, we might accomplish this using any number of much faster off-the-shelf pruning algorithms, such as Skeletonization (Mozer & Smolensky 1989a), Optimal Brain Damage (LeCun et al. 1989), or later variants such as Optimal Brain Surgeon (Hassibi & Stork 1993). In fact, we borrow much of our inspiration from these algorithms, with one major variation: instead of pruning individual weights, we prune entire neurons, thereby eliminating all of their incoming and outgoing weight parameters in one go, resulting in more memory saved, faster.

The algorithm developed for this paper is targeted at reducing the total number of neurons in a trained network, which is one way of reducing its computational memory footprint. This is often a desirable criterion to minimize in the case of resource-constrained or embedded devices, and it also allows us to probe the limitations of pruning down to the very last essential network elements. In terms of generalization as well, we can measure the error of the network on the test set as each element is sequentially removed from the network. With an oracle pruning algorithm, what we expect to observe is that the output of the network remains stable as the first few superfluous neurons are removed, and as we start to bite into the more crucial members of the function approximation, the error should start to rise dramatically. In this paper, the brute-force approach described at the beginning of this section serves as a proxy for an oracle pruning algorithm. One reason to rank and prune individual neurons as opposed to weights is that there are far fewer elements to consider.
Furthermore, the removal of a single weight from a large network is a drop in the bucket in terms of reducing the network's core memory footprint. If we want to reduce the size of a network as efficiently as possible, we argue that pruning neurons instead of weights is more efficient both computationally and practically, in terms of quickly reaching a hypothetical target reduction in memory consumption. This approach also offers downstream applications a realistic expectation of the minimal increase in error resulting from the removal of a specified percentage of neurons. Such trade-offs are unavoidable, but performance impacts can be limited if a principled approach is used to find the best candidate neurons for removal.

It is well known that too many free parameters in a neural network can lead to overfitting. Regardless of the number of weights used in a given network, as Segee & Carter (1991) assert, the representation of a learned function approximation is almost never evenly distributed over the hidden units, and thus the removal of any single hidden unit at random can actually result in a network fault. Mozer & Smolensky (1989b) argue that only a subset of the hidden units in a neural network actually latch on to the invariant or generalizing properties of the training inputs, and the rest learn either to mutually cancel each other's influence or to overfit to the noise in the data. We leverage this idea in the current work to rank all neurons in pre-trained networks based on their effective contributions to overall performance. We then remove the unnecessary neurons to reduce the network's footprint. Through our experiments we not only concretely validate the theory put forth by Mozer & Smolensky (1989b), but we also successfully build on it to prune networks to 40 to 60% of their original size without any major loss in performance.

3 PRUNING NEURONS TO SHRINK NEURAL NETWORKS

As discussed in Section 1, our aim is to leverage the highly non-uniform distribution of the learning representation in pre-trained neural networks to eliminate redundant neurons, without focusing on individual weight parameters. Taking this approach enables us to remove all the weights (incoming and outgoing) associated with a non-contributing neuron at once. We note that in an ideal scenario, based on the neuron interdependency theory put forward by Mozer & Smolensky (1989a), one would evaluate all possible combinations of neurons to remove (one at a time, two at a time, three at a time, and so forth) to find the optimal subset of neurons to keep. This is computationally unacceptable, so we focus on removing one neuron at a time and explore greedy algorithms that do this more efficiently.

The general approach taken to prune an optimally trained neural network is to create a ranked list of all the neurons in the network based on one of the three proposed ranking criteria: a brute-force measurement, a linear approximation, or a quadratic approximation of each neuron's impact on the output of the network. We then test the effects of removing neurons on the accuracy and error of the network. All the algorithms and methods presented here are easily parallelizable as well. One last thing to note before moving forward is that the methods discussed in this section involve some non-trivial derivations which are beyond the scope of this paper. We are more focused on analyzing the implications of these methods for our understanding of neural network learning representations. However, a complete step-by-step derivation and proof of all the results presented is provided in the Supplementary Material as an Appendix.

3.1 BRUTE FORCE REMOVAL APPROACH

This is perhaps the most naive yet the most accurate method for pruning the network. It is also the slowest, and hence possibly unusable on large-scale neural networks with thousands of neurons. This method explicitly evaluates each neuron in the network. The idea is to manually check the effect of every single neuron on the output. This is done by running forward propagation on the validation set K times, where K is the total number of neurons in the network, turning off exactly one neuron each time (keeping all other neurons active) and noting the change in error. Turning a neuron off can be achieved by simply setting its output to 0, which effectively turns off all the outgoing weights from that neuron. This change in error is then used to generate the ranked list.
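As a concrete illustration, the following minimal sketch (our own, not the authors' implementation) performs this brute-force ranking for a one-hidden-layer sigmoid network; the mask vector implements switching a neuron off by clamping its output to zero:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2, mask):
    # mask is a vector over hidden units; a 0 entry clamps that neuron's
    # output, and hence all of its outgoing weights, to zero.
    hidden = sigmoid(X @ W1 + b1) * mask
    return sigmoid(hidden @ W2 + b2)

def total_error(Y, T):
    # Total squared error over the validation set.
    return 0.5 * np.sum((Y - T) ** 2)

def brute_force_ranking(X, T, W1, b1, W2, b2):
    # Turn off exactly one neuron at a time and record the change in
    # error; neurons whose removal hurts least are ranked first.
    n_hidden = W1.shape[1]
    on = np.ones(n_hidden)
    base = total_error(forward(X, W1, b1, W2, b2, on), T)
    deltas = np.empty(n_hidden)
    for k in range(n_hidden):
        mask = on.copy()
        mask[k] = 0.0
        deltas[k] = total_error(forward(X, W1, b1, W2, b2, mask), T) - base
    return np.argsort(deltas)

The K full forward passes over the validation set are what make this criterion exact but expensive.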
3.2 TAYLOR SERIES REPRESENTATION OF ERROR

Let us denote by E the total error of the optimally trained neural network on a given validation dataset. E can be seen as a function of O, where O is the output of any general neuron in the network. This error can be approximated around a particular neuron's output, say O_k, using a second-order Taylor series:

\hat{E}(O) = E(O_k) + (O - O_k)\,\frac{\partial E}{\partial O}\bigg|_{O_k} + \frac{1}{2}\,(O - O_k)^2\,\frac{\partial^2 E}{\partial O^2}\bigg|_{O_k}    (1)

When a neuron is pruned, its output O becomes 0. Replacing O by O_k in equation (1) shows that the error is approximated perfectly by equation (1) at O_k. So:

\Delta E_k = \hat{E}(0) - \hat{E}(O_k) = -O_k\,\frac{\partial E}{\partial O}\bigg|_{O_k} + \frac{1}{2}\,O_k^2\,\frac{\partial^2 E}{\partial O^2}\bigg|_{O_k}    (2)

where \Delta E_k is the change in the total error of the network when exactly one neuron k is turned off. Most of the terms in this equation are fairly easy to compute: we already have O_k from the activations of the hidden units, and we already compute \partial E/\partial O|_{O_k} for each training instance during back-propagation. The \partial^2 E/\partial O^2|_{O_k} terms are a little more difficult to compute. They are derived in the appendix and summarized in the sections below.

3.2.1 LINEAR APPROXIMATION APPROACH

We can use equation (2) to get the linear approximation of the change in error due to the k-th neuron being turned off, denoted \Delta E_k^1:

\Delta E_k^1 = -O_k\,\frac{\partial E}{\partial O}\bigg|_{O_k}    (3)

The derivative term above is the first-order gradient, which represents the change in error with respect to the output of a given neuron. This term can be collected during back-propagation. As we shall see later in this section, linear approximations are not reliable indicators of the change in error, but they provide us with an interesting basis for comparison with the other methods discussed in this paper.

3.2.2 QUADRATIC APPROXIMATION APPROACH

As above, we can use equation (2) to get the quadratic approximation of the change in error due to the k-th neuron being turned off, denoted \Delta E_k^2:

\Delta E_k^2 = -O_k\,\frac{\partial E}{\partial O}\bigg|_{O_k} + \frac{1}{2}\,O_k^2\,\frac{\partial^2 E}{\partial O^2}\bigg|_{O_k}    (4)

The additional second-order gradient term appearing above represents the quadratic change in error with respect to the output of a given neuron. This term can be generated by performing back-propagation using second-order derivatives. Collecting these quadratic gradients involves some non-trivial mathematics; the entire step-by-step derivation procedure is provided in the Supplementary Material as an Appendix.
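Given the gradients collected during the second-order back-propagation pass, equations (3) and (4) reduce to element-wise expressions over all neurons. In the sketch below (a hedged illustration; the array names and the clipping scale are our assumptions, not values from the paper), outputs holds O_k, grads holds \partial E/\partial O|_{O_k}, and grads2 holds \partial^2 E/\partial O^2|_{O_k}:

import numpy as np

def taylor_error_deltas(outputs, grads, grads2):
    # Equation (3): linear estimate of the error change when each
    # neuron's output O_k is driven to zero.
    dE1 = -outputs * grads
    # Equation (4): adds the quadratic (second-order) correction term.
    dE2 = dE1 + 0.5 * outputs ** 2 * grads2
    return dE1, dE2

def median_clip(deltas, scale=10.0):
    # One possible reading of the thresholding described below: any
    # estimate whose magnitude explodes (e.g. where the parabolic fit
    # undergoes a steep slope change) is replaced by the median
    # estimate. The scale factor is an assumption.
    med = np.median(deltas)
    return np.where(np.abs(deltas) > scale * np.abs(med), med, deltas)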

3.3 PROPOSED PRUNING ALGORITHM

Figure 1 shows an arbitrary error function plotted against the output of a given neuron. Note that this figure is for illustration purposes only. The error function is minimized at a particular value of the neuron output, as can be seen in the figure. The process of training a neural network is essentially the process of finding these minimizing output values for all the neurons in the network. Pruning this particular neuron (which translates to getting a zero output from it) will result in a change in the total overall error. This change in error is represented by the distance between the original minimum error (shown by the dashed line) and the top red arrow. This neuron is clearly a bad candidate for removal, since removing it will result in a huge error increase. The straight red line in the figure represents the first-order Taylor series approximation of the error as described before, while the parabola represents the second-order approximation. It can be clearly seen that the second-order approximation is a much better estimate of the change in error.

One thing to note here is that in some cases thresholding is required when approximating the error using the second-order Taylor series expansion. These cases might arise when the parabolic approximation undergoes a steep slope change. To take such cases into account, mean and median thresholding were employed, where any change above a certain threshold was assigned the mean or median value, respectively.

Figure 1: The intuition behind 1st & 2nd order neuron pruning decisions. (Curves: real change in error; 1st and 2nd order approximations and their estimates.)

Two pruning algorithms are proposed here. They differ in the way the neurons are ranked, but both use \Delta E_k, the approximation of the change in error, as the basis for the ranking. \Delta E_k can be calculated using the brute-force method or one of the two Taylor series approximations discussed previously. The first step in both algorithms is to decide on a stopping criterion. This can vary depending on the application, but some intuitive stopping criteria are: maximum number of neurons to remove, percentage scaling needed, maximum allowable accuracy drop, etc.

3.3.1 ALGORITHM I: SINGLE OVERALL RANKING

The complete algorithm is shown in Algorithm 1. The idea here is to generate a single ranked list based on the values of \Delta E_k. This involves a single pass of second-order back-propagation, without weight updates, to collect the gradients for each neuron. The neurons from this ranked list with the lowest values of \Delta E_k are then pruned according to the chosen stopping criterion. We note that this algorithm is intentionally naive and is used for comparison only.

Data: optimally trained network, training set
Result: a pruned network
initialize and define stopping criterion;
perform forward propagation over the training set;
perform second-order back-propagation without updating weights and collect linear and quadratic gradients;
rank the neurons based on \Delta E_k;
while stopping criterion is not met do
    remove the last-ranked neuron;
end
Algorithm 1: Single Overall Ranking

3.3.2 ALGORITHM II: ITERATIVE RE-RANKING

In this greedy variation of the algorithm (Algorithm 2), after each neuron removal the remaining network undergoes a single forward and backward pass of second-order back-propagation, without weight updates, and the ranked list is formed again. Hence, each removal involves a new pass through the network. This method is computationally more expensive, but it takes into account the dependencies the neurons might have on one another, which would lead to a change in error contribution every time a dependent neuron is removed.

Data: optimally trained network, training set
Result: a pruned network
initialize and define stopping criterion;
while stopping criterion is not met do
    perform forward propagation over the training set;
    perform second-order back-propagation without updating weights and collect linear and quadratic gradients;
    rank the remaining neurons based on \Delta E_k;
    remove the worst-ranked neuron;
end
Algorithm 2: Iterative Re-Ranking
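A minimal sketch of Algorithm 2 as a loop follows. The helper callables are assumptions standing in for the machinery described above: rank_neurons performs the forward and second-order backward passes and returns the surviving neuron ids sorted by estimated \Delta E_k (smallest first), and remove_neuron deletes a neuron together with its incoming and outgoing weights.

def iterative_rerank_prune(net, data, rank_neurons, remove_neuron, stop):
    # Greedily remove one neuron at a time, re-ranking the survivors
    # after every removal to account for their interdependencies.
    removed = []
    while not stop(net, removed):
        ranking = rank_neurons(net, data)  # fresh pass over the data
        candidate = ranking[0]             # smallest estimated error increase
        remove_neuron(net, candidate)
        removed.append(candidate)
    return removed

Swapping rank_neurons between the brute-force measurement and the two Taylor estimates yields the three criteria compared in Section 4.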

4 EXPERIMENTAL RESULTS

4.1 EXAMPLE REGRESSION PROBLEM

This problem serves as a quick example to demonstrate many of the phenomena described in previous sections. We trained two networks to learn the cosine function, with one input and one output. This is a task which requires no more than 11 sigmoid neurons to solve entirely, and in this case we don't care about overfitting because the cosine function has a precise definition. Furthermore, the cosine function is a good toy example because it is a smooth continuous function and, as demonstrated by Nielsen (2015), if we were to tinker directly with the weight and bias parameters of the network, we could allocate individual units within the network to be responsible for constrained ranges of inputs, similar to a basis spline function with many control points. This would distribute the learned function approximation evenly across all hidden units, and thus we would have presented the network with a problem in which it could productively use as many hidden units as we give it. In this case, a pruning algorithm would observe a fairly consistent increase in error after the removal of each successive unit. In practice, however, regardless of the number of experimental trials, this is not what happens. The network will always use 10-11 hidden units and leave the rest to cancel each other's influence.
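For concreteness, here is a sketch of the toy regression data under stated assumptions (the number of points and the input interval are ours; the paper does not specify them):

import numpy as np

def cosine_dataset(n_points=1000):
    # One input, one output: learn y = cos(x) on a fixed interval.
    X = np.linspace(-np.pi, np.pi, n_points).reshape(-1, 1)
    return X, np.cos(X)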

Figure 2: Degradation in squared error after pruning a two-layer network trained to compute the cosine function. Left network: 2 layers, 100 neurons each, 1 output, logistic sigmoid activation. Right network: 2 layers, 50 neurons each, 1 output, logistic sigmoid activation.

Figure 2 shows two graphs. Both graphs demonstrate the use of the iterative re-ranking algorithm and the comparative performance of the brute-force pruning method (in blue), the first-order method (in green), and the second-order method (in red). The graph on the left shows the performance of these algorithms starting from a network with two layers of 100 neurons (200 total), and the graph on the right shows a network with two layers of 50 neurons (100 total).

In the left graph, we see that the brute-force method shows a graceful degradation: the error only begins to rise sharply after 50% of the total neurons have been removed, and is basically constant up to that point. In the first- and second-order methods, we see evidence of poor decision making in the sense that both made mistakes early on, which disrupted the output function approximation. The first-order method made a large error early on, and though this error was corrected somewhat after a few more neurons were removed, things only got worse from there. This is direct evidence of the lack of fault tolerance in a trained neural network. This phenomenon is even more starkly demonstrated in the second-order method. After making a few poor neuron removal decisions in a row, the error signal rose sharply, and then went back to zero after the 60th neuron was removed. This is due to the fact that the neurons it chose to remove were trained to cancel each other's influence within a localized part of the network. After the entire group was eliminated, the approximation returned to normal. This can only happen if the output function approximation is not evenly distributed over the hidden units in a trained network.

This phenomenon is even more starkly demonstrated in the graph on the right. Here we see the first-order method got lucky in the beginning and made decent decisions up to about the 40th removed neuron. The second-order method had a small error in the beginning, which it recovered from gracefully, and proceeded to pass the 50-neuron point before finally beginning to unravel. The brute-force method, in sharp contrast, shows little to no increase in error at all until 90% of the neurons in the network have been obliterated. Clearly, first- and second-order methods have some value in that they do not make completely arbitrary choices, but the brute-force method is far better at this task. This also demonstrates the sharp dualism in neuron roles within a trained network. These networks were trained to near-perfect precision, and each pruning method was applied without any re-training of any kind. Clearly, in the case of the brute-force (oracle) method, up to 90% of the network can be completely extirpated before the output approximation even begins to show any signs of degradation. This would be impossible if the learning representation were evenly or equitably distributed. Note, for example, that the degradation point in both cases is approximately the same. This example is not a real-world application, of course, but it brings into very clear focus the kind of phenomena we discuss in the following sections.

4.2 RESULTS ON MNIST DATASET

For all the results presented in this section, the MNIST database of handwritten digits by LeCun & Cortes (2010) was used. It is worth noting that, due to the time taken by the brute-force algorithm, we used a 5000-image subset of the MNIST database in which we normalized the pixel values between 0 and 1.0 and compressed the images to 20x20 pixels rather than 28x28, so the starting test accuracies reported here appear higher than those reported by LeCun et al. We do not believe that this affects the interpretation of the presented results, because the basic learning problem does not change with a larger dataset or input dimension.
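A sketch of this preprocessing under stated assumptions: the paper does not say how the images were shrunk to 20x20, so a center crop of the mostly blank 4-pixel border stands in here, and raw is assumed to be an (N, 28, 28) uint8 array:

import numpy as np

def preprocess_mnist_subset(raw, n_images=5000):
    # Keep a 5000-image subset, trim each 28x28 image to its central
    # 20x20 window, and scale pixel values into [0, 1].
    X = raw[:n_images, 4:24, 4:24].astype(np.float64) / 255.0
    return X.reshape(n_images, -1)  # (5000, 400) input vectors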
4.3 PRUNING A 1-LAYER NETWORK

The network architecture in this case consisted of 1 layer, 100 neurons, 10 outputs, logistic sigmoid activations, and a starting test accuracy of 0.998.

4.3.1 SINGLE OVERALL RANKING ALGORITHM

We first present the results for a single-layer neural network in Figure 3, using the Single Overall Ranking algorithm (Algorithm 1) as proposed in Section 3. We again note that this algorithm is intentionally naive and is used for comparison only; its performance should be expected to be poor. After training, each neuron is assigned its permanent ranking based on the three criteria discussed previously: a brute-force ground-truth ranking, and two approximations of this ranking using first- and second-order Taylor estimates of the change in network output error resulting from the removal of each neuron.

An interesting observation here is that with only a single layer, no criterion for ranking the neurons in the network (brute force or the two Taylor series variants) emerges superior under Algorithm 1, indicating that the 1st and 2nd order Taylor series methods are actually reasonable approximations of the brute-force method under certain conditions.

Figure 3: Degradation in squared error (left) and classification accuracy (right) after pruning a single-layer network using the Single Overall Ranking algorithm. Network: 1 layer, 100 neurons, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.998.

Of course, this method is still quite bad in terms of the rate of degradation of the classification accuracy, and in practice we would likely follow Algorithm 2, which takes into account the observations of Mozer & Smolensky (1989a) stated in Section 2. The purpose of the present investigation, however, is to demonstrate how much of a trained network can theoretically be removed without altering the network's learned parameters in any way.

4.3.2 ITERATIVE RE-RANKING ALGORITHM

Figure 4: Degradation in squared error (left) and classification accuracy (right) after pruning a single-layer network using the iterative re-ranking algorithm. Network: 1 layer, 100 neurons, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.998.

In Figure 4 we present our results using Algorithm 2, the iterative re-ranking algorithm, in which all remaining neurons are re-ranked after each successive neuron is switched off. We compute the same brute-force rankings and Taylor series approximations of the error deltas over the remaining active neurons in the network after each pruning decision. This is intended to account for the effects of canceling interactions between neurons. There are two key observations here. First, using the brute-force ranking criterion, almost 60% of the neurons in the network can be pruned away without any major loss in performance. Second, the 2nd order Taylor series approximation of the error performs consistently better than its 1st order version in most situations, though Figure 21 is a poignant counter-example.

4.3.3 VISUALIZATION OF ERROR SURFACE & PRUNING DECISIONS

As explained in Section 3, these graphs are a visualization of the error surface of the network output with respect to the neurons chosen for removal using each of the three ranking criteria, represented in intervals of 10 neurons. In each graph, the error surface of the network output is displayed in log space (left) and in real space (right) with respect to each candidate neuron chosen for removal. We create these plots during the pruning exercise by picking a neuron to switch off, and then multiplying its output by a scalar gain value α which is adjusted upward from 0.0 in steps of 0.1. When the value of α is 1.0, this represents the unperturbed neuron output learned during training. Between 0.0 and 1.0, we are graphing the literal effect of turning the neuron off (α = 0.0), and when α > 1.0 we are simulating a boosting of the neuron's influence in the network, i.e. inflating the value of its outgoing weight parameters.

We graph the effect of boosting the neuron's output to demonstrate that for certain neurons in the network, even doubling, tripling, or quadrupling the scalar output of the neuron has no effect on the overall error of the network, indicating the remarkable degree to which the network has learned to ignore the value of certain parameters. In other cases, we can get a sense of the sensitivity of the network's output to the value of a given neuron when the curve rises steeply after the red 1.0 line. This indicates that the learned values of the parameters emanating from a given neuron are relatively important, and this is why we should ideally see sharper upticks in the curves for the later-removed neurons in the network, that is, when the neurons crucial to the learning representation start to be picked off.

Some very interesting observations can be made in each of these graphs. Remember that lower is better in terms of the height of the curve, and that minimal or negative horizontal change between the vertical red line at 1.0 (neuron on, α = 1.0) and 0.0 (neuron off, α = 0.0) is indicative of a good candidate neuron to prune, i.e. there will be minimal effect on the network output when the neuron is removed.
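The sweep itself is straightforward to sketch. Reusing forward() and total_error() from the brute-force sketch in Section 3.1, the binary mask simply becomes a real-valued gain vector; the upper end of the alpha range below is illustrative, not a value from the paper:

import numpy as np

def gain_sweep(X, T, W1, b1, W2, b2, k, alphas=None):
    # Record the network's total error as neuron k's output is scaled
    # by each gain: alpha = 0.0 turns the neuron off, alpha = 1.0 leaves
    # it untouched, and alpha > 1.0 boosts its outgoing influence.
    if alphas is None:
        alphas = np.arange(0.0, 4.01, 0.1)
    n_hidden = W1.shape[1]
    errors = []
    for alpha in alphas:
        gain = np.ones(n_hidden)
        gain[k] = alpha
        errors.append(total_error(forward(X, W1, b1, W2, b2, gain), T))
    return np.array(errors)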

4.3.4 VISUALIZATION OF BRUTE FORCE PRUNING DECISIONS

Figure 5: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute-force criterion. Network: 1 layer, 100 neurons, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.998.

In Figure 5, we notice how low to the floor and flat most of the curves are. It is not until the 90th removed neuron that we see a higher curve with a more convex shape, clearly a more sensitive, influential piece of the network.

4.3.5 VISUALIZATION OF 1ST ORDER APPROXIMATION PRUNING DECISIONS

Figure 6: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the 1st order Taylor series error approximation criterion. Network: 1 layer, 100 neurons, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.998.

It can be seen in Figure 6 that most choices seem to have flat or negatively sloped curves, indicating that the first-order approximation seems to be fairly good, but examining the brute-force choices shows they could be better.

4.3.6 VISUALIZATION OF 2ND ORDER APPROXIMATION PRUNING DECISIONS

Figure 7: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the 2nd order Taylor series error approximation criterion. Network: 1 layer, 100 neurons, 10 outputs, logistic sigmoid activation, starting test accuracy: 0.998.

The choices in Figure 7 look similar to the brute-force method's choices, though clearly not as good: they are more spread out. Notice the difference in convexity between the 2nd and 1st order methods' choices. It is clear that the first-order method is fitting a line and the second-order method is fitting a parabola in their approximations.

4.4 PRUNING A 2-LAYER NETWORK

The network architecture in this case consisted of 2 layers, 50 neurons per layer, 10 outputs, logistic sigmoid activations, and a starting test accuracy of 1.0.

4.4.1 SINGLE OVERALL RANKING ALGORITHM

Figure 8 shows the pruning results for Algorithm 1 on a 2-layer network. The ranking procedure is identical to the one used to generate Figure 3. We again note that this algorithm is intentionally naive and is used for comparison only; its performance should be expected to be poor. Unsurprisingly, a 2-layer network is harder to prune, because a single overall ranking will never capture the interdependencies between neurons in different layers. It makes sense that this is worse than the performance on the 1-layer network, even if this method is already known to be bad, and we would likely never use it in practice.

Figure 8: Degradation in squared error (left) and classification accuracy (right) after pruning a 2-layer network using the Single Overall Ranking algorithm. Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.0.

4.4.2 ITERATIVE RE-RANKING ALGORITHM

Figure 9: Degradation in squared error (left) and classification accuracy (right) after pruning a 2-layer network using the iterative re-ranking algorithm. Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.0.

Figure 9 shows the results from using Algorithm 2 on a 2-layer network. We compute the same brute-force rankings and Taylor series approximations of the error deltas over the remaining active neurons in the network after each pruning decision used to generate Figure 4. Again, this is intended to account for the effects of canceling interactions between neurons. It is clear that it becomes harder to remove neurons one by one in a deeper network, which makes sense because the neurons have more interdependencies. We see an overall better performance with the 2nd order method than with the 1st order method, except for the first 20% of the neurons, though this does not seem to make much difference for classification accuracy. Perhaps a more important observation here is that even with a more complex network, it is possible to remove up to 40% of the neurons with no major loss in performance, as clearly illustrated by the brute-force curve. This shows the clear potential of an ideal pruning technique, and also shows how inconsistent 1st and 2nd order Taylor series approximations of the error can be as ranking criteria.

4.4.3 VISUALIZATION OF ERROR SURFACE & PRUNING DECISIONS

As in the case of the single-layer network, these graphs are a visualization of the error surface of the network output with respect to the neurons chosen for removal using each algorithm, represented in intervals of 10 neurons.

4.4.4 VISUALIZATION OF BRUTE FORCE PRUNING DECISIONS

Figure 10: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute-force criterion. Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.0.

In Figure 10 it is clear why these neurons were chosen: their curves show little change when the neuron is removed, lie mostly near the floor, and exhibit convex error-surface behavior, which supports the rationale for using 2nd order methods to estimate the difference in error when they are turned off.

4.4.5 VISUALIZATION OF 1ST ORDER APPROXIMATION PRUNING DECISIONS

Figure 11: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the 1st order Taylor series error approximation criterion. Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.0.

Drawing a flat line at the point of each neuron's intersection with the red vertical line (no change in gain) shows that the first-derivative method is actually accurate for estimating the change in error in these cases, but it still ultimately leads to poor decisions.

4.4.6 VISUALIZATION OF 2ND ORDER APPROXIMATION PRUNING DECISIONS

Figure 12: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the 2nd order Taylor series error approximation criterion. Network: 2 layers, 50 neurons/layer, 10 outputs, logistic sigmoid activation, starting test accuracy: 1.0.

Clearly these neurons are not overtly poor candidates for removal (the error doesn't change much between 1.0 and the zero-crossing on the left-hand side), but the choices could be better, as described in the brute-force criterion discussion above.

4.5 INVESTIGATION OF PRUNING PERFORMANCE WITH IMPERFECT STARTING CONDITIONS

In our experiments thus far we have tacitly assumed that we start with a network which has learned an optimal representation of the training objective, i.e. it has been trained to the point where we accept its performance on the test set. Here we explore what happens when we prune a sub-optimal starting network. If the assumptions of this paper regarding the nature of neural network learning are correct, we expect that two processes are essentially at work during back-propagation training. First, we expect that the neurons which directly participate in the fundamental learning representation (even if redundantly) work together to reduce error on the training data. Second, we expect that neurons which do not directly participate in the learning representation work to cancel each other's negative influence. Furthermore, we expect that these two groups are essentially distinct, as evinced by the fact that multiple neurons can often be removed as a group with little to no effect on the network output. Some non-trivial portion of the training time, then, is spent doing work which has nothing intrinsically to do with the learning representation and essentially functions as noise cancellation. If this is the case, when we attempt to prune a network which has not fully canceled the noisy influence of extraneous or redundant units, we might expect to see the error actually improve after removing a few bad apples. This is in fact what we observe, as demonstrated in the following experiments.

For each experiment in this section we trained with the full MNIST training set (LeCun & Cortes 2010), uncompressed and without any data normalization. We trained three different networks to learn to distinguish a single handwritten digit from the rest of the data. The network architectures were each composed of 784 inputs, 1 hidden layer with 100 neurons, and 2 soft-max outputs: one to say yes, and the other to say no. These networks were trained to distinguish the digits 0, 1, and 2, and their respective starting accuracies were a sub-optimal 0.9757, 0.9881, and a similarly sub-optimal value. Finally, we only consider the iterative re-ranking algorithm, as the single overall ranking algorithm is clearly nonviable.
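As a small illustrative sketch (the naming is ours, not the paper's), the two-output yes/no targets for one of these single-digit networks can be built from the integer labels as follows:

import numpy as np

def one_vs_all_targets(labels, digit):
    # labels: (N,) integer MNIST labels. Returns (N, 2) soft-max targets:
    # column 0 fires for the chosen digit ("yes"), column 1 otherwise ("no").
    yes = (labels == digit).astype(np.float64)
    return np.stack([yes, 1.0 - yes], axis=1)

# e.g. targets for the digit-0 network:
# targets = one_vs_all_targets(train_labels, 0)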

4.5.1 MNIST SINGLE DIGIT CLASSIFICATION: DIGIT 0

Figure 13 shows the degradation in squared error after removing neurons from a network trained to distinguish the digit 0. What we observe is that the first- and second-order methods both fail in different ways, though clearly the second-order method makes better decisions overall. The first-order method explodes spectacularly in the first few iterations. The brute-force method, in stark contrast, actually improves in the first few iterations, and remains essentially flat until around the 60% mark, at which point it begins to gradually increase and meet the other curves.

Figure 13: Degradation in squared error after pruning a single-layer network trained to do a one-versus-all classification of the digit 0, using the iterative re-ranking algorithm.

The behavior of the brute-force method here demonstrates that the network was essentially working to cancel the effect of a few bad neurons when the training convergence criteria were met, i.e. when the network was no longer able to make progress on the training set. After removing these neurons during pruning, the output improved. We can investigate this by looking at the error surface with respect to the neurons chosen for removal by each method in turn. Below, Figure 14 shows the graph for the brute-force method.

Figure 14: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute-force iterative re-ranking removal criterion.

Figure 14 shows an interesting phenomenon, which we will see in later experiments as well. The high blue curve corresponding to neuron 0 is negatively sloped in the beginning, and clearly, after removing this neuron, the output will improve. The rest of the curves, in correspondence with the squared error degradation curve above, are mostly flat and tightly layered together, indicating that they are good neurons to remove.

In Figure 15 below, we observe a stark contrast to this. The curves corresponding to the first two chosen neurons are mostly flat, and fairly lower than the rest, though clearly a mistake was made early on, and the rest of the curves are clearly bad choices. In all of these cases, however, we see that the curves are easily approximated with a straight line, and so the first-order method may have been fairly accurate in its predictions, even though it still made poor decisions.

Figure 15: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the first-order iterative re-ranking removal criterion.

Figure 15 is an example of how things can go south once a few bad mistakes are made at the outset. Figure 16 shows a much better set of choices made by the second-order method, though clearly not as good as the brute-force method. The log-space plots make it a bit easier to see the difference between the brute-force and second-order methods in Figures 14 and 16, respectively.

Figure 16: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the second-order iterative re-ranking removal criterion.

4.5.2 MNIST SINGLE DIGIT CLASSIFICATION: DIGIT 1

Examining Figure 17, we see a much starker example of the previous phenomenon, in which the brute-force method continues to improve the performance of the network after removing 80% of its neurons. The first- and second-order methods fail early and proceed in fits and starts, clearly demonstrating evidence of interrelated groups of noise-canceling neurons, and never fully recover. It should be noted that it would be impossible to see curves like this if neural networks distributed the learning representation evenly or equitably over their hidden units.

One of the most striking things about the blue curve in Figure 17 is the fact that the error never climbs back above its starting value until the curve crosses the 80% mark, indicating that only 20% of the neurons in this network are actually essential to learning the training objective. In this sense, we can only wonder how much of the training time was spent winnowing the error out of the remaining 80% of the network.

Figure 17: Degradation in squared error after pruning a single-layer network trained to do a one-versus-all classification of the digit 1, using the iterative re-ranking algorithm.

Figure 18: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute-force iterative re-ranking removal criterion.

In Figures 18, 19 and 20 we can examine the choices made by the respective methods. The brute-force method serves as our example of a near-optimal pruning regimen, and the rest are first- and second-order approximations of it. Small differences, clearly, can lead to large effects on the network output, as shown in the figures.

4.5.3 MNIST SINGLE DIGIT CLASSIFICATION: DIGIT 2

Figure 21 is an interesting case because it shatters our confidence in the reliability of the second-order method to make good pruning decisions, and further demonstrates the phenomenon of how much the error can improve if the right neurons are removed after training gets stuck. In this case, though still a poor performance overall, the first-order method vastly outperforms the second-order method.

Figure 22 shows a clear example of the first element chosen for removal having a negative error slope, and improving the output as a result. The rest of the pruning decisions are reasonable. Comparing with the blue curve in Figure 21, we see the correspondence between the first pruning decision improving the output and the remaining pruning decisions keeping the output fairly flat. Clearly, however, there isn't much room to get worse given our starting point with a sub-optimal network, and we see that the ending sum of squared errors is not much higher than the starting point. At the same time, we can still see the contrast in performance if we make optimal pruning decisions, and most of the neurons in this network were clearly doing nothing.

Figure 19: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the first-order iterative re-ranking removal criterion.

Figure 20: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the second-order iterative re-ranking removal criterion.

Figure 21: Degradation in squared error after pruning a single-layer network trained to do a one-versus-all classification of the digit 2, using the iterative re-ranking algorithm.

In Figure 23, we see a mixed bag in which the decisions are clearly sub-optimal, though much better than in Figure 24, where we can observe how a bad first decision essentially ruined the network for good. The jagged edges of the red curve in Figure 21 correspond with the positive and negative slopes of the cluster of bad pruning decisions in Figure 24.

Figure 22: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the brute-force iterative re-ranking removal criterion.

Once again, these are not necessarily bad decisions, but the starting point is already bad, and this cannot be recovered without re-training the network.

Figure 23: Error surface of the network output in log space (left) and real space (right) with respect to each candidate neuron chosen for removal using the first-order iterative re-ranking removal criterion.

4.5.4 ASIDE: IMPLICATIONS OF THIS EXPERIMENT

From the three examples above, we see that in each case, starting from a sub-optimal network, a brute-force removal technique consistently improves performance for the first few pruning iterations, and the sum of squared errors does not degrade beyond its starting point until around 60-80% of the neurons have been removed. This is only possible if there is an essentially strict dichotomy between the roles of different neurons during training. If the network needs only 20-40% of the neurons it began with, the training process is essentially dominated by the task of canceling the residual noise of the redundant neurons. Furthermore, the network can get stuck in training with redundant units and distort the final output. This is strong evidence for our thesis that the learning representation is neither equitably nor evenly distributed, and that most of the neurons which do not directly participate in the learning representation can be removed without any re-training.

4.6 EXPERIMENTS ON TOY DATASETS

As can be seen from the experiments on MNIST, even though the 2nd-order approximation criterion is consistently better than the 1st-order one, its performance is not nearly as good as brute-force based ranking.


More information

1 GSW Multipath Channel Models

1 GSW Multipath Channel Models In the general case, the moble rado channel s pretty unpleasant: there are a lot of echoes dstortng the receved sgnal, and the mpulse response keeps changng. Fortunately, there are some smplfyng assumptons

More information

4.3- Modeling the Diode Forward Characteristic

4.3- Modeling the Diode Forward Characteristic 2/8/2012 3_3 Modelng the ode Forward Characterstcs 1/3 4.3- Modelng the ode Forward Characterstc Readng Assgnment: pp. 179-188 How do we analyze crcuts wth juncton dodes? 2 ways: Exact Solutons ffcult!

More information

Weighted Penalty Model for Content Balancing in CATS

Weighted Penalty Model for Content Balancing in CATS Weghted Penalty Model for Content Balancng n CATS Chngwe Davd Shn Yuehme Chen Walter Denny Way Len Swanson Aprl 2009 Usng assessment and research to promote learnng WPM for CAT Content Balancng 2 Abstract

More information

problems palette of David Rock and Mary K. Porter 6. A local musician comes to your school to give a performance

problems palette of David Rock and Mary K. Porter 6. A local musician comes to your school to give a performance palette of problems Davd Rock and Mary K. Porter 1. If n represents an nteger, whch of the followng expressons yelds the greatest value? n,, n, n, n n. A 60-watt lghtbulb s used for 95 hours before t burns

More information

Analysis of Time Delays in Synchronous and. Asynchronous Control Loops. Bj rn Wittenmark, Ben Bastian, and Johan Nilsson

Analysis of Time Delays in Synchronous and. Asynchronous Control Loops. Bj rn Wittenmark, Ben Bastian, and Johan Nilsson 37th CDC, Tampa, December 1998 Analyss of Delays n Synchronous and Asynchronous Control Loops Bj rn Wttenmark, Ben Bastan, and Johan Nlsson emal: bjorn@control.lth.se, ben@control.lth.se, and johan@control.lth.se

More information

Parameter Free Iterative Decoding Metrics for Non-Coherent Orthogonal Modulation

Parameter Free Iterative Decoding Metrics for Non-Coherent Orthogonal Modulation 1 Parameter Free Iteratve Decodng Metrcs for Non-Coherent Orthogonal Modulaton Albert Gullén Fàbregas and Alex Grant Abstract We study decoder metrcs suted for teratve decodng of non-coherently detected

More information

UNIT 11 TWO-PERSON ZERO-SUM GAMES WITH SADDLE POINT

UNIT 11 TWO-PERSON ZERO-SUM GAMES WITH SADDLE POINT UNIT TWO-PERSON ZERO-SUM GAMES WITH SADDLE POINT Structure. Introducton Obectves. Key Terms Used n Game Theory.3 The Maxmn-Mnmax Prncple.4 Summary.5 Solutons/Answers. INTRODUCTION In Game Theory, the word

More information

Webinar Series TMIP VISION

Webinar Series TMIP VISION Webnar Seres TMIP VISION TMIP provdes techncal support and promotes knowledge and nformaton exchange n the transportaton plannng and modelng communty. DISCLAIMER The vews and opnons expressed durng ths

More information

PERFORMANCE EVALUATION OF BOOTH AND WALLACE MULTIPLIER USING FIR FILTER. Chirala Engineering College, Chirala.

PERFORMANCE EVALUATION OF BOOTH AND WALLACE MULTIPLIER USING FIR FILTER. Chirala Engineering College, Chirala. PERFORMANCE EVALUATION OF BOOTH AND WALLACE MULTIPLIER USING FIR FILTER 1 H. RAGHUNATHA RAO, T. ASHOK KUMAR & 3 N.SURESH BABU 1,&3 Department of Electroncs and Communcaton Engneerng, Chrala Engneerng College,

More information

Adaptive System Control with PID Neural Networks

Adaptive System Control with PID Neural Networks Adaptve System Control wth PID Neural Networs F. Shahra a, M.A. Fanae b, A.R. Aromandzadeh a a Department of Chemcal Engneerng, Unversty of Sstan and Baluchestan, Zahedan, Iran. b Department of Chemcal

More information

Discussion on How to Express a Regional GPS Solution in the ITRF

Discussion on How to Express a Regional GPS Solution in the ITRF 162 Dscusson on How to Express a Regonal GPS Soluton n the ITRF Z. ALTAMIMI 1 Abstract The usefulness of the densfcaton of the Internatonal Terrestral Reference Frame (ITRF) s to facltate ts access as

More information

Guidelines for CCPR and RMO Bilateral Key Comparisons CCPR Working Group on Key Comparison CCPR-G5 October 10 th, 2014

Guidelines for CCPR and RMO Bilateral Key Comparisons CCPR Working Group on Key Comparison CCPR-G5 October 10 th, 2014 Gudelnes for CCPR and RMO Blateral Key Comparsons CCPR Workng Group on Key Comparson CCPR-G5 October 10 th, 2014 These gudelnes are prepared by CCPR WG-KC and RMO P&R representatves, and approved by CCPR,

More information

Fall 2018 #11 Games and Nimbers. A. Game. 0.5 seconds, 64 megabytes

Fall 2018 #11 Games and Nimbers. A. Game. 0.5 seconds, 64 megabytes 5-95 Fall 08 # Games and Nmbers A. Game 0.5 seconds, 64 megabytes There s a legend n the IT Cty college. A student that faled to answer all questons on the game theory exam s gven one more chance by hs

More information

High Speed, Low Power And Area Efficient Carry-Select Adder

High Speed, Low Power And Area Efficient Carry-Select Adder Internatonal Journal of Scence, Engneerng and Technology Research (IJSETR), Volume 5, Issue 3, March 2016 Hgh Speed, Low Power And Area Effcent Carry-Select Adder Nelant Harsh M.tech.VLSI Desgn Electroncs

More information

ECE315 / ECE515 Lecture 5 Date:

ECE315 / ECE515 Lecture 5 Date: Lecture 5 Date: 18.08.2016 Common Source Amplfer MOSFET Amplfer Dstorton Example 1 One Realstc CS Amplfer Crcut: C c1 : Couplng Capactor serves as perfect short crcut at all sgnal frequences whle blockng

More information

Ensemble Evolution of Checkers Players with Knowledge of Opening, Middle and Endgame

Ensemble Evolution of Checkers Players with Knowledge of Opening, Middle and Endgame Ensemble Evoluton of Checkers Players wth Knowledge of Openng, Mddle and Endgame Kyung-Joong Km and Sung-Bae Cho Department of Computer Scence, Yonse Unversty 134 Shnchon-dong, Sudaemoon-ku, Seoul 120-749

More information

Figure 1. DC-DC Boost Converter

Figure 1. DC-DC Boost Converter EE46, Power Electroncs, DC-DC Boost Converter Verson Oct. 3, 11 Overvew Boost converters make t possble to effcently convert a DC voltage from a lower level to a hgher level. Theory of Operaton Relaton

More information

Generalized Incomplete Trojan-Type Designs with Unequal Cell Sizes

Generalized Incomplete Trojan-Type Designs with Unequal Cell Sizes Internatonal Journal of Theoretcal & Appled Scences 6(1): 50-54(2014) ISSN No. (Prnt): 0975-1718 ISSN No. (Onlne): 2249-3247 Generalzed Incomplete Trojan-Type Desgns wth Unequal Cell Szes Cn Varghese,

More information

Digital Transmission

Digital Transmission Dgtal Transmsson Most modern communcaton systems are dgtal, meanng that the transmtted normaton sgnal carres bts and symbols rather than an analog sgnal. The eect o C/N rato ncrease or decrease on dgtal

More information

A TWO-PLAYER MODEL FOR THE SIMULTANEOUS LOCATION OF FRANCHISING SERVICES WITH PREFERENTIAL RIGHTS

A TWO-PLAYER MODEL FOR THE SIMULTANEOUS LOCATION OF FRANCHISING SERVICES WITH PREFERENTIAL RIGHTS A TWO-PLAYER MODEL FOR THE SIMULTANEOUS LOCATION OF FRANCHISING SERVICES WITH PREFERENTIAL RIGHTS Pedro Godnho and oana Das Faculdade de Economa and GEMF Unversdade de Combra Av. Das da Slva 65 3004-5

More information

Uncertainty in measurements of power and energy on power networks

Uncertainty in measurements of power and energy on power networks Uncertanty n measurements of power and energy on power networks E. Manov, N. Kolev Department of Measurement and Instrumentaton, Techncal Unversty Sofa, bul. Klment Ohrdsk No8, bl., 000 Sofa, Bulgara Tel./fax:

More information

Application of Intelligent Voltage Control System to Korean Power Systems

Application of Intelligent Voltage Control System to Korean Power Systems Applcaton of Intellgent Voltage Control System to Korean Power Systems WonKun Yu a,1 and HeungJae Lee b, *,2 a Department of Power System, Seol Unversty, South Korea. b Department of Power System, Kwangwoon

More information

NOVEL ITERATIVE TECHNIQUES FOR RADAR TARGET DISCRIMINATION

NOVEL ITERATIVE TECHNIQUES FOR RADAR TARGET DISCRIMINATION NOVEL ITERATIVE TECHNIQUES FOR RADAR TARGET DISCRIMINATION Phaneendra R.Venkata, Nathan A. Goodman Department of Electrcal and Computer Engneerng, Unversty of Arzona, 30 E. Speedway Blvd, Tucson, Arzona

More information

Equity trend prediction with neural networks

Equity trend prediction with neural networks Res. Lett. Inf. Math. Sc., 2004, Vol. 6, pp 15-29 15 Avalable onlne at http://ms.massey.ac.nz/research/letters/ Equty trend predcton wth neural networks R.HALLIDAY Insttute of Informaton & Mathematcal

More information

CS345a: Data Mining Jure Leskovec and Anand Rajaraman Stanford University

CS345a: Data Mining Jure Leskovec and Anand Rajaraman Stanford University CS345a: Data Mnng Jure Leskovec and Anand Rajaraman Stanford Unversty HW3 s out Poster sesson s on last day of classes: Thu March 11 at 4:15 Reports are due March 14 Fnal s March 18 at 12:15 Open book,

More information

POLYTECHNIC UNIVERSITY Electrical Engineering Department. EE SOPHOMORE LABORATORY Experiment 1 Laboratory Energy Sources

POLYTECHNIC UNIVERSITY Electrical Engineering Department. EE SOPHOMORE LABORATORY Experiment 1 Laboratory Energy Sources POLYTECHNIC UNIERSITY Electrcal Engneerng Department EE SOPHOMORE LABORATORY Experment 1 Laboratory Energy Sources Modfed for Physcs 18, Brooklyn College I. Oerew of the Experment Ths experment has three

More information

Subarray adaptive beamforming for reducing the impact of flow noise on sonar performance

Subarray adaptive beamforming for reducing the impact of flow noise on sonar performance Subarray adaptve beamformng for reducng the mpact of flow nose on sonar performance C. Bao 1, J. Leader and J. Pan 1 Defence Scence & Technology Organzaton, Rockngham, WA 6958, Australa School of Mechancal

More information

Low Switching Frequency Active Harmonic Elimination in Multilevel Converters with Unequal DC Voltages

Low Switching Frequency Active Harmonic Elimination in Multilevel Converters with Unequal DC Voltages Low Swtchng Frequency Actve Harmonc Elmnaton n Multlevel Converters wth Unequal DC Voltages Zhong Du,, Leon M. Tolbert, John N. Chasson, Hu L The Unversty of Tennessee Electrcal and Computer Engneerng

More information

USE OF GPS MULTICORRELATOR RECEIVERS FOR MULTIPATH PARAMETERS ESTIMATION

USE OF GPS MULTICORRELATOR RECEIVERS FOR MULTIPATH PARAMETERS ESTIMATION Rdha CHAGGARA, TeSA Chrstophe MACABIAU, ENAC Erc CHATRE, STNA USE OF GPS MULTICORRELATOR RECEIVERS FOR MULTIPATH PARAMETERS ESTIMATION ABSTRACT The performance of GPS may be degraded by many perturbatons

More information

RC Filters TEP Related Topics Principle Equipment

RC Filters TEP Related Topics Principle Equipment RC Flters TEP Related Topcs Hgh-pass, low-pass, Wen-Robnson brdge, parallel-t flters, dfferentatng network, ntegratng network, step response, square wave, transfer functon. Prncple Resstor-Capactor (RC)

More information

Figure 1. DC-DC Boost Converter

Figure 1. DC-DC Boost Converter EE36L, Power Electroncs, DC-DC Boost Converter Verson Feb. 8, 9 Overvew Boost converters make t possble to effcently convert a DC voltage from a lower level to a hgher level. Theory of Operaton Relaton

More information

Development of Neural Networks for Noise Reduction

Development of Neural Networks for Noise Reduction The Internatonal Arab Journal of Informaton Technology, Vol. 7, No. 3, July 00 89 Development of Neural Networks for Nose Reducton Lubna Badr Faculty of Engneerng, Phladelpha Unversty, Jordan Abstract:

More information

Priority based Dynamic Multiple Robot Path Planning

Priority based Dynamic Multiple Robot Path Planning 2nd Internatonal Conference on Autonomous obots and Agents Prorty based Dynamc Multple obot Path Plannng Abstract Taxong Zheng Department of Automaton Chongqng Unversty of Post and Telecommuncaton, Chna

More information

Fast Code Detection Using High Speed Time Delay Neural Networks

Fast Code Detection Using High Speed Time Delay Neural Networks Fast Code Detecton Usng Hgh Speed Tme Delay Neural Networks Hazem M. El-Bakry 1 and Nkos Mastoraks 1 Faculty of Computer Scence & Informaton Systems, Mansoura Unversty, Egypt helbakry0@yahoo.com Department

More information

Latency Insertion Method (LIM) for IR Drop Analysis in Power Grid

Latency Insertion Method (LIM) for IR Drop Analysis in Power Grid Abstract Latency Inserton Method (LIM) for IR Drop Analyss n Power Grd Dmtr Klokotov, and José Schutt-Ané Wth the steadly growng number of transstors on a chp, and constantly tghtenng voltage budgets,

More information

A High-Sensitivity Oversampling Digital Signal Detection Technique for CMOS Image Sensors Using Non-destructive Intermediate High-Speed Readout Mode

A High-Sensitivity Oversampling Digital Signal Detection Technique for CMOS Image Sensors Using Non-destructive Intermediate High-Speed Readout Mode A Hgh-Senstvty Oversamplng Dgtal Sgnal Detecton Technque for CMOS Image Sensors Usng Non-destructve Intermedate Hgh-Speed Readout Mode Shoj Kawahto*, Nobuhro Kawa** and Yoshak Tadokoro** *Research Insttute

More information

Cod and climate: effect of the North Atlantic Oscillation on recruitment in the North Atlantic

Cod and climate: effect of the North Atlantic Oscillation on recruitment in the North Atlantic Ths appendx accompanes the artcle Cod and clmate: effect of the North Atlantc Oscllaton on recrutment n the North Atlantc Lef Chrstan Stge 1, Ger Ottersen 2,3, Keth Brander 3, Kung-Sk Chan 4, Nls Chr.

More information

Applying Rprop Neural Network for the Prediction of the Mobile Station Location

Applying Rprop Neural Network for the Prediction of the Mobile Station Location Sensors 0,, 407-430; do:0.3390/s040407 OPE ACCESS sensors ISS 44-80 www.mdp.com/journal/sensors Communcaton Applyng Rprop eural etwork for the Predcton of the Moble Staton Locaton Chen-Sheng Chen, * and

More information

Evaluate the Effective of Annular Aperture on the OTF for Fractal Optical Modulator

Evaluate the Effective of Annular Aperture on the OTF for Fractal Optical Modulator Global Advanced Research Journal of Management and Busness Studes (ISSN: 2315-5086) Vol. 4(3) pp. 082-086, March, 2015 Avalable onlne http://garj.org/garjmbs/ndex.htm Copyrght 2015 Global Advanced Research

More information

NEURAL PROCESSIN G.SYSTEMS 2 INF ORM.ATIO N (Q90. ( Iq~O) DAVID S. TOURETZKY ADVANCES CARNEGIE MELLON UNIVERSITY. ..F~ k \ """ Ct... V\.

NEURAL PROCESSIN G.SYSTEMS 2 INF ORM.ATIO N (Q90. ( Iq~O) DAVID S. TOURETZKY ADVANCES CARNEGIE MELLON UNIVERSITY. ..F~ k \  Ct... V\. ....F~ k \ """ Ct... V\. ~.Le.- b;e ve-. ( Iq~O) ADVANCES IN NEURAL INF ORM.ATIO N PROCESSIN G.SYSTEMS 2 EDITED BY DAVID S. TOURETZKY CARNEGIE MELLON UNIVERSITY (Q90.MORGAN KAUFMANN PUBLISHERS 2929 CAMPUS

More information

Adaptive Modulation for Multiple Antenna Channels

Adaptive Modulation for Multiple Antenna Channels Adaptve Modulaton for Multple Antenna Channels June Chul Roh and Bhaskar D. Rao Department of Electrcal and Computer Engneerng Unversty of Calforna, San Dego La Jolla, CA 993-7 E-mal: jroh@ece.ucsd.edu,

More information

A Preliminary Study on Targets Association Algorithm of Radar and AIS Using BP Neural Network

A Preliminary Study on Targets Association Algorithm of Radar and AIS Using BP Neural Network Avalable onlne at www.scencedrect.com Proceda Engneerng 5 (2 44 445 A Prelmnary Study on Targets Assocaton Algorthm of Radar and AIS Usng BP Neural Networ Hu Xaoru a, Ln Changchuan a a Navgaton Insttute

More information

Graph Method for Solving Switched Capacitors Circuits

Graph Method for Solving Switched Capacitors Circuits Recent Advances n rcuts, ystems, gnal and Telecommuncatons Graph Method for olvng wtched apactors rcuts BHUMIL BRTNÍ Department of lectroncs and Informatcs ollege of Polytechncs Jhlava Tolstého 6, 586

More information

Fiber length of pulp and paper by automated optical analyzer using polarized light (Five-year review of T 271 om-12) (no changes since Draft 1)

Fiber length of pulp and paper by automated optical analyzer using polarized light (Five-year review of T 271 om-12) (no changes since Draft 1) OTICE: Ths s a DRAFT of a TAPPI Standard n ballot. Although avalable for publc vewng, t s stll under TAPPI s copyrght and may not be reproduced or dstrbuted wthout permsson of TAPPI. Ths draft s OT a currently

More information

Beam quality measurements with Shack-Hartmann wavefront sensor and M2-sensor: comparison of two methods

Beam quality measurements with Shack-Hartmann wavefront sensor and M2-sensor: comparison of two methods Beam qualty measurements wth Shack-Hartmann wavefront sensor and M-sensor: comparson of two methods J.V.Sheldakova, A.V.Kudryashov, V.Y.Zavalova, T.Y.Cherezova* Moscow State Open Unversty, Adaptve Optcs

More information

ESTIMATION OF DIVERGENCES IN PRECAST CONSTRUCTIONS USING GEODETIC CONTROL NETWORKS

ESTIMATION OF DIVERGENCES IN PRECAST CONSTRUCTIONS USING GEODETIC CONTROL NETWORKS Proceedngs, 11 th FIG Symposum on Deformaton Measurements, Santorn, Greece, 2003. ESTIMATION OF DIVERGENCES IN PRECAST CONSTRUCTIONS USING GEODETIC CONTROL NETWORKS George D. Georgopoulos & Elsavet C.

More information

Introduction to Coalescent Models. Biostatistics 666

Introduction to Coalescent Models. Biostatistics 666 Introducton to Coalescent Models Bostatstcs 666 Prevously Allele frequences Hardy Wenberg Equlbrum Lnkage Equlbrum Expected state for dstant markers Lnkage Dsequlbrum Assocaton between neghborng alleles

More information

Phoneme Probability Estimation with Dynamic Sparsely Connected Artificial Neural Networks

Phoneme Probability Estimation with Dynamic Sparsely Connected Artificial Neural Networks The Free Speech Journal, Issue # 5(1997) Publshed 10/22/97 1997 All rghts reserved. Phoneme Probablty Estmaton wth Dynamc Sparsely Connected Artfcal Neural Networks Nkko Ström, (nkko@speech.kth.se) Department

More information

Comparative Analysis of Reuse 1 and 3 in Cellular Network Based On SIR Distribution and Rate

Comparative Analysis of Reuse 1 and 3 in Cellular Network Based On SIR Distribution and Rate Comparatve Analyss of Reuse and 3 n ular Network Based On IR Dstrbuton and Rate Chandra Thapa M.Tech. II, DEC V College of Engneerng & Technology R.V.. Nagar, Chttoor-5727, A.P. Inda Emal: chandra2thapa@gmal.com

More information

HUAWEI TECHNOLOGIES CO., LTD. Huawei Proprietary Page 1

HUAWEI TECHNOLOGIES CO., LTD. Huawei Proprietary Page 1 Project Ttle Date Submtted IEEE 802.16 Broadband Wreless Access Workng Group Double-Stage DL MU-MIMO Scheme 2008-05-05 Source(s) Yang Tang, Young Hoon Kwon, Yajun Kou, Shahab Sanaye,

More information

Network Reconfiguration in Distribution Systems Using a Modified TS Algorithm

Network Reconfiguration in Distribution Systems Using a Modified TS Algorithm Network Reconfguraton n Dstrbuton Systems Usng a Modfed TS Algorthm ZHANG DONG,FU ZHENGCAI,ZHANG LIUCHUN,SONG ZHENGQIANG School of Electroncs, Informaton and Electrcal Engneerng Shangha Jaotong Unversty

More information

Introduction to Coalescent Models. Biostatistics 666 Lecture 4

Introduction to Coalescent Models. Biostatistics 666 Lecture 4 Introducton to Coalescent Models Bostatstcs 666 Lecture 4 Last Lecture Lnkage Equlbrum Expected state for dstant markers Lnkage Dsequlbrum Assocaton between neghborng alleles Expected to decrease wth dstance

More information

Estimation of Solar Radiations Incident on a Photovoltaic Solar Module using Neural Networks

Estimation of Solar Radiations Incident on a Photovoltaic Solar Module using Neural Networks XXVI. ASR '2001 Semnar, Instruments and Control, Ostrava, Aprl 26-27, 2001 Paper 14 Estmaton of Solar Radatons Incdent on a Photovoltac Solar Module usng Neural Networks ELMINIR, K. Hamdy 1, ALAM JAN,

More information

Performance Analysis of Multi User MIMO System with Block-Diagonalization Precoding Scheme

Performance Analysis of Multi User MIMO System with Block-Diagonalization Precoding Scheme Performance Analyss of Mult User MIMO System wth Block-Dagonalzaton Precodng Scheme Yoon Hyun m and Jn Young m, wanwoon Unversty, Department of Electroncs Convergence Engneerng, Wolgye-Dong, Nowon-Gu,

More information

熊本大学学術リポジトリ. Kumamoto University Repositor

熊本大学学術リポジトリ. Kumamoto University Repositor 熊本大学学術リポジトリ Kumamoto Unversty Repostor Ttle Wreless LAN Based Indoor Poston and Its Smulaton Author(s) Ktasuka, Teruak; Nakansh, Tsune CtatonIEEE Pacfc RIM Conference on Comm Computers, and Sgnal Processng

More information

Rational Secret Sharing without Broadcast

Rational Secret Sharing without Broadcast Ratonal Secret Sharng wthout Broadcast Amjed Shareef, Department of Computer Scence and Engneerng, Indan Insttute of Technology Madras, Chenna, Inda. Emal: amjedshareef@gmal.com Abstract We use the concept

More information

arxiv: v1 [cs.lg] 22 Jan 2016 Abstract

arxiv: v1 [cs.lg] 22 Jan 2016 Abstract Mne Km MINJE@ILLINOIS.EDU Department of Computer Scence, Unversty of Illnos at Urbana-Champagn, Urbana, IL 61801 USA Pars Smaragds Unversty of Illnos at Urbana-Champagn, Urbana, IL 61801 USA Adobe Research,

More information

Ultimate X Bonus Streak Analysis

Ultimate X Bonus Streak Analysis Ultmate X Bonus Streak Analyss Gary J. Koehler John B. Hgdon Emnent Scholar, Emertus Department of Informaton Systems and Operatons Management, 35 BUS, The Warrngton College of Busness, Unversty of Florda,

More information

Reflections on Rotators, Or, How to Turn the FEL Upgrade 3F Skew Quad Rotator Into a Skew Quad Rotator

Reflections on Rotators, Or, How to Turn the FEL Upgrade 3F Skew Quad Rotator Into a Skew Quad Rotator JLAB-TN-4-23 4 August 24 Reflectons on Rotators, Or, How to Turn the FEL Upgrade 3F Skew Quad Rotator nto a Skew Quad Rotator D. Douglas ntroducton A prevous note [] descrbes a smple skew quad system that

More information

A Simple Satellite Exclusion Algorithm for Advanced RAIM

A Simple Satellite Exclusion Algorithm for Advanced RAIM A Smple Satellte Excluson Algorthm for Advanced RAIM Juan Blanch, Todd Walter, Per Enge Stanford Unversty ABSTRACT Advanced Recever Autonomous Integrty Montorng s a concept that extends RAIM to mult-constellaton

More information

Advanced Bio-Inspired Plausibility Checking in a Wireless Sensor Network Using Neuro-Immune Systems

Advanced Bio-Inspired Plausibility Checking in a Wireless Sensor Network Using Neuro-Immune Systems Fourth Internatonal Conference on Sensor Technologes and Applcatons Advanced Bo-Inspred Plausblty Checkng n a reless Sensor Network Usng Neuro-Immune Systems Autonomous Fault Dagnoss n an Intellgent Transportaton

More information

Optimal Placement of PMU and RTU by Hybrid Genetic Algorithm and Simulated Annealing for Multiarea Power System State Estimation

Optimal Placement of PMU and RTU by Hybrid Genetic Algorithm and Simulated Annealing for Multiarea Power System State Estimation T. Kerdchuen and W. Ongsakul / GMSARN Internatonal Journal (09) - Optmal Placement of and by Hybrd Genetc Algorthm and Smulated Annealng for Multarea Power System State Estmaton Thawatch Kerdchuen and

More information

New Parallel Radial Basis Function Neural Network for Voltage Security Analysis

New Parallel Radial Basis Function Neural Network for Voltage Security Analysis New Parallel Radal Bass Functon Neural Network for Voltage Securty Analyss T. Jan, L. Srvastava, S.N. Sngh and I. Erlch Abstract: On-lne montorng of power system voltage securty has become a very demandng

More information

STAR POWER BOM/BOQ SETTING IDEA 1 - TWIST & SHOUT

STAR POWER BOM/BOQ SETTING IDEA 1 - TWIST & SHOUT Below are two deas for settng your blocks together. Of course, there are dozens more! Take your blocks out to play, and decde on a settng that makes you smle! STAR POWER BOM/BOQ SETTING IDEA 1 - TWIST

More information

Old text. From Through the Looking Glass by Lewis Carroll. Where is the setting of this place? Describe in your own words.

Old text. From Through the Looking Glass by Lewis Carroll. Where is the setting of this place? Describe in your own words. Old text Read ths extract carefully, then answer, n complete sentences, the questons that follow. For some mnutes Alce stood wthout speakng, lookng out n all drectons over the country and a most curous

More information

A MODIFIED DIFFERENTIAL EVOLUTION ALGORITHM IN SPARSE LINEAR ANTENNA ARRAY SYNTHESIS

A MODIFIED DIFFERENTIAL EVOLUTION ALGORITHM IN SPARSE LINEAR ANTENNA ARRAY SYNTHESIS A MODIFIED DIFFERENTIAL EVOLUTION ALORITHM IN SPARSE LINEAR ANTENNA ARRAY SYNTHESIS Kaml Dmller Department of Electrcal-Electroncs Engneerng rne Amercan Unversty North Cyprus, Mersn TURKEY kdmller@gau.edu.tr

More information

DETERMINATION OF WIND SPEED PROFILE PARAMETERS IN THE SURFACE LAYER USING A MINI-SODAR

DETERMINATION OF WIND SPEED PROFILE PARAMETERS IN THE SURFACE LAYER USING A MINI-SODAR DETERMINATION OF WIND SPEED PROFILE PARAMETERS IN THE SURFACE LAYER USING A MINI-SODAR A. Coppalle, M. Talbaut and F. Corbn UMR 6614 CORIA, Sant Etenne du Rouvray, France INTRODUCTION Recent mprovements

More information

Multi-Robot Map-Merging-Free Connectivity-Based Positioning and Tethering in Unknown Environments

Multi-Robot Map-Merging-Free Connectivity-Based Positioning and Tethering in Unknown Environments Mult-Robot Map-Mergng-Free Connectvty-Based Postonng and Tetherng n Unknown Envronments Somchaya Lemhetcharat and Manuela Veloso February 16, 2012 Abstract We consder a set of statc towers out of communcaton

More information

Pulse Extraction for Radar Emitter Location

Pulse Extraction for Radar Emitter Location 00 Conference on Informaton Scences and Systems, The Johns opkns Unversty, March 3, 00 Pulse Extracton for Radar Emtter Locaton Mark L. Fowler, Zhen Zhou, and Anupama Shvaprasad Department of Electrcal

More information

Frequency Map Analysis at CesrTA

Frequency Map Analysis at CesrTA Frequency Map Analyss at CesrTA J. Shanks. FREQUENCY MAP ANALYSS A. Overvew The premse behnd Frequency Map Analyss (FMA) s relatvely straghtforward. By samplng turn-by-turn (TBT) data (typcally 2048 turns)

More information

Estimating Mean Time to Failure in Digital Systems Using Manufacturing Defective Part Level

Estimating Mean Time to Failure in Digital Systems Using Manufacturing Defective Part Level Estmatng Mean Tme to Falure n Dgtal Systems Usng Manufacturng Defectve Part Level Jennfer Dworak, Davd Dorsey, Amy Wang, and M. Ray Mercer Texas A&M Unversty IBM Techncal Contact: Matthew W. Mehalc, PowerPC

More information

Chapter 2 Two-Degree-of-Freedom PID Controllers Structures

Chapter 2 Two-Degree-of-Freedom PID Controllers Structures Chapter 2 Two-Degree-of-Freedom PID Controllers Structures As n most of the exstng ndustral process control applcatons, the desred value of the controlled varable, or set-pont, normally remans constant

More information

Figure.1. Basic model of an impedance source converter JCHPS Special Issue 12: August Page 13

Figure.1. Basic model of an impedance source converter JCHPS Special Issue 12: August Page 13 A Hgh Gan DC - DC Converter wth Soft Swtchng and Power actor Correcton for Renewable Energy Applcaton T. Selvakumaran* and. Svachdambaranathan Department of EEE, Sathyabama Unversty, Chenna, Inda. *Correspondng

More information

Research Article Indoor Localisation Based on GSM Signals: Multistorey Building Study

Research Article Indoor Localisation Based on GSM Signals: Multistorey Building Study Moble Informaton Systems Volume 26, Artcle ID 279576, 7 pages http://dx.do.org/.55/26/279576 Research Artcle Indoor Localsaton Based on GSM Sgnals: Multstorey Buldng Study RafaB Górak, Marcn Luckner, MchaB

More information