Siamese Multi-layer Perceptrons for Dimensionality Reduction and Face Identification


Lilei Zheng, Stefan Duffner, Khalid Idrissi, Christophe Garcia, Atilla Baskurt. Siamese Multi-layer Perceptrons for Dimensionality Reduction and Face Identification. Multimedia Tools and Applications, Springer Verlag, 2015. HAL Id: hal-82273. Submitted on 3 Jul 2015.

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Siamese Multi-layer Perceptrons for Dimensionality Reduction and Face Identification

Lilei Zheng, Stefan Duffner, Khalid Idrissi, Christophe Garcia, Atilla Baskurt

Abstract This paper presents a framework using siamese Multi-layer Perceptrons (MLP) for supervised dimensionality reduction and face identification. Compared with the classical MLP that trains on fully labeled data, the siamese MLP learns on side information only, i.e., on how similar pairs of data examples are to each other. In this study, we compare it with the classical MLP on the problem of face identification. Experimental results on the Extended Yale B database demonstrate that the siamese MLP, trained with side information only, achieves classification performance comparable to the classical MLP trained on fully labeled data. Besides, while the classical MLP fixes the dimension of the output space, the siamese MLP allows a flexible output dimension; hence we also apply the siamese MLP to visualize the dimensionality reduction into the 2-d and 3-d spaces.

Keywords siamese neural networks · multi-layer perceptrons · metric learning · face identification · dimensionality reduction

1 Introduction

With the capability of approximating non-linear mappings, Multi-layer Perceptrons (MLP) have been a popular solution to object classification problems since the 1980s, finding applications in diverse fields such as image recognition [28] and speech recognition [2, 5]. A classical MLP consists of an input layer, one or more hidden layer(s) and an output layer of perceptrons. Generally, in a multi-class classification problem, the size of the output layer (i.e., the output dimension) is fixed to

Lilei Zheng, E-mail: lilei.zheng@insa-lyon.fr. All the authors are with the Université de Lyon, CNRS, INSA-Lyon, LIRIS, UMR5205, F-69621, France.

Fig. 1 (a) Traditional single multi-layer perceptron. (b) Siamese multi-layer perceptrons.

the number of classes in this problem. Figure 1 (a) illustrates the structure of an MLP. The objective of such an MLP is to make the network outputs approximate predefined target values (or ground truth) for the different classes. In practice, the error δ between the output and the target is used to update the network parameters via the Back-propagation algorithm [25]. Moreover, these predefined target values are typically binary for classification problems. For example, for a 3-class classification problem, we usually set the unit vectors [1, 0, 0]^T, [0, 1, 0]^T, [0, 0, 1]^T as the target vectors for the 3 classes.

In this work, we propose a siamese MLP framework to relax the constraint on the output dimension, allowing a flexible dimensionality reduction of the input data. A siamese MLP is a symmetric architecture consisting of two MLPs which actually share the same set of parameters P (Figure 1 (b)). Compared with the single MLP (Figure 1 (a)), instead of constraining the outputs to approach some predefined target values, the siamese MLP defines a specific objective: (1) for an input pair from the same class, make the pairwise similarity between their outputs larger; (2) for an input pair from different classes, make the pairwise similarity between their outputs smaller. With such an objective, the dimension of the target space can be arbitrarily specified.

Another advantage of the siamese MLP over the classical MLP is that it is able to learn on data pairs instead of fully labeled data. In other words, the siamese MLP is applicable in weakly supervised cases where we have no access to the labels of the training instances: only some side information on pairwise relationships is available.
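As a toy illustration of this pairwise objective (not the TSML loss actually used in Section 3.2), one can score an output pair by how far its cosine similarity lies from a target of +1 (same class) or -1 (different classes); the vectors below are made up:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two output vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pair_loss(a, b, same_class):
    """Toy pairwise loss: push cos(a, b) toward +1 for a within-class
    pair and toward -1 for a between-class pair."""
    target = 1.0 if same_class else -1.0
    return (cosine_similarity(a, b) - target) ** 2

a = np.array([1.0, 0.0])
b = np.array([1.0, 0.1])
c = np.array([-1.0, 0.0])
loss_similar = pair_loss(a, b, same_class=True)      # near 0: almost parallel
loss_dissimilar = pair_loss(a, c, same_class=False)  # 0: exactly opposite
loss_mismatch = pair_loss(a, c, same_class=True)     # 4.0: cos is -1, target +1
```

Note that the loss depends only on the relation between the two outputs, never on a class label, which is exactly the weak supervision discussed above.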
This is a meaningful setting in various applications where labeled data are more costly than side information [3]. Examples include users' implicit feedback on the internet (e.g., clicks on search engine results), citations among articles or links in a social network, and kinship relationships between individuals [22]. More interestingly, the siamese MLP retains these two advantages WITHOUT losing its superior ability of accurate classification. In the experiments, we compare the siamese MLP with the classical MLP for face identification on the

Extended Yale B database [3]. In addition, we employ a statistical significance testing method called Bootstrap Resampling [8] to evaluate the comparison between the siamese MLP and the classical MLP. The testing results show that the siamese MLP achieves performance comparable to the classical MLP on the problem of face identification. Overall, the main contributions of this paper are summarized below:

- we have presented the siamese MLP as a semi-supervised learning method for classification. It can learn from side information only, instead of fully labeled training data.
- we have shown the capability of the siamese MLP for dimensionality reduction and data visualization in 2-d and 3-d spaces. We find that the siamese MLP projects the original input data onto the vertexes of a regular polyhedron (see Figure 7).
- we have demonstrated that the siamese MLP has the above two advantages WITHOUT losing its superior ability of accurate classification. It achieves comparable performance with the standard MLP on face identification.

The remainder of this paper is organized as follows: Section 2 briefly summarizes the related work on siamese neural networks and metric learning. Section 3 presents the proposed siamese MLP method. Section 4 describes the datasets and experiments on face identification. Finally, we draw conclusions in Section 5.

2 Related Work

Using an MLP for dimensionality reduction is an old idea which has its origins in the late 1980s and early 1990s. The first work may be the Auto-Associative Neural Networks (AANN) [8,4], a special type of MLP where the input and output layers have the same number of neurons, and the middle hidden layer has fewer neurons than the input and output layers. The objective of an AANN is to reproduce the input pattern at its output. Thus it actually learns a mapping of the input patterns into a lower-dimensional space and then an inverse mapping to reconstruct the input patterns.
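The auto-associative idea can be sketched with a minimal bottleneck network trained to reproduce its input; the sizes, seed and learning rate below are illustrative, and linear layers stand in for the non-linear units of a real AANN:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bottleneck auto-associator: 4 inputs -> 2 hidden codes -> 4 outputs,
# trained to reproduce the input at the output.
W_enc = rng.standard_normal((2, 4)) * 0.1
W_dec = rng.standard_normal((4, 2)) * 0.1

X = rng.standard_normal((50, 4))            # toy data, 50 samples
mu = 0.02
for _ in range(1000):
    H = X @ W_enc.T                         # low-dimensional codes
    E = H @ W_dec.T - X                     # reconstruction error
    G_dec = E.T @ H / len(X)                # gradient wrt the decoder
    G_enc = (E @ W_dec).T @ X / len(X)      # gradient wrt the encoder
    W_dec -= mu * G_dec
    W_enc -= mu * G_enc

err_final = float(np.mean((X @ W_enc.T @ W_dec.T - X) ** 2))
err_naive = float(np.mean(X ** 2))          # error of an all-zero output
```

As the next paragraph notes, such a linear bottleneck ends up spanning roughly the same subspace as PCA, which is exactly the observation made about AANNs.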
Since it does not need the input data to be labeled, the middle hidden layer learns a compact representation of the input data in an unsupervised manner [1]. However, researchers have found that the dimensionality reduction performed by the AANN is quite similar to the well-known Principal Component Analysis (PCA) technique [2]. More recently, a more mature and powerful AANN, the deep autoencoder network [6], has presented an effective way of initializing the network parameters that leads to low-dimensional codings much better than those of PCA. For all the layers in the deep networks, the authors proposed a restricted Boltzmann machine to pretrain the network parameters layer-by-layer, followed by a fine-tuning procedure for optimal reconstruction via the Back-propagation algorithm [25]. Different from the unsupervised dimensionality reduction of the above AANNs, we propose to employ the MLP to perform dimensionality reduction

in a supervised manner using siamese neural networks. Siamese neural networks were first presented by Bromley et al. [6] using Time Delay Neural Networks (TDNN) for the problem of signature verification. This idea was then adopted by Chopra et al. [7], who used siamese Convolutional Neural Networks (CNN) for face verification, i.e., to decide if two given face images belong to the same person or not. Recently, Berlemont et al. [4] also successfully employed siamese neural networks for inertial gesture recognition and rejection. Concretely, siamese neural networks minimize a loss function that drives the similarity metric to be small for data pairs from the same class, and large for pairs from different classes [7]. This technique of learning a metric from data pairs (or triplets) is also called Metric Learning [3, 27, 29]. In this paper, the proposed siamese MLP employs the Triangular Similarity Metric Learning (TSML) objective function [29] as its loss function, and shows its effectiveness for dimensionality reduction and object classification.

3 Siamese Multi-Layer Perceptron

In this section, we present the classical MLP model and the proposed siamese MLP model. Since the siamese MLP takes the MLP as a basic component, we first introduce the classical MLP model in detail. After that, we develop the siamese variant. Concretely, we use a 3-layer MLP consisting of an input layer, an output layer and only one hidden layer.

3.1 Three-layer MLP

An MLP is a feed-forward neural network, i.e., the activation of the neurons is propagated layer-wise from the input to the output layer [1]. Moreover, the activation function of the neurons has to be differentiable in order to update the network parameters via the Back-propagation algorithm. Commonly used non-linear activation functions include the sigmoid function and the tanh function (i.e., the hyperbolic tangent function). In contrast to the sigmoid function, which allows only positive output values, the tanh function produces both negative and positive output values.
Since negative values are necessary in the proposed siamese MLP (Section 3.2), we choose the tanh function in our experiments. The tanh function and its derivative are:

tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x}), (1)

tanh′(x) = 1 − tanh²(x). (2)

3.1.1 Feed-forward

First, we introduce the feed-forward procedure of the 3-layer MLP. For any given input sample x_i, assume its output from the MLP is a_i. At the first

step, from the input layer to the hidden layer, with the parameter matrix W^(1) and the bias vector b^(1), the values in the hidden layer are computed as h_i = tanh(W^(1) x_i + b^(1)). At the second step, from the hidden layer to the output layer, with the parameter matrix W^(2) and the bias vector b^(2), the output values are calculated as a_i = tanh(W^(2) h_i + b^(2)). Finally, the objective function of an MLP classifier is simply the Mean Squared Error (MSE) between the computed outputs and their desired targets over all training samples:

J = (1 / 2N) Σ_{i=1}^{N} ‖a_i − g_i‖², (3)

where N is the number of all training samples and g_i is the target vector for the output a_i. Recall that the g_i are usually hand-crafted unit vectors; for example, for a 3-class classification problem, we usually set the unit vectors [1, 0, 0]^T, [0, 1, 0]^T, [0, 0, 1]^T as the target vectors for the 3 classes.

3.1.2 Back-propagation

Now we use the Back-propagation algorithm [25] to update the set of parameters P : {W^(2), b^(2), W^(1), b^(1)}. Taking the derivative of Equation (3), the gradient for the i-th sample is:

∂J_i/∂P = (a_i − g_i)^T ∂a_i/∂P, (4)

and the differential on the output layer, with respect to z^(2) = W^(2) h_i + b^(2), is:

δ^(2) = (1 − a_i ⊙ a_i) ⊙ (a_i − g_i), (5)

where the notation ⊙ means element-wise multiplication. Subsequently, the differential on the hidden layer, with respect to z^(1) = W^(1) x_i + b^(1), is:

δ^(1) = (1 − h_i ⊙ h_i) ⊙ [(W^(2))^T δ^(2)], (6)

and the differentials of the network parameters are computed as:

ΔW^(2) = δ^(2) h_i^T, (7)
Δb^(2) = δ^(2), (8)
ΔW^(1) = δ^(1) x_i^T, (9)
Δb^(1) = δ^(1). (10)

After that, the parameters P : {W^(2), b^(2), W^(1), b^(1)} can be updated using the following gradient descent step:

P = P − (µ/N) Σ_{i=1}^{N} ΔP_i, (11)

where µ is the learning rate. The default learning rate is set to 10^-4 in our experiments.
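The feed-forward and back-propagation steps of Equations (3)-(11) can be sketched as follows; the layer sizes, random seed, single training sample and large learning rate are illustrative choices for a fast toy run, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer MLP: 4 inputs -> 5 hidden -> 3 outputs, tanh activations.
W1 = rng.standard_normal((5, 4)) * 0.1
b1 = np.zeros(5)
W2 = rng.standard_normal((3, 5)) * 0.1
b2 = np.zeros(3)

def forward(x):
    h = np.tanh(W1 @ x + b1)        # hidden layer
    a = np.tanh(W2 @ h + b2)        # output layer
    return h, a

def deltas(x, h, a, g):
    """Back-propagation, Eqs. (5)-(10), for one sample with target g."""
    d2 = (1 - a * a) * (a - g)              # Eq. (5), tanh' = 1 - tanh^2
    d1 = (1 - h * h) * (W2.T @ d2)          # Eq. (6)
    return np.outer(d2, h), d2, np.outer(d1, x), d1   # Eqs. (7)-(10)

x = rng.standard_normal(4)
g = np.array([1.0, 0.0, 0.0])       # hand-crafted unit target vector
mu = 0.1                            # large rate so the toy run converges fast

_, a0 = forward(x)
loss_start = 0.5 * np.sum((a0 - g) ** 2)
for _ in range(200):
    h, a = forward(x)
    dW2, db2, dW1, db1 = deltas(x, h, a, g)
    W2 -= mu * dW2; b2 -= mu * db2          # Eq. (11) with N = 1
    W1 -= mu * dW1; b1 -= mu * db1
_, a_end = forward(x)
loss_end = 0.5 * np.sum((a_end - g) ** 2)
```

After the loop, the MSE of Equation (3) on this single sample has dropped close to zero, i.e., the output approaches the unit target vector.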

3.2 Siamese MLP

As illustrated in Figure 1 (b), a siamese MLP consists of two MLPs which actually share the same set of parameters P : {W^(2), b^(2), W^(1), b^(1)}. Let a_i = f(x_i, P) denote the output of an input x_i, and b_i = f(y_i, P) denote the output of the other input y_i. Compared with the traditional MLP that makes the output a_i close to its hand-crafted target g_i, the siamese MLP aims to make {a_i, b_i} close if {x_i, y_i} are of the same class and to separate {a_i, b_i} if {x_i, y_i} are of two different classes [29]. Consequently, the siamese MLP needs no hand-crafted targets. To achieve this goal, we employ a modified Triangular Similarity Metric Learning (TSML) objective function [29]:

J = K(‖a_i‖ + ‖b_i‖ − ‖c_i‖) + ½(‖a_i‖ − K)² + ½(‖b_i‖ − K)², (12)

where K is a constant that constrains the length (i.e., the L2 norm) of a_i and b_i; c_i = a_i + s_i b_i, and s_i = 1 (resp. s_i = −1) means that the two vectors a_i and b_i form a within-class pair (resp. a between-class pair). Generally, we can set the constant K to the average length of all the input training vectors.

The first part of Equation (12), K(‖a_i‖ + ‖b_i‖ − ‖c_i‖), involves the three sides of a triangle (Figure 2 (a)). According to the well-known triangle inequality theorem — the sum of the lengths of two sides of a triangle is always greater than the length of the third side — this first part is always larger than 0. Moreover, minimizing this part is equivalent to minimizing the angle θ inside a within-class pair (s_i = 1) or maximizing the angle θ inside a between-class pair (s_i = −1); in other words, maximizing the cosine similarity between a_i and s_i b_i. Note that ‖a_i‖ + ‖b_i‖ = ‖c_i‖ when the cost J reaches its minimum. Besides, the second part of Equation (12), ½(‖a_i‖ − K)² + ½(‖b_i‖ − K)², aims to prevent ‖a_i‖ and ‖b_i‖ from degenerating to 0. Further, Equation (12) can be rewritten as:

J = ½‖a_i‖² + ½‖b_i‖² − K‖c_i‖ + K², (13)

with gradient over the parameters P:

∂J/∂P = (a_i − K c_i/‖c_i‖)^T ∂a_i/∂P + (b_i − s_i K c_i/‖c_i‖)^T ∂b_i/∂P. (14)

Now, we obtain the optimal cost J = 0 at the zero gradient: a_i − K c_i/‖c_i‖ = 0 and b_i − s_i K c_i/‖c_i‖ = 0. In other words, the gradient function has set K c_i/‖c_i‖ and s_i K c_i/‖c_i‖ as targets for a_i and b_i, respectively.
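Equations (13)-(14) can be checked numerically; the sketch below uses an illustrative constant K = 1 and verifies that a pair already sitting on its zero-gradient targets has zero cost:

```python
import numpy as np

K = 1.0  # length constraint; the paper sets K to the average input length

def tsml_cost(a, b, s):
    """Eq. (13): J = 0.5||a||^2 + 0.5||b||^2 - K||c|| + K^2, c = a + s*b,
    with s = +1 for a within-class pair and s = -1 for a between-class pair."""
    c = a + s * b
    return 0.5 * (a @ a) + 0.5 * (b @ b) - K * np.linalg.norm(c) + K ** 2

def tsml_targets(a, b, s):
    """Zero-gradient targets from Eq. (14): a -> K c/||c||, b -> s K c/||c||."""
    c = a + s * b
    u = c / np.linalg.norm(c)
    return K * u, s * K * u

a = np.array([1.0, 0.0])
cost_within_opt = tsml_cost(a, a.copy(), s=1)              # identical pair: cost 0
cost_between_opt = tsml_cost(a, -a, s=-1)                  # opposite pair: cost 0
cost_orthogonal = tsml_cost(a, np.array([0.0, 1.0]), s=1)  # positive cost
t_a, t_b = tsml_targets(a, np.array([0.0, 1.0]), s=1)      # both of norm K
```

The orthogonal within-class pair pays a cost of 2 − √2 ≈ 0.586, and its targets lie on the diagonal between the two vectors, matching the geometry of Figure 2.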
See Figure 2 (b): for a within-class pair, a_i and b_i are mapped to the same vector along one diagonal (the red solid line); for a between-class pair, a_i and b_i are mapped to opposite vectors along the other diagonal (the blue solid line). More interestingly, substituting the hand-crafted target g_i with the two automatically computed targets K c_i/‖c_i‖ and s_i K c_i/‖c_i‖, the siamese MLP gradient

Fig. 2 Geometrical interpretation of the cost and the gradient. (a) Minimizing the cost means making a within-class pair parallel and making a between-class pair opposite. (b) Taking the zero gradient means setting diagonal vectors as targets for a_i and b_i (s_i = 1 for a within-class pair and s_i = −1 for a between-class pair).

function (Equation (14)) is exactly a double copy of the traditional MLP gradient function (Equation (4)). And this fact allows us to use the same Back-propagation algorithm to update the network parameters (Section 3.1.2).

3.3 Difference between MLP and Siamese MLP

In the last two subsections, we have shown that the classical MLP and the siamese MLP have similar gradient formulations, which allows us to employ the same Back-propagation algorithm for training. However, there are also apparent differences between them on both the input and output layers. For each input vector x, the classical MLP needs to know which class x belongs to. In contrast, the siamese MLP imposes a more flexible constraint: it only needs the side information of whether two input vectors x and y are of the same class or not. The relationship between the two constraints can be summarized as follows: when we know the classes of x and y, we know whether x and y are of the same class or not; however, even if we know whether x and y are of the same class or not, we may have no idea of the class labels of x and y. As a result, the siamese MLP is applicable under the second constraint while the classical MLP is not, i.e., the siamese MLP can learn on side information only (Section 1). More importantly, we will demonstrate that this relaxation of the constraint causes no loss of classification accuracy in the experiments (Section 4).

On the output layer, the classical MLP fixes the output dimension equal to the number of classes. However, the siamese MLP places no constraint on the output dimension. Therefore, for a problem with more than 3 classes, the siamese MLP is applicable to data visualization, i.e., projecting the input data into 2-d or 3-d spaces, whereas the classical MLP can only make a projection into a space of dimension larger than 3. In Section 4.4, we will illustrate the effect of the siamese MLP on dimensionality reduction and data visualization.
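The asymmetry between the two constraints can be made concrete: class labels always yield pairwise side information, while the converse does not hold. A small sketch (the labels are made up):

```python
from itertools import combinations

def side_information(labels):
    """Derive pairwise side information (i, j, s) from labels:
    s = +1 if samples i and j share a class, -1 otherwise.
    The class identities themselves are not retained."""
    return [(i, j, 1 if labels[i] == labels[j] else -1)
            for i, j in combinations(range(len(labels)), 2)]

pairs = side_information(['A', 'A', 'B'])
# 3 samples -> 3 pairs: (0, 1) same class, (0, 2) and (1, 2) different
```

From `pairs` alone one can recover the grouping of the samples but not the names 'A' and 'B', which is precisely the weakly supervised setting in which the siamese MLP still applies.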

3.4 Batch Gradient Descent or Stochastic Gradient Descent

Once we have defined an error function and its gradient, the Back-propagation algorithm [25] applies the gradient descent technique to minimize the overall error on all training data iteratively. There are mainly three modes of gradient descent: stochastic gradient descent, batch gradient descent, or the trade-off between them, mini-batch gradient descent. Concretely, stochastic gradient descent uses only one training sample in each iteration while batch gradient descent uses all training samples in each iteration. Mini-batch gradient descent, as the name suggests, takes several training samples in each iteration. Usually, mini-batch gradient descent is the fastest choice among the three for many optimization problems. In particular, batch gradient descent can be incorporated into some advanced optimization algorithms to accelerate learning, such as the Conjugate Gradient Descent (CGD) algorithm [23] and the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm [2]. Compared with the standard gradient descent technique, these advanced algorithms need no manually picked learning rate and are usually much faster for small and medium scale problems. However, for a large scale problem with an overlarge training dataset, it may be impossible to load all the training data into memory in a single iteration. In this case, mini-batch gradient descent may be more applicable as it takes only a few training samples in each iteration. For the proposed siamese MLP, the advanced algorithms using batch gradient descent may be suitable only for small scale problems, because the siamese MLP takes data pairs in the learning procedure, and the total number of training sample pairs is quadratically larger than the number of training samples. Specifically, for a problem of N training samples, the number of all possible sample pairs is N(N−1)/2. Therefore, for medium and large scale problems, we have to use stochastic gradient descent or mini-batch gradient descent.
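The quadratic growth is easy to make concrete with the counting formula N(N−1)/2:

```python
def n_pairs(n):
    """All distinct sample pairs among n training samples: n(n-1)/2."""
    return n * (n - 1) // 2

counts = {n: n_pairs(n) for n in (10, 263, 1000)}
# e.g. the paper's 263 training images already yield 34,453 pairs,
# and 1,000 samples yield 499,500 pairs
```

This is why loading the full pair set into memory quickly becomes impractical as N grows.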
Commonly, a plausible mini-batch contains an equal number of within-class pairs and between-class pairs [7, 29]. However, the actual ratio of within-class pairs to between-class pairs is not balanced. For example, for m classes each with n training samples, the number of within-class pairs is mn(n−1)/2 and the number of all between-class pairs is mn(mn−n)/2. Thus the ratio between within-class pairs and between-class pairs is (n−1) : n(m−1), i.e., one within-class pair is accompanied by n(m−1)/(n−1) between-class pairs. Consequently, instead of taking an equal number of within-class pairs and between-class pairs in a mini-batch, we propose the following strategy to choose the data pairs for a mini-batch:

- Count the training samples and denote their number as N; hence there are in total N(N−1)/2 sample pairs.

Fig. 3 Index matrix for mini-batch gradient descent of the siamese MLP. The first row stores the S within-class pairs, followed by all the between-class pairs. The empty positions at the end of the matrix can optionally be filled with some between-class pairs.

- Count the within-class pairs and denote their number as S; the number of between-class pairs is then D = N(N−1)/2 − S.
- Let R = ⌈D/S⌉, i.e., the smallest integer not less than D/S.
- Make an index matrix with R + 1 rows and S columns (Figure 3); put the indexes of the S within-class pairs in the first row and the indexes of all the between-class pairs in the following rows.
- (Optional) Randomly pick some between-class pairs to fill the remaining empty positions at the end of the matrix.
- Take the indexes in each column as a mini-batch, which contains a single within-class pair and R between-class pairs.

In general, we summarize the optimization procedure for the proposed siamese MLP in Algorithm 1: for a large scale problem, mini-batch gradient descent is used in the optimization; for a small scale problem, batch gradient descent is adopted. Whether the scale of a problem counts as small or large depends on the capacity of the machine used. In our case, we usually consider a problem with more than 1,000 training samples as a large scale problem, since the number of all possible similar and dissimilar pairs is then at least 499,500.

4 Experiment and Analysis

4.1 Extended Yale B Database

We perform experiments on the Extended Yale B database [3]. It contains 2,414 frontal-face images of 38 individuals. These images were captured under various lighting conditions. All the images have been cropped and normalized to the same size of 192 × 168. Figure 4 provides some example images of one individual in the database. We can see that the lighting directions in different

Algorithm 1: Optimization of the siamese MLP

input : Training set; number of training samples N
output: Parameters P
% initialization
Randomly initialize the set of parameters P;
% optimization by back-propagation
if N is large then
    % this is a large scale problem (N > 1,000)
    Set the learning rate µ = 10^-4;
    Generate mini-batches that each contain 1 similar pair and R dissimilar pairs (Figure 3);
    Employ mini-batch gradient descent to update P;
else
    % this is a small scale problem
    Generate a whole batch which contains all similar and dissimilar pairs;
    Employ batch gradient descent (the advanced L-BFGS algorithm) to update P;
end
% output the final set of parameters
return P

Fig. 4 Example images of an individual in the Extended Yale B database. These frontal-face images were captured under various lighting conditions.
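The index-matrix batching of Figure 3, as used in Algorithm 1, can be sketched as follows; this is an illustrative reimplementation (pairs are triples (i, j, s) with s = ±1), not the authors' code:

```python
import math
import random

def make_minibatches(pairs, seed=0):
    """Group pairs into mini-batches: each batch holds one within-class
    pair (s = +1) and R = ceil(D / S) between-class pairs (s = -1)."""
    within = [p for p in pairs if p[2] == 1]
    between = [p for p in pairs if p[2] == -1]
    S, D = len(within), len(between)
    R = math.ceil(D / S)
    rng = random.Random(seed)
    # optionally pad the between-class list so it divides into S columns
    padded = between + [rng.choice(between) for _ in range(R * S - D)]
    return [[within[i]] + padded[i * R:(i + 1) * R] for i in range(S)]

# m = 2 classes, n = 2 samples each: S = 2 within pairs, D = 4 between pairs
pairs = [(0, 1, 1), (2, 3, 1),
         (0, 2, -1), (0, 3, -1), (1, 2, -1), (1, 3, -1)]
batches = make_minibatches(pairs)
```

Here S = 2 and R = 2, so each of the two mini-batches holds one within-class pair followed by two between-class pairs, mirroring one column of the index matrix.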

images are significantly varied. For instance, it is difficult to recognize the face in the middle of Figure 4 since it hides in deep darkness. We divide the whole database into three non-overlapping subsets: training, validation and testing. We learn a model on the training set, choose the best set of parameters, i.e., the one achieving the highest performance on the validation set, and report the performance on the testing set using these best parameters. In particular, we take a small training set in the experiments: for each individual, only one out of ten images is used for training, i.e., there are 263 face images in the training set; the size ratio of the training, validation and testing sets is thus 1:3:6. All the experiments are repeated 10 times with randomly shuffled data, and the mean accuracy (± standard error of the mean) is reported.

4.2 Face Descriptors

Popular face descriptors for face detection and face recognition include eigenfaces [26], Gabor wavelets [9], Haar-like features [9], SIFT [7], Local Binary Patterns (LBP) [1], etc. Recently, Barkan et al. [2] proposed Over-complete Local Binary Patterns (OCLBP), a new variant of LBP that significantly improved face verification performance. We therefore adopt OCLBP as the major face descriptor in our experiments. Besides, we also use Gabor wavelets and the standard LBP to represent the face images for comparison. Following [2, 29], both the original face descriptors and their square roots are evaluated in the experiments.

Gabor wavelets: we extract Gabor wavelets with 5 scales and 8 orientations on each downsampled image. The downsampling rate is the same for all the images; thus all extracted Gabor vectors have the same dimension.

Local Binary Patterns: we use the uniform LBP [24] to represent the face images. The uniform LBP is denoted LBP^{u2}_{p,r}, where u2 stands for uniform and (p, r) means sampling p points over a circle of radius r. The dimension of a uniform pattern histogram is 59.
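The 59-bin figure can be verified by counting: for p = 8 sampling points there are 58 uniform codes (at most two circular 0/1 transitions) plus one bin that collects all non-uniform codes:

```python
def transitions(pattern, p=8):
    """Number of circular 0/1 transitions in a p-bit LBP code."""
    bits = [(pattern >> i) & 1 for i in range(p)]
    return sum(bits[i] != bits[(i + 1) % p] for i in range(p))

# "uniform" codes have at most two circular transitions
uniform = [code for code in range(256) if transitions(code) <= 2]
n_bins = len(uniform) + 1   # one extra bin for all non-uniform codes
```

The 58 uniform codes are the two constant patterns plus the 8 × 7 = 56 patterns with exactly one run of ones, giving the 59 histogram bins used per block.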
Concretely, each image is divided into non-overlapping 16 × 16 blocks and uniform LBP patterns LBP^{u2}_{8,1} are extracted from all the blocks. We concatenate all the LBP patterns into a feature vector, whose dimension is 7788 (= 132 × 59).

Over-complete Local Binary Patterns: besides LBP, we also use its new variant, OCLBP, to improve the overall performance on face identification [29]. Unlike LBP, OCLBP adopts overlapping between adjacent blocks. Formally, a configuration of OCLBP is denoted S : (a, b, v, h, p, r): an image is divided into a × b blocks with vertical overlap v and horizontal overlap h, and then uniform patterns LBP^{u2}_{p,r} are extracted from all the blocks. Moreover, OCLBP is composed of several different configurations: S1 : (16, 16, 1/2, 1/2, 8, 1), S2 : (24, 24, 1/2, 1/2, 8, 2), S3 : (32, 32, 1/2, 1/2, 8, 3). The three configurations consider three block sizes, 16 × 16, 24 × 24 and 32 × 32, and adopt half overlap rates along the vertical and horizontal directions. We shift the block window to produce the overlaps. Taking the 16 × 16 block window for example,

with the shifting step 16/2 = 8 to the left and downwards, the total number of 16 × 16 blocks is 23 × 20 = 460. Similarly, shifting the 24 × 24 window produces 195 blocks and shifting the 32 × 32 window produces 110 blocks. The dimension of our OCLBP vectors is thus 45,135 (= (460 + 195 + 110) × 59). Apparently, OCLBP contains LBP as a subpart; hence using OCLBP always achieves better classification performance than using LBP.

Usually, directly taking the original face descriptors for learning causes computational problems. For example, the time required for multiplications between 45,135-d OCLBP vectors would be unacceptable. Therefore, before learning, we apply whitened PCA to reduce the vector dimension. Since the size of the training set is small (only 263 samples), we keep all the variance during dimensionality reduction. Thus the reduced dimension is 262, and these 262-d feature vectors are taken as inputs to the classical MLP or the siamese MLP.

4.3 Dimensionality Reduction in Face Identification

We evaluate three different methods in our experiments: K-Nearest Neighbors (KNN), MLP and the proposed siamese MLP. Since the siamese MLP is designed for nonlinear mapping rather than classification, it is hard to directly make class predictions on its output; hence we apply KNN on its output to perform class identification. This is also the reason why we evaluate the KNN method as a comparison. Specifically, KNN in our experiments uses the cosine function to measure the pairwise distance, and the number of nearest neighbors K is set to 1.

4.3.1 Output dimension of the siamese MLP

Empirically, the size of the hidden layer is set to the same value for both the classical MLP and the siamese MLP. As the number of different classes in the Extended Yale B database is 38, the output dimension of the classical MLP is fixed to 38. In contrast, the siamese MLP allows a flexible output dimension, so we vary the output dimension and record its influence on the identification accuracy. Note that the input dimension is 262, so we keep the output dimension smaller than 262 in order to perform dimensionality reduction.
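As a sanity check on the OCLBP dimensions of Section 4.2, the block counts 460, 195 and 110 can be reproduced; the sketch assumes the 192 × 168 crop size and that a final, partially fitting window is still counted (hence the ceiling):

```python
import math

def block_count(img_h, img_w, block):
    """Windows of size block x block with half-overlap (stride block // 2);
    a last partially fitting window is kept, hence the ceiling."""
    step = block // 2
    rows = math.ceil((img_h - block) / step) + 1
    cols = math.ceil((img_w - block) / step) + 1
    return rows * cols

H, W = 192, 168  # cropped Extended Yale B image size
counts = [block_count(H, W, b) for b in (16, 24, 32)]  # 460, 195, 110
dim = sum(counts) * 59   # 59-bin uniform LBP histogram per block
```

Summing 460 + 195 + 110 = 765 blocks and multiplying by 59 bins recovers the 45,135-d OCLBP vector length.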
Figure 5 shows the identification accuracy curve of the siamese MLP method on the square-rooted OCLBP feature. We can see that the curve rises rapidly as the output dimension increases from 2, but then climbs much more slowly; the optimal solutions are reached at the larger output dimensions.

4.3.2 Comparison to the classical MLP

Table 1 summarizes the results of the different methods with different face descriptors on the extended Yale B database. The output dimension of the siamese

Fig. 5 Identification accuracy curve of the siamese MLP method on the square-rooted OCLBP feature, with respect to the increasing output dimension.

Table 1 Face identification performance on the extended Yale B database. Generally, siamese MLP = MLP > KNN. The output dimension of the siamese MLP is set to 8.

Feature              | KNN           | MLP           | Siamese MLP
Gabor, original      | .6937 (±.432) | .7972 (±.349) | .797 (±.344)
Gabor, square-rooted | .832 (±.43)   | .9248 (±.27)  | .9262 (±.28)
LBP, original        | .796 (±.42)   | .925 (±.4)    | .9227 (±.39)
LBP, square-rooted   | .8478 (±.5)   | .9628 (±.3)   | .9634 (±.3)
OCLBP, original      | .825 (±.54)   | .964 (±.28)   | .9659 (±.3)
OCLBP, square-rooted | .86 (±.55)    | .9833 (±.7)   | .9842 (±.6)

Table 2 Significance testing between MLP and siamese MLP. A p-value smaller than .05 or .01 would indicate a significant difference. The results confirm no significant difference between MLP and siamese MLP.

Feature              | MLP           | Siamese MLP   | p-value
Gabor, original      | .7972 (±.349) | .797 (±.344)  | .4982
Gabor, square-rooted | .9248 (±.27)  | .9262 (±.28)  | .3559
LBP, original        | .925 (±.4)    | .9227 (±.39)  | .45
LBP, square-rooted   | .9628 (±.3)   | .9634 (±.3)   | .4486
OCLBP, original      | .964 (±.28)   | .9659 (±.3)   | .3364
OCLBP, square-rooted | .9833 (±.7)   | .9842 (±.6)   | .334

MLP is set to 8. Compared with KNN, the siamese MLP brings a significant improvement on face identification. Compared with the classical MLP, the siamese MLP achieves comparable results. For example, on the square-rooted LBP features the siamese MLP obtains an accuracy of .9634, seemingly slightly better than the classical MLP's .9628. Besides, methods using square-rooted features always obtain better performance than those using the original features. This phenomenon is consistent with the one observed on the problem of face verification [29]. To confirm the comparison, we employ the Bootstrap Resampling approach [8] to evaluate the pairwise statistical significance between the two methods. Note that the smaller the p-value, the larger the significance. Usually, we consider a p-value smaller than .05 or .01 to indicate a significant difference. The significance testing results in Table 2 are all in the range [.3,

Fig. 6 Face images that the siamese MLP using square-rooted OCLBP failed to recognize.

.5], showing that there is no significant performance difference between the classical MLP and the siamese MLP. We also test the significance between the siamese MLP and KNN: there the p-value is always 0 on all the different features, demonstrating that the siamese MLP significantly improves the performance over the KNN method. Comparing the three different face descriptors, the results on OCLBP are significantly better than those on Gabor wavelets and those on LBP. For example, the siamese MLP using square-rooted OCLBP achieves an average accuracy of .9842 over the repeated experiments. Figure 6 shows the face images that the siamese MLP failed to recognize. Most of the failure examples are rather dark, so that it is difficult to extract effective facial texture features from them. However, there are also some failure examples under good lighting conditions. This is probably because we apply KNN as the classifier and the final decision relies on the test sample's nearest neighbor in the training set. Since the training data are randomly selected, a good nearest neighbor for each test sample is not guaranteed.

4.4 Dimensionality Reduction in Data Visualization

In this subsection, we apply the siamese MLP to illustrate data visualization on a small subset of the Extended Yale B database. We select the first 4 classes, each with 7 face images, giving 28 face images in total. These images are represented

Fig. 7 Visualization of dimensionality reduction into the 2-d or 3-d spaces using (a) Whitened PCA and (b) Siamese MLP.

by 262-d OCLBP feature vectors. For data visualization, all the input vectors are projected into the 2-d and 3-d spaces, respectively. In addition, we also visualize the projection of whitened PCA as a comparison in Figure 7. Figure 7 (a) shows the data distribution in the 2-d and 3-d target spaces using whitened PCA; points with different colors are from the 4 different classes. We can see that points of different classes are mixed in both the 2-d and 3-d spaces. In contrast, the siamese MLP successfully separates the points of the different classes (Figure 7 (b)). More interestingly, points of the same class concentrate tightly at a certain position, standing as a vertex of a square in the 2-d space or of a regular tetrahedron in the 3-d space. Note that both the square and the regular tetrahedron take the origin point as their center. Thus all the between-class pairs share exactly the same angle: (1) in the 2-d space, the angle between two points from different classes is 90°; (2) in the 3-d space, the between-class angle is about 109.5°. In summary, the objective of our

Fig. 8 Illustration of dimensionality reduction into the 2-d or 3-d spaces using the siamese MLP: (a) random initialization; (b)–(d) after an increasing number of training iterations.
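The evolution shown in Figure 8 is produced by repeated mini-batch gradient updates on pairs. As an illustration only, here is a schematic update on a single pair with a generic contrastive-style loss; this is not necessarily the exact cost function of the paper, and the linear projection, `margin` and learning rate are simplified placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
W = 0.01 * rng.standard_normal((2, 262))   # linear projection, random init (Fig. 8 (a))
margin, lr = 1.0, 0.001                    # small lr for this high-dimensional toy

def step(W, x1, x2, similar):
    """One gradient-descent update on a single pair (contrastive-style loss)."""
    d = W @ (x1 - x2)                      # difference of the two projections
    dist = np.linalg.norm(d) + 1e-12
    if similar:                            # pull within-class pairs together
        grad = np.outer(d, x1 - x2)
    elif dist < margin:                    # push between-class pairs apart
        grad = -((margin - dist) / dist) * np.outer(d, x1 - x2)
    else:                                  # distant enough: no update needed
        grad = np.zeros_like(W)
    return W - lr * grad

x1, x2 = rng.standard_normal(262), rng.standard_normal(262)
W_new = step(W, x1, x2, similar=True)
# After the update, the projections of a similar pair are closer together:
print(np.linalg.norm(W_new @ (x1 - x2)) < np.linalg.norm(W @ (x1 - x2)))  # True
```

Iterating such updates over random mini-batches of within-class and between-class pairs yields the progression from (a) to (d) in Figure 8.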

In summary, the objective of our siamese MLP has been satisfied perfectly: separating the between-class pairs and concentrating the within-class pairs.

Figure 8 pictures a more detailed procedure of data projection by the siamese MLP using the mini-batch gradient descent algorithm (Section 3.4). At the beginning, the siamese MLP is initialized with random parameters, so we observe mixed data classes around the origin point in Figure 8 (a). Driven by the objective of closing the within-class pairs and separating the between-class pairs, the points scatter away after the first iterations. Successively, after further iterations, data from different classes find their own optimal positions, and we can see clear blank boundaries between the different classes. Finally, after still more iterations, data of the same class concentrate at each optimal position in Figure 8 (d).

5 Conclusion

In this work, we have presented the siamese MLP method for dimensionality reduction. One advantage of the siamese MLP is that it allows a flexible output dimension; we have visualized the results of dimensionality reduction into the 2-d and 3-d spaces, showing an interesting geometrical characteristic. Another advantage is that it learns on side information only; we have compared it with the classical MLP on the problem of face identification, showing that the siamese MLP trained with side information achieves classification performance comparable to the classical MLP trained on fully labeled data. In the future, we are interested in changing the proposed objective into a margin-based variant [27] and applying it to manifold learning [15].

References

1. Ahonen, T., Hadid, A., Pietikäinen, M.: Face recognition with local binary patterns. In: Proc. ECCV. Springer (2004)
2. Barkan, O., Weill, J., Wolf, L., Aronowitz, H.: Fast high dimensional vector multiplication face recognition. In: Proc. ICCV. IEEE (2013)
3. Bellet, A., Habrard, A., Sebban, M.: A survey on metric learning for feature vectors and structured data. arXiv preprint (2013)
4. Berlemont, S., Lefebvre, G., Duffner, S., Garcia, C.: Siamese neural network based similarity metric for inertial gesture classification and rejection. In: 11th IEEE International Conference on Automatic Face and Gesture Recognition (2015)
5. Bourlard, H., Wellekens, C.J.: Links between Markov models and multilayer perceptrons. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(12) (1990)
6. Bromley, J., Bentz, J.W., Bottou, L., Guyon, I., LeCun, Y., Moore, C., Säckinger, E., Shah, R.: Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence 7(4) (1993)
7. Chopra, S., Hadsell, R., LeCun, Y.: Learning a similarity metric discriminatively, with application to face verification. In: Proc. CVPR, vol. 1. IEEE (2005)
8. Cottrell, G.W., Metcalfe, J.: EMPATH: face, emotion, and gender recognition using holons. In: Advances in Neural Information Processing Systems. Morgan Kaufmann (1990)

9. Daugman, J.G.: Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Transactions on Acoustics, Speech and Signal Processing 36(7) (1988)
10. Davis, J.V., Kulis, B., Jain, P., Sra, S., Dhillon, I.S.: Information-theoretic metric learning. In: International Conference on Machine Learning. ACM (2007)
11. Duffner, S.: Face image analysis with convolutional neural networks. Ph.D. thesis (2008)
12. Dunteman, G.H.: Principal Components Analysis, vol. 69. Sage (1989)
13. Georghiades, A.S., Belhumeur, P.N., Kriegman, D.: From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(6) (2001)
14. Golomb, B.A., Lawrence, D.T., Sejnowski, T.J.: SexNet: a neural network identifies sex from human faces. In: Advances in Neural Information Processing Systems (1990)
15. Hadsell, R., Chopra, S., LeCun, Y.: Dimensionality reduction by learning an invariant mapping. In: Proc. CVPR, vol. 2. IEEE (2006)
16. Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786) (2006)
17. Ke, Y., Sukthankar, R.: PCA-SIFT: a more distinctive representation for local image descriptors. In: Proc. CVPR, vol. 2. IEEE (2004)
18. Koehn, P.: Statistical significance tests for machine translation evaluation. In: Proc. EMNLP (2004)
19. Lienhart, R., Maydt, J.: An extended set of Haar-like features for rapid object detection. In: International Conference on Image Processing, vol. 1. IEEE (2002)
20. Lippmann, R.P.: Review of neural networks for speech recognition. Neural Computation 1(1) (1989)
21. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Mathematical Programming 45(1-3) (1989)
22. Lu, J., Zhou, X., Tan, Y.P., Shang, Y., Zhou, J.: Neighborhood repulsed metric learning for kinship verification. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(2) (2014)
23. Luenberger, D.G.: Introduction to Linear and Nonlinear Programming, vol. 28. Addison-Wesley, Reading, MA (1973)
24. Ojala, T., Pietikäinen, M., Mäenpää, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(7) (2002)
25. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning internal representations by error propagation. Tech. rep., DTIC Document (1985)
26. Turk, M.A., Pentland, A.P.: Face recognition using eigenfaces. In: Proc. CVPR. IEEE (1991)
27. Weinberger, K.Q., Blitzer, J., Saul, L.K.: Distance metric learning for large margin nearest neighbor classification. In: Advances in Neural Information Processing Systems (2005)
28. Zhang, Z., Lyons, M., Schuster, M., Akamatsu, S.: Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron. In: IEEE International Conference on Automatic Face and Gesture Recognition (1998)
29. Zheng, L., Idrissi, K., Garcia, C., Duffner, S., Baskurt, A.: Triangular similarity metric learning for face verification. In: 11th IEEE International Conference on Automatic Face and Gesture Recognition (2015)


More information

A study of turbo codes for multilevel modulations in Gaussian and mobile channels

A study of turbo codes for multilevel modulations in Gaussian and mobile channels A study of turbo codes for multlevel modulatons n Gaussan and moble channels Lamne Sylla and Paul Forter (sylla, forter)@gel.ulaval.ca Department of Electrcal and Computer Engneerng Laval Unversty, Ste-Foy,

More information

Relevance of Energy Efficiency Gain in Massive MIMO Wireless Network

Relevance of Energy Efficiency Gain in Massive MIMO Wireless Network Relevance of Energy Effcency Gan n Massve MIMO Wreless Network Ahmed Alzahran, Vjey Thayananthan, Muhammad Shuab Quresh Computer Scence Department, Faculty of Computng and Informaton Technology Kng Abdulazz

More information

THEORY OF YARN STRUCTURE by Prof. Bohuslav Neckář, Textile Department, IIT Delhi, New Delhi. Compression of fibrous assemblies

THEORY OF YARN STRUCTURE by Prof. Bohuslav Neckář, Textile Department, IIT Delhi, New Delhi. Compression of fibrous assemblies THEORY OF YARN STRUCTURE by Prof. Bohuslav Neckář, Textle Department, IIT Delh, New Delh. Compresson of fbrous assembles Q1) What was the dea of fbre-to-fbre contact accordng to van Wyk? A1) Accordng to

More information

Comparison of Two Measurement Devices I. Fundamental Ideas.

Comparison of Two Measurement Devices I. Fundamental Ideas. Comparson of Two Measurement Devces I. Fundamental Ideas. ASQ-RS Qualty Conference March 16, 005 Joseph G. Voelkel, COE, RIT Bruce Sskowsk Rechert, Inc. Topcs The Problem, Eample, Mathematcal Model One

More information

MULTIPLE LAYAR KERNEL-BASED APPROACH IN RELEVANCE FEEDBACK CONTENT-BASED IMAGE RETRIEVAL SYSTEM

MULTIPLE LAYAR KERNEL-BASED APPROACH IN RELEVANCE FEEDBACK CONTENT-BASED IMAGE RETRIEVAL SYSTEM Proceedngs of the Fourth Internatonal Conference on Machne Learnng and Cybernetcs, Guangzhou, 18-21 August 2005 MULTIPLE LAYAR KERNEL-BASED APPROACH IN RELEVANCE FEEDBACK CONTENT-BASED IMAGE RETRIEVAL

More information

Adaptive Modulation for Multiple Antenna Channels

Adaptive Modulation for Multiple Antenna Channels Adaptve Modulaton for Multple Antenna Channels June Chul Roh and Bhaskar D. Rao Department of Electrcal and Computer Engneerng Unversty of Calforna, San Dego La Jolla, CA 993-7 E-mal: jroh@ece.ucsd.edu,

More information

Phoneme Probability Estimation with Dynamic Sparsely Connected Artificial Neural Networks

Phoneme Probability Estimation with Dynamic Sparsely Connected Artificial Neural Networks The Free Speech Journal, Issue # 5(1997) Publshed 10/22/97 1997 All rghts reserved. Phoneme Probablty Estmaton wth Dynamc Sparsely Connected Artfcal Neural Networks Nkko Ström, (nkko@speech.kth.se) Department

More information

Generalized Incomplete Trojan-Type Designs with Unequal Cell Sizes

Generalized Incomplete Trojan-Type Designs with Unequal Cell Sizes Internatonal Journal of Theoretcal & Appled Scences 6(1): 50-54(2014) ISSN No. (Prnt): 0975-1718 ISSN No. (Onlne): 2249-3247 Generalzed Incomplete Trojan-Type Desgns wth Unequal Cell Szes Cn Varghese,

More information

TECHNICAL NOTE TERMINATION FOR POINT- TO-POINT SYSTEMS TN TERMINATON FOR POINT-TO-POINT SYSTEMS. Zo = L C. ω - angular frequency = 2πf

TECHNICAL NOTE TERMINATION FOR POINT- TO-POINT SYSTEMS TN TERMINATON FOR POINT-TO-POINT SYSTEMS. Zo = L C. ω - angular frequency = 2πf TECHNICAL NOTE TERMINATION FOR POINT- TO-POINT SYSTEMS INTRODUCTION Because dgtal sgnal rates n computng systems are ncreasng at an astonshng rate, sgnal ntegrty ssues have become far more mportant to

More information

Target Response Adaptation for Correlation Filter Tracking

Target Response Adaptation for Correlation Filter Tracking Target Response Adaptaton for Correlaton Flter Tracng Adel Bb, Matthas Mueller, and Bernard Ghanem Image and Vdeo Understandng Laboratory IVUL, Kng Abdullah Unversty of Scence and Technology KAUST, Saud

More information

EE 508 Lecture 6. Degrees of Freedom The Approximation Problem

EE 508 Lecture 6. Degrees of Freedom The Approximation Problem EE 508 Lecture 6 Degrees of Freedom The Approxmaton Problem Revew from Last Tme Desgn Strategy Theorem: A crcut wth transfer functon T(s) can be obtaned from a crcut wth normalzed transfer functon T n

More information

Lecture 3: Multi-layer perceptron

Lecture 3: Multi-layer perceptron x Fundamental Theores and Applcatons of Neural Netors Lecture 3: Mult-laer perceptron Contents of ths lecture Ree of sngle laer neural ors. Formulaton of the delta learnng rule of sngle laer neural ors.

More information

Evaluate the Effective of Annular Aperture on the OTF for Fractal Optical Modulator

Evaluate the Effective of Annular Aperture on the OTF for Fractal Optical Modulator Global Advanced Research Journal of Management and Busness Studes (ISSN: 2315-5086) Vol. 4(3) pp. 082-086, March, 2015 Avalable onlne http://garj.org/garjmbs/ndex.htm Copyrght 2015 Global Advanced Research

More information

Optimization Frequency Design of Eddy Current Testing

Optimization Frequency Design of Eddy Current Testing Optmzaton Frequency Desgn of Eddy Current Testng NAONG MUNGKUNG 1, KOMKIT CHOMSUWAN 1, NAONG PIMPU 2 AND TOSHIFUMI YUJI 3 1 Department of Electrcal Technology Educaton Kng Mongkut s Unversty of Technology

More information

arxiv: v1 [cs.lg] 22 Jan 2016 Abstract

arxiv: v1 [cs.lg] 22 Jan 2016 Abstract Mne Km MINJE@ILLINOIS.EDU Department of Computer Scence, Unversty of Illnos at Urbana-Champagn, Urbana, IL 61801 USA Pars Smaragds Unversty of Illnos at Urbana-Champagn, Urbana, IL 61801 USA Adobe Research,

More information

Phasor Representation of Sinusoidal Signals

Phasor Representation of Sinusoidal Signals Phasor Representaton of Snusodal Sgnals COSC 44: Dgtal Communcatons Instructor: Dr. Amr Asf Department of Computer Scence and Engneerng York Unversty Handout # 6: Bandpass odulaton Usng Euler dentty e

More information

Equity trend prediction with neural networks

Equity trend prediction with neural networks Res. Lett. Inf. Math. Sc., 2004, Vol. 6, pp 15-29 15 Avalable onlne at http://ms.massey.ac.nz/research/letters/ Equty trend predcton wth neural networks R.HALLIDAY Insttute of Informaton & Mathematcal

More information

Resource Allocation Optimization for Device-to- Device Communication Underlaying Cellular Networks

Resource Allocation Optimization for Device-to- Device Communication Underlaying Cellular Networks Resource Allocaton Optmzaton for Devce-to- Devce Communcaton Underlayng Cellular Networks Bn Wang, L Chen, Xaohang Chen, Xn Zhang, and Dacheng Yang Wreless Theores and Technologes (WT&T) Bejng Unversty

More information

International Journal of Network Security & Its Application (IJNSA), Vol.2, No.1, January SYSTEL, SUPCOM, Tunisia.

International Journal of Network Security & Its Application (IJNSA), Vol.2, No.1, January SYSTEL, SUPCOM, Tunisia. Internatonal Journal of Network Securty & Its Applcaton (IJNSA), Vol.2, No., January 2 WEAKNESS ON CRYPTOGRAPHIC SCHEMES BASED ON REGULAR LDPC CODES Omessaad Hamd, Manel abdelhed 2, Ammar Bouallegue 2,

More information

Webinar Series TMIP VISION

Webinar Series TMIP VISION Webnar Seres TMIP VISION TMIP provdes techncal support and promotes knowledge and nformaton exchange n the transportaton plannng and modelng communty. DISCLAIMER The vews and opnons expressed durng ths

More information

Machine Learning in Production Systems Design Using Genetic Algorithms

Machine Learning in Production Systems Design Using Genetic Algorithms Internatonal Journal of Computatonal Intellgence Volume 4 Number 1 achne Learnng n Producton Systems Desgn Usng Genetc Algorthms Abu Quder Jaber, Yamamoto Hdehko and Rzauddn Raml Abstract To create a soluton

More information

MODEL ORDER REDUCTION AND CONTROLLER DESIGN OF DISCRETE SYSTEM EMPLOYING REAL CODED GENETIC ALGORITHM J. S. Yadav, N. P. Patidar, J.

MODEL ORDER REDUCTION AND CONTROLLER DESIGN OF DISCRETE SYSTEM EMPLOYING REAL CODED GENETIC ALGORITHM J. S. Yadav, N. P. Patidar, J. ABSTRACT Research Artcle MODEL ORDER REDUCTION AND CONTROLLER DESIGN OF DISCRETE SYSTEM EMPLOYING REAL CODED GENETIC ALGORITHM J. S. Yadav, N. P. Patdar, J. Sngha Address for Correspondence Maulana Azad

More information

Kalman Filter and SVR Combinations in Forecasting US Unemployment

Kalman Filter and SVR Combinations in Forecasting US Unemployment Kalman Flter and SVR Combnatons n Forecastng US Unemployment Georgos Sermpns, Charalampos Stasnaks, Andreas Karathanasopoulos To cte ths verson: Georgos Sermpns, Charalampos Stasnaks, Andreas Karathanasopoulos.

More information

Define Y = # of mobiles from M total mobiles that have an adequate link. Measure of average portion of mobiles allocated a link of adequate quality.

Define Y = # of mobiles from M total mobiles that have an adequate link. Measure of average portion of mobiles allocated a link of adequate quality. Wreless Communcatons Technologes 6::559 (Advanced Topcs n Communcatons) Lecture 5 (Aprl th ) and Lecture 6 (May st ) Instructor: Professor Narayan Mandayam Summarzed by: Steve Leung (leungs@ece.rutgers.edu)

More information

The Spectrum Sharing in Cognitive Radio Networks Based on Competitive Price Game

The Spectrum Sharing in Cognitive Radio Networks Based on Competitive Price Game 8 Y. B. LI, R. YAG, Y. LI, F. YE, THE SPECTRUM SHARIG I COGITIVE RADIO ETWORKS BASED O COMPETITIVE The Spectrum Sharng n Cogntve Rado etworks Based on Compettve Prce Game Y-bng LI, Ru YAG., Yun LI, Fang

More information

Traffic balancing over licensed and unlicensed bands in heterogeneous networks

Traffic balancing over licensed and unlicensed bands in heterogeneous networks Correspondence letter Traffc balancng over lcensed and unlcensed bands n heterogeneous networks LI Zhen, CUI Qme, CUI Zhyan, ZHENG We Natonal Engneerng Laboratory for Moble Network Securty, Bejng Unversty

More information