Ensemble of Distributed Learners for Online Classification of Dynamic Data Streams


Ensemble of Distributed Learners for Online Classification of Dynamic Data Streams

Luca Canzian, Member, IEEE, Yu Zhang, and Mihaela van der Schaar, Fellow, IEEE

arXiv v1 [cs.LG] 24 Aug 2013

Abstract: We present an efficient distributed online learning scheme to classify data captured from distributed, heterogeneous, and dynamic data sources. Our scheme consists of multiple distributed local learners that analyze different streams of data which are correlated to a common event that needs to be classified. Each learner uses a local classifier to make a local prediction. The local predictions are then collected by each learner and combined using a weighted majority rule to output the final prediction. We propose a novel online ensemble learning algorithm to update the aggregation rule in order to adapt to the underlying data dynamics. We rigorously determine a bound for the worst-case misclassification probability of our algorithm which depends on the misclassification probabilities of the best static aggregation rule and of the best local classifier. Importantly, the worst-case misclassification probability of our algorithm tends asymptotically to 0 if the misclassification probability of the best static aggregation rule or the misclassification probability of the best local classifier tends to 0. Then we extend our algorithm to address challenges specific to the distributed implementation and we prove new bounds that apply to these settings. Finally, we test our scheme by performing an evaluation study on several data sets. When applied to data sets widely used by the literature dealing with dynamic data streams and concept drift, our scheme exhibits performance gains ranging from 34% to 71% with respect to state-of-the-art solutions.

Index Terms: Online learning, distributed learning, ensemble of classifiers, dynamic streams, concept drift, classification.

1 INTRODUCTION

Recent years have witnessed the proliferation of data-driven applications that exploit the large amount of data captured from distributed, heterogeneous, and dynamic (i.e., whose characteristics vary over time) data sources.
Examples of such applications include surveillance [1], driver assistance systems [2], network monitoring [3], social multimedia [4], and patient monitoring [5]. However, the effective utilization of such high-volume data also involves significant challenges that are the main concern of this work. First, the captured data need to be analyzed online (e.g., to make predictions and timely decisions based on these predictions); thus, the learning algorithms need to deal with the time-varying characteristics of the underlying data, i.e., adequately deal with concept drift [6]. Second, the privacy, communication, and sharing costs make it difficult to collect and store all the observed data. Third, the devices that collect the data may be managed by different entities (e.g., multiple hospitals, multiple camera systems, multiple routers, etc.) and may follow policies (e.g., type of information to exchange, rate at which data are collected, etc.) that are not centrally controllable.

To address these challenges, we propose an online ensemble learning technique, which we refer to as Perceptron Weighted Majority (PWM). Specifically, we consider a set of distributed learners that observe data from different sources, which are correlated to a common event that must be classified by the learners (see Fig. 1). We focus on binary classification problems.(1) For each single instance that enters the system, each learner makes the final classification decision by collecting the local predictions of all the learners and combining them using a weighted majority rule as in [8]-[17]. After having made the final prediction, the learner is told the real value, i.e., the label, associated to the event to classify. Exploiting such information, the learner updates the aggregation weights adopting a perceptron learning rule [18].

The authors are with the Department of Electrical Engineering, UCLA, Los Angeles, CA 90095, USA.

The main features of our scheme are:

DIS: Distributed data streams.
The majority of the existing ensemble schemes proposed in the literature assume that the learners make a prediction after having observed the same data [9]-[17], [19], [20]. Our approach does not make such an assumption, allowing for the possibility that the distributed learners observe different correlated data streams. In particular, the statistical dependency between the label and the observation of a learner can be different from the statistical dependency between the label and the observation of another learner, i.e., each source has a specific generating process [21].

DY: Dynamic data streams. Many existing ensemble schemes [8]-[12] assume that the data are generated from a stationary distribution, i.e., that the concept is stable. Our scheme is developed and evaluated, both analytically and experimentally, considering the possibility that the data streams are dynamic, i.e., they may experience concept drift.

OL: Online learning. To deal with dynamic data streams our scheme must learn the aggregation rule "on the fly". In this way the learners maintain an up-to-date aggregation rule and are able to track the concept drifts.

COM: Low complexity. Some online ensemble learning schemes, such as [12]-[14], [19], need to collect and store chunks of data, which are later processed to update the aggregation model of the system. This requires a large memory and high computational capabilities, thereby resulting in a high implementation cost. Different from these approaches, in our scheme each data instance is processed "on arrival" and afterwards it is thrown away. Only the up-to-date aggregation model is kept in memory. The local prediction of each learner, which is the only information that must be exchanged, consists of a binary value. Moreover, our scheme is scalable to a large number of sources and learners, and the learners can be chained in any hierarchical structure.

ID: Independence from local classifiers. Different from [16], [17], [20], our scheme is general and can be applied to different types of local classifiers, such as support vector machines, decision trees, neural networks, offline/online classifiers, etc. This feature is important, because the different learners can be managed by different entities, willing to cooperate in exchanging information but not to modify their own local classifiers. Also, our algorithm does not need any a priori knowledge about the performance of the local classifiers; it automatically adapts the configuration of the distributed system to the current performance of the local classifiers.

DEL: Delayed labels, missing labels, and asynchronous learners. In distributed environments there are many factors that may impact the performance of the learning system. First, because obtaining the information about the label may be both costly and time consuming, one cannot expect that all the learners always observe the label in a timely manner. Some learners can receive the label with delay, or not receive it at all. Second, the learners can be asynchronous, i.e., they can observe data at different time instants. In this paper we first propose a basic algorithm, considering an idealized scenario in which the above issues are not present, and then we extend our scheme to deal with the above issues.

This work was partially supported by the AFOSR DDDAS grant and the NSF CCF grant.
1. We remark that a multi-class classifier can be decomposed as a cascade of binary classifiers [7].
The rest of this paper is organized as follows. Section 2 reviews the existing literature on ensemble learning techniques. Section 3 presents our formalism, framework, and algorithm for distributed online learning. Section 4 proves a bound for the misclassification probability of our scheme which depends on the misclassification probabilities of the best (unknown) static aggregation rule and of the best (unknown) local classifier. Section 5 discusses several extensions of our learning algorithm to deal with practical issues associated to the distributed implementation of the ensemble of learners, and proves new bounds that apply to these settings. Section 6 presents the empirical evaluation of our algorithm on several data sets. Section 7 concludes the paper.

2 RELATED WORKS

In this section we review the existing literature on ensemble learning techniques and discuss the differences between the cited works and our paper. Ensemble learning techniques [22]-[24] combine a collection of base classifiers into a unique classifier. AdaBoost [8], for example, trains a sequence of classifiers on increasingly more difficult examples and combines them using a weighted majority rule. Our paper is clearly different with respect to traditional offline approaches such as AdaBoost, which rely on the presence of a training set for offline training of the ensemble and assume a stable concept. An online version of AdaBoost is proposed in [12]. When a new chunk of data enters the system, the current classifiers are reweighed, a weighted training set is generated, a new classifier (and its weight) is created on this data set, and the oldest classifier is discarded. Similar proposals are made in [13], [14], [19], [25]. Our work differs from these online boosting-like techniques because (i) it processes each instance "on arrival" only once, without the need to store and reprocess chunks of data, and (ii) it does not require that the local classifiers are centrally retrained (e.g., in a distributed scenario it may be expensive to retrain the local classifiers, or unfeasible if the learners are operated by different entities).
An alternative approach to storing chunks of labeled data consists in updating the ensemble as soon as data flow into the system. [16] and [17] adopt a dynamic weighted majority algorithm, refining, adding, and removing learners based on the global algorithm's performance. [20] proposes a scheme based on two online ensembles with different levels of diversity. The low-diversity ensemble is used for system predictions; the high-diversity ensemble is used to learn the new concept after a drift is detected. Our work differs from [16], [17], [20] because it does not require that the local classifiers are centrally retrained. The literature closest to our work is represented by the multiplicative weight update schemes [9]-[11], [15] that maintain a collection of given learners, predict using a weighted majority rule, and update online the weights associated to the learners in a multiplicative manner. Weighted majority [9] decreases the weights of the learners in the pool that disagree with the label whenever the ensemble makes a mistake. Winnow2 [10] uses a slightly different update rule, but the final effect is the same as weighted majority. In [11] the weights of the learners that agree with the label when the ensemble makes a mistake are increased, and the weights of the learners that disagree with the label are decreased even when the ensemble predicts correctly. To prevent the weights of the learners which performed poorly in the past from becoming too small with respect to the other learners, [15] proposes a modified version of these schemes, adding a phase, after the multiplicative weight update, in which each learner shares a portion of its weight with the other learners. In our algorithm, differently from [9]-[11], [15], the weights are updated in an additive manner and learners can also have negative weights (e.g., a learner that is always wrong would receive a negative weight and could contribute to the system as a learner that is always right).
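The additive-versus-multiplicative distinction is easy to see in code. The following minimal sketch is our own illustration, not code from the cited works; for brevity it applies the updates on every round rather than only when the ensemble errs, and the expert setup (one always right, one always wrong) is a toy assumption:

```python
def multiplicative_update(weights, preds, label, beta=0.5):
    """Weighted-majority-style update: shrink the weights of experts
    that disagreed with the label by a factor beta < 1."""
    return [w * beta if p != label else w for w, p in zip(weights, preds)]

def additive_update(weights, preds, label):
    """Perceptron-style additive update: +1 to agreeing experts,
    -1 to disagreeing ones; weights may become negative."""
    return [w + (1 if p == label else -1) for w, p in zip(weights, preds)]

w_mul = w_add = [1, 1]          # two experts; expert 1 will always be wrong
for label in [1, -1, 1]:
    preds = [label, -label]     # expert 0 right, expert 1 wrong
    w_mul = multiplicative_update(w_mul, preds, label)
    w_add = additive_update(w_add, preds, label)
print(w_mul)  # multiplicative weights shrink toward 0 but stay positive
print(w_add)  # the always-wrong expert's additive weight goes negative
```

Only the additive rule lets the always-wrong expert's weight cross zero, so its inverted predictions can still help the ensemble, as noted above.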
Finally, we differentiate ourselves from all the cited works in another key point: we consider a distributed scenario, allowing for the possibility that the learners observe different data streams. This is the reason why in Section 5 we extend our learning algorithm to address challenges specific to the distributed implementation. Table 1 summarizes the differences between our approach and the cited works in terms of the features described in Section 1.

TABLE 1
Comparison among different ensemble learning works.

Features: DIS, DY, OL, COM, ID, DEL
[8]: X X
[12]-[14], [19], [25]: X X X
[9]-[11], [15]: X X X X
[16], [17], [20]: X X X
our work: X X X X X X

Fig. 1. System model

3 DISTRIBUTED LEARNING FRAMEWORK AND THE PROPOSED ALGORITHM

We consider a set of K distributed learners, denoted by K = {1, ..., K}. Each learner observes a separate sequence of instances. Time is slotted and the learners are synchronized. Throughout the paper, we use the indices i and j to denote particular learners, the indices n and m to denote particular time instants, the index N to denote the possibly infinite time horizon (i.e., for how many slots the system operates), and bold letters to denote vectors. At the beginning of each time slot n, each learner i observes an instance generated by a source S_i^(n). Let x_i^(n) denote the multi-dimensional instance observed by learner i at time instant n, and y^(n) ∈ {-1, 1} denote the corresponding label, a common event that the learners have to classify at time instant n. We call the pair (x_i^(n), y^(n)) a labeled instance. We formally define a source S_i^(n) for learner i at time instant n as the probability density function p_i^(n) over the labeled instance (x_i^(n), y^(n)). We write S^(n) = (S_1^(n), ..., S_K^(n)) for the vector of sources at time instant n.

The task of a generic learner i at time instant n is to predict the label y^(n). The prediction utilizes the idea of ensemble data mining: each learner adopts an individual classifier to generate a local prediction, the local predictions are exchanged, and learner i aggregates its local prediction and the received ones to generate the final prediction ŷ_i^(n) ∈ {-1, 1}. This process is represented in Fig. 1. Let s_i^(n) ∈ {-1, 1} denote the local prediction of learner i at time instant n. As in [9]-[11], [15], in this paper we assume that the local classifiers are given (i.e., s_i^(n) is given) and we focus on the adaptivity of the rule that aggregates the local predictions.
Similarly to most ensemble techniques, such as [8]-[17], we consider a weighted majority aggregation rule in which learner i maintains a weight vector w_i^(n) ≜ (w_{i0}^(n), w_{i1}^(n), ..., w_{iK}^(n)) ∈ R^{K+1}, combines it linearly with the local prediction vector s^(n) ≜ (1, s_1^(n), ..., s_K^(n)), and predicts -1 if the result is negative, 1 otherwise, i.e.,

    ŷ_i^(n) = sgn(w_i^(n) · s^(n)) = -1 if w_i^(n) · s^(n) < 0, 1 otherwise    (1)

where sgn(·) is the sign function (we define sgn(0) ≜ 1) and w_i^(n) · s^(n) ≜ w_{i0}^(n) + Σ_{j=1}^K w_{ij}^(n) s_j^(n) is the inner product of the vectors w_i^(n) and s^(n). The equation w_{i0}^(n) + Σ_{j=1}^K w_{ij}^(n) s_j^(n) = 0 defines a hyperplane in R^K (the space of the local predictions) which separates the positive predictions (i.e., ŷ_i^(n) = 1) from the negative ones (i.e., ŷ_i^(n) = -1). Notice that in most of the weighted majority schemes proposed in the literature [8]-[17], w_{i0}^(n) = 0, which constrains the hyperplane to pass through the origin. However, in our paper the weight w_{i0}^(n) can be thought of as the weight associated to a "virtual learner" that always sends the local prediction 1, and we introduce it to exploit an additional degree of freedom.

We consider the following rule to update the weight vector w_i^(n) at the end of time instant n:

    w_i^(n+1) = w_i^(n)                  if ŷ_i^(n) = y^(n)
    w_i^(n+1) = w_i^(n) + y^(n) s^(n)    otherwise                (2)

That is, after having observed the true label, learner i compares it with its prediction. If the prediction is correct, the model is not modified. If the prediction is incorrect, the weights of the learners that reported a wrong prediction are decreased by one unit, whereas the weights of the learners that reported a correct prediction are increased by one unit.(2) Since (2) is analogous to the learning rule of a Perceptron algorithm [18], we call the resulting online learning scheme Perceptron Weighted Majority (PWM). We initialize the weights w_{ij}^(1) to 0, ∀i, j. Because at the end of each time instant n the value of w_{ij}^(n) can remain constant, decrease by one unit, or increase by one unit, w_{ij}^(n) is always an integer number.

2. This is in the same philosophy of many weighted majority schemes [9], [10], [15] and boosting-like techniques [12]-[14], [19] that improve the model by focusing mainly on those instances on which the actual model fails.

Algorithm: Perceptron Weighted Majority (PWM)
1: Initialization: w_{ij} = 0, ∀i, j
2: For each learner i and time instant n
3:   Observe x_i^(n)
4:   Obtain s^(n) = (1, s_1^(n), ..., s_K^(n))
5:   Predict ŷ_i^(n) ← sgn(w_i · s^(n))
6:   Observe y^(n)
7:   If y^(n) ≠ ŷ_i^(n) do w_i ← w_i + y^(n) s^(n)

To summarize, the sequence of events that takes place at time instant n for each learner i adopting the PWM algorithm can be described as follows. 1. Observation: learner i observes the instance x_i^(n);
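The prediction and update steps above can be sketched in a few lines of Python. This is a minimal illustrative re-implementation under assumed names (PWMLearner and its methods are ours, not the authors' code), run on a toy stream with one always-correct and one always-wrong local classifier:

```python
import numpy as np

class PWMLearner:
    """Sketch of the Perceptron Weighted Majority rule.

    K local predictions in {-1, +1} are aggregated with an integer
    weight vector of size K+1; entry 0 belongs to the "virtual learner"
    that always sends the local prediction +1.
    """

    def __init__(self, num_learners):
        self.w = np.zeros(num_learners + 1, dtype=int)  # all weights start at 0

    def predict(self, local_preds):
        # Build s = (1, s_1, ..., s_K); sgn(0) is defined as +1
        s = np.concatenate(([1], local_preds))
        return (1 if self.w @ s >= 0 else -1), s

    def update(self, s, y_hat, y):
        # Additive perceptron update, applied only on a mistake
        if y_hat != y:
            self.w += y * s

# Toy run: local classifier 1 is always correct, classifier 2 always wrong.
labels = [1, -1, 1, 1, -1, 1]
ens = PWMLearner(num_learners=2)
mistakes = 0
for y in labels:
    y_hat, s = ens.predict(np.array([y, -y]))
    mistakes += (y_hat != y)
    ens.update(s, y_hat, y)
print(mistakes, ens.w)
```

Note how the always-wrong classifier ends up with a negative weight, matching the observation in Section 2 that such a learner can still contribute as one that is always right.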

Importantly, we remark that PWM is designed in the absence of a priori knowledge about the sources and the performance of the local classifiers. We do not need to know a priori whether there are accurate local classifiers or accurate aggregation rules. It is the scheme itself that adapts the configuration of the distributed system to the current performance of the local classifiers.

Fig. 2. Illustrative system of two learners adopting the PWM scheme

2. Local prediction exchange: learner i sends its local prediction s_i^(n) = f_i^(n)(x_i^(n)) to the other learners, and receives the local predictions s_j^(n) = f_j^(n)(x_j^(n)), j ≠ i, from the other learners;
3. Final prediction: learner i computes and outputs its final prediction ŷ_i^(n) = sgn(w_i^(n) · s^(n));
4. Feedback: learner i observes the true label y^(n);
5. Configuration update: learner i updates the weight vector w_i^(n) adopting (2).

Fig. 2 illustrates this sequence of events for a system of two learners.

4 PERFORMANCE OF PWM

In this section we analytically quantify the performance of PWM in terms of its empirical misclassification probability (shortly, misprediction probability), which is defined as the number of prediction mistakes per instance. We prove two upper bounds for the misclassification probability of our scheme. The first bound depends on the misclassification probability of the best (unknown) static aggregation rule, and is particularly useful when the local classifiers are weak (i.e., their performance is comparable to random guessing) but their combination can result in an accurate ensemble.(3) The second bound depends on the misclassification probability of the best (unknown) local classifier, and is particularly useful when there are accurate local classifiers in the system. We then combine these two bounds into a unique bound. We show that the resulting bound and the misclassification probability of PWM tend asymptotically to 0 if the misclassification probability of the best static aggregation rule or the misclassification probability of the best local classifier tends to 0.
Then we formally define the notions of concept and concept drift, and we show that the misclassification probability of PWM tends to 0 if, for each concept, there exists an (unknown) static aggregation rule whose misclassification probability (for the considered concept) tends to 0.

3. It is known that the combination of weak classifiers can result in a highly accurate ensemble [26], in particular when the classifiers are diverse and their errors are independent.

4.1 Definitions

Given the sequence of labeled instances

    D^N ≜ (x_1^(n), ..., x_K^(n), y^(n)), n = 1, ..., N,

we denote by P_i(D^N) the misclassification probability of the local classifier used by learner i, by P*(D^N) the misclassification probability of the most accurate local classifier, and by v(D^N) the number of local classifiers whose misclassification probabilities are P*(D^N):(4)

    P_i(D^N) ≜ (1/N) Σ_{n=1}^N I{ s_i^(n) ≠ y^(n) }
    P*(D^N) ≜ min_i P_i(D^N)
    v(D^N) ≜ |{ i : P_i(D^N) = P*(D^N) }|

where |·| denotes the cardinality of the considered set. Also, we denote by P^O(D^N) the misclassification probability of learner i if it combines the local predictions of all the learners using the optimal static weight vector w^O that minimizes its number of mistakes,

    P^O(D^N) ≜ min_{w^O} (1/N) Σ_{n=1}^N I{ sgn(w^O · s^(n)) ≠ y^(n) }

Remark 1. P^O(D^N) and w^O are the same for all the learners. For this reason, we do not use the subscript i.

Remark 2. P^O(D^N) ≤ P*(D^N); in fact, it is always possible to select a static weight vector such that the final prediction in each time instant n is equal to the prediction of the best classifier.

Remark 3. The computation and adoption of w^O would require knowing in advance, at the beginning of time instant 1, the sequences of local predictions s^(n) and labels y^(n), for every time instant n = 1, ..., N.

Moreover, we denote by P_i^PWM(D^N) the misclassification probability of learner i if it adopts the PWM scheme,

    P_i^PWM(D^N) ≜ (1/N) Σ_{n=1}^N I{ sgn(w_i^(n) · s^(n)) ≠ y^(n) }

where w_{ij}^(1) = 0, ∀i, j, and w_i^(n) evolves according to (2). We denote by P^PWM(D^N) the average misclassification probability of the distributed system if all the learners adopt the PWM scheme,

    P^PWM(D^N) ≜ (1/K) Σ_{i=1}^K P_i^PWM(D^N)    (3)

4. This paper does not distinguish among different classification errors, i.e., between false alarms and misdetections.

Remark 4. In this section P_i^PWM(D^N) = P^PWM(D^N), ∀i, because the weight vectors of the learners are equally initialized and we assumed that the learners are synchronized and always observe the labels; hence w_i^(n) and w_j^(n) evolve in the same way and P_i^PWM(D^N) = P_j^PWM(D^N), ∀i, j. However, in Section 5 we describe several extensions to our online learning algorithm, in which w_i^(n) and w_j^(n) evolve differently, and consequently P_i^PWM(D^N) ≠ P_j^PWM(D^N), i ≠ j.

4.2 Bounds for the PWM misclassification probability

In this subsection we derive the following results. Lemma 1 proves a bound for P^PWM(D^N) as a function of P^O(D^N). Lemma 2 proves a bound for P^PWM(D^N) as a function of P*(D^N). Theorem 1 combines these two bounds into a unique bound. Finally, as a special case of Theorem 1, Theorem 2 shows that P^PWM(D^N) converges to 0 if P*(D^N) or P^O(D^N) converges to 0.

Lemma 1. For every sequence of labeled instances D^N, the misclassification probability P^PWM(D^N) is bounded by

    B_1(D^N) ≜ 2 P^O(D^N) + K(K+1)/N

Proof: See Appendix A.

Remark 5. Lemma 1 shows that it is not always beneficial to have many learners in the system. On one hand, an additional learner can decrease the benchmark misprediction probability P^O(D^N). On the other hand, it increases the number of learners K, and as a consequence the maximum number of errors needed to approach the benchmark weight vector w^O increases. The final impact on P^PWM(D^N) depends on which of the two effects is the strongest.

Remark 6. If the optimal static weight vector w^O allows to always predict correctly the labeled instances D^N, i.e., P^O(D^N) = 0, then P^PWM(D^N) ≤ K(K+1)/N. Hence, the bound increases quadratically in the number of learners, but decreases linearly in the number of instances.

We define the function

    f(x, y) ≜ 2x + K(K+1)/(2Ny) + sqrt( K(K+1)x/(Ny) + (K(K+1)/(2Ny))^2 )

Lemma 2. For every sequence of labeled instances D^N, the misclassification probability P^PWM(D^N) is bounded by

    B_2(D^N) ≜ f(P*(D^N), v(D^N))

Proof: See Appendix B.

Remark 7.
If the best local classifier always predicts the labeled instances D^N correctly, i.e., P*(D^N) = 0, then P^PWM(D^N) ≤ K(K+1)/(N v(D^N)). This bound is v(D^N) times better than the bound in Remark 6.

Remark 8. Asymptotically, for N → +∞, B_1(D^N) → 2 P^O(D^N) and B_2(D^N) → 2 P*(D^N). On one hand, if the local classifiers are weak (i.e., P*(D^N) is close to 0.5) but their aggregation is very accurate (i.e., P^O(D^N) is close to 0), the first bound is usually stricter than the second. On the other hand, if the performance of the best local classifier is comparable with the performance of the optimal static aggregation rule (i.e., P*(D^N) ≈ P^O(D^N)), the second bound is stricter than the first one. Notice that also the bound computed in [9], for the multiplicative update rule, depends linearly on the accuracy of the best classifier.

In the following theorem, we combine B_1(D^N) and B_2(D^N) into a unique bound.

Theorem 1. For every sequence of labeled instances D^N, the misclassification probability P^PWM(D^N) is bounded by

    B(D^N) ≜ min{ B_1(D^N), B_2(D^N), 1 }

Proof: We simply combine Lemmas 1 and 2, and the fact that the misprediction probability cannot be larger than 1.

Importantly, notice that the bound B(D^N) is valid for any time horizon N and for any sequence of labeled instances D^N. As a particular case, if the time horizon tends to infinity and there exists either 1) a static aggregation weight vector whose misclassification probability tends to 0 (i.e., P^O(D^N) → 0), or 2) a local classifier whose misclassification probability tends to 0 (i.e., P*(D^N) → 0), we obtain that the misclassification probability of PWM tends to 0 as well. Notice that P*(D^N) → 0 is a specific case of P^O(D^N) → 0, because P^O(D^N) ≤ P*(D^N). Hence, in the statement of the following theorem we consider only the case P^O(D^N) → 0.

Theorem 2. If lim_{N→+∞} P^O(D^N) = 0, then

    lim_{N→+∞} P^PWM(D^N) = 0

Proof: P^PWM(D^N) ≤ 2 P^O(D^N) + K(K+1)/N and the right-hand side tends to 0 for N → +∞.

4.3 Bound in the Presence of Concept Drifts

Given two time instants n and m, n > m, we write S_i^(n) = S_i^(m) if the labeled instances (x_i^(n), y^(n)) and (x_i^(m), y^(m)) are independently sampled from the same distribution. We write S^(n) = S^(m) if S_i^(n) = S_i^(m), ∀i. As in [6], we refer to a particular vector of sources as a concept. The expression concept drift [3], [6], [13]-[17], [19], [20], [27]-[29] refers to a change of concept that occurs in a certain time instant. According to [6], we say that at time instant n there is a concept drift if S^(n+1) ≠ S^(n). Theorem 2 states that P^PWM(D^N) → 0 if P^O(D^N) → 0. Unfortunately, in the presence of concept drifts it is highly improbable that P^O(D^N) → 0. In fact, the accuracies of the

local classifiers can change considerably from one concept to another, and the best weight vector to aggregate the local predictions changes accordingly. In the following we generalize the result of Theorem 2, considering an assumption that is more realistic if there are concept drifts. We denote by D^{N_c} a sequence of N_c labeled instances generated by the concept S_c^(n). We say that the concept S_c^(n) is learnable if, ∀D^{N_c},

    lim_{N_c→+∞} min_{w_c^O} (1/N_c) Σ_{n=1}^{N_c} I{ sgn(w_c^O · s^(n)) ≠ y^(n) } = 0

That is, the concept S_c^(n) is learnable if there exists a static weight vector w_c^O whose asymptotic misclassification probability, over the labeled instances generated by that concept, tends to 0.

Theorem 3. If D^N, for N → +∞, is generated by a finite number of learnable concepts and a finite number of concept drifts occurred, then

    lim_{N→+∞} P^PWM(D^N) = 0

Proof: See Appendix C.

Remark 9. Theorem 2 requires the existence of a unique weight vector, w^O, whose misclassification probability over the labeled instances generated by all concepts converges to 0. Theorem 3 requires the existence of one weight vector per concept, w_c^O, whose misclassification probability over the labeled instances generated by concept S_c^(n) converges to 0.

5 EXTENDED PWM

So far we have considered an idealized setting in which all the learners always observe an instance at the beginning of the time instant (i.e., they are synchronous), and they always observe the corresponding label at the end of the time instant. In a distributed environment one cannot expect that these assumptions are always satisfied: sometimes the learners can be asynchronous, receive the label with delay, or not receive it at all. In this section we address these challenges, proposing for each of them a modification to the basic PWM scheme introduced in Section 3, and we extend Theorems 1 and 3 for each modified version of PWM.(5) At the end of this section we explicitly write the extended PWM algorithm that includes all the proposed modifications to jointly deal with all the considered challenges.
5.1 Delayed and Out-Of-Order Labels

In some cases the true label corresponding to a time instant n is observed with delay. For example, in a distributed environment one learner can observe the label immediately, and communicate it to the other learners at a later stage. In this subsection we show that our algorithm can be modified in order to deal with this situation, with a price to pay in terms of increased memory.

5. Notice that Theorem 3 is a more general version of Theorem 2, and hence we do not need to extend Theorem 2 as well.

We denote by d_i^(n) the number of time slots after which learner i observes the n-th label. We assume that d_i^(n) is not known a priori, but is bounded by a known maximum delay d_i ≥ d_i^(n), ∀n. Also, we allow for the possibility that the labels are received out of order (e.g., it is possible that learner i observes the label y^(n+1) before the label y^(n)), but we assume that, when a label is received, the time instant it refers to is known. PWM is modified as follows. Learner i maintains in memory all the local prediction vectors that refer to the not yet observed labels. As soon as learner i receives the label y^(m), it computes the prediction ŷ_i^(m) = sgn(w_i^(n) · s^(m)) which it would have made at time instant m with the current weight vector w_i^(n), and updates the weight vector according to

    w_i^(n+1) = w_i^(n)                  if ŷ_i^(m) = y^(m)
    w_i^(n+1) = w_i^(n) + y^(m) s^(m)    otherwise

This update rule is similar to (2), but now the updates may happen with delays. In particular, since different learners experience different delays, the weight vectors w_i^(n) and w_j^(n), i ≠ j, follow different dynamics.

Theorem 4. For every sequence of labeled instances D^N, P^PWM(D^N) is bounded by

    B(D^N) + (1/(KN)) Σ_{i=1}^K d_i

Proof: See Appendix D.

Remark 10. The term (1/(KN)) Σ_{i=1}^K d_i can be interpreted as the maximum loss for the delayed labels.

Theorem 5. If D^N, for N → +∞, is generated by a finite number of learnable concepts and a finite number of concept drifts occurred, then

    lim_{N→+∞} P^PWM(D^N) = 0

Proof: See Appendix E.
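The bookkeeping for delayed, out-of-order labels amounts to buffering each prediction vector until its label arrives and then replaying the update with the current weights. A minimal sketch, our own illustration (function and variable names are assumptions, not the paper's notation):

```python
import numpy as np

pending = {}  # time instant m -> stored prediction vector s^(m)

def on_instance(n, s_vec):
    """Buffer s^(n) until its (possibly delayed, out-of-order) label arrives."""
    pending[n] = s_vec

def on_label(m, y, weights):
    """Replay the update rule for instant m using the *current* weights."""
    s = pending.pop(m)                     # label consumed once, memory freed
    y_hat = 1 if weights @ s >= 0 else -1  # prediction PWM would make now
    if y_hat != y:
        weights = weights + y * s          # same additive rule as before
    return weights

w = np.zeros(3, dtype=int)
on_instance(1, np.array([1, 1, -1]))
on_instance(2, np.array([1, -1, 1]))
w = on_label(2, -1, w)   # label for instant 2 arrives first (out of order)
w = on_label(1, 1, w)
print(w)
```

Because each label is consumed exactly once and pending vectors are dropped when served, the extra memory per learner stays roughly proportional to the maximum delay, in line with the memory cost mentioned above.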
5.2 Missing Labels

In a distributed environment one cannot expect that all the learners always receive the label, in particular in those scenarios in which obtaining the information about the label may be both costly and time consuming. In this subsection we show that our scheme can be easily extended to deal with situations in which the true labels are only occasionally observed. Let g_i^(n) = 1 if learner i observes the label y^(n) at the end of time instant n, and g_i^(n) = 0 otherwise. The following update rule represents the natural extension of (2) to deal with missing labels:

    w_i^(n+1) = w_i^(n)                  if g_i^(n) = 0 or ŷ_i^(n) = y^(n)
    w_i^(n+1) = w_i^(n) + y^(n) s^(n)    otherwise
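In code, the missing-label rule is a one-line guard around the basic update. The sketch below is our own illustration, with g drawn as a Bernoulli(µ) indicator; all names are assumptions:

```python
import random
import numpy as np

def update_with_missing_label(w, s, y, g):
    """Skip the update entirely when the label is not observed (g = 0);
    otherwise apply the usual additive rule on a mistake."""
    if g == 1:
        y_hat = 1 if w @ s >= 0 else -1
        if y_hat != y:
            w = w + y * s
    return w

random.seed(0)
mu = 0.5                      # probability of observing a label
w = np.zeros(3, dtype=int)
for _ in range(100):
    y = random.choice([-1, 1])
    s = np.array([1, y, -y])  # local classifier 1 right, classifier 2 wrong
    g = 1 if random.random() < mu else 0
    w = update_with_missing_label(w, s, y, g)
print(w)
```

On this toy stream the weights settle after the first observed mistake; the point is only that rounds with unobserved labels (g = 0) leave the model untouched.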

That is, learner i updates the weight vector w_i^(n) only when it observes the true label and recognizes it made a prediction error. Notice that different learners observe different labels; therefore, the weight vectors w_i^(n) and w_j^(n), i ≠ j, follow different dynamics.

Now we consider a simple model of missing labels and we derive the equivalents of Theorems 1 and 3. We assume that g_i^(n) is an independent and identically distributed (i.i.d.) process, ∀i, and denote by µ the probability that g_i^(n) = 1, with 0 < µ < 1.(6) That is, at the end of a generic time instant n learner i observes the label with probability µ. Denote by e_i^PWM the number of prediction errors observed by learner i, i.e., the number of times i observes the label and recognizes it made a prediction mistake. We define the function

    λ(y, z) ≜ sqrt( ln(1/y) / (2z) )    (4)

Theorem 6. Given the sequence of instances D^N, for any level of confidence ǫ > 0 such that λ(ǫ, e_i^PWM) < µ, with probability at least 1 - ǫ we have that P_i^PWM(D^N) is bounded by

    B(D^N) / ( µ - λ(ǫ, e_i^PWM) )    (5)

Proof: See Appendix F.

Remark 11. The denominator µ - λ(ǫ, e_i^PWM), which is lower than 1, can be interpreted as the maximum loss for the missing labels. Notice that, for any given level of confidence ǫ, the function λ(ǫ, e_i^PWM) is decreasing in the number of observed errors e_i^PWM, and tends to 0 if e_i^PWM → +∞. As a consequence, the bound (5) tends to B(D^N) divided by the probability µ to observe a label.

Theorem 7. If D^N, for N → +∞, is generated by a finite number of learnable concepts and a finite number of concept drifts occurred, then

    lim_{N→+∞} P^PWM(D^N) = 0    (6)

Proof: See Appendix G.

6. We can extend the analysis considering an observation probability µ_i that depends on the learner. The results would be similar to those obtained with a unique µ, but the notation would be much messier.

5.3 Asynchronous Learners

Another important factor that may impact the performance of an online learning distributed system is the synchronization among the learners. So far we have assumed that each learner observes an instance in every time instant. However, in many practical scenarios different learners may capture instances at different time instants, and they can have different acquisition rates. In this subsection we extend our scheme to deal with this situation.

PWM is modified as follows. A learner does not send a local prediction when it does not observe the instance; however, it can still output a final prediction exploiting the local predictions received from the other learners. A generic learner i maintains two weight vectors: w_{i,s}^(n) and w_{i,a}^(n). At the time instants in which all the learners observe the instances (i.e., when the learners are synchronized), learner i aggregates all the local predictions using w_{i,s}^(n) and then, after having observed the label, updates w_{i,s}^(n) using (2). At the time instants in which some learners do not observe the instances (i.e., when the learners are not synchronized), learner i sets to 0 the non-received local predictions (i.e., it treats the learners that do not observe the instances as "abstainers"), aggregates the local predictions using w_{i,a}^(n), and then, after having observed the label, updates w_{i,a}^(n) using (2) (notice that the weights of the abstainers are not modified). Given the sequence of labeled instances D^N, we denote by M the number of times in which the instances are jointly observed by all the learners. We define the synchronization index α ≜ (N - M)/N. Notice that 0 ≤ α ≤ 1: the lower α, the more synchronized the learners.

Theorem 8. Given the sequence of instances D^N, P^PWM(D^N) is bounded by

    B(D^N) + α

Proof: See Appendix H.

Remark 12. The synchronization index α can be interpreted as the maximum loss for non-synchronized learners. If the learners are always synchronous (i.e., α = 0), Theorem 8 coincides with Theorem 1.

Theorem 9. If D^N, for N → +∞, is generated by a finite number of learnable concepts and a finite number of concept drifts occurred, then

    lim_{N→+∞} P^PWM(D^N) ≤ α    (7)

Proof: See Appendix I.

Remark 13. Different from Theorems 3, 5, and 7, in Theorem 9 the misclassification probability does not tend to 0. In fact, the consequence of non-synchronized learners is that a learner does not have, in all the time instants, the local predictions of all the other learners, and this lack of information may result in a misclassification.

Remark 14. Theorem 9 can be used as a tool to design the acquisition protocol adopted by the learners. If we know that the concepts are learnable and we have to satisfy a misclassification probability constraint P_mis, Eq. (7) can be used to choose the acquisition protocol such that the synchronization index α is equal to or lower than P_mis.

6 EXPERIMENTS

In this section we evaluate empirically the basic PWM algorithm and the extended PWM algorithm proposed in Sections 3 and 5, respectively. In order to compare PWM

with other state-of-the-art ensemble learning techniques that do not deal with a distributed environment, in the first set of experiments (Subsection 6.1) all the learners observe the same data stream, but they are pre-trained on different data sets and hence their local predictions are in general different. In the second set of experiments (Subsection 6.2), different learners observe different data streams. In this case we compare PWM against a learner that predicts using only its local prediction, and analyze the impact on their performance of delayed labels, missing labels, and asynchronous learners.

Algorithm: Extended PWM
  Initialization: w_{ij,s} = w_{ij,a} = 0, ∀i, j
  For each learner i and time instant n:
    If s_j^(n) is received ∀j, do ŷ_i^(n) = sgn(w_{i,s} · s^(n))
    Else
      For each j such that s_j^(n) is not received, do s_j^(n) = 0
      ŷ_i^(n) = sgn(w_{i,a} · s^(n))
    For each instant m ≤ n such that y^(m) is observed:
      If s_j^(m) was received ∀j:
        If y^(m) ≠ sgn(w_{i,s} · s^(m)), do w_{i,s} ← w_{i,s} + y^(m) s^(m)
      Else if y^(m) ≠ sgn(w_{i,a} · s^(m)), do w_{i,a} ← w_{i,a} + y^(m) s^(m)

6.1 Unique Data Stream

In this subsection we test PWM and other state-of-the-art solutions using real data sets that are generated from a unique data stream. First, we shortly describe the data sets, then we discuss the results.

Real Data Sets

We consider four data sets, well known in the data mining community, that refer to real-world problems. In particular, the first three data sets are widely used by the literature dealing with concept drift (which is the closest to our work), because they exhibit evident drifts.

R1: Network Intrusion. The network intrusion data set, used for the KDD Cup 1999 and available in the UCI archive [3], consists of a series of TCP connection records, labeled either as normal connections or as attacks. For a more detailed description of the data set we refer the reader to [3], which shows that the network intrusion data set contains non-stationary data. This data set is widely used in the stream mining literature dealing with concept drift [3], [14], [2], [31].

R2: Electricity Pricing.
The electricity pricing data set holds information for the Australian New South Wales electricity market. The binary label (up or down) identifies the change of the price relative to a moving average of the last 24 hours. For a more detailed description of this dataset we refer the reader to [32]. An appealing property of this data set is that it contains drifts of different types, due to changes in consumption habits, the seasonality, and the expansion of the electricity market. This data set is widely used in the stream mining literature dealing with concept drift [17], [2], [32]–[37].

R3: Forest Cover Type. The forest cover type data set from the UCI archive [3] contains cartographic variables of four wilderness areas of the Roosevelt National Forest in northern Colorado. Each instance refers to a 30 × 30 meter cell of one of these areas and is classified with one of seven possible classes of forest cover type. Our task is to predict if an instance belongs to the first class or to the other classes. For a more detailed description of this dataset we refer the reader to [38]. The forest cover type data set contains drifts because data are collected in four different areas. This data set is widely used in the stream mining literature dealing with concept drift [14], [36], [39], [4].

R4: Credit Card Risk Assessment. In the credit card risk assessment data set, used for the PAKDD 2009 Data Mining Competition [41], each instance contains information about a client that accesses credit for purchasing on a specific retail chain. The client is labeled as good if he was able to return the credit in time, as bad otherwise. For a more detailed description of this dataset we refer the reader to [41]. This data set does not contain drifts because the data were collected during one year with a stable inflation condition. In fact, to the best of our knowledge, the only work dealing with concept drift that uses this data set is [2].

Results

In this experiment we compare our scheme with other state-of-the-art ensemble learning algorithms.
Table 2 lists the considered algorithms, the corresponding references, the parameters we adopted (equal to the ones used in the corresponding papers, except for the window size, which is obtained following a tuning procedure), and their performance in the considered data sets. We shortly described these algorithms in Section 2; for a more detailed description we refer the reader to the cited literature. For each data set we consider a set of 8 learners and we use logistic regression classifiers for the learners' local predictions. Each local classifier is pre-trained using an individual training data set and kept fixed for the whole simulation (except for the OnAda, Wang, and DDD schemes, in which the base classifiers are retrained online). The training and testing procedures are as follows. From the whole data set we select 8 training data sets, each of them consisting of Z sequential records. Z is equal to 5,000 for the data sets R1 and R3, and 2,000 for R2 and R4. Then we take other sequential records (20,000 for R1 and R3, and 8,000 for R2 and R4) to generate a set in which the local classifiers are tested, and the results are used to train offline Adaboost. Finally, we select other sequential records (20,000 for R1 and R3, 21,000 for R2, and 26,000 for R4) to generate the testing set that is used to run the simulations and test all the considered schemes. Table 2 reports the final misclassification probability in percentages (i.e., multiplied by 100) obtained for each data set for the considered schemes. For the first three data sets, which exhibit concept drifts, the schemes that update their models after each instance (DDD, WM, Blum, TrackExp, and PWM) outperform the static schemes (AM and Ada) and the

TABLE 2
The considered schemes, their parameters, and their percentages of misclassifications in the data sets R1–R4

Abbreviation  Name of the Scheme                 Reference  Parameters
AM            Average Majority                   [3]
Ada           Adaboost                           [8]
OnAda         Fan's Online Adaboost              [12]       Window size: W =
Wang          Wang's Online Adaboost             [13]       Window size: W =
DDD           Diversity for Dealing with Drifts  [2]        Diversity parameters: λ_l = 1, λ_h =
WM            Weighted Majority algorithm        [9]        Multiplicative parameter: β =
Blum          Blum's variant of WM               [11]       Multiplicative parameters: β = 0.5, γ =
TrackExp      Herbster's variant of WM           [15]       Multiplicative and sharing parameters: β = 0.5, α =
PWM           Perceptron Weighted Majority       our work

schemes that update their model after a chunk of instances enters the system (OnAda and Wang). This result shows that the static schemes are not able to adapt to changes in concept, and the schemes that need to wait for a chunk of data adapt slowly because 1) they have to wait for the last instance of the chunk before updating the model, and 2) a chunk of data can contain instances belonging to different concepts, hence the model built on it can be inaccurate to predict the current concept. Importantly, in the first three data sets PWM outperforms all the other schemes, whereas the second best scheme is WM. The gain of PWM (in terms of reduction of the misclassification probability) with respect to WM is about 34% for R1, 38% for R2, and 71% for R3. We remark that the main differences among our scheme and WM are 1) the weights update rule (additive vs. multiplicative), and 2) the weight w_0^(n) associated to the virtual learner that always sends the local prediction 1. To investigate the real reason of the gain of PWM we tested also a version of PWM in which w_0^(n) = 0, ∀n, obtaining the following percentages of misclassifications in the first three data sets: 0.23, 14.4, and 4.1. Hence, the weight w_0^(n) can slightly help to increase the accuracy of the distributed system, but the main reason why PWM outperforms WM in these data sets is the update rule.
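The contrast between the additive, mistake-driven update of rule (2) and WM's multiplicative discount can be sketched in a few lines of Python. This is a toy illustration only: the two-expert stream, the function names, and β = 0.5 are ours, and the virtual learner and abstention handling of the full algorithm are omitted.

```python
import numpy as np

def pwm_update(w, s, y):
    """Additive (Perceptron-style) update, as in rule (2): only when the
    aggregated prediction is wrong, move the weights towards y * s."""
    if np.sign(w @ s) != y:      # final (aggregated) prediction is wrong
        w = w + y * s            # additive correction
    return w

def wm_update(w, s, y, beta=0.5):
    """Multiplicative (Weighted Majority-style) update: discount the weight
    of every expert whose local prediction disagrees with the label."""
    w = w.copy()
    w[s != y] *= beta            # shrink the weights of the wrong experts
    return w

# Toy stream: expert 0 is always right, expert 1 is always wrong.
w_add = np.zeros(2)
w_mul = np.ones(2)
for _ in range(20):
    s = np.array([1.0, -1.0])    # local predictions of the two experts
    y = 1                        # true label
    w_add = pwm_update(w_add, s, y)
    w_mul = wm_update(w_mul, s, y)
```

Note the qualitative difference: the additive rule changes the weights only while the ensemble itself errs (here, only on the first instance), while the multiplicative rule keeps shrinking every erring expert on every labeled instance.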
Differently from the first three data sets, in R4, the data set that does not contain drifts, Ada, OnAda, and Wang outperform the other schemes. In fact, they exploit many stored labeled instances to build their models, and this results in more accurate models when the data are generated from a static distribution.

6.2 Different Data Streams

In this subsection we evaluate PWM using synthetic data sets in which different learners observe different data streams, and analyze the impact of delayed labels, missing labels, and asynchronous learners. First, we shortly describe the data sets, then we discuss the results.

Synthetic Data Sets

We consider three synthetic data sets to carry on different experiments. The first data set represents a separating hyperplane that rotates slowly; we use it to simulate gradual drifts [6], [2], [34]. Similar data sets are widely adopted in the stream mining literature dealing with concept drift [3], [14], [2], [29]. In the second data set, similarly to [42], each learner observes a local event that is embedded in a zero-mean Gaussian noise. Concept drifts occur because the accuracies of the observations evolve following Markov processes. The third data set is a simple Gaussian distributed data set in which the concept is stable. We use this data set because we can analytically compute the optimal misclassification probability P^O(D_N) and investigate how strict the bound B(D_N) is.

S1: Rotating Hyperplane. Each learner i observes a 3-dimensional instance x_i^(n) = (x_{i,1}^(n), x_{i,2}^(n), x_{i,3}^(n)) that is uniformly distributed in [−1, 1]^3, and is independent from x_i^(m), n ≠ m, and from x_j^(m), j ≠ i. The label is a deterministic function of the instances observed by the first Ñ < N learners (the other learners observe irrelevant instances). Specifically, y^(n) = 1 if Σ_{i=1}^{Ñ} Σ_{l=1}^{3} θ_{i,l}^(n) x_{i,l}^(n) ≥ 0, y^(n) = −1 otherwise. The parameters θ_{i,l}^(n) are unknown and time varying. As in [29], each θ_{i,l}^(1) is independently generated according to a zero-mean unit-variance Gaussian distribution N(0, 1), and θ_{i,l}^(n) = θ_{i,l}^(n−1) + δ_{i,l}^(n), where δ_{i,l}^(n) ~ N(0, 0.1).

S2: Distributed Event Detection.
Each learner i monitors the occurrence of a particular local event. Let e_i^(n) = 1 if the local event monitored by learner i occurs at time instant n, e_i^(n) = −1 otherwise. e_i^(n) is an i.i.d. process, and the probability that e_i^(n) = 1 is 0.5, ∀i, n. The observation of learner i is x_i^(n) = e_i^(n) + β_i^(n), where β_i^(n) is an i.i.d. zero-mean Gaussian process. To simulate concept drifts, we assume that a source can be in two different states: good or bad. In the good state β_i^(n) ~ N(0, 0.5), in the bad state β_i^(n) ~ N(0, 1). The state of the source evolves as a Markov process with a probability 0.1 to transit from one state to the other.

S3: Gaussian Distribution. The labels are generated according to a Bernoulli process with parameter 0.5, and the instance x^(n) = (x_1^(n), ..., x_N^(n)) is generated according to an N-dimensional Gaussian distribution x^(n) ~ N(y^(n) µ, Σ), where Σ is the identity matrix. That is, if the label is 1 (−1), each component x_i^(n) is independently generated according to a Gaussian distribution with mean µ (−µ) and unitary variance. A generic learner i observes only the component x_i^(n) of the whole instance x^(n).
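The S1 stream described above can be generated along the following lines. This is a sketch under our reading of the setup: the parameter names are ours, and we read δ ~ N(0, 0.1) as a Gaussian with variance 0.1.

```python
import numpy as np

def s1_stream(T, n_learners=16, n_relevant=8, drift_var=0.1, seed=0):
    """Sketch of the S1 rotating-hyperplane stream: each learner sees a
    3-dimensional instance uniform in [-1, 1]^3; the label depends only on
    the first `n_relevant` learners, through slowly drifting parameters
    theta that follow a Gaussian random walk (gradual concept drift)."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal((n_relevant, 3))        # theta^(1) ~ N(0, 1)
    for _ in range(T):
        x = rng.uniform(-1.0, 1.0, size=(n_learners, 3))
        # Label from the relevant learners only; the rest is noise.
        y = 1 if np.sum(theta * x[:n_relevant]) >= 0 else -1
        yield x, y
        # Random-walk drift of the hyperplane parameters.
        theta += rng.normal(0.0, np.sqrt(drift_var), size=theta.shape)

data = list(s1_stream(T=1000))
labels = [y for _, y in data]
```

Because only the first `n_relevant` rows of each instance matter, a stream generated this way also lets one check whether an aggregation rule learns to down-weight the irrelevant learners.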

Results

In the first set of experiments we adopt the synthetic data set S1 to evaluate the misprediction probability of a generic learner, which we refer to as learner 1, when it predicts by its own (ALONE), and when it adopts PWM. We consider a set of N = 16 learners, in which the last 8 learners observe irrelevant instances. For each simulation we generate a data set of 1,000 instances. We use non-pre-trained online logistic regression classifiers for the learners' local predictions. We run 1,000 simulations and average the results. The final results are reported in the four sub-figures of Fig. 3, and are discussed in the following.

The top left sub-figure shows how the misclassification probability of learner 1 varies, in the idealized setting (i.e., without the issues described in Section 5), with respect to the number of learners that PWM aggregates. If there is only one learner, ALONE and PWM are equivalent, but the gap between the performance obtainable by ALONE and the performance achievable by PWM increases as the number of learners that PWM aggregates increases. In particular, if the local predictions of all the learners are aggregated, the misclassification probability of PWM is less than half the misclassification probability of ALONE. Notice that the performance of PWM remains constant from 8 to 16 learners, and this is a positive result because the last 8 learners observe irrelevant instances. PWM automatically gives them a low weight such that their (noisy) local predictions do not influence the final prediction. In fact, the simulation for N = 16 learners shows that the average absolute weight of the first 8 learners is about twice the average absolute weight of the last 8 learners. In all the following experiments we consider N = 16 learners.

Now we assume that learner 1 observes the labels after some time instants, and each delay is uniformly distributed in [0, D]. The top right sub-figure shows how the misclassification probability varies with respect to the average delay D/2.
We can see that the delay does not considerably affect the performance; in fact, both misclassification probabilities slightly increase if the delay increases, and the gap between them remains constant. In the next experiment we analyze the impact of missing labels on the performance of learner 1. The bottom left sub-figure shows how the misclassification probability varies with respect to the probability that learner 1 observes a label. Even when the probability of observing a label is 0.1, the misclassification probability of PWM is about half the misclassification probability of ALONE. This gain is possible because learner 1, adopting PWM, automatically exploits the fact that the other learners are learning. Similar considerations are valid when learner 1 observes an instance with a certain probability (see the bottom right sub-figure), which can be interpreted as the reciprocal of the arrival rate. The impact on the misclassification probabilities of missing instances is stronger (i.e., the misclassification probabilities are higher) than the impact of missing labels. In fact, when instances are not observed, not only does learner 1 not update the weight vector, it also waits more time between two consecutive predictions, hence the concept between two consecutive predictions can change consistently. When the probability of observing an instance is 0.1, the gain of PWM, with respect to ALONE, is about 40%.

Fig. 3. Misclassification probability of learner 1 if it predicts alone and if it uses PWM, for the data set S1

Fig. 4. Misclassification probability of learner 1 if it predicts alone and if it uses PWM, for the data set S2
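The missing-label results above can be related to Theorem 6, whose high-probability bound is straightforward to evaluate numerically. The following is a minimal sketch under our reading of (4) and (5); the function names and the example numbers are ours.

```python
import math

def lam(y, z):
    # lambda(y, z) = sqrt( ln(1/y) / (2 z) ), eq. (4)
    return math.sqrt(math.log(1.0 / y) / (2.0 * z))

def missing_label_bound(B, mu, eps, e_obs):
    """Theorem 6 (sketch): with probability at least 1 - eps, the
    misclassification probability is at most B / (mu - lambda(eps, e_obs)),
    provided lambda(eps, e_obs) < mu (mu = probability of observing a label,
    e_obs = number of observed prediction errors)."""
    l = lam(eps, e_obs)
    if l >= mu:
        raise ValueError("lambda(eps, e_obs) must be below mu")
    return B / (mu - l)

# As e_obs grows, lambda shrinks and the bound approaches B / mu (Remark 11).
loose = missing_label_bound(B=0.05, mu=0.5, eps=0.01, e_obs=100)
tight = missing_label_bound(B=0.05, mu=0.5, eps=0.01, e_obs=10**6)
```

With the illustrative values above, `tight` sits just above B/µ = 0.1, matching the limiting behavior described in Remark 11.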
In the second set of experiments we use a similar set-up as in the first set of experiments, but we adopt the synthetic data set S2. We consider a set of N = 8 learners and for each simulation we generate 1,000 instances. Each learner uses a non-pre-trained online logistic regression classifier to learn the best threshold to adopt to classify the local event. We run 100 simulations and average the results. The final results are reported in the four sub-figures of Fig. 4, and are briefly discussed in the following. The top left sub-figure shows that the misclassification probability of PWM decreases linearly in the number of learners until the local predictions of all learners are aggregated; in this case the misclassification probability of PWM is about 0.1, whereas the misclassification probability of ALONE is about 0.47. As in the first set of experiments, the delay does not affect the performance of the two schemes, and the performance of PWM is much better than the performance of ALONE even when the probability of observing the label is very low. Differently from the first set of experiments, with the data set S2 the performance of PWM is strongly affected by the synchronicity of the learners, and when the learners observe few instances the misclassification of PWM becomes close

to the misclassification of ALONE.

Fig. 5. The bound B(D_N) and the misclassification probability of learner 1 if 1) it predicts by its own, 2) it uses AM, 3) it uses PWM, for the data set S3

In the last experiment we adopt the data set S3 to investigate how strict the bound B(D_N) is. For each simulation we consider N = 8 learners and generate a data set of 1,000 instances. We assume that the local prediction of learner i is −1 if its observation x_i^(n) is negative, 1 otherwise. It is possible to show that, given the structure of the problem, this represents the most accurate policy for the local prediction, and the best possible aggregation rule is the average majority (AM). We run 1,000 simulations and average the results. Fig. 5 shows the bound B(D_N) and the misclassification probability of learner 1 if 1) it predicts by its own (ALONE), 2) it uses AM, and 3) it uses PWM, varying the parameter µ. If µ is low, the instances corresponding to negative and positive labels are similar, hence it is more difficult to predict correctly the labels. Fig. 5 shows that, in this case, the misclassification probability of PWM is much lower than the bound, and it is very close to the misclassification probability of AM, which is the best aggregation rule in this scenario. With the increase of µ, the misclassification probabilities of all the schemes decrease, and the bound becomes stricter to the real performance of PWM. Notice that the curve representing the bound has a cusp at about µ = 1.75. In fact, before this value B_1(D_N) is stricter than B_2(D_N), whereas for µ > 1.75, B_2(D_N) is lower than B_1(D_N). This agrees with Remark 8: when µ is low the local classifiers are inaccurate (see ALONE), but their ensemble can be very accurate (see AM), and B_1(D_N) is stricter than B_2(D_N); whereas, when µ is high the local classifiers are very accurate and B_2(D_N) becomes stricter than B_1(D_N).

7 CONCLUSION

We proposed a distributed online ensemble learning algorithm to classify data captured from distributed, heterogeneous, and dynamic data sources.
Our approach has limited communication, computation, energy, and memory requirements. We rigorously determined a bound for the worst-case misclassification probability of our algorithm, which depends on the misclassification probabilities of the best static aggregation rule and of the best local classifier. Importantly, this bound tends asymptotically to 0 if the misclassification probability of the best static aggregation rule tends to 0. We extended our algorithm and the corresponding bounds such that they can address challenges specific to the distributed implementation. Simulation results show the efficacy of the proposed approach. When applied to real data sets widely used by the literature dealing with dynamic data streams and concept drift, our scheme exhibits performance gains ranging from 34% to 71% with respect to state-of-the-art solutions.

APPENDIX A
PROOF OF LEMMA 1

Proof: Since P_i^PWM(D_N) = P^PWM(D_N), ∀i, we can derive the bound with respect to the misclassification probability P_i^PWM(D_N) of a generic learner i. The proof departs from [43, Theorem 2], which states that, for a general Perceptron algorithm (i.e., s^(n) can belong to whatever subset of ℝ^{N+1}), if ||s^(n)|| ≤ R, ∀n, then for every γ > 0 and vector u ∈ ℝ^{N+1}, ||u|| = 1, the number of prediction errors e^PWM(D_N) of the online Perceptron algorithm on the sequence D_N is bounded by

e^PWM(D_N) ≤ ( (R + √(γD)) / γ )²    (8)

where D = Σ_{n=1}^{N} d_n, d_n = max(0, γ − y_n(u · s^(n))). Starting from this bound, we exploit the structure of our problem (i.e., s_j^(n) ∈ {−1, 1}) to derive the bound B_1(D_N). Since in our case ||s^(n)|| = √(N+1), we can consider R = √(N+1). Notice that the last N elements of s^(n), i.e., the local predictions, represent a particular vertex of a hypercube in ℝ^N, and the optimal a posteriori weight vector w^O represents a hyperplane in ℝ^N which separates the 2^N vertices of the hypercube into two subsets V_{−1} and V_1, representing the vertices resulting in a negative and positive prediction respectively. Now we consider two scenarios: (1) either V_{−1} or V_1 is empty; (2) both V_{−1} and V_1 are not empty. We consider the first scenario.
In this situation the optimal policy w^O predicts always 1 or −1, independently of the local predictions (this case is not very interesting in practice, but we analyze it for completeness). The geometric interpretation is that the separating hyperplane does not intersect the hypercube. Let γ be the distance between the separating hyperplane and the closest vertex of the hypercube, and u = w^O / ||w^O||. If w^O predicts correctly the n-th instance, then y_n(u · s^(n)) ≥ γ, hence d_n = 0. If w^O makes a mistake in the n-th instance, then y_n(u · s^(n)) ≤ −γ and y_n(u · s^(n)) ≥ −γ − 2√N (because the closest vertex is 2√N distant from the farthest one), therefore d_n ≤ 2γ + 2√N. Hence, we obtain D ≤ 2 e^O(D_N)(γ + √N), where e^O(D_N) is the number of mistakes made adopting w^O, and

( (R + √(γD)) / γ )² ≤ ( (√(N+1) + √(2 e^O(D_N) γ (γ + √N))) / γ )²

The right side of the above inequality is decreasing in γ. Since we can consider other optimal a posteriori weight vectors w^O and since there is no constraint on how far the separating hyperplane could be with respect to the hypercube, taking the


More information

Priority based Dynamic Multiple Robot Path Planning

Priority based Dynamic Multiple Robot Path Planning 2nd Internatonal Conference on Autonomous obots and Agents Prorty based Dynamc Multple obot Path Plannng Abstract Taxong Zheng Department of Automaton Chongqng Unversty of Post and Telecommuncaton, Chna

More information

Introduction to Coalescent Models. Biostatistics 666 Lecture 4

Introduction to Coalescent Models. Biostatistics 666 Lecture 4 Introducton to Coalescent Models Bostatstcs 666 Lecture 4 Last Lecture Lnkage Equlbrum Expected state for dstant markers Lnkage Dsequlbrum Assocaton between neghborng alleles Expected to decrease wth dstance

More information

Define Y = # of mobiles from M total mobiles that have an adequate link. Measure of average portion of mobiles allocated a link of adequate quality.

Define Y = # of mobiles from M total mobiles that have an adequate link. Measure of average portion of mobiles allocated a link of adequate quality. Wreless Communcatons Technologes 6::559 (Advanced Topcs n Communcatons) Lecture 5 (Aprl th ) and Lecture 6 (May st ) Instructor: Professor Narayan Mandayam Summarzed by: Steve Leung (leungs@ece.rutgers.edu)

More information

Research of Dispatching Method in Elevator Group Control System Based on Fuzzy Neural Network. Yufeng Dai a, Yun Du b

Research of Dispatching Method in Elevator Group Control System Based on Fuzzy Neural Network. Yufeng Dai a, Yun Du b 2nd Internatonal Conference on Computer Engneerng, Informaton Scence & Applcaton Technology (ICCIA 207) Research of Dspatchng Method n Elevator Group Control System Based on Fuzzy Neural Network Yufeng

More information

Passive Filters. References: Barbow (pp ), Hayes & Horowitz (pp 32-60), Rizzoni (Chap. 6)

Passive Filters. References: Barbow (pp ), Hayes & Horowitz (pp 32-60), Rizzoni (Chap. 6) Passve Flters eferences: Barbow (pp 6575), Hayes & Horowtz (pp 360), zzon (Chap. 6) Frequencyselectve or flter crcuts pass to the output only those nput sgnals that are n a desred range of frequences (called

More information

An Alternation Diffusion LMS Estimation Strategy over Wireless Sensor Network

An Alternation Diffusion LMS Estimation Strategy over Wireless Sensor Network Progress In Electromagnetcs Research M, Vol. 70, 135 143, 2018 An Alternaton Dffuson LMS Estmaton Strategy over Wreless Sensor Network Ln L * and Donghu L Abstract Ths paper presents a dstrbuted estmaton

More information

Understanding the Spike Algorithm

Understanding the Spike Algorithm Understandng the Spke Algorthm Vctor Ejkhout and Robert van de Gejn May, ntroducton The parallel soluton of lnear systems has a long hstory, spannng both drect and teratve methods Whle drect methods exst

More information

Joint Adaptive Modulation and Power Allocation in Cognitive Radio Networks

Joint Adaptive Modulation and Power Allocation in Cognitive Radio Networks I. J. Communcatons, etwork and System Scences, 8, 3, 7-83 Publshed Onlne August 8 n ScRes (http://www.scrp.org/journal/jcns/). Jont Adaptve Modulaton and Power Allocaton n Cogntve Rado etworks Dong LI,

More information

Performance Analysis of Multi User MIMO System with Block-Diagonalization Precoding Scheme

Performance Analysis of Multi User MIMO System with Block-Diagonalization Precoding Scheme Performance Analyss of Mult User MIMO System wth Block-Dagonalzaton Precodng Scheme Yoon Hyun m and Jn Young m, wanwoon Unversty, Department of Electroncs Convergence Engneerng, Wolgye-Dong, Nowon-Gu,

More information

ANNUAL OF NAVIGATION 11/2006

ANNUAL OF NAVIGATION 11/2006 ANNUAL OF NAVIGATION 11/2006 TOMASZ PRACZYK Naval Unversty of Gdyna A FEEDFORWARD LINEAR NEURAL NETWORK WITH HEBBA SELFORGANIZATION IN RADAR IMAGE COMPRESSION ABSTRACT The artcle presents the applcaton

More information

A Preliminary Study on Targets Association Algorithm of Radar and AIS Using BP Neural Network

A Preliminary Study on Targets Association Algorithm of Radar and AIS Using BP Neural Network Avalable onlne at www.scencedrect.com Proceda Engneerng 5 (2 44 445 A Prelmnary Study on Targets Assocaton Algorthm of Radar and AIS Usng BP Neural Networ Hu Xaoru a, Ln Changchuan a a Navgaton Insttute

More information

Topology Control for C-RAN Architecture Based on Complex Network

Topology Control for C-RAN Architecture Based on Complex Network Topology Control for C-RAN Archtecture Based on Complex Network Zhanun Lu, Yung He, Yunpeng L, Zhaoy L, Ka Dng Chongqng key laboratory of moble communcatons technology Chongqng unversty of post and telecommuncaton

More information

Walsh Function Based Synthesis Method of PWM Pattern for Full-Bridge Inverter

Walsh Function Based Synthesis Method of PWM Pattern for Full-Bridge Inverter Walsh Functon Based Synthess Method of PWM Pattern for Full-Brdge Inverter Sej Kondo and Krt Choesa Nagaoka Unversty of Technology 63-, Kamtomoka-cho, Nagaoka 9-, JAPAN Fax: +8-58-7-95, Phone: +8-58-7-957

More information

Latency Insertion Method (LIM) for IR Drop Analysis in Power Grid

Latency Insertion Method (LIM) for IR Drop Analysis in Power Grid Abstract Latency Inserton Method (LIM) for IR Drop Analyss n Power Grd Dmtr Klokotov, and José Schutt-Ané Wth the steadly growng number of transstors on a chp, and constantly tghtenng voltage budgets,

More information

antenna antenna (4.139)

antenna antenna (4.139) .6.6 The Lmts of Usable Input Levels for LNAs The sgnal voltage level delvered to the nput of an LNA from the antenna may vary n a very wde nterval, from very weak sgnals comparable to the nose level,

More information

On the Feasibility of Receive Collaboration in Wireless Sensor Networks

On the Feasibility of Receive Collaboration in Wireless Sensor Networks On the Feasblty of Receve Collaboraton n Wreless Sensor Networs B. Bantaleb, S. Sgg and M. Begl Computer Scence Department Insttute of Operatng System and Computer Networs (IBR) Braunschweg, Germany {behnam,

More information

Optimizing a System of Threshold-based Sensors with Application to Biosurveillance

Optimizing a System of Threshold-based Sensors with Application to Biosurveillance Optmzng a System of Threshold-based Sensors wth Applcaton to Bosurvellance Ronald D. Frcker, Jr. Thrd Annual Quanttatve Methods n Defense and Natonal Securty Conference May 28, 2008 What s Bosurvellance?

More information

Queuing-Based Dynamic Channel Selection for Heterogeneous Multimedia Applications over Cognitive Radio Networks

Queuing-Based Dynamic Channel Selection for Heterogeneous Multimedia Applications over Cognitive Radio Networks 1 Queung-Based Dynamc Channel Selecton for Heterogeneous ultmeda Applcatons over Cogntve Rado Networks Hsen-Po Shang and haela van der Schaar Department of Electrcal Engneerng (EE), Unversty of Calforna

More information

A Novel Optimization of the Distance Source Routing (DSR) Protocol for the Mobile Ad Hoc Networks (MANET)

A Novel Optimization of the Distance Source Routing (DSR) Protocol for the Mobile Ad Hoc Networks (MANET) A Novel Optmzaton of the Dstance Source Routng (DSR) Protocol for the Moble Ad Hoc Networs (MANET) Syed S. Rzv 1, Majd A. Jafr, and Khaled Ellethy Computer Scence and Engneerng Department Unversty of Brdgeport

More information

Introduction to Coalescent Models. Biostatistics 666

Introduction to Coalescent Models. Biostatistics 666 Introducton to Coalescent Models Bostatstcs 666 Prevously Allele frequences Hardy Wenberg Equlbrum Lnkage Equlbrum Expected state for dstant markers Lnkage Dsequlbrum Assocaton between neghborng alleles

More information

UNIT 11 TWO-PERSON ZERO-SUM GAMES WITH SADDLE POINT

UNIT 11 TWO-PERSON ZERO-SUM GAMES WITH SADDLE POINT UNIT TWO-PERSON ZERO-SUM GAMES WITH SADDLE POINT Structure. Introducton Obectves. Key Terms Used n Game Theory.3 The Maxmn-Mnmax Prncple.4 Summary.5 Solutons/Answers. INTRODUCTION In Game Theory, the word

More information

Multi-Robot Map-Merging-Free Connectivity-Based Positioning and Tethering in Unknown Environments

Multi-Robot Map-Merging-Free Connectivity-Based Positioning and Tethering in Unknown Environments Mult-Robot Map-Mergng-Free Connectvty-Based Postonng and Tetherng n Unknown Envronments Somchaya Lemhetcharat and Manuela Veloso February 16, 2012 Abstract We consder a set of statc towers out of communcaton

More information

Low Switching Frequency Active Harmonic Elimination in Multilevel Converters with Unequal DC Voltages

Low Switching Frequency Active Harmonic Elimination in Multilevel Converters with Unequal DC Voltages Low Swtchng Frequency Actve Harmonc Elmnaton n Multlevel Converters wth Unequal DC Voltages Zhong Du,, Leon M. Tolbert, John N. Chasson, Hu L The Unversty of Tennessee Electrcal and Computer Engneerng

More information

The Performance Improvement of BASK System for Giga-Bit MODEM Using the Fuzzy System

The Performance Improvement of BASK System for Giga-Bit MODEM Using the Fuzzy System Int. J. Communcatons, Network and System Scences, 10, 3, 1-5 do:10.36/jcns.10.358 Publshed Onlne May 10 (http://www.scrp.org/journal/jcns/) The Performance Improvement of BASK System for Gga-Bt MODEM Usng

More information

Comparative Analysis of Reuse 1 and 3 in Cellular Network Based On SIR Distribution and Rate

Comparative Analysis of Reuse 1 and 3 in Cellular Network Based On SIR Distribution and Rate Comparatve Analyss of Reuse and 3 n ular Network Based On IR Dstrbuton and Rate Chandra Thapa M.Tech. II, DEC V College of Engneerng & Technology R.V.. Nagar, Chttoor-5727, A.P. Inda Emal: chandra2thapa@gmal.com

More information

Decomposition Principles and Online Learning in Cross-Layer Optimization for Delay-Sensitive Applications

Decomposition Principles and Online Learning in Cross-Layer Optimization for Delay-Sensitive Applications Techncal Report Decomposton Prncples and Onlne Learnng n Cross-Layer Optmzaton for Delay-Senstve Applcatons Abstract In ths report, we propose a general cross-layer optmzaton framework n whch we explctly

More information

TECHNICAL NOTE TERMINATION FOR POINT- TO-POINT SYSTEMS TN TERMINATON FOR POINT-TO-POINT SYSTEMS. Zo = L C. ω - angular frequency = 2πf

TECHNICAL NOTE TERMINATION FOR POINT- TO-POINT SYSTEMS TN TERMINATON FOR POINT-TO-POINT SYSTEMS. Zo = L C. ω - angular frequency = 2πf TECHNICAL NOTE TERMINATION FOR POINT- TO-POINT SYSTEMS INTRODUCTION Because dgtal sgnal rates n computng systems are ncreasng at an astonshng rate, sgnal ntegrty ssues have become far more mportant to

More information

Guidelines for CCPR and RMO Bilateral Key Comparisons CCPR Working Group on Key Comparison CCPR-G5 October 10 th, 2014

Guidelines for CCPR and RMO Bilateral Key Comparisons CCPR Working Group on Key Comparison CCPR-G5 October 10 th, 2014 Gudelnes for CCPR and RMO Blateral Key Comparsons CCPR Workng Group on Key Comparson CCPR-G5 October 10 th, 2014 These gudelnes are prepared by CCPR WG-KC and RMO P&R representatves, and approved by CCPR,

More information

Rational Secret Sharing without Broadcast

Rational Secret Sharing without Broadcast Ratonal Secret Sharng wthout Broadcast Amjed Shareef, Department of Computer Scence and Engneerng, Indan Insttute of Technology Madras, Chenna, Inda. Emal: amjedshareef@gmal.com Abstract We use the concept

More information

Space Time Equalization-space time codes System Model for STCM

Space Time Equalization-space time codes System Model for STCM Space Tme Eualzaton-space tme codes System Model for STCM The system under consderaton conssts of ST encoder, fadng channel model wth AWGN, two transmt antennas, one receve antenna, Vterb eualzer wth deal

More information

Comparison of Two Measurement Devices I. Fundamental Ideas.

Comparison of Two Measurement Devices I. Fundamental Ideas. Comparson of Two Measurement Devces I. Fundamental Ideas. ASQ-RS Qualty Conference March 16, 005 Joseph G. Voelkel, COE, RIT Bruce Sskowsk Rechert, Inc. Topcs The Problem, Eample, Mathematcal Model One

More information

A Preliminary Study of Information Collection in a Mobile Sensor Network

A Preliminary Study of Information Collection in a Mobile Sensor Network A Prelmnary Study of Informaton ollecton n a Moble Sensor Network Yuemng Hu, Qng L ollege of Informaton South hna Agrcultural Unversty {ymhu@, lqng1004@stu.}scau.edu.cn Fangmng Lu, Gabrel Y. Keung, Bo

More information

Multiband Jamming Strategies with Minimum Rate Constraints

Multiband Jamming Strategies with Minimum Rate Constraints Multband Jammng Strateges wth Mnmum Rate Constrants Karm Banawan, Sennur Ulukus, Peng Wang, and Bran Henz Department of Electrcal and Computer Engneerng, Unversty of Maryland, College Park, MD 7 US Army

More information

Hierarchical Generalized Cantor Set Modulation

Hierarchical Generalized Cantor Set Modulation 8th Internatonal Symposum on Wreless Communcaton Systems, Aachen Herarchcal Generalzed Cantor Set Modulaton Smon Görtzen, Lars Schefler, Anke Schmenk Informaton Theory and Systematc Desgn of Communcaton

More information

Exploiting Dynamic Workload Variation in Low Energy Preemptive Task Scheduling

Exploiting Dynamic Workload Variation in Low Energy Preemptive Task Scheduling Explotng Dynamc Worload Varaton n Low Energy Preemptve Tas Schedulng Lap-Fa Leung, Ch-Yng Tsu Department of Electrcal and Electronc Engneerng Hong Kong Unversty of Scence and Technology Clear Water Bay,

More information

A Simple Satellite Exclusion Algorithm for Advanced RAIM

A Simple Satellite Exclusion Algorithm for Advanced RAIM A Smple Satellte Excluson Algorthm for Advanced RAIM Juan Blanch, Todd Walter, Per Enge Stanford Unversty ABSTRACT Advanced Recever Autonomous Integrty Montorng s a concept that extends RAIM to mult-constellaton

More information

1 GSW Multipath Channel Models

1 GSW Multipath Channel Models In the general case, the moble rado channel s pretty unpleasant: there are a lot of echoes dstortng the receved sgnal, and the mpulse response keeps changng. Fortunately, there are some smplfyng assumptons

More information

Arterial Travel Time Estimation Based On Vehicle Re-Identification Using Magnetic Sensors: Performance Analysis

Arterial Travel Time Estimation Based On Vehicle Re-Identification Using Magnetic Sensors: Performance Analysis Arteral Travel Tme Estmaton Based On Vehcle Re-Identfcaton Usng Magnetc Sensors: Performance Analyss Rene O. Sanchez, Chrstopher Flores, Roberto Horowtz, Ram Raagopal and Pravn Varaya Department of Mechancal

More information

Opportunistic Beamforming for Finite Horizon Multicast

Opportunistic Beamforming for Finite Horizon Multicast Opportunstc Beamformng for Fnte Horzon Multcast Gek Hong Sm, Joerg Wdmer, and Balaj Rengarajan allyson.sm@mdea.org, joerg.wdmer@mdea.org, and balaj.rengarajan@gmal.com Insttute IMDEA Networks, Madrd, Span

More information

Distributed Topology Control of Dynamic Networks

Distributed Topology Control of Dynamic Networks Dstrbuted Topology Control of Dynamc Networks Mchael M. Zavlanos, Alreza Tahbaz-Saleh, Al Jadbabae and George J. Pappas Abstract In ths paper, we present a dstrbuted control framework for controllng the

More information

ROBUST IDENTIFICATION AND PREDICTION USING WILCOXON NORM AND PARTICLE SWARM OPTIMIZATION

ROBUST IDENTIFICATION AND PREDICTION USING WILCOXON NORM AND PARTICLE SWARM OPTIMIZATION 7th European Sgnal Processng Conference (EUSIPCO 9 Glasgow, Scotland, August 4-8, 9 ROBUST IDENTIFICATION AND PREDICTION USING WILCOXON NORM AND PARTICLE SWARM OPTIMIZATION Babta Majh, G. Panda and B.

More information

Performance Analysis of the Weighted Window CFAR Algorithms

Performance Analysis of the Weighted Window CFAR Algorithms Performance Analyss of the Weghted Wndow CFAR Algorthms eng Xangwe Guan Jan He You Department of Electronc Engneerng, Naval Aeronautcal Engneerng Academy, Er a road 88, Yanta Cty 6400, Shandong Provnce,

More information

Fall 2018 #11 Games and Nimbers. A. Game. 0.5 seconds, 64 megabytes

Fall 2018 #11 Games and Nimbers. A. Game. 0.5 seconds, 64 megabytes 5-95 Fall 08 # Games and Nmbers A. Game 0.5 seconds, 64 megabytes There s a legend n the IT Cty college. A student that faled to answer all questons on the game theory exam s gven one more chance by hs

More information

Energy Efficiency Analysis of a Multichannel Wireless Access Protocol

Energy Efficiency Analysis of a Multichannel Wireless Access Protocol Energy Effcency Analyss of a Multchannel Wreless Access Protocol A. Chockalngam y, Wepng u, Mchele Zorz, and Laurence B. Mlsten Department of Electrcal and Computer Engneerng, Unversty of Calforna, San

More information

Resource Allocation Optimization for Device-to- Device Communication Underlaying Cellular Networks

Resource Allocation Optimization for Device-to- Device Communication Underlaying Cellular Networks Resource Allocaton Optmzaton for Devce-to- Devce Communcaton Underlayng Cellular Networks Bn Wang, L Chen, Xaohang Chen, Xn Zhang, and Dacheng Yang Wreless Theores and Technologes (WT&T) Bejng Unversty

More information

King s Research Portal

King s Research Portal Kng s Research Portal DOI: 10.1109/TWC.2015.2460254 Document Verson Peer revewed verson Lnk to publcaton record n Kng's Research Portal Ctaton for publshed verson (APA): Shrvanmoghaddam, M., L, Y., Dohler,

More information

Power System State Estimation Using Phasor Measurement Units

Power System State Estimation Using Phasor Measurement Units Unversty of Kentucky UKnowledge Theses and Dssertatons--Electrcal and Computer Engneerng Electrcal and Computer Engneerng 213 Power System State Estmaton Usng Phasor Measurement Unts Jaxong Chen Unversty

More information

TODAY S wireless networks are characterized as a static

TODAY S wireless networks are characterized as a static IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 10, NO. 2, FEBRUARY 2011 161 A Spectrum Decson Framework for Cogntve Rado Networks Won-Yeol Lee, Student Member, IEEE, and Ian F. Akyldz, Fellow, IEEE Abstract

More information

Fast Code Detection Using High Speed Time Delay Neural Networks

Fast Code Detection Using High Speed Time Delay Neural Networks Fast Code Detecton Usng Hgh Speed Tme Delay Neural Networks Hazem M. El-Bakry 1 and Nkos Mastoraks 1 Faculty of Computer Scence & Informaton Systems, Mansoura Unversty, Egypt helbakry0@yahoo.com Department

More information

Secure Transmission of Sensitive data using multiple channels

Secure Transmission of Sensitive data using multiple channels Secure Transmsson of Senstve data usng multple channels Ahmed A. Belal, Ph.D. Department of computer scence and automatc control Faculty of Engneerng Unversty of Alexandra Alexandra, Egypt. aabelal@hotmal.com

More information

Prevention of Sequential Message Loss in CAN Systems

Prevention of Sequential Message Loss in CAN Systems Preventon of Sequental Message Loss n CAN Systems Shengbng Jang Electrcal & Controls Integraton Lab GM R&D Center, MC: 480-106-390 30500 Mound Road, Warren, MI 48090 shengbng.jang@gm.com Ratnesh Kumar

More information

Research Article Indoor Localisation Based on GSM Signals: Multistorey Building Study

Research Article Indoor Localisation Based on GSM Signals: Multistorey Building Study Moble Informaton Systems Volume 26, Artcle ID 279576, 7 pages http://dx.do.org/.55/26/279576 Research Artcle Indoor Localsaton Based on GSM Sgnals: Multstorey Buldng Study RafaB Górak, Marcn Luckner, MchaB

More information

The Spectrum Sharing in Cognitive Radio Networks Based on Competitive Price Game

The Spectrum Sharing in Cognitive Radio Networks Based on Competitive Price Game 8 Y. B. LI, R. YAG, Y. LI, F. YE, THE SPECTRUM SHARIG I COGITIVE RADIO ETWORKS BASED O COMPETITIVE The Spectrum Sharng n Cogntve Rado etworks Based on Compettve Prce Game Y-bng LI, Ru YAG., Yun LI, Fang

More information

Optimal Placement of PMU and RTU by Hybrid Genetic Algorithm and Simulated Annealing for Multiarea Power System State Estimation

Optimal Placement of PMU and RTU by Hybrid Genetic Algorithm and Simulated Annealing for Multiarea Power System State Estimation T. Kerdchuen and W. Ongsakul / GMSARN Internatonal Journal (09) - Optmal Placement of and by Hybrd Genetc Algorthm and Smulated Annealng for Multarea Power System State Estmaton Thawatch Kerdchuen and

More information

Localization in mobile networks via virtual convex hulls

Localization in mobile networks via virtual convex hulls Localzaton n moble networs va vrtual convex hulls Sam Safav, Student Member, IEEE, and Usman A. Khan, Senor Member, IEEE arxv:.7v [cs.sy] Jan 7 Abstract In ths paper, we develop a dstrbuted algorthm to

More information

problems palette of David Rock and Mary K. Porter 6. A local musician comes to your school to give a performance

problems palette of David Rock and Mary K. Porter 6. A local musician comes to your school to give a performance palette of problems Davd Rock and Mary K. Porter 1. If n represents an nteger, whch of the followng expressons yelds the greatest value? n,, n, n, n n. A 60-watt lghtbulb s used for 95 hours before t burns

More information

Uplink User Selection Scheme for Multiuser MIMO Systems in a Multicell Environment

Uplink User Selection Scheme for Multiuser MIMO Systems in a Multicell Environment Uplnk User Selecton Scheme for Multuser MIMO Systems n a Multcell Envronment Byong Ok Lee School of Electrcal Engneerng and Computer Scence and INMC Seoul Natonal Unversty leebo@moble.snu.ac.kr Oh-Soon

More information

Full-duplex Relaying for D2D Communication in mmwave based 5G Networks

Full-duplex Relaying for D2D Communication in mmwave based 5G Networks Full-duplex Relayng for D2D Communcaton n mmwave based 5G Networks Boang Ma Hamed Shah-Mansour Member IEEE and Vncent W.S. Wong Fellow IEEE Abstract Devce-to-devce D2D communcaton whch can offload data

More information

Approximating User Distributions in WCDMA Networks Using 2-D Gaussian

Approximating User Distributions in WCDMA Networks Using 2-D Gaussian CCCT 05: INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATIONS, AND CONTROL TECHNOLOGIES 1 Approxmatng User Dstrbutons n CDMA Networks Usng 2-D Gaussan Son NGUYEN and Robert AKL Department of Computer

More information

Equity trend prediction with neural networks

Equity trend prediction with neural networks Res. Lett. Inf. Math. Sc., 2004, Vol. 6, pp 15-29 15 Avalable onlne at http://ms.massey.ac.nz/research/letters/ Equty trend predcton wth neural networks R.HALLIDAY Insttute of Informaton & Mathematcal

More information

Distributed Fault Detection of Wireless Sensor Networks

Distributed Fault Detection of Wireless Sensor Networks Dstrbuted Fault Detecton of Wreless Sensor Networs Jnran Chen, Shubha Kher, and Arun Soman Dependable Computng and Networng Lab Iowa State Unversty Ames, Iowa 50010 {jrchen, shubha, arun}@astate.edu ABSTRACT

More information

HUAWEI TECHNOLOGIES CO., LTD. Huawei Proprietary Page 1

HUAWEI TECHNOLOGIES CO., LTD. Huawei Proprietary Page 1 Project Ttle Date Submtted IEEE 802.16 Broadband Wreless Access Workng Group Double-Stage DL MU-MIMO Scheme 2008-05-05 Source(s) Yang Tang, Young Hoon Kwon, Yajun Kou, Shahab Sanaye,

More information

Throughput Maximization by Adaptive Threshold Adjustment for AMC Systems

Throughput Maximization by Adaptive Threshold Adjustment for AMC Systems APSIPA ASC 2011 X an Throughput Maxmzaton by Adaptve Threshold Adjustment for AMC Systems We-Shun Lao and Hsuan-Jung Su Graduate Insttute of Communcaton Engneerng Department of Electrcal Engneerng Natonal

More information

POLYTECHNIC UNIVERSITY Electrical Engineering Department. EE SOPHOMORE LABORATORY Experiment 1 Laboratory Energy Sources

POLYTECHNIC UNIVERSITY Electrical Engineering Department. EE SOPHOMORE LABORATORY Experiment 1 Laboratory Energy Sources POLYTECHNIC UNIERSITY Electrcal Engneerng Department EE SOPHOMORE LABORATORY Experment 1 Laboratory Energy Sources Modfed for Physcs 18, Brooklyn College I. Oerew of the Experment Ths experment has three

More information

A Current Differential Line Protection Using a Synchronous Reference Frame Approach

A Current Differential Line Protection Using a Synchronous Reference Frame Approach A Current Dfferental Lne rotecton Usng a Synchronous Reference Frame Approach L. Sousa Martns *, Carlos Fortunato *, and V.Fernão res * * Escola Sup. Tecnologa Setúbal / Inst. oltécnco Setúbal, Setúbal,

More information

Malicious User Detection in Spectrum Sensing for WRAN Using Different Outliers Detection Techniques

Malicious User Detection in Spectrum Sensing for WRAN Using Different Outliers Detection Techniques Malcous User Detecton n Spectrum Sensng for WRAN Usng Dfferent Outlers Detecton Technques Mansh B Dave #, Mtesh B Nakran #2 Assstant Professor, C. U. Shah College of Engg. & Tech., Wadhwan cty-363030,

More information