Enhancing Throughput in Wireless Multi-Hop Networks with Multiple Packet Reception

Jia-liang Lu, Pauline Vandenhove, Wei Shu, Min-You Wu
Dept. of Computer Science & Engineering, Shanghai JiaoTong University, Shanghai, China
Université de Lyon, INRIA, INSA-Lyon, CITI, F-69621, Villeurbanne, France
Dept. of Electrical & Computer Engineering, University of New Mexico, USA
{jialiang.lu, mwu}@sjtu.edu.cn, pauline.vandenhove@insa-lyon.fr, shu@ece.unm.edu

Abstract: Multi-Packet Reception (MPR) enables simultaneous receptions from different transmitters to a single receiver, which has been demonstrated to bring capacity improvement in wireless networks. However, MPR does not improve the transmission capability of intermediate relay nodes in multi-hop routing, and thus these nodes may become bottlenecks for increasing throughput. We investigate scheduling for multi-hop routing with MPR to improve network throughput under multiple data flows. We formulate the optimization problem under the K-MPR model and analyze the performance upper bound with ideal scheduling. We propose a distributed scheduling scheme based on a k-Connected k-Dominating Set (k-CDS) backbone to eliminate bottleneck effects.

I. INTRODUCTION

As pointed out in [1], wireless transmissions must respect a signal-to-noise/interference ratio in order to succeed. The capacity of wireless networks is mainly restrained by concurrent packet transmissions under the collision model. Recently, investigations on the increase of reception capability through multi-user techniques such as SIC [2] and PPS [3] have been conducted under the notion of MPR. MPR shifts the responsibility in a wireless communication from transmitters to receivers. A node's MPR capability [4] in a network can be illustrated by a receiver matrix R, Eq. (1), where R_{n,k} is defined as Pr[k packets received | n packets transmitted]. The fundamental change of this model compared to the collision model is that reception can be described by conditional probabilities instead of deterministic failure when simultaneous transmissions occur.

R = [ R_{1,0}  R_{1,1}
      R_{2,0}  R_{2,1}  R_{2,2}
      ...
      R_{n,0}  ...  R_{n,k}  ...  R_{n,n} ]   (1)

Mergen and Tong [5] have shown that the upper bound of one-hop throughput in the MPR model increases over the conventional collision model with the probability of successful reception in MPR. It is easy to see that, for one receiver, the number of receptions can simply be multiplied by K using K-MPR, as long as the interference condition is satisfied. Recent works [6], [7], [8] have shown that MPR provides a significant capacity improvement for wireless networks, despite using different conditions and models.

MPR can definitively improve network capacity, but things are different for network throughput. As discussed in [9], network throughput is influenced by the protocols used to date, which are not adapted to fully exploit MPR. Communication protocols such as MAC protocols have been designed to avoid multiple-access interference by preventing multi-access. Similarly, routing protocols for wireless networks do not allow concurrent transmissions in path selection. Therefore, the MPR capability at the physical layer calls for the design of new link scheduling schemes matching this capability. [9] proposed to maximize the number of node-disjoint multi-paths with joint routing and scheduling. But with node-disjoint paths, at any time slot each receiver receives only one packet for relaying; the intermediate nodes on routing paths cannot be effective MPR receivers. Fig. 1 gives an example with two data flows (A to D via C, and B to E via C). Ideally, C could benefit from its 2-MPR to receive simultaneously from A and B at slot 1 and use the next two slots to transmit the received packets to D and E. But with node-disjoint paths, it requires 4 slots to transport the two flows. This example also shows that the intermediate nodes in a wireless network might become throughput bottlenecks. This work is motivated by resolving such bottlenecks with MPR so as to enhance network throughput.

Fig. 1. Node-disjoint multi-path is not optimal for MPR.

The rest of the paper is organized as follows.
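The conditional-probability view of Eq. (1) is easy to simulate. As a toy illustration (this particular decoding model is our assumption, not the paper's), suppose a K-MPR receiver decodes each of n simultaneous packets independently with probability p when n <= K, and is overloaded and decodes nothing when n > K; each row of R is then a binomial distribution:

```python
import math

def reception_matrix(n_max, K, p):
    """Toy K-MPR receiver matrix: R[n-1][k] = Pr[k of n packets decoded].

    Hypothetical model for illustration: with n <= K concurrent packets,
    each is decoded independently with probability p; with n > K the
    receiver is overloaded and decodes nothing.
    """
    R = []
    for n in range(1, n_max + 1):
        if n <= K:
            row = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
        else:
            row = [1.0] + [0.0] * n   # collision: zero packets decoded
        R.append(row)
    return R

R = reception_matrix(n_max=4, K=2, p=0.9)
for row in R:                 # each row is a distribution over k = 0..n
    assert abs(sum(row) - 1.0) < 1e-9
```

Under the collision model every row with n > 1 would collapse to R_{n,0} = 1; the MPR matrix instead spreads probability mass over k > 0 for up to K concurrent packets.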
Section II presents our throughput-maximizing problem formulation. We investigate its performance upper bound with ideal scheduling in Section III. We introduce a heuristic scheduling scheme based on the k-CDS in Section IV. Section V shows numerical results in randomly distributed multi-hop wireless networks; we also locate the best value of K for achieving maximal throughput with the heuristic scheme. Section VI concludes this work and discusses future work.

978-1-61284-233-2/11/$26.00 ©2011 IEEE
II. FORMULATION OF THE THROUGHPUT MAXIMIZING PROBLEM

A. Assumptions

We assume that wireless nodes are endowed with a single semi-duplex radio interface, and hence cannot transmit and receive a packet at the same time. Each node is synchronized on a time-division slot system, and transmissions always take place at slotted time boundaries. We define K as the MPR capability of a receiver node. We consider M simultaneous data flows in the network. For each flow m, the set of receiver nodes on the routing paths is ρ_m = {1, 2, ..., p}, and τ = {1, 2, ..., n} is the set of transmitters ready to transmit. Let S_v ⊆ τ be a schedulable set of nodes actually transmitting simultaneously to a receiver v.

B. Channel Capacity and Maximal Transmission Rate

For each point-to-point transmission, let P_{iv} be the power received by receiver v from transmitter i, and P_0 the common transmitted power. The received power with path loss exponent γ is defined as:

P_{iv} = P_0 d_{iv}^{-γ}   (2)

We consider a multi-user access channel for each wireless communication. The channel capacity function of a single-receiver AWGN channel with bandwidth W and channel noise power η can be defined as:

φ(SINR) = W log_2(1 + SINR)   (3)

The signal-to-interference-plus-noise ratio SINR takes into account the channel noise and the received signal power of transmissions other than the current reception. For a multi-user access channel with K-MPR, K concurrent transmissions are allowed. The channel capacity for a K-MPR receiver is:

φ_v = φ( Σ_{i∈S_v} P_{iv} / η )   (4)

For a general number of transceivers, the sum of transmission rates is within the channel capacity given in Eq. (4). Therefore, we have the following inequality:

Σ_{i∈S_v} r_{iv} ≤ φ( Σ_{i∈S_v} P_{iv} / η )   (5)

where r_{iv} denotes the transmission rate from i to v. For M data flows, we denote the source and destination of the m-th flow s_m and d_m. The flow rate on a directed link (u, v) is denoted f^m_{uv}. It is worth noting that this flow rate is an average rate, and the transmission rate r^m_{uv} could be much higher for an intermediate receiver.
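A minimal numerical sketch of Eqs. (2)-(5), using the parameter values given later in Section V (the transmitter distances and the equal rate split below are illustrative assumptions, not values from the paper):

```python
import math

W = 1e6         # channel bandwidth (Hz), Section V
P0 = 1.0        # common transmit power (W)
GAMMA = 3.0     # path loss exponent
ETA = 1.17e-6   # channel noise power (W), approx. the value derived in Section V

def received_power(d):
    """Eq. (2): path-loss attenuated power at distance d (meters)."""
    return P0 * d ** (-GAMMA)

def capacity(sinr):
    """Eq. (3): AWGN channel capacity in bit/s."""
    return W * math.log2(1 + sinr)

# Eq. (4): sum capacity of a K-MPR receiver hearing three transmitters
# at these (hypothetical) distances
distances = [30.0, 40.0, 44.0]
P = [received_power(d) for d in distances]
phi_v = capacity(sum(P) / ETA)

# Eq. (5): any per-transmitter rate allocation must stay within phi_v;
# an equal split trivially satisfies the inequality
rates = [phi_v / len(P)] * len(P)
assert sum(rates) <= phi_v + 1e-6
```

The sum capacity depends only on the total received power, which is why the transmitter constraint of Section II-C can ask the schedulable set to operate exactly at this sum-rate.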
For instance, if node v decodes k packets from different transmitters including u, and takes t_r slots to relay the received packets, then the relation between the flow rate f^m_{uv} and the transmission rate r^m_{uv} can be expressed as:

f^m_{uv} = (1 / (t_r + 1)) r^m_{uv}   (6)

C. Problem Formulation

Given the above definitions and notations, we formulate the Throughput Maximizing Problem (TMP) for multi-flow multi-hop communications as follows:

DEFINITION 1: Maximize the sum of the flow rates reaching all destinations:

Maximize Σ_{m=1}^{M} r^m_{d_m}   (7)

subject to the following three constraints:

1. Flow conservation constraint:

Σ_i r^m_{ij} = Σ_i r^m_{ji}, ∀j ≠ s_m, d_m;  Σ_j r^m_{s_m j} = r^m_{d_m}   (8)

2. Receiver constraint:

∀v ∈ ρ_m, |S_v| ≤ K   (9)

3. Transmitter constraint:

Σ_{i∈S_v} r_{iv} = φ( Σ_{i∈S_v} P_{iv} / η )   (10)

We have adopted the flow conservation constraint Eq. (8) from [9]. However, the other two constraints are different.

Receiver constraint: a receiver v cannot decode more than K packets at the same time, and hence the number of transmitters in S_v must be limited to K in any slot. This constraint covers the receiver's pair-wise interference.

Transmitter constraint: each transmitter should operate at the sum-rate to fully exploit the bandwidth of the receiver's multi-user channel, as given in Eq. (4). The bandwidth of such a channel differs from that in [9], which is a combination of point-to-point link bandwidths.

III. THROUGHPUT UPPER BOUND WITH IDEAL SCHEDULING

By solving the TMP as an optimization problem, we can obtain a performance upper bound. Similar problems have been shown to be NP-hard [9], [10]. The size of our optimization problem increases exponentially with the number of routing paths. Let us focus instead on the computation of its upper bound with ideal time-space scheduling. First, the wireless network should meet the following necessary condition in our problem:

CONDITION 1: The node degree in the network should be at least K+1 to fully exploit the K-MPR capability.

Proof: Each receiver should have at least K neighbors to fully use its K-MPR capability in a reception slot. During the first time slot, all transmitters in S_v send their flows to receiver v.
During the next K slots (from slot 2 to K+1), v is busy sending all the packets it received during the first time slot to the next-hop nodes on the routing path; it is therefore unavailable for K slots. In order not to waste these time slots, all the transmitters send their data to the second available receiver in their neighborhood, which will in turn be busy forwarding the flows for the following K slots (from slot 3 to slot K+2, respectively). If there are only K receivers in the neighborhood, then all of them will be busy relaying the flows they received, and time slot K+1 will be wasted in this one-hop area. Therefore, a node should have at least K+1 neighbors to fully use the transmission time slots.

Secondly, the ideal scheduling should meet:

CONDITION 2: The local throughput in each time slot and for each node should be maximized by the ideal scheduling to achieve the maximal flow throughput at the destinations.

Proof: If there is a slot t in which Σ_{m=1}^{M} r^m_v is not maximal, then there are two possible cases. In the first, another transmission (j, v) can be added to this time slot, since receiver v is handling fewer than K transmissions. In the second, a transmission (j, v) can substitute for an existing transmission (l, v) to achieve a higher throughput. Let Lt_v(j) = Σ_{m=1}^{M} r^m_{jv}; then we have Lt_v(j) > Lt_v(l). By applying the flow conservation constraint Eq. (8) at all nodes, we obtain the same relation at the next hop, say w: Lt_w(j) > Lt_w(l). With this recurrent relation, the flow rate (equal to the transmission rate) at the destination increases when transmission (j, v) occurs.

With the above conditions, the time-space scheduling that achieves the maximal network throughput is the one that maximizes the local throughput at each receiver. We can use a mixed linear programming solver such as [11] to generate numerical results for the upper bound. The comparison to the heuristic scheme is presented in Section V.

IV. HEURISTIC SCHEME WITH DISTRIBUTED SCHEDULING ON k-CDS

A. Using k-CDS for scheduling with MPR

In this section, we present a heuristic approach with distributed scheduling based on a k-CDS backbone to approximate the upper bound. A k-CDS [12] in a network is a set of nodes which is k-dominating and k-connected: every node in the network is either in the k-CDS or has k neighbors in it, and the subgraph induced by this node set is k-vertex-connected.
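The two defining properties of a k-CDS can be verified directly on a graph. A minimal sketch using the networkx library (the example graph and backbone are illustrative; this global check is not the paper's distributed construction algorithm of Section IV-B):

```python
import networkx as nx

def is_k_cds(G, backbone, k):
    """Check the two defining k-CDS properties:
    (1) k-dominating: every node is in the backbone or has >= k
        neighbors inside it;
    (2) k-connected: the backbone's induced subgraph is
        k-vertex-connected."""
    backbone = set(backbone)
    if not all(v in backbone or len(backbone & set(G[v])) >= k for v in G):
        return False
    return nx.node_connectivity(G.subgraph(backbone)) >= k

# Toy example: in a complete graph on 6 nodes, any 4 nodes form a 2-CDS
G = nx.complete_graph(6)
print(is_k_cds(G, {0, 1, 2, 3}, k=2))  # True
```

In the paper's scheme the backbone is built distributively via the coverage rule; a centralized check like this can only serve as a correctness oracle in simulation.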
The properties of k-domination and k-connectivity are a perfect match for intermediate relay nodes exploiting MPR capability, because each of them is required to collaborate with at least K+1 neighbors for both receptions and transmissions. If a receiver is a k-dominated node, then the set consisting of its k dominating nodes is the schedulable set S. If a receiver is a dominating node of the k-CDS, then the k-connectivity property guarantees that it is connected to at least k dominating nodes; these nodes can be selected to form the schedulable set S for each reception slot. If a transmitter is a k-dominated node, it can be scheduled with k dominating nodes to transmit. If a transmitter is a dominating node, it can be scheduled with k dominating neighbors to transmit.

Based on the k-CDS backbone, only dominating nodes are selected as intermediate relay nodes for multi-hop routing; a dominated node does not participate in routing unless it is the source or the destination of a flow. This simple rule reduces the complexity of designing the time-space scheduling in the network.

B. k-CDS construction algorithm

Many algorithms aim to generate a minimal k-CDS, but the transmissions would then be too concentrated on this set of nodes. On the other hand, a high cardinality means little reduction from the original network topology, which is not efficient for reducing the complexity of the scheduling based on the (K+1)-CDS. This trade-off on the cardinality of the (K+1)-CDS can be estimated as follows. Let us assume that the average routing path length is pl. The dominated nodes only participate in first-hop communications as source nodes or in last-hop communications as destination nodes, while the dominating nodes can take part in every hop of a routing path. Assuming the scheduling is fair for all nodes, the share of flows that the dominated nodes carry is approximately:

T(k-CDS) = (K+2) / ((K+1)(pl-1))   (11)

To meet the above constraints, we develop a construction algorithm based on the coverage rule [13].
Each node verifies whether every pair of its neighbors is k-connected via node-disjoint paths; a higher-ID rule is added to avoid mutual decision blocking. This verification is known as the k-coverage condition. To realize the algorithm in a distributed and localized manner, nodes exchange their routing tables with their neighbors. The k-coverage condition is then checked against the routing table by counting the number of node-disjoint paths between any pair of neighbors.

C. T-R Scheduling for Multi-path Routing

The (K+1)-CDS construction algorithm yields a backbone for multi-path routing. We present here a transmitter-receiver scheduling to fully exploit the K-MPR capability of dominating and dominated nodes, which allows the use of multiple paths for each flow to eliminate bottlenecks at the intermediate nodes.

A potential transmitter i constructs a receiver set ξ_i. The receivers in each set are ordered by their distance, in number of hops, to the final destination d_m. If d_m belongs to N(i), the neighbor set of i, then ξ_i contains only d_m.

With the Link Scheduling Algorithm, detailed in Algorithm 1, a receiver aims to let its transmitters operate at the sum-rate based on the (K+1)-CDS. It schedules transmitter nodes by priority pr_i, breaking ties arbitrarily. Every node's priority is set to the minimum before any transmission. A transmitter node i is chosen and checks its possible receiver set ξ_i. If the transmitter finds a receiver v that can accept more flows, it is added to the receiver's schedulable set S_v. If the transmitter cannot find any available receiver, its priority pr_i is increased; hence, in the next time slot, it has a higher priority than the other transmitters and is added to a schedulable set sooner. For each transmitter allowed to transmit, the algorithm selects the corresponding temporary data rate, according to the sum-rate constraint.

Algorithm 1: Link scheduling
  while some transmitter i in τ has b_m > 0 do
    choose the transmitter i with maximum priority pr_i
    for v in ξ_i do
      if |S_v| < K then
        S_v = S_v ∪ {i}; t_i = T/K
        break
      else
        try the next v in ξ_i
    if no receiver in ξ_i accepted i then
      pr_i = pr_i + 1
  end while
  for each receiver v in ρ do
    for each i in S_v do
      r_i = φ_v( P_i / (η + Σ_{j∈S_v, j<i} P_j) )
      b_m(t+1) = b_m(t) - t_i · r_i
      if b_m(t+1) > 0 then pr_i = pr_i + 1 else pr_i = 0

For a schedulable set S_v = {u_1, u_2, ..., u_K}, the corresponding data rates are:

r_1 = φ_v(P_1 / η); ... ; r_K = φ_v( P_K / (η + Σ_{j=1}^{K-1} P_j) )   (12)

The sum of all the data rates is equal to φ_v( Σ_{i∈S_v} P_i / η ). These data rates satisfy the sum-rate constraint, whatever the number of transmitters in the schedulable set S_v.

The Link Scheduling algorithm allots each transmitter an amount of time t_i = T/K, where T is the time slot duration. This ensures that all transmitters get the same time slot fraction to send their data. Since the first temporary data rate is much higher than the others, the channel utilization needs to be re-spread over the selected transmitters in order to achieve fairness and avoid generating bottlenecks at the low-rate transmitters. As a result, the overall throughput can be improved. The final data rates also satisfy the sum-rate constraint.

Let b_m(t) be a transmitter's initial amount of data to send during time slot t for flow m. The amount of effectively transmitted data is t_i · r_i, and hence the remaining amount of data to transmit for flow m can be represented as b_m(t+1) = b_m(t) - t_i · r_i. The transmitter's priority pr_i is increased if b_m(t+1) is not equal to 0.

V. PERFORMANCE EVALUATION

A. Parameters and Topology Configuration

We set the channel bandwidth W = 1 MHz, transmission power P_0 = 1 W and path loss exponent γ = 3. In a 300 × 300 square, 50 transmitters are randomly generated. According to the low-noise SNR condition (SNR_ref = 10 dB), SNR_ref = P_0 d_ref^{-γ} / η, with the maximal distance between two nodes d_ref = 44 m, we obtain that η is equal to 1.16 × 10^{-6} W. The numerical results on the upper bound of the TMP are obtained through lp_solve [11], a mixed linear programming solver.
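As a numerical sanity check (a sketch, not the paper's simulator), the noise power η derived above and the successive data rates of Eq. (12) can be computed directly; note that the per-transmitter rates always telescope to exactly the K-MPR sum capacity of Eq. (4), regardless of the transmitter powers:

```python
import math

W = 1e6          # bandwidth (Hz), Section V
P0 = 1.0         # transmit power (W)
GAMMA = 3.0      # path loss exponent
SNR_REF = 10.0   # 10 dB reference SNR, in linear scale
D_REF = 44.0     # maximal link distance (m)

# Noise power as derived above: eta = P0 * d_ref^-gamma / SNR_ref
ETA = P0 * D_REF ** (-GAMMA) / SNR_REF   # ~1.17e-6 W, Section V's value up to rounding

def phi(sinr):
    """Eq. (3): W log2(1 + SINR)."""
    return W * math.log2(1 + sinr)

def successive_rates(powers, eta=ETA):
    """Eq. (12): r_i = phi(P_i / (eta + sum_{j<i} P_j)), i.e. each
    transmitter is decoded against noise plus still-undecoded signals."""
    rates, interference = [], 0.0
    for p in powers:
        rates.append(phi(p / (eta + interference)))
        interference += p
    return rates

# Received powers for three hypothetical transmitter distances
P = [P0 * d ** (-GAMMA) for d in (25.0, 33.0, 44.0)]
rates = successive_rates(P)
# The rates telescope: their sum is exactly the sum capacity of Eq. (4)
assert math.isclose(sum(rates), phi(sum(P) / ETA), rel_tol=1e-9)
```

The telescoping identity is what guarantees that the allocation of Eq. (12) always meets the sum-rate (transmitter) constraint with equality, which is why the algorithm only needs to re-spread time, not rate, to restore fairness.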
We simulate our heuristic based on the (K+1)-CDS and the heuristic based on node-disjoint paths [9] in the NetLogo 4.1 simulator [14]. We performed 100 simulations with a duration of 2000 time slots. Each flow injected into the network has a fixed source and destination, and generates one packet per time slot (saturation condition). The metrics used are as follows: the throughput is the number of flows arriving at their destination during a predefined number of time slots; the average delay is the difference between the moment a flow is sent and the moment it is received; and the average acceptance ratio is the ratio of accepted traffic to the total traffic demand.

B. Results

The overall throughput results are shown in Fig. 2 with a 3-D representation. The throughput upper bound describes the maximal amount of occupied reception time slots over all destinations, which is independent of the number of flows; it is shown, however, to increase with the MPR capability. Our heuristic based on the (K+1)-CDS outperforms the node-disjoint heuristic in almost all simulation settings. The node-disjoint heuristic reaches its limit very quickly as the number of flows increases, because node-disjoint paths are fewer than the routing paths on the k-CDS. Our heuristic has a higher throughput limit, although it decreases when the number of flows is large (15 flows). We can also note that there is a local maximum of throughput with respect to K-MPR: the throughput of 3-MPR is the highest. This is confirmed in Fig. 3. It is a very interesting observation that the throughput decreases when K grows, with both heuristics. One possible explanation is that the 4-MPR capability requires a much higher density to be fully exploited: the increase in node degree means more links interfere with each other, which can decrease the network throughput. For our heuristic, the throughput decrease with 4-MPR is also related to our link scheduling algorithm, particularly the way we spread the receiver's channel capacity among its transmitters.
Indeed, the increase in channel capacity with MPR capability is not very large, while the amount of data to send is much higher. The results on the heuristics' efficiency relative to the upper bound, shown in Fig. 4, indicate that 4-MPR has the smallest efficiency. This is because the throughput upper bound computed with our problem continues to increase, even if the increase is slower. Nevertheless, the efficiency of our heuristic is better than that of the node-disjoint scheme under the same configuration. Fig. 5 shows that the average delay of a flow increases with the number of flows. Despite that, using MPR can reduce the flow delay by around 20% compared to single-reception
model. The increase in delay also confirms the presence of bottlenecks, which also cause the degradation of the flow acceptance ratio, as indicated in Fig. 6. Again, MPR improves the acceptance ratio by using time-space scheduling to avoid bottleneck generation.

Fig. 2. The throughput of the upper bound and the heuristics.
Fig. 3. 3-MPR achieves the highest throughput.
Fig. 4. The heuristics' efficiency, relative to the upper bound.
Fig. 5. The delay increases with the number of flows.
Fig. 6. The flow acceptance ratio of K-MPR drops.

VI. CONCLUSION

In this paper, we formulated a maximal throughput problem for multi-hop wireless communications. We point out that the re-use of intermediate nodes in different paths can yield better performance than the node-disjoint approach. Based on prior works, we formulated an optimization problem subject to flow, receiver and transmitter constraints. The conditions for ideal scheduling are derived. The numerical results demonstrate that our heuristic scheme based on the (K+1)-CDS can better exploit MPR for multi-hop wireless networks and approximate the upper bound. For a given topology, we also note that there is an optimal value of K in K-MPR for throughput enhancement.

ACKNOWLEDGEMENT

This research was supported by NSF of China under grants No. 60773091 and No. 61073158, and by Shanghai Post-Doc grant No. 09R21413700.

REFERENCES

[1] P. Gupta and P. R. Kumar.
The capacity of wireless networks. IEEE Transactions on Information Theory, 46(2):388-404, March 2000.
[2] W. Li and T. A. Gulliver. Successive interference cancellation for DS-CDMA systems with transmit diversity. EURASIP Journal on Wireless Communications and Networking, 1:46-54, 2004.
[3] A. G. Orozco-Lugo, M. M. Lara, D. C. McLernon, and H. J. Muro-Lemus. Multiple packet reception in wireless ad hoc networks using polynomial phase-modulating sequences. IEEE Transactions on Signal Processing, 51:2093-2110, 2003.
[4] L. Tong, Q. Zhao, and G. Mergen. Multipacket reception in random access wireless networks: From signal processing to optimal medium access control. IEEE Communications Magazine, 39:108-112, November 2001.
[5] G. Mergen and L. Tong. Receiver controlled medium access in multihop ad hoc networks with multipacket reception. Proc. of Military Communications Conference (MILCOM), Vienna, USA, 2:1014-1018, October 2001.
[6] J. J. Garcia-Luna-Aceves, H. R. Sadjadpour, and Z. Wang. Challenges: towards truly scalable ad hoc networks. Proc. of the 13th Annual ACM Int'l Conference on Mobile Computing and Networking (MobiCom), Montreal, Canada, pages 207-214, September 2007.
[7] Z. Wang, H. Sadjadpour, and J. J. Garcia-Luna-Aceves. The capacity and energy efficiency of wireless ad hoc networks with multi-packet reception. Proc. of the 9th ACM Int'l Symposium on Mobile Ad Hoc Networking and Computing, Hong Kong, China, pages 178-188, 2008.
[8] M. Guo, X. Wang, and M. Wu. On the capacity of k-MPR wireless networks. IEEE Transactions on Wireless Communications, 8:3878-3886, July 2009.
[9] X. Wang and J. J. Garcia-Luna-Aceves. Embracing interference in ad hoc networks using joint routing and scheduling with multiple packet reception. Proc. of the 27th Int'l Conference on Computer Communications (INFOCOM), Phoenix, USA, pages 843-851, April 2008.
[10] M. Kodialam and T. Nandagopal. Characterizing achievable rates in multi-hop wireless networks: The joint routing and scheduling problem. Proc.
of the 9th Int'l Conference on Mobile Computing and Networking (MobiCom), San Diego, USA, pages 42-54, 2003.
[11] lp_solve 5.5. http://lpsolve.sourceforge.net.
[12] F. Dai and J. Wu. On constructing k-connected k-dominating set in wireless ad hoc and sensor networks. Journal of Parallel and Distributed Computing, 66(7):947-958, July 2006.
[13] J. Wu and F. Dai. A generic distributed broadcast scheme in ad hoc wireless networks. IEEE Transactions on Computers, 53(10):1343-1354, October 2004.
[14] NetLogo. http://ccl.northwestern.edu/netlogo/.