Opportunistic network communications

Suhas Diggavi
School of Computer and Communication Sciences
Laboratory for Information and Communication Systems (LICOS)
Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
Email: {chao.tian, suhas.diggavi}@epfl.ch
URL: http://licos.epfl.ch

August 31, 2006

Joint work with David Tse (Berkeley) and Chao Tian (EPFL)
Opportunism in communications

Uncertainty is inherent in network communications:
- Randomness in channel/transmission conditions
- Variations in QoS requirements of traffic
- Diverse receiver state information

A conservative strategy is to design for the worst case; an opportunistic strategy tries to utilize the randomness to enhance overall performance.

Question: How do we utilize uncertainties opportunistically?

We examine two illustrative applications:
- Wireless communication
- Multi-terminal source coding
Fading wireless channels

Observation: Wireless channels are inherently random.

Implications:
- Channels can sometimes be very weak (deep fade)
- The transmitter may not be able to track the channel

Channel model: Quasi-static fading, i.e., the channel is random but constant over a coding block; the receiver knows the channel, the transmitter does not.

Question: Can we opportunistically use channel randomness to our advantage?
Data versus voice

Two approaches to provide reliability:
- Delay-tolerant traffic (data): retransmissions correct errors, so the rate can be chosen aggressively based on the average channel.
- Real-time traffic (voice): must get it right the first time, so the design must succeed even in deep fades, i.e., high reliability.
Segregate or aggregate?

- Modern wireless networks carry both real-time and delay-tolerant traffic.
- A natural solution is to segregate them on orthogonal resources (different time slots, frequency slots, etc.).
- We show that there is a performance advantage to aggregating them on the same resource.
- The advantage comes from the diverse QoS requirements and the opportunistic use of random wireless channels.
Rate versus reliability (diversity)

Diversity order: if the average error probability is P_e(SNR), the diversity order d is

    d = - lim_{SNR -> inf} log P_e(SNR) / log SNR,   i.e.,   P_e(SNR) ≐ SNR^{-d}

(in a plot of log error probability versus SNR in dB, d is the magnitude of the high-SNR slope).

Multiplexing rate r: transmission rate R(SNR) = r log SNR.

Question: Is there a tension between rate and reliability?
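The definition above can be sanity-checked numerically. The sketch below (function names and the SNR range are our own, not from the talk) evaluates the exact outage probability of a scalar Rayleigh fading channel at multiplexing rate r and estimates the slope of log P_out versus log SNR, which should approach d = 1 - r.

```python
import math

def rayleigh_outage(snr: float, r: float) -> float:
    """Exact outage probability P( log2(1 + |h|^2 SNR) < r log2 SNR )
    for a scalar Rayleigh fading channel, where |h|^2 ~ Exp(1)."""
    threshold = (snr ** r - 1.0) / snr   # outage iff |h|^2 falls below this
    return 1.0 - math.exp(-threshold)

def diversity_estimate(r: float, snr_lo: float = 1e4, snr_hi: float = 1e8) -> float:
    """Negative slope of log P_out versus log SNR between two SNR points."""
    p_lo, p_hi = rayleigh_outage(snr_lo, r), rayleigh_outage(snr_hi, r)
    return -(math.log(p_hi) - math.log(p_lo)) / (math.log(snr_hi) - math.log(snr_lo))

for r in (0.25, 0.5, 0.75):
    print(f"r = {r:4.2f}: estimated d = {diversity_estimate(r):.3f}, theory d = {1 - r}")
```

At high SNR the estimate matches the single-antenna Rayleigh trade-off d(r) = 1 - r: a higher multiplexing rate leaves a thinner margin against deep fades.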
Rate-diversity trade-off (Zheng and Tse, 2003)

The diversity gain d(r) is a piecewise-linear function of the multiplexing rate r, connecting the corner points

    (0, M_t M_r), (1, (M_t - 1)(M_r - 1)), ..., (k, (M_t - k)(M_r - k)), ..., (min(M_t, M_r), 0)

Interpretation: at multiplexing rate k, it is as if M_r - k receive antennas and M_t - k transmit antennas remain to provide diversity.

This establishes the rate-reliability trade-off for fading channels for a single traffic type.
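The trade-off curve is easy to evaluate. A minimal sketch (the function name `dmt` is ours) linearly interpolates between the corner points (k, (M_t - k)(M_r - k)):

```python
def dmt(r: float, mt: int, mr: int) -> float:
    """Zheng-Tse diversity-multiplexing trade-off: piecewise-linear curve
    through the points (k, (mt - k)(mr - k)), k = 0, ..., min(mt, mr)."""
    if not 0 <= r <= min(mt, mr):
        raise ValueError("multiplexing rate must lie in [0, min(mt, mr)]")
    k = int(r)                              # corner point to the left of r
    d_k = (mt - k) * (mr - k)
    if r == k:                              # exactly at a corner point
        return float(d_k)
    d_next = (mt - k - 1) * (mr - k - 1)
    return d_k + (r - k) * (d_next - d_k)   # interpolate within the segment

print(dmt(0, 2, 2), dmt(0.5, 2, 2), dmt(1, 2, 2), dmt(2, 2, 2))  # 4.0 2.5 1.0 0.0
```

For a 2x2 channel, full diversity 4 is available at r = 0 and full multiplexing r = 2 comes with zero diversity, with a straight-line trade-off between the corners.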
Data versus voice: Formulation

Goals: We want to guarantee
- a multiplexing rate r_V for voice with reliability (diversity) d
- a multiplexing rate r_D for data with positive diversity order

Questions:
- For a given d, what is the best achievable (r_V, r_D)?
- Is it achievable through segregation?
Opportunistic coding

Classical setting: a single rate-reliability (outage) point.
- Rate-outage pair: recover message m with outage probability p = P{O}; characterize the achievable (R, p) pairs.
- Consequence: we lose the opportunity to transmit at a higher rate when the channel is good.

Opportunistic coding: m_V is to be recovered if the channel H avoids outage (event Ō_V), and (m_V, m_D) are to be recovered if H lies in the good set G.
- Rate-outage tuple: message m_V has outage probability p_V = P{O_V} and m_D has outage probability p_D = P{Ḡ}.
- Goal: characterize the achievable (R_V, p_V, R_D, p_D) tuples.
M_t × 1 Rayleigh fading channel

[Figure: achievable (r_V, r_D) region; the segregated region is bounded by r_V ≤ 1 - d/M_t, and aggregation adds an extra region extending to r_V + r_D ≤ 1.]

- Superposition encoding for aggregated traffic with i.i.d. Gaussian codebooks
- The broadcast approach (Cover 1972, Shamai 1997) targeted average rate behavior; the focus here is the rate-reliability trade-off
Mixing traffic types

Lessons from social structure? Similar results hold for multiple degrees of freedom (parallel channels, MIMO channels).

Lessons:
- Aggregation is better than segregation.
- The gain comes from mixing traffic with different QoS requirements.
- Opportunistic use of random channel variation pushes through delay-tolerant traffic.
Opportunistic source coding

- Scalable delivery of a source to multiple decoders
- Opportunistic source coding robust to uncertain decoder side-information
Progressive encoding with decoder side-information

[Figure: the source {X(k)} enters a two-stage encoder; stage 1 produces index i_1 ∈ I_{M_1} and stage 2 produces index i_2 ∈ I_{M_2}. Decoder 1, with side information Y_1(k), forms X̂_1(k) = ψ_1(y_1, i_1); decoder 2, with side information Y_2(k), forms X̂_2(k) = ψ_2(y_2, i_1, i_2).]

- Side-information quality is modeled by degradedness of the side information.
- Side-information scalable coding: lower-delay delivery for the decoder with better side information; X → Y_1 → Y_2, i.e., the side information deteriorates with the stages.
Side-information scalable source coding

Problem set-up: the first stage has the better-quality side information, X → Y_1 → Y_2.

Goal: allow the decoder with better side information to decode with smaller delay.

Heegard-Berger rate-distortion function (1985): if X → Y_1 → Y_2, a lower bound on the total rate needed is

    R_HB(D_1, D_2) = min_{p ∈ P(D_1, D_2)} [ I(X; W_2 | Y_2) + I(X; W_1 | W_2, Y_1) ]

where (W_1, W_2) → X → Y_1 → Y_2.

Questions:
- Can we describe information that is maximally useful to both decoders?
- How much do we lose due to the progressive requirement?
Coding scheme: Basic idea (nested binning)

[Figure: the codewords for V are partitioned into coarse bins, each subdivided into finer bins; separate binnings for W_1 and W_2 provide the refinements for decoders 1 and 2. Stage 1 sends a coarse bin index; stage 2 enumerates the finer bin within it.]

- The first stage encodes V, binning it so that decoder 1 can recover it using Y_1; it also sends the refinement W_1 if needed.
- The second stage enumerates the finer bin so that decoder 2 can recover V using Y_2; it also sends the refinement W_2 if needed.
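The coarse/fine bin bookkeeping can be made concrete with a toy index calculation (the bin sizes and names below are illustrative, not from the talk): stage 1 sends the coarse bin of a codeword, stage 2 enumerates which finer bin it occupies inside that coarse bin, and together the two messages pin down the fine bin.

```python
# Toy nested binning: N codewords, coarse bins of size BC, fine bins of size BF,
# with BF dividing BC so every coarse bin is a disjoint union of fine bins.
N, BC, BF = 24, 8, 2

def coarse_bin(i: int) -> int:
    """Stage-1 message: index of the coarse bin containing codeword i."""
    return i // BC

def enumeration(i: int) -> int:
    """Stage-2 message: which fine bin codeword i occupies inside its coarse bin."""
    return (i % BC) // BF

def fine_bin(i: int) -> int:
    """Fine bin of codeword i (what decoder 2 effectively learns)."""
    return i // BF

# The two stage messages together determine the fine bin exactly:
for i in range(N):
    assert fine_bin(i) == coarse_bin(i) * (BC // BF) + enumeration(i)
```

Decoder 1, with strong side information, can disambiguate among the BC candidates of a coarse bin; decoder 2 only has to disambiguate the BF candidates of a fine bin, matching its weaker side information.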
Achievable rate region R_ach(D_1, D_2)

1. (W_1, W_2, V) → X → Y_1 → Y_2
2. There exist deterministic maps f_j : W_j × Y_j → X̂ such that E d(X, f_j(W_j, Y_j)) ≤ D_j, j = 1, 2
3. The non-negative rate pairs satisfy:

    R_1 ≥ I(X; V, W_1 | Y_1)
    R_1 + R_2 ≥ I(X; V, W_2 | Y_2) + I(X; W_1 | Y_1, V)

Interpretation:

    R_1 ≥ I(X; V, W_1 | Y_1) = I(X; V | Y_1) [coarse bin] + I(X; W_1 | Y_1, V) [decoder 1 refinement]
    R_2 ≥ I(V; Y_1 | Y_2) [enumerate to finer bin] + I(X; W_2 | Y_2, V) [decoder 2 refinement]
Gaussian example: A complete characterization

- Source: i.i.d. Gaussian, X ~ N(0, σ_x²)
- Distortion measure: quadratic error, E[|X − X̂|²] ≤ D
- Side information: N_1, N_2 independent Gaussians; SI-scalable: Y_1 = X + N_1, Y_2 = X + N_1 + N_2

Result: the Wyner-Ziv bound in the first stage and the Heegard-Berger sum-rate bound are simultaneously achievable.

Implication: this is the best one could have hoped for, and there is no loss in opportunistic transmission.
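The degraded structure Y_1 = X + N_1, Y_2 = X + N_1 + N_2 can be illustrated with the Gaussian Wyner-Ziv rate-distortion function R(D) = (1/2) log2(Var(X|Y)/D). The variances below are hypothetical, chosen only to show that the decoder with better side information needs less rate for the same distortion.

```python
import math

def cond_var(sx2: float, noise_var: float) -> float:
    """Var(X | X + N) for independent zero-mean Gaussians X and N."""
    return sx2 * noise_var / (sx2 + noise_var)

def wyner_ziv_rate(sx2: float, noise_var: float, d: float) -> float:
    """Gaussian Wyner-Ziv rate in bits: R(D) = 1/2 log2(Var(X|Y)/D),
    for distortions D <= Var(X|Y)."""
    return max(0.0, 0.5 * math.log2(cond_var(sx2, noise_var) / d))

sx2, n1, n2, d = 1.0, 0.1, 0.4, 0.05   # hypothetical variances and distortion
r1 = wyner_ziv_rate(sx2, n1, d)        # decoder 1 observes Y1 = X + N1
r2 = wyner_ziv_rate(sx2, n1 + n2, d)   # decoder 2 observes Y2 = X + N1 + N2
print(f"R_WZ with Y1: {r1:.3f} bits; with Y2: {r2:.3f} bits")
```

Since the second observation carries the extra noise N_2, its conditional variance is larger and the required rate is higher; the result quoted above says the encoder can serve the better-informed decoder at its Wyner-Ziv rate without penalizing the total rate.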
Mixture of receiver side-informations

- Formulated side-information scalable coding and characterized the optimal strategies
- Opportunistically serve better receivers while maximally utilizing information for the others
- Complete characterization for Gaussian sources

Open questions:
- Characterization for arbitrary sources: only inner and outer bounds found
- Characterize the loss due to the progressive description
Conclusions

There are many other aspects of network communication where opportunism has been used, e.g., opportunistic scheduling.

Main message: the randomness inherent in network communications can be utilized to develop opportunistic strategies.

Open questions:
- Complete characterization of rate-diversity tuples for opportunistic coding in wireless channels
- Impact of the availability of such links on end-to-end performance
- Complete characterization of rate-distortion tuples for opportunistic multi-terminal source coding