Recovering Lost Sensor Data through Compressed Sensing


Recovering Lost Sensor Data through Compressed Sensing. Zainul Charbiwala. Collaborators: Younghun Kim, Sadaf Zahedi, Supriyo Chakraborty, Ting He (IBM), Chatschik Bisdikian (IBM), Mani Srivastava.

The Big Picture
A lossy communication link drops some of the data in transit. How do we recover from this loss? We can retransmit the lost packets, or generate error-correction bits and proactively encode the data with some protection. Can we do something better?

The Big Picture - Using Compressed Sensing (CSEC)
Instead, generate compressed measurements before the lossy communication link and recover directly from the received compressed measurements. How does this work? We use knowledge of the signal model and the channel. Compressed sensing uses randomized sampling/projections, so random losses look like additional randomness! The rest of this talk focuses on describing how, and how well, this works.

Talk Outline
A quick intro to compressed sensing; CS erasure coding for recovering lost sensor data; evaluating CSEC's cost and performance; concluding remarks.

Why Compressed Sensing?
The conventional pipeline is: physical signal → sampling → compression → communication → application. With compressed sensing it becomes: physical signal → compressive sampling → communication → decoding → application. This shifts computation from the sensor to a capable server.

Transform Domain Analysis
We usually acquire signals in the time or spatial domain. By looking at the signal in another domain, it may be represented more compactly. For example, a sine wave can be expressed by three parameters: frequency, amplitude, and phase. Or, in this case, by the index of its FFT coefficient and that coefficient's complex value. A sine wave is sparse in the frequency domain.

Lossy Compression
This is known as transform-domain compression. The domain in which a signal can be most compactly represented depends on the signal, and the signal processing community has been coming up with such domains for many classes of signals. A necessary property for these transforms is invertibility; it would also be nice if there were efficient algorithms to convert signals between domains. But why is it called lossy compression?

Lossy Compression
When we transform the signal to the right domain, a few coefficients stand out, but most will be near zero. The top few coefficients describe the signal well enough.
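As a minimal sketch of this idea (assuming numpy is available; the signal and sparsity level here are illustrative, not from the talk), we can keep only the largest-magnitude FFT coefficients of a frequency-sparse signal and check that the reconstruction error stays small:

```python
import numpy as np

rng = np.random.default_rng(6)
n, s = 256, 5

# A signal that is sparse in the frequency domain: two sinusoids plus a
# little noise.
t = np.arange(n)
x = (np.sin(2 * np.pi * 3 * t / n)
     + 0.5 * np.sin(2 * np.pi * 17 * t / n)
     + 0.01 * rng.standard_normal(n))

# Transform-domain compression: keep only the s largest-magnitude FFT
# coefficients and zero the rest (this is the "lossy" step).
y = np.fft.fft(x)
keep = np.argsort(np.abs(y))[-s:]
y_compressed = np.zeros_like(y)
y_compressed[keep] = y[keep]
x_hat = np.fft.ifft(y_compressed).real

# The top few coefficients describe the signal well enough.
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
assert rel_err < 0.1
```

Here 5 of 256 coefficients suffice because each sinusoid occupies only a conjugate pair of FFT bins; everything else is noise floor.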

Lossy Compression
JPEG (100%): 407,462 bytes, ~2x gain. JPEG (10%): 7,544 bytes, ~100x gain. JPEG (1%): 2,942 bytes, ~260x gain.

Compressing a Sine Wave
Assume we're interested in acquiring a single sine wave x(t) in a noiseless environment. An infinite-duration sine wave can be expressed using three parameters: frequency f, amplitude a, and phase φ. Question: what's the best way to find the parameters?

Compressing a Sine Wave
Technically, to estimate three parameters one needs three good measurements. Questions: what are good measurements, and how do you estimate f, a, and φ from three measurements?

Compressed Sensing
Take three samples z1, z2, z3 of the sine wave at times t1, t2, t3. Any solution for f, a, and φ must meet the three constraints:

z_i = x(t_i) = a sin(2π f t_i + φ), i ∈ {1, 2, 3}

The candidate parameters span a 3D space, but the feasible solution space is much smaller. As the number of constraints grows (from more measurements), the feasible solution space shrinks, and exhaustive search over this space reveals the right answer.

Formulating the Problem
We could also represent f, a, and φ as a very long but mostly empty FFT coefficient vector: y = Ψx, where Ψ is the Fourier transform and x is the sine wave a·e^(j(2πft+φ)) (its amplitude is represented by color in the slide).

Sampling Matrix
We could also write the sampling process in matrix form: z = Φx, where z holds the three measurements, Φ is a k×n sampling matrix with three non-zero entries at some good locations (one per row), and x is the length-n signal.
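A small sketch of that matrix form (assuming numpy; the signal, sizes, and random seed are illustrative choices, not from the talk). Each row of Φ selects one sample position, so z = Φx just picks out k entries of x:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 8                       # signal length, number of measurements

# An illustrative signal on a length-n grid.
t = np.arange(n)
x = np.sin(2 * np.pi * 5 * t / n)

# Phi selects k rows of the identity at randomly chosen locations: each
# row has a single non-zero entry, so z = Phi @ x picks k samples of x.
idx = rng.choice(n, size=k, replace=False)
Phi = np.zeros((k, n))
Phi[np.arange(k), idx] = 1.0

z = Phi @ x
assert np.allclose(z, x[idx])      # measurements are just the chosen samples
```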

Exhaustive Search
Objective of the exhaustive search: find an estimate of the vector y that meets the constraints and is the most compact representation of x (also called the sparsest representation). Our search is now guided by the fact that y is a sparse vector. Rewriting the constraints: z = Φx and y = Ψx, so z = ΦΨ⁻¹y. The problem becomes:

ŷ = argmin_y ||y||_0 s.t. z = ΦΨ⁻¹y, where ||y||_0 = |{i : y_i ≠ 0}|

This optimization problem is NP-hard!

ℓ1 Minimization
Approximate the ℓ0 norm by the ℓ1 norm:

ŷ = argmin_y ||y||_1 s.t. z = ΦΨ⁻¹y, where ||y||_1 = Σ_i |y_i|

This problem can now be solved efficiently using linear programming techniques. The approximation was not new; the big leap in compressed sensing was a theorem showing that, under the right conditions, this approximation is exact!
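One way to see the linear-programming connection is to solve a tiny basis-pursuit instance directly (a sketch assuming numpy and scipy; the sizes, seed, and generic Gaussian matrix A standing in for ΦΨ⁻¹ are all illustrative). The standard trick splits y = u − v with u, v ≥ 0, so ||y||₁ becomes a linear objective:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k, s = 32, 16, 2                 # dimension, measurements, sparsity

# Sparse ground-truth coefficient vector y and a random measurement matrix A
# (playing the role of Phi * Psi^-1).
y_true = np.zeros(n)
y_true[rng.choice(n, s, replace=False)] = 1.0 + rng.random(s)
A = rng.standard_normal((k, n)) / np.sqrt(k)
z = A @ y_true

# Basis pursuit:  min ||y||_1  s.t.  A y = z.
# Split y = u - v with u, v >= 0, so ||y||_1 = sum(u + v) and the problem
# becomes a linear program over the stacked variable [u; v].
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=z, bounds=[(0, None)] * (2 * n))
y_hat = res.x[:n] - res.x[n:]

assert res.success
assert np.linalg.norm(y_hat - y_true) < 1e-4   # exact recovery (w.h.p.)
```

With far fewer measurements than dimensions, the ℓ1 solution still lands exactly on the sparse ground truth, which is the phenomenon the recovery theorem formalizes.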

The Restricted Isometry Property
Rewrite the constraint z = ΦΨ⁻¹y as z = Ay. For any positive integer constant s, find the smallest δ_s such that

(1 − δ_s)||y||₂² ≤ ||Ay||₂² ≤ (1 + δ_s)||y||₂²

holds for all s-sparse vectors y (a vector is said to be s-sparse if it has at most s non-zero entries). The closer δ_s(A) is to 0, the better the matrix combination A is at capturing unique features of the signal.
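Computing δ_s exactly is itself NP-hard, but a Monte Carlo sketch gives an empirical lower bound on it (assuming numpy; the matrix sizes, sparsity, trial count, and seed are illustrative, not the talk's experiment):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, s, trials = 512, 256, 4, 500

# Gaussian measurement matrix with N(0, 1/k) entries, so E||Ay||^2 = ||y||^2.
A = rng.standard_normal((k, n)) / np.sqrt(k)

# Sampling random s-sparse vectors and recording the worst distortion of
# ||Ay||^2 / ||y||^2 gives an empirical lower bound on delta_s.
worst = 0.0
for _ in range(trials):
    y = np.zeros(n)
    y[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    ratio = np.linalg.norm(A @ y) ** 2 / np.linalg.norm(y) ** 2
    worst = max(worst, abs(ratio - 1.0))

# For these sizes the empirical constant stays small: the random matrix is
# nearly an isometry on sparse vectors.
assert worst < 0.5
```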

CS Recovery Theorem
Theorem [Candès-Romberg-Tao '05]: assume δ_s(A) < √2 − 1 for some matrix A. Then the solution ŷ of the ℓ1 minimization problem obeys

||ŷ − y||₁ ≤ C₀ ||y − y_s||₁ and ||ŷ − y||₂ ≤ C₀ s^(−1/2) ||y − y_s||₁

for some small positive constant C₀, where y_s is the approximation of a non-sparse vector y keeping only its s largest entries. If y is s-sparse, the reconstruction is exact.

Gaussian Random Projections
Φ is built from independent realizations of N(0, 1/n) Gaussians, and the measurements are z = Φ · Ψ⁻¹ · y, where Ψ⁻¹ is the inverse Fourier transform.

Bernoulli Random Projections
Φ is built from realizations of an equiprobable Bernoulli random variable taking values +1/√n and −1/√n, with z = Φ · Ψ⁻¹ · y as before.

Uniform Random Sampling
Select samples uniformly at random: Φ picks rows of the identity, and z = Φ · Ψ⁻¹ · y.

Per-Module Energy Consumption on Mica
[Bar chart comparing per-module energy (ADC, FFT, CPU, radio TX, random number generation) across sampling rates.] FFT computation costs more energy than transmission; the highest consumer in CS is the random number generator.

Compressive Sampling
Conventional sampling: physical signal x (length n) → sampling z = I_n · x (time-domain samples) → compression y = Ψz, with Ψ of size n×n (compressed-domain samples). Compressive sampling: physical signal x → z = Φx, with Φ a k×n matrix, k < n (randomized measurements) → decoding ŷ = argmin_y ||y||₁ s.t. z = ΦΨ⁻¹y (compressed-domain samples).

Handling Missing Data
Conventional pipeline: physical signal x → sampling z = I_n · x → compression y = Ψz → communication. When the communication channel is lossy, samples go missing. We can use retransmissions to recover the lost data, or use error (erasure) correcting codes.

Handling Missing Data
With channel coding, the pipeline becomes: compression y = Ψz, done at the application layer → channel coding w = Ωy, with Ω an m×n matrix, m > n → communication over the channel, which delivers w_l = Cw with some samples missing → decoding ŷ = (CΩ)⁺ · w_l, recovering the compressed-domain samples. Channel coding is done at the physical layer, so it can't exploit signal characteristics.

CS Erasure Coding
With compressive sampling, the pipeline is: physical signal x → compressive sampling z = Φx, with Φ of size k×n, k < n → communication delivering z_l = Cz → decoding ŷ = argmin_y ||y||₁ s.t. z_l = CΦΨ⁻¹y (compressed-domain samples). To tolerate losses, simply take more measurements: make Φ of size m×n with k < m < n. Over-sampling in CS is erasure coding!
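The oversampling-as-erasure-coding idea can be sketched end to end (assuming numpy and scipy; the sizes, seed, generic Gaussian sensing matrix, and the `basis_pursuit` helper are illustrative stand-ins, not the talk's implementation). We send m > k measurements, let the channel drop m − k of them at random, and decode from what survives:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, z):
    """min ||y||_1 s.t. A y = z, via the standard LP split y = u - v."""
    n = A.shape[1]
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=z,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(3)
n, k, m, s = 64, 16, 20, 2          # k needed, m > k sent (oversampling)

y_true = np.zeros(n)
y_true[rng.choice(n, s, replace=False)] = 1.0 + rng.random(s)
A_full = rng.standard_normal((m, n)) / np.sqrt(m)   # m compressive measurements
z_full = A_full @ y_true

# Memoryless erasure channel C: drop m - k measurements at random.  Erasing
# rows of a random matrix leaves another random matrix, so recovery from
# the surviving k rows still works -- oversampling acts as the erasure code.
kept = rng.choice(m, size=k, replace=False)
y_hat = basis_pursuit(A_full[kept], z_full[kept])

assert np.linalg.norm(y_hat - y_true) < 1e-4
```

No separate channel-coding block appears anywhere: the redundancy is entirely in the extra rows of A_full.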

Features of CS Erasure Coding
No additional channel coding block is needed: redundancy is achieved by oversampling. Recovery is resilient to incorrect channel estimates, whereas traditional channel coding fails if redundancy is inadequate. Decoding is free if CS was used for compression anyway. Intuition: channel coding spreads information out over measurements; compression (source coding) compacts information into few measurements; CSEC spreads information while compacting!

Effects of Missing Samples on CS
In z = Φx, missing samples at the receiver are the same as missing rows in the sampling matrix. What happens if we over-sample? Can we recover the lost data? How much over-sampling is needed?

Some CS Results
Theorem [Rudelson06]: if k samples of a length-n signal are acquired uniformly at random (each sample equiprobable) and reconstruction is performed in the Fourier basis, then signals of sparsity s with s ≤ C · k / log⁴(n) are recovered with high probability.

Extending CS Results
Claim [Charbiwala10]: when m > k samples are acquired uniformly at random and communicated through a memoryless binary erasure channel that drops m − k samples, the received k samples are still equiprobable. This implies that the bound on the sparsity condition should still hold. If the bound is tight, the required over-sampling rate (m − k) is the same as the loss rate.
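The equiprobability claim is easy to check empirically (a sketch assuming numpy; the sizes, trial count, and seed are illustrative). Each of the n sample positions should survive the acquire-then-erase process with the same probability k/n:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k, trials = 16, 8, 6, 20000

# Acquire m of n sample positions uniformly at random, then let a memoryless
# erasure channel keep k of the m transmitted samples.  Count how often each
# position survives: the survival frequency should be flat, equal to k/n.
counts = np.zeros(n)
for _ in range(trials):
    sent = rng.choice(n, size=m, replace=False)
    kept = rng.choice(sent, size=k, replace=False)
    counts[kept] += 1

freq = counts / trials
assert np.allclose(freq, k / n, atol=0.02)   # uniform survival probability
```

The flat histogram is exactly why the received samples still look like a uniform random sampling pattern, so the sparsity bound transfers unchanged.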

Evaluating the RIP
Procedure: create the CS sampling-plus-domain matrix A = Φ · Ψ⁻¹ (Ψ⁻¹ is the inverse Fourier transform), simulate the channel to get A = C · Φ · Ψ⁻¹, and compute the RIP constant of the received matrix. Results are over 10³ instances of size 256×1024.

RIP Verification in Memoryless Channels
Fourier random sampling. Baseline performance, no loss (shading: min-max). With 20% loss, the RIP constant increases. With 20% oversampling, the RIP constant recovers.

RIP Verification in Bursty Channels
Fourier random sampling. Baseline performance, no loss (shading: min-max). With 20% loss, the RIP constant increases and shows large variation. With 20% oversampling, the RIP constant reduces but doesn't recover. With oversampling plus interleaving, the RIP constant recovers.
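Why interleaving helps against bursts can be shown with a tiny deterministic sketch (assuming numpy; the block-interleaver layout and burst position are illustrative choices, not the talk's parameters). A burst that wipes out consecutive slots on the wire ends up erasing scattered measurement indices:

```python
import numpy as np

m, depth = 32, 4                     # measurements per block, interleaver depth

# Block interleaver: write the measurement indices row-wise into a
# depth x (m/depth) matrix and read them out column-wise.  Wire slot j
# then carries original measurement order[j].
order = np.arange(m).reshape(depth, m // depth).T.flatten()

# A bursty channel wipes out 8 consecutive slots on the wire...
burst = np.arange(10, 18)
lost_original = np.sort(order[burst])

# ...but the lost *original* measurement indices come out scattered, so
# the surviving rows of the sampling matrix stay well spread -- the loss
# again looks (pseudo-)random to the CS decoder.
assert list(lost_original) == [3, 4, 11, 12, 18, 19, 26, 27]
assert np.diff(lost_original).max() > 1   # no single contiguous run
```

With a deeper interleaver the scattering improves further, which matches the observation above that low interleaving depth leaves recovery incomplete.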

Signal Recovery Performance Evaluation
Pipeline: create signal → CS sampling → interleave samples → lossy channel → CS recovery → measure reconstruction error.

In Memoryless Channels
Baseline performance, no loss. With 20% loss, recovery probability drops. With 20% oversampling, recovery is complete. With less than 20% oversampling, recovery does not fail completely.

In Bursty Channels
Baseline performance, no loss. With 20% loss, recovery probability drops. With 20% oversampling, recovery is not complete, and even oversampling plus interleaving leaves recovery incomplete (worse than baseline in some regimes, better in others). Recovery is incomplete because of low interleaving depth. Recovery is better at high sparsity because bursty channels deliver bigger packets on average, but with higher variance.

In a Real 802.15.4 Channel
Baseline performance, no loss. With 20% loss, recovery probability drops. With 20% oversampling, recovery is complete. With less than 20% oversampling, recovery does not fail completely.

Cost of CSEC
[Bar chart: energy per block (mJ, 0-5) broken down into random number generation, ADC, FFT, radio TX, and Reed-Solomon coding.] Schemes compared: Sense-and-Send (m=256); Sense, Compress (FFT) and Send (m=10); CS-and-Send at 1/4 rate (m=64); Sense-and-Send with Reed-Solomon (k=320); Sense, Compress and Send with RS (k=16); CSEC (k=80).

Summary
Oversampling is a valid erasure coding strategy for compressive reconstruction. For binary erasure channels, an oversampling rate equal to the loss rate is sufficient (empirically). CS erasure coding can be rate-less, like fountain codes, which allows adaptation to varying channel conditions, and it can be computationally more efficient than traditional erasure codes.

Closing Remarks
CSEC spreads information out while compacting it. There is no free lunch: the data-rate requirement is higher than when using good source and channel coding independently, but then the computation cost of that approach is higher too. CSEC can be made rate-adaptive via over-sampling as well, and can use CS streaming with feedback. CSEC requires knowledge of the signal model; if the signal is non-stationary, the model needs to be updated during recovery. CSEC also requires knowledge of channel conditions.

Thank You