Recovering Lost Sensor Data through Compressed Sensing
Zainul Charbiwala
Collaborators: Younghun Kim, Sadaf Zahedi, Supriyo Chakraborty, Ting He (IBM), Chatschik Bisdikian (IBM), Mani Srivastava
The Big Picture
A lossy communication link drops some of the data. How do we recover from this loss?
- Retransmit the lost packets
- Proactively encode the data with some error correction bits
Can we do something better?
The Big Picture - Using Compressed Sensing (CSEC)
Generate compressed measurements, send them over the lossy communication link, and recover from the received compressed measurements. How does this work?
- Use knowledge of the signal model and the channel
- CS uses randomized sampling/projections
- Random losses look like additional randomness!
The rest of this talk describes how, and how well, this works.
Talk Outline
- A Quick Intro to Compressed Sensing
- CS Erasure Coding for Recovering Lost Sensor Data
- Evaluating CSEC's cost and performance
- Concluding Remarks
Why Compressed Sensing?
Conventional pipeline: Physical Signal → Sampling → Compression → Communication → Application
CS pipeline: Physical Signal → Compressive Sampling → Communication → Decoding → Application
CS shifts computation to a capable server.
Transform Domain Analysis
We usually acquire signals in the time or spatial domain. By looking at the signal in another domain, it may be represented more compactly.
E.g., a sine wave can be expressed by 3 parameters: frequency, amplitude and phase. Or, equivalently, by the index of its FFT coefficient and that coefficient's complex value. A sine wave is sparse in the frequency domain.
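The sparsity claim above is easy to check numerically. A minimal sketch (the sampling rate and tone frequency here are arbitrary choices, not from the talk): a pure sine whose frequency lands on an FFT bin produces exactly two non-zero coefficients.

```python
import numpy as np

n = 1024
fs = 1024.0                     # assumed sampling rate (Hz), for illustration
t = np.arange(n) / fs
f0 = 50.0                       # an exact FFT bin, so the spectrum is truly sparse
x = np.sin(2 * np.pi * f0 * t)

X = np.fft.fft(x) / n           # normalized FFT coefficients
mag = np.abs(X)

# Only two bins (at +f0 and its mirror n - f0) carry energy; the rest are ~0.
support = np.flatnonzero(mag > 1e-6)
print(support)                  # -> [ 50 974]
```

The time-domain vector has 1024 non-zero entries; the frequency-domain view needs just one complex value and its index (the mirror bin is redundant for a real signal).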
Lossy Compression
This is known as Transform Domain Compression. The domain in which the signal can be most compactly represented depends on the signal, and the signal processing world has been coming up with domains for many classes of signals.
A necessary property for transforms is invertibility. It would also be nice if there were efficient algorithms to transform signals between domains.
But why is it called lossy compression?
Lossy Compression
When we transform the signal to the right domain, some coefficients stand out but lots will be near zero. The top few coefficients describe the signal well enough.
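Keeping only the top few coefficients is the whole compression step. A sketch of the idea (the test signal and the choice of k=4 are illustrative assumptions): transform, zero out all but the k largest coefficients, invert, and measure how little was lost.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
# Two tones plus a little noise: compressible, not exactly sparse.
x = (np.sin(2 * np.pi * 3 * t / n)
     + 0.5 * np.sin(2 * np.pi * 7 * t / n)
     + 0.01 * rng.standard_normal(n))

c = np.fft.rfft(x)                    # frequency-domain view of x
k = 4
top = np.argsort(np.abs(c))[-k:]      # indices of the k largest coefficients
c_k = np.zeros_like(c)
c_k[top] = c[top]                     # discard everything else

x_hat = np.fft.irfft(c_k, n)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(rel_err)                        # small: a few coefficients suffice
```

What gets thrown away is exactly the near-zero tail, which here is mostly noise; that discarded energy is why the scheme is "lossy".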
Lossy Compression
JPEG (100%): 407,462 bytes, ~2x gain
JPEG (10%): 7,544 bytes, ~100x gain
JPEG (1%): 2,942 bytes, ~260x gain
Compressing a Sine Wave
Assume we're interested in acquiring a single sine wave x(t) in a noiseless environment. An infinite-duration sine wave can be expressed using three parameters: frequency f, amplitude a and phase φ.
Question: What's the best way to find the parameters?
Compressing a Sine Wave
Technically, to estimate three parameters one needs three good measurements.
Questions:
- What are good measurements?
- How do you estimate f, a, φ from three measurements?
Compressed Sensing
Take three samples z1, z2, z3 of the sine wave at times t1, t2, t3. Any solution for f, a and φ must meet the three constraints:
z_i = x(t_i) = a sin(2π f t_i + φ),  i ∈ {1, 2, 3}
The parameters span a 3D space, but the feasible solution space is much smaller. As the number of constraints grows (from more measurements), the feasible solution space shrinks. Exhaustive search over this space reveals the right answer.
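The exhaustive search above can be sketched directly: discretize (f, a, φ), and keep the grid point whose sine wave matches all three samples. The ground-truth values, sample times, and grid resolutions below are hypothetical choices for the demo.

```python
import numpy as np

# Ground truth (chosen so it lies on the search grid)
f_true, a_true, phi_true = 5.0, 2.0, np.pi / 4
t = np.array([0.0123, 0.0571, 0.0917])          # three sample times
z = a_true * np.sin(2 * np.pi * f_true * t + phi_true)

# Exhaustive search over a discrete (f, a, phi) grid
f_grid = np.arange(1.0, 11.0)                   # 1..10 Hz
a_grid = np.arange(0.5, 3.01, 0.5)
phi_grid = np.linspace(0, np.pi, 5)             # 0, pi/4, ..., pi

best, best_err = None, np.inf
for f in f_grid:
    for a in a_grid:
        for phi in phi_grid:
            err = np.sum((a * np.sin(2 * np.pi * f * t + phi) - z) ** 2)
            if err < best_err:
                best, best_err = (f, a, phi), err

print(best)        # recovers the true (f, a, phi)
```

Three constraints suffice here only because the feasible set is tiny; a finer grid (or more parameters) makes this brute force explode, which is what motivates the optimization view that follows.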
Formulating the Problem
We could also represent f, a and φ as a very long, but mostly empty, FFT coefficient vector:
y = Ψ x
where Ψ is the Fourier transform and x is the sine wave a·e^{j(2πft + φ)} (amplitude represented by color on the slide).
Sampling Matrix
We could also write out the sampling process in matrix form:
z = Φ x
where z holds the three measurements and Φ is a k × n matrix with three non-zero entries at some good locations.
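A minimal sketch of such a Φ (the signal length and sample locations are arbitrary picks for illustration): each row of Φ contains a single 1, so z = Φx just reads off the chosen time-domain samples.

```python
import numpy as np

n = 16                     # signal length
picks = [2, 7, 11]         # "good" sample locations (arbitrary here)

# Phi is k x n: each row selects one time-domain sample of x.
Phi = np.zeros((len(picks), n))
for row, col in enumerate(picks):
    Phi[row, col] = 1.0

x = np.sin(2 * np.pi * 3 * np.arange(n) / n)   # any length-n signal
z = Phi @ x                                    # the three measurements

assert np.allclose(z, x[picks])                # same as plain indexing
```

Writing sampling as a matrix seems like overkill for indexing, but it is what lets losses, projections, and channel effects all be composed as matrix products later in the talk.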
Exhaustive Search
Objective of exhaustive search: find an estimate of the vector y that meets the constraints and is the most compact representation of x (also called the sparsest representation). Our search is now guided by the fact that y is a sparse vector.
Rewriting constraints: z = Φx and y = Ψx give z = ΦΨ⁻¹y.
ŷ = argmin_y ||y||_0  s.t.  z = ΦΨ⁻¹y,   where ||y||_0 = |{i : y_i ≠ 0}|
This optimization problem is NP-Hard!
l1 Minimization
Approximate the l0 norm by the l1 norm:
ŷ = argmin_y ||y||_1  s.t.  z = ΦΨ⁻¹y,   where ||y||_1 = Σ_i |y_i|
This problem can now be solved efficiently using linear programming techniques. The approximation itself was not new; the big leap in Compressed Sensing was a theorem showing that under the right conditions, this approximation is exact!
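The linear-programming reduction is the standard one: split y = u − v with u, v ≥ 0, so minimizing Σ|y_i| becomes minimizing Σu + Σv under linear equality constraints. A sketch with an assumed random Gaussian measurement matrix (the dimensions and sparsity are illustrative, not the talk's):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, k, s = 64, 24, 3                            # signal length, measurements, sparsity

A = rng.standard_normal((k, n)) / np.sqrt(k)   # stands in for Phi * Psi^{-1}
y_true = np.zeros(n)
y_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
z = A @ y_true                                 # the compressed measurements

# l1 minimization as an LP: y = u - v with u, v >= 0,
# minimize sum(u) + sum(v) subject to A(u - v) = z.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=z,
              bounds=[(0, None)] * (2 * n), method='highs')
y_hat = res.x[:n] - res.x[n:]

print(np.linalg.norm(y_hat - y_true))          # ~0: exact recovery
```

With k = 24 measurements of a 3-sparse length-64 vector, the l1 solution coincides with the l0 solution, which is the "exact under the right conditions" claim in miniature.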
The Restricted Isometry Property
Rewrite the constraint z = ΦΨ⁻¹y as z = Ay. For any positive integer constant s, find the smallest δ_s such that
(1 − δ_s)||y||_2 ≤ ||Ay||_2 ≤ (1 + δ_s)||y||_2
holds for all s-sparse vectors y. (A vector is said to be s-sparse if it has at most s non-zero entries.)
The closer δ_s(A) is to 0, the better the matrix combination A is at capturing unique features of the signal.
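Computing δ_s exactly requires checking every s-sized support, which is combinatorial; a common shortcut (and an assumption of this sketch, not something from the talk) is a Monte Carlo estimate that samples random s-sparse vectors and tracks the worst norm distortion, giving a lower bound on δ_s.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, s, trials = 128, 64, 4, 2000

A = rng.standard_normal((k, n)) / np.sqrt(k)   # normalized Gaussian matrix

# Monte Carlo lower bound on delta_s: sample random s-sparse vectors and
# track the worst-case distortion of their squared norm under A.
delta = 0.0
for _ in range(trials):
    y = np.zeros(n)
    y[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    ratio = np.linalg.norm(A @ y) / np.linalg.norm(y)
    delta = max(delta, abs(ratio**2 - 1.0))

print(delta)   # well below 1 for this (k, s): A nearly preserves sparse norms
```

This is essentially the experiment the talk runs later ("Evaluating the RIP"), just on a toy scale.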
CS Recovery Theorem
Theorem [Candes-Romberg-Tao-05]: Assume that δ_2s(A) < √2 − 1 for some matrix A. Then the solution ŷ of the l1 minimization problem obeys
||ŷ − y||_1 ≤ C_0 ||y − y_s||_1   and   ||ŷ − y||_2 ≤ C_0 s^{−1/2} ||y − y_s||_1
for some small positive constant C_0, where y_s is the approximation of a non-sparse vector y keeping only its s largest entries.
If y is s-sparse, the reconstruction is exact.
Gaussian Random Projections
Gaussian: independent realizations of N(0, 1/n).
z = Φ · Ψ⁻¹ · y, where Ψ⁻¹ is the inverse Fourier transform.
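A sketch of building such a Φ and taking projections (the dimensions are illustrative): with i.i.d. N(0, 1/n) entries, each row of Φ has expected squared norm n · (1/n) = 1, so each measurement is an approximately unit-norm random projection of the time-domain signal x = Ψ⁻¹y.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 1024, 128

# i.i.d. N(0, 1/n) entries, as on the slide.
Phi = rng.normal(0.0, 1.0 / np.sqrt(n), size=(k, n))

x = rng.standard_normal(n)   # stands in for the time-domain signal Psi^{-1} y
z = Phi @ x                  # k random projections of x

# Each row has expected squared norm n * (1/n) = 1.
row_norms = np.linalg.norm(Phi, axis=1)
print(z.shape, row_norms.mean())   # (128,), mean close to 1
```

Bernoulli projections (next slide) work identically, just with ±1/√n entries in place of the Gaussian draws.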
Bernoulli Random Projections
Realizations of an equiprobable Bernoulli RV taking values +1/√n and −1/√n:
z = Φ · Ψ⁻¹ · y
Uniform Random Sampling
Select samples uniformly at random:
z = Φ · Ψ⁻¹ · y
Per-Module Energy Consumption on Mica
[Chart: energy per block for ADC, FFT, radio TX and random number generation across sampling rates and configurations]
FFT computation cost is higher than the transmission cost. The highest consumer in CS is the random number generator.
Compressive Sampling
Conventional: Physical Signal x (length n) → Sampling z = I_n x (time domain samples) → Compression y = Ψz (Ψ is n × n) → compressed domain samples.
Compressive: Physical Signal x (length n) → Compressive Sampling z = Φx (Φ is k × n, k < n, randomized measurements) → Decoding ŷ = argmin_y ||y||_1 s.t. z = ΦΨ⁻¹y → compressed domain samples.
Handling Missing Data
Physical Signal x → Sampling z = I_n x → Compression y = Ψz → Communication (missing samples).
When the communication channel is lossy:
- Use retransmissions to recover lost data
- Or, use error (erasure) correcting codes
Handling Missing Data
At the application layer: Sampling z = I_n x → Compression y = Ψz (compressed domain samples).
At the physical layer: Channel Coding w = Ωy (Ω is m × n, m > n) → Channel w_l = Cw (missing samples) → Decoding ŷ = (CΩ)⁺ w_l → recovered compressed domain samples.
The physical layer can't exploit signal characteristics.
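The decoder on this slide is just a pseudoinverse applied to the surviving coded samples. A sketch (the random Gaussian Ω is a stand-in for a real linear erasure code such as Reed-Solomon, and the dimensions are illustrative): as long as at least n of the m coded samples survive, (CΩ)⁺ recovers y exactly.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 8, 12                             # n data samples, m coded samples (m > n)

Omega = rng.standard_normal((m, n))      # stand-in linear channel code
y = rng.standard_normal(n)
w = Omega @ y                            # encoded block

# Channel keeps 9 of the 12 coded samples: C selects the surviving rows.
keep = np.sort(rng.choice(m, 9, replace=False))
C = np.eye(m)[keep]
w_l = C @ w

# Decode with the Moore-Penrose pseudoinverse, as on the slide.
y_hat = np.linalg.pinv(C @ Omega) @ w_l
print(np.linalg.norm(y_hat - y))         # ~0: 9 >= n equations survive
```

Note that nothing in this decoder knows y came from a compressible signal; that blindness is exactly the critique the CSEC approach addresses next.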
CS Erasure Coding
Without loss protection: Physical Signal x → Compressive Sampling z = Φx (Φ is k × n, k < n) → Communication z_l = Cz → Decoding ŷ = argmin_y ||y||_1 s.t. z_l = CΦΨ⁻¹y → compressed domain samples.
With loss protection: take m measurements instead, with k < m < n, and decode the same way.
Over-sampling in CS is Erasure Coding!
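The key observation can be shown in a few lines (dimensions and loss rate are illustrative): dropping measurements just deletes rows of Φ, so the receiver faces an ordinary, slightly smaller CS problem rather than a broken one.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 64, 32              # oversampled: m measurements instead of the k we need

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x = rng.standard_normal(n)
z = Phi @ x

# A memoryless erasure channel dropping 25% of the samples is just a
# row-selection matrix C; the receiver keeps z_l and the matching rows of Phi.
keep = np.sort(rng.choice(m, int(0.75 * m), replace=False))
z_l = z[keep]
Phi_l = Phi[keep]          # this is C @ Phi

assert np.allclose(z_l, Phi_l @ x)   # same form as lossless CS, fewer rows
```

If the surviving row count still satisfies the recovery condition, the same l1 decoder succeeds with no channel code in the loop; that is the whole CSEC argument.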
Features of CS Erasure Coding
- No need for an additional channel coding block: redundancy is achieved by oversampling
- Recovery is resilient to incorrect channel estimates; traditional channel coding fails if redundancy is inadequate
- Decoding is free if CS was used for compression anyway
Intuition:
- Channel coding spreads information out over measurements
- Compression (source coding) compacts information into few measurements
- CSEC spreads information while compacting!
Effects of Missing Samples on CS
z = Φx. Missing samples at the receiver are the same as missing rows in the sampling matrix.
What happens if we over-sample? Can we recover the lost data? How much over-sampling is needed?
Some CS Results
Theorem [Rudelson06]: If k samples of a length-n signal are acquired uniformly at random (each sample equiprobable) and reconstruction is performed in the Fourier basis, then recovery succeeds w.h.p. provided
s ≤ C · k / log⁴(n)
where s is the sparsity of the signal.
Extending CS Results
Claim [Charbiwala10]: When m > k samples are acquired uniformly at random and communicated through a memoryless binary erasure channel that drops m − k samples, the received k samples are still equiprobable. This implies that the bound on the sparsity condition should still hold. If the bound is tight, the over-sampling rate (m − k) is the same as the loss rate.
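The sizing rule implied by the claim is easy to sanity-check by simulation (the values of k and the loss rate p are illustrative): over a memoryless erasure channel with loss rate p, sending m = k/(1 − p) samples leaves k survivors on average.

```python
import numpy as np

rng = np.random.default_rng(6)
k, p = 256, 0.2                      # needed samples, channel loss rate
m = int(np.ceil(k / (1 - p)))        # oversample to offset the expected loss

# Memoryless BEC: each of the m samples independently survives with prob 1 - p.
survivors = (rng.random((10000, m)) > p).sum(axis=1)
print(survivors.mean())              # ~k on average
```

Averages are not guarantees, of course: individual blocks fluctuate around k, which is why the talk's recovery curves degrade gracefully rather than cliff when slightly under-provisioned.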
Evaluating the RIP
Create the CS sampling + domain matrix A = Φ · Ψ⁻¹ (Ψ⁻¹ is the inverse Fourier transform) → simulate the channel, giving A = C · Φ · Ψ⁻¹ → compute the RIP constant of the received matrix.
10³ instances, size 256 × 1024.
RIP Verification in Memoryless Channels (Fourier Random Sampling)
- Baseline performance, no loss (shading: min - max)
- 20% loss: increase in RIP constant
- 20% oversampling: RIP constant recovers
RIP Verification in Bursty Channels (Fourier Random Sampling)
- Baseline performance, no loss (shading: min - max)
- 20% loss: increase in RIP constant and large variation
- 20% oversampling: RIP constant reduces but doesn't recover
- Oversampling + interleaving: RIP constant recovers
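Interleaving helps here because it turns a burst of consecutive losses into scattered losses, restoring the "uniformly random" pattern the theory assumes. A minimal block-interleaver sketch (block length and depth are illustrative):

```python
import numpy as np

m, depth = 24, 4                     # block length, interleaving depth
idx = np.arange(m)

# Block interleaver: write row-wise into a depth x (m/depth) array, read
# column-wise. A burst of `depth` consecutive transmitted samples then maps
# back to `depth` well-separated positions of the original stream.
interleaved = idx.reshape(depth, m // depth).T.ravel()

burst = interleaved[8:12]            # 4 consecutive transmitted samples lost
print(np.sort(burst))                # losses spread evenly across the block
```

The talk's later results note that a low interleaving depth leaves recovery incomplete: bursts longer than the depth still produce clustered losses in the original stream.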
Signal Recovery Performance Evaluation
Create Signal → CS Sampling → Interleave Samples → Lossy Channel → CS Recovery → Reconstruction Error?
In Memoryless Channels
- Baseline performance: no loss
- 20% loss: drop in recovery probability
- 20% oversampling: complete recovery
- Less than 20% oversampling: recovery does not fail completely
In Bursty Channels
- Baseline performance: no loss
- 20% loss: drop in recovery probability
- 20% oversampling: doesn't recover completely
- Oversampling + interleaving: still incomplete recovery, worse than baseline at low sparsity but better than baseline at high sparsity
Recovery is incomplete because of the low interleaving depth. Recovery is better at high sparsity because bursty channels deliver bigger packets on average, but with higher variance.
In a Real 802.15.4 Channel
- Baseline performance: no loss
- 20% loss: drop in recovery probability
- 20% oversampling: complete recovery
- Less than 20% oversampling: recovery does not fail completely
Cost of CSEC
[Chart: energy per block (mJ), broken down into random number generation, ADC, FFT, radio TX and Reed-Solomon coding]
- S-n-S (m=256): Sense and Send
- C-n-S (m=10): Sense, Compress (FFT) and Send
- CS (m=64): CS and Send (1/4th rate)
- S-n-S+RS (k=320): Sense and Send with Reed-Solomon
- C-n-S+RS (k=16): Sense, Compress and Send with RS
- CSEC (k=80): CSEC and Send
Summary
- Oversampling is a valid erasure coding strategy for compressive reconstruction
- For binary erasure channels, an oversampling rate equal to the loss rate is sufficient (empirically)
- CS erasure coding can be rate-less, like fountain codes, allowing adaptation to varying channel conditions
- It can be computationally more efficient than traditional erasure codes
Closing Remarks
- CSEC spreads information out while compacting
- No free lunch: the data rate requirement is higher than when using good source and channel coding independently, but then the computation cost of doing so is higher too
- CSEC requires knowledge of channel conditions; this can be handled with over-sampling too, or by using CS streaming with feedback
- CSEC requires knowledge of the signal model; if the signal is non-stationary, the model needs to be updated during recovery
Thank You