
6 Nonuniform multi level crossing for signal reconstruction

6.1 Introduction

In recent years, there has been considerable interest in level crossing algorithms for sampling continuous-time signals. Driven by a growing demand for intelligent, high-speed analog-to-digital converters (ADCs) with low-power processing, increasing efforts have been made to improve level crossing based sampling techniques. An asynchronous level crossing sampling scheme records a new sample whenever the source signal crosses a threshold level. Consequently, more samples are recorded during fast-changing intervals and fewer samples are recorded during relatively quiescent intervals, so the signal is sampled nonuniformly. If the quiescent intervals are long and numerous, the average number of recorded samples is relatively low; nevertheless, the recorded samples contain sufficient information to enable a fairly accurate reconstruction of the source signal. The time instants of the recorded samples can be represented with very high accuracy, essentially because highly accurate clocks are much easier to build than circuits that quantize amplitudes very accurately. Asynchronous level crossing sampling is also attractive because it can be implemented with a single-comparator circuit[38].

Several case studies of ADCs show that level crossing based asynchronous sampling can be more effective than synchronous ADCs. The 1-bit (bipolar) ADC is optimized by improving the dynamic range such that the quantization error effectively decreases[76]. The level crossing sampling scheme has been demonstrated for speech applications using CMOS technology and a voltage-mode approach for the analog parts of the converter; electrical simulations show that the figure of merit of asynchronous level crossing converters improves compared to uniform sampling ADCs[5, 4]. The level crossing sampling scheme has also been suggested in the literature for non-bandlimited signals[25], random processes[16], band-limited Gaussian random processes[45], reconstruction from nonuniform sampling[38], and for monitoring and control systems[9, 46, 51]. The level crossing sampling strategy is also known as event-based sampling[15, 44], Lebesgue sampling[11], the send-on-delta concept[46] or the deadband concept[51].

Conventional uniform sampling uses a fixed time step and variable amplitude, and there is a trade-off between the bandwidth and the dynamic range required to obtain a certain resolution. Sampling at the Nyquist rate requires the smallest bandwidth but a large number of quantization levels to achieve high resolution. Increasing the bandwidth reduces the need for a large number of quantization levels, thus reducing the quantization error power while increasing the number of samples. At the extreme, the signal can be sampled so as to capture its characteristics using the level crossing concept. Several classes of signals have interesting statistical properties that uniform sampling does not exploit. Signals such as electrocardiograms, speech, temperature and pressure sensor outputs, and seismic signals are nearly constant most of the time and vary significantly only during brief intervals. In level crossing, the characteristics of the waveform play a vital role in the approximation of the input signal. It has been shown in [5, 47] that the level crossing sampling approach can reduce the number of samples. A further advantage of level crossing sampling is that the sampling frequency and quantization levels are decided by the signal itself. However, the methods developed for the various cases use either constant threshold step size quantization levels (linear levels) or manually determined levels. The problem of primary interest is therefore to determine, from the characteristics of the input signal, a statistically motivated automatic distribution of the quantization levels.

A linear threshold level allocation scheme is simple but not efficient in terms of data bit usage for the following reason: linear threshold allocation results in a higher SNR in the high-amplitude region than in the low-amplitude region, and this increased SNR at high signal amplitudes does not increase the perceived audio quality because humans are most sensitive to the low-amplitude components[41]. To overcome these problems, we propose a non-linear quantization approach based on logarithmic and Incomplete Beta Function (IBF) weight functions which dynamically assign the quantization levels exploiting this auditory motivation.

The chapter is organized as follows. Section 6.2 describes the level crossing based sampling approach with the proposed nonuniform threshold allocation scheme. Section 6.3 discusses the incorporation of linear, logarithmic and IBF functions to formulate a rule for allocating nonuniform threshold levels in multi level crossing. Section 6.4 presents the experimental setup for testing the proposed approach, the results, and the performance analysis of the proposed method. Section 6.5 is devoted to conclusions.

6.2 Level crossing based irregular sampling model

Level crossing analysis is an approach to interpreting and characterizing time signals by relating frequency and amplitude information. A level crossing of a signal is defined as the crossing of a threshold level l by consecutive samples.

Definition: Let w(x) be a deterministic weight function and p(x) be the probability density function of a source signal. The level sampler density distribution $L_{f(\cdot)}$ with a deterministic level allocation weight function $f(\cdot)$ is the mapping

$$L_{f(\cdot)} : \mathbb{R} \to f(p(x), w(x), \mathbb{Z}), \qquad L_{f(\cdot)} = \left( p(x) * w(x) \right)_N$$

where $N$ is the total number of nonuniform levels, $\mathbb{R}$ and $\mathbb{Z}$ denote the sets of real and integer numbers respectively, and $*$ denotes convolution. Since the quantization levels are irregularly spaced across the amplitude range of the signal, the efficiency of bit usage increases. The spacing of the levels is decided by the importance of the amplitude segments, as discussed in section 6.3. A sample is recorded when the input signal crosses one of the nonuniformly spaced levels, and the timing precision of the recorded sample is determined by the local timer τ.

Definition: Let $L_{f(\cdot)} = \{l_1, l_2, \ldots, l_N\}$ be the set of nonuniformly spaced levels, with $2^b = N$ quantization levels at $b$-bit resolution. The level crossing of the threshold level $l_i$ by a signal $s(t)$ with period $T$ is given by

$$L_{f(\cdot)}(I_{n_i}) = l_i \quad \text{iff} \quad \left( s\!\left(\frac{i-1}{N}T\right) - l_i \right)\left( s\!\left(\frac{i}{N}T\right) - l_i \right) < 0 \tag{6.1}$$

where the $n$ sub-intervals are defined by $I_{n_i} = \left(\frac{i-1}{N}T, \frac{i}{N}T\right)$, $i = 1, 2, \ldots, n$.

The level crossing problem is depicted in figure 6.1, where samples are recorded whenever the input signal crosses the threshold levels.

Figure 6.1: Level crossing sampling. $t_1, t_2, \ldots, t_8$ denote the recorded samples due to levels $l_1, l_2, l_3, l_4$, which are nonuniformly spaced.

If a sample is recorded and transmitted every time a level crossing occurs, the encoding procedure is called asynchronous delta modulation[26].
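To make the crossing test in equation 6.1 concrete, here is a minimal Python sketch of level crossing detection on an already-discretized signal (the function and variable names are our own, not from this work). A real asynchronous converter would latch the crossing instant from the local timer τ; for illustration, the time of the later of the two straddling samples is used.

```python
import numpy as np

def level_crossing_sample(s, levels, fs):
    """Record (time, level) pairs where the discretized signal s crosses any of
    the given threshold levels, using the sign-change test of Eq. (6.1):
    level l is crossed between samples k-1 and k when (s[k-1]-l)*(s[k]-l) < 0."""
    events = []
    for k in range(1, len(s)):
        for l in levels:
            if (s[k - 1] - l) * (s[k] - l) < 0:
                # A local timer would provide the exact crossing instant; here we
                # simply take the time of sample k on a uniform clock of rate fs.
                events.append((k / fs, l))
    return events

# Example: a 50 Hz tone sampled at 8 kHz, with four nonuniformly spaced levels.
fs = 8000
t = np.arange(0, 0.04, 1 / fs)
s = 0.9 * np.sin(2 * np.pi * 50 * t)
levels = np.array([-0.6, -0.1, 0.1, 0.7])
crossings = level_crossing_sample(s, levels, fs)
print(len(crossings), "level crossings recorded")
```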

6.3 Weight functions for irregular sampling

Determining the positions of the threshold levels on the amplitude scale is very important, as it has a large impact on coding performance. Unfortunately, no theory is available for determining threshold level locations that exploit the statistics of a random variable under a particular distribution. Furthermore, uniform threshold levels do not encode the levels efficiently because they do not take advantage of the statistical properties of the signal. The basic idea behind the weight functions is to emphasize the amplitude regions where the (speech) signal is dominant and to attenuate the amplitude regions that are less important from an auditory point of view. As a result, signals with less activity in the higher amplitude regions than in the lower amplitude regions are assigned fewer levels in the higher amplitude regions. Hence, the basic methodology in level crossing based irregular sampling is to choose a weight function that emphasizes the important amplitude regions. The present study discusses the distribution of nonuniform threshold levels based on three weight functions, namely the linear, logarithmic and Incomplete Beta Function (IBF) weights. The IBF probability density function (PDF) can attain a variety of shapes, which allows the user to select the distribution that best exploits the auditory properties; this viewpoint suggests a family of distributions for weight functions. Hence, we also propose the linear and logarithmic weight functions in order to study and analyze the characteristics of the proposed approach.

6.3.1 Linear function

Although human auditory perception certainly does not follow a linear function, this group of mapping methods yields acceptable results for a wide range of applications; its strengths are simplicity and speed. The linear function is defined by

$$\mathrm{linear}(n) = n$$

The linear weight function vector is generated by concatenating symmetric linear function vectors. Computing the importance of amplitude regions on a linear scale is not merely a matter of mathematical convenience; there is a more compelling physical consideration related to the importance of amplitude regions, since the natural primary representation for characterizing most physical systems is linear. The weight of the amplitude regions increases linearly towards the center of the amplitude scale. Hence, the important amplitude regions are emphasized linearly by the linear weight function, as shown in figure 6.3(a).

6.3.2 Logarithmic function

A logarithm of a number $x$ in base $b$ is a number $n$ such that $x = b^n$, where $b$ must be neither 0 nor a root of 1. It is usually written as

$$\log_b(x) = n$$

When $x$ and $b$ are further restricted to positive real numbers, the logarithm is a unique real number. Our sense of hearing perceives equal ratios of frequencies as equal differences in pitch, and representing the importance of amplitude on a logarithmic scale can be helpful when the importance of the regions varies monotonically. The logarithmic rule assigns fewer levels to the corner amplitude regions and logarithmically more levels to the important amplitude regions; the center amplitude regions (near-zero amplitudes) are considered the important ones. The issue, however, is not whether to accept or reject the logarithmic rule, but to appreciate where it fits and where it does not.
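As an illustration of the two rules above, the following sketch builds symmetric linear and logarithmic weight vectors over a normalized amplitude axis in [-1, 1], with more weight near zero amplitude and less at the corners. The exact shape and scaling of the weight vectors are not specified beyond the description in the text, so the constants and function names here are illustrative assumptions.

```python
import numpy as np

def linear_weight(num_bins):
    """Symmetric triangular weight: rises linearly towards zero amplitude."""
    half = np.linspace(1.0 / num_bins, 1.0, num_bins // 2)   # linear(n) = n
    w = np.concatenate([half, half[::-1]])                    # mirror about the center
    return w / w.sum()                                        # normalize to a PDF

def log_weight(num_bins):
    """Symmetric logarithmic weight: emphasis grows logarithmically towards zero."""
    half = np.log1p(np.linspace(0.0, 1.0, num_bins // 2) * 99) / np.log(100)
    w = np.concatenate([half, half[::-1]])
    return w / w.sum()

amplitude_axis = np.linspace(-1.0, 1.0, 100)   # normalized amplitude range
w_lin = linear_weight(100)
w_log = log_weight(100)
```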

6.3.3 Incomplete Beta Function

The beta distribution is a continuous distribution defined over a range of real values; both of its end points are fixed at exact locations and it belongs to a flexible family of distributions. The lack of data for deciding the exact positions of the levels for a given signal creates problems concerning the quality of the level-sampled data, and in such cases an expert would have to assume the level positions on the amplitude scale. For this reason the flexible incomplete beta distribution, capable of attaining a variety of shapes, can be used in level crossing applications. Because of its extreme flexibility, the distribution appears ideally suited for computing the number of levels for a specific amplitude region of a speech signal. A generalization of the incomplete beta function is defined by[58]

$$B(z, \alpha, \beta) = \int_0^z u^{\alpha-1} (1-u)^{\beta-1}\, du \tag{6.2}$$

$$= z^{\alpha} \left[ \frac{1}{\alpha} + \frac{1-\beta}{\alpha+1}\, z + \cdots + \frac{(1-\beta)\cdots(n-\beta)}{n!\,(\alpha+n)}\, z^{n} + \cdots \right] \tag{6.3}$$

The incomplete beta function $I(z, \alpha, \beta)$ is defined by

$$I(z, \alpha, \beta) = \frac{B(z, \alpha, \beta)}{B(\alpha, \beta)} \tag{6.4}$$

$$= \frac{1}{B(\alpha, \beta)} \int_0^z u^{\alpha-1} (1-u)^{\beta-1}\, du \tag{6.5}$$

Equation 6.5 has the limiting values $I(0, \alpha, \beta) = 0$ and $I(1, \alpha, \beta) = 1$.
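Equations 6.2-6.5 can be evaluated directly with SciPy. The sketch below computes the regularized incomplete beta function and the corresponding beta density for a few (α, β) pairs, including the empirically chosen α = β = 0.2 used later in this chapter; the other parameter pairs are only illustrative choices meant to reproduce the family of shapes referred to in figure 6.2.

```python
import numpy as np
from scipy.special import betainc   # regularized incomplete beta function I(z, a, b)
from scipy.stats import beta        # beta density, used here as a weight shape

# Evaluate away from the exact endpoints, where the (0.2, 0.2) density diverges.
z = np.linspace(0.001, 0.999, 200)

# A few (alpha, beta) pairs, including the empirically chosen 0.2, 0.2;
# the others are illustrative values producing bell- and triangle-like shapes.
for a, b in [(0.2, 0.2), (1.0, 1.0), (2.0, 2.0), (2.0, 5.0)]:
    I = betainc(a, b, z)          # Eqs. (6.4)-(6.5): I(0) = 0, I(1) = 1
    w = beta.pdf(z, a, b)         # corresponding density over (0, 1)
    print(f"alpha={a}, beta={b}: I(0.5) = {betainc(a, b, 0.5):.3f}, "
          f"max density = {w.max():.2f}")
```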

The shape of the incomplete beta function obtained from equation 6.5 depends on the choice of its two parameters α and β. The parameters may be any real numbers greater than zero; depending on their values, the incomplete beta function generated has an inverted-U, triangular or general bell (unimodal) shape, as shown in figure 6.2. Estimating these parameters is a challenge, since together with the signal probability density function they control the number of levels assigned to a given amplitude range of the speech signal. In a speech signal the lower amplitude regions are more important than the extreme corner amplitude regions, because humans are more sensitive to the lower amplitude regions. With this auditory knowledge, one can obtain an approximation of a perceptually motivated IBF weight function. We have empirically chosen the values α = 0.2 and β = 0.2; with this choice, the near-zero amplitude regions are emphasized compared to the corner amplitude regions. The selection was made under a worst-case assumption so as to provide a general criterion for obtaining the best PDF out of the family of PDF curves. Keeping this auditory motivation, we use the IBF for estimating the levels.

Figure 6.2: IBF distribution for various α, β. The IBF distribution is characterized by the values of the parameters α and β.

6.3.4 Level estimation

In a deterministic environment, the accuracy of the signal reconstruction depends on several parameters, such as the positioning of the levels, the total number of levels, and the statistical properties of the signal. If the weight functions are applied directly for level estimation, the amplitude activity information of the given signal is not used, which results in a biased level estimation. Hence, the level distribution PDF is convolved with the signal PDF to correct for this bias and ensure that the level distribution is unbiased. Specifically, for a given signal we analyze its structural behavior by estimating its PDF: the signal histogram is approximated to obtain the signal PDF p(x). Now consider a signal with amplitude PDF p(x) and weight function w(x), and let N be the total number of levels. The locations of the N levels are estimated from the distribution

$$L_{f(\cdot)}(x) = p(x) * w(x) \tag{6.6}$$

$L_{f(\cdot)}(x)$ gives the probability distribution of the levels and guides the placement of the N levels over the amplitude range. As expected, the N levels are not uniformly spaced over the amplitude range. Each level can be represented with $\log_2(N)$ bits. Since the levels are nonuniformly spaced according to the importance of each amplitude segment, the quantization levels are used efficiently, ignoring the amplitude regions with little activity; only amplitude regions with higher activity and the important lower amplitude regions are allocated more levels through the weight function w(x) and the signal amplitude PDF p(x). The histogram of a sample speech signal is shown in figure 6.3(a), along with the PDF of the linear weight function (figure 6.3(b)), the PDF of the logarithmic weight function (figure 6.3(c)) and the IBF weight function for α = 0.2, β = 0.2 (figure 6.3(d)).

Figure 6.3: (a) Signal histogram of a clean speech signal. (b) PDF of the linear weight function. (c) PDF of the logarithmic weight function. (d) PDF of the Incomplete Beta Function for α = 0.2, β = 0.2.

The steps of the proposed approach are summarized as follows.

1. The input signal s[n] is normalized to lie within [-1, 1] and made zero mean.

2. Compute the signal histogram and approximate it to obtain the PDF of the speech signal.

3. For each weight function and for a varying number of bins (used to compute the weight function):

   (a) Find the distribution of the quantization levels.

   (b) Find the level crossings of the input signal, storing each level-crossed sample value and its position.
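The following is a minimal sketch of steps 1-3 under stated assumptions: the helper name estimate_levels is ours, the signal PDF is estimated from a 100-bin histogram as in the experiments of section 6.4, and the final placement of the N levels at equal increments of the cumulative distribution of equation 6.6 is one plausible reading of how the level distribution guides the level positions, not necessarily the exact procedure used in this work.

```python
import numpy as np
from scipy.stats import beta

def estimate_levels(s, weight, num_levels, num_bins=100):
    """Steps 1-3: normalize, estimate the signal PDF from a histogram,
    convolve it with the weight function (Eq. 6.6), and place num_levels
    thresholds nonuniformly via the resulting cumulative distribution."""
    s = s - np.mean(s)
    s = s / np.max(np.abs(s))                       # step 1: zero mean, within [-1, 1]

    hist, edges = np.histogram(s, bins=num_bins, range=(-1.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])        # step 2: histogram-based PDF p(x)

    lf = np.convolve(hist, weight, mode="same")     # Eq. (6.6): p(x) * w(x)
    lf = lf / lf.sum()

    cdf = np.cumsum(lf)                             # assumed rule: equal CDF increments
    targets = (np.arange(num_levels) + 0.5) / num_levels
    levels = np.interp(targets, cdf, centers)
    return np.sort(levels), s

# Example with an assumed IBF-style weight (beta density, alpha = beta = 0.2)
# applied to a synthetic stand-in for a speech frame.
rng = np.random.default_rng(0)
signal = rng.standard_normal(16000) * np.hanning(16000)
w = beta.pdf(np.linspace(0.01, 0.99, 100), 0.2, 0.2)
w = w / w.sum()
levels, s_norm = estimate_levels(signal, w, num_levels=32)
print(levels[:5])
```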

6.4 Experimental evaluation

In this section, the performance of the proposed approach is evaluated on speech signals. We have run simulations of level crossing based sampling on speech signals from the TIMIT database[23]. The TIMIT speech signals are sampled at a 16 kHz sampling rate with 16 bits per sample. Speech signals are chosen from the TEST/DR1 folder, which contains seven male and four female adult speakers. The PDF of each speech signal is estimated by computing the amplitude histogram of the signal with 100 bins. The total number of quantization levels used to sample a given signal is set to 16, 32, 64 and 128. The accuracy of the level distribution computed from equation 6.6 also depends on the number of bins used to compute the convolution of the signal PDF with the weight function; the levels are therefore estimated for 20, 40, 60, 80 and 100 bins for comparison and analysis. We evaluated the system with the proposed linear, logarithmic and IBF weight functions.

The performance of the proposed method is evaluated by computing the SNR and the compression ratio. The SNR is defined as

$$\mathrm{SNR} = 10 \log_{10} \left( \frac{\frac{1}{N}\sum_{i=1}^{N} s(i)^2}{\frac{1}{N}\sum_{i=1}^{N} \left( s(i) - \hat{s}(i) \right)^2} \right)$$

where $s(i)$ represents the original speech signal and $\hat{s}(i)$ denotes the reconstructed signal. The SNR can also be interpreted in terms of the speed-up factor by which the level crossing sampler achieves the same precision as the uniform sampling method. The ability to recover the uniform samples from the nonuniform data representation is also important; in our study we applied a direct interpolation scheme with polynomial curve fitting to approximate the original signal from the level-crossed samples. The compression ratio quantifies the reduction in data-representation size produced by the proposed method and is defined as the ratio between the uncompressed size (original signal size) and the compressed size (level-crossed sample size):

$$\text{compression ratio} = \frac{\text{number of samples in the original signal}}{\text{number of samples used in reconstruction of the signal}}$$

The simulation results are approximated analytically using a quadratic polynomial.
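As a sketch of how these two measures might be computed, the snippet below reconstructs a uniformly sampled signal from level-crossed samples and evaluates both metrics. For simplicity it uses plain linear interpolation in place of the polynomial curve fitting described above, and the function name is our own.

```python
import numpy as np

def reconstruct_and_score(s, crossing_times, crossing_values, fs):
    """Reconstruct a uniformly sampled signal from level-crossed samples by
    linear interpolation, then compute the SNR (in dB) and the compression
    ratio. crossing_times is assumed to be in increasing order, e.g. as
    produced by the level_crossing_sample sketch in section 6.2."""
    t = np.arange(len(s)) / fs
    s_hat = np.interp(t, crossing_times, crossing_values)      # direct interpolation

    noise = s - s_hat
    snr_db = 10.0 * np.log10(np.mean(s ** 2) / np.mean(noise ** 2))
    compression_ratio = len(s) / len(crossing_values)           # original / retained samples
    return s_hat, snr_db, compression_ratio
```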

Figure 6.4: Performance of the linear weight function. (a) Histogram bins versus SNR. (b) Quantization levels versus compression ratio. (c) Quantization levels versus SNR. (d) SNR versus compression ratio.

Figure 6.5: Experimental results for the logarithmic weight function. (a) Histogram bins versus SNR. (b) Quantization levels versus compression ratio. (c) Quantization levels versus SNR. (d) SNR versus compression ratio.

Figure 6.6: Performance of the IBF weight function. (a) Histogram bins versus SNR. (b) Quantization levels versus compression ratio. (c) Quantization levels versus SNR. (d) SNR versus compression ratio.

By comparing the IBF, logarithmic and linear rule results, we analyze their performance. We first investigated the relationship between SNR and the number of histogram bins used to compute the signal amplitude histogram; the simulation results are depicted in figures 6.4(a), 6.5(a) and 6.6(a). The SNR of the resampled signal generally improves as the number of histogram bins increases, for all numbers of levels. This shows that increasing the resolution of the amplitude scale helps to distribute the levels accurately, thereby increasing the SNR. The IBF rule consistently gives a higher SNR than the logarithmic and linear rules for all bin counts. The characteristic curve is non-monotonic for 16 levels in figure 6.4(a); this is due to the way the linear weight function distributes 16 quantization levels over the amplitude range. For 16 levels, the linear weight function clusters most of the quantization levels near the zero-amplitude region without distributing many levels to the other amplitude regions, so the level crossing based sampling process performs poorly with the linear weight function. As the number of histogram bins and quantization levels increases, more quantization levels are spread across the amplitude range; hence the SNR increases with the number of histogram bins and quantization levels. The performance of the logarithmic rule (figure 6.5(a)) is slightly below that of the IBF rule (figure 6.6(a)) for all numbers of levels. The best performance is observed for the IBF rule with 128 levels; for the linear rule, a 1 dB drop in SNR is observed compared to IBF at 128 levels. Similarly, a higher SNR is achieved for the IBF and logarithmic rules than for the linear rule at all levels (figures 6.4(a), 6.5(a) and 6.6(a)). This confirms that increasing the number of quantization levels increases the SNR.

However, increasing the number of quantization levels considerably decreases the compression ratio. The comparison of compression ratio at various levels for the three rules is shown in figures 6.4(b), 6.5(b) and 6.6(b). We observe that the logarithmic rule slightly outperforms the IBF rule: the logarithmic rule gives a higher SNR with fewer levels, and the compression ratio decreases as the number of levels increases. The linear rule results in a low compression ratio at all levels, and for larger numbers of levels all three rules give similar results. Since the linear rule forces the quantization levels to cluster in the near-zero amplitude segments, which are more prone to noise in speech signals, its compression ratio decreases considerably compared to the IBF and logarithmic rules; both the IBF and logarithmic rules spread the quantization levels around the near-zero amplitude regions and hence perform better. The results of SNR versus number of levels (figures 6.4(c), 6.5(c) and 6.6(c)) show that the IBF and logarithmic rules are superior to the linear rule at all levels, and that the IBF and logarithmic rules produce almost identical results. The minimum SNR for 16 levels is near 3 dB for the IBF and logarithmic rules, whereas the minimum SNR for the linear rule at 16 levels is 0.4 dB.

Figures 6.4(d), 6.5(d) and 6.6(d) show the plots of SNR versus compression ratio. The characteristic curve appears concave for the IBF rule, linear for the logarithmic rule and convex for the linear rule. Increasing the SNR decreases the compression ratio rapidly because of the increased number of level crossings. The linear rule spreads the quantization levels linearly away from the near-zero amplitude regions, with the maximum number of quantization levels assigned to the zero-amplitude region, which is prone to noise; hence the linear weight function results in more level crossings, a poor compression ratio and a poor SNR. A compression ratio greater than 2.5 is achieved by the IBF and logarithmic rules (figures 6.6(d) and 6.5(d)) at an SNR of 2.5 dB, whereas figure 6.4(d) shows that the linear rule achieves a compression ratio of less than 2.5 at a comparable SNR, resulting in poorer performance. As the SNR increases, the IBF and logarithmic rules achieve good performance compared to the linear rule owing to the spread of their quantization levels. Furthermore, for higher SNR values the compression ratio drops drastically for the IBF and logarithmic rules, whereas for the linear rule the drop in compression ratio with increasing SNR is much slower owing to its distribution of levels. Nevertheless, the IBF and logarithmic rules consistently outperform the linear rule for all SNR values, as shown in figures 6.4(d), 6.5(d) and 6.6(d), achieving a higher SNR at a higher compression ratio. Overall, the comparison shows that the IBF and logarithmic rules outperform the linear rule, and that the IBF rule slightly outperforms the logarithmic rule in both compression ratio and SNR.

Figure 6.7 compares the input signal (a speech signal from the TIMIT database) with the reconstructed signals. In this example, 32 quantization levels are distributed using the deterministic weight functions. The signals reconstructed with the logarithmic and IBF weight functions appear approximately similar, in contrast to the linear weight function. The linear weight function assigns more levels near the corner amplitudes (-1, +1) than the logarithmic and IBF weight functions; hence the plot for the linear rule has more samples near the corner amplitude regions than those for the IBF and logarithmic weight functions. However, humans are not very sensitive to the corner amplitude regions, and therefore the reconstructions by the logarithmic and IBF weight functions are better than that of the linear weight function. The error plots of the reconstructed signals show that the error is smaller for the logarithmic and IBF weight functions than for the linear weight function. The behavioral patterns of the IBF, logarithmic and linear rules appear similar except in the SNR versus compression ratio analysis.

Figure 6.7: Comparison of the input signal and the signals reconstructed with the IBF, logarithmic and linear weight functions, together with the corresponding reconstruction errors. In this example, 32 quantization levels are distributed using the deterministic weight functions. The error plots show that the error is smaller for the logarithmic and IBF weight functions than for the linear weight function.

The IBF rule is based on the auditory properties of human hearing and distributes more levels in the critical amplitude regions. Similarly, the logarithmic rule considers the near-zero amplitude regions more important than the corner amplitude regions, with the priority of the amplitude regions varying logarithmically from the corner regions to the near-zero regions. The linear rule treats each amplitude region as equally important. Hence, the SNR of the resampled signal for the IBF and logarithmic rules remains consistently superior to that of the linear rule: a lack of levels in the critical amplitude regions of the signal decreases the SNR of the resampled signal. The performance of the proposed approaches is fairly consistent with that of Sayiner[68]. This experimental analysis illustrates that signals with special statistical behavior, such as speech and medical signals, are not well suited to uniform sampling; such signals can be sampled more efficiently using a level crossing scheme.

6.5 Summary

This chapter presented new threshold level allocation schemes for level crossing based nonuniform sampling which dynamically assign the quantization levels according to the importance of each amplitude range of the input signal. The proposed methods take advantage of the statistical properties of the signal and allocate nonuniformly spaced quantization levels across the amplitude range. The proposed level allocation schemes may motivate directed attempts to augment traditional methods and improve their performance, and overall these results motivate continued work on level crossing based nonuniform sampling for improving sampling performance and signal analysis. A notable observation is the simplicity yet significantly good performance of the logarithmic weight function; in general the logarithmic rule is preferable in practice because its implementation complexity is much lower than that of the IBF weight function.