AN IMPLEMENTATION OF VIRTUAL ACOUSTIC SPACE FOR NEUROPHYSIOLOGICAL STUDIES OF DIRECTIONAL HEARING


CHAPTER 5

AN IMPLEMENTATION OF VIRTUAL ACOUSTIC SPACE FOR NEUROPHYSIOLOGICAL STUDIES OF DIRECTIONAL HEARING

Richard A. Reale, Jiashu Chen, Joseph E. Hind and John F. Brugge

Virtual Auditory Space: Generation and Applications, edited by Simon Carlile. R.G. Landes Company.

1. INTRODUCTION

Sound produced by a free-field source and recorded near the cat's eardrum has been transformed by a direction-dependent Free-field-to-Eardrum Transfer Function (FETF) or, in the parlance of human psychophysics, a Head-Related Transfer Function (HRTF). We prefer the term FETF since, for the cat at least, the function includes significant filtering by structures in addition to the head. This function preserves direction-dependent spectral features of the incident sound that, together with interaural time and interaural level differences, are believed to provide the important cues used by a listener in localizing the source of a sound in space. The set of FETFs representing acoustic space for one subject is referred to as a Virtual Acoustic Space (VAS). This term applies because these functions can be used to synthesize accurate replications of the signals near the eardrums for any sound-source direction contained in the set.1-6 The combination of VAS and earphone delivery of synthesized signals is proving to be a

powerful tool for the parametric study of the mechanisms of directional hearing. This approach enables the experimenter to control dichotically with earphones each of the important acoustic cues resulting from a free-field sound source, while obviating the physical problems associated with a moveable loudspeaker or an array of speakers. The number of FETFs required to represent most, if not all, of auditory space at high spatial resolution is prohibitively large when a measurement of the acoustic waveform is made for each sound direction. This limitation has been overcome by the development of a mathematical model that calculates FETFs from a linear combination of separate functions of frequency and direction.7,8 The model is a low-dimensional representation of a subject's VAS that provides for interpolation while maintaining a high degree of fidelity with empirically measured FETFs. The use of this realistic and quantitative model for VAS, coupled with an interactive high-speed computer graphical interface and a spectrally compensated earphone delivery system, provides for simulation of an unlimited repertoire of sound sources and their positions and movements in virtual acoustic space, including many that could not be presented easily in the usual free-field laboratory. In this paper we summarize the techniques we have devised for creating a VAS, and illustrate how this approach can be used to study physiological mechanisms of directional hearing at the level of auditory cortex in experimental animals. Further details can be found in the original papers on this subject.

2. FETF IN THE CAT

2.1. FETF ESTIMATION FROM FREE-FIELD RECORDINGS

Calculation of the FETF requires two recordings from a free-field source: one near the animal's eardrum for each source location, and the other in the free field without the animal present.
The techniques for making these measurements in the cat involved varying the direction of a loudspeaker in a spherical coordinate system covering 360° in azimuth and 126° in elevation. Typically, a left- and right-ear recording was made for each of 1800 or more positions at which the loudspeaker was located.9 Figure 5.1 shows schematically the elements of a recording system used to derive an FETF. Because the cat hears sounds with frequencies at least as high as 40 kHz, a rectangular digital pulse (10 µs in width) is used as a broadband input test signal, d(n). The outputs of the system recorded near the eardrum, and in the absence of the animal, are designated y(n) and u(n), respectively. An empirical estimate of an FETF is obtained simply by dividing the discrete Fourier transform (DFT) of y(n) by the DFT of u(n) for each sound-source direction.10 The validity of this empirical estimate depends upon recordings with an adequate signal-to-noise ratio (SNR) in the relevant frequency band and a denominator without zeros in its spectrum. In our free-field data, both recordings had low SNR at high (> 30 kHz) and low (< 1.5 kHz) frequencies because the output of the loudspeaker diminished in these frequency ranges. Previously we alleviated this problem by subjectively post-processing recordings in the frequency domain to restrict signal bandwidth and circumvent the attendant problem of complex division.5 In practice, only a minority of the thousands of free-field measurements for a given subject suffer from low SNR, and data from these problematic sample directions could always be appropriately filtered through individualized post hoc processing. We now employ, however, a more objective technique that accurately estimates FETFs without introducing artifacts into frequency regions where SNR is low. This technique employs finite-impulse-response (FIR) filters.11 Under ideal conditions, the impulse response of the FETF, h(n), is given by the deconvolution problem

y(n) = d(n) * h(n).   (Eq. 5.1)

In our current technique, h(n) is modeled as an FIR filter with coefficients determined objectively using a least-squares error criterion. The FIR filter is computed entirely in the time domain based on the principle of linear prediction. Thus, it redresses problems encountered previously with our subjective use of empirical estimation. Figure 5.2 compares directly the results obtained with the empirical and FIR filter estimation techniques. To simulate conditions of varying SNR for this comparison, we added random noise of different amplitudes to a fixed pair of free-field recordings. For each SNR tested, the difference between the known FETF and that obtained by empirical DFT estimation or the FIR technique was expressed as percent relative error. At high SNR (59 dB) both techniques yield excellent estimates of the FETF.
At low SNR (24 dB), however, the FIR method is clearly the superior of the two. Having derived a reliable set of FETFs, the question arises as to the relative merits of different modes of data display in visualizing the spatial distribution of these functions. Perhaps the simplest scheme is to show successive FETFs on the same plot. A typical sequence of four successive FETFs for 9° steps in elevation with azimuth fixed at 0° is shown in Figure 5.3A. The corresponding plot for four successive 9° steps in azimuth with elevation maintained at 0° is presented in part (D) of that figure. Such basic displays do provide information on the directional properties of the transformation, but only a few discrete directions are represented, and it is difficult to visualize how these transformations fit into the total spatial domain. Figure 5.3B is a three-dimensional surface plot of FETFs for 15 values of elevation from -36° (bottom) to +90° (top) in steps of 9° and with azimuth fixed at 0°. This surface contains the four FETFs in (A), which are identified by arrows at the left edge of the 3-D surface.

Fig. 5.1. Schematic diagram illustrating the factors that act upon an input signal d(n) to a loudspeaker and result in the recording of free-field signals u(n) and y(n). In the absence of the animal, the signal u(n) is recorded by a probe tube microphone with impulse response m(n). The acoustic delay term, f(n), represents the travel time from loudspeaker to probe tube microphone. With the animal present, the signal y(n) is recorded near the eardrum with the same microphone. In practice the acoustic delay with the cat present is made equal to f(n).

Fig. 5.2. FETF magnitude spectra derived from free-field recordings using the empirical estimation method (A, B) or the least-squares FIR filter method (C, D). The derivations are obtained at two signal-to-noise ratios (SNR): 59 dB and 24 dB. Comparison of percent relative error (right panel) shows that the FIR estimation is superior at low SNR.
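The two estimation techniques compared in Figure 5.2 can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the authors' code: the function names, the default tap count, and the use of `numpy.linalg.lstsq` are choices made here. The least-squares estimator solves the deconvolution of Eq. 5.1 entirely in the time domain, while the empirical estimator divides DFTs and inherits the low-SNR sensitivity discussed above.

```python
import numpy as np

def fetf_fir_least_squares(u, y, n_taps=256):
    """Model the FETF impulse response h as an FIR filter and choose its
    coefficients by least squares, so that u convolved with h best
    matches the eardrum recording y (time-domain deconvolution)."""
    N = len(y)
    U = np.zeros((N, n_taps))        # convolution matrix of u
    for j in range(n_taps):
        U[j:, j] = u[:N - j]         # column j = u delayed by j samples
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    return h

def fetf_dft_division(u, y):
    """Empirical estimate: ratio of DFTs.  Accurate at high SNR but
    ill-behaved wherever the denominator spectrum is small."""
    return np.fft.rfft(y) / np.fft.rfft(u)
```

In a noiseless simulation the least-squares estimate recovers a known impulse response essentially exactly; adding noise to `y` lets one explore the SNR behavior compared in Figure 5.2.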


In similar fashion, Figure 5.3E shows a 3-D surface plot consisting of FETFs for 15 values of azimuth, ranging from 0° (bottom) to -126° (top) in steps of 9° and with elevation fixed at 0°. This surface includes the four FETFs in (D), which are marked by arrows at the left of the surface. Surface plots facilitate appreciation of the interplay between frequency, azimuth and elevation in generating the fine structure of the FETFs, and call attention to systematic shifts of some of the features with frequency and direction. However, none of the foregoing plots directly displays the angles of azimuth and elevation. One graphical method that does provide angular displays is a technique long used by engineers and physicists in studying the directional properties of devices such as radio antennas, loudspeakers and microphones, namely polar plots of gain versus angular direction. In such plots the magnitude of the gain, commonly in dB, is represented by the length of a radius vector whose angle, measured with respect to a reference direction, indicates the direction in space for that particular measurement. Since the gain of a system is typically represented as a complex quantity, a second polar plot showing phase or another measure of timing may also prove informative. In the present application, the length of the radius vector is made proportional to the log magnitude of the FETF (in dB) for a specified frequency, while the angle of the vector denotes either azimuth (at a specified elevation) or elevation (at a specified azimuth). If desired, a second set of plots can illustrate the variation of FETF phase or onset time difference with spatial direction. In Figure 5.3C the variation of FETF magnitude with elevation is plotted in polar format for four values of azimuth and a fixed frequency of 10.9 kHz.
Similarly, Figure 5.3F shows the variation of FETF magnitude with azimuth for four values of elevation and the same frequency of 10.9 kHz. While rectangular plots can depict the same information, the polar plot seems, intuitively, to assist in visualizing the basic directional attributes of a sound receiver, namely the angles that describe its azimuth and elevation.

Fig. 5.3 (opposite). Three modes of graphical representation to illustrate variation in magnitude of the FETF with frequency (FREQ), azimuth (AZ) and elevation (EL). All data are for the left ear. AZ specifies direction in the horizontal plane with AZ = 0° directly in front and AZ = 180° directly behind the cat. EL specifies direction in the vertical plane with EL = 0° coincident with the interaural axis and EL = 90° directly above the cat. (A) FETF magnitude vs FREQ for four values of EL with AZ = 0°. (B) Three-dimensional surface plot of FETF for 15 values of EL from +90° (top) to -36° (bottom) in steps of 9° and AZ = 0°. This surface includes the four FETF functions in (A), identified by arrows at the left edge of the surface. (C) Polar plot of FETF vs EL with fixed FREQ of 10.9 kHz and for four values of AZ: 0°, -9°, -18° and -27°. Magnitude of the FETF in dB is plotted along the radius vector. Note that EL varies from +90° (directly overhead) to -36°. (D) FETF magnitude vs FREQ for four values of AZ with EL = 0°. (E) Same as (B) except the surface represents 15 values of AZ from -126° (top) to 0° (bottom) in steps of 9° and EL = 0°. This surface includes the four FETFs in (D), identified by arrows at the left edge of the surface. (F) Polar plot of FETF vs AZ with fixed FREQ of 10.9 kHz and for four values of EL: -18°, -9°, 0° and +9°.

3. EARPHONE DELIVERY OF VAS STIMULI

The success of the VAS technique also depends upon the design of an earphone sound-delivery system that can maintain the fidelity of VAS-synthesized waveforms. Typically, a sealed earphone delivery and measurement system introduces undesirable spectral transformations into the stimulus presented to the ear. To help overcome this problem, we use a specially designed insert earphone for the cat that incorporates a condenser microphone with attached probe tube for sound measurements near the eardrum.4 The frequency response of the insert earphone system measured in vivo is relatively flat (typically less than ±15 dB from 312 Hz to the upper limit of the calibrated range), with no sharp excursions in that frequency range. Ideally, the frequency response of both sound systems would be characterized by a flat magnitude and a linear phase spectrum. In practice, neither the earphone nor the measuring probe microphone used in our studies has such ideal characteristics. Thus, we currently employ least-squares FIR filters to compensate for these nonideal transducers. These filters are designed in the same manner as those employed for FETF estimation because they are solutions to essentially the same problem, namely minimizing the error between a transforming system's output and a reference signal.11 In order to judge the suitability of least-squares FIR filters for compensating our insert earphone system, comparisons were made between VAS-synthesized signals and the waveforms actually produced by the earphone sound-delivery system. Figure 5.4 illustrates data from one cat that are representative of signal fidelity under conditions of earphone stimulus delivery.
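Before turning to the measured comparisons, the filter-design step can be sketched with the same least-squares machinery. The sketch below is a stand-in for the authors' implementation, with the tap count and target delay chosen arbitrarily: it designs an FIR filter c such that the measured earphone impulse response g convolved with c approximates a delayed unit impulse, so that prefiltering VAS stimuli with c undoes the transducer's spectral coloration.

```python
import numpy as np

def earphone_compensation_fir(g, n_taps=128, delay=32):
    """Least-squares FIR compensation for a measured earphone impulse
    response g: find c minimizing ||g * c - delayed impulse||^2."""
    N = len(g) + n_taps - 1              # length of the full convolution
    G = np.zeros((N, n_taps))            # convolution matrix of g
    for j in range(n_taps):
        G[j:j + len(g), j] = g
    target = np.zeros(N)                 # desired overall response:
    target[delay] = 1.0                  # a pure impulse, delayed
    c, *_ = np.linalg.lstsq(G, target, rcond=None)
    return c
```

The modest delay gives the filter room to approximate the generally non-causal exact inverse; in practice the compensated response would be verified in situ, as the authors do with probe-tube measurements.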
The upper panel presents a sample VAS signal for a particular sound-source direction as it would be recorded near the cat's eardrum in the free field. The middle panel shows the same signal as delivered by the earphone, compensated by the least-squares FIR filter technique. To the eye they appear nearly identical, and the correlation coefficient approaches unity. The same comparison was made for 3632 directions in this cat's VAS and the distribution of correlation coefficients displayed as a histogram. The agreement is seen to be very good, indicating accurate compensation of these wideband stimuli throughout the cat's VAS. As a rule, our FIR filters perform admirably, with comparisons typically yielding very high correlation coefficients.

4. MATHEMATICAL MODEL OF VAS

4.1. SPATIAL FEATURE EXTRACTION AND REGULARIZATION (SFER) MODEL FOR FETFS IN THE FREQUENCY DOMAIN

Empirically measured FETFs represent discrete directions in the acoustic space for any individual.9 There is, however, no a priori method for interpolation of FETFs at intermediate directions. Therefore, lacking a parametric model, the number of discrete measurements increases in direct proportion to the desired resolution of the sampled space. For example, to define the space surrounding the cat at a resolution of 4.5° requires approximately 3200 unique directions for each ear.

Fig. 5.4. Top panel: Time waveform of an ear canal signal recorded by probe tube microphone. Middle panel: Time waveform of the corresponding signal recorded in an ear canal simulator when delivered through the insert earphone system compensated by a least-squares FIR filter. Lower panel: Distribution of correlation coefficients in one cat for 3632 directions, obtained between the time waveform of the free-field signal and the corresponding waveform delivered through the compensated insert earphone system.

This situation is undesirable both in terms of the time required for complete data collection and because physical constraints preclude measurements at so many required directions. To address these limitations, a functional model of the FETF was devised and validated based on empirical measurements from a cat and a KEMAR model.7,8 Implementation of this parametric model means that FETFs can be synthesized for any direction and at any resolution in the sampled space. Such a need arises, for example, in the simulation of moving sound sources or reverberant environments. Additionally, the use of a functional model should aid in the analysis of a multidimensional acoustical FETF database, especially when cross-subject or cross-species differences are studied. In this model, a Karhunen-Loève expansion is used to represent each FETF as a weighted sum of eigenfunctions. These eigenfunctions are obtained by applying eigen decomposition to the data covariance matrix formed from a limited number (several hundred) of empirically measured FETFs. Only a small number (a few dozen) of the eigenfunctions are theoretically necessary to account for at least 99.9% of the covariance in the empirically measured FETFs. The expansion therefore yields a low-dimensional representation of FETF space; the eigenfunctions are termed eigen transfer functions (ETF) because they are functions of frequency only. An FETF for any given spatial direction is synthesized as a weighted sum of the ETFs. Importantly, the weights are functions of spatial direction (azimuth and elevation) only and are thus termed spatial characteristic functions (SCF). Sample SCFs, with data points restricted to coordinates at the measured azimuths and elevations, are obtained by back-projecting the empirically measured FETFs onto the ETFs. Spatially continuous SCFs are then obtained by fitting these SCF samples with two-dimensional splines.
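The spline-fitting step can be illustrated with an off-the-shelf smoothing fit. The chapter performs this regularization with RK-pack; the sketch below substitutes scipy's `RBFInterpolator` with a thin-plate-spline kernel, a related but different machinery, and the function name and smoothing value are inventions for illustration only.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def regularize_scf(az_deg, el_deg, scf_samples, smoothing=1e-6):
    """Fit discrete SCF samples (one value per measured direction) with
    a thin-plate spline, yielding an SCF that is continuous in azimuth
    and elevation and can be evaluated at unmeasured directions."""
    pts = np.column_stack([az_deg, el_deg])
    rbf = RBFInterpolator(pts, scf_samples,
                          kernel='thin_plate_spline', smoothing=smoothing)
    def scf(az, el):
        return float(rbf(np.array([[az, el]]))[0])
    return scf
```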
The fitting process is often termed regularization. The complete paradigm is therefore termed a spatial feature extraction and regularization (SFER) model. Detailed acoustical validation of the SFER model was performed on one cat in which 593 measured FETFs were used to construct the covariance matrix. Twenty of the ETFs were found to be significant.8 The model performs accurately: comparison of more than 1800 synthesized versus empirically measured FETFs indicates nearly identical functions. Errors between modeled and measured FETFs are generally less than one percent for any direction in which the measured data have adequate signal-to-noise characteristics. Importantly, the model's performance is similarly high when locations are interpolated between the 593 input directions; these modeled FETFs are compared with measured data that were not used to form the covariance matrix.

4.2. TIME DOMAIN AND BINAURAL EXTENSION

TO THE SFER MODEL

The SFER model for complex-valued FETFs was originally designed as a monaural representation and was implemented entirely in the frequency domain. In order to study the neural mechanisms of binaural directional hearing using an on-line interactive VAS paradigm, the SFER model was implemented in the time domain and extended to provide binaural information. Figure 5.5 shows the components of the complete model in block-diagram form. These extensions have several advantages. First, an interaural time difference is a natural consequence of the time-domain approach; this parameter was lacking in the previous frequency-domain model. Second, the time domain does not involve complex-number calculations. This becomes especially advantageous when synthesizing moving sound sources or reverberant environments on-line in response to changing experimental conditions. Lastly, most digital signal processors are optimized for filtering in the time domain, viz. convolution. If a real-time implementation of VAS is eventually achieved using these processors, a time-domain approach is clearly preferable.

Fig. 5.5. Block diagram illustrating how an input set of measured FETF impulse responses from one cat is processed by the binaural SFER model in the time domain to yield a left- and right-ear impulse response for any direction on the partially sampled sphere. See text for description of model components.

4.2.1. Data Covariance Matrix

The SFER model produces a lower-dimensional representation of the FETF space (see section 4.1) by applying an eigen decomposition to the covariance matrix of the input set of FETFs. In the time-domain approach, the input data for each modeled cat consist of the impulse responses from a subset (number chosen = P) of measured FETFs obtained at 9° increments on a spherical coordinate system spanning 360° in azimuth and 126° in elevation, and centered on the interaural axis. The impulse response h_j of a measured FETF at each direction (j = 1, 2, ..., P) is estimated by the least-squares FIR method (see section 2.1) using the free-field recordings y(n) and u(n) shown in Figure 5.1. The FETF impulse-response covariance matrix R(h) is then defined as:

R(h) = Σ_{j=1}^{P} Λ_j [h_j - e_0][h_j - e_0]^T   (Eq. 5.2)

where [·]^T denotes the matrix transpose, e_0 = (1/P) Σ_{j=1}^{P} h_j is the average of the P measured FETF impulse responses, and Λ_j = 1 - sin(EL_j) is a simple function of the elevation at that direction, used to compensate for the different separations (measured in arc length) between measured FETFs obtained at different elevations on the sphere.

4.2.2. Eigen impulse response (EIR)

In actual use, a measured FETF impulse response is a 256-point vector; the covariance matrix is therefore real and symmetric with dimensions 256-by-256. Accordingly, the eigen decomposition of the matrix defined by Eq. 5.2 yields a 256-by-256 eigen matrix whose columns are 256-point eigenvectors. These eigenvectors constitute the new basis functions that represent the FETF impulse-response space. An eigenvector is accordingly termed an eigen impulse response (EIR), analogous to the eigen transfer function (ETF) notation employed in the frequency domain. Figure 5.6 shows the first- through fourth-order EIRs and their corresponding ETFs for one representative cat. In the frequency domain, the features of the ETFs resemble the major characteristics that define measured FETFs (see section 2.1).
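A compact numpy rendering of Eq. 5.2 and its eigen decomposition follows. It is a sketch under stated assumptions: the elevation weight is written as 1 - sin(EL) following the reconstruction of Eq. 5.2 above, the array shapes are kept generic rather than fixed at 256 points, and `numpy.linalg.eigh` stands in for whatever eigen solver the authors used.

```python
import numpy as np

def eigen_impulse_responses(hirs, elevations_deg):
    """Weighted covariance of measured FETF impulse responses (Eq. 5.2)
    and its eigen decomposition.  hirs: (P, n) array, one impulse
    response per measured direction.  Returns eigenvalues in descending
    order, the EIRs as columns of an (n, n) matrix, and the mean e0."""
    P, n = hirs.shape
    lam = 1.0 - np.sin(np.radians(elevations_deg))  # elevation weight
    e0 = hirs.mean(axis=0)                          # mean impulse response
    dev = hirs - e0                                 # deviations from mean
    R = (dev * lam[:, None]).T @ dev   # sum_j lam_j (h_j-e0)(h_j-e0)^T
    w, V = np.linalg.eigh(R)           # ascending eigenvalues
    order = np.argsort(w)[::-1]        # re-sort to descending order
    return w[order], V[:, order], e0
```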
Fig. 5.6. The first- through fourth-order EIRs and their corresponding ETFs for one representative cat using the SFER model. Note the general similarity between ETFs and measured FETFs.

The relative importance of an EIR is related to its eigenvalue, because the eigenvalue represents the energy of all measured FETF impulse responses projected onto that particular EIR basis vector. The sum of all 256 eigenvalues accounts for 100% of the energy in the measured FETF impulse-response space represented by the 256 EIRs. The number M of EIRs (i.e., the dimensionality) is dramatically lower if only somewhat less than 100% of the energy is to be represented by the expansion. The mean-squared error (MSE) associated with the use of M EIRs to represent the P measured FETF impulse responses is given by:

MSE = Σ_{i=M+1}^{256} λ_i   (Eq. 5.3)

where the eigenvalues have been arranged in descending order, λ_1 ≥ λ_2 ≥ ... ≥ λ_256. A value of M = 20 was obtained for each of three modeled cats when

the MSE was set to 0.1%. Thus, the SFER model with a dimension of only 20 EIRs will represent a subspace accounting for about 99.9% of the energy in all the measured FETF impulse responses for each representative cat. In terms of the model, any of the P measured FETF impulse responses h is synthesized from the EIRs by the following relationship:

h(j) = Σ_{i=1}^{20} w_i EIR_i + e_0   (Eq. 5.4)

Here j is the index that selects any one of the P spatial directions in the input measurement set, and w_i is a weighting function given by the projection of h(j) onto the i-th EIR. Formally:

w_i = [EIR_i]^T h(j)

Equation 5.4 is often termed a Karhunen-Loève expansion. It is important to realize that this expansion effects a separation of time coordinates from spatial coordinates. That is, the elements of the EIR_i column vectors correspond only to the time dimension, while the w_i are functions only of spatial coordinates (i.e., azimuth and elevation).

4.2.3. Spatial feature extraction and regularization

The weighting functions w_i are termed spatial characteristic functions (SCF) since, by virtue of Equation 5.4, they are functions only of azimuth and elevation. These SCFs are defined only for the discrete coordinate pairs in the input measurement set. A continuous function can be determined by applying regression to these discrete samples in a framework of regularization.12 In our application, a software package, RK-pack, is used to regularize the discrete spatial samples into SCFs that are continuous functions, ŵ_i, of the spatial coordinates.13 The first- through fourth-order SCFs from one representative modeled cat are shown as mesh plots in Figure 5.7. Previous work indicated that features of SCFs of different order are closely related to the head and external-ear geometry of cats and humans.7,8 For example, the major peak in the first-order SCF is related to the direction of the pinna opening in this modeled cat.
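Dimension selection by Eq. 5.3 and the truncated expansion of Eq. 5.4 reduce to a few lines of numpy. In this sketch (names and the energy tolerance are illustrative, not the authors' code) the SCF sample is computed as the projection of the mean-removed response onto each EIR, which is the form for which the weighted sum plus e_0 reconstructs the measurement exactly when all EIRs are retained.

```python
import numpy as np

def choose_dimension(eigvals, energy_fraction=0.999):
    """Smallest M whose leading eigenvalues capture the requested energy
    fraction; the discarded tail is the MSE of Eq. 5.3."""
    frac = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(frac, energy_fraction) + 1)

def kl_reconstruct(h, eirs, e0, M):
    """Karhunen-Loeve reconstruction (cf. Eq. 5.4): project h onto the
    first M EIRs and resynthesize as a weighted sum plus the mean e0."""
    w = eirs[:, :M].T @ (h - e0)   # SCF samples for this direction
    return eirs[:, :M] @ w + e0, w
```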
More detailed comparisons and speculations concerning the use of SCFs to describe the physical characteristics of the external ear are presented elsewhere.7 In our current implementation, only 20 EIRs and their corresponding SCFs are necessary to simulate free-field signals with a high degree of fidelity for the cat. Any direction d on the partially sampled sphere, including those intermediate to the original P measurement locations, can be represented by the impulse response

h(d) = Σ_{i=1}^{20} ŵ_i(d) EIR_i + e_0   (Eq. 5.5)

Thus, the SFER model reduces to a particularly simple implementation: a linear combination of EIRs weighted by SCFs.

4.2.4. Regularization of ITD samples and binaural output

In order to extend the SFER model for binaural studies, ITD information must be extracted from the free-field data at the P measurement sites used as input for each modeled cat. These data consist of the free-field recordings y(n) and u(n) shown in Figure 5.1, from which a position-dependent time delay can be calculated by comparing the signal onset of y(n) with the signal onset of u(n). The onset of a recording is defined as the 10% height of the first rising edge of the signal with respect to its global absolute maximum. In practice, this monaural delay-versus-location function is estimated for only one ear; the delay function for the opposite ear is then derived by assuming a symmetrical head. The ITD function for these discrete locations is simply the algebraic difference, using the direction (AZ = 0°, EL = 0°) as the zero reference. An ITD function, continuous in azimuth and elevation, can be derived for the partial sphere by applying the same regularization paradigm as used for the SCFs (see section 4.2.3). Figure 5.5 depicts the ITD extraction and regularization modules and their incorporation to produce a VAS signal appropriate for the left and right ear.

Fig. 5.7. Mesh plots of the first- through fourth-order regularized SCFs for the same cat whose EIRs are shown in Figure 5.6. All plots are magnitude only.

4.2.5. Acoustic validation of the implementation

The time-domain binaural SFER model for VAS was validated by comparing the model's output, as either an FETF or its impulse response, to the empirically measured data for each modeled cat. In general, two comparisons are made: (1) as a fidelity check, at the P measurement locations that were used to determine both EIRs and SCFs; and (2) as a predictability or interpolation check, at a large number of other locations that were not used to determine the model parameters. The degree to which the model reproduced the measurements was similarly exact regardless of the comparison employed. Figure 5.8 depicts a representative comparison between data for a modeled and a measured direction, in both the time and frequency domains. The error is generally so small that it cannot be detected by visual inspection. Therefore, a quantitative comparison is made by calculating the percent mean-squared error between the modeled and measured data for all directions in the comparison. Figure 5.8 illustrates this error distribution for one modeled cat. The shape and mean (0.94%) of the distribution are comparable to the results obtained from the frequency-domain SFER model.

4.3. QUASI-REAL-TIME OPERATION

The implementation of a mathematical representation (SFER model) for VAS is computationally demanding, since even with a low-dimensional model dozens of EIRs and their weighting functions (SCFs) must be combined to compute the FETF for every sound-source direction.
To make matters worse, we have chosen to minimize the number of off-line calculations that provide static initialization data to the model, and instead require that the software calculate direction-dependent stimuli from unprocessed input variables. This computation provides greater flexibility of stimulus control during an actual experiment by permitting a large number of stimulus parameters to be changed interactively to best suit the current experimental conditions. The resolution of the VAS is one major parameter that is not fixed by an initialization procedure but is computed on demand, and it can be set as small as 0.1°. The transfer characteristics of the earphone sound-delivery system may have to be measured several times during the course of a long experiment to ensure stability. Therefore, earphone compensation is not statically linked to VAS stimulus synthesis but rather is accomplished dynamically as part of the on-demand computation.
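Two ingredients of this on-demand computation can be sketched together: the onset-based delay extraction of section 4.2.4 (the 10%-of-maximum rule) and the assembly of a binaural stimulus from the two ears' FETF impulse responses, the compensation filter, and an ITD. The pipeline below is a simplified guess at the structure, not the authors' software: the channel naming, the sign convention for the ITD, and the whole-sample delay are all assumptions made for illustration.

```python
import numpy as np

def signal_onset(x, fs, threshold=0.10):
    """Onset time: the first sample whose magnitude reaches 10% of the
    signal's global absolute maximum (section 4.2.4's onset rule)."""
    x = np.asarray(x, dtype=float)
    level = threshold * np.max(np.abs(x))
    return int(np.argmax(np.abs(x) >= level)) / fs

def itd_from_onsets(y_left, y_right, fs):
    """ITD as the algebraic difference of left- and right-ear onsets."""
    return signal_onset(y_left, fs) - signal_onset(y_right, fs)

def synthesize_vas_stimulus(s, h_left, h_right, comp, itd_s, fs):
    """Filter source s with each ear's FETF impulse response, apply the
    earphone-compensation filter comp, and impose the ITD as a
    whole-sample delay (positive ITD delays the left channel here)."""
    left = np.convolve(np.convolve(s, h_left), comp)
    right = np.convolve(np.convolve(s, h_right), comp)
    shift = int(round(abs(itd_s) * fs))
    pad = np.zeros(shift)
    if itd_s > 0:
        left, right = np.concatenate([pad, left]), np.concatenate([right, pad])
    elif itd_s < 0:
        left, right = np.concatenate([left, pad]), np.concatenate([pad, right])
    return left, right
```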

Fig. 5.8. Comparison between modeled and measured impulse responses of the FETF. Left column: SFER-modeled impulse response and corresponding FETF for a representative sound-source direction. Middle column: impulse response and FETF measured at the corresponding direction using the techniques discussed in section 2. Right column: Distribution of percent relative error between modeled and measured impulse responses for 3632 directions in one cat.

Fig. 5.9. Block diagram illustrating the components and interactions employed to implement a VAS suitable to the on-line demands of neurophysiological experiments in the cat.

Fortunately, in recent years the processing power of desktop workstations and personal computers has increased dramatically while the cost of these instruments has actually dropped. We employ a combination of laboratory and desktop workstations as independent components connected over Ethernet hardware (Fig. 5.9). We refer to this combination as a quasi-real-time implementation because the computation and delivery of a VAS-synthesized stimulus is not instantaneous but rather occurs as distinctly separate events that require a fixed duration for completion. Interaction with the end user is handled by a window-based user interface on a MicroVax-II/GPX computer. This VAS interface program controls stimulus setup and delivery, data acquisition, and graphical display of results. Upon receipt of setup variables, VAS stimuli are synthesized on a VAX-3000 Alpha workstation and the resulting stimulus waveforms transferred back to program-control memory for subsequent earphone delivery. All parameters of stimulus delivery, together with stimulus waveforms for the left- and right-ear channels, are passed to a Digital Stimulus System that produces independent signals through 16-bit D/A converters and provides synchronization pulses for data acquisition procedures initiated by the VAS control program.14,15

5. RESPONSES OF SINGLE CORTICAL NEURONS TO DIRECTIONAL SOUNDS

5.1. VIRTUAL SPACE RECEPTIVE FIELD (VSRF)

Virtual acoustic space is a propitious technique to study in a systematic way a neuron's sensitivity to the direction of simulated free-field sounds.5,16 The synthesized right- and left-ear stimulus waveforms for each direction surrounding an individual will mimic all the significant acoustic features contained in sound from a free-field source. The deterministic nature of virtual space means that any arbitrary signal can be transformed for any selected combination and sequence of sound-source directions. Furthermore, when virtual space stimuli are delivered through compensated earphones, the traditional advantages of dichotic delivery become accessible. In order to achieve our principal aim of understanding cortical mechanisms of directional hearing, we have been exploring in detail the sensitivity of auditory cortical neurons to the important and independently controllable parameters involved in detecting sound-source direction: intensity, timing, and spectrum. Here we present examples of data from these studies to illustrate how the advantages of the VAS approach may be brought to bear on studies of mechanisms of directional hearing. For our studies, the direction of a VAS stimulus is referenced to an imaginary sphere centered on the cat's head (Fig. 5.10). The sphere is bisected into front and rear hemispheres in order to show the relationship to a spatial receptive field (VSRF) that is plotted on a representation of the interior surface of the sphere. The VSRF is composed of those directions in the sampled space from which a sound evokes a response from the cell.
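Mapping a VSRF presupposes a fixed catalogue of sampled directions and a rule for setting stimulus level; the next paragraph describes the actual set (roughly 1650 directions spanning 360° of azimuth and 126° of elevation, with intensity given as decibels of attenuation from a calibrated maximum). A toy sketch of both ingredients, using a coarser plain latitude-longitude grid (the spacings are our own illustrative choices; the real set spaced directions more evenly over the sphere):

```python
import numpy as np

def direction_grid(az_step=9.0, el_min=-36.0, el_max=90.0, el_step=9.0):
    """Illustrative grid of virtual directions covering 360 deg of
    azimuth and 126 deg of elevation. A plain lat-long grid for
    simplicity; a production set would thin azimuths near the pole."""
    azimuths = np.arange(-180.0, 180.0, az_step)
    elevations = np.arange(el_min, el_max + el_step, el_step)
    return [(az, el) for el in elevations for az in azimuths]

def atten_to_scale(db_attn):
    """Convert dB of attenuation (dB ATTN) from the calibrated
    maximum level to a linear amplitude scale factor."""
    return 10.0 ** (-db_attn / 20.0)

grid = direction_grid()
# 40 azimuths x 15 elevations = 600 directions at this coarser spacing.
assert len(grid) == 600
assert abs(atten_to_scale(20.0) - 0.1) < 1e-12
```

Each stimulus presentation then amounts to picking one `(azimuth, elevation)` entry at random, rendering the source through that direction's FETF pair, and scaling by the attenuation factor.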
In order to evaluate the extent of a VSRF, a stimulus set consisting of approximately 1650 regularly spaced spherical directions covering 360° in azimuth (horizontal direction) and 126° in elevation (vertical direction) is generally employed. The maximum level of the stimulus set is fixed for each individual cat according to the earphone acoustic calibration. Intensity is then expressed as decibels of attenuation (dB ATTN) from this maximum and can be varied over a range of 127 dB. Usually each virtual space direction in the set is tested once, using a random order of selection. About 12 minutes is required to present the entire virtual space stimulus set at one intensity level, and to collect and display single-neuron data on-line. Typically, cortical neurons respond to a sound from any effective direction with a burst of 1-4 action potentials time-locked to stimulus onset. The representative VSRF shown in Figure 5.10C depicts the directions and surface area (interpolating to the nearest sampled locus) at which a burst was elicited from a single cell located in the left primary (AI) auditory cortex. Thus, with respect to the midline of the cat, the cell responded almost exclusively to directions in the contralateral (right side of midline) hemifield. Using this sampling paradigm,

Fig. 5.10. Coordinates of virtual acoustic space and their relationship to the position of the cat. (A) The direction of a sound source is referenced to a spherical coordinate system with the animal's interaural axis centered within an imaginary sphere. (B) The sphere has been bisected into a FRONT and REAR hemisphere and the REAR hemisphere rotated 180° as if hinged to the FRONT. (C) Experimental data representing the spatial receptive field of a single neuron are plotted on orthographic projections of the interior surface of the imaginary sphere.

the majority of cells (~60%) recorded in the cat's primary (AI) auditory field showed this contralateral preference.5 Smaller populations of neurons showed a preference for the ipsilateral hemifield (~10%) or the frontal hemisphere (~7%). Some receptive fields showed little or no evidence for direction selectivity, and are termed omnidirectional (~16%). The remainder exhibited complex VSRFs.

5.2. RELATIONSHIPS OF INTENSITY TO THE VSRF

Receptive field properties at any intensity level are influenced by both monaural and binaural cues. Which cue dominates in determining the size, shape and location of the VSRF depends, in part, on the overall stimulus intensity. Generally, at threshold intensity the VSRF is made up of relatively few effective sound-source directions, and these typically fall on or near the acoustic axis, above the interaural plane and either to the left or to the right of the midline. Raising intensity by 10 to 20 dB above threshold invariably results in new effective directions being recruited into the receptive field. Further increases in intensity result in further expansion of the VSRF in some cells but not in others. This is illustrated by the data of Figures 5.11 and 5.12, collected from two cells that had similar characteristic frequencies and similar thresholds to the same virtual space stimulus set. In analyzing these data, we took advantage of the fact that with the VAS paradigm it is possible to deliver stimuli to each ear independently. In both cases, presenting VAS signals to the contralateral ear alone at an intensity (70 dB ATTN) within dB of threshold resulted in VSRFs confined largely to the upper part of the contralateral frontal quadrant of VAS, on or near the acoustic axis. Response to stimulation of the ipsilateral ear alone was either very weak or nonexistent (not shown) in both cases.
When stimulus intensity was raised by 20 dB (to 50 dB ATTN), the responses of both neurons spread to include essentially all of VAS. These spatial response patterns could only have been formed, obviously, using the direction-dependent spectral features associated with the contralateral ear. Under binaural listening conditions, the size, shape and locations of VSRFs obtained at 70 dB ATTN were essentially the same as those observed under monaural conditions at that intensity. This is interpreted to mean that at low intensity the receptive field was dominated by the input from sounds in contralateral space. At 50 dB ATTN, however, the situation was quite different for the two neurons. For the neuron illustrated in Figure 5.12, responses spread but remained concentrated in contralateral acoustic space. We interpret this to mean that sound coming from ipsilateral (left) frontal directions suppressed the discharge of the cell, since at 50 dB ATTN the stimuli from these directions were quite capable of exciting the neuron if the ipsilateral ear was not engaged. For the cell illustrated in Figure 5.11 there was no such restriction in

the VSRF at 50 dB ATTN; the sound in the ipsilateral hemifield had little or no influence on the cell's discharge at any direction. Taking advantage of our earphone-delivery system, it was possible to probe the possible mechanism(s) that underlie the formation of these VSRFs by studying quantitatively the responses to changing parameters of monaural and binaural stimuli. Spike count-vs-intensity functions obtained under tone-burst conditions (lower left panels) showed that both neurons responded to high-frequency sounds; characteristic frequencies were 15 and 15.5 kHz, respectively.

Fig. 5.11. Virtual Space Receptive Field (VSRF) of an AI neuron at two intensities showing the results of stimulation of the contralateral ear alone (top row) and of the two ears together (middle row). Spike count-vs-intensity functions obtained with tone-burst stimuli delivered to the contralateral ear alone (bottom row, left) or to the two ears together (bottom row, right).

Because interaural intensity difference (IID) is known to operate as a sound-localization cue at high frequencies, we proceeded to study this parameter in isolation as well (lower right panels). For the neuron in Figure 5.12, the spike count-vs-IID function shows a steep slope for IIDs between -20 and +30 dB, which is the range of IID achieved by a normal cat at this frequency. These data provide good evidence that the strong inhibitory input from the ipsilateral ear revealed by these functions accounts

Fig. 5.12. Virtual Space Receptive Field (VSRF) of an AI neuron at two intensities showing the results of stimulation of the contralateral ear alone (top row) and of the two ears together (middle row). Spike count-vs-intensity functions obtained with tone-burst stimuli delivered to the contralateral ear alone (bottom row, left) or to the two ears together (bottom row, right).
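Because VAS stimuli are simply left/right waveform pairs, the IID that a given direction delivers can be read directly off the synthesized ear signals, for example as the ratio of their RMS levels in decibels. A sketch (the signal construction here is a hypothetical example, not data from the study):

```python
import numpy as np

def interaural_intensity_difference(left, right):
    """IID in dB, computed from the RMS levels of the two ear
    signals; positive values mean the right-ear signal is more
    intense."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(right) / rms(left))

# Hypothetical 15 kHz tone (near the cells' characteristic frequency),
# with the right-ear channel scaled 6 dB hotter than the left.
t = np.linspace(0.0, 0.01, 480, endpoint=False)   # 10 ms at 48 kHz
tone = np.sin(2.0 * np.pi * 15000.0 * t)
left, right = 0.1 * tone, 0.2 * tone
iid = interaural_intensity_difference(left, right)  # ~ +6.02 dB
```

Computed this way over the synthesized pairs, the IID varies with direction, which is what lets a spike count-vs-IID function measured dichotically be compared against the VSRF.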

for the restriction of this cell's VSRF to the contralateral hemifield. The spike count-vs-IID function illustrated in Figure 5.11 suggests why the VSRF of this neuron was not restricted to the contralateral hemifield: ipsilateral inhibition was engaged only at IIDs greater than 30 dB, which is beyond the range of IIDs available to the cat at these high frequencies. Thus, the VSRF for this cell was dominated by excitation evoked by sounds arriving from all virtual space directions.

5.3. COMPARISONS OF VSRFS OBTAINED USING THE VAS FROM DIFFERENT CATS

The general pattern of location-dependent spectral features is very similar among the individual cats that were studied in the free field.9 For the same sound-source direction, however, there can be significant differences in the absolute values of the spectral transformation, in both cats and humans.9,17,18 Our models of virtual acoustic space mimic these general patterns as well as individual differences, and thereby provide an opportunity to study the sensitivities of AI neurons to these individualized realizations of a VAS. The VAS for each of three different cats was selected to obtain three sequential estimates of a single neuron's VSRF. The comparisons are shown in Figure 5.13 for several intensity levels. Differences in the VSRFs among cats are most noticeable at low intensity levels, where the VSRF is smallest and attributable mainly to monaural input. Under this condition, the intensity for many directions in a cell's receptive field is near threshold level, and differences among the individualized VASs in absolute intensity at a fixed direction are accentuated by the all-or-none depiction of the VSRF. These results are typical of neurons that possess a large receptive field that grows with intensity to span most of an acoustic hemifield.
At higher intensity levels, most directions are well above their threshold level, where binaural interactions restrict the receptive field to the contralateral hemifield. Thus, while the VAS differs from one cat to the next, the neuronal mechanisms that must operate upon monaural and interaural intensity are sufficiently general to produce VSRFs that resemble one another in both extent and laterality.

5.4. TEMPORAL RELATIONSHIPS OF THE VSRF

The characterization of a cortical neuron's spatial receptive field has, so far, been confined to responses to a single sound source of varying direction delivered in the absence of any other intentional stimulation (i.e., in anechoic space). Of course, this is a highly artificial situation, since the natural acoustic environment of the cat contains multiple sound sources emanating from different directions and with different temporal separations. The use of VAS allows for the simulation of multiple sound sources, parametrically varied in their temporal separation and incident directions. Results of experiments using one

Fig. 5.13. VSRFs of an AI neuron obtained with three different realizations of VAS. Each VAS realization was obtained from the SFER model using an input set of measured FETF impulse responses from three different cats (see section 4). Rows illustrate VSRFs obtained at different intensities for each VAS.

such paradigm give us some of the first insights into the directional selectivity of AI neurons under conditions of competing sounds. There is good physiological evidence that the output of an auditory cortical neuron in response to a given sound can be critically influenced by its response to a preceding sound, measured on a time scale of tens or hundreds of milliseconds. We have studied this phenomenon under VAS conditions. The responses of the cell in Figure 5.14

Fig. 5.14. Response of an AI neuron to a one- or two-sound stimulus located within the cell's VSRF. Presentation of one sound from an effective direction (e.g., AZ = 27°, EL = 18°; white-filled circle) in the VSRF results in the time-locked discharge of the neuron (topmost dot raster). Remaining dot rasters show the response to two sounds that arrive at different times, but from the same direction. The response to the lagging sound is systematically suppressed as the delay between sounds is reduced from 300 to 50 milliseconds.

are representative of the interactions observed. The VSRF occupied most of acoustic space at the intensity used in this example. The topmost dot raster displays the time-locked discharge of the neuron to repetitive presentation of a sound from one particularly effective direction (AZ = 27°, EL = 18°) in the VSRF. The remaining dot rasters show the responses to a pair of sounds that arrive at different times, but from the same direction. The delay between sounds was varied from 50 to 300 milliseconds. The response to the lagging sound was robust and essentially identical to the response to the leading sound for delays of 125 msec and greater. At delays shorter than 125 msec, the effect of the leading sound was to suppress the response to the lagging one; suppression was complete when the temporal separation was reduced to 50 msec. In some cells, the joint response to the two sounds could be facilitative at critical temporal separations below about 20 msec (not shown). The directional selectivity of most cortical neurons recorded appeared to be influenced by pairs of sounds with temporal separations similar to those illustrated in Figure 5.14. There is also evidence from previous dichotic earphone experiments that a cortical cell's output to competing sounds has a spectral basis as well as a temporal one. We have studied the interaction of spectral and temporal dependencies by presenting one sound from a fixed direction within the cell's VSRF and a second sound that was varied systematically in direction but at a fixed delay with respect to the first. This means that the spectrum of the delayed sound changed from direction to direction while that of the leading sound remained constant. A VSRF was then obtained in response to the lagging sound at several different temporal separations.
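The delay dependence just described — complete suppression of the lagging response at a 50 msec separation, full recovery by about 125 msec — can be summarized as a simple recovery function of delay. The linear interpolation below is purely our own caricature for illustration, not a model fitted by the authors:

```python
def lagging_response_fraction(delay_ms,
                              full_suppression_ms=50.0,
                              full_recovery_ms=125.0):
    """Caricature of the forward suppression seen in the two-sound
    paradigm: the lagging sound's response is abolished at 50 ms
    separation, fully recovered at 125 ms and beyond, and graded in
    between (linear interpolation chosen purely for illustration)."""
    if delay_ms <= full_suppression_ms:
        return 0.0
    if delay_ms >= full_recovery_ms:
        return 1.0
    return (delay_ms - full_suppression_ms) / (full_recovery_ms - full_suppression_ms)

assert lagging_response_fraction(50.0) == 0.0    # complete suppression
assert lagging_response_fraction(300.0) == 1.0   # full recovery
```

A cell showing facilitation at very short separations would need an additional term; the point of the sketch is only that the raster data trace out a well-defined recovery curve that can be parameterized per cell.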
Figure 5.15 shows the progressive reduction in size of the VSRF to the lagging sound that was associated with shortening of the delay between the lead and lag sounds from 150 to 50 milliseconds. Note that this cell is the same one whose delay data were illustrated in Figure 5.14. In such cells, the directional selectivity under competing-sound conditions appeared to be altered dynamically and systematically by factors related to both temporal separation and sound-source direction.

5.5. INTERNAL STRUCTURE OF THE VSRF

The mapping of VSRFs shown in previous illustrations is only meant to depict the areal extent over which the cell's output can be influenced. This may be considered the neuron's spatial tuning curve. However, both the discharge rate and response latency of a cortical neuron can vary within its spatial receptive field.22 In some cells, a gradient was observed in the receptive field, consisting of a small region in the frontal hemifield where sounds evoked a high discharge rate and short response latency, surrounded by a larger region where sounds evoked relatively lower rates and longer latencies. One paradigm used to reveal this variation in response metrics is to stimulate

Fig. 5.15. VSRF of an AI neuron derived from the response to the lagging sound of a two-sound stimulus. The two sounds arrived at different times and from different directions. The leading sound was fixed at an effective direction (e.g., AZ = 27°, EL = 18°) in the VSRF. Progressive shortening of the delay to the lagging sound from 150 to 50 milliseconds resulted in a concurrent reduction in the VSRF size.

repetitively at closely spaced directions spanning the azimuthal dimension.23,24 In this approach, the VSRF is used to guide the selection of relevant azimuthal locations. This analysis was performed on the cell illustrated in Figure 5.16. The histograms show the mean first-spike latency and summed spike count to 40 stimulus repetitions as a function of azimuth in the frontal hemifield. The azimuthal functions were determined at three different elevations. Regardless of elevation, response latency was always shortest and spike output always greatest in the contralateral frontal quadrant (0° to +90°) of the VSRF. The degree of modulation of the output was, however, dependent on elevation. The strongest effects of varying azimuthal position were near the interaural line (E = 0°) and the weakest effects were at the highest elevation. For other cortical cells, both the VSRF and its internal structure were found to contain more complicated patterns.

Fig. 5.16. Spatial variation in response latency and strength across a VSRF of an AI neuron. Azimuth functions (response-vs-azimuth) were obtained at three fixed elevations in the frontal hemifield. Histograms show the mean first-spike latency and summed spike count to 40 stimulus repetitions at each direction along the azimuth.
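The two response metrics plotted in the azimuth functions — mean first-spike latency and summed spike count over the stimulus repetitions — are straightforward to compute from per-trial spike times. A sketch (the spike data and the 100 msec analysis window are hypothetical):

```python
import numpy as np

def azimuth_function_point(spike_times_by_trial, window_ms=(0.0, 100.0)):
    """From per-trial spike-time lists (ms re: stimulus onset) at one
    direction, return (mean first-spike latency, summed spike count)
    within an analysis window, as in the azimuth functions above."""
    lo, hi = window_ms
    latencies, total = [], 0
    for trial in spike_times_by_trial:
        in_window = [t for t in trial if lo <= t <= hi]
        total += len(in_window)
        if in_window:
            latencies.append(min(in_window))
    mean_latency = float(np.mean(latencies)) if latencies else float("nan")
    return mean_latency, total

# Hypothetical rasters: three repetitions at one azimuth.
trials = [[12.5, 18.0], [11.0], [13.5, 20.0, 44.0]]
lat, count = azimuth_function_point(trials)
# Six spikes in the window; mean first-spike latency (12.5+11.0+13.5)/3.
assert count == 6
```

Evaluating this at each tested azimuth, at a fixed elevation, yields the latency and count histograms of the figure.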


More information

Reducing comb filtering on different musical instruments using time delay estimation

Reducing comb filtering on different musical instruments using time delay estimation Reducing comb filtering on different musical instruments using time delay estimation Alice Clifford and Josh Reiss Queen Mary, University of London alice.clifford@eecs.qmul.ac.uk Abstract Comb filtering

More information

I R UNDERGRADUATE REPORT. Stereausis: A Binaural Processing Model. by Samuel Jiawei Ng Advisor: P.S. Krishnaprasad UG

I R UNDERGRADUATE REPORT. Stereausis: A Binaural Processing Model. by Samuel Jiawei Ng Advisor: P.S. Krishnaprasad UG UNDERGRADUATE REPORT Stereausis: A Binaural Processing Model by Samuel Jiawei Ng Advisor: P.S. Krishnaprasad UG 2001-6 I R INSTITUTE FOR SYSTEMS RESEARCH ISR develops, applies and teaches advanced methodologies

More information

Sound Source Localization using HRTF database

Sound Source Localization using HRTF database ICCAS June -, KINTEX, Gyeonggi-Do, Korea Sound Source Localization using HRTF database Sungmok Hwang*, Youngjin Park and Younsik Park * Center for Noise and Vibration Control, Dept. of Mech. Eng., KAIST,

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 1, 21 http://acousticalsociety.org/ ICA 21 Montreal Montreal, Canada 2 - June 21 Psychological and Physiological Acoustics Session appb: Binaural Hearing (Poster

More information

ENHANCED PRECISION IN SOURCE LOCALIZATION BY USING 3D-INTENSITY ARRAY MODULE

ENHANCED PRECISION IN SOURCE LOCALIZATION BY USING 3D-INTENSITY ARRAY MODULE BeBeC-2016-D11 ENHANCED PRECISION IN SOURCE LOCALIZATION BY USING 3D-INTENSITY ARRAY MODULE 1 Jung-Han Woo, In-Jee Jung, and Jeong-Guon Ih 1 Center for Noise and Vibration Control (NoViC), Department of

More information

Nonuniform multi level crossing for signal reconstruction

Nonuniform multi level crossing for signal reconstruction 6 Nonuniform multi level crossing for signal reconstruction 6.1 Introduction In recent years, there has been considerable interest in level crossing algorithms for sampling continuous time signals. Driven

More information

Perception of pitch. Definitions. Why is pitch important? BSc Audiology/MSc SHS Psychoacoustics wk 5: 12 Feb A. Faulkner.

Perception of pitch. Definitions. Why is pitch important? BSc Audiology/MSc SHS Psychoacoustics wk 5: 12 Feb A. Faulkner. Perception of pitch BSc Audiology/MSc SHS Psychoacoustics wk 5: 12 Feb 2009. A. Faulkner. See Moore, BCJ Introduction to the Psychology of Hearing, Chapter 5. Or Plack CJ The Sense of Hearing Lawrence

More information

Convention Paper Presented at the 139th Convention 2015 October 29 November 1 New York, USA

Convention Paper Presented at the 139th Convention 2015 October 29 November 1 New York, USA Audio Engineering Society Convention Paper Presented at the 139th Convention 2015 October 29 November 1 New York, USA 9447 This Convention paper was selected based on a submitted abstract and 750-word

More information

How Accurate is Your Directivity Data?

How Accurate is Your Directivity Data? How Accurate is Your Directivity Data? A white paper detailing an idea from Ron Sauro: A new method and measurement facility for high speed, complex data acquisition of full directivity balloons By Charles

More information

Pressure vs. decibel modulation in spectrotemporal representations: How nonlinear are auditory cortical stimuli?

Pressure vs. decibel modulation in spectrotemporal representations: How nonlinear are auditory cortical stimuli? Pressure vs. decibel modulation in spectrotemporal representations: How nonlinear are auditory cortical stimuli? 1 2 1 1 David Klein, Didier Depireux, Jonathan Simon, Shihab Shamma 1 Institute for Systems

More information

Spatial Audio & The Vestibular System!

Spatial Audio & The Vestibular System! ! Spatial Audio & The Vestibular System! Gordon Wetzstein! Stanford University! EE 267 Virtual Reality! Lecture 13! stanford.edu/class/ee267/!! Updates! lab this Friday will be released as a video! TAs

More information

Effects of Reverberation on Pitch, Onset/Offset, and Binaural Cues

Effects of Reverberation on Pitch, Onset/Offset, and Binaural Cues Effects of Reverberation on Pitch, Onset/Offset, and Binaural Cues DeLiang Wang Perception & Neurodynamics Lab The Ohio State University Outline of presentation Introduction Human performance Reverberation

More information

Analysis of Frontal Localization in Double Layered Loudspeaker Array System

Analysis of Frontal Localization in Double Layered Loudspeaker Array System Proceedings of 20th International Congress on Acoustics, ICA 2010 23 27 August 2010, Sydney, Australia Analysis of Frontal Localization in Double Layered Loudspeaker Array System Hyunjoo Chung (1), Sang

More information

Structure of Speech. Physical acoustics Time-domain representation Frequency domain representation Sound shaping

Structure of Speech. Physical acoustics Time-domain representation Frequency domain representation Sound shaping Structure of Speech Physical acoustics Time-domain representation Frequency domain representation Sound shaping Speech acoustics Source-Filter Theory Speech Source characteristics Speech Filter characteristics

More information

Multi-channel Active Control of Axial Cooling Fan Noise

Multi-channel Active Control of Axial Cooling Fan Noise The 2002 International Congress and Exposition on Noise Control Engineering Dearborn, MI, USA. August 19-21, 2002 Multi-channel Active Control of Axial Cooling Fan Noise Kent L. Gee and Scott D. Sommerfeldt

More information

THE MATLAB IMPLEMENTATION OF BINAURAL PROCESSING MODEL SIMULATING LATERAL POSITION OF TONES WITH INTERAURAL TIME DIFFERENCES

THE MATLAB IMPLEMENTATION OF BINAURAL PROCESSING MODEL SIMULATING LATERAL POSITION OF TONES WITH INTERAURAL TIME DIFFERENCES THE MATLAB IMPLEMENTATION OF BINAURAL PROCESSING MODEL SIMULATING LATERAL POSITION OF TONES WITH INTERAURAL TIME DIFFERENCES J. Bouše, V. Vencovský Department of Radioelectronics, Faculty of Electrical

More information

Envelopment and Small Room Acoustics

Envelopment and Small Room Acoustics Envelopment and Small Room Acoustics David Griesinger Lexicon 3 Oak Park Bedford, MA 01730 Copyright 9/21/00 by David Griesinger Preview of results Loudness isn t everything! At least two additional perceptions:

More information

The Discrete Fourier Transform. Claudia Feregrino-Uribe, Alicia Morales-Reyes Original material: Dr. René Cumplido

The Discrete Fourier Transform. Claudia Feregrino-Uribe, Alicia Morales-Reyes Original material: Dr. René Cumplido The Discrete Fourier Transform Claudia Feregrino-Uribe, Alicia Morales-Reyes Original material: Dr. René Cumplido CCC-INAOE Autumn 2015 The Discrete Fourier Transform Fourier analysis is a family of mathematical

More information

The analysis of multi-channel sound reproduction algorithms using HRTF data

The analysis of multi-channel sound reproduction algorithms using HRTF data The analysis of multichannel sound reproduction algorithms using HRTF data B. Wiggins, I. PatersonStephens, P. Schillebeeckx Processing Applications Research Group University of Derby Derby, United Kingdom

More information

Digital Signal Processing of Speech for the Hearing Impaired

Digital Signal Processing of Speech for the Hearing Impaired Digital Signal Processing of Speech for the Hearing Impaired N. Magotra, F. Livingston, S. Savadatti, S. Kamath Texas Instruments Incorporated 12203 Southwest Freeway Stafford TX 77477 Abstract This paper

More information

Pattern Recognition. Part 6: Bandwidth Extension. Gerhard Schmidt

Pattern Recognition. Part 6: Bandwidth Extension. Gerhard Schmidt Pattern Recognition Part 6: Gerhard Schmidt Christian-Albrechts-Universität zu Kiel Faculty of Engineering Institute of Electrical and Information Engineering Digital Signal Processing and System Theory

More information

Sound is the human ear s perceived effect of pressure changes in the ambient air. Sound can be modeled as a function of time.

Sound is the human ear s perceived effect of pressure changes in the ambient air. Sound can be modeled as a function of time. 2. Physical sound 2.1 What is sound? Sound is the human ear s perceived effect of pressure changes in the ambient air. Sound can be modeled as a function of time. Figure 2.1: A 0.56-second audio clip of

More information

Listening with Headphones

Listening with Headphones Listening with Headphones Main Types of Errors Front-back reversals Angle error Some Experimental Results Most front-back errors are front-to-back Substantial individual differences Most evident in elevation

More information

MEASURING DIRECTIVITIES OF NATURAL SOUND SOURCES WITH A SPHERICAL MICROPHONE ARRAY

MEASURING DIRECTIVITIES OF NATURAL SOUND SOURCES WITH A SPHERICAL MICROPHONE ARRAY AMBISONICS SYMPOSIUM 2009 June 25-27, Graz MEASURING DIRECTIVITIES OF NATURAL SOUND SOURCES WITH A SPHERICAL MICROPHONE ARRAY Martin Pollow, Gottfried Behler, Bruno Masiero Institute of Technical Acoustics,

More information

IMPULSE RESPONSE MEASUREMENT WITH SINE SWEEPS AND AMPLITUDE MODULATION SCHEMES. Q. Meng, D. Sen, S. Wang and L. Hayes

IMPULSE RESPONSE MEASUREMENT WITH SINE SWEEPS AND AMPLITUDE MODULATION SCHEMES. Q. Meng, D. Sen, S. Wang and L. Hayes IMPULSE RESPONSE MEASUREMENT WITH SINE SWEEPS AND AMPLITUDE MODULATION SCHEMES Q. Meng, D. Sen, S. Wang and L. Hayes School of Electrical Engineering and Telecommunications The University of New South

More information

Introduction. 1.1 Surround sound

Introduction. 1.1 Surround sound Introduction 1 This chapter introduces the project. First a brief description of surround sound is presented. A problem statement is defined which leads to the goal of the project. Finally the scope of

More information

Lateralisation of multiple sound sources by the auditory system

Lateralisation of multiple sound sources by the auditory system Modeling of Binaural Discrimination of multiple Sound Sources: A Contribution to the Development of a Cocktail-Party-Processor 4 H.SLATKY (Lehrstuhl für allgemeine Elektrotechnik und Akustik, Ruhr-Universität

More information

Linear Time-Invariant Systems

Linear Time-Invariant Systems Linear Time-Invariant Systems Modules: Wideband True RMS Meter, Audio Oscillator, Utilities, Digital Utilities, Twin Pulse Generator, Tuneable LPF, 100-kHz Channel Filters, Phase Shifter, Quadrature Phase

More information

Multirate Digital Signal Processing

Multirate Digital Signal Processing Multirate Digital Signal Processing Basic Sampling Rate Alteration Devices Up-sampler - Used to increase the sampling rate by an integer factor Down-sampler - Used to increase the sampling rate by an integer

More information

Fundamentals of Digital Audio *

Fundamentals of Digital Audio * Digital Media The material in this handout is excerpted from Digital Media Curriculum Primer a work written by Dr. Yue-Ling Wong (ylwong@wfu.edu), Department of Computer Science and Department of Art,

More information

Tone-in-noise detection: Observed discrepancies in spectral integration. Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O.

Tone-in-noise detection: Observed discrepancies in spectral integration. Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O. Tone-in-noise detection: Observed discrepancies in spectral integration Nicolas Le Goff a) Technische Universiteit Eindhoven, P.O. Box 513, NL-5600 MB Eindhoven, The Netherlands Armin Kohlrausch b) and

More information

This tutorial describes the principles of 24-bit recording systems and clarifies some common mis-conceptions regarding these systems.

This tutorial describes the principles of 24-bit recording systems and clarifies some common mis-conceptions regarding these systems. This tutorial describes the principles of 24-bit recording systems and clarifies some common mis-conceptions regarding these systems. This is a general treatment of the subject and applies to I/O System

More information

A triangulation method for determining the perceptual center of the head for auditory stimuli

A triangulation method for determining the perceptual center of the head for auditory stimuli A triangulation method for determining the perceptual center of the head for auditory stimuli PACS REFERENCE: 43.66.Qp Brungart, Douglas 1 ; Neelon, Michael 2 ; Kordik, Alexander 3 ; Simpson, Brian 4 1

More information

Objectives. Abstract. This PRO Lesson will examine the Fast Fourier Transformation (FFT) as follows:

Objectives. Abstract. This PRO Lesson will examine the Fast Fourier Transformation (FFT) as follows: : FFT Fast Fourier Transform This PRO Lesson details hardware and software setup of the BSL PRO software to examine the Fast Fourier Transform. All data collection and analysis is done via the BIOPAC MP35

More information

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE Copyright SFA - InterNoise 2000 1 inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering 27-30 August 2000, Nice, FRANCE I-INCE Classification: 7.2 MICROPHONE ARRAY

More information

Intensity Discrimination and Binaural Interaction

Intensity Discrimination and Binaural Interaction Technical University of Denmark Intensity Discrimination and Binaural Interaction 2 nd semester project DTU Electrical Engineering Acoustic Technology Spring semester 2008 Group 5 Troels Schmidt Lindgreen

More information

BIOLOGICALLY INSPIRED BINAURAL ANALOGUE SIGNAL PROCESSING

BIOLOGICALLY INSPIRED BINAURAL ANALOGUE SIGNAL PROCESSING Brain Inspired Cognitive Systems August 29 September 1, 2004 University of Stirling, Scotland, UK BIOLOGICALLY INSPIRED BINAURAL ANALOGUE SIGNAL PROCESSING Natasha Chia and Steve Collins University of

More information

Theory of Telecommunications Networks

Theory of Telecommunications Networks Theory of Telecommunications Networks Anton Čižmár Ján Papaj Department of electronics and multimedia telecommunications CONTENTS Preface... 5 1 Introduction... 6 1.1 Mathematical models for communication

More information

Response spectrum Time history Power Spectral Density, PSD

Response spectrum Time history Power Spectral Density, PSD A description is given of one way to implement an earthquake test where the test severities are specified by time histories. The test is done by using a biaxial computer aided servohydraulic test rig.

More information

Module 5. DC to AC Converters. Version 2 EE IIT, Kharagpur 1

Module 5. DC to AC Converters. Version 2 EE IIT, Kharagpur 1 Module 5 DC to AC Converters Version 2 EE IIT, Kharagpur 1 Lesson 37 Sine PWM and its Realization Version 2 EE IIT, Kharagpur 2 After completion of this lesson, the reader shall be able to: 1. Explain

More information

Chapter 5. Signal Analysis. 5.1 Denoising fiber optic sensor signal

Chapter 5. Signal Analysis. 5.1 Denoising fiber optic sensor signal Chapter 5 Signal Analysis 5.1 Denoising fiber optic sensor signal We first perform wavelet-based denoising on fiber optic sensor signals. Examine the fiber optic signal data (see Appendix B). Across all

More information

Electronic Noise Effects on Fundamental Lamb-Mode Acoustic Emission Signal Arrival Times Determined Using Wavelet Transform Results

Electronic Noise Effects on Fundamental Lamb-Mode Acoustic Emission Signal Arrival Times Determined Using Wavelet Transform Results DGZfP-Proceedings BB 9-CD Lecture 62 EWGAE 24 Electronic Noise Effects on Fundamental Lamb-Mode Acoustic Emission Signal Arrival Times Determined Using Wavelet Transform Results Marvin A. Hamstad University

More information

Upper hemisphere sound localization using head-related transfer functions in the median plane and interaural differences

Upper hemisphere sound localization using head-related transfer functions in the median plane and interaural differences Acoust. Sci. & Tech. 24, 5 (23) PAPER Upper hemisphere sound localization using head-related transfer functions in the median plane and interaural differences Masayuki Morimoto 1;, Kazuhiro Iida 2;y and

More information

CHAPTER 2 MICROSTRIP REFLECTARRAY ANTENNA AND PERFORMANCE EVALUATION

CHAPTER 2 MICROSTRIP REFLECTARRAY ANTENNA AND PERFORMANCE EVALUATION 43 CHAPTER 2 MICROSTRIP REFLECTARRAY ANTENNA AND PERFORMANCE EVALUATION 2.1 INTRODUCTION This work begins with design of reflectarrays with conventional patches as unit cells for operation at Ku Band in

More information

Overview of Code Excited Linear Predictive Coder

Overview of Code Excited Linear Predictive Coder Overview of Code Excited Linear Predictive Coder Minal Mulye 1, Sonal Jagtap 2 1 PG Student, 2 Assistant Professor, Department of E&TC, Smt. Kashibai Navale College of Engg, Pune, India Abstract Advances

More information

CHAPTER 2 FIR ARCHITECTURE FOR THE FILTER BANK OF SPEECH PROCESSOR

CHAPTER 2 FIR ARCHITECTURE FOR THE FILTER BANK OF SPEECH PROCESSOR 22 CHAPTER 2 FIR ARCHITECTURE FOR THE FILTER BANK OF SPEECH PROCESSOR 2.1 INTRODUCTION A CI is a device that can provide a sense of sound to people who are deaf or profoundly hearing-impaired. Filters

More information

Development of multichannel single-unit microphone using shotgun microphone array

Development of multichannel single-unit microphone using shotgun microphone array PROCEEDINGS of the 22 nd International Congress on Acoustics Electroacoustics and Audio Engineering: Paper ICA2016-155 Development of multichannel single-unit microphone using shotgun microphone array

More information

Block diagram of proposed general approach to automatic reduction of speech wave to lowinformation-rate signals.

Block diagram of proposed general approach to automatic reduction of speech wave to lowinformation-rate signals. XIV. SPEECH COMMUNICATION Prof. M. Halle G. W. Hughes J. M. Heinz Prof. K. N. Stevens Jane B. Arnold C. I. Malme Dr. T. T. Sandel P. T. Brady F. Poza C. G. Bell O. Fujimura G. Rosen A. AUTOMATIC RESOLUTION

More information

(i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters

(i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters FIR Filter Design Chapter Intended Learning Outcomes: (i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters (ii) Ability to design linear-phase FIR filters according

More information

REDUCING THE NEGATIVE EFFECTS OF EAR-CANAL OCCLUSION. Samuel S. Job

REDUCING THE NEGATIVE EFFECTS OF EAR-CANAL OCCLUSION. Samuel S. Job REDUCING THE NEGATIVE EFFECTS OF EAR-CANAL OCCLUSION Samuel S. Job Department of Electrical and Computer Engineering Brigham Young University Provo, UT 84602 Abstract The negative effects of ear-canal

More information

Processor Setting Fundamentals -or- What Is the Crossover Point?

Processor Setting Fundamentals -or- What Is the Crossover Point? The Law of Physics / The Art of Listening Processor Setting Fundamentals -or- What Is the Crossover Point? Nathan Butler Design Engineer, EAW There are many misconceptions about what a crossover is, and

More information

Audio Engineering Society. Convention Paper. Presented at the 117th Convention 2004 October San Francisco, CA, USA

Audio Engineering Society. Convention Paper. Presented at the 117th Convention 2004 October San Francisco, CA, USA Audio Engineering Society Convention Paper Presented at the 117th Convention 004 October 8 31 San Francisco, CA, USA This convention paper has been reproduced from the author's advance manuscript, without

More information

SAMPLING THEORY. Representing continuous signals with discrete numbers

SAMPLING THEORY. Representing continuous signals with discrete numbers SAMPLING THEORY Representing continuous signals with discrete numbers Roger B. Dannenberg Professor of Computer Science, Art, and Music Carnegie Mellon University ICM Week 3 Copyright 2002-2013 by Roger

More information

Estimating the Properties of DWDM Filters Before Designing and Their Error Sensitivity and Compensation Effects in Production

Estimating the Properties of DWDM Filters Before Designing and Their Error Sensitivity and Compensation Effects in Production Estimating the Properties of DWDM Filters Before Designing and Their Error Sensitivity and Compensation Effects in Production R.R. Willey, Willey Optical Consultants, Charlevoix, MI Key Words: Narrow band

More information

(i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters

(i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters FIR Filter Design Chapter Intended Learning Outcomes: (i) Understanding of the characteristics of linear-phase finite impulse response (FIR) filters (ii) Ability to design linear-phase FIR filters according

More information

4.5 Fractional Delay Operations with Allpass Filters

4.5 Fractional Delay Operations with Allpass Filters 158 Discrete-Time Modeling of Acoustic Tubes Using Fractional Delay Filters 4.5 Fractional Delay Operations with Allpass Filters The previous sections of this chapter have concentrated on the FIR implementation

More information