COM325 Computer Speech and Hearing

COM325 Computer Speech and Hearing
Part III: Theories and Models of Pitch Perception
Dr. Guy Brown, Room 145 Regent Court, Department of Computer Science, University of Sheffield
Email: g.brown@dcs.shef.ac.uk
http://www.dcs.shef.ac.uk/~guy/hearing

SLIDE 1

1. Introduction

Definition: "Pitch is that attribute of auditory sensation in terms of which sounds may be ordered on a musical scale" (American Standards Association). Pitch is related to the repetition rate of a signal. The repetition rate of a sinusoidal tone is its frequency, and the repetition rate of a complex tone is its fundamental frequency (F0).

In this part of the course, we will:
- present some theories of pitch perception, which fall into two main classes: timing theories and pattern recognition theories;
- discuss experimentally observed pitch phenomena which support or contradict these theories;
- outline a computational model of pitch perception.

SLIDE 2

2. Why is pitch important?

- In tone languages, pitch variation signifies phonetic or syllabic distinctions. For example, in Shona (spoken in Zimbabwe), kutshera with a low tone means "to draw water", whereas kutshera with a high tone means "to dig" (the low and high tones were indicated by an underscore and an overscore respectively).
- All languages use pitch variation to convey meaning. Try saying "this lecture is really interesting" to express: (i) sarcasm; (ii) incredulity; (iii) agreement.
- Pitch is a cue for determining the number of acoustic sources present in a mixture, and for grouping sound components which originate from a single source.
- Pitch is an important compositional element in music.
- Pitch range is a good cue to speaker gender and, to a lesser extent, age.
- Pitch is a characteristic of non-speech sources that resonate when struck.

SLIDE 3

3. The pitch of a sinusoid

The pitch of a pure tone is related to its frequency, although other factors such as duration and (to a lesser extent) intensity can influence perceived pitch [1]. The smallest change in frequency that can be detected is called the frequency difference limen (DLF). This is measured by presenting listeners with two tones with slightly different frequencies in sequence, and asking which has the higher pitch. The DLF is remarkably small; at 1 kHz the DLF is about 2 Hz (i.e., 0.2%).

Audio demo: dependence of pitch on duration. You will hear tones of 300, 1000 and 3000 Hz in bursts of 1, 2, 4, 8, 16, 32, 64 and 128 periods. The percept changes from a click to a tone. Note whether or not you hear a pitch for each condition.

Q. How many periods were necessary to establish a sense of pitch?

SLIDE 4
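
A minimal MATLAB sketch of this demonstration (my own illustration using only base MATLAB, not the course's demo code):

% Pitch-vs-duration demo: tone bursts of 1..128 periods at three frequencies.
fs = 16000;                              % sampling rate in Hz
for f = [300 1000 3000]                  % tone frequencies
    for nperiods = [1 2 4 8 16 32 64 128]
        dur = nperiods / f;              % burst duration in seconds
        t = 0 : 1/fs : dur;              % time axis
        burst = sin(2*pi*f*t);           % sinusoidal burst
        soundsc(burst, fs);              % listen: click or tone?
        pause(0.5);                      % short gap between bursts
    end
end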

3.1. Relationship between frequency and pitch

The unit of pitch is the mel (from "melody"). The pitch of a 1000 Hz tone is arbitrarily set at 1000 mels. The relationship between pitch (in mels) and frequency is derived by asking listeners to adjust the frequency of a tone so that it has half the pitch of a reference tone of equal loudness. The growth of perceived pitch is less rapid than the change in frequency (the same was true of the relationship between loudness and intensity).

SLIDE 5
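
The slide defines the mel scale empirically, by pitch halving. For reference, a widely used analytic approximation (an addition here, not part of the original slide, and only one of several published fits) is m = 2595 log10(1 + f/700) mels. A quick MATLAB check:

% Common analytic approximation to the mel scale (added for illustration).
f = 0:10:8000;                       % frequency in Hz
m = 2595 * log10(1 + f/700);         % pitch in mels; roughly 1000 mels at 1000 Hz
plot(f, m); xlabel('Frequency (Hz)'); ylabel('Pitch (mels)');
% Note that m grows more slowly than f, as stated on the slide.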

3.2. Pattern recognition and timing theories

The two main pitch theories - pattern recognition and timing theories - are inspired by the place and timing mechanisms of frequency coding in the auditory nerve. Theories of pitch require more than an explanation of how frequency is coded - they must also describe how a pitch percept is computed from neural signals. Hence, the pattern recognition and timing theories propose differing accounts of processing beyond the auditory nerve.

[Figure: two panels of auditory nerve activity (channel number vs. time). Timing: intervals between phase-locked spikes in the auditory nerve. Place: position of maximum displacement on the basilar membrane.]

SLIDE 6

3.3. Frequency coding by place

Pitch may be coded by the position of the peak in the auditory excitation pattern. However, excitation patterns may hardly differ at their peaks; the figure shows the auditory response to tones of 1000 Hz and 1005 Hz (a frequency difference greater than the DLF). Place coding predicts that the DLF should vary in the same way as the critical bandwidth: discrimination should be good at low frequencies where the bandwidth is narrow, and poor at high frequencies where the bandwidth is wide. This is not a perfect fit to the data - some other mechanism is involved.

[Figure: firing rate (spikes/sec) as a function of channel centre frequency (400-1600 Hz) for 1000 Hz and 1005 Hz tones.]

SLIDE 7

3.4. Frequency coding by timing

Timing is preserved by phase-locking in the auditory nerve. A range of fibres may be phase-locked to the same tone, since fibre centre frequencies overlap continuously. However, auditory nerve fibres cannot fire more than a few hundred times per second. How are frequencies above this rate coded? A fibre need not fire on every cycle: if fibres fire every n cycles, intervals accumulate at multiples of the tone period.

[Figure: spike trains for fibres firing every cycle, every 3 cycles and every 5 cycles, over 0-15 ms.]

See the diagram - for a 1 kHz tone, we get intervals at 1 ms, 2 ms, 3 ms and so on. A process which looked for the greatest common divisor of these intervals would correctly identify the frequency as 1 kHz.

SLIDE 8
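
A toy MATLAB sketch of the greatest-common-divisor idea, using hypothetical interspike intervals rather than real auditory nerve data (my own illustration of the reasoning on the slide):

% Toy interval/GCD illustration with hypothetical intervals in ms.
intervals_ms = [1 2 3 5 2 1 3];          % interspike intervals pooled over fibres
% Work in microseconds so gcd() can operate on integers.
g = round(intervals_ms(1) * 1000);
for k = 2:numel(intervals_ms)
    g = gcd(g, round(intervals_ms(k) * 1000));
end
period_ms = g / 1000;                    % common divisor = tone period (1 ms)
freq_hz = 1000 / period_ms;              % implied frequency (1000 Hz)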

3.5. Place coding vs. timing coding

The timing theory can account for small DLFs if we assume that variability in the timing of spikes is reduced by averaging over many fibres. Phase locking is only maintained in the auditory nerve up to about 4 kHz; above this frequency the DLF increases considerably. Timing mechanisms are also necessary to explain pitch perception for very short tones, which would generate a blurred place representation. So it is likely that the auditory system uses both timing and place coding: plausibly, timing mechanisms dominate up to 4 kHz and place mechanisms dominate thereafter.

SLIDE 9

4. The pitch of complex sounds

Place theories fall down badly for complex sounds. The classic demonstration of this is the missing fundamental. Consider a harmonic series with an F0 of f Hz; that is, the stimulus consists of a series of pure tones with frequencies nf Hz, where n is 1, 2, 3, 4 and so on. This sound has a pitch corresponding to the fundamental frequency (F0). Now suppose the component at F0 is removed: listeners still hear a pitch at f Hz. Since there is no energy at the F0, this phenomenon is known as virtual pitch.

[Figure: two amplitude spectra with components at f, 2f, ... 6f. Left: complete series, pitch heard at f Hz. Right: component at f removed, pitch still heard at f Hz.]

Q. Why does this experiment present problems for theories of pitch based on place coding?

SLIDE 10
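
A minimal MATLAB sketch of the missing-fundamental stimulus (my own illustration, assuming only base MATLAB): synthesize a 200 Hz harmonic complex, then the same complex with the fundamental removed, and compare by ear.

% Missing fundamental: a 200 Hz complex with and without its first harmonic.
fs = 16000; dur = 1.0; f0 = 200;
t = 0 : 1/fs : dur;
full = zeros(size(t));
for n = 1:10                                   % harmonics 1..10
    full = full + sin(2*pi*n*f0*t);
end
nofund = full - sin(2*pi*f0*t);                % remove the component at F0
soundsc(full, fs);  pause(1.5);
soundsc(nofund, fs);                           % a pitch is still heard at 200 Hz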

Audio demo: the missing fundamental

You'll hear a complex tone with a fundamental frequency of 200 Hz, consisting of 10 harmonics. First the complex is presented complete, then without the fundamental, then without the lowest two harmonics, and so on.

Q. Did the pitch of the complex change?
Q. The bandwidth of telephone speech is approximately 300 Hz to 3 kHz. Comments?

SLIDE 11

4.1. Pattern recognition theories of pitch perception

How can virtual pitch arise? Perhaps the auditory system uses the whole excitation pattern to compute the pitch; it might hypothesize a range of pitches, and find the one with the best fit to the harmonics in the excitation pattern. Such pattern recognition models of pitch are not dependent on place theories of coding (e.g., the pattern that is presented to the pitch mechanism may have been derived from spike intervals). What distinguishes pattern recognition models from other models is the assumption that the pattern contains resolved harmonics.

SLIDE 12

4.2. Resolved and unresolved harmonics

A resolved harmonic is represented as a separate peak of activity at its frequency. If two harmonics lie within the same critical bandwidth, they are not separately resolved in the output of the auditory filter array. Since critical bandwidths are narrow at low frequencies and wide at high frequencies, we see resolved harmonics at low frequencies and unresolved harmonics at high frequencies. The precise point at which harmonics become unresolved depends on the fundamental frequency of the stimulus. For example, if the fundamental is 200 Hz, harmonics will become unresolved at the frequency at which the critical bandwidth exceeds 200 Hz (at around 1500 Hz).

[Figure: schematic spectra and auditory filters. Low frequency: narrow critical bands, resolved harmonics. High frequency: wider critical bands, unresolved harmonics.]

SLIDE 13
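
A short MATLAB sketch of this calculation, assuming the Glasberg and Moore (1990) ERB formula as the measure of auditory filter bandwidth (the slide does not specify a formula, so treat this as an added assumption): it finds the frequency at which the bandwidth first exceeds a 200 Hz harmonic spacing.

% Where do 200 Hz-spaced harmonics become unresolved? Uses the Glasberg &
% Moore (1990) ERB formula as the bandwidth estimate (an assumption added
% for illustration; the slide itself quotes no formula).
f0 = 200;                                  % fundamental / harmonic spacing in Hz
f = 100:10:5000;                           % candidate centre frequencies in Hz
erb = 24.7 * (4.37 * f / 1000 + 1);        % ERB in Hz at each centre frequency
idx = find(erb > f0, 1, 'first');
fprintf('Bandwidth exceeds %.0f Hz at about %.0f Hz\n', f0, f(idx));
% Gives roughly 1.6 kHz, broadly in line with the slide's "around 1500 Hz"
% (critical band and ERB estimates differ somewhat).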

4.3. Timing theories of pitch perception

Pure timing theories propose that pitch results from unresolved harmonics. The response in mid- and high-frequency regions of the auditory filter array is amplitude modulated.

[Figure: output waveforms of two auditory filters, CF = 100 Hz (resolved) and CF = 2 kHz (unresolved), over 0-40 ms.]

The figure shows the response of two auditory filters to a harmonic complex with an F0 of 100 Hz. The output of the filter with CF = 100 Hz is a single resolved harmonic, but when CF = 2 kHz several harmonics interact in the same filter. The time between pulses in the 2 kHz channel is 10 ms, which corresponds to the F0; so amplitude modulation could provide a cue to pitch.

SLIDE 14
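
A rough MATLAB sketch of the "unresolved channel" effect. A Butterworth bandpass filter centred on 2 kHz stands in for an auditory filter here (an assumption for illustration; requires the Signal Processing Toolbox for butter/filtfilt):

% Envelope of an "unresolved" channel: a 100 Hz harmonic complex passed
% through a ~400 Hz wide band around 2 kHz, where several harmonics interact.
fs = 16000; t = 0 : 1/fs : 0.1; f0 = 100;
x = zeros(size(t));
for n = 1:40
    x = x + sin(2*pi*n*f0*t);                 % harmonic complex, F0 = 100 Hz
end
[b, a] = butter(4, [1800 2200] / (fs/2));     % crude stand-in for an auditory filter
y = filtfilt(b, a, x);
plot(t*1000, y); xlabel('Time (ms)');         % envelope pulses every 10 ms (= 1/F0)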

4.4. Beating

Amplitude modulation occurs because of beating between adjacent harmonics. Adding two tones that are close in frequency produces a waveform which has an AM rate equal to the difference in frequency between the tones.

[Figure: beating between an 18 Hz tone and a 20 Hz tone.]

Audio demo: beats. You will hear two pure tones with frequencies of 1000 Hz and 1004 Hz, first presented separately and then presented together. The sequence is presented twice.

Q. What is the frequency of the beat in this example?

SLIDE 15
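
A minimal MATLAB sketch of the beats demo (my own illustration, not the course's demo code):

% Beats: two tones 4 Hz apart produce an amplitude modulation at 4 Hz.
fs = 16000; t = 0 : 1/fs : 2;
a = sin(2*pi*1000*t);
b = sin(2*pi*1004*t);
soundsc(a, fs); pause(2.5);
soundsc(b, fs); pause(2.5);
soundsc(a + b, fs);          % audible beating at 1004 - 1000 = 4 Hz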

4.5. Pattern recognition theories vs. timing theories

[Flow chart: the signal passes through the auditory periphery and then follows one of two routes. Pattern recognition theories: frequency analysis of resolved harmonics, followed by calculation of the best-fitting fundamental. Timing theories: timing analysis of unresolved harmonics, followed by calculation of the most frequent interval. Both routes produce a pitch estimate.]

SLIDE 16

4.6. Challenges for theories of pitch perception

- Pitch of resolved harmonics only. It is possible to perceive a pitch based only on resolved harmonics, i.e. when there is no possibility of interaction between components.
- Pitch of unresolved harmonics only. It is possible to perceive a pitch based only on unresolved harmonics.
- Dominance. The 3rd, 4th and 5th harmonics tend to dominate the pitch percept.
- Mistuned harmonics. If a single component of a harmonic complex is mistuned so that its frequency is not an exact multiple of the F0, it can be heard as a separate tone.

Q. Do the above findings support the pattern recognition theory or the timing theory?

SLIDE 17

5. A computational model of pitch perception

Many models of pitch perception have been proposed, but we'll concentrate on one: the correlogram [2]. See [3] and [4] for other models. This model performs an autocorrelation on the output of each channel of an auditory model, defined as:

    acg(\tau) = \sum_{t=1}^{N} x(t) \, x(t - \tau)

where x(t) is the signal and N is the window length over which the autocorrelation is computed. The parameter \tau is the autocorrelation delay (lag). You should recognise this as the correlation of x(t) with itself (equivalently, the convolution of x(t) with a time-reversed copy of itself). The autocorrelation has a maximum at zero lag, and for a signal with period p it attains its next maximum at a lag of p. It also has peaks at lags of 2p, 3p, 4p and so on. Summing the autocorrelation functions across channels gives a pooled autocorrelation; the biggest peak (away from zero lag) in the pooled function occurs at the pitch period.

SLIDE 18
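
A bare-bones MATLAB sketch of the autocorrelation step, applied directly to a waveform rather than to auditory filter outputs (so it is only the core equation above, not the full correlogram model):

% Autocorrelation-based pitch period estimate applied directly to the waveform.
fs = 16000; f0 = 100; t = 0 : 1/fs : 0.05;
x = sin(2*pi*f0*t) + sin(2*pi*2*f0*t) + sin(2*pi*3*f0*t);   % harmonic complex
N = numel(x);
minlag = round(0.002 * fs);                   % ignore lags below 2 ms (i.e. F0 > 500 Hz)
maxlag = round(0.020 * fs);                   % search lags up to 20 ms (F0 >= 50 Hz)
acg = zeros(1, maxlag);
for lag = minlag:maxlag
    acg(lag) = sum(x(lag+1:N) .* x(1:N-lag)); % acg(tau) = sum_t x(t) x(t - tau)
end
[~, bestlag] = max(acg);                      % biggest peak away from zero lag
fprintf('Estimated pitch period: %.2f ms (%.0f Hz)\n', 1000*bestlag/fs, fs/bestlag);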

5.1. Computing a correlogram

[Figure: auditory nerve response (channel number vs. time) and the corresponding correlogram (channel number vs. autocorrelation delay) for a harmonic complex with F0 = 100 Hz, together with the summary correlogram.]

The summary correlogram shows a peak at a lag of 10 ms, indicating that this is the pitch period. Note the duplicate peaks at 20 ms and 30 ms.

SLIDE 19

5.2. Why does the correlogram work?

Channels that are responding to a particular frequency component show a peak in the autocorrelation function at the period of that frequency, and also at multiples of it. For example, consider the first four harmonics of a 200 Hz fundamental:

  Harmonic frequency [Hz]   Time lags at which an autocorrelation peak occurs [ms]
  200                       5.0, 10.0, 15.0, 20.0, 25.0
  400                       2.5, 5.0, 7.5, 10.0, 12.5
  600                       1.66, 3.33, 5.0, 6.66, 8.33
  800                       1.25, 2.5, 3.75, 5.0, 6.25

Each channel has a peak at 5 ms (the period of the 200 Hz fundamental). Higher channels also have a peak at this period because they beat at a frequency corresponding to the difference between adjacent harmonics (also 200 Hz).

Q. Is the correlogram related to the timing theory or the pattern recognition theory of pitch perception (or both)?

SLIDE 20
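
A quick MATLAB check of the table (just arithmetic, not part of the original slide): the lags at which each harmonic's autocorrelation peaks occur are the first few multiples of its period.

% Lags (ms) of the first five autocorrelation peaks for each harmonic of 200 Hz.
freqs = [200 400 600 800];                 % harmonic frequencies in Hz
lags_ms = (1:5)' * (1000 ./ freqs);        % column k = multiples of the k-th period
disp(lags_ms)                              % every column contains a peak at 5 ms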

5.3. Explaining pitch phenomena

The figures show that the correlogram can account for the missing fundamental (B, D), pitch of resolved harmonics only (A, C), pitch of unresolved harmonics only (B, D) and dominance (C, D). Both signals have F0 = 100 Hz.

[Figure: A - correlogram of a resolved complex (100 Hz + 200 Hz + 300 Hz); B - correlogram of an unresolved complex (2100 Hz + 2200 Hz + 2300 Hz); C and D - the corresponding summary correlograms, plotted against autocorrelation delay (0-20 ms).]

SLIDE 21

5.4. The pooled autocorrelation function and pitch strength

The height of the peak in the pooled autocorrelation function can be interpreted as a measure of pitch strength. Iterated ripple noise (IRN) is created by adding a time-delayed copy of a random noise signal back to itself [5]. A weak pitch is apparent for one iteration, becoming more salient as the number of iterations is increased. The correlogram shows the right pattern; the examples are for IRN with a delay of 10 ms.

[Figure: block diagram of IRN generation (noise in, delay z^-n, added back, IRN out), and summary correlograms for IRN with 1 iteration and with 10 iterations.]

SLIDE 22
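
A minimal MATLAB sketch of IRN generation as described above (delay-and-add, repeated; my own reading of the slide and of [5], not the course's irn function):

% Iterated ripple noise: repeatedly add a delayed copy of the signal to itself.
fs = 16000; dur = 1.0;
delay_ms = 10;                               % delay of 10 ms, as in the slide
d = round(delay_ms/1000 * fs);               % delay in samples
niter = 10;                                  % try 1 vs. 10 iterations
x = randn(1, round(dur*fs));                 % white noise input
for k = 1:niter
    x = x + [zeros(1, d), x(1:end-d)];       % add a copy delayed by d samples
end
soundsc(x, fs);                              % pitch at 1/delay = 100 Hz, stronger with more iterations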

5.5. What is a good model of pitch perception?

Good models of pitch perception should not only perform as well as humans, but they should make the same mistakes too.

Audio demo: circularity in pitch judgment. This demonstration uses a cycle of complex tones, each composed of 10 partials separated by octave intervals. The partial amplitudes are windowed with a raised cosine (shown in the figure as decibels against log frequency). Moving the frequencies of the partials upwards in steps results in an ever-ascending scale, which is an acoustic analogue of Escher's staircase visual illusion.

[Figure: raised cosine spectral envelope, decibels vs. log frequency.]

SLIDE 23
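
A rough MATLAB sketch of an ever-ascending (Shepard-style) scale in the spirit of this demo. This is my own construction, not the demo's code: the partial positions and the raised cosine weighting are illustrative choices, and for simplicity the envelope is applied to linear amplitude rather than in decibels.

% Ever-ascending scale: 10 octave-spaced partials under a fixed raised cosine
% envelope on a log-frequency axis; after 12 semitone steps the spectrum repeats.
fs = 44100; dur = 0.4; t = 0 : 1/fs : dur;
fmin = 16;                                   % base of the 10-octave range (arbitrary choice)
nparts = 10;
for step = 0:36                              % semitone steps; the scale never "arrives"
    x = zeros(size(t));
    for k = 0:nparts-1
        p = mod(step/12 + k, nparts);        % position in octaves above fmin (wraps around)
        f = fmin * 2^p;                      % partials remain octave-spaced
        w = 0.5 - 0.5*cos(2*pi*p/nparts);    % raised cosine envelope over log frequency
        x = x + w * sin(2*pi*f*t);
    end
    soundsc(x, fs); pause(dur + 0.1);
end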

6. Summary

- Pitch is largely determined by the repetition rate of a signal (frequency for pure tones, fundamental frequency for complex sounds).
- Theories of pitch are influenced by two possible mechanisms of frequency coding in the cochlea: timing and place.
- Simple place-based theories of pitch cannot explain the pitch of complex sounds (e.g., the missing fundamental).
- Timing theories rely on beating in frequency regions where individual harmonics are not resolved. Pattern recognition theories rely on resolved harmonics.
- Neither theory fits all of the data - it is likely that the auditory system uses both mechanisms.
- A computational model of pitch perception which combines periodicities in resolved and unresolved harmonic regions can account for the majority of psychophysical pitch phenomena.

SLIDE 24

7. References

[1] B.C.J. Moore (1989) An Introduction to the Psychology of Hearing. Academic Press.
[2] M. Slaney & R. Lyon (1993) On the importance of time - a temporal representation of sound. In: Visual Representations of Speech Signals, Ed. Cooke, Beet and Crawford. Wiley.
[3] D. Hermes (1993) Pitch analysis. In: Visual Representations of Speech Signals, Ed. Cooke, Beet and Crawford. Wiley.
[4] W. Hess (1983) Pitch Determination of Speech Signals. Springer.
[5] W.A. Yost, R.A. Patterson & S. Sheft (1996) A time domain description for the pitch strength of iterated rippled noise. Journal of the Acoustical Society of America, 99, pp. 1066-1078.

SLIDE 25

Tutorial questions

1. Run the MAD demonstration called auto, which illustrates fundamental frequency analysis by applying autocorrelation directly to the signal waveform (i.e., no auditory filters are involved). Answer the tutorial questions associated with this demo.
2. Run the MAD detuning demonstration, which demonstrates the effect of mistuning a harmonic on the pitch of a complex tone (see slide 17).
3. Play with the MAD demonstration called vowelexplorer. This allows you to generate a mixture of two vowel sounds, and to see the corresponding basilar membrane response and correlogram. When you can clearly hear the pitch of each vowel, do you see two clear peaks in the pooled correlogram function?
4. Use the MATLAB function irn to generate iterated ripple noise (IRN). Write a program that uses a loop to generate IRN with the same delay but with the number of iterations varying between 0 and 20. Play each signal using soundsc - does the pitch become more salient as the number of iterations is increased?
5. Write a MATLAB function that measures the frequency difference limen (DLF). Your function should present the listener with a reference tone of fixed frequency, followed by another tone whose frequency is slightly above or below that of the reference. Your function should ask the listener to indicate whether the second tone was lower or higher than the first, and record the results for several trials. Use the tone function.

SLIDE 26