CS 188: Artificial Intelligence, Spring 2006: Speech in an Hour


CS 188: Artificial Intelligence, Spring 2006
Lecture 19: Speech Recognition, 3/23/2006
Dan Klein, UC Berkeley (many slides from Dan Jurafsky)

Speech in an Hour

Speech input is an acoustic wave form: s p ee ch l a b. Note the "l to a" transition. Graphs from Simon Arnfield's web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/

Frequency gives pitch; amplitude gives volume. Typical sampling rates: ~8 kHz for telephone speech, ~16 kHz for microphone speech (kHz = 1000 cycles/sec).

Spectral Analysis

The Fourier transform of the wave is displayed as a spectrogram: darkness indicates energy at each frequency (here for "s p ee ch l a b").

Acoustic Feature Sequence

Time slices are translated into acoustic feature vectors (~39 real numbers per slice): .. a12 a13 a12 a14 a14 .. Now we have to figure out a mapping from sequences of acoustic observations to words.
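The wave-to-spectrogram step above can be sketched with NumPy: window the signal into short time slices, Fourier-transform each slice, and keep the magnitudes. The 25 ms frame and 10 ms hop below are common choices, not values fixed by the slides:

```python
import numpy as np

def spectrogram(signal, sample_rate=16000, frame_ms=25, hop_ms=10):
    """Short-time Fourier transform magnitudes: one spectrum per time slice."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop = int(sample_rate * hop_ms / 1000)
    window = np.hamming(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # Magnitude of the FFT of each windowed frame (positive frequencies only)
    return np.array([np.abs(np.fft.rfft(f)) for f in frames])

# 0.5 s of a 1000 Hz tone: the energy concentrates in the 1000 Hz bin
t = np.arange(0, 0.5, 1 / 16000)
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
```

Each row of `spec` is one time slice; plotting the rows as columns with darkness proportional to magnitude gives the spectrogram picture.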

The Speech Recognition Problem

We want to predict a sentence given an acoustic sequence:

    s* = argmax_s P(s | A)

The noisy-channel approach: build a generative model of production (encoding):

    P(A, s) = P(s) P(A | s)

To decode, we use Bayes' rule to write:

    s* = argmax_s P(s | A)
       = argmax_s P(s) P(A | s) / P(A)
       = argmax_s P(s) P(A | s)

Now we have to find a sentence maximizing this product. Why is this progress?

Other Noisy-Channel Processes

- Handwriting recognition: P(text | strokes) ∝ P(text) P(strokes | text)
- OCR: P(text | pixels) ∝ P(text) P(pixels | text)
- Spelling correction: P(text | typos) ∝ P(text) P(typos | text)
- Translation? P(english | french) ∝ P(english) P(french | english)
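A toy sketch of noisy-channel decoding: the candidate sentences and probabilities below are invented for illustration; the only point is the argmax of P(s) P(A | s), computed in log space:

```python
import math

# Toy noisy-channel decoder: pick the sentence maximizing P(s) * P(A | s).
# Both tables below are made-up numbers, standing in for a language model
# and an acoustic model respectively.
prior = {"recognize speech": 0.6, "wreck a nice beach": 0.4}         # P(s)
likelihood = {"recognize speech": 1e-4, "wreck a nice beach": 2e-5}  # P(A | s)

def decode(candidates):
    # argmax in log space avoids numerical underflow on long sentences
    return max(candidates,
               key=lambda s: math.log(prior[s]) + math.log(likelihood[s]))

best = decode(prior)
```

Note that P(A) never needs to be computed: it is the same for every candidate sentence, so it cannot change the argmax.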

Digitizing Speech: "She just had a baby"

What can we learn from a wavefile?
- Vowels are voiced, long, and loud; length in time = length in space in the waveform picture.
- Voicing: regular peaks in amplitude. When stops are closed: no peaks, silence.
- Peaks = voicing: from 0.46 to 0.58 seconds (vowel [iy]), from 0.65 to 0.74 (vowel [ax]), and so on.
- Silence of stop closure: 1.06 to 1.08 for the first [b], 1.26 to 1.28 for the second [b].
- Fricatives like [sh] show an intense irregular pattern; see 0.33 to 0.46.

Examples from Ladefoged: pad, bad, spat.

Simple Periodic Sound Waves

[Figure: a simple periodic waveform, amplitude -0.99 to 0.99, over 0.02 s.]

- Y axis: amplitude = amount of air pressure at that point in time. Zero is normal air pressure; negative is rarefaction.
- X axis: time.
- Frequency = number of cycles per second; frequency = 1/period. 20 cycles in 0.02 seconds = 1000 cycles/second = 1000 Hz.

Adding 100 Hz + 1000 Hz Waves

[Figure: the sum of a 100 Hz and a 1000 Hz sine wave, shown over 0.05 s.]

Spectrum

A spectrum shows the frequency components (here 100 and 1000 Hz) on the x-axis and their amplitude on the y-axis.
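The figure's construction can be reproduced directly: sum two sines and read the component frequencies back off the magnitude spectrum. The sample rate and amplitudes below are illustrative choices:

```python
import numpy as np

# Sum a 100 Hz and a 1000 Hz sine, then recover both components via the FFT.
fs = 8000                       # sample rate in Hz (an assumed value)
t = np.arange(0, 0.05, 1 / fs)  # 0.05 s of signal, as in the figure
wave = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

mags = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), 1 / fs)
# The two largest spectral peaks sit exactly at the component frequencies
peaks = sorted(freqs[np.argsort(mags)[-2:]])
```

Because both frequencies complete a whole number of cycles in the 0.05 s window, each lands exactly on one FFT bin and the spectrum shows two clean peaks.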

Part of [ae] from "had"

- Note the complex wave repeating nine times in the figure.
- Plus smaller waves which repeat 4 times for every large pattern.
- The large wave has a frequency of 250 Hz (9 times in 0.036 seconds).
- The small wave is roughly 4 times this, or roughly 1000 Hz.
- Two little tiny waves ride on top of each peak of the 1000 Hz waves.

Back to Spectra

- A spectrum represents these frequency components.
- It is computed by the Fourier transform, an algorithm which separates out each frequency component of a wave.
- The x-axis shows frequency; the y-axis shows magnitude (in decibels, a log measure of amplitude).
- Peaks at 930 Hz, 1860 Hz, and 3020 Hz.

Mel Frequency Cepstral Coefficients

- Do an FFT to get spectral information, like the spectrogram/spectrum we saw earlier.
- Apply Mel scaling: linear below 1 kHz, log above, with equal numbers of samples above and below 1 kHz. This models the human ear, which has more sensitivity at lower frequencies.
- Plus a Discrete Cosine Transformation.

Final Feature Vector

39 (real) features per 10 ms frame:
- 12 MFCC features
- 12 Delta MFCC features
- 12 Delta-Delta MFCC features
- 1 (log) frame energy
- 1 Delta (log) frame energy
- 1 Delta-Delta (log) frame energy

So each frame is represented by a 39D vector. For your projects: we'll just use two frequencies, the first two formants.
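Assembling the 39-dimensional vector from the 13 base features can be sketched as below. For simplicity the deltas here are first differences; real recognizers usually estimate them by linear regression over several neighboring frames:

```python
import numpy as np

def add_deltas(features):
    """Stack features with their deltas and delta-deltas along the feature axis."""
    # First differences, padded so the output keeps one row per frame
    delta = np.diff(features, axis=0, prepend=features[:1])
    delta2 = np.diff(delta, axis=0, prepend=delta[:1])
    return np.hstack([features, delta, delta2])

# Stand-in data: 100 frames of 12 MFCCs + 1 log energy (13 base features)
frames = np.random.randn(100, 13)
vectors = add_deltas(frames)  # 13 + 13 + 13 = 39 features per frame
```

The deltas capture how the spectrum is changing from frame to frame, which helps distinguish sounds whose static spectra look similar.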

Why These Peaks?

Articulatory facts:
- Vocal cord vibrations create harmonics.
- The mouth is a selective amplifier: depending on the shape of the mouth, some harmonics are amplified more than others.

[Figure: the vowel [i] sung at successively higher pitches, panels 1-7. Figures from Ratree Wayland's slides, from her website.]

Deriving Schwa

Reminder of basic facts about sound waves: f = c/λ, where c = speed of sound (approx. 35,000 cm/sec).
- A sound with λ = 10 meters (1,000 cm): f = 35,000/1,000 = 35 Hz.
- A sound with λ = 2 centimeters: f = 35,000/2 = 17,500 Hz.

Resonances of the Vocal Tract

The human vocal tract can be modeled as an open tube: closed at the glottal end, open at the lip end, length 17.5 cm. Air in a tube of a given length will tend to vibrate at the resonance frequencies of the tube. Constraint: the pressure differential should be maximal at the (closed) glottal end and minimal at the (open) lip end. Figure from W. Barry's Speech Science slides.

From Sundberg

Computing the 3 Formants of Schwa

Let the length of the tube be L:

    F1 = c/λ1 = c/(4L)       = 35,000/(4 × 17.5)     = 500 Hz
    F2 = c/λ2 = c/((4/3)L) = 3c/(4L) = 3 × 35,000/(4 × 17.5) = 1500 Hz
    F3 = c/λ3 = c/((4/5)L) = 5c/(4L) = 5 × 35,000/(4 × 17.5) = 2500 Hz

So we expect a neutral vowel to have 3 resonances, at 500, 1500, and 2500 Hz. These vowel resonances are called formants.
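The pattern above generalizes: a tube closed at one end resonates at the odd quarter-wavelength frequencies Fn = (2n - 1)c/(4L), which a few lines of code can confirm for the 17.5 cm tract:

```python
# Resonances of a tube closed at one end and open at the other:
# F_n = (2n - 1) * c / (4 * L), the odd quarter-wavelength resonances.
def tube_formants(length_cm, n=3, c=35_000):
    """First n resonance frequencies (Hz) of a closed-open tube of given length."""
    return [(2 * k - 1) * c / (4 * length_cm) for k in range(1, n + 1)]

formants = tube_formants(17.5)  # the neutral (schwa) vocal tract
```

The same function also shows why a shorter tract (e.g. a child's) shifts all formants upward: halving L doubles every Fn.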

Seeing Formants: the Spectrogram

[Figure: spectrograms from Mark Liberman's web site.]

How to Read Spectrograms

- bab: closure of the lips lowers all formants, so there is a rapid increase in all formants at the beginning of "bab".
- dad: the first formant increases, but F2 and F3 fall slightly.
- gag: F2 and F3 come together; this is a characteristic of velars. Formant transitions take longer in velars than in alveolars or labials.

From Ladefoged, A Course in Phonetics.

HMMs for Speech

HMMs for Continuous Observations?

- Before: a discrete, finite set of observations.
- Now: spectral feature vectors are real-valued!
- Solution 1: discretization.
- Solution 2: continuous emission models: Gaussians, multivariate Gaussians, mixtures of multivariate Gaussians.

A state is, progressively:
- a context-independent subphone (~3 per phone)
- a context-dependent phone (= triphones)
- state-tying of CD phones

Viterbi Decoding
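Solution 2 can be sketched with a minimal Viterbi decoder using one-dimensional Gaussian emissions; real recognizers use mixtures of multivariate Gaussians over 39-D vectors, and every parameter below is a toy value:

```python
import math

def log_gauss(x, mean, var):
    # Log density of a 1-D Gaussian: the continuous emission score
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def viterbi(obs, states, log_start, log_trans, means, vars_):
    """Most likely state sequence under Gaussian emissions, in log space."""
    best = {s: log_start[s] + log_gauss(obs[0], means[s], vars_[s]) for s in states}
    back = []  # one backpointer table per time step
    for x in obs[1:]:
        prev = best
        back.append({s: max(states, key=lambda p, s=s: prev[p] + log_trans[p][s])
                     for s in states})
        best = {s: prev[back[-1][s]] + log_trans[back[-1][s]][s]
                   + log_gauss(x, means[s], vars_[s])
                for s in states}
    path = [max(best, key=best.get)]
    for ptr in reversed(back):          # follow backpointers to recover the path
        path.append(ptr[path[-1]])
    return path[::-1]

states = ["low", "high"]
path = viterbi(
    obs=[0.1, 0.2, 3.9, 4.1],
    states=states,
    log_start={"low": math.log(0.5), "high": math.log(0.5)},
    log_trans={"low": {"low": math.log(0.8), "high": math.log(0.2)},
               "high": {"low": math.log(0.2), "high": math.log(0.8)}},
    means={"low": 0.0, "high": 4.0},
    vars_={"low": 1.0, "high": 1.0},
)
```

The only change from discrete-observation Viterbi is the emission term: a Gaussian log density replaces a lookup in a finite emission table.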

ASR Lexicon: Markov Models

Viterbi with 2 Words + Uniform LM

There is a null transition from the end-state of each word to the start-state of all (both) words. Figure from Huang et al., page 612.

Markov Process with Unigram LM

Figure from Huang et al., page 617.

Markov Process with Bigrams

Figure from Huang et al., page 618.