Recurrent Neural Networks 1: Modelling sequential data
Steve Renals
Machine Learning Practical
MLP Lecture 9, 15 November 2017 / 20 November 2017

Sequential Data

We often wish to model data that is a sequence or trajectory through time, for instance audio signals, text (sequences of characters/words), currency exchange rates, or the motion of an animal.

Modelling sequential data:
there are invariances across time
the current state depends on the past
we need to share data across time

Convolutional networks model invariances across space. Can we do something similar across time? Yes: time-delay neural networks. Can we use units to act as memories? Yes: recurrent networks.

Recap: space invariance

[Figure: a convolutional network for 28x28 input images: 3x24x24 feature maps, 3x12x12 pooling layers, 6x8x8 feature maps, 6x4x4 pooling layers.]

Key ideas: local connectivity, weight sharing.

Modelling sequences

Imagine modelling a time sequence of 3-dimensional vectors (x1, x2, x3), one input vector per time step t = 0, 1, 2, 3, ...

We can model a fixed context with a feed-forward network, with the previous input vectors (e.g. 2 frames of context, t-2 and t-1) appended to the network input at time t, as in the sketch below.
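
As an illustration (not from the original slides), here is a minimal NumPy sketch of building such fixed-context inputs; the function name context_window and all shapes are my own choices:

```python
import numpy as np

def context_window(x, n_context=2):
    """Stack each frame with its n_context previous frames, so a plain
    feed-forward network sees the fixed window (t-n_context, ..., t).
    x: (T, d) sequence; returns (T - n_context, (n_context + 1) * d)."""
    T, d = x.shape
    rows = [x[t - n_context:t + 1].reshape(-1) for t in range(n_context, T)]
    return np.stack(rows)

x = np.random.randn(10, 3)           # 10 timesteps of 3-D vectors (x1, x2, x3)
X = context_window(x, n_context=2)   # one MLP input row per time t
print(X.shape)                       # (8, 9): current frame + 2 context frames
```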

Alternatively, model the sequence using 1-dimensional convolutions in time: a time-delay neural network (TDNN), in which a 1D conv layer spanning T frames of context (t-T, ..., t-1, t) feeds a fully-connected output layer. Either way, the network takes into account only a finite context.
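
A hedged NumPy sketch of a single 1D convolution-in-time (TDNN-style) layer; the ReLU nonlinearity, the name conv1d_time, and the shapes are illustrative assumptions, not the lecture's code:

```python
import numpy as np

def conv1d_time(x, w, b):
    """One 1-D convolutional layer over time.
    x: (T, d_in) input sequence; w: (k, d_in, d_out) filters spanning k
    time steps; b: (d_out,) bias. One output vector per valid position,
    so each output depends on exactly k frames of context."""
    T, d_in = x.shape
    k, _, d_out = w.shape
    out = np.empty((T - k + 1, d_out))
    for t in range(T - k + 1):
        # weighted sum over the k-frame window starting at time t
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)      # ReLU (an assumption)

x = np.random.randn(20, 3)           # 20 frames of 3-D vectors
w = 0.1 * np.random.randn(5, 3, 8)   # 8 filters, each spanning 5 frames
h = conv1d_time(x, w, np.zeros(8))
print(h.shape)                       # (16, 8)
```

Stacking several such layers (possibly with sub-sampling, as on the next slide) widens the finite context the network sees.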

TDNNs with sub-sampling

[Figure 1 (from Peddinti et al): computation in a TDNN with sub-sampling (red) and without sub-sampling (blue+red). Layers 1-4 together operate on 23 frames of input context (roughly t-13 to t+9); sub-sampling speeds up training by about 5x in the baseline TDNN architecture.]

Peddinti et al (2015), Reverberation robust acoustic modeling using i-vectors with time delay neural networks, Interspeech-2015.

Wavenet

van den Oord et al (2016), WaveNet: A Generative Model for Raw Audio.

Networks with state

Feed-forward = finite context: feed-forward networks (even fancy ones like WaveNet) compute the output based on a finite input history. Sometimes the required context is known, but often it is not.
State units: we would like a network with state across time; if an event happens, we can potentially know about that event many time steps in the future.
State units as memory: remember things for (potentially) an infinite time.
State units as information compression: compress a sequence into a state representation.
Recurrent networks have state units h, fed back into the network through a delay, alongside the input x.

Recurrent networks

[Figure: at each time t, the input (x1, x2, x3) feeds a recurrent hidden layer, which feeds the output layer.]

Graphical model of a recurrent network

[Figure: input x feeds state h, which feeds output y; h feeds back into itself through a delay.]

Unfolding the recurrent network in time gives:

[Figure: the delay loop unrolled into a chain, with x(t-1), x(t), x(t+1) feeding h(t-1), h(t), h(t+1), which feed y(t-1), y(t), y(t+1); the hidden states are linked h(t-1) -> h(t) -> h(t+1).]

Simple recurrent network

$$h_j(t) = \mathrm{sigmoid}\Bigg(\sum_{s=0}^{d} w^{(1)}_{js} x_s(t) + \underbrace{\sum_{r=0}^{H} w^{(R)}_{jr} h_r(t-1)}_{\text{Recurrent part}} + b_j\Bigg)$$

$$y_k(t) = \mathrm{softmax}\Bigg(\sum_{r=0}^{H} w^{(2)}_{kr} h_r(t) + b_k\Bigg)$$

That is, Hidden(t) depends on Input(t) and Hidden(t-1); Output(t) depends on Hidden(t).
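
A minimal NumPy sketch of one timestep of these equations; the slides give only the maths, so srn_step and the dimensions below are illustrative:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def srn_step(x_t, h_prev, W1, WR, W2, b_h, b_y):
    """One step of the simple recurrent network:
    h(t) = sigmoid(W1 x(t) + WR h(t-1) + b_h)  -- WR h(t-1) is the recurrent part
    y(t) = softmax(W2 h(t) + b_y)"""
    h_t = sigmoid(W1 @ x_t + WR @ h_prev + b_h)
    y_t = softmax(W2 @ h_t + b_y)
    return h_t, y_t

d, H, K = 3, 5, 4                     # input, hidden, output sizes
rng = np.random.default_rng(0)
W1 = 0.1 * rng.normal(size=(H, d))    # w^(1): input -> hidden
WR = 0.1 * rng.normal(size=(H, H))    # w^(R): hidden(t-1) -> hidden(t)
W2 = 0.1 * rng.normal(size=(K, H))    # w^(2): hidden -> output
b_h, b_y = np.zeros(H), np.zeros(K)

h = np.zeros(H)                       # initial state h(0)
for x_t in rng.normal(size=(10, d)):  # a 10-step sequence of 3-D inputs
    h, y = srn_step(x_t, h, W1, WR, W2, b_h, b_y)
```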

Recurrent network unfolded in time

[Figure: Input(t), Hidden(t), Output(t) at each step, with weights w^(1) (input to hidden) and w^(2) (hidden to output) shared across all timesteps.]

View an RNN for a sequence of T inputs as a T-layer network with shared weights.
Train an RNN by doing backprop through this unfolded network.
Weight sharing: if two weights are constrained to be equal (w1 = w2), then they will stay equal provided the weight changes are equal (∂E/∂w1 = ∂E/∂w2); achieve this by updating each with (∂E/∂w1 + ∂E/∂w2) (cf. conv nets). A sketch of this shared-weight update follows.
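
A small sketch of the shared-weight update, assuming hypothetical per-timestep gradients of E with respect to each tied copy of the recurrent weights WR (as BPTT would supply them):

```python
import numpy as np

H, T, lr = 5, 10, 0.1
WR = 0.1 * np.random.randn(H, H)      # one matrix, reused at every timestep

# Hypothetical per-timestep derivatives dE/dWR(t) from the unfolded network
per_step_grads = [0.01 * np.random.randn(H, H) for _ in range(T)]

grad_WR = np.zeros_like(WR)
for g in per_step_grads:
    grad_WR += g                      # sum the derivatives across time
WR -= lr * grad_WR                    # one update keeps all tied copies equal
```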

Bidirectional RNN

[Figure: at each time t, forward hidden units FHid(t) and reverse hidden units RHid(t) both connect Input(t) to Output(t).]

Output a prediction that depends on the whole input sequence.
A bidirectional RNN combines an RNN moving forward in time with one moving backwards in time.
The state units provide a combined representation that depends on both the past and the future.
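
A minimal sketch of the forward/backward combination; concatenating the two state sequences is one common choice, assumed here since the slide does not specify how the directions are combined:

```python
import numpy as np

def run_rnn(xs, W, R, b, reverse=False):
    """Run a tanh RNN over the sequence, optionally backwards in time,
    returning the hidden state at every step in original time order."""
    T = xs.shape[0]
    H = b.shape[0]
    order = range(T - 1, -1, -1) if reverse else range(T)
    h, states = np.zeros(H), np.zeros((T, H))
    for t in order:
        h = np.tanh(W @ xs[t] + R @ h + b)
        states[t] = h
    return states

rng = np.random.default_rng(1)
xs = rng.normal(size=(10, 3))                  # 10 frames of 3-D input
d, H = 3, 4
Wf, Rf = 0.1 * rng.normal(size=(H, d)), 0.1 * rng.normal(size=(H, H))
Wb, Rb = 0.1 * rng.normal(size=(H, d)), 0.1 * rng.normal(size=(H, H))

fwd = run_rnn(xs, Wf, Rf, np.zeros(H))                 # FHid: sees the past
bwd = run_rnn(xs, Wb, Rb, np.zeros(H), reverse=True)   # RHid: sees the future
combined = np.concatenate([fwd, bwd], axis=1)          # (10, 2H) per-step states
```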

Back-propagation through time (BPTT)

We can train a network by unfolding and back-propagating through time, summing the derivatives for each weight as we go through the sequence.
More efficiently, run it as a recurrent network:
cache the unit outputs at each timestep
cache the output errors at each timestep
then backprop from the final timestep to time zero, computing the derivatives at each step
compute the weight updates by summing the derivatives across time
Expensive: backprop for a 1,000-item sequence is equivalent to backprop through a 1,000-layer feed-forward network.
Truncated BPTT: backprop through just a few time steps (e.g. 20). A sketch follows.
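
A compact sketch of (truncated) BPTT for a small RNN with tanh hidden units, a linear output, and a per-step squared-error loss; the loss and every name here are assumptions for illustration only:

```python
import numpy as np

def bptt_sketch(xs, targets, W, R, V, lr=0.1, trunc=20):
    """Forward: run as a recurrent net, caching unit outputs hs and output
    errors errs at each timestep. Backward: from the final timestep towards
    t = 0 (stopping after `trunc` steps), summing derivatives across time."""
    T = xs.shape[0]
    H = R.shape[0]
    hs = np.zeros((T + 1, H))                      # hs[t+1] = h(t); hs[0] = h(0) = 0
    errs = np.zeros((T, V.shape[0]))
    for t in range(T):                             # forward sweep
        hs[t + 1] = np.tanh(W @ xs[t] + R @ hs[t])
        errs[t] = V @ hs[t + 1] - targets[t]       # dE/dy for squared error
    dW, dR, dV = np.zeros_like(W), np.zeros_like(R), np.zeros_like(V)
    da = np.zeros(H)                               # dE/d(pre-activation) at step t+1
    for t in range(T - 1, max(T - 1 - trunc, -1), -1):   # backward sweep
        dh = V.T @ errs[t] + R.T @ da              # loss at t plus error from the future
        da = dh * (1.0 - hs[t + 1] ** 2)           # back through tanh
        dW += np.outer(da, xs[t])                  # sum derivatives across time
        dR += np.outer(da, hs[t])
        dV += np.outer(errs[t], hs[t + 1])
    for p, g in ((W, dW), (R, dR), (V, dV)):
        p -= lr * g                                # in-place shared-weight update

rng = np.random.default_rng(0)
xs, targets = rng.normal(size=(50, 3)), rng.normal(size=(50, 2))
W, R, V = (0.1 * rng.normal(size=s) for s in ((4, 3), (4, 4), (2, 4)))
bptt_sketch(xs, targets, W, R, V, trunc=20)        # truncated: last 20 steps only
```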

Example 1: speech recognition with recurrent networks

[Figure: speech acoustics (spectrogram, freq (Hz) against time (ms)) feed a recurrent neural network, which outputs phoneme probabilities.]

T Robinson et al (1996), The use of recurrent networks in continuous speech recognition, in Automatic Speech and Speaker Recognition: Advanced Topics (Lee et al, eds), Kluwer. downloads/publications/1996/rnn4csr96.pdf

Example 2: recurrent network language models

T Mikolov et al (2010), Recurrent Neural Network Based Language Model, Interspeech. groups/speech/publi/2010/mikolov_interspeech2010_is pdf

Summary

Model sequences using finite context with feed-forward networks with convolutions in time (TDNNs, Wavenet).
Model sequences using infinite context with recurrent neural networks (RNNs).
Unfolding an RNN gives a deep feed-forward network with shared weights.
Train using back-propagation through time (BPTT).
(Historical) examples in speech recognition and language modelling.
Reading: Goodfellow et al, chapter 10 (sections 10.1, 10.2, 10.3).
Next lecture: LSTMs, sequence-to-sequence models.