Pitch estimation using spiking neurons
K. Voutsas, Research Assistant, Control Theory and Robotics Lab, Institute of Automatic Control, Darmstadt University of Technology, Petersenstr. 2 / D Darmstadt, kvoutsas@rtr.tu-darmstadt.de

J. Adamy, Head of Control Theory and Robotics Lab, Institute of Automatic Control, Darmstadt University of Technology, Landgraf-Georg-Str. 4 / D Darmstadt, adamy@rtr.tu-darmstadt.de

ABSTRACT

The paper introduces a brain-like neural model for sound processing. The Periodicity Analyzing Network (PAN) is a bio-inspired neural network of spiking neurons simulating certain nuclei of the auditory system in detail. The PAN consists of complex neuron models, which can be used for understanding both the dynamics of individual neurons and the mechanisms of structured neural networks of the auditory system. Because of the cochlear frequency analysis, a neuron responds strongest at its characteristic frequency (CF). In addition to its CF, a coincidence neuron is tuned to a certain periodicity, i.e. a certain modulation frequency of an AM signal, also called the best modulation frequency (BMF). Following the cochlear filtering, each PAN neuron responds to the encoded carrier and modulation information according to its BMF and CF, thus forming a spatial structure in which the representations of CF and BMF, encoding carrier and modulation frequency respectively, are roughly orthogonal. On a technical level, the network is able to process fundamental frequency characteristics of harmonic sound signals. The PAN model may therefore be used in audio signal processing applications such as periodicity analysis, pitch extraction and the cocktail party problem.

Introduction

Most common pitch estimation algorithms are based upon time domain (temporal) or frequency domain (spatial) methods.
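As a point of comparison for the PAN approach, the classical time-domain family can be illustrated with a minimal autocorrelation pitch estimator. This is an illustrative sketch only; the function name, search range and test tone are assumptions, not part of the paper:

```python
import numpy as np

def autocorr_pitch(signal, fs, fmin=50.0, fmax=500.0):
    """Estimate the fundamental frequency of a frame by picking the
    strongest autocorrelation peak inside a plausible period range."""
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    lag_min = int(fs / fmax)  # shortest period considered
    lag_max = int(fs / fmin)  # longest period considered
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max + 1])
    return fs / best_lag

# A 200 Hz sinusoid sampled at 16 kHz: the strongest peak sits at the
# 80-sample lag, giving an estimate of 200 Hz.
fs = 16000
t = np.arange(0, 0.05, 1.0 / fs)
f0 = autocorr_pitch(np.sin(2 * np.pi * 200.0 * t), fs)  # -> 200.0
```

Such estimators are cheap and accurate on clean harmonic signals, but, as noted above, they bear little relation to the physiology of hearing.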
Autocorrelation, zero crossings or maximum likelihood methods in the time domain [1, 2, 3], or the cepstrum method and the harmonic product spectrum method in the frequency domain [4, 5], provide a wide range of different algorithms for pitch estimation and extraction. Most of these methods are mathematical models and only vaguely based on the physiological models of hearing. Some other methods combine the advantages of spatial and temporal processing methods [6], and only one biologically inspired spatiotemporal method for pitch estimation is so far widely known [7]. A new biologically based spatiotemporal approach to the pitch estimation problem is introduced in this paper. The Periodicity Analyzing Network (PAN) is a spiking neural network based on neuronal mechanisms, utilizing complex neuron models and attempting to simulate certain nuclei of the auditory system in detail. It can be used both for
purposes of understanding the mechanisms of a structured neural network of the auditory system and for periodicity analysis and sound source localization tasks with amplitude modulation (AM) signals in technical applications.

1. Physiological fundamentals

1.1 The auditory pathway

The external ear (pinna) bundles the arriving sound, which is then directed through the outer auditory canal to the ear drum and to the inner ear. The basilar membrane in the cochlea is tonotopically organized. This feature of the cochlea enables a decomposition of the traveling wave generated by the incoming acoustic stimulus at different points of the membrane, thus acting as a frequency filtering mechanism, which filters higher frequencies at the beginning of the cochlea and lower frequencies at its end. The basilar membrane is lined with sensitive hair cells, which trigger the generation of nerve signals that are sent through the auditory nerve (AN) to the central nervous system (CNS). The AN transfers spike-encoded sound signals to the three centers of the cochlear nucleus (CN) (Fig. 1(a)). The first neural processing levels of periodicity analysis occur in the CN [8]. The DCN and PVCN nuclei forward the signal to the nucleus of the lateral lemniscus (NLL) and to the inferior colliculus (IC), the next processing levels of periodicity analysis [8]. The resulting information is transferred to the auditory cortex (AC) via the medial corpus geniculatum (MGB). A spiking neural network was developed which makes use of the described interconnections in the auditory pathway. The neural network is able to perform periodicity analysis tasks as described in the following section, along with biological evidence from electrophysiological experiments supporting this model.

1.2 Physiological structure of the periodicity analyzing network

The neural network described in this section is a correlation network of spiking neurons. The basic structure of the periodicity analysis model (Fig. 1
(b)) consists of a trigger, an oscillator, an integrator complex, and a coincidence neuron. Exemplary neuronal potentials describing the function of the four modules of the network driven with an optimal stimulus are shown in the right part of Fig. 1(b). The function of the network is based upon the correlation of delayed and undelayed neuronal responses of the depicted neurons to envelopes of AM signals. These responses finally converge at neurons acting as coincidence detectors [8]. Each modulation period of an AM signal activates the trigger neuron (Fig. 1(b)), which in turn triggers a rapid oscillation (oscillator potential in Fig. 1(b)) with a predefined frequency. In parallel, the integrator responds to the same cycle, only with a longer delay (the integration period of the integrator). The coincidence neuron will be activated, despite the different delay times of the two previous units, provided that the integration period equals the period of the AM signal. A coincidence neuron will respond more often when its inputs are synchronized, i.e. when the oscillation and integration delay periods of its inputs have approximately the same duration. Thus, the modulation periods m τ_m, with m = 1, 2, ..., which activate the oscillations and drive the coincidence unit, can be computed from the following linear equation:
m τ_m = n τ_c - k_max τ_k    (1.1)

where m, n are small integers and k = 0, 1, ..., k_max. n τ_c is the integration period, which consists of n carrier periods and is the time the integrated input signal needs to reach a certain threshold. 1/τ_c is the carrier frequency of the AM signal, 1/τ_k the frequency of the oscillations, and k_max the number of oscillations triggered by the modulation of the AM signal which are required for the synchronization of the two inputs of the coincidence unit. The parameter m takes into account the fact that coincidence neurons also respond to harmonics (m > 1) of the modulation frequency of the AM signal, which implies an ambiguity of IC neurons with respect to harmonically related signals. A solution to this problem based on electrophysiological results is proposed in [9] and is also tested in the present model. Because of the cochlear frequency analysis, a neuron responds strongest at its characteristic frequency (CF). In addition to its CF, a coincidence neuron is tuned to a certain periodicity, i.e. a certain modulation frequency of an AM signal, also called the best modulation frequency (BMF). Therefore, different trigger, oscillator, integrator, and coincidence units are needed to cover the range of periodicities of AM signals. The biological evidence supporting the hypothesis of such a periodicity analysis in the auditory system is described in detail in [10].

Figure 1. (a) The auditory pathway. (b) The periodicity analyzing neural model and some exemplary neuronal potentials of a PAN module. The model is driven with a stimulus generating equal oscillation and integration delay periods, and therefore a coincidence, for the specific module.
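The coincidence equation can be exercised numerically. The parameter values below (carrier frequency, oscillator frequency, n, k_max) are hypothetical, chosen only to show how a unit's best modulation frequency follows from its integration and oscillation delays:

```python
def modulation_period(n, tau_c, k_max, tau_k, m=1):
    """Modulation period tau_m satisfying the coincidence equation
    m * tau_m = n * tau_c - k_max * tau_k, with all periods in seconds
    and m, n, k_max small integers."""
    return (n * tau_c - k_max * tau_k) / m

# Hypothetical unit: 600 Hz carrier, oscillator at 2 kHz (tau_k = 0.5 ms),
# n = 8 carrier periods per integration, k_max = 6 oscillations.
tau_m = modulation_period(n=8, tau_c=1.0 / 600.0, k_max=6, tau_k=0.5e-3)
bmf = 1.0 / tau_m  # best modulation frequency, roughly 96.8 Hz
```

Sweeping n or k_max over small integers yields the family of modulation periods to which one unit can synchronize, which is why an array of differently tuned units is needed to cover the periodicity range.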
The periodicity analysis model explains the selectivity of the neurons of the midbrain for a specific BMF. Utilizing a model of cochlear filtering, a mechanism encoding the carrier and modulation information of an AM signal, and numerous PANs in parallel, differently tuned for various CFs and BMFs, we can simulate the response of the IC to AM signals (Fig. 2(a)). We can therefore perform periodicity analysis and pitch extraction. The implementation of the modules up to the input of the PAN and the simulation of the PAN model are described in the following section.
Figure 2. (a) A highly simplified scheme of the tonotopic and periodotopic organization of the auditory brainstem [11]. Following the cochlear filtering, the modules of the PAN respond to the encoded carrier and modulation information according to their BMF and CF, thus forming a spatial structure in which the tonotopic and periodotopic axes of the IC neurons are roughly orthogonal. (b) Block diagram of the PAN model implementation corresponding to the physiological model of Fig. 1(b).

2. Simulation of the PAN model

2.1 Models of the cochlea and of the inner hair-cells

A model of the cochlear filtering mechanism is used to simulate the band-pass decomposition of a sound signal and the tonotopic organization of the cochlea. A corresponding band-pass filterbank is used, consisting of a series of band-pass filters, the so-called ERB filters [12]. The equivalent rectangular bandwidth (ERB) corresponds to the bandwidth of each filter of the human cochlea at various points along the basilar membrane, based on psychoacoustic measurements. The decomposition of the AM signal in the cochlea is followed by a simulation of the inner hair-cells, which transform the mechanical response of each filter to electrical pulses [13]. At every positive zero-crossing of the filtered signal a spike is triggered. The amplitude of each spike equals 1. A spike train for each filter is thus generated, which is then used as encoded information about the modulation and the carrier frequency of the AM signal. A more detailed description of the cochlea and inner hair-cell models can be found in [10].

2.2 Simulating neurons

The functional structure of the chemical synapse model can be seen in Fig. 3.
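The filterbank and hair-cell stages of Section 2.1 can be sketched as follows. A fourth-order gammatone impulse response is used here as a stand-in for the ERB filters of [12], and all parameter values are illustrative assumptions:

```python
import numpy as np

def erb_bandwidth(fc):
    # Equivalent rectangular bandwidth approximation (Glasberg & Moore).
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.025, order=4):
    # Impulse response of a gammatone band-pass filter centred at fc.
    t = np.arange(0.0, duration, 1.0 / fs)
    b = 1.019 * erb_bandwidth(fc)
    return t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)

def encode_spikes(signal, fc, fs):
    """Band-pass filter `signal` around fc, then emit a spike of
    amplitude 1 at every positive zero-crossing of the filter output."""
    y = np.convolve(signal, gammatone_ir(fc, fs))[: len(signal)]
    spikes = np.zeros(len(y))
    crossings = np.where((y[:-1] < 0.0) & (y[1:] >= 0.0))[0] + 1
    spikes[crossings] = 1.0
    return spikes

# A 500 Hz tone in the 500 Hz channel spikes about once per carrier
# period, i.e. roughly 50 spikes in 100 ms.
fs = 16000
t = np.arange(0.0, 0.1, 1.0 / fs)
spike_train = encode_spikes(np.sin(2 * np.pi * 500.0 * t), 500.0, fs)
```

The resulting per-channel spike trains carry the carrier frequency in their inter-spike intervals and the modulation frequency in their envelope, which is exactly the information the PAN inputs require.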
An incoming spike from the presynaptic neuron releases synaptic vesicles containing neurotransmitters. The vesicle emission mechanism is simulated with a look-up table providing a certain predefined amount of vesicles each time the subsystem is enabled by an incoming spike, as seen in Fig. 3. The transmitter molecules diffuse through the synaptic cleft to the postsynaptic neuron. The decay of the transmitter
concentration is simulated by a leaky integrator. The amount of transmitters at the postsynaptic neuron changes its permeability to certain ions. Ion channels are thus gradually opened, receiving even more ions and forming a current moving towards the soma of the neuron via a resistance mechanism, which yields a gradually increasing post-synaptic current (PSC). PSCs can be either excitatory or inhibitory (EPSC or IPSC), depending on the ions rushing through the postsynaptic membrane. This mechanism is simulated by the weight function of the synapse model. The overall time needed for the diffusion of the transmitters and the transmission of the PSCs to the soma is modelled with a predefined time delay for each synapse. A soma model based on an integrate-and-fire model [14] was developed here especially for the PAN simulation. A leaky integrate-and-fire neuron consists of a leak resistance R in parallel to a capacitance C, driven by an external current I. The neuron will fire only if the excitatory input is strong enough to overcome the leak. The voltage u across the capacitor can be interpreted as the membrane potential of the neuron. The voltage u starts from zero and increases or decreases depending on the synaptic input. When the voltage u reaches a threshold ϑ, the neuron instantly fires a spike and returns to the initial value u = v. After an absolute refractory period, during which the neuron cannot fire due to hyperpolarization of the membrane, and a relative refractory period, during which the neuron can fire only when a very strong input exists, the cell is ready to fire again. A detailed description of the neuron models, the tunable parameters and their value regions can be found in [10].
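The synapse and soma mechanisms of this section can be sketched jointly. The exponential transmitter decay stands in for the leaky integrator of the synapse model, and all numerical values (weight, time constants, threshold, refractory period) are illustrative assumptions, not the tuned PAN parameters:

```python
import numpy as np

def synapse_psc(spike_train, dt=1e-4, weight=5e-9, tau=2e-3, delay=5e-4):
    """Chemical synapse sketch: each presynaptic spike adds a fixed amount
    of transmitter, the concentration decays exponentially, and the PSC
    (weight * concentration) is emitted after a fixed synaptic delay."""
    psc = np.zeros(len(spike_train))
    level = 0.0
    shift = int(delay / dt)
    for i, s in enumerate(spike_train):
        level = level * np.exp(-dt / tau) + s  # decay, then release
        if i + shift < len(psc):
            psc[i + shift] = weight * level
    return psc

def lif_simulate(current, dt=1e-4, R=1e7, C=1e-9, threshold=0.02,
                 u_reset=0.0, t_refr=1e-3):
    """Leaky integrate-and-fire soma: C du/dt = -u/R + I. A spike is
    emitted when u crosses the threshold; u then resets and the neuron
    stays silent for an absolute refractory period."""
    u, refr_left, spike_times = u_reset, 0.0, []
    for i, I in enumerate(current):
        if refr_left > 0.0:
            refr_left -= dt
            continue
        u += dt * (-u / (R * C) + I / C)  # forward-Euler membrane update
        if u >= threshold:
            spike_times.append(i * dt)
            u = u_reset
            refr_left = t_refr
    return spike_times

# A constant suprathreshold current of 5 nA makes the soma fire regularly
# (steady-state voltage I*R = 50 mV is above the 20 mV threshold).
spike_times = lif_simulate(np.full(500, 5e-9))  # 50 ms of drive
psc = synapse_psc((np.arange(1000) % 50 == 0).astype(float))  # 200 Hz input
```

Feeding `psc` into `lif_simulate` chains the two stages the way the SIMULINK blocks of Fig. 3 are chained; a relative refractory period (a transiently raised threshold) is omitted here for brevity.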
Figure 3. Block diagrams of the chemical synapse and the leaky integrate-and-fire soma model implemented in MATLAB SIMULINK.

2.3 Simulating the PAN model

Based upon the biological model seen in Fig. 1(b), a simulation model utilizing the neuron model described above was developed (Fig. 2(b)). The implemented PAN unit is functionally similar to its biological analogon described in section 1.2, including also a third, inhibitory connection to the coincidence neuron. Furthermore, a new function of the PAN is proposed here to cover stimuli at higher frequencies. The trigger and the integrator receive the two PAN inputs, one encoding the modulation and the other the carrier frequency of the acoustic signal. The trigger is synchronised to the incoming signal from the inner hair-cell model and triggers the oscillator, which is implemented by only a single oscillating neuron in our model. One spike (AP) of the trigger is sufficient for the oscillator to release a series of spikes with a predefined frequency, thus providing the coincidence time window needed for the periodicity analysis. The flip-flop neurons
synchronize the accumulation of spikes in the integrator with the output of the trigger, and the integrator provides spikes to the coincidence neuron, which also has a third input simulating the modulation-coupled inhibition of the coincidence mechanism. This inhibition mechanism suppresses reactions of the coincidence neuron to harmonics of the preferred BMF of a specific PAN unit. Depending on the frequency of the incoming signals, we propose a dual-function mode scheme for the PAN model. When receiving low-frequency stimuli (below 1 kHz), the response of the integrator is coupled to each modulation period of the stimulus [10], while for high-frequency stimuli (above 1 kHz), the integrator and thus the flip-flop structure respond every two modulation periods of the stimulus (Fig. 4). The advantage of the second mode is that the integrator and the flip-flop neurons are still able to respond phase-coupled to higher-frequency stimuli; in the first mode this would not be the case, and one would need a population of neurons to encode higher-frequency stimuli. Therefore, system simplicity and robustness (higher frequencies can be better encoded with fewer neurons) and model execution time are positively affected by the introduction of the proposed dual-mode scheme.

Figure 4.
AP and PSP plots of a PAN unit tuned for a Hz to 6 Hz (modulation/carrier frequency) signal and tested with this signal. (a) APs of the encoded carrier frequency as received from the cochlear filterbank, (b) PSP of the integrator in response to the incoming APs of (a), (c) resulting APs of the integrator, (d) APs of the trigger, which receives various cochlear filterbank channels and decodes the modulation frequency of the signal, (e) oscillator APs generated at each incoming AP of the trigger seen in (d), and (f) coincidence APs, resulting from the temporal coincidence of (c) and (e) and thus encoding the specific carrier-to-modulation frequency ratio of the incoming signal.
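The coincidence detection illustrated in Fig. 4 amounts to flagging near-simultaneous spikes on the integrator and oscillator inputs. A crude software stand-in, in which the coincidence window and both spike trains are hypothetical:

```python
def coincidence_spikes(train_a, train_b, window=0.5e-3):
    """Return the times in `train_a` (sorted spike times, in seconds)
    that fall within `window` of some spike in sorted `train_b`."""
    hits, j = [], 0
    for t in train_a:
        while j < len(train_b) and train_b[j] < t - window:
            j += 1
        if j < len(train_b) and abs(train_b[j] - t) <= window:
            hits.append(t)
    return hits

# An oscillator spiking at 1 kHz coincides with every spike of an
# integrator responding once per 10 ms modulation period.
oscillator = [k * 1e-3 for k in range(100)]
integrator = [k * 10e-3 for k in range(10)]
hits = coincidence_spikes(integrator, oscillator)  # all 10 integrator spikes
```

In the spiking implementation this thresholding happens implicitly in the coincidence neuron's membrane, where only near-simultaneous EPSCs sum above threshold.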
Each block of the model consists of a neuron as described in Section 2.2, with the trigger and the oscillator having one, the integrator and the flip-flop neurons having two, and the coincidence neuron having three synaptic inputs. Numerous parameters of each neuron can be tuned according to the CF and the BMF to which one PAN unit should maximally react. Among these parameters are the amount of transmitters, the time delay and the weight of each synapse model. The threshold, the leakage current and the refractory period of each soma model can be optimised for every PAN unit. Adjusting the parameters of a PAN unit can be done using optimization algorithms and is a challenging task for further research.

3. An example of pitch estimation

The tests presented in Fig. 5 show one aspect of the evaluation of a PAN unit. One PAN unit, tuned for a specific modulation-to-carrier frequency ratio of an arbitrary incoming stimulus and for a specific CF, is tested with a wide range of SAM stimuli. 15 modulation frequencies ranging from 6 to 2 Hz and 15 carrier frequencies ranging from 3 to 4 Hz were tested. As seen in both exemplary cases, the maximum response of the PAN unit is correctly placed at the tuned (desired) ratio. Remaining responses in the neighbourhood of the maximum response can be suppressed utilizing a winner-take-all neural network at the output layer of a complete PAN array, thus providing an increased efficiency of the model.

Figure 5. Simulation results of two PAN units, the one on the left tuned to react to a Hz modulation to 6 Hz carrier frequency AM signal and the one on the right to a 5 Hz modulation to 8 Hz carrier frequency AM signal. The PAN units were tested with 225 SAM signals with different combinations of modulation (6 to 2 Hz) and carrier (3 to 4 Hz) frequencies.

4.
Summary and conclusions

The simulation results of the complete auditory spatial tonotopic and periodotopic structure consisting of PAN units show that it is possible to combine detailed models of spiking neurons and neural networks based on neuronal mechanisms into technical applications that perform comparably to the auditory system.
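The winner-take-all post-processing suggested in Section 3 for a complete PAN array can be sketched as follows; the 3 x 3 response map below is a hypothetical example, not simulation output:

```python
import numpy as np

def winner_take_all(responses):
    """Keep only the strongest entry of a (BMF x CF) response map,
    suppressing neighbourhood activity around the maximum."""
    cleaned = np.zeros_like(responses)
    winner = np.unravel_index(np.argmax(responses), responses.shape)
    cleaned[winner] = responses[winner]
    return cleaned, winner

resp = np.array([[0.1, 0.2, 0.0],
                 [0.3, 1.0, 0.4],
                 [0.0, 0.5, 0.1]])
cleaned, winner = winner_take_all(resp)  # winner at row 1, column 1
```

The index of the winning unit directly identifies the estimated (modulation, carrier) pair, and hence the pitch, of the incoming signal.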
Furthermore, an accurate periodicity analysis mechanism providing pitch estimation can be implemented using the PAN unit. The tonotopic and periodotopic structure proposed in this paper can therefore be used for distinguishing one among many simultaneously speaking persons. A further improvement is proposed with a dual-mode function scheme to cover a wide range of frequencies of incoming stimuli.

5. Literature

[1] A. E. Rosenberg, M. R. Sambur: New techniques for automatic speaker verification, IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-23, pp. , 1975.
[2] N. J. Miller: Pitch detection by data reduction, IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-23, pp. , 1975.
[3] J. J. Dubnowski, R. W. Schafer, L. R. Rabiner: Real-time digital hardware pitch detector, IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-24, pp. 2-8, 1976.
[4] R. W. Schafer, L. R. Rabiner: System for automatic formant analysis of voiced speech, J. Acoust. Soc. Am., vol. 47, pp. , 1970.
[5] A. M. Noll: Cepstrum pitch determination, J. Acoust. Soc. Am., vol. 41, no. 2, pp. , 1967.
[6] T. Tolonen, M. Karjalainen: A computationally efficient multipitch analysis model, IEEE Trans. Speech Audio Processing, vol. 8(6), pp. , 2000.
[7] M. Slaney, R. F. Lyon: A perceptual pitch detector, in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, vol. 1, pp. , 1990.
[8] G. Langner: Neuronal periodicity coding and pitch effects, in Central Auditory Processing and Neural Modeling (Poon and Brugge, Eds.), New York: Plenum Press, pp. 3-4, 1998.
[9] M. Ochse, G. Langner: Modulation tuning in the auditory midbrain of gerbils: bandpasses are formed by inhibition, Proc. 5th Meeting of the German Neuroscience Society, pp. , 2003.
[10] K. Voutsas, G. Langner, J. Adamy, M. Ochse: A brain-like neural network for periodicity analysis, IEEE Trans. Systems, Man, and Cybernetics, Part B, submitted November 2003, accepted as a regular paper July 2004.
[11] G. Langner, M. Sams, P. Heil, H.
Schulze: Frequency and periodicity are represented in orthogonal maps in the human auditory cortex: evidence from magnetoencephalography, J. Comp. Physiol., vol. 181, pp. , 1997.
[12] R. Patterson, I. Nimmo-Smith, J. Holdsworth, P. Rice: Spiral VOS final report: Part A, the auditory filterbank, Internal Report, University of Cambridge, England, 1988.
[13] R. Meddis, M. J. Hewitt, T. M. Shackleton: Implementation details of a computational model of the inner hair-cell/auditory-nerve synapse, J. Acoust. Soc. Am., vol. 87(4), pp. , 1990.
[14] C. Koch, C. H. Mo, W. Softky: Single-Cell Models, in The Handbook of Brain Theory and Neural Networks (M. A. Arbib, Ed.), 2nd ed., Cambridge, MA: MIT Press, 2003, pp. .
More informationIntroduction to cochlear implants Philipos C. Loizou Figure Captions
http://www.utdallas.edu/~loizou/cimplants/tutorial/ Introduction to cochlear implants Philipos C. Loizou Figure Captions Figure 1. The top panel shows the time waveform of a 30-msec segment of the vowel
More informationA cat's cocktail party: Psychophysical, neurophysiological, and computational studies of spatial release from masking
A cat's cocktail party: Psychophysical, neurophysiological, and computational studies of spatial release from masking Courtney C. Lane 1, Norbert Kopco 2, Bertrand Delgutte 1, Barbara G. Shinn- Cunningham
More informationBinaural Sound Localization Systems Based on Neural Approaches. Nick Rossenbach June 17, 2016
Binaural Sound Localization Systems Based on Neural Approaches Nick Rossenbach June 17, 2016 Introduction Barn Owl as Biological Example Neural Audio Processing Jeffress model Spence & Pearson Artifical
More informationAUDL GS08/GAV1 Signals, systems, acoustics and the ear. Loudness & Temporal resolution
AUDL GS08/GAV1 Signals, systems, acoustics and the ear Loudness & Temporal resolution Absolute thresholds & Loudness Name some ways these concepts are crucial to audiologists Sivian & White (1933) JASA
More informationYou know about adding up waves, e.g. from two loudspeakers. AUDL 4007 Auditory Perception. Week 2½. Mathematical prelude: Adding up levels
AUDL 47 Auditory Perception You know about adding up waves, e.g. from two loudspeakers Week 2½ Mathematical prelude: Adding up levels 2 But how do you get the total rms from the rms values of two signals
More informationHuman Auditory Periphery (HAP)
Human Auditory Periphery (HAP) Ray Meddis Department of Human Sciences, University of Essex Colchester, CO4 3SQ, UK. rmeddis@essex.ac.uk A demonstrator for a human auditory modelling approach. 23/11/2003
More informationA unitary model of pitch perception Ray Meddis and Lowel O Mard Department of Psychology, Essex University, Colchester CO4 3SQ, United Kingdom
A unitary model of pitch perception Ray Meddis and Lowel O Mard Department of Psychology, Essex University, Colchester CO4 3SQ, United Kingdom Received 15 March 1996; revised 22 April 1997; accepted 12
More informationChapter 2 A Silicon Model of Auditory-Nerve Response
5 Chapter 2 A Silicon Model of Auditory-Nerve Response Nonlinear signal processing is an integral part of sensory transduction in the nervous system. Sensory inputs are analog, continuous-time signals
More informationLecture 4 Foundations and Cognitive Processes in Visual Perception From the Retina to the Visual Cortex
Lecture 4 Foundations and Cognitive Processes in Visual Perception From the Retina to the Visual Cortex 1.Vision Science 2.Visual Performance 3.The Human Visual System 4.The Retina 5.The Visual Field and
More informationTIME encoding of a band-limited function,,
672 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: EXPRESS BRIEFS, VOL. 53, NO. 8, AUGUST 2006 Time Encoding Machines With Multiplicative Coupling, Feedforward, and Feedback Aurel A. Lazar, Fellow, IEEE
More informationNeuromorphic VLSI Event-Based devices and systems
Neuromorphic VLSI Event-Based devices and systems Giacomo Indiveri Institute of Neuroinformatics University of Zurich and ETH Zurich LTU, Lulea May 28, 2012 G.Indiveri (http://ncs.ethz.ch/) Neuromorphic
More informationDetection of external stimuli Response to the stimuli Transmission of the response to the brain
Sensation Detection of external stimuli Response to the stimuli Transmission of the response to the brain Perception Processing, organizing and interpreting sensory signals Internal representation of the
More informationCOM325 Computer Speech and Hearing
COM325 Computer Speech and Hearing Part III : Theories and Models of Pitch Perception Dr. Guy Brown Room 145 Regent Court Department of Computer Science University of Sheffield Email: g.brown@dcs.shef.ac.uk
More informationIN a natural environment, speech often occurs simultaneously. Monaural Speech Segregation Based on Pitch Tracking and Amplitude Modulation
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 15, NO. 5, SEPTEMBER 2004 1135 Monaural Speech Segregation Based on Pitch Tracking and Amplitude Modulation Guoning Hu and DeLiang Wang, Fellow, IEEE Abstract
More informationEffects of Firing Synchrony on Signal Propagation in Layered Networks
Effects of Firing Synchrony on Signal Propagation in Layered Networks 141 Effects of Firing Synchrony on Signal Propagation in Layered Networks G. T. Kenyon,l E. E. Fetz,2 R. D. Puffl 1 Department of Physics
More informationAn Auditory Localization and Coordinate Transform Chip
An Auditory Localization and Coordinate Transform Chip Timothy K. Horiuchi timmer@cns.caltech.edu Computation and Neural Systems Program California Institute of Technology Pasadena, CA 91125 Abstract The
More informationBinaural Mechanisms that Emphasize Consistent Interaural Timing Information over Frequency
Binaural Mechanisms that Emphasize Consistent Interaural Timing Information over Frequency Richard M. Stern 1 and Constantine Trahiotis 2 1 Department of Electrical and Computer Engineering and Biomedical
More informationThe Human Auditory System
medial geniculate nucleus primary auditory cortex inferior colliculus cochlea superior olivary complex The Human Auditory System Prominent Features of Binaural Hearing Localization Formation of positions
More informationAES London 2010 Workshop W6
AES London 2010 Workshop W6 Sunday, May 23, 14:00 15:45 (Room C2) W6 - How Do We Evaluate High Resolution Formats for Digital Audio? Chair: Hans van Maanen, Temporal Coherence - The Netherlands Panelists:
More informationExploiting envelope fluctuations to achieve robust extraction and intelligent integration of binaural cues
The Technology of Binaural Listening & Understanding: Paper ICA216-445 Exploiting envelope fluctuations to achieve robust extraction and intelligent integration of binaural cues G. Christopher Stecker
More informationIN practically all listening situations, the acoustic waveform
684 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 10, NO. 3, MAY 1999 Separation of Speech from Interfering Sounds Based on Oscillatory Correlation DeLiang L. Wang, Associate Member, IEEE, and Guy J. Brown
More informationAuditory Based Feature Vectors for Speech Recognition Systems
Auditory Based Feature Vectors for Speech Recognition Systems Dr. Waleed H. Abdulla Electrical & Computer Engineering Department The University of Auckland, New Zealand [w.abdulla@auckland.ac.nz] 1 Outlines
More informationA learning, biologically-inspired sound localization model
A learning, biologically-inspired sound localization model Elena Grassi Neural Systems Lab Institute for Systems Research University of Maryland ITR meeting Oct 12/00 1 Overview HRTF s cues for sound localization.
More informationRetina. last updated: 23 rd Jan, c Michael Langer
Retina We didn t quite finish up the discussion of photoreceptors last lecture, so let s do that now. Let s consider why we see better in the direction in which we are looking than we do in the periphery.
More informationSOUND 1 -- ACOUSTICS 1
SOUND 1 -- ACOUSTICS 1 SOUND 1 ACOUSTICS AND PSYCHOACOUSTICS SOUND 1 -- ACOUSTICS 2 The Ear: SOUND 1 -- ACOUSTICS 3 The Ear: The ear is the organ of hearing. SOUND 1 -- ACOUSTICS 4 The Ear: The outer ear
More informationLimulus eye: a filter cascade. Limulus 9/23/2011. Dynamic Response to Step Increase in Light Intensity
Crab cam (Barlow et al., 2001) self inhibition recurrent inhibition lateral inhibition - L17. Neural processing in Linear Systems 2: Spatial Filtering C. D. Hopkins Sept. 23, 2011 Limulus Limulus eye:
More informationSpeech Synthesis using Mel-Cepstral Coefficient Feature
Speech Synthesis using Mel-Cepstral Coefficient Feature By Lu Wang Senior Thesis in Electrical Engineering University of Illinois at Urbana-Champaign Advisor: Professor Mark Hasegawa-Johnson May 2018 Abstract
More informationIan C. Bruce Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205
A phenomenological model for the responses of auditory-nerve fibers: I. Nonlinear tuning with compression and suppression Xuedong Zhang Hearing Research Center and Department of Biomedical Engineering,
More informationSensation and Perception
Page 94 Check syllabus! We are starting with Section 6-7 in book. Sensation and Perception Our Link With the World Shorter wavelengths give us blue experience Longer wavelengths give us red experience
More informationSWITCHED CAPACITOR BASED IMPLEMENTATION OF INTEGRATE AND FIRE NEURAL NETWORKS
Journal of ELECTRICAL ENGINEERING, VOL. 54, NO. 7-8, 23, 28 212 SWITCHED CAPACITOR BASED IMPLEMENTATION OF INTEGRATE AND FIRE NEURAL NETWORKS Daniel Hajtáš Daniela Ďuračková This paper is dealing with
More informationSupplementary Materials for
advances.sciencemag.org/cgi/content/full/2/6/e1501326/dc1 Supplementary Materials for Organic core-sheath nanowire artificial synapses with femtojoule energy consumption Wentao Xu, Sung-Yong Min, Hyunsang
More informationA Neural Edge-Detection Model for Enhanced Auditory Sensitivity in Modulated Noise
A Neural Edge-etection odel for Enhanced Auditory Sensitivity in odulated Noise Alon Fishbach and Bradford J. ay epartment of Biomedical Engineering and Otolaryngology-HNS Johns Hopkins University Baltimore,
More informationSound Synthesis Methods
Sound Synthesis Methods Matti Vihola, mvihola@cs.tut.fi 23rd August 2001 1 Objectives The objective of sound synthesis is to create sounds that are Musically interesting Preferably realistic (sounds like
More informationA GENERAL SYSTEM DESIGN & IMPLEMENTATION OF SOFTWARE DEFINED RADIO SYSTEM
A GENERAL SYSTEM DESIGN & IMPLEMENTATION OF SOFTWARE DEFINED RADIO SYSTEM 1 J. H.VARDE, 2 N.B.GOHIL, 3 J.H.SHAH 1 Electronics & Communication Department, Gujarat Technological University, Ahmadabad, India
More informationCME312- LAB Manual DSB-SC Modulation and Demodulation Experiment 6. Experiment 6. Experiment. DSB-SC Modulation and Demodulation
Experiment 6 Experiment DSB-SC Modulation and Demodulation Objectives : By the end of this experiment, the student should be able to: 1. Demonstrate the modulation and demodulation process of DSB-SC. 2.
More informationFeasibility of Vocal Emotion Conversion on Modulation Spectrogram for Simulated Cochlear Implants
Feasibility of Vocal Emotion Conversion on Modulation Spectrogram for Simulated Cochlear Implants Zhi Zhu, Ryota Miyauchi, Yukiko Araki, and Masashi Unoki School of Information Science, Japan Advanced
More informationMULTIPLE F0 ESTIMATION IN THE TRANSFORM DOMAIN
10th International Society for Music Information Retrieval Conference (ISMIR 2009 MULTIPLE F0 ESTIMATION IN THE TRANSFORM DOMAIN Christopher A. Santoro +* Corey I. Cheng *# + LSB Audio Tampa, FL 33610
More informationA102 Signals and Systems for Hearing and Speech: Final exam answers
A12 Signals and Systems for Hearing and Speech: Final exam answers 1) Take two sinusoids of 4 khz, both with a phase of. One has a peak level of.8 Pa while the other has a peak level of. Pa. Draw the spectrum
More informationComparison of Spectral Analysis Methods for Automatic Speech Recognition
INTERSPEECH 2013 Comparison of Spectral Analysis Methods for Automatic Speech Recognition Venkata Neelima Parinam, Chandra Vootkuri, Stephen A. Zahorian Department of Electrical and Computer Engineering
More informationVERY LARGE SCALE INTEGRATION signal processing
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 44, NO. 9, SEPTEMBER 1997 723 Auditory Feature Extraction Using Self-Timed, Continuous-Time Discrete-Signal Processing
More informationUniversity of Washington Department of Electrical Engineering Computer Speech Processing EE516 Winter 2005
University of Washington Department of Electrical Engineering Computer Speech Processing EE516 Winter 2005 Lecture 5 Slides Jan 26 th, 2005 Outline of Today s Lecture Announcements Filter-bank analysis
More informationA binaural auditory model and applications to spatial sound evaluation
A binaural auditory model and applications to spatial sound evaluation Ma r k o Ta k a n e n 1, Ga ë ta n Lo r h o 2, a n d Mat t i Ka r ja l a i n e n 1 1 Helsinki University of Technology, Dept. of Signal
More informationAUDL GS08/GAV1 Auditory Perception. Envelope and temporal fine structure (TFS)
AUDL GS08/GAV1 Auditory Perception Envelope and temporal fine structure (TFS) Envelope and TFS arise from a method of decomposing waveforms The classic decomposition of waveforms Spectral analysis... Decomposes
More informationThe EarSpring Model for the Loudness Response in Unimpaired Human Hearing
The EarSpring Model for the Loudness Response in Unimpaired Human Hearing David McClain, Refined Audiometrics Laboratory, LLC December 2006 Abstract We describe a simple nonlinear differential equation
More informationAn auditory model that can account for frequency selectivity and phase effects on masking
Acoust. Sci. & Tech. 2, (24) PAPER An auditory model that can account for frequency selectivity and phase effects on masking Akira Nishimura 1; 1 Department of Media and Cultural Studies, Faculty of Informatics,
More informationAcross frequency processing with time varying spectra
Bachelor thesis Across frequency processing with time varying spectra Handed in by Hendrike Heidemann Study course: Engineering Physics First supervisor: Prof. Dr. Jesko Verhey Second supervisor: Prof.
More informationSpectral and temporal processing in the human auditory system
Spectral and temporal processing in the human auditory system To r s t e n Da u 1, Mo rt e n L. Jepsen 1, a n d St e p h a n D. Ew e r t 2 1Centre for Applied Hearing Research, Ørsted DTU, Technical University
More informationComplex Sounds. Reading: Yost Ch. 4
Complex Sounds Reading: Yost Ch. 4 Natural Sounds Most sounds in our everyday lives are not simple sinusoidal sounds, but are complex sounds, consisting of a sum of many sinusoids. The amplitude and frequency
More informationNeural Coding of Multiple Stimulus Features in Auditory Cortex
Neural Coding of Multiple Stimulus Features in Auditory Cortex Jonathan Z. Simon Neuroscience and Cognitive Sciences Biology / Electrical & Computer Engineering University of Maryland, College Park Computational
More informationTime-frequency computational model for echo-delay resolution in sonar images of the big brown bat, Eptesicus fuscus
Time-frequency computational model for echo-delay resolution in sonar images of the big brown bat, Eptesicus fuscus Nicola Neretti 1,2, Mark I. Sanderson 3, James A. Simmons 3, Nathan Intrator 2,4 1 Brain
More information14 fasttest. Multitone Audio Analyzer. Multitone and Synchronous FFT Concepts
Multitone Audio Analyzer The Multitone Audio Analyzer (FASTTEST.AZ2) is an FFT-based analysis program furnished with System Two for use with both analog and digital audio signals. Multitone and Synchronous
More informationComputing with Biologically Inspired Neural Oscillators: Application to Color Image Segmentation
Computing with Biologically Inspired Neural Oscillators: Application to Color Image Segmentation Authors: Ammar Belatreche, Liam Maguire, Martin McGinnity, Liam McDaid and Arfan Ghani Published: Advances
More information40 Hz Event Related Auditory Potential
40 Hz Event Related Auditory Potential Ivana Andjelkovic Advanced Biophysics Lab Class, 2012 Abstract Main focus of this paper is an EEG experiment on observing frequency of event related auditory potential
More informationGammatone Cepstral Coefficient for Speaker Identification
Gammatone Cepstral Coefficient for Speaker Identification Rahana Fathima 1, Raseena P E 2 M. Tech Student, Ilahia college of Engineering and Technology, Muvattupuzha, Kerala, India 1 Asst. Professor, Ilahia
More informationEffect of filter spacing and correct tonotopic representation on melody recognition: Implications for cochlear implants
Effect of filter spacing and correct tonotopic representation on melody recognition: Implications for cochlear implants Kalyan S. Kasturi and Philipos C. Loizou Dept. of Electrical Engineering The University
More informationSignal detection in the auditory midbrain: Neural correlates and mechanisms of spatial release from masking
Signal detection in the auditory midbrain: Neural correlates and mechanisms of spatial release from masking by Courtney C. Lane B. S., Electrical Engineering Rice University, 1996 SUBMITTED TO THE HARVARD-MIT
More information