Using sound levels for location tracking
Sasha Ames
CMPE250 Multimedia Systems
University of California, Santa Cruz

Abstract

We present an experiment that attempts to track the location of sound sources. It is a tightly controlled experiment using four microphones at the corners of a square field within which we make the sounds to be tracked. Our software performs the tracking either by interpolation or by finding locations via a formula derived from a model. We show disappointing results.

1. Introduction

The experiment described in this paper involves the use of multiple audio input channels in an attempt to locate sound sources. Georgiou and Kyriakakis mention this as applicable to tracking the locations of speakers in a video conference [4]. Brandstein has done work in location tracking using microphone arrays [3, 5]; his main focus was to use time delay as the metric for location. This experiment, however, uses differences in amplitude between microphones to determine the sound location. My experience with recording audio has taught me that the recording level on a microphone changes as a sound moves closer to or further from it. For example, I have recorded a drum kit using a pair of microphones to get better coverage of the kit. The drums on the left-hand side of the kit (snare, hi-hat) are recorded at a stronger level than those on the right (ride cymbal, floor tom) by a microphone placed on the left side of the kit, and vice versa. This is apparent when examining the audio data for each channel. I therefore reasoned that the same principle might be used to localize the positions of sounds within a field bounded by microphones. We accomplish this by first capturing audio data at locations that are already known, with which we train our software to determine the locations of any other sound recorded within the field.
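As a toy illustration of this train-then-match idea, a nearest-neighbor lookup over known amplitude pairs might look like the sketch below. The class and method names and the amplitude values are hypothetical, and this is deliberately simpler than the interpolation scheme the project actually uses:

```java
import java.util.Arrays;
import java.util.List;

public class NearestKnown {
    // A training point: amplitudes measured on microphones A and B for a
    // sound made at a known location (x, y).
    static class Known {
        final double ampA, ampB, x, y;
        Known(double ampA, double ampB, double x, double y) {
            this.ampA = ampA; this.ampB = ampB; this.x = x; this.y = y;
        }
    }

    // Return the training point whose amplitude pair lies closest
    // (Euclidean distance in amplitude space) to the query amplitudes.
    static Known closest(List<Known> table, double a, double b) {
        Known best = null;
        double bestDist = Double.POSITIVE_INFINITY;
        for (Known k : table) {
            double dist = Math.hypot(k.ampA - a, k.ampB - b);
            if (dist < bestDist) { bestDist = dist; best = k; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Known> table = Arrays.asList(   // hypothetical calibration data
            new Known(10, 30, 0, 0),
            new Known(20, 20, 5, 5),
            new Known(30, 10, 10, 10));
        Known k = closest(table, 12, 28);    // query: 12 on A, 28 on B
        System.out.println("(" + k.x + ", " + k.y + ")");
    }
}
```

A sound registering 12 on A and 28 on B matches the (10, 30) calibration point most closely, so its stored location is reported.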
Since sound events with unknown locations may never exactly match the amplitudes recorded at known locations, interpolation may be the best way to approximate a location. Additionally, we know that formulae exist relating measured sound levels to distance from the source. Thus, we can also compute the distance from multiple recorded levels and compare that result with the one obtained by interpolation.

A simple example using two microphones: a sound registers at 12 on microphone A and 28 on microphone B. I have already recorded sounds at positions registering 10 on A and 30 on B; 20 on A and 20 on B; and 30 on A and 10 on B. I can interpolate that the sound is probably close to the "10 on A, 30 on B" spot, but slightly towards the "20/20" spot.

2. Proposed experimental methodology and design

2.1. Experiment layout

I place my four microphones at the corners of an imaginary rectangle of a predetermined size (probably 10'x10'). Since the microphones may have directionality, they should be oriented so as to best pick up sound within the rectangle. Each microphone
must be properly calibrated to the loudest sound possible within the experiment at very close proximity: we want the best possible SNR and to take advantage of all quantization steps of 16-bit audio.

Figure 1. Sound gathering setup diagram. The spot enclosed by concentric circles represents a sound event, with each circle being of diminishing amplitude as it gets further from the center. The microphone in the upper-right corner records the greatest amplitude for the event, followed by the lower-right, then the upper-left, etc.

2.2. Recording of audio data

We record on four audio channels simultaneously using the Echo Layla audio hardware [1], which supports simultaneous recording and playback on up to eight channels. We record the audio using conventional multi-track recording software for the PC, such as Cakewalk [6]. Although the WAV file format supports any number of audio channels (more than the 2 used for stereo), we have recorded to four separate audio files. We have also used the multi-tracking software to clean up the audio data, that is, to remove unneeded segments of silence and unexpected noise, leaving only the audio we are interested in working with.

I subdivide the rectangle enclosed by the microphones into a grid, and at each point on the grid I have recorded a loud percussive sound on all four channels, keeping track of each location so it may later be given as input to the software along with the audio data. This forms the audio data from which I derive the known sound-location data to be used later for interpolation.

Since I am unfortunately unable to track sound locations in real time, I have prerecorded the sounds with unknown locations that need to be tracked. Again, I have recorded on four audio channels via the four microphones. All sounds shall be within the rectangle, but they need not be at points on the grid, and I won't keep track of exact locations.
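Because the training sounds are made at grid points in a fixed order, the k-th recorded event can be mapped back to its known grid location. A minimal sketch of that bookkeeping, assuming a raster-scan recording order over a 5x5 grid (the helper name is hypothetical):

```java
public class GridPlan {
    // Map the k-th recorded event to its grid point, assuming the sounds
    // were made in raster-scan order over a side-by-side grid.
    static int[] locationOfEvent(int k, int side) {
        return new int[]{k % side, k / side}; // (x = column, y = row)
    }

    public static void main(String[] args) {
        int side = 5; // a 5x5 grid of 25 points
        for (int k = 0; k < side * side; k++) {
            int[] p = locationOfEvent(k, side);
            System.out.printf("event %2d -> (%d, %d)%n", k, p[0], p[1]);
        }
    }
}
```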
I should, however, note roughly where I made these sounds so that I may compare against the results when I later attempt to track the data.

2.3. Methodology for location tracking

We can track locations by two methods: 1) using a formula given three measured amplitude values, or 2) interpolating from pairs of measured amplitudes and distances.
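For method (2), the approach taken later in this paper is Lagrange's classical polynomial interpolation. A minimal, self-contained sketch of that algorithm, with an illustrative (made-up) amplitude-to-distance table and hypothetical names:

```java
public class Lagrange {
    // Lagrange's classical polynomial interpolation: evaluate, at query
    // point x, the unique polynomial passing through (xs[i], ys[i]).
    // The two nested loops give the O(n^2) cost per query.
    static double interpolate(double[] xs, double[] ys, double x) {
        double sum = 0;
        for (int i = 0; i < xs.length; i++) {
            double term = ys[i];
            for (int j = 0; j < xs.length; j++)
                if (j != i) term *= (x - xs[j]) / (xs[i] - xs[j]);
            sum += term;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Illustrative table only: measured amplitudes xs with the
        // corresponding measured distances ys for one microphone.
        double[] xs = {30, 20, 10};
        double[] ys = {1.0, 2.5, 5.0};
        System.out.println(interpolate(xs, ys, 25)); // distance for amplitude 25
    }
}
```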
Formula for plotting locations

Whichever of the two methods we use to find the distances from the sound source to the microphones, we ultimately use the same formula to plot the location of the source. By placing our microphones on the corners of a square, we have significantly simplified the calculations needed to determine the location. It would be possible to pinpoint the location of the source given the distances to only three microphones placed anywhere in a room, assuming all lie on the same plane, but this would require much more complex calculations.

We need the following values for the formula. Let w be the width of the square, which is also the distance between any pair of microphones not in opposite corners of the square. Let d_1 be the distance from the sound source to the microphone in the upper-left corner of the square, d_2 the distance to the microphone in the lower-left corner, d_3 the distance to the microphone in the upper-right corner, and d_4 the distance to the microphone in the lower-right corner. Though we have four distance values to use, we need only a pair from adjacent microphones. To find the pair of coordinates (x, y), given we decide to use d_1 and d_2, our formulas for x and y are:

    x = (d_1^2 - d_2^2 + w^2) / (2w)
    y = sqrt(d_1^2 - x^2)

To give a better approximation, we can repeat this formula for the pairs d_1 and d_3, d_2 and d_4, and d_3 and d_4. The four resulting estimates can then be averaged together to give a better approximation of the location.

Formula for determining distance

Given the amplitudes of the sound registered at the four microphones, we can calculate the distances we need. Let I be the true intensity of the sound, d_n be any one of the distances mentioned above, and I_n be the amplitude measured by the corresponding microphone.
The relationship between these is:

    I_n = I / d_n^2

I may remain unknown throughout, as it will prove not to be relevant. Using this relationship for each microphone together with the plotting formula above, we may derive a formula for a distance given three of the amplitudes. Substituting d_n^2 = I/I_n into d_1^2 = x^2 + y^2, with each coordinate expressed through an adjacent pair of microphones, yields a quadratic in the unknown intensity I:

    (a^2 + b^2) I^2 - 2w^2 (1/I_2 + 1/I_3) I + 2w^4 = 0,
    where a = 1/I_1 - 1/I_2 and b = 1/I_1 - 1/I_3,

so that

    I = w^2 [ (1/I_2 + 1/I_3) + sqrt( (1/I_2 + 1/I_3)^2 - 2(a^2 + b^2) ) ] / (a^2 + b^2)

(taking the root that yields a physically meaningful distance) and d_1 = sqrt(I/I_1). This can be repeated for each triplet of amplitudes to find the corresponding distance, i.e. for d_2 use I_2, I_4, and I_1, etc.

Procedure for interpolation

As an alternative to calculating distances with the formula above, we may interpolate a distance given a measured amplitude and a table of amplitudes with corresponding distances for a given microphone. We may wish to consider interpolation because our microphones may not behave exactly as the above equation predicts, or they may not be properly calibrated, i.e. I_n = k_n I / d_n^2, where k_1 ≠ k_2 ≠ k_3, etc. Our method of interpolation is Lagrange's classical formula of polynomial interpolation [2]. This algorithm involves two nested loops and runs in O(n^2), where n is the number of data points collected for interpolation. Our x_1..x_n are the measured amplitudes, y_1..y_n are the corresponding measured distances, and x is our measured amplitude with distance unknown. Once we find the four distances from the microphones, we may apply the plotting formula above.

Some basic assumptions

We need to make some assumptions about the experiment for the sake of simplification. It is certainly not impossible to account for these factors, but it would require a much more complex implementation. First, we assume that the microphones are omnidirectional with respect to the 90-degree field in front of them, as they are placed on the corners of our grid, facing the center.
Second, we assume that no objects occlude the sound from any of the microphones in a way that would affect the measured amplitude. Third, we assume that the acoustical properties of the room do not affect the measured amplitudes.
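As a sanity check of the plotting formula for one adjacent pair of microphones, here is a minimal sketch. The names are hypothetical, and the coordinate frame is an assumption: microphone A sits at the origin and microphone B a distance w away along the first axis.

```java
public class Locator {
    // Plotting formula for one adjacent microphone pair:
    //   x = (dA^2 - dB^2 + w^2) / (2w),  y = sqrt(dA^2 - x^2)
    // where dA, dB are the source's distances to microphones A and B.
    static double[] locate(double dA, double dB, double w) {
        double x = (dA * dA - dB * dB + w * w) / (2 * w);
        double y = Math.sqrt(Math.max(0, dA * dA - x * x));
        return new double[]{x, y};
    }

    public static void main(String[] args) {
        // A source at (3, 4) in a 10-unit square: dA = 5, dB = sqrt(65).
        double[] p = locate(5, Math.sqrt(65), 10);
        System.out.printf("(%.1f, %.1f)%n", p[0], p[1]); // (3.0, 4.0)
    }
}
```

Averaging the estimates from all four adjacent pairs, as the text proposes, is a straightforward extension of this single-pair helper.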
3. Software Implementation

I have implemented two pieces of software. One is an XML table constructor for use in interpolation; the other is the actual location tracker. Both share a module that scans audio files for the audio events whose locations we wish to record or determine. I have completed this implementation in the Java programming language, given its ease of rapid development and acceptable performance.

3.1. Audio file reader

The audio file reader processes files stored in the WAV file format. This was the default output format for audio exports from the Cakewalk multi-tracking software, and it is common in Windows environments. In WAV audio files the samples are stored little-endian, which is not how Java stores its integers, so the file reader decodes individual samples from the audio data accordingly.

For each file, the audio reader first reads a small number of samples to determine the baseline noise level. Next it scans through the file one sample at a time until it encounters data at some threshold above the noise level; this indicates the start of an audio event. The reader can report events through either their peak or their power level. Peaks are found by taking the greatest absolute sample within a window after the start of an event; powers are computed using a formula over some window of fixed length. Once an event is measured, the reader reads and disregards samples until the signal returns to the baseline level, then repeats the process of scanning until the next event is found. For more continuous audio, where there are no isolated events whose locations we wish to find, we can instead repeatedly compute peaks or power levels within a sliding window.

3.2. Location data compiling software

The location data compiling software has been set up very specifically for my experimental setup, which has audio events recorded at known locations on a grid with 25 points (5x5).
The software is set up to read four audio files and expects to find exactly 25 audio events in each. The location of each event is predetermined and methodical (raster-scan ordering), so corresponding locations on the grid may be recorded with the peak or power value. The event data is written in XML format, with an element for each audio event. Each element contains the peak or power value as an integer for each of the four microphones, with peak values ranging from 0 to the maximum absolute value for 16-bit PCM audio data, and the x and y coordinates on the grid, each ranging from 0 to 4.

3.3. Location tracking software

The location tracking software uses the same audio file reader module to find the audio events whose locations within the grid we wish to determine. We run four audio reader modules, one for the data from each microphone, simultaneously in separate threads. Each reader fills a buffer corresponding to its microphone as peak or power data for events is determined. When data is available in all four buffers, the main thread consumes from the front of each buffer and proceeds to determine the distances for the event.

The software determines the distances from the event to each of the microphones using implementations of the methods described in 2.3; it has a mode for each. In formula mode, the distances are determined directly from three measured amplitude values by applying the distance formula. Once all distances are determined, the location (coordinates) of the event may be computed. Interpolation mode has additional steps: before we may interpolate, tables must be constructed for each microphone. Before processing any audio data, the software reads in the XML data generated by the compiling software mentioned in 3.2, and for each point it calculates the distance from that point to each of the microphones. For instance, (0,0) is sqrt(2) from microphone 1, while (4,4) is the same distance from microphone 4. Each distance is placed in the table along with the measured amplitude.
It is from these tables that we apply the interpolation algorithm described earlier and subsequently find the location. Once we have determined the coordinates of the audio event on the 10x10 grid, we can plot it graphically. The software includes a module that plots events on a 2-D window with a representation of the 10x10 grid. The software performs the plotting in real time as the events are processed from the audio files.
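The reader's two core behaviors, little-endian sample decoding and threshold-then-peak event scanning, can be sketched as follows. The byte values, threshold, window size, and method names are illustrative assumptions, not the project's actual implementation:

```java
public class EventScanner {
    // Decode 16-bit little-endian PCM (as in WAV data) into signed samples.
    // Java integers are not stored this way, so each byte pair is
    // reassembled by hand: low byte unsigned, high byte carrying the sign.
    static short[] decode(byte[] raw) {
        short[] out = new short[raw.length / 2];
        for (int i = 0; i < out.length; i++) {
            int lo = raw[2 * i] & 0xFF;
            int hi = raw[2 * i + 1];
            out[i] = (short) ((hi << 8) | lo);
        }
        return out;
    }

    // Scan for the first sample above a threshold over the noise floor and
    // report the peak: the greatest absolute sample in a window after it.
    static int firstEventPeak(short[] s, int threshold, int window) {
        for (int i = 0; i < s.length; i++) {
            if (Math.abs(s[i]) > threshold) {
                int peak = 0;
                for (int j = i; j < Math.min(s.length, i + window); j++)
                    peak = Math.max(peak, Math.abs(s[j]));
                return peak;
            }
        }
        return -1; // no event found
    }

    public static void main(String[] args) {
        // Samples 0, 1000, 12000, 800, 0 encoded as little-endian bytes.
        short[] s = decode(new byte[]{0, 0, (byte) 0xE8, 0x03,
                (byte) 0xE0, 0x2E, 0x20, 0x03, 0, 0});
        System.out.println(firstEventPeak(s, 500, 3)); // 12000
    }
}
```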
4. Results

The results for this experiment should answer the question: how well can the software approximate the locations of recorded sounds? We shall show in this section that the results were well below my expectations.

Experimentation with the software ran with two mode switches: peak vs. power level, and interpolation vs. formula-based distance approximation, resulting in 4 possible modes. I ran the trials using four sets of audio files, with one file for each microphone in a set. The first set of files was also used to generate the interpolation table.

The most successful results came from running the software, in peak mode with interpolation, over the same audio tracks used to create the interpolation data. This is to be expected. With the exception of the final row of points, the locations were plotted exactly where I would expect them; this serves as a baseline for interpolation mode. However, when the software was run on that same data in formula mode, very few points were plotted at all, and they were not in the correct locations. Interpolation on the first set of audio files with unknown locations failed to plot any points. Formula mode on that first unknown set produced some plotted points, but their path does not resemble that of the sound from when I gathered the data; some points may be correct, but it is unclear how many. Furthermore, for the remaining two sets of audio files, the results show very sparse points or none plotted at all.

5. Conclusion

The question remains of what went wrong. Having no other choice, I performed the experiment with four different microphones. These microphones varied in quality and directionality, and perhaps in having a nonlinear response to the audio signal.
I made every attempt to prevent sound from being occluded by objects in the field, including myself, but that may have proved too difficult for one person to manage on a first attempt. Finally, it is quite possible that the nature of the sounds I used did not suit this experiment: they may not have been consistent enough to work with the simple model and interpolation, or they may not have produced the consistent spherical waves that would also be necessary.

In all, it was of course disappointing that I could not track locations for my groups of unknown audio. However, I consider this work a positive experience in a number of ways. It required serious planning to devise the experiment, do the audio capture, and write and test the software. I was forced to refine my ideas and consider what was really required of the software to accomplish this experiment. I had intended to set up the audio capture environment a second time to try to produce better results, but unforeseen circumstances prevented that. In the end, the results may not have been to my liking, but I am pleased to have had the opportunity to attempt this work. Additionally, I am proud of the software written for this project, and I feel that working on it has given me a better feel for what goes into audio processing.

References

[1] Echo Layla product description page.
[2] Numerical interpolation: polynomial interpolation. interpolation/num interpolation.cfm#polynomial.
[3] M. Brandstein, J. Adcock, and H. Silverman. A closed-form location estimator for use with room environment microphone arrays. IEEE Transactions on Speech and Audio Processing, 5:45-50.
[4] P. Georgiou, C. Kyriakakis, and P. Tsakalides. Robust time delay estimation for sound source localization in noisy environments. In 1997 IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics, pages 19-22.
[5] D. Sturim, M. Brandstein, and H. Silverman.
Tracking multiple talkers using microphone-array measurements. In 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-97), volume 1.
[6] Twelve Tone Systems, Inc. Cakewalk website.
More informationA Comparison Between Camera Calibration Software Toolboxes
2016 International Conference on Computational Science and Computational Intelligence A Comparison Between Camera Calibration Software Toolboxes James Rothenflue, Nancy Gordillo-Herrejon, Ramazan S. Aygün
More informationCalibration of Microphone Arrays for Improved Speech Recognition
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Calibration of Microphone Arrays for Improved Speech Recognition Michael L. Seltzer, Bhiksha Raj TR-2001-43 December 2001 Abstract We present
More informationSt. Marks Arrays. <coeff sets 1 & 2, excel doc w/ steering values, array program, > 1. System Setup Wiring & Connection diagram...
St. Marks Arrays Contents 0. Included Documents: 1. System Setup......... 2 1.1 Wiring & Connection diagram..... 2 1.2 Optimum Equipment
More informationCompensation of Analog-to-Digital Converter Nonlinearities using Dither
Ŕ periodica polytechnica Electrical Engineering and Computer Science 57/ (201) 77 81 doi: 10.11/PPee.2145 http:// periodicapolytechnica.org/ ee Creative Commons Attribution Compensation of Analog-to-Digital
More informationDigital Signal Processing of Speech for the Hearing Impaired
Digital Signal Processing of Speech for the Hearing Impaired N. Magotra, F. Livingston, S. Savadatti, S. Kamath Texas Instruments Incorporated 12203 Southwest Freeway Stafford TX 77477 Abstract This paper
More informationM-16DX 16-Channel Digital Mixer
M-16DX 16-Channel Digital Mixer Workshop Using the M-16DX with a DAW 2007 Roland Corporation U.S. All rights reserved. No part of this publication may be reproduced in any form without the written permission
More informationAcoustic Based Angle-Of-Arrival Estimation in the Presence of Interference
Acoustic Based Angle-Of-Arrival Estimation in the Presence of Interference Abstract Before radar systems gained widespread use, passive sound-detection based systems were employed in Great Britain to detect
More information3D Printed Metamaterial Acoustics Lens University of Illinois at Urbana-Champaign Spring 2016 Daniel Gandy & Guangya Niu
3D Printed Metamaterial Acoustics Lens University of Illinois at Urbana-Champaign Spring 2016 Daniel Gandy & Guangya Niu 1 Introduction Acoustic lenses, which focus sound in much the same way that an optical
More informationConnect 4. Figure 1. Top level simplified block diagram.
Connect 4 Jonathon Glover, Ryan Sherry, Sony Mathews and Adam McNeily Electrical and Computer Engineering Department School of Engineering and Computer Science Oakland University, Rochester, MI e-mails:jvglover@oakland.edu,
More informationDSP VLSI Design. DSP Systems. Byungin Moon. Yonsei University
Byungin Moon Yonsei University Outline What is a DSP system? Why is important DSP? Advantages of DSP systems over analog systems Example DSP applications Characteristics of DSP systems Sample rates Clock
More informationCHAPTER 5. Digitized Audio Telemetry Standard. Table of Contents
CHAPTER 5 Digitized Audio Telemetry Standard Table of Contents Chapter 5. Digitized Audio Telemetry Standard... 5-1 5.1 General... 5-1 5.2 Definitions... 5-1 5.3 Signal Source... 5-1 5.4 Encoding/Decoding
More informationObjectives. Materials
. Objectives Activity 8 To plot a mathematical relationship that defines a spiral To use technology to create a spiral similar to that found in a snail To use technology to plot a set of ordered pairs
More informationSelect and apply a range of processes to enhance sound in a performance context. Level 3 Credits 6 Student Name:
28007 - Select and apply a range of processes to enhance sound in a performance context Level 3 Credits 6 Student Name: Students are free to use this template for providing evidence for this Unit Standard.
More informationAudio Signal Compression using DCT and LPC Techniques
Audio Signal Compression using DCT and LPC Techniques P. Sandhya Rani#1, D.Nanaji#2, V.Ramesh#3,K.V.S. Kiran#4 #Student, Department of ECE, Lendi Institute Of Engineering And Technology, Vizianagaram,
More informationA Parametric Model for Spectral Sound Synthesis of Musical Sounds
A Parametric Model for Spectral Sound Synthesis of Musical Sounds Cornelia Kreutzer University of Limerick ECE Department Limerick, Ireland cornelia.kreutzer@ul.ie Jacqueline Walker University of Limerick
More informationIncuCyte ZOOM Fluorescent Processing Overview
IncuCyte ZOOM Fluorescent Processing Overview The IncuCyte ZOOM offers users the ability to acquire HD phase as well as dual wavelength fluorescent images of living cells producing multiplexed data that
More informationConvention e-brief 400
Audio Engineering Society Convention e-brief 400 Presented at the 143 rd Convention 017 October 18 1, New York, NY, USA This Engineering Brief was selected on the basis of a submitted synopsis. The author
More informationMultivariate Regression Algorithm for ID Pit Sizing
IV Conferencia Panamericana de END Buenos Aires Octubre 2007 Abstract Multivariate Regression Algorithm for ID Pit Sizing Kenji Krzywosz EPRI NDE Center 1300 West WT Harris Blvd. Charlotte, NC 28262 USA
More informationReducing Magnetic Interaction in Reed Relay Applications
RELAY APPLICATIONS MEDER electronic Reducing Magnetic Interaction in Reed Relay Applications Reed Relays are susceptible to magnetic effects which may degrade performance under certain conditions. This
More informationfile://c:\all_me\prive\projects\buizentester\internet\utracer3\utracer3_pag5.html
Page 1 of 6 To keep the hardware of the utracer as simple as possible, the complete operation of the utracer is performed under software control. The program which controls the utracer is called the Graphical
More informationME scope Application Note 01 The FFT, Leakage, and Windowing
INTRODUCTION ME scope Application Note 01 The FFT, Leakage, and Windowing NOTE: The steps in this Application Note can be duplicated using any Package that includes the VES-3600 Advanced Signal Processing
More informationAcoustic Resonance Lab
Acoustic Resonance Lab 1 Introduction This activity introduces several concepts that are fundamental to understanding how sound is produced in musical instruments. We ll be measuring audio produced from
More informationBEAMFORMING WITHIN THE MODAL SOUND FIELD OF A VEHICLE INTERIOR
BeBeC-2016-S9 BEAMFORMING WITHIN THE MODAL SOUND FIELD OF A VEHICLE INTERIOR Clemens Nau Daimler AG Béla-Barényi-Straße 1, 71063 Sindelfingen, Germany ABSTRACT Physically the conventional beamforming method
More informationTutorial on the Statistical Basis of ACE-PT Inc. s Proficiency Testing Schemes
Tutorial on the Statistical Basis of ACE-PT Inc. s Proficiency Testing Schemes Note: For the benefit of those who are not familiar with details of ISO 13528:2015 and with the underlying statistical principles
More informationON THE ENUMERATION OF MAGIC CUBES*
1934-1 ENUMERATION OF MAGIC CUBES 833 ON THE ENUMERATION OF MAGIC CUBES* BY D. N. LEHMER 1. Introduction. Assume the cube with one corner at the origin and the three edges at that corner as axes of reference.
More informationSEPTEMBER VOL. 38, NO. 9 ELECTRONIC DEFENSE SIMULTANEOUS SIGNAL ERRORS IN WIDEBAND IFM RECEIVERS WIDE, WIDER, WIDEST SYNTHETIC APERTURE ANTENNAS
r SEPTEMBER VOL. 38, NO. 9 ELECTRONIC DEFENSE SIMULTANEOUS SIGNAL ERRORS IN WIDEBAND IFM RECEIVERS WIDE, WIDER, WIDEST SYNTHETIC APERTURE ANTENNAS CONTENTS, P. 10 TECHNICAL FEATURE SIMULTANEOUS SIGNAL
More information! Understanding Microphones
! Understanding Microphones A microphoneʼs job is generally to try to capture, as closely as possible, a sound source. This could be a voice or an instrument. We can also use a microphone to infuse a specific
More informationIndoor Location Detection
Indoor Location Detection Arezou Pourmir Abstract: This project is a classification problem and tries to distinguish some specific places from each other. We use the acoustic waves sent from the speaker
More information[Q] DEFINE AUDIO AMPLIFIER. STATE ITS TYPE. DRAW ITS FREQUENCY RESPONSE CURVE.
TOPIC : HI FI AUDIO AMPLIFIER/ AUDIO SYSTEMS INTRODUCTION TO AMPLIFIERS: MONO, STEREO DIFFERENCE BETWEEN STEREO AMPLIFIER AND MONO AMPLIFIER. [Q] DEFINE AUDIO AMPLIFIER. STATE ITS TYPE. DRAW ITS FREQUENCY
More informationAsst. Prof. Thavatchai Tayjasanant, PhD. Power System Research Lab 12 th Floor, Building 4 Tel: (02)
2145230 Aircraft Electricity and Electronics Asst. Prof. Thavatchai Tayjasanant, PhD Email: taytaycu@gmail.com aycu@g a co Power System Research Lab 12 th Floor, Building 4 Tel: (02) 218-6527 1 Chapter
More informationSpring 2005 Group 6 Final Report EZ Park
18-551 Spring 2005 Group 6 Final Report EZ Park Paul Li cpli@andrew.cmu.edu Ivan Ng civan@andrew.cmu.edu Victoria Chen vchen@andrew.cmu.edu -1- Table of Content INTRODUCTION... 3 PROBLEM... 3 SOLUTION...
More informationDESIGN OF GLOBAL SAW RFID TAG DEVICES C. S. Hartmann, P. Brown, and J. Bellamy RF SAW, Inc., 900 Alpha Drive Ste 400, Richardson, TX, U.S.A.
DESIGN OF GLOBAL SAW RFID TAG DEVICES C. S. Hartmann, P. Brown, and J. Bellamy RF SAW, Inc., 900 Alpha Drive Ste 400, Richardson, TX, U.S.A., 75081 Abstract - The Global SAW Tag [1] is projected to be
More informationNew System Simulator Includes Spectral Domain Analysis
New System Simulator Includes Spectral Domain Analysis By Dale D. Henkes, ACS Figure 1: The ACS Visual System Architect s System Schematic With advances in RF and wireless technology, it is often the case
More informationBy: Valerie Chen, Coco Chou, Amelia Whitworth
Software Python: a programming software that supports multiple external applications - Interprets signals from proximity sensors. - Measures distance to determine drum set sound. - Communicates sensor
More informationPerformance Factors. Technical Assistance. Fundamental Optics
Performance Factors After paraxial formulas have been used to select values for component focal length(s) and diameter(s), the final step is to select actual lenses. As in any engineering problem, this
More informationDetermination of an unknown frequency (beats)
Teacher's/Lecturer's Sheet Determination of an unknown frequency (beats) (Item No.: P6011900) Curricular Relevance Area of Expertise: Physics Education Level: Age 16-19 Topic: Acoustics Subtopic: Wave
More informationBriefing. Briefing 24 People. Keep everyone s attention with the presenter front and center. C 2015 Cisco and/or its affiliates. All rights reserved.
Briefing 24 People Keep everyone s attention with the presenter front and center. 3 1 4 2 Product ID Product CTS-SX80-IPST60-K9 Cisco TelePresence Codec SX80 1 Included in CTS-SX80-IPST60-K9 Cisco TelePresence
More informationFinal Exam Study Guide: Introduction to Computer Music Course Staff April 24, 2015
Final Exam Study Guide: 15-322 Introduction to Computer Music Course Staff April 24, 2015 This document is intended to help you identify and master the main concepts of 15-322, which is also what we intend
More informationLC-10 Chipless TagReader v 2.0 August 2006
LC-10 Chipless TagReader v 2.0 August 2006 The LC-10 is a portable instrument that connects to the USB port of any computer. The LC-10 operates in the frequency range of 1-50 MHz, and is designed to detect
More informationDrawing Bode Plots (The Last Bode Plot You Will Ever Make) Charles Nippert
Drawing Bode Plots (The Last Bode Plot You Will Ever Make) Charles Nippert This set of notes describes how to prepare a Bode plot using Mathcad. Follow these instructions to draw Bode plot for any transfer
More informationA Novel Approach of Compressing Images and Assessment on Quality with Scaling Factor
A Novel Approach of Compressing Images and Assessment on Quality with Scaling Factor Umesh 1,Mr. Suraj Rana 2 1 M.Tech Student, 2 Associate Professor (ECE) Department of Electronic and Communication Engineering
More informationSpring 2004 M2.1. Lab M2. Ultrasound: Interference, Wavelength, and Velocity
Spring 2004 M2.1 Lab M2. Ultrasound: Interference, Wavelength, and Velocity The purpose in this lab exercise is to become familiar with the properties of waves: frequency, wavelength, phase and velocity.
More informationImage De-Noising Using a Fast Non-Local Averaging Algorithm
Image De-Noising Using a Fast Non-Local Averaging Algorithm RADU CIPRIAN BILCU 1, MARKKU VEHVILAINEN 2 1,2 Multimedia Technologies Laboratory, Nokia Research Center Visiokatu 1, FIN-33720, Tampere FINLAND
More information