A Predefined Command Recognition System Using a Ceiling Microphone Array in Noisy Housing Environments

Digital Human Symposium 2009, March 4th, 2009

Yoko Sasaki *a *b, Satoshi Kagami *b *c, Hiroshi Mizoguchi *a *b, Tadashi Enomoto *d
y-sasaki@aist.go.jp  s.kagami@aist.go.jp  hm@rs.noda.tus.ac.jp  enomoto.tadashi@b5.kepco.co.jp

*a : School of Science and Technology, Tokyo University of Science
*b : Digital Human Research Center, National Institute of Advanced Industrial Science and Technology (AIST)
*c : CREST, Japan Science and Technology Agency (JST)
*d : Kansai Electric Power Company, Inc.

Abstract

This paper describes a speech recognition system that detects basic voice commands for a mobile robot operating in a home space. The system recognizes speech uttered at arbitrary times, together with its position information, in a noisy housing environment. The microphone array is attached to the ceiling; it localizes sound source directions in azimuth and elevation, and then separates multiple sound sources using Delay and Sum Beamforming (DSBF) and Frequency Band Selection (FBS). We implement the sound localization and separation methods on our 32-channel microphone array, and the separated sound sources are recognized using an open-source speech recognizer. Localization, separation, and recognition all run as online processing in the real world. We define four indices to evaluate the performance of the recognition system, and experiments in varied conditions confirm its effectiveness in noisy environments and with distant sound sources. Finally, an application as a mobile robot interface is reported.

1. INTRODUCTION

An auditory system is useful for a robot to communicate with humans and to notice changes in its environment. To achieve effective auditory functionality in the real world, sound localization, separation, and recognition must be combined into a total system. There is a growing body of research on robot audition systems for human-robot interaction, and various methods have been proposed for each auditory function.

Blind Source Separation (BSS) based on Independent Component Analysis (ICA) has been studied extensively. A real-time separation system with two microphones [1] showed good performance for hands-free speech recognition and has been applied to robots as a human-robot verbal conversation interface. The method is known for high-performance separation with a small system; on the other hand, its assumption of frequency independence between sound sources limits it in reverberant fields and when handling many simultaneous sources.

As for room-integrated sensors, work on the 1020-node LOUD microphone array [2] shows good beamforming performance and can localize a human voice in a room. More recent work [3] reported tracking of human voices using a 64-channel microphone array distributed in a room, where particle filtering improved tracking performance. Several methods target real-world implementation: a GMM-based speech end-point detection method and an outlier-robust generalized sidelobe canceller were implemented on a 12-channel microphone array [4], and a missing-feature algorithm was proposed for robust localization and segmentation [5]. As a total auditory system, a real-time system [6] performs simultaneous speech recognition with a robot-embedded microphone array.
It handles multiple sound sources at close range and achieves a good recognition rate for known speech arriving from the front of the robot head, using a preliminarily measured impulse response and parameters optimized for recognition. Nakadai et al. reported an application of simultaneous multiple-sound recognition built on this system [7].

In this paper, we present a newly developed online voice command recognition system for noisy environments using a microphone array. To detect verbal commands uttered at arbitrary times and positions in a home space, the following properties are needed: 1) the system works online, 2) it does not rely on environmental information, and 3) recognition is robust to various sound sources, including non-speech sources. We propose a command recognition system based on beamforming for sound localization and separation, implemented on a microphone array in each room of a home space. The extracted sound sources are recognized using an open-source recognizer designed for a close-talking microphone. The proposed system needs no preliminarily measured environmental information and adapts to varied conditions, such as multiple target sound sources that require simultaneous recognition, or distant target sound sources combined with noise close to the microphone array. We define four indices to evaluate the recognition system and measure its performance in varied conditions. Finally, an application of the voice command recognition system to controlling a mobile robot in a home environment is shown.

2. SOUND LOCALIZATION AND SEPARATION

This section describes our approach to localizing and separating multiple sound sources with a microphone array.

2.1. FBS Based 2D Sound Localization

The localization method has two main parts. First, Delay and Sum Beamforming (DSBF) enhances the signal arriving from a focused direction to obtain a sound pressure distribution, called the spatial spectrum. Second, Frequency Band Selection (FBS) [8], a kind of binary mask, filters out the signal of a louder detected sound so that weaker sound sources can be localized simultaneously. The localization system thus detects multiple sound sources from the highest power intensity to the lowest at each time step.

Aligning the phases of the microphone signals amplifies the desired sound and attenuates ambient noise. Let O be the microphone array center, and let C_j be a focus position on an O-centered hemisphere whose radius is larger than the array size. The focus positions are placed on a linear triangular grid on the surface of the sphere [9], so that sound source positions can be localized in azimuth and elevation. Using the median point of each uniform triangle on the sphere's surface as the DSBF focus point C_j, the system estimates the sound pressure distribution in two dimensions. Let t be time and (\theta, \phi) the azimuth and elevation of C_j. The delay time \tau_{ji} for the i-th microphone (1 \le i \le N) is determined by the microphone arrangement. The beamformed signal y_j for the focus direction is expressed as equation (1):

    y_j(\omega, t) = \sum_{i=1}^{N} W_i(\omega, \theta, \phi) \, x_i(t + \tau_{ji})    (1)

where N is the number of microphones, x_i is the signal observed by the i-th microphone, and W_i is a corrective weight for the microphone's directivity.

We apply FBS after DSBF to detect multiple sound sources. FBS assumes that the frequency components of the individual signals are independent; it is a binary mask that segregates the target sound source from the mixture by selecting the frequency components judged to originate from the target. The process is as follows. Let X_a(\omega) be the frequency components of the DSBF-enhanced signal for position a, and X_b(\omega) those for position b. The selected frequency component X_{as}(\omega) for position a is expressed as equation (2):

    X_{as}(\omega) = M_a(\omega) X_a(\omega), \qquad
    M_a(\omega) = \begin{cases} 1 & \text{if } |X_a(\omega)| \ge |X_b(\omega)| \\ 0 & \text{otherwise} \end{cases}    (2)

This process rejects the attenuated noise signal from the DSBF-enhanced signal, and the segregated waveform is obtained by the inverse Fourier transform of X_{as}(\omega). When the frequency components of the signals are truly independent, FBS separates the desired sound source completely. This assumption usually holds well for human voices and everyday sounds of limited duration.

The spatial spectrum for directional localization, which gives the sound pressure distribution over a frame, is expressed as equation (3):

    Q_K(\theta, \phi) = \frac{\sum_{\omega} \prod_{k=0}^{K-1} \bigl(1 - M_k(\omega)\bigr) |Y(\omega)|^2}{\sum_{\omega} \prod_{k=0}^{K-1} \bigl(1 - M_k(\omega)\bigr)}    (3)

where Y(\omega) is the Fast Fourier Transform of the DSBF-enhanced signal y, and M_k is the separation mask for the k-th loudest sound source generated by FBS.

Fig. 1 shows the calculation flow of FBS-based multiple sound source localization. In the DSBF phase, the system scans the formed beam over the spherical grid and obtains the spatial spectrum, i.e., the sound pressure distribution on the hemisphere. In the FBS phase, the system first detects the loudest sound direction as the maximum peak of the spatial spectrum, then filters out the loudest sound signal with FBS and localizes the second strongest source, and so on for further sources.
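The DSBF scan in equations (1)-(3) can be illustrated with a short sketch. The following is a minimal far-field version under our own assumptions (unit weights W_i = 1, integer-sample delays, a hypothetical `mic_xyz` coordinate array); it is not the authors' implementation.

```python
import numpy as np

C_SOUND = 343.0  # speed of sound (m/s), assumed

def dsbf_power(x, mic_xyz, az, el, fs=16000):
    """Delay-and-sum power for one focus direction (az, el in radians).

    x       : (N, T) array of microphone signals
    mic_xyz : (N, 3) microphone positions relative to the array center
    Returns the mean power of the beamformed signal, i.e. one value of the
    spatial spectrum for this direction.
    """
    # Unit vector toward the focus direction; elevation 0 = straight down,
    # 90 deg = horizontal, matching the paper's convention (assumption).
    d = np.array([np.sin(el) * np.cos(az), np.sin(el) * np.sin(az), -np.cos(el)])
    arrival = mic_xyz @ d / C_SOUND                       # arrival-time advance per mic
    shifts = np.round((arrival.max() - arrival) * fs).astype(int)  # compensating delays
    T = x.shape[1] - shifts.max()
    y = np.zeros(T)
    for xi, s in zip(x, shifts):                          # align phases and sum (eq. 1, W_i = 1)
        y += xi[s:s + T]
    y /= len(x)
    return np.mean(y ** 2)

def spatial_spectrum(x, mic_xyz, grid):
    """Scan the beam over (az, el) grid points and return the spatial spectrum."""
    return np.array([dsbf_power(x, mic_xyz, az, el) for az, el in grid])
```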
Fig. 1: Calculation flow of FBS-based multiple sound source localization (DSBF phase: scan the focus over the grid and obtain the spatial spectrum; FBS phase: detect the loudest source direction, filter out its signal, detect the second strongest direction, and repeat for further sources).

2.2. FBS Based Multiple Sound Separation

The sound source separation algorithm is almost the same as the localization: it segregates the sound sources in the directions detected at the localization stage. For robust recognition, the mask M in equation (2) is revised; the mask vector M' used for separation is expressed as equation (4):

    M'_a(\omega) = \begin{cases} 1 & \text{if } |X_a(\omega)| \ge |X_b(\omega)| \\ 0.5 & \text{else if } |X_a(\omega)| \ge 0.5\,|X_b(\omega)| \\ 0 & \text{otherwise} \end{cases}    (4)

For the recognition stage, the interval in which sound exists is extracted from each separated source by a commonly used power-based Voice Activity Detection (VAD) function. It assumes that there are intervals of silence before and after speech, and detects the beginning and end times of speech relative to the maximum power within the separated stream. This simple VAD is sufficient for the proposed system because VAD is mainly used here to reject streams broken by a separation interval: it is applied to separated sound streams, each stream always contains some sound signal, and the noise level is stationary within a separated stream.
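As an illustration of the three-level separation mask in equation (4), the sketch below applies it to two DSBF-enhanced spectra. The frame-wise FFT handling and the function name are our own assumptions, not the authors' code.

```python
import numpy as np

def fbs_separation_mask(X_a, X_b):
    """Three-level FBS mask of eq. (4): keep, halve, or drop each frequency bin.

    X_a, X_b : complex spectra of the DSBF-enhanced signals for positions a and b.
    Returns the masked spectrum for position a; its inverse FFT gives the
    segregated waveform.
    """
    mag_a, mag_b = np.abs(X_a), np.abs(X_b)
    mask = np.where(mag_a >= mag_b, 1.0,
                    np.where(mag_a >= 0.5 * mag_b, 0.5, 0.0))
    return mask * X_a

# Usage sketch: separate one frame of the source at position a.
# X_a = np.fft.rfft(frame_a)   # DSBF output steered to position a
# X_b = np.fft.rfft(frame_b)   # DSBF output steered to the competing position b
# y_a = np.fft.irfft(fbs_separation_mask(X_a, X_b))
```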

3. COMMAND RECOGNITION SYSTEM

This section gives an overview of our command recognition system and its online implementation.

3.1. System Overview

For an online implementation, deciding when the system should stop separating and start recognizing is an important problem. Our system sets the separation and recognition interval T_max from the longest sentence in the command dictionary. By separating the past T_max + T_p (T_p relative to T_max) of data with a cycle of T_max, the system can detect a voice command uttered at an arbitrary time in at least one separated sound stream.

Fig. 2 shows the calculation flow of the command recognition system. The localization module outputs azimuth and elevation pairs once per FFT data length, and this instantaneous sound source localization runs continuously. The sound source separation and recognition modules run at T_max intervals. From the instantaneous localization results of the past T_max + T_p, the system estimates the number of sound sources and their positions, then segregates each detected source. VAD is applied to each separated source, and the extracted sound sources are passed to the recognizer if the data is not cut off by a separation interval boundary.

Fig. 2: Calculation flow of the command recognition system (FBS-based sound source localization runs continuously; sound position detection, separation, VAD, and command recognition run every T_max and output the recognized command with its sound position and time (θ, φ, t)).

For command recognition we use the Japanese speech recognition engine Julian [10] with a user-defined command dictionary. The dictionary contains a limited set of words and 4 sentence constructs, such as "go to (somewhere)", "come here", and greetings. Its size is kept small to prevent erroneous recognition of other sound sources (especially non-speech sources).

3.2. Microphone Array

The proposed command recognition system is tested with a 32-channel microphone array unit attached to the ceiling. Fig. 3 shows the microphone array and its microphone arrangement. The array has 32 omnidirectional electret condenser microphones and samples all channels simultaneously at a sampling frequency of 16 (kHz) with 16-bit resolution. It localizes azimuth omnidirectionally (0 to 359 (deg)) and elevation from directly below (0 (deg)) to the horizontal direction (90 (deg)). For the implementation, the system initially places 16 grids on the surface of the hemisphere, and each grid is divided into four smaller triangles for a fine search around detected source positions. With this array, the system handles multiple sound sources with different power levels in varied environments without using environmental information.

Fig. 3: The 32-channel ceiling microphone array: a) array on the ceiling, b) microphone arrangement.

Six microphone array units are attached to the ceiling of the experimental house Holone. Each unit works independently, and recognition results with position information are sent to a mobile robot. Fig. 4 shows the microphone array units and their arrangement in each room: 1) entrance, 2) study, 3) living room, 4), 5) kitchen, 6) bedroom.

Fig. 4: Microphone array units in the experimental house Holone.
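To make the T_max/T_p scheduling concrete, the toy sketch below generates the overlapping separation windows and checks which of them fully contain a given utterance; streams cut by a window boundary are the ones rejected by VAD, as described in Section 2.2. The parameter values are placeholders, not the ones used in the paper.

```python
# Every T_MAX seconds the system separates the most recent T_MAX + T_P seconds
# of data, so consecutive windows overlap by T_P (values below are assumed).
T_MAX = 3.0   # separation/recognition cycle (s), placeholder
T_P = 0.5     # extra look-back (s), placeholder

def separation_windows(t_end, n=5):
    """Return the last n separation windows (start, end) ending at or before t_end."""
    k0 = int(t_end // T_MAX)
    return [(k * T_MAX - (T_MAX + T_P), k * T_MAX) for k in range(k0 - n + 1, k0 + 1)]

def covering_windows(utt_start, utt_end, windows):
    """Windows that contain the whole utterance (a stream cut at a boundary is dropped)."""
    return [w for w in windows if w[0] <= utt_start and utt_end <= w[1]]

print(separation_windows(30.0))
print(covering_windows(24.0, 26.5, separation_windows(30.0)))
```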
4. EXPERIMENT

This section presents experimental results for the proposed command recognition system installed on the 32-channel ceiling microphone array. For the implementation, the calculation interval T_max is set to . (sec) and the data length parameter T_p to .5 (sec).

4.1. Evaluation of Sound Source Localization

The accuracy of two-dimensional sound source localization is evaluated. The experimental room has a reverberation time (T60) of 450 (msec) and a background noise level (L_A) of 2 (dB). For the calculation, the frame length is 1024 points (64 msec) for one instance of localization. For the evaluation of 2D directional localization, the angular error is defined as the inter-vector angle between the direction of the estimated sound source and that of the real sound source. As shown in Fig. 3, the microphone array has a rotationally symmetric arrangement in one plane, so localization performance in the azimuth direction is uniform.

Experiments are performed under two conditions. The first condition (C1) uses one sound source at different elevation angles: a loudspeaker playing male speech, placed at a distance r = 2. (m) from the microphone array, with an SNR of about 15 (dB) over the background noise. The second condition (C2) uses two sound sources at different distances. One loudspeaker is set at (r, θ, φ) = (2 (m), 180 (deg), 60 (deg)) from the array center and plays female speech (not containing command sentences) or classical music as a noise source. The other loudspeaker is set at 90 (deg) azimuth, and its horizontal distance is varied over 1, 3, 5, 7 and 10 (m). Its height below the microphone array is constant at 1.15 (m). The volume of the two sources is the same, and the SNR is about 15 (dB) over the background noise.

Fig. 5 shows the results for condition C1, evaluating the performance change over elevation angle. Each point is the average angular error of 20 (sec) of data for that direction. The angular error is 9 to 18 (deg) and does not depend on elevation angle. For elevation angles above 45 (deg), the azimuth error is smaller than the elevation error, and the total angular error is close to the elevation error. This indicates that localization is weaker in elevation than in azimuth, and that the angular error is mainly caused by elevation error for elevations above 45 (deg).

Fig. 5: Localization angular error for one sound source (angular, azimuth, and elevation error vs. elevation angle).

Next, the effect of the distance between the microphone array and the sound source is evaluated. Fig. 6 shows the results for condition C2; the x axis is the horizontal distance from the microphone array center to the sound source. The angular error is 6 to 14 (deg) and does not depend on distance. The fluctuation of the angular error is considered to come from the resolution of the implemented spherical grid. As in Fig. 5, the elevation error is close in value to the angular error.

Fig. 6: Localization angular error for two sound sources (angular, azimuth, and elevation error vs. distance).

The results for the two conditions indicate that localization accuracy is not affected by variations in elevation angle or sound pressure level. Excluding positions directly below the microphone array and in horizontal directions, the system can localize multiple sound source positions in azimuth and elevation.
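The inter-vector angular error used in this evaluation can be computed directly from (azimuth, elevation) pairs. The helper below is a small sketch under the paper's convention that elevation runs from directly below the array (0 deg) to the horizontal plane (90 deg); the function names are ours.

```python
import numpy as np

def direction_vector(az_deg, el_deg):
    """Unit vector for (azimuth, elevation); elevation 0 deg points straight
    down from the ceiling array and 90 deg is horizontal, as in the paper."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.sin(el) * np.cos(az),
                     np.sin(el) * np.sin(az),
                     -np.cos(el)])

def angular_error_deg(est, true):
    """Inter-vector angle (deg) between estimated and true (azimuth, elevation)."""
    u, v = direction_vector(*est), direction_vector(*true)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

# e.g. an estimate 10 deg off in azimuth at 60 deg elevation:
print(angular_error_deg((100.0, 60.0), (90.0, 60.0)))  # about 8.7 deg
```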
4.2. Evaluation Indices of the Recognition System

Four indices are defined to evaluate the recognition system:

Word Correct Rate: the proportion of correctly recognized words to the total number of recognized words.
Task Achievement Rate: the proportion of commands correctly recognized at the target position and timing to the total number of command utterances.
Error Recognition Rate: the proportion of erroneously recognized commands to the total number of separated sound sources.
Target Separation Rate: the proportion of sound sources separated at the correct position and time to the total number of command utterances.

The Word Correct Rate shows the quality of the separated source for recognition; errors are mainly caused by surface variation of spoken Japanese. For example, "(place) ni motte itte" and "(place) e motte ike" both mean "bring it to (place)", but the second and third words are registered independently in the word dictionary, so the index is 1/3 in this instance. The Task Achievement Rate excludes such differences: a recognized sentence is counted as correct when its meaning is correct, so the index reflects the usefulness for an application. The Error Recognition Rate contains two kinds of errors: misrecognition of a known voice command, and false-positive recognition of a non-command sound source. Both affect applications using the recognition system, so this index also reflects application-level performance. The Target Separation Rate counts separated sound sources, after VAD, whose position and timing correspond to real events; it shows the performance of the proposed localization and separation method.
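As a worked illustration of these definitions, the helper below computes the four rates from raw counts. The argument names and the numbers in the usage line are ours, not values from the paper.

```python
def evaluation_indices(correct_words, recognized_words,
                       correct_commands, command_utterances,
                       error_commands, separated_sources,
                       correct_separations):
    """Four evaluation indices of Sec. 4.2, each returned as a percentage."""
    pct = lambda num, den: 100.0 * num / den
    return {
        "word_correct_rate": pct(correct_words, recognized_words),
        "task_achievement_rate": pct(correct_commands, command_utterances),
        "error_recognition_rate": pct(error_commands, separated_sources),
        "target_separation_rate": pct(correct_separations, command_utterances),
    }

# Hypothetical counts for one run (not from the paper):
print(evaluation_indices(correct_words=44, recognized_words=48,
                         correct_commands=22, command_utterances=24,
                         error_commands=3, separated_sources=25,
                         correct_separations=23))
```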

As shown in Fig. 2, separated sound sources can overlap, so the system sometimes recognizes one voice command in two intervals. Such overlapped recognition is excluded when calculating the Task Achievement Rate and the Target Separation Rate.

In addition, the calculation cost is evaluated as the elapsed time of three separate parts: (A) the time from the start of the voice command utterance to the start of the separation module; (B) the time from the start of the separation module to the output of separated sound sources after VAD; and (C) the elapsed time of command recognition. A+B+C gives the total time from the start of the voice command utterance to the output of the recognition result.

4.3. Basic Evaluation of the Recognition System

The performance of the recognition system is evaluated under the same experimental conditions as in Section 4.1. The results for condition C1 are shown in Fig. 7. At 0 and 90 (deg) elevation, the target separation rate (pink line with square marks) falls below 90 (%); otherwise, performance shows no large difference over elevation angles. The task achievement rate (green line with X marks) behaves similarly to the target separation rate, and the word correct rate is higher than 87 (%). The error recognition rate is near 0 (%). These results suggest that localization performance directly below the microphone array (0 (deg)) and near the horizontal direction (90 (deg)) is slightly worse than elsewhere; apart from that, the system performs consistently over elevation angle.

Fig. 7: Results of elevation changes (one sound source): a) word correct, task achievement, and target separation rates, b) error recognition rate, vs. elevation angle.

The results of the distance evaluation, using condition C2 from Section 4.1, are shown in Fig. 8. For distances below 7 (m), the target separation rate is 100 (%). At 7 (m), the sound pressure level received at the microphone array is -5 (dB) relative to the noise source placed 2 (m) from the array. This shows the effectiveness of the proposed localization and separation for multiple sound sources with different sound pressure levels. On the other hand, the task achievement rate drops with distance beyond 7 (m). The error recognition rate is less than 10 (%) and does not change with distance, but it is higher overall than in condition C1. Errors are mainly caused by the noise source, which contains sounds not registered in the command dictionary; degradation of sound separation for distant sources does not affect the error recognition rate.

Fig. 8: Results of distance changes (two sound sources): a) word correct, task achievement, and target separation rates, b) error recognition rate, vs. distance.

4.4. System Evaluation for Three Sound Sources

The performance with three simultaneous sound sources is evaluated.
Three experiments are performed, as listed below. Positions are given as (distance, azimuth, elevation), and all sound sources are played at the same volume.

EXP-A: command at (2. (m), 90 (deg), 60 (deg)), classical music at (2.5 (m), 45 (deg), 72 (deg)), and female speech at (1.4 (m), 180 (deg), 45 (deg))
EXP-B: command at (2. (m), 90 (deg), 60 (deg)), classical music at (2.5 (m), 45 (deg), 72 (deg)), and command at (2. (m), 180 (deg), 60 (deg))
EXP-C: command at (2. (m), 60 (deg), 60 (deg)), female speech at (1.4 (m), 15 (deg), 45 (deg)), and command at (.2 (m), 180 (deg), 72 (deg))

Table 1 shows the evaluation indices for each condition. EXP-B and EXP-C contain two command utterances each, and their indices are calculated as combined values. The indices of EXP-B are slightly lower than those of EXP-A, but the difference is small. The result of EXP-C is worse than the others; the degradation is mainly caused by the low recognition rate for the distant command utterance at (.2 (m), 180 (deg), 72 (deg)), which the system failed to detect because of the female speech close to the microphone array. The high error recognition rate is mainly caused by misrecognition of the female speech. These results suggest that the recognition system performs well for detected commands, including multiple commands uttered at the same time, but is susceptible to unknown signal input.

4.5. Processing Time of the Recognition System

Table 2 shows the average elapsed times for each experiment. The system runs on a Pentium 4 3.0 GHz processor with Debian Linux (kernel 2.4.27). The one-sound-source result is the average over all 140 (sec) of condition C1 data, which contains utterances at arbitrary times; the results for two and three sound sources are the averages over 80 (sec) of data for each condition.

Table 1: Evaluation results for three sound sources (%)

         Word    Task           Error   Separated
EXP-A    91.7    95.8 (23/24)   12.     95.8 (23/24)
EXP-B    89.1    91.7 (44/48)   7.      93.8 (45/48)
EXP-C    67.     79.2 (38/48)   19.1    85.4 (41/48)

Table 2: Elapsed time from the start of the voice command utterance (sec)

Number of sound sources              1       2       3
(A) time before separation starts    2.1     2.5     2.2
(B) elapsed time for separation      0.86    1.24    1.7
(C) elapsed time for recognition     0.61    0.54    0.61
(A+B+C) total processing time        .48     .8      4.

Processing time changes with the environmental conditions because the calculation cost of sound separation and recognition depends on the number of sound sources. The time before the start of separation is 2.24 (sec) averaged over all conditions, and the average duration of a command utterance is 2.4 (sec) (1.8 (sec) at minimum and 2.89 (sec) at maximum). This indicates that the calculation interval T_max is valid for the tested application.

4.6. Voice Command Recognition in the Housing Environment

This section describes experimental results using the ceiling microphone array in the housing environment Holone. System performance is evaluated in the bedroom (unit 6 in Fig. 4 a)). The room has a reverberation time (T60) of about 600 (msec) and a background noise level of about 4 (dB). Eleven experiments are performed; the experimental conditions are shown in Table 3, and the data length of each experiment is 70 (sec). Positions of the sound sources are given as (azimuth, elevation) angles. The height of the sound sources below the microphone array is fixed at 1.5 (m), while the distance between the microphone array and the sound sources differs between conditions. Command sets A and B are command utterances at arbitrary times, randomly selected from the command dictionary; the female speech does not contain any command sentence from the dictionary. All conditions have two sound sources, and the SNR between the sources is about 0 (dB). In EXP-1 to 4, the two sound sources are widely separated in direction. In EXP-5 to 8, the azimuth interval between the two sources is 30 (deg) and the elevation of the command utterance is 60 (deg). In EXP-9 to 11, the azimuth interval is 30 (deg) and the elevation of the command utterance is 30 (deg).

Table 3: Experimental setup for the evaluation in Holone

         Command position   Set     Noise position   Noise source
EXP-1    (315, 30)          set A   (90, 45)         female speech
EXP-2    (315, 30)          set A   (90, 60)         classical music
EXP-3    (315, 60)          set B   (90, 45)         classical music
EXP-4    (315, 60)          set B   (90, 60)         female speech
EXP-5    (315, 60)          set A   (285, 45)        classical music
EXP-6    (315, 60)          set A   (285, 45)        female speech
EXP-7    (315, 60)          set B   (285, 60)        classical music
EXP-8    (315, 60)          set B   (285, 60)        female speech
EXP-9    (315, 30)          set A   (285, 45)        female speech
EXP-10   (315, 30)          set B   (285, 60)        classical music
EXP-11   (315, 30)          set B   (285, 60)        female speech

The evaluation results are shown in Table 4.

Table 4: Evaluation results of the recognition system in Holone

         Word (%)   Task (%)       Error (%)   Separated (%)
EXP-1    61.1       100.0 (8/8)    21.9        100.0 (8/8)
EXP-2    96.7       87.5 (7/8)     .           87.5 (7/8)
EXP-3    91.7       75.0 (6/8)     .           87.5 (7/8)
EXP-4    64.5       75.0 (6/8)     26.1        100.0 (8/8)
EXP-5    88.9       87.5 (7/8)     .           100.0 (8/8)
EXP-6    9.5        100.0 (8/8)    1.          100.0 (8/8)
EXP-7    94.1       87.5 (7/8)     .           100.0 (8/8)
EXP-8    94.1       87.5 (7/8)     .           100.0 (8/8)
EXP-9    76.2       62.5 (5/8)     .4          75.0 (6/8)
EXP-10   85.1       87.5 (7/8)     6.1         100.0 (8/8)
EXP-11   71.8       100.0 (8/8)    16.         100.0 (8/8)

Excluding EXP-9, the target separation rate is near 100 (%) across the whole set of experiments. EXP-9 has the minimum distance between the two sound sources, and the system sometimes fails to detect the two sources independently. The worst task achievement rate is 62.5 (%), in EXP-9. The error recognition rate becomes high when the noise source contains speech, compared to when it is classical music.

Directional localization errors, measured as the inter-vector angle between estimated and real sound positions, are shown in Fig. 9 for EXP-4, 9 and 11. The angular errors of EXP-4 and EXP-11 are constant during the experiment, and EXP-4, which has a larger interval between the two sound sources, performs better than EXP-11. The angular error of EXP-9 varies over time; this is attributed to the other sound source being close to the target sound source.

Fig. 9: Variation of directional localization error over time for EXP-4, EXP-9 and EXP-11.

4.7. Application for a Mobile Robot

The recognition system is applied to a mobile robot to confirm its capability. The mobile robot navigates autonomously in a known environment [11]: a laser range finder mounted on the robot is used for localization with a particle-filter-based method that estimates the robot's position and orientation in a map, and an optimized A* algorithm [12] plans the robot's trajectory to the goal position. The mobile robot receives the recognized verbal command, with sound position information, from the ceiling microphone array system.

Fig. 10 shows video clips of the experiment at Holone. A user watching TV in the living room tells the robot, currently in the bedroom, to go to the kitchen: "Kan-ichi, please go to the kitchen." (a). The living-room microphone array detects the user's utterance and the TV sound and separates them from each other. When the robot receives the command, it plans a path from the bedroom to the kitchen and starts moving (a, b). In the kitchen, a user places a coffee cup on the robot and orders it, "Bring it to the study." (c); the robot then moves to the study (d). In the study, a different user takes the coffee cup from the robot and releases it by saying "Thank you." (e), and the robot returns to the bedroom (f, g). Throughout the experiment, the proposed recognition system recognized the command utterances even with the TV on as a noise source.

Fig. 10: Snapshots of the experiment at Holone with the GUI: a) calling the robot while watching TV, b) the robot moves from the bedroom to the kitchen, c) a coffee cup is placed on the robot with the command "Bring it to the study", d) the robot goes to the study, e) the user takes the coffee and says "Thank you", f), g) the robot goes back to the bedroom.

5. CONCLUSIONS AND FUTURE WORK

This paper reported an online voice command recognition system using a microphone array. DSBF and FBS are used for multiple sound source localization and separation, implemented on a 32-channel microphone array attached to the ceiling. The separated sound sources are recognized by the open-source recognizer Julian with a limited command dictionary, and the recognized commands, with sound position and time information, are sent to a mobile robot.

The system localizes sound source positions in azimuth and elevation and recognizes the separated sound sources simultaneously. It can recognize voice commands uttered at arbitrary times in noisy environments using a single ceiling-mounted microphone array unit, and it outputs each recognized command together with its position. The array unit is compact, so the approach can also be applied to robot-embedded microphone arrays. Using DSBF and FBS for multiple sound source localization and separation, the system localizes sound source positions in azimuth and elevation with about 10 (deg) angular error on average. The proposed system works without environmental information such as impulse responses, and the evaluations under two different reverberation conditions show similar performance.

We defined four indices to evaluate the performance of the proposed auditory system. The experimental results show robustness to multiple and distant sound sources. In the Holone experiments with two sound sources, the target separation rate is more than 95 (%) on average and the task achievement rate is more than 86 (%) on average. As shown in Section 4.3, the task achievement rate is more than 70 (%) within a 10 (m) radius even with a noise source close to the array.

In this paper, the word dictionary is limited to simple commands for the mobile robot in order to prevent erroneous recognition. The recognizer could work with a larger dictionary for the separated sound sources if the input sources were only human voices covered by the dictionary; on the other hand, the error recognition rate becomes high when unknown sound sources exist. Future research is needed to detect human voice explicitly and to reduce erroneous recognition of non-modeled sound sources.

References

[1] Y. Mori, H. Saruwatari, T. Takatani, S. Ukai, K. Shikano, T. Hiekata and T. Morita, "Real-time implementation of two-stage blind source separation combining SIMO-ICA and binary masking," in Proc. of the International Workshop on Acoustic Echo and Noise Control (IWAENC 2005), pp. 229 22, September 2005.
[2] E. Weinstein, K. Steele, A. Agarwal, and J. Glass, "LOUD: A 1020-node modular microphone array and beamformer for intelligent computing spaces," MIT/LCS Technical Memo MIT-LCS-TM-642, April 2004.
[3] K. Nakadai, H. Nakajima, M. Murase, S. Kaijiri, K. Yamada, T. Nakamura, Y. Hasegawa, H. G. Okuno, and H. Tsujino, "Robust tracking of multiple sound sources by spatial integration of room and robot microphone arrays," in Proc. of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2006), pp. IV 29 2, Toulouse, France, May 2006.
[4] C. T. Ishi, S. Matsuda, T. Kanda, T. Jitsuhiro, H. Ishiguro, S. Nakamura, and N. Hagita, "Robust speech recognition system for communication robots in real environments," in Proc. of the IEEE-RAS International Conference on Humanoid Robots (HUMANOIDS 2006), pp. 4 45, Genova, Italy, December 2006.
[5] S. Yamamoto, K. Nakadai, H. Tsujino, T. Yokoyama, and H. G. Okuno, "Improvement of robot audition by interfacing sound source separation and automatic speech recognition with missing feature theory," in Proc. of the IEEE International Conference on Robotics and Automation (ICRA 2004), pp. 1517 52, New Orleans, May 2004.
[6] S. Yamamoto, K. Nakadai, M. Nakano, H. Tsujino, J.-M. Valin, K. Komatani, T. Ogata, and H. G. Okuno, "Real-time robot audition system that recognizes simultaneous speech in the real world," in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006), pp. 5 58, Beijing, China, October 2006.
[7] K. Nakadai, S. Yamamoto, H. G. Okuno, H. Nakajima, Y. Hasegawa and H. Tsujino, "Development of a robot referee for rock-paper-scissors sound games," JSAI Technical Report SIG-Challenge-A72-1, pp. 59 64, 2007 (in Japanese).
[8] M. Aoki, M. Okamoto, S. Aoki, H. Matsui, T. Sakurai, and Y. Kaneda, "Sound source segregation based on estimating incident angle of each frequency component of input signals acquired by multiple microphones," Acoustical Science and Technology, pp. 149-157, 2001.
[9] F. X. Giraldo, "Lagrange-Galerkin methods on spherical geodesic grids," Journal of Computational Physics, pp. 197-213, 1997.
[10] A. Lee, T. Kawahara and K. Shikano, "Julius: an open source real-time large vocabulary recognition engine," in Proc. of the European Conference on Speech Communication and Technology (Eurospeech 2001), pp. 1691-1694, 2001.
[11] S. Thompson and S. Kagami, "Continuous curvature trajectory generation with obstacle avoidance for car-like robots," in Proc. of the International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA 2005), Vienna, 2005.

[12] J. J. Kuffner, "Efficient optimal search of Euclidean-cost grids and lattices," in Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), 2004.