Controlling Spatial Sound with Table-top Interface


Yuya Sasamoto, Michael Cohen, & Julián Villegas
Spatial Media Group, University of Aizu
Aizu-Wakamatsu, Fukushima; Japan
{m, mcohen, julian}@u-aizu.ac.jp

Abstract: Interactive table-top interfaces are multimedia devices that allow several users to share information visually and aurally. Table-top interfaces serve as groupware and are suitable for collaborative work, making them convenient for a group working on themes related to sound systems. Table-top interfaces for spatial sound environments are beginning to be investigated in the field of human interfaces; a representative table-top musical instrument is the reactable. In this paper, we present a way to control the positions of multiple virtual sounds in a spatial sound environment via such a table-top interface. Sound localization is required to discriminate and clearly recognize sounds. We have been investigating table-top musical instruments capable of controlling multiple input and output channels in spatial sound environments. A main feature of the newly developed system is that multiple users can independently control the spatialization of sounds in real time.

I. INTRODUCTION

Table-top interfaces have been studied in the ubiquitous computing community [1][2][3]. Such devices are suitable for groupware, as roomware in private spaces or office meetings [4]. Advances in table-top interfaces and input technologies have led to new systems that support co-located collaboration. Technology that supports consultation on a tabletop can take advantage of the rich experience people have with such natural styles of work and play. Table-top collaboration is being explored, including multitouch displays with two-dimensional gestures for interaction [5] and manipulation of virtual interface elements via affordances on table-top interfaces [6].
Most studies assume meetings focused on visual information, including documents and images, and do not highlight the appropriateness of a table-top interface for adjusting sounds. Table-top interfaces can be deployed in collaborative activities featuring dynamically adjustable sound. For controlling stereophonic and surround sound, such an interface could be more natural to use than a mouse or keyboard: users can leverage their everyday experience, without depending on traditional computer-enforced idioms, to edit diffusable sound. As the basic idea of this research, we focus on how to use a table-top interface for meetings featuring spatialized sounds. We designed a prototype table-top interface and arranged eight loudspeakers to experiment with it, as shown in Fig. 1. The system allows multiple users to control and steer independent audio channels. We use the position data of tangible objects on the table interface to configure sound spatialization. Using this table-top interface, users need not concentrate on verbal expression and can express sounds by moving tangible objects within the spatial environment. Hearing spatialized sounds in a meeting can improve mutual understanding.

Fig. 1. Table-top interface and eight-loudspeaker array

II. BACKGROUND

1) ReacTable: The reactable is an electronic musical instrument with a tangible user interface. It is a synthesizer that makes music from freely moving or rotating objects on a table [7]. Each control object has a designated role, such as looping sound sources, oscillators, and filters. By bringing objects close together and virtually connecting them, sound sources can be mixed or effects added. Information such as fiducial ID, position (x, y), and angle is acquired via markers [8] and tracked by original image-processing software called reacTIVision.

2) TUIO: TUIO [9] is an open framework that defines a common protocol and API for tangible multitouch surfaces. As shown in Fig. 2, the protocol allows the transmission of an abstract description of interactive surfaces, including touch gestures and tangible object states. This protocol encodes control data from a tracker application (e.g., based on computer vision) and sends it to any client application capable of decoding the protocol. The TUIO protocol has been implemented using OpenSound Control [10] and is therefore usable on any platform supporting that protocol. There is a growing number of TUIO-enabled tracker applications and TUIO client libraries for various programming environments, as well as applications that support the protocol. This combination of TUIO trackers, protocol, and client implementations allows rapid development of table-based tangible multitouch interfaces. TUIO was designed mainly as an abstraction for interactive surfaces, but has also been used in many other related application areas.
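To make the protocol concrete, here is a toy sketch of the state a TUIO tracker reports for a tangible object and how a client might fold a "set" update (as carried by a /tuio/2Dobj message) into its object table. The class and function names are ours, not from any TUIO library; a real client would use a TUIO client library and the full attribute set (velocities, rotation, acceleration).

```python
from dataclasses import dataclass

@dataclass
class TuioObject:
    session_id: int   # unique per tracked object instance
    fiducial_id: int  # marker class (which amoeba symbol)
    x: float          # surface position
    y: float
    angle: float      # rotation in radians

def handle_set(objects, args):
    """Apply one 'set' state update to a table of tracked
    objects keyed by session ID, and return the updated entry."""
    sid, fid, x, y, angle = args[:5]
    objects[sid] = TuioObject(sid, fid, x, y, angle)
    return objects[sid]
```

A tracker such as reacTIVision streams these updates every frame, so the client's table always mirrors what is currently on the surface.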

Fig. 2. TUIO: sensor and display, tracker, TUIO protocol, and client application (diagram adapted)

Fig. 4. Stereophonic configuration formulated with vectors

3) ReacTIVision: ReacTIVision [11] is an open source, cross-platform computer-vision framework for fast, robust tracking of markers attached to physical objects, as well as for multitouch finger tracking. It was designed mainly as a toolkit for rapid development of table-based tangible user interfaces (TUIs) and multitouch interactive surfaces. The application acquires a video stream from a camera, searches the stream frame by frame for predefined symbols, and sends data about identified symbols via a network socket to a listening application. The application was designed modularly, making it easy to add new image-recognition and frame-processing components.

Fig. 3. Amoeba symbols

4) Fiducial Symbols: In virtual reality applications, fiducial symbols are often deliberately applied to objects in a scene so that the objects can be recognized in images. The symbol set called amoeba (Fig. 3) [8][12] was developed specifically for the reactable. In the fiducial engine, the source image frame is first converted to a black-and-white image with an adaptive threshold algorithm. This image is then segmented into a region adjacency graph reflecting the containment structure of alternately nested black and white regions. This graph is searched for the unique tree structures encoded in the fiducial symbols. Finally, the identified trees are matched against a dictionary to retrieve unique marker ID numbers.

5) Vector Base Amplitude Panning (VBAP): Two of the most common techniques for spatial audio reproduction are VBAP [13] and Ambisonics [14]. They share the ability to place virtual sound sources anywhere on a surface outlined by a loudspeaker array. A major difference between them is the extent of the sweet spot, within which listeners can easily apprehend a projected soundscape.
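Before detailing the formulation, here is a minimal sketch of the pairwise gain computation VBAP performs for one loudspeaker pair: solve g = p^T L_nm^{-1} and power-normalize the result. The function name and angle conventions are ours, not the paper's Pd implementation.

```python
import math

def vbap_2d_gains(source_az_deg, spk_n_az_deg, spk_m_az_deg):
    """Gains for a virtual source between two loudspeakers:
    solve g = p^T L_nm^{-1}, then normalize so g_n^2 + g_m^2 = 1."""
    def unit(az_deg):
        a = math.radians(az_deg)
        return (math.cos(a), math.sin(a))

    p = unit(source_az_deg)              # unit vector toward the virtual source
    ln, lm = unit(spk_n_az_deg), unit(spk_m_az_deg)
    det = ln[0] * lm[1] - ln[1] * lm[0]  # zero only if the speakers are collinear
    g_n = (p[0] * lm[1] - p[1] * lm[0]) / det
    g_m = (ln[0] * p[1] - ln[1] * p[0]) / det
    norm = math.hypot(g_n, g_m)
    return g_n / norm, g_m / norm
```

A source centered between speakers at +45° and -45° gets equal gains of about 0.707; a source aligned with one speaker gets gains (1, 0).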
A large sweet spot can be supported in VBAP, whereas a narrow sweet spot is obtained with Ambisonics. In Ambisonics, the sound wave emerging from coordinated loudspeakers (a minimum of four for planar fields) is composed by adjusting their mutual phase, creating the desired acoustic field within the sweet spot but an incoherent field outside it. For a spatial sound environment with a table-top interface, VBAP is more suitable because there may be several users around the table. For the two-dimensional VBAP method [15], the law of sines and the law of tangents model the directionalization:

sin φ / sin φ_0 = (g_n - g_m) / (g_n + g_m)   (1)

tan φ / tan φ_0 = (g_n - g_m) / (g_n + g_m)   (2)

where 0° < φ_0 < 90° is the bearing of each speaker, -φ_0 ≤ φ ≤ φ_0 is the bearing of a virtual source, and g_n, g_m ∈ [0, 1] are gain factors. The law of sines is valid if the listener's head points directly forward, but to support head rotation, as when a user turns to follow a virtual source, the law of tangents is more appropriate. As shown in Fig. 4, the unit vector p = [p_n p_m]^T, which points toward a virtual source, can be treated as a linear combination of the loudspeaker vectors,

p = g_n l_n + g_m l_m.   (3)

In Eqns. 1-3, g_n and g_m are gain factors, which can be treated as positive scalar variables. We may write the equation in matrix form,

p^T = g L_nm   (4)

where g = [g_n g_m] and L_nm = [l_n l_m]^T. This equation can be solved if the inverse L_nm^{-1} exists:

g = p^T L_nm^{-1}.   (5)

III. IMPLEMENTATION

A. Hardware Environment

Input is generated mainly by users controlling objects on the table-top interface while playing music.

Fig. 5. System diagram showing data flow and relationships among components

The ID and localization data of each tangible object are organized as fiducial ID ∈ [0, 3] and (x, y) ∈ [-240, 240]² in TUIO, then passed to the sound and graphics systems. The basic system architecture consists of a Mac mini computer (OS X 10.8) connected to a Roland Edirol UA-101 audio interface to play musical sounds. The audio interface has ten input channels and ten output channels. It takes music directly from the music player and distributes coordinated audio signals to eight loudspeakers (Yamaha MSP3) according to the localization of each object on the table-top interface. A system diagram showing data flow and the relationship between components is presented in Fig. 5. A camera mounted beneath the table captures the locations of the tabletop objects through the translucent screen. Dynamic graphics displayed through the surface follow each object on the table. An LCD video projector receives the computer video output (and loops it back to a monitor for programmer convenience). The system is built as a patch (like a procedure) in Pure Data (Pd), a visual data-flow programming language specialized for interactive computer music and multimedia. The system processing is as follows: First, multiple audio channel sources are received from the audio interface or generated in a Pd patch. Tangible objects' fiducial IDs are recognized and their (x, y) positions localized by processing the camera-captured scene. Object ID and position, based on the data from reacTIVision, are sent to the sound controller, VBAP, and projection controller. Audio sources are selected by fiducial ID in the sound controller.
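The exact mapping from a tangible object's table position to a panning azimuth and an adjacent loudspeaker pair is not spelled out in the text; the following is a plausible sketch, assuming the origin is at the center of the array, speaker 0 lies at 0°, and the speakers are spaced 45° apart as described.

```python
import math

def position_to_azimuth(x, y):
    """Map a tangible-object position (x, y in [-240, 240]) to a
    source azimuth in degrees, counterclockwise from the +x axis."""
    return math.degrees(math.atan2(y, x)) % 360

def active_pair(azimuth_deg, n_speakers=8):
    """Return the indices of the adjacent loudspeakers (45-degree
    spacing, speaker 0 at 0 degrees) that bracket the azimuth."""
    spacing = 360 / n_speakers
    i = int(azimuth_deg % 360 // spacing)
    return i, (i + 1) % n_speakers
```

For the worked example later in the paper, an object at (x = 100, y = 200) maps to an azimuth of about 63.4°, bracketed by speakers 1 and 2.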
Panning is controlled in VBAP by the information received from reacTIVision, and the gain of each speaker is calculated. The audio interface compiles the data comprising the locations of the sound sources, then outputs the diffused soundscape as modulated signals distributed to the eight loudspeakers. The projection controller generates graphics using the fiducial ID and (x, y), which are projected onto the table display.

B. Software Environment

A schematic of the software sound environment is shown in Fig. 6, detailing the sound-software portion of Fig. 5. The Pd system features four-channel input and eight-channel output, and consists of five kinds of objects: reacTIVision, the multiple audio channel sources, the panning controllers, the sound controller, and the signal objects. The loudspeaker setup is defined using define_loudspeakers, initialized by specifying the directions of the loudspeakers. In our configuration, the loudspeakers are evenly spaced at 45° intervals. To articulate control among the four input and eight output channels, there are four panning controllers. The runtime signal processing is as follows: Each panning controller is parameterized by the x & y of a tangible object via its fiducial ID, and an azimuth is calculated; the gain factors of the corresponding loudspeakers are calculated simultaneously in the panning controllers to realize independent sound localization. Users may optionally adjust a spread control. The VBAP objects are parameterized by define_loudspeakers, azimuth, and spread control. The gain factors are calculated in the VBAP objects, as described above in II-5. In the sound controller, audio sources are identified by fiducial ID.
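The fan-out performed by the signal objects (each source sample scaled by its per-speaker gains, then summed per loudspeaker) can be sketched as follows. This is a hypothetical helper illustrating the data flow, not the actual Pd patch, which operates on audio-rate signal blocks.

```python
def mix_to_speakers(sources, gains, n_speakers=8):
    """Mix source samples into loudspeaker feeds.
    sources: one sample per input channel (four here);
    gains[k][s]: gain of source k on loudspeaker s (from VBAP)."""
    return [sum(g[s] * x for g, x in zip(gains, sources))
            for s in range(n_speakers)]
```

With four sources and eight speakers, each output sample is the sum of four scaled source samples, so the four soundscapes superimpose without interfering with one another's gain calculations.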

Fig. 6. Schematic of the software sound architecture

The sound controller distributes the audio sources, according to input channel number and fiducial ID, to eight signal objects for modulation of the audio signals by the respective gain factors. The signal objects distribute the sound sources to the eight loudspeakers. As a simple example, a tangible object (fiducial ID = 0) sends location data (x = 100, y = 200) to the first panning controller. The panning controller calculates respective gain factors for the eight loudspeakers. The sound controller sends the audio source of each input channel to the eight signal objects. Each signal object combines the sound sources with the respective gain factors, then distributes to all eight loudspeakers at once (not individually).

C. User Manipulation

Fig. 7. Four tangible objects and fiducial markers (amoeba design)

All manipulation on the user side is performed with tangible objects, like those shown in Fig. 7. We use four square beverage coasters made of oak cork as tangible objects, with fiducial markers affixed to their undersides. Users can determine a path for sound localization by handling and arranging the tangible objects. Fig. 8 shows a snapshot of a couple of users playing the table-top interface with four tangible objects. Our table-top interface is groupware that assumes session play by up to four users. Multiple users can play collaboratively, controlling sound localization with the four tangible objects, both hands free. The users' sound manipulation is a kind of cooperative group work. If they want to change the music or synthesis, they can freely select a tune using an audio device or a synthesizer element such as an oscillator, phasor, or sine wave.

Fig. 8. Snapshot of a couple of users playing the table-top interface

IV. RESULTS

We have presented the implementation of a prototype table-top interface with an integrated spatial sound environment. We realized spatial sound control through eight loudspeakers, adjusting multiple tangible objects in real time. The wide sweet spot of the VBAP soundscape with eight loudspeakers allows multiple users to apprehend the spatial sound. No noticeable delay is observed in the real-time calculation of the multisource soundscape, and users can freely express their spatial musical ideas. It became possible for users to express closely what they intended, without a sense of incongruity. However, contrary to our expectation, the frame rate of reacTIVision is low (around 20 fps), so recognition of a marker occasionally breaks off and its sound may be disabled. The problem arises from the web camera's frame rate, which could not reach the maximum claimed in the product specification.

V. CONCLUSION

Table-top interfaces serve as groupware and suit collaborative activities. Such interfaces have potential for meetings whose topics involve sound, especially surround music and spatial sound. We have presented the implementation of a prototype table-top musical instrument with a spatial sound environment. We have been investigating real-time control of the positions of multiple sounds with multitouch groupware. We succeeded in controlling the positions of four sounds in real time by deploying four panning controllers, which calculate gain factors separately. VBAP features a large sweet spot; sound localization using VBAP with eight loudspeakers around a reactable was deemed adequate. When users control four independent sounds, they can play without worrying about interference, and flexible, cooperative group activity is possible. However, sound localization performance is poor when tangible objects are moved quickly, because the frame rate of the web camera we used was insufficient for capturing the markers. For future work, we would like to extend the acoustic display beyond the current planar space to allow expression of fully three-dimensional sound, controlled by rotation of the tangible objects. ReacTIVision can also capture multitouch finger gestures on the table, so we will investigate the expression of spatial sound with such gestures.
REFERENCES

[1] G. D. Abowd and E. D. Mynatt, "Charting past, present, and future research in ubiquitous computing," ACM Transactions on Computer-Human Interaction (TOCHI), vol. 7, no. 1.
[2] P. Isenberg, U. Hinrichs, M. Hancock, and S. Carpendale, "Digital tables for collaborative information exploration," in Tabletops - Horizontal Interactive Displays. Springer, 2010.
[3] W. G. Gardner, 3-D Audio Using Loudspeakers. Kluwer Academic Publishers.
[4] N. A. Streitz, P. Tandler, C. Müller-Tomfelde, and S. Konomi, "Roomware: Towards the next generation of human-computer interaction based on an integrated design of real and virtual worlds," in Human-Computer Interaction in the New Millennium. Addison-Wesley.
[5] W. Westerman, J. G. Elias, and A. Hedge, "Multi-touch: A new tactile 2-D gesture interface for human-computer interaction," in Proc. of the Human Factors and Ergonomics Society Annual Meeting. SAGE Publications, 2001.
[6] G. W. Fitzmaurice, H. Ishii, and W. A. Buxton, "Bricks: Laying the foundations for graspable user interfaces," in Proc. SIGCHI Conf. on Human Factors in Computing Systems. ACM Press/Addison-Wesley, 1995.
[7] J. Patten, H. Ishii, J. Hines, and G. Pangaro, "Sensetable: A wireless object tracking platform for tangible user interfaces," in Proc. SIGCHI Conf. on Human Factors in Computing Systems. ACM, 2001.
[8] R. Bencina and M. Kaltenbrunner, "The design and evolution of fiducials for the reactivision system," in Proc. Int. Conf. on Generative Systems in the Electronic Arts.
[9] M. Kaltenbrunner, T. Bovermann, R. Bencina, and E. Costanza, "TUIO: A protocol for table-top tangible user interfaces," in Proc. Int. Workshop on Gesture in Human-Computer Interaction and Simulation.
[10] M. Wright, A. Freed, and A. Momeni, "OpenSound Control: State of the art 2003," in Proc. Conf. on New Interfaces for Musical Expression, 2003.
[11] M. Kaltenbrunner and R. Bencina, "reacTIVision: A computer-vision framework for table-based tangible interaction," in Proc. Int. Conf. on Tangible and Embedded Interaction. ACM, 2007.
[12] R. Bencina, M. Kaltenbrunner, and S. Jorda, "Improved topological fiducial tracking in the reactivision system," in Computer Vision and Pattern Recognition Workshops (CVPR Workshops). IEEE Computer Society, 2005.
[13] V. Pulkki, "Generic panning tools for MAX/MSP," in Proc. Int. Computer Music Conf., 2000.
[14] D. G. Malham and A. Myatt, "3-D sound spatialization using ambisonic techniques," Computer Music Journal, vol. 19, no. 4.
[15] V. Pulkki, "Virtual sound source positioning using vector base amplitude panning," JAES, vol. 45, no. 6, 1997.


MIAP: Manifold-Interface Amplitude Panning in Max/MSP and Pure Data

MIAP: Manifold-Interface Amplitude Panning in Max/MSP and Pure Data MIAP: Manifold-Interface Amplitude Panning in Max/MSP and Pure Data Zachary Seldess Senior Audio Research Engineer Sonic Arts R&D, Qualcomm Institute University of California, San Diego zseldess@gmail.com!!

More information

MOBAJES: Multi-user Gesture Interaction System with Wearable Mobile Device

MOBAJES: Multi-user Gesture Interaction System with Wearable Mobile Device MOBAJES: Multi-user Gesture Interaction System with Wearable Mobile Device Enkhbat Davaasuren and Jiro Tanaka 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8577 Japan {enkhee,jiro}@iplab.cs.tsukuba.ac.jp Abstract.

More information

EnhancedTable: Supporting a Small Meeting in Ubiquitous and Augmented Environment

EnhancedTable: Supporting a Small Meeting in Ubiquitous and Augmented Environment EnhancedTable: Supporting a Small Meeting in Ubiquitous and Augmented Environment Hideki Koike 1, Shin ichiro Nagashima 1, Yasuto Nakanishi 2, and Yoichi Sato 3 1 Graduate School of Information Systems,

More information

Waves Nx VIRTUAL REALITY AUDIO

Waves Nx VIRTUAL REALITY AUDIO Waves Nx VIRTUAL REALITY AUDIO WAVES VIRTUAL REALITY AUDIO THE FUTURE OF AUDIO REPRODUCTION AND CREATION Today s entertainment is on a mission to recreate the real world. Just as VR makes us feel like

More information

PERSONAL 3D AUDIO SYSTEM WITH LOUDSPEAKERS

PERSONAL 3D AUDIO SYSTEM WITH LOUDSPEAKERS PERSONAL 3D AUDIO SYSTEM WITH LOUDSPEAKERS Myung-Suk Song #1, Cha Zhang 2, Dinei Florencio 3, and Hong-Goo Kang #4 # Department of Electrical and Electronic, Yonsei University Microsoft Research 1 earth112@dsp.yonsei.ac.kr,

More information

Beyond Actuated Tangibles: Introducing Robots to Interactive Tabletops

Beyond Actuated Tangibles: Introducing Robots to Interactive Tabletops Beyond Actuated Tangibles: Introducing Robots to Interactive Tabletops Sowmya Somanath Department of Computer Science, University of Calgary, Canada. ssomanat@ucalgary.ca Ehud Sharlin Department of Computer

More information

Information Layout and Interaction on Virtual and Real Rotary Tables

Information Layout and Interaction on Virtual and Real Rotary Tables Second Annual IEEE International Workshop on Horizontal Interactive Human-Computer System Information Layout and Interaction on Virtual and Real Rotary Tables Hideki Koike, Shintaro Kajiwara, Kentaro Fukuchi

More information

Spyractable: A Tangible User Interface Modular Synthesizer

Spyractable: A Tangible User Interface Modular Synthesizer Spyractable: A Tangible User Interface Modular Synthesizer Spyridon Potidis and Thomas Spyrou University of the Aegean, Dept. of Product and Systems Design Eng. Hermoupolis, Syros, Greece spotidis@aegean.gr,

More information

Multi-Loudspeaker Reproduction: Surround Sound

Multi-Loudspeaker Reproduction: Surround Sound Multi-Loudspeaker Reproduction: urround ound Understanding Dialog? tereo film L R No Delay causes echolike disturbance Yes Experience with stereo sound for film revealed that the intelligibility of dialog

More information

Tracking and Recognizing Gestures using TLD for Camera based Multi-touch

Tracking and Recognizing Gestures using TLD for Camera based Multi-touch Indian Journal of Science and Technology, Vol 8(29), DOI: 10.17485/ijst/2015/v8i29/78994, November 2015 ISSN (Print) : 0974-6846 ISSN (Online) : 0974-5645 Tracking and Recognizing Gestures using TLD for

More information

B360 Ambisonics Encoder. User Guide

B360 Ambisonics Encoder. User Guide B360 Ambisonics Encoder User Guide Waves B360 Ambisonics Encoder User Guide Welcome... 3 Chapter 1 Introduction.... 3 What is Ambisonics?... 4 Chapter 2 Getting Started... 5 Chapter 3 Components... 7 Ambisonics

More information

Electric Audio Unit Un

Electric Audio Unit Un Electric Audio Unit Un VIRTUALMONIUM The world s first acousmonium emulated in in higher-order ambisonics Natasha Barrett 2017 User Manual The Virtualmonium User manual Natasha Barrett 2017 Electric Audio

More information

Department of Computer Science and Engineering The Chinese University of Hong Kong. Year Final Year Project

Department of Computer Science and Engineering The Chinese University of Hong Kong. Year Final Year Project Digital Interactive Game Interface Table Apps for ipad Supervised by: Professor Michael R. Lyu Student: Ng Ka Hung (1009615714) Chan Hing Faat (1009618344) Year 2011 2012 Final Year Project Department

More information

Tactilis Mensa: Interactive Interface to the Art Collection Ecosystem

Tactilis Mensa: Interactive Interface to the Art Collection Ecosystem Tactilis Mensa: Interactive Interface to the Art Collection Ecosystem A creative work submitted in partial fulfilment of the requirements for the award of the degree BACHELOR OF CREATIVE ARTS (HONOURS)

More information

Microphone Array Design and Beamforming

Microphone Array Design and Beamforming Microphone Array Design and Beamforming Heinrich Löllmann Multimedia Communications and Signal Processing heinrich.loellmann@fau.de with contributions from Vladi Tourbabin and Hendrik Barfuss EUSIPCO Tutorial

More information

Immersive Guided Tours for Virtual Tourism through 3D City Models

Immersive Guided Tours for Virtual Tourism through 3D City Models Immersive Guided Tours for Virtual Tourism through 3D City Models Rüdiger Beimler, Gerd Bruder, Frank Steinicke Immersive Media Group (IMG) Department of Computer Science University of Würzburg E-Mail:

More information

CapWidgets: Tangible Widgets versus Multi-Touch Controls on Mobile Devices

CapWidgets: Tangible Widgets versus Multi-Touch Controls on Mobile Devices CapWidgets: Tangible Widgets versus Multi-Touch Controls on Mobile Devices Sven Kratz Mobile Interaction Lab University of Munich Amalienstr. 17, 80333 Munich Germany sven.kratz@ifi.lmu.de Michael Rohs

More information

International Journal of Computer Engineering and Applications, Volume XII, Issue IV, April 18, ISSN

International Journal of Computer Engineering and Applications, Volume XII, Issue IV, April 18,   ISSN International Journal of Computer Engineering and Applications, Volume XII, Issue IV, April 18, www.ijcea.com ISSN 2321-3469 AUGMENTED REALITY FOR HELPING THE SPECIALLY ABLED PERSONS ABSTRACT Saniya Zahoor

More information

HCI Outlook: Tangible and Tabletop Interaction

HCI Outlook: Tangible and Tabletop Interaction HCI Outlook: Tangible and Tabletop Interaction multiple degree-of-freedom (DOF) input Morten Fjeld Associate Professor, Computer Science and Engineering Chalmers University of Technology Gothenburg University

More information

DRAFT: SPARSH UI: A MULTI-TOUCH FRAMEWORK FOR COLLABORATION AND MODULAR GESTURE RECOGNITION. Desirée Velázquez NSF REU Intern

DRAFT: SPARSH UI: A MULTI-TOUCH FRAMEWORK FOR COLLABORATION AND MODULAR GESTURE RECOGNITION. Desirée Velázquez NSF REU Intern Proceedings of the World Conference on Innovative VR 2009 WINVR09 July 12-16, 2008, Brussels, Belgium WINVR09-740 DRAFT: SPARSH UI: A MULTI-TOUCH FRAMEWORK FOR COLLABORATION AND MODULAR GESTURE RECOGNITION

More information

Design and Development of a Marker-based Augmented Reality System using OpenCV and OpenGL

Design and Development of a Marker-based Augmented Reality System using OpenCV and OpenGL Design and Development of a Marker-based Augmented Reality System using OpenCV and OpenGL Yap Hwa Jentl, Zahari Taha 2, Eng Tat Hong", Chew Jouh Yeong" Centre for Product Design and Manufacturing (CPDM).

More information

DISTANCE CODING AND PERFORMANCE OF THE MARK 5 AND ST350 SOUNDFIELD MICROPHONES AND THEIR SUITABILITY FOR AMBISONIC REPRODUCTION

DISTANCE CODING AND PERFORMANCE OF THE MARK 5 AND ST350 SOUNDFIELD MICROPHONES AND THEIR SUITABILITY FOR AMBISONIC REPRODUCTION DISTANCE CODING AND PERFORMANCE OF THE MARK 5 AND ST350 SOUNDFIELD MICROPHONES AND THEIR SUITABILITY FOR AMBISONIC REPRODUCTION T Spenceley B Wiggins University of Derby, Derby, UK University of Derby,

More information

Convention e-brief 400

Convention e-brief 400 Audio Engineering Society Convention e-brief 400 Presented at the 143 rd Convention 017 October 18 1, New York, NY, USA This Engineering Brief was selected on the basis of a submitted synopsis. The author

More information

Augmented Reality Lecture notes 01 1

Augmented Reality Lecture notes 01 1 IntroductiontoAugmentedReality Lecture notes 01 1 Definition Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated

More information

GAME AUDIO LAB - AN ARCHITECTURAL FRAMEWORK FOR NONLINEAR AUDIO IN GAMES.

GAME AUDIO LAB - AN ARCHITECTURAL FRAMEWORK FOR NONLINEAR AUDIO IN GAMES. GAME AUDIO LAB - AN ARCHITECTURAL FRAMEWORK FOR NONLINEAR AUDIO IN GAMES. SANDER HUIBERTS, RICHARD VAN TOL, KEES WENT Music Design Research Group, Utrecht School of the Arts, Netherlands. adaptms[at]kmt.hku.nl

More information

Ubiquitous Home Simulation Using Augmented Reality

Ubiquitous Home Simulation Using Augmented Reality Proceedings of the 2007 WSEAS International Conference on Computer Engineering and Applications, Gold Coast, Australia, January 17-19, 2007 112 Ubiquitous Home Simulation Using Augmented Reality JAE YEOL

More information

Realtime Software Synthesis for Psychoacoustic Experiments David S. Sullivan Jr., Stephan Moore, and Ichiro Fujinaga

Realtime Software Synthesis for Psychoacoustic Experiments David S. Sullivan Jr., Stephan Moore, and Ichiro Fujinaga Realtime Software Synthesis for Psychoacoustic Experiments David S. Sullivan Jr., Stephan Moore, and Ichiro Fujinaga Computer Music Department The Peabody Institute of the Johns Hopkins University One

More information

GUIBDSS Gestural User Interface Based Digital Sixth Sense The wearable computer

GUIBDSS Gestural User Interface Based Digital Sixth Sense The wearable computer 2010 GUIBDSS Gestural User Interface Based Digital Sixth Sense The wearable computer By: Abdullah Almurayh For : Dr. Chow UCCS CS525 Spring 2010 5/4/2010 Contents Subject Page 1. Abstract 2 2. Introduction

More information

Faculty of Information Engineering & Technology. The Communications Department. Course: Advanced Communication Lab [COMM 1005] Lab 6.

Faculty of Information Engineering & Technology. The Communications Department. Course: Advanced Communication Lab [COMM 1005] Lab 6. Faculty of Information Engineering & Technology The Communications Department Course: Advanced Communication Lab [COMM 1005] Lab 6.0 NI USRP 1 TABLE OF CONTENTS 2 Summary... 2 3 Background:... 3 Software

More information

Gesture Recognition with Real World Environment using Kinect: A Review

Gesture Recognition with Real World Environment using Kinect: A Review Gesture Recognition with Real World Environment using Kinect: A Review Prakash S. Sawai 1, Prof. V. K. Shandilya 2 P.G. Student, Department of Computer Science & Engineering, Sipna COET, Amravati, Maharashtra,

More information

Toward an Augmented Reality System for Violin Learning Support

Toward an Augmented Reality System for Violin Learning Support Toward an Augmented Reality System for Violin Learning Support Hiroyuki Shiino, François de Sorbier, and Hideo Saito Graduate School of Science and Technology, Keio University, Yokohama, Japan {shiino,fdesorbi,saito}@hvrl.ics.keio.ac.jp

More information

MPEG-4 Structured Audio Systems

MPEG-4 Structured Audio Systems MPEG-4 Structured Audio Systems Mihir Anandpara The University of Texas at Austin anandpar@ece.utexas.edu 1 Abstract The MPEG-4 standard has been proposed to provide high quality audio and video content

More information

INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT

INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT INTERACTION AND SOCIAL ISSUES IN A HUMAN-CENTERED REACTIVE ENVIRONMENT TAYSHENG JENG, CHIA-HSUN LEE, CHI CHEN, YU-PIN MA Department of Architecture, National Cheng Kung University No. 1, University Road,

More information

Projection Based HCI (Human Computer Interface) System using Image Processing

Projection Based HCI (Human Computer Interface) System using Image Processing GRD Journals- Global Research and Development Journal for Volume 1 Issue 5 April 2016 ISSN: 2455-5703 Projection Based HCI (Human Computer Interface) System using Image Processing Pankaj Dhome Sagar Dhakane

More information

Haptic presentation of 3D objects in virtual reality for the visually disabled

Haptic presentation of 3D objects in virtual reality for the visually disabled Haptic presentation of 3D objects in virtual reality for the visually disabled M Moranski, A Materka Institute of Electronics, Technical University of Lodz, Wolczanska 211/215, Lodz, POLAND marcin.moranski@p.lodz.pl,

More information

SLAPbook: tangible widgets on multi-touch tables in groupware environments

SLAPbook: tangible widgets on multi-touch tables in groupware environments SLAPbook: tangible widgets on multi-touch tables in groupware environments Malte Weiss, Julie Wagner, Roger Jennings, Yvonne Jansen, Ramsin Koshabeh, James D. Hollan, Jan Borchers To cite this version:

More information

Evaluation of a new stereophonic reproduction method with moving sweet spot using a binaural localization model

Evaluation of a new stereophonic reproduction method with moving sweet spot using a binaural localization model Evaluation of a new stereophonic reproduction method with moving sweet spot using a binaural localization model Sebastian Merchel and Stephan Groth Chair of Communication Acoustics, Dresden University

More information

Direction-Dependent Physical Modeling of Musical Instruments

Direction-Dependent Physical Modeling of Musical Instruments 15th International Congress on Acoustics (ICA 95), Trondheim, Norway, June 26-3, 1995 Title of the paper: Direction-Dependent Physical ing of Musical Instruments Authors: Matti Karjalainen 1,3, Jyri Huopaniemi

More information

Spatial Audio System for Surround Video

Spatial Audio System for Surround Video Spatial Audio System for Surround Video 1 Martin Morrell, 2 Chris Baume, 3 Joshua D. Reiss 1, Corresponding Author Queen Mary University of London, Martin.Morrell@eecs.qmul.ac.uk 2 BBC Research & Development,

More information

Tangible Lenses, Touch & Tilt: 3D Interaction with Multiple Displays

Tangible Lenses, Touch & Tilt: 3D Interaction with Multiple Displays SIG T3D (Touching the 3rd Dimension) @ CHI 2011, Vancouver Tangible Lenses, Touch & Tilt: 3D Interaction with Multiple Displays Raimund Dachselt University of Magdeburg Computer Science User Interface

More information

Interactive Simulation: UCF EIN5255. VR Software. Audio Output. Page 4-1

Interactive Simulation: UCF EIN5255. VR Software. Audio Output. Page 4-1 VR Software Class 4 Dr. Nabil Rami http://www.simulationfirst.com/ein5255/ Audio Output Can be divided into two elements: Audio Generation Audio Presentation Page 4-1 Audio Generation A variety of audio

More information

AN EFFICIENT ALGORITHM FOR THE REMOVAL OF IMPULSE NOISE IN IMAGES USING BLACKFIN PROCESSOR

AN EFFICIENT ALGORITHM FOR THE REMOVAL OF IMPULSE NOISE IN IMAGES USING BLACKFIN PROCESSOR AN EFFICIENT ALGORITHM FOR THE REMOVAL OF IMPULSE NOISE IN IMAGES USING BLACKFIN PROCESSOR S. Preethi 1, Ms. K. Subhashini 2 1 M.E/Embedded System Technologies, 2 Assistant professor Sri Sai Ram Engineering

More information

SPATIAL SOUND REPRODUCTION WITH WAVE FIELD SYNTHESIS

SPATIAL SOUND REPRODUCTION WITH WAVE FIELD SYNTHESIS AES Italian Section Annual Meeting Como, November 3-5, 2005 ANNUAL MEETING 2005 Paper: 05005 Como, 3-5 November Politecnico di MILANO SPATIAL SOUND REPRODUCTION WITH WAVE FIELD SYNTHESIS RUDOLF RABENSTEIN,

More information

TACTILE COMPOSITION SYSTEMS FOR COLLABORATIVE FREE SOUND

TACTILE COMPOSITION SYSTEMS FOR COLLABORATIVE FREE SOUND TACTILE COMPOSITION SYSTEMS FOR COLLABORATIVE FREE SOUND Dan Livingstone Computer Music Research School of Computing, Communications and Electronics, University of Plymouth, Drakes Circus Plymouth PL148AA

More information

Convention Paper Presented at the 123rd Convention 2007 October 5 8 New York, NY

Convention Paper Presented at the 123rd Convention 2007 October 5 8 New York, NY Audio Engineering Society Convention Paper Presented at the 123rd Convention 2007 October 5 8 New York, NY The papers at this Convention have been selected on the basis of a submitted abstract and extended

More information

University of Huddersfield Repository

University of Huddersfield Repository University of Huddersfield Repository Lee, Hyunkook Capturing and Rendering 360º VR Audio Using Cardioid Microphones Original Citation Lee, Hyunkook (2016) Capturing and Rendering 360º VR Audio Using Cardioid

More information

MULTICHANNEL REPRODUCTION OF LOW FREQUENCIES. Toni Hirvonen, Miikka Tikander, and Ville Pulkki

MULTICHANNEL REPRODUCTION OF LOW FREQUENCIES. Toni Hirvonen, Miikka Tikander, and Ville Pulkki MULTICHANNEL REPRODUCTION OF LOW FREQUENCIES Toni Hirvonen, Miikka Tikander, and Ville Pulkki Helsinki University of Technology Laboratory of Acoustics and Audio Signal Processing P.O. box 3, FIN-215 HUT,

More information

A Gestural Interaction Design Model for Multi-touch Displays

A Gestural Interaction Design Model for Multi-touch Displays Songyang Lao laosongyang@ vip.sina.com A Gestural Interaction Design Model for Multi-touch Displays Xiangan Heng xianganh@ hotmail ABSTRACT Media platforms and devices that allow an input from a user s

More information

ISO JTC 1 SC 24 WG9 G E R A R D J. K I M K O R E A U N I V E R S I T Y

ISO JTC 1 SC 24 WG9 G E R A R D J. K I M K O R E A U N I V E R S I T Y New Work Item Proposal: A Standard Reference Model for Generic MAR Systems ISO JTC 1 SC 24 WG9 G E R A R D J. K I M K O R E A U N I V E R S I T Y What is a Reference Model? A reference model (for a given

More information

Magic Touch A Simple. Object Location Tracking System Enabling the Development of. Physical-Virtual Artefacts in Office Environments

Magic Touch A Simple. Object Location Tracking System Enabling the Development of. Physical-Virtual Artefacts in Office Environments Magic Touch A Simple Object Location Tracking System Enabling the Development of Physical-Virtual Artefacts Thomas Pederson Department of Computing Science Umeå University Sweden http://www.cs.umu.se/~top

More information

Recent Progress on Augmented-Reality Interaction in AIST

Recent Progress on Augmented-Reality Interaction in AIST Recent Progress on Augmented-Reality Interaction in AIST Takeshi Kurata ( チョヌン ) ( イムニダ ) Augmented Reality Interaction Subgroup Real-World Based Interaction Group Information Technology Research Institute,

More information

TRACKING PAPER NOTES ON A DISTRIBUTED PHYSICAL-VIRTUAL BULLETIN BOARD. Mohammad Mokarom Hossain. Masters Thesis (20P), 2004

TRACKING PAPER NOTES ON A DISTRIBUTED PHYSICAL-VIRTUAL BULLETIN BOARD. Mohammad Mokarom Hossain. Masters Thesis (20P), 2004 TRACKING PAPER NOTES ON A DISTRIBUTED PHYSICAL-VIRTUAL BULLETIN BOARD by Mohammad Mokarom Hossain Masters Thesis (20P), 2004 Department of Computing Science Umeå University SE-90187 Umeå, Sweden A thesis

More information

ApProgXimate Audio: A Distributed Interactive Experiment in Sound Art and Live Coding

ApProgXimate Audio: A Distributed Interactive Experiment in Sound Art and Live Coding ApProgXimate Audio: A Distributed Interactive Experiment in Sound Art and Live Coding Chris Kiefer Department of Music & Sussex Humanities Lab, University of Sussex, Brighton, UK. School of Media, Film

More information

Multiple Sound Sources Localization Using Energetic Analysis Method

Multiple Sound Sources Localization Using Energetic Analysis Method VOL.3, NO.4, DECEMBER 1 Multiple Sound Sources Localization Using Energetic Analysis Method Hasan Khaddour, Jiří Schimmel Department of Telecommunications FEEC, Brno University of Technology Purkyňova

More information

VAMBU SOUND: A MIXED TECHNIQUE 4-D REPRODUCTION SYSTEM WITH A HEIGHTENED FRONTAL LOCALISATION AREA

VAMBU SOUND: A MIXED TECHNIQUE 4-D REPRODUCTION SYSTEM WITH A HEIGHTENED FRONTAL LOCALISATION AREA VAMBU SOUND: A MIXED TECHNIQUE 4-D REPRODUCTION SYSTEM WITH A HEIGHTENED FRONTAL LOCALISATION AREA MARTIN J. MORRELL 1, CHRIS BAUME 2, JOSHUA D. REISS 1 1 Centre for Digital Music, Queen Mary University

More information

Audiopad: A Tag-based Interface for Musical Performance

Audiopad: A Tag-based Interface for Musical Performance Published in the Proceedings of NIME 2002, May 24-26, 2002. 2002 ACM Audiopad: A Tag-based Interface for Musical Performance James Patten Tangible Media Group MIT Media Lab Cambridge, Massachusetts jpatten@media.mit.edu

More information

Integration of Hand Gesture and Multi Touch Gesture with Glove Type Device

Integration of Hand Gesture and Multi Touch Gesture with Glove Type Device 2016 4th Intl Conf on Applied Computing and Information Technology/3rd Intl Conf on Computational Science/Intelligence and Applied Informatics/1st Intl Conf on Big Data, Cloud Computing, Data Science &

More information

Development of a telepresence agent

Development of a telepresence agent Author: Chung-Chen Tsai, Yeh-Liang Hsu (2001-04-06); recommended: Yeh-Liang Hsu (2001-04-06); last updated: Yeh-Liang Hsu (2004-03-23). Note: This paper was first presented at. The revised paper was presented

More information

LCC 3710 Principles of Interaction Design. Readings. Tangible Interfaces. Research Motivation. Tangible Interaction Model.

LCC 3710 Principles of Interaction Design. Readings. Tangible Interfaces. Research Motivation. Tangible Interaction Model. LCC 3710 Principles of Interaction Design Readings Ishii, H., Ullmer, B. (1997). "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms" in Proceedings of CHI '97, ACM Press. Ullmer,

More information

Development of Informal Communication Environment Using Interactive Tiled Display Wall Tetsuro Ogi 1,a, Yu Sakuma 1,b

Development of Informal Communication Environment Using Interactive Tiled Display Wall Tetsuro Ogi 1,a, Yu Sakuma 1,b Development of Informal Communication Environment Using Interactive Tiled Display Wall Tetsuro Ogi 1,a, Yu Sakuma 1,b 1 Graduate School of System Design and Management, Keio University 4-1-1 Hiyoshi, Kouhoku-ku,

More information