Research on Public, Community, and Situated Displays at MERL Cambridge

MERL - A MITSUBISHI ELECTRIC RESEARCH LABORATORY
http://www.merl.com

Research on Public, Community, and Situated Displays at MERL Cambridge

Kent Wittenburg

TR-2002-45    November 2002

Abstract

In this position paper, I discuss aspects of the research program at MERL Cambridge in public situated displays and interactions. We are working on a number of key enabling technologies including smart projective displays, multi-user touch, and computer vision for face detection. We have developed an initial concept demonstration for retail environments that includes an integration of some of these technologies. As we continue to develop technologies and tools, a key question at this point in our research program is to determine a strategy for addressing the social and evaluative aspects of public interactive spaces.

This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Information Technology Center America; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Information Technology Center America. All rights reserved.

Copyright © Mitsubishi Electric Information Technology Center America, 2002
201 Broadway, Cambridge, Massachusetts 02139

Presented at the Workshop on Public, Community, and Situated Displays in connection with the ACM 2002 Conference on Computer Supported Cooperative Work, New Orleans, Louisiana, November 2002.

Research on Public, Community, and Situated Displays at MERL Cambridge

Kent Wittenburg
Cambridge Systems Laboratory
Mitsubishi Electric Research Laboratories
201 Broadway, Cambridge, MA 02139 USA
wittenburg@merl.com

ABSTRACT
In this position paper, I discuss aspects of the research program at MERL Cambridge in public situated displays and interactions. We are working on a number of key enabling technologies including smart projective displays, multi-user touch, and computer vision for face detection. We have developed an initial concept demonstration for retail environments that includes an integration of some of these technologies. As we continue to develop technologies and tools, a key question at this point in our research program is to determine a strategy for addressing the social and evaluative aspects of public interactive spaces.

Keywords
Projectors, touch interaction, computer vision, face recognition

INTRODUCTION
The two laboratories at MERL Cambridge have an extensive research program in technologies and systems relevant to public situated displays. First, we are working on advanced technologies for projectors, including smart projectors that can sense their orientation and correct for distortion [4], easily configured multi-projector mosaics for larger and/or brighter displays [5], and shader lamps that allow projection onto non-planar surfaces and objects [6]. Second, we are working on interactive touch technologies that enable simultaneous multi-user touch events. For touch surfaces outfitted with antenna arrays and receivers, this technology also affords identification of who is touching where [2]. In the general case, we have shown how capacitive sensing can be used to afford interaction with physical displays and artefacts [3]. Third, we have a strong program in computer vision in which we have developed methods for detecting faces and determining when people are directly facing a given camera position [7].
Such technology can be used to determine when people are present and/or looking at a certain target area, and could trigger events for new forms of public interaction.

Recently we completed a concept prototype of an interactive projective display in a retail environment. (See Figures 1-6.) The demo incorporated 13 projectors in a room to simulate part of a floor space in a shoe store. We modeled the physical space itself to enable a seamless, continuous display across corners and shelves in the room with no distortion. A multimedia presentation, including engaging audio and graphical animation, was created that was tuned to the physical characteristics of the space. Interaction was enabled by placing simple capacitive touch sensors at key points in the physical space. After playing a multimedia message about the product line, the display entered a state with a message that said "Touch shoe for more information." Shoe shelves outfitted with touch sensors were highlighted using a virtual spotlight technique compatible with conventional projectors. Users then could pick up a shoe (actually, just putting a hand close to a shelf was sufficient), and a presentation about that specific shoe would be shown.

In what follows, I will describe aspects of this interactive retail space demonstration and identify ongoing research issues. Needless to say, it is impossible to appreciate the impact of a situated interactive multimedia presentation in print alone, but we will be able to identify some of the technical issues. I'll then touch on some of our other research and how it might fit into this general program. The focus of MERL Cambridge to date has been on the enabling technologies. We welcome interactions with other researchers focused on social and evaluative aspects.
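The interaction flow just described, an attract loop, a touch prompt, then a per-shoe presentation, can be sketched as a small state machine. The state and event names below are my own labels for illustration, not the demo's actual implementation:

```python
# Sketch of the retail demo's interaction flow as a state machine, based on
# the description above. State/event names are hypothetical labels.

TRANSITIONS = {
    # (state, event) -> next state
    ("attract_loop", "loop_finished"): "prompt",       # multimedia loop ends
    ("prompt", "shelf_touched"): "product_detail",     # "Touch shoe ..." prompt answered
    ("product_detail", "presentation_done"): "attract_loop",
}

def step(state, event):
    """Advance the display state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "attract_loop"
state = step(state, "loop_finished")   # shelves spotlit, prompt shown
state = step(state, "shelf_touched")   # that shoe's presentation plays
```

A real deployment would drive `step` from sensor events arriving over the link between the shelves and the display host.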
INTERACTIVE RETAIL SHOE STORE
Our demonstration of an interactive retail space incorporates multiple projectors that integrate content from a common source, multimedia content created with the physical aspects of the space in mind, and interactive touch sensors placed at key physical positions in the environment.

Figure 1: An evocative scene displayed with multiple projectors across a nonplanar surface.
Figure 2: A change of that scene showing a brick wall texture and a synchronized 3D display of a rotating cube.
Figure 3: Display shelves with a multimedia presentation designed for the space.
Figure 4: A close-up of the display shelves that have been equipped with capacitive sensors.
Figure 5: At the end of the multimedia presentation loop, individual shelves are spotlit with a message to encourage interaction.
Figure 6: A customer touching a shoe, which triggers an interactive display with more information for that shoe.

Situated Projective Displays
For usage of projectors to expand in the ways suggested by our demonstration, a number of innovations are required. First, distortion problems need to be corrected easily and automatically so that projectors can be placed flexibly in the environment. One problem is that customers will often occlude the surfaces onto which we are hoping to project. One solution is to project onto these surfaces from fairly acute angles with ceiling- or floor-mounted projectors. This raises the issue of geometric distortion. We have overcome this problem by prewarping the image data so that it ends up looking correct on the display. We have also developed technologies that use cameras to analyze a projected checkerboard pattern (on planar surfaces) so that projectors can be calibrated quickly and automatically [4]. For nonplanar surfaces, we build a geometric model of the space and calibrate by clicking with the mouse pointer on key points as they are projected onto the environment [6].

To create larger display areas, as well as to add brightness so that front-projected displays are visible in a well-lit room, it is possible to use multiple projectors to display a single content source. If the projected displays overlap, the images from each projector must be "stitched" together to create a single seamless display. We have developed a system for automating such multi-projector mosaics [5]. In addition to automatically discovering the warping and blending required of each projector, the images must also be synchronized in time; we have built initial software to support such synchronization. Research on better tools for such content creation and distributed display is ongoing. Note that using multiple projectors with overlap also helps to overcome the shadowing problems mentioned above.

In addition to projection on planar surfaces such as walls and floors, we have also developed a technology called Shader Lamps that allows projection onto arbitrary three-dimensional objects [6]. Shader Lamps can project texture and other object details onto white three-dimensional objects. This can create the illusion of different materials and lighting conditions, as well as display intricate details. In addition, image animation can be used to yield the perception of apparent motion.
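For the planar case, the correction reduces to estimating the 3x3 homography between projector pixels and surface points (for example, from detected checkerboard corners) and then prewarping the image through its inverse. The following is a minimal sketch of that estimation with made-up correspondence numbers; the actual calibration pipeline is the one described in [4]:

```python
# Sketch of planar prewarping: the map from projector pixels to surface
# points under oblique projection is a 3x3 homography H. Estimate H from
# four corner correspondences (Direct Linear Transform), then send image
# pixels through H's inverse before projecting so the result looks correct.
import numpy as np

def estimate_homography(src, dst):
    """DLT: homography mapping each src point to the matching dst point."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of this 8x9 system.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2D point (homogeneous divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Projector frame corners and where they land on the surface (illustrative):
proj = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
surf = [(30, 10), (990, 40), (1000, 760), (10, 700)]
H = estimate_homography(proj, surf)
H_inv = np.linalg.inv(H)  # prewarp: image -> projector pixels
```

Prewarping the framebuffer through `H_inv` means the physical projection lands on the intended surface points, which is the effect the checkerboard calibration automates.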
Use of Shader Lamps requires the acquisition of a three-dimensional model and the creation of content to project onto that model. In the retail shoe store demonstration, we used Shader Lamps to create an undistorted image over a space that included a corner projecting out into the room. (See Figures 1 and 2.) Research issues here include tools for easy acquisition of 3D models, integration with tools for content creation for use with multiple projectors, and display methods so that interactions, as well as portions of the visual scene, can be distributed across multiple projectors and computers.

Interaction Through Proximity Detection
Since all of the systems we are proposing use video projection, the displayed content can change from moment to moment. This opens the possibility of creating highly interactive displays that respond automatically to consumers. Such systems require inexpensive sensors that generate input events to trigger appropriate responses. In the retail shoe store demo, we utilized Sensing Shelves that incorporate capacitive proximity sensors to determine when a person has touched, or nearly touched, a particular piece of merchandise. The underlying technology has been demonstrated in other projects at MERL, notably the buffer phone [3]. Issues here include the development of integrated device toolkits and of development tools designed to work with them. Deployments require communication between sensors in the environment and one or more host computers. The interactions in physical spaces afforded by such sensors have only begun to be explored, and much work is needed to determine creative uses of such input devices and to evaluate their effectiveness.

FUTURE WORK
In future work, we envision that cameras might be mounted on shelves or (virtual) signage and that vision algorithms could determine when consumers are facing a particular direction.
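One plausible basis for such detection is the boosted-cascade face detector of [7], whose central idea is a sequence of increasingly strict stage classifiers with early rejection, so that the many non-face windows in a scene cost almost nothing. A toy sketch of that cascade structure follows; the stage tests here are illustrative stand-ins, not trained Haar-feature classifiers:

```python
# Toy illustration of the attentional-cascade idea behind the detector
# cited in [7]: each stage may reject a candidate window outright, and only
# windows passing every stage are reported as detections. Real stages sum
# thresholded Haar-like feature responses; these lambdas are stand-ins.

def cascade_detect(window, stages):
    """Return True only if the window passes every stage in order."""
    for stage in stages:
        if not stage(window):
            return False  # early rejection: cheap on background windows
    return True

# Hypothetical stages keyed on simple statistics of a window's pixel values:
stages = [
    lambda w: sum(w) / len(w) > 50,      # stage 1: coarse brightness test
    lambda w: max(w) - min(w) > 30,      # stage 2: requires some contrast
    lambda w: w[0] < w[len(w) // 2],     # stage 3: stand-in structure test
]

hit = cascade_detect([60, 80, 100, 90], stages)   # passes all three stages
miss = cascade_detect([10, 12, 11, 13], stages)   # rejected at stage 1
```

The same early-exit structure is what makes scanning every window position and scale of a video frame feasible in real time.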
We have developed face detection technology at MERL Cambridge that includes frontal as well as profile views [7]. Such technology could also be used to analyze crowdedness in particular floor locations and trigger an event that might attempt to draw people to another location. We also have two ongoing projects involving multi-user touch surfaces and computing tables [2][8]. With collaborators at the University of Manchester and Imperial College, we are beginning to investigate opportunistic browsing in communities through interactive computing tables in public spaces [1]. A related project with potential for public situated displays is in the area of rapid serial visual presentation (RSVP) [9]. At MERL we are building on earlier work [10] to explore how presentation and interaction techniques involving rapid sequences of images might be used by consumers to browse and find information. In a public environment, this suggests a highly interactive, page-flipping sort of interaction to support the tasks of getting an overview of a product line and/or searching for a particular item.

CONCLUSION
The demonstration and underlying technologies just described are indicative of the great potential of public interactive displays in retail environments. Our plan is to continue to explore technologies, interaction designs, and toolkits that can enable new forms of situated interactions of this kind. We are also investigating enhancements to existing projector product lines that would be required to support these scenarios. However, to date our laboratories have not articulated a plan to evaluate the effectiveness of these and other interactive public displays. How would the public actually react to such a display? Would behaviors be affected by the size and demographics of the crowd or other factors? Would interaction differences exist across age, sex, or other demographic factors?
What presentation parameters might overcome inhibition of public interaction, assuming there would be some? We are actively seeking discussions and collaborations with other researchers to articulate theories that might explain such phenomena and to construct a program for evaluation.

ACKNOWLEDGMENTS
This position paper discusses work from a large and fluid group at MERL Cambridge. The principals working on projector technology and its use in retail applications are Paul Beardsley, Shane Booth, Paul Dietz, Ramesh Raskar, Jeroen van Baar, and Remo Ziegler. The group working on capacitive sensing for touch technologies includes Paul Dietz, Darren Leigh, Kathy Ryall, Sam Shipman, and Bill Yerazunis. Those working on face detection technology include Michael Jones, Baback Moghaddam,

Dan Snow, Jay Thornton, and Paul Viola. The principals in the DiamondSpin project on circular table interfaces are Neal Lesh, Chia Shen, and Fred Vernier. The RSVP project includes Alan Esenther, Cliff Forlines, Tom Lanning, and the author.

REFERENCES
1. de Bruijn, O., Spence, R., "Serendipity within a Ubiquitous Computing Environment: A Case for Opportunistic Browsing", in Proceedings of UbiComp 2001 (Atlanta, Georgia, October 2001), Lecture Notes in Computer Science, vol. 2201, Springer.
2. Dietz, P.H., Leigh, D.L., "DiamondTouch: A Multi-User Touch Technology", in Proceedings of UIST: ACM Symposium on User Interface Software and Technology (Orlando, FL, USA, November 2001), ACM Press, 219-226.
3. Dietz, P.H., Yerazunis, W.S., "Real-Time Audio Buffering for Telephone Applications", in Proceedings of UIST: ACM Symposium on User Interface Software and Technology (Orlando, FL, USA, November 2001), ACM Press, 193-194.
4. Raskar, R., Beardsley, P., "A Self-Correcting Projector", in Proceedings of CVPR: IEEE Conference on Computer Vision and Pattern Recognition (Kauai, HI, December 2001), IEEE.
5. Raskar, R., van Baar, J., Chai, J.X., "A Low-Cost Projector Mosaic with Fast Registration", in Proceedings of the Asian Conference on Computer Vision (ACCV), January 2002.
6. Raskar, R., Ziegler, R., Willwacher, T., "Cartoon Dioramas in Motion", in Proceedings of the International Symposium on Non-Photorealistic Animation and Rendering (Annecy, France, June 2000).
7. Viola, P., Jones, M., "Rapid Object Detection using a Boosted Cascade of Simple Features", in Proceedings of CVPR: IEEE Conference on Computer Vision and Pattern Recognition (Kauai, HI, 2001), IEEE, vol. I, 511-518.
8. Vernier, F., Lesh, N.B., Shen, C., "Visualization Techniques for Circular Tabletop Interfaces", in Proceedings of AVI: ACM Advanced Visual Interfaces (Trento, Italy, May 2002), ACM Press, 257-263.
9. Spence, R., "Rapid, Serial and Visual: A Presentation Technique with Potential", Information Visualization, vol. 1, 2002, 13-19.
10. Wittenburg, K., Chiyoda, C., Heinrichs, M., and Lanning, T., "Browsing Through Rapid-Fire Imaging: Requirements and Industry Initiatives", in Proceedings of Electronic Imaging 2000: Internet Imaging (San Jose, CA, USA, January 2000), SPIE, 48-56.