Disembodied Performance

Peter A. Torpey
MIT Media Laboratory
20 Ames Street, E15-443C
Cambridge, MA 02139 USA
http://web.media.mit.edu/~patorpey/

Elena N. Jessop
MIT Media Laboratory
20 Ames Street, E15-445
Cambridge, MA 02139 USA
http://web.media.mit.edu/~ejessop/

Copyright is held by the author/owner(s). CHI 2009, April 4-9, 2009, Boston, MA, USA. ACM 978-1-60558-246-7/09/04.

Abstract

Early in Tod Machover's opera Death and the Powers, the main character, Simon Powers, is subsumed into a technological environment of his own creation. The theatrical set comes alive in the form of robotic, visual, and sonic elements that allow the actor to extend his range and influence across the stage in unique and dynamic ways. This environment must compellingly assume the behavior and expression of the absent Simon. In order to distill the essence of this character, we recover performance parameters in real time from physiological sensors, voice, and vision systems. These gesture and performance parameters are then mapped to a visual language that incorporates cognitive and semantic models informed by modal relationships. This language allows the off-stage actor to express emotion and interact with others on stage. Our Disembodied Performance system takes a new direction in augmented performance by employing a nonrepresentational abstraction of a human presence that fully translates a character into an environment.

Keywords

Performance, theater, visualization, physiological sensors

ACM Classification Keywords

J.5. Arts and humanities: performing arts; H.5.1. Multimedia information systems

Introduction

At its most fundamental level, the Disembodied Performance system is a tool to help tell a story. This system is currently being developed for Death and the Powers, a new opera by Tod Machover being produced at the MIT Media Laboratory under the direction of Diane Paulus and with production design by Alex McDowell. In this opera, the powerful and wealthy Simon Powers is obsessed with leaving something of himself behind in the world; to this end, he develops The System, a technological masterpiece pervading his entire house, into which he can upload his essence upon the moment of his death. Simon enters The System at the end of Scene I, and we see his transformation into a new, non-anthropomorphic form, as he becomes present in his environment while still remaining agent and aware. His family is left to make sense of his new way of being.

Since the main character is not physically on stage, but rather realized in the movements, sound, and imagery of the set, the question becomes how to create a believable live performance. We are designing a system that allows the character to maintain a compelling and provocative presence on stage in his transmogrified form. The actor will still give his performance, singing and gesticulating, but from off stage. Using a set of sensing technologies we are developing, we will capture many aspects of the actor's performance. These components include breath strain sensors, simple gesture capture sensors, touch sensors in objects that the actor can manipulate, audio analysis of the actor's voice, and a camera-based computer vision system. The data captured by these devices will be sent to a computer that will analyze each aspect in real time and create a vector of values that summarizes the behavior of the actor at a given time (a rough sketch of such a frame of values appears below).

The crux of this performance system is to take the captured values and map them into a parametric space that will model, at least to an artistic or subjective degree, the affective and cognitive state of the character [2]. Affect and nuance will be gleaned primarily from the physiological performance parameters and gestures. The dialogue of the given libretto and known story arc provide a more deterministic window for generating a model of the character's thoughts and memories. This model is then transmitted to a distributed system of set elements and other components that use light, projection, mechanical movement, and sound to recreate the performance on stage. In this way, the system takes a structured approach to defining the mappings from input to output by using an abstracted intermediate representation.
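As a concrete illustration of the kind of per-frame summary this pipeline might produce, the following Python sketch assembles sensor readings into a single normalized feature vector. The field names, value ranges, and example numbers are assumptions for exposition; the paper does not specify the production data format.

```python
# A minimal sketch of the per-frame "vector of values" described above.
# Field names and ranges are illustrative assumptions, not the production layout.
from dataclasses import dataclass, asdict

@dataclass
class PerformanceFrame:
    """One time step of captured performance data, normalized to [0, 1]."""
    breath_expansion: float   # chest-band stretch sensor
    gesture_rugosity: float   # roughness of wrist accelerometer motion
    touch_pressure: float     # conductive-foam object sensors
    vocal_amplitude: float    # from microphone audio analysis
    vocal_pitch: float
    vocal_consonance: float   # "purity of sound"
    motion_speed: float       # from the camera-based vision system

    def as_vector(self) -> list[float]:
        """Flatten to the ordered feature vector handed to the input mappings."""
        return list(asdict(self).values())

# Example: one hypothetical frame sampled from the sensor streams.
frame = PerformanceFrame(0.62, 0.18, 0.05, 0.71, 0.44, 0.83, 0.27)
print(frame.as_vector())
```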

Background

A wide variety of performance artists have used analog and digital sensor technologies to gather various kinds of data from a live performance, with this data then controlling or affecting some other aspect of the performance or stage environment. As early as 1965, Merce Cunningham and John Cage's Variations V incorporated photoelectric sensors and antennae to mark the positions of dancers; the data gathered by these sensors and antennae then triggered and controlled electronic musical devices [5]. Many performance artists now use a variety of movement-recording sensors to control such elements as sound, projection, video capture, and lighting. One performance group that frequently uses such sensing and control systems is Troika Ranch, creators of the software system Isadora, which takes input from flex sensors attached to performers' joints and allows a choreographer to easily determine how this movement data controls media elements of the piece. Other performance works by Troika Ranch have used movement sensors not on the body, such as laser beams crisscrossing the stage, camera-tracking systems, and piezo impact sensors on the floor [4,10,3]. Yamaha's Miburi system [11], Paradiso and Aylward's Sensemble [1], and the Danish Institute of Electronic Music's Digital Dance Interface [8] are other wearable sensor systems for movement tracking in performance. All these systems have been used for the real-time generation and adaptation of music to accompany performers onstage.

Camera systems for tracking motion are particularly popular in interactive dance and performance. Falling Up, a performance piece by Todd Winkler, uses one such camera system, the Very Nervous System designed by David Rokeby. In this performance, live video is processed to determine the location and speed of an onstage performer; this data is mapped through software to control sound and the live, projected image of the performer [13]. Stichting Elektro-Instrumentale Muziek (STEIM) has developed another camera-based performer tracking system called BigEye [9], often used for performances in which performers trigger sound or music events by moving into particular areas of the stage [8]. Artists such as Robert Lepage have also brought these interactive performance technologies into the world of opera. Lepage's 2008 staging of Hector Berlioz's La Damnation de Faust for the Metropolitan Opera uses microphones to capture the pitch and amplitude of the performers' voices and the orchestra's music, as well as infrared lights and cameras to capture motion. The data from these sensors is used to shape projected images in real time [12].

Differences in Disembodied Performance

One element that all of these sensor-based performances and systems have in common with our system is their use of real-time technology for live performance augmentation. These systems are not simply programmed to be identical every performance, but are sensitive to the nuance of the performer's actions. However, these performance technologies also incorporate the onstage body of the live performer as a vital element of the performance. Our interactive system does not focus on a live performer interacting with or relating to a digital augmentation of his or her body, but on the complete digital transference of an absent performer's presence and reactions into the environment. Additionally, between the performance capture and the rendering of the output on stage, our system takes a novel approach to modeling the character's affect and cognitive state using parametric mappings informed by modal regularities. Finally, the role of the Disembodied Performance system differs from prominent examples of augmented performance typically employed in interactive installations, theater, and dance because it must represent the character fully, not merely respond in order to augment the performer. It must carry the emotional weight of a character on stage.

Our Approach

Sensor System

Since breath is such a key element of most types of performance, from dance to opera, we believe that analyzing breath can be an essential component of digitally capturing a performance.

figure 1: System diagram illustrating the flow of data representations from the performer to the output on stage (diagram elements: Onstage Feedback, Performer, Sensor Systems, Input Mappings, Character Model, Output Mappings, Show Control Systems, Modal Regularities, On-stage Representation). Other show control systems can influence the output representation so that projected imagery can interact with stage lighting. Additionally, views of the stage and an audio mix are fed back to the performer so that he or she can react to others on stage.

The current implementation of our breath sensor consists of an inelastic chest band with a resistive stretch sensor located on a single elastic section. When the performer wearing the breath sensor band inhales, the flexible sensor's resistance changes in proportion to the amount the chest expands with each breath. Current wearable sensors also include accelerometers on the performer's wrists, from which we obtain gestural data such as the rugosity of the performer's gestures. We are also using image analysis on the output of a USB-enabled video camera to obtain further gesture and movement data.

We also chose to give the performer using this system a measure of control over its performance output by providing a tangible object that the performer can manipulate to express emotion through touch. In its current form, this object consists of a series of conductive foam pressure sensors that can detect the quality of the user's touch. Data from all the on-body sensors is sent to modules located in a pouch on the chest band, which then transmit the data wirelessly via the Zigbee protocol to the processing computer. Vocal data from the performer is also collected using microphones and sent to the computer for audio processing. This vocal data, including both sung and spoken sounds, is analyzed for amplitude, pitch, timbre, and purity of sound (consonance). These values are then used as inputs for our mappings.

Mappings and Modal Regularities

Using the sensor system previously described, we can capture aspects of the actor's performance as an input representation. Each of the parameters measured (such as the velocity of the performer's hand or the timbre of the performer's voice) does not directly represent some aspect of the character's affective state. However, regularities in the change and variation of these parameters are expected to be consistent with the portrayed emotional state. Audiences already understand this gestural vocabulary from traditional performance, where the actor can be seen directly, and thus use this information to infer the intended emotional content. We use a model that can effectively capture the affective state of the character: a three-dimensional metric space with orthogonal axes representing the normalized, signed affective bases of stance, valence, and arousal (figure 2). This model is commonly used for parameterizing affect [6] and has a much lower dimension than the set of all signals recovered from the performer, so we apply a set of mappings to reduce the dimensionality of the data and project it, at a given time, to a point in the affect space.

figure 2: Affect space (axes: stance, valence, arousal)
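To make the dimensionality reduction concrete, the sketch below projects one frame of hypothetical sensor features onto the three affect axes using a fixed linear map with simple temporal smoothing. The weight matrix, squashing function, and smoothing factor are illustrative assumptions only; the production mappings are chosen empirically and grounded in modal regularities, as described next.

```python
# Illustrative sketch: reduce a high-dimensional feature vector to a point in the
# (stance, valence, arousal) affect space. All weights are made-up placeholders.
import numpy as np

# Rows correspond to stance, valence, arousal; columns to the seven features in the
# PerformanceFrame sketch above (breath, rugosity, touch, amplitude, pitch,
# consonance, motion speed).
W = np.array([
    [0.2,  0.5, 0.6, 0.0, 0.0, 0.0, 0.3],   # stance
    [0.0, -0.4, 0.2, 0.1, 0.3, 0.6, 0.0],   # valence
    [0.5,  0.3, 0.0, 0.6, 0.2, 0.0, 0.4],   # arousal
])

def project_to_affect(features, previous, smoothing=0.8):
    """Project one feature vector into the affect space, smoothing over time."""
    raw = np.tanh(W @ np.asarray(features))   # squash each axis into [-1, 1]
    return smoothing * np.asarray(previous) + (1.0 - smoothing) * raw

features = [0.62, 0.18, 0.05, 0.71, 0.44, 0.83, 0.27]  # one frame of normalized readings
affect = project_to_affect(features, previous=np.zeros(3))
print(dict(zip(("stance", "valence", "arousal"), np.round(affect, 3))))
```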

The mappings are chosen to be consistent with correlations in the parameters. Inspired by research in cognitive science, we use modal regularities: features in the data that have a high probability of co-occurrence with other properties, where this probability of co-occurrence is assumed to be the result of a common cause [7]. Using modal regularities ensures that the input parameters map to known states in the modeled affect space. A real-time mapping from the high-dimensional input representation (the performance) to the lower-dimensional intermediate representation (the affect space) provides a temporally nuanced model of the character's affective state.

The value of the affect space can then be used to generate a visual on-stage representation. In the particular instance of Death and the Powers, the primary manifestation of the performance is a new expressive visual language that is highly integrated with the physical design of the set. The mappings from the performance data to the affect space, and from the affect space to the on-stage representation of the performance, are determined empirically. Research on facial expressions and body language, color symbolism, and contour perception must also greatly influence the mappings that are generated. Additionally, the mapping parameters of the system can be edited during the rehearsal process, so that the system can take direction and be tuned for the desired performance.
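As one way to picture an affect-to-output mapping with rehearsal-editable parameters, the sketch below converts an affect-space point into a few example output channels. The channel names, formulas, and the two tunable parameters are assumptions for exposition, not the production design.

```python
# Illustrative sketch of an output mapping from the affect space to on-stage parameters.
import colorsys

# Parameters that could be adjusted in rehearsal so the system can "take direction".
REHEARSAL_PARAMS = {"hue_shift": 0.0, "gain": 1.0}

def affect_to_output(stance, valence, arousal, params=REHEARSAL_PARAMS):
    """Turn an affect-space point (each axis in [-1, 1]) into example output channels."""
    hue = (((valence + 1.0) / 2.0) * 0.33 + params["hue_shift"]) % 1.0   # red-to-green sweep
    brightness = min(1.0, max(0.0, 0.5 + 0.5 * arousal * params["gain"]))
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, brightness)
    return {
        "projection_rgb": (round(r, 2), round(g, 2), round(b, 2)),
        "light_intensity": round(brightness, 2),
        "motion_rate": round(0.5 + 0.5 * stance, 2),   # e.g. speed of animated set elements
    }

print(affect_to_output(stance=0.2, valence=0.6, arousal=0.4))
```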

Further Work

Working in an iterative development process, we will continue testing software systems and the mappings that may be generated. Additional performance sensors may be added, including galvanic skin response sensors, heart rate sensors, and more sophisticated gesture capture sensors. Production-specific mappings from the affect space to the on-stage output, which will take the form of projection on specific set pieces, lighting, and sound, will continue to be developed and refined until Death and the Powers premieres in Monte Carlo, Monaco in mid-September 2009. Following that, the production is expected to tour throughout the United States and worldwide. During technical rehearsals and rehearsals with the cast, the director will be able to tune the performance of the system, and the actor portraying Simon Powers will become accustomed to acting and singing offstage with a variety of sensors picking up his behavior. If director Diane Paulus can treat this new form like any other actor in rehearsal, and if she can achieve the emotional resonance she envisions from its performance, the proposed system will have successfully provided a representation that can take direction.

Contributions

The system described presents a new way to think about and implement augmented performance systems. While multimodal mappings of expression have long been utilized, this approach brings formal notions of affect and cognitive modeling to the table, particularly in the unique application of modal regularities across input and output domains to provide intelligent and meaningful grounding to the mapping, all the while relying on the actor's and director's artistic vision to provide the essence of the character portrayed. Applications can easily be seen beyond the scope of Death and the Powers, as the basic form of the system can be readily generalized to other performance pieces. Disembodied Performance distills the essence of a character from parameters recovered from the actor and allows the performance to be extended out into the environment. The methodology and many aspects of the software infrastructure also offer new perspectives in the domains of remote presence, personal archiving, and storytelling. In remote presence, for example, modeling affect from gesture can be used to convey additional streams of information for interpersonal communication. This system opens the door to many alternatives for representing presence. It abstracts away the body in a meaningful way, allowing a person or character to become anything and to exist anywhere (even in non-anthropomorphic manifestations), providing greater ranges of evocative, intelligible, and compelling expression.

Acknowledgements

The authors thank Tod Machover and the Opera of the Future research group at the Media Lab for supporting this work. Thanks also to Diane Paulus, Alex McDowell, David Small, Whitman A. Richards, Cynthia Breazeal, and Rosalind Picard.

References

[1] Aylward, R. and Paradiso, J. Sensemble: A Wireless, Compact, Multi-user Sensor System for Interactive Dance. Proc. New Interfaces for Musical Expression (2006), 134-139.

[2] Breazeal, C. Emotion and Sociable Humanoid Robots. International Journal of Human-Computer Studies 59 (2003), Elsevier, 119-155.

[3] Coniglio, M. The Importance of Being Interactive. In New Visions in Performance. Taylor & Francis, 2004, 5-12.

[4] Dixon, S. Digital Performance: A History of New Media in Theater, Dance, Performance Art, and Installation. MIT Press, Cambridge, 2007.

[5] Mazo, J. H. Prime Movers: The Makers of Modern Dance in America, 2nd Edition. Princeton Book Company, Hightstown, 1977.

[6] Picard, R. W. Affective Computing. MIT Press, Cambridge, 1997.

[7] Richards, W. Modal Inference. Association for the Advancement of Artificial Intelligence Symposium, 2008.

[8] Siegel, W. and Jacobsen, J. The Challenges of Interactive Dance: An Overview and Case Study. Computer Music Journal 22, 4 (1998), 29-43.

[9] STEIM. Products, BigEye. http://www.steim.org/steim/bigeye.html.

[10] Stoppiello, D. and Coniglio, M. Fleshmotor. In J. Malloy (ed.), Women, Art, and Technology. MIT Press, Cambridge, 2003, 440-452.

[11] Vickery, L. The Yamaha MIBURI MIDI Jump Suit as a Controller for STEIM's Interactive Video Software Image/ine. Proc. Australian Computer Music Conference, 2002.

[12] Wakin, D. Techno-Alchemy at the Opera: Robert Lepage Brings His "Faust" to the Met. New York Times, November 7, 2008, C1.

[13] Winkler, T. Fusing Movement, Sound, and Video in Falling Up, an Interactive Dance/Theater Production. Proc. New Interfaces for Musical Expression, Demonstrations (2002), 1-2.