Experiences of Research on Vision Based Interfaces at the MIT Media Lab
HELSINKI UNIVERSITY OF TECHNOLOGY
Telecommunications Software and Multimedia Laboratory
Tik Seminar on Content Creation, Autumn 2003: Aspects of Interactivity

Experiences of Research on Vision Based Interfaces at the MIT Media Lab

Johnny Biström 21548C
Experiences of Research on Vision Based Interfaces at the MIT Media Lab

Johnny Biström
HUT, Telecommunications Software and Multimedia Laboratory

Abstract

This paper describes some interesting research projects conducted at the MIT Media Lab. It starts with an analysis of the need for visual interfaces that do not limit the movements of the body but still permit advanced communication in what are called perceptive spaces. It continues with a description of some of the most promising technologies proposed, tested and evaluated for this purpose. A number of innovative applications that depend on the vision based technologies are then presented. Finally, the results of the experiments are evaluated and some guidelines for future investigations are given.

1 INTRODUCTION

Computer based interactive art and entertainment applications have always fascinated the more expressive programmers. The ability to track the movements and sounds of the human body and to let them produce multimedia based content or interaction with virtual environments has been the goal of several research projects at the MIT Media Laboratory during the last decade. The focus has been on analyzing body movements and sounds and extracting from them control signals that produce interesting audio, light, graphics, animation or video experiences for the actor and the viewers.

The keyboard and the mouse are generally considered insufficient input devices for artistic computer applications. The expressiveness of the human body is much greater, and thus the interest in reading and analyzing the actions of the human body has increased. Computer applications that rely on interaction with the human body have often depended on equipment that the user wears, such as data gloves and special movement detecting suits. This equipment is often very expensive, difficult to put on, and limits the movement of the person trying to express herself.
Dancers and children who use their bodies to express deep feelings often find such equipment impossible to use. Lately, the research projects have concentrated on deriving body movement information from video camera input by analyzing and processing the digital image stream. In this way the movements of the head, hands, feet and torso can be identified and used as control signals for multimedia content or for interaction in storytelling applications. The MIT Media Laboratory has conducted several research projects and
studies in which different visual interface technologies have been evaluated. The technologies have generally been tested on visual applications that take user input in the form of body movements and sounds. This report describes, compares and evaluates some of the technologies and applications that have been tested at the MIT Media Laboratory and gives some guidelines for further research in this area.

2 VISUAL INTERFACE TECHNOLOGIES

There are a number of interesting visual technologies that can be used for detecting body movements. In this chapter some of the most promising technologies are reported, from the simplest approach of one camera with color vision to stereo vision and multiple cameras. In cases where the lighting changes, or when a projected background is used, a solution with infrared cameras gives better results. Finally, solutions with IR emitters are analyzed.

2.1 Color Vision with One Frontal Camera

The simplest way for an application to communicate with the user is the one camera color vision concept. Color is needed to be able to separate the parts of the human body from the background. The camera is installed in front of the viewer. The needed equipment is cheap to buy and can be installed on any personal computer. The disadvantages are that the approach requires an unchanged background and unchanged lighting. The concept cannot be used when a background or floor projection is used. Another limitation is that only one person can be identified at a time. A typical one camera interactive space is presented by Wren et al. (1999, p.2) in figure 1.

Figure 1: An Interactive Virtual Environment with One Frontal Color Camera

The MIT Media Laboratory has developed a one camera system called Person Finder (Pfinder). Sparacino (2001, p.3) describes how the system uses a multi-class statistical model of color and shape to segment a person from a background scene and then track
the body parts and their movements. Pfinder first builds the scene model by observing the scene without a person. When a person enters the scene, Pfinder constructs a multi-blob model of the person's body based on the colors in the image. Blobs are generated for the head, hands, feet, shirt and pants. The process is driven by a 2D contour shape analysis. The performance of the Pfinder system is reported by Wren et al. (1999, p.3) and the results are shown in table 1.

Table 1: Pfinder Estimation Performance

This environment has been tested on thousands of persons in installations all over the world, and it has proved to be very functional and stable. An application called DanceSpace has been developed based on this concept. DanceSpace tracks the movements of a dancer and generates graphics and music based on those movements. DanceSpace is described later in this paper.

2.2 Multi Camera Color Vision Environment

To make interactive stories in which the user can move around the room and use furniture or appliances for different purposes, the one camera approach is not sufficient, as it cannot detect user movements along the axis perpendicular to the projection screen. There might also be positions in the room where the furniture hides the user from the camera. For this purpose additional cameras can be used, and they can be placed overhead to track the movements of the user or users. Such an interactive space was used in the development of the KidsRoom application at MIT. The idea of KidsRoom will be described in detail later. In this case no frontal camera was used; all three cameras were placed in the ceiling. It is, however, possible to use a frontal camera here as well if the background and lighting conditions can be kept constant. The multi camera room built for KidsRoom is shown by Pinhanez et al (2000, p. 441) in figure 2.
Two cameras (recognition cameras) are used to track the gestures of the users, while one camera (overhead camera) is used to track the position of the users. This construction enables the users to move around the room and to fulfill different tasks.
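The scene-model-then-blobs pipeline that Pfinder uses (section 2.1) can be sketched in a few lines. This is a toy illustration with made-up pixel values and a simplified per-pixel distance test, not the actual multi-class statistical model:

```python
import numpy as np

def learn_background(frames):
    """Per-pixel mean and variance of the empty scene (the Pfinder-style scene model)."""
    stack = np.stack(frames).astype(float)
    return stack.mean(axis=0), stack.var(axis=0) + 1e-6  # epsilon avoids division by zero

def segment_person(frame, mean, var, threshold=9.0):
    """Flag pixels whose squared normalized distance from the background is large."""
    d2 = ((frame.astype(float) - mean) ** 2 / var).sum(axis=-1)
    return d2 > threshold  # boolean foreground mask; color blobs would be grown from this

# Toy 4x4 RGB scene: a static gray background is observed first, then a red patch "enters".
bg_frames = [np.full((4, 4, 3), 100, dtype=np.uint8) for _ in range(5)]
mean, var = learn_background(bg_frames)

frame = np.full((4, 4, 3), 100, dtype=np.uint8)
frame[1:3, 1:3] = [200, 30, 30]  # 2x2 "person" patch
mask = segment_person(frame, mean, var)
print(mask.sum())  # 4 foreground pixels
```

In the real system the foreground pixels are further grouped by color and contour analysis into blobs for the head, hands, feet, shirt and pants.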
Figure 2: An Interactive Space with Several Cameras

The KidsRoom has been experienced by thousands of children and adults in London as part of the Millennium Dome Project. The solution has proved to be functional and reliable.

2.3 Stereo Vision

To improve the abilities of the tracking system, and to make more complex applications where the user can interact in three dimensions with the application, stereo vision based on two or even three cameras has been developed. Stereo vision makes it possible to produce 3-D blobs instead of the 2-D blobs described earlier. By using cameras placed much farther apart than the human eyes, a three dimensional model of the user can be produced. The stereo pair described by Wren et al. (1999, p.4) in figure 3 shows the configuration in which the research at MIT was done.

By using two cameras a 3-D estimate of the user's body can be constructed and a blob produced. The accuracy of the estimation in the 3-D case is not as good as in the 2-D case, because the estimation along the z axis is not as well conditioned mathematically. This is a result of the positioning of the cameras. The measured performance for the 3-D estimate is shown in table 2.
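The poor conditioning of the z estimate can be made concrete. For a rectified stereo pair, depth follows from triangulation as z = f·b/d (focal length f in pixels, baseline b, disparity d in pixels), so a fixed one-pixel disparity error displaces the depth estimate far more for distant targets. A sketch with illustrative numbers only; the focal length and baseline are assumptions, not the MIT rig's values:

```python
# Depth from a rectified stereo pair: z = f * b / d.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

f, b = 500.0, 0.5                              # assumed focal length and wide baseline
z_near = depth_from_disparity(f, b, 100.0)     # large disparity -> near target: 2.5 m
z_far = depth_from_disparity(f, b, 5.0)        # small disparity -> far target: 50 m

# A one-pixel disparity error shifts the estimate much more at range,
# which is why the z axis is less well conditioned than x and y.
err_near = abs(depth_from_disparity(f, b, 99.0) - z_near)
err_far = abs(depth_from_disparity(f, b, 4.0) - z_far)
print(z_near, z_far, err_near < err_far)
```

With these numbers the near-target error is about 2.5 cm while the far-target error is 12.5 m, matching the observation in the text that x and y are estimated more accurately than z.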
Figure 3: 3-D Estimation of the Position of One User and a 3-D Blob of Another User

Table 2: Stereo Estimation Performance, Wren et al. (1999, p.4)

Stereo vision has also been used in other projects at MIT. A special solution with three cameras in front of the scene was used in the It/I project described by Pinhanez et al (2000, p. 444). It/I is a computer play with two characters. One actor plays himself, while the other character is played by the computer and has a non-human appearance composed of computer graphics (CG) representing technology in the form of, for example, clocks, cameras, televisions and switches. The ideas behind It/I will not be described in this paper; readers interested in It/I are referred to Pinhanez and Bobick (1998). The technical reason for having three cameras instead of two is that a background screen is needed for the play; it can be used for the viewers and then be eliminated from the process of identifying the movements of the user. The stage used for It/I is shown in figure 4.
Figure 4: Physical Setup of It/I with Three Cameras. Pinhanez et al (2000, p. 444)

2.4 Infrared Sensitive Cameras and Infrared Light

The problem of varying lighting conditions, and the need to project behind or below the camera, can be solved by using infrared sensitive cameras instead of normal cameras. The infrared sensitive cameras used produce a black and white image, and they are insensitive to projections made by data projectors. The lack of color information, on the other hand, makes it impossible to identify different body parts and to produce accurate blobs from them. Only the body silhouette can be observed, which makes it difficult to identify small movements of the arms, hands, head or feet. The user thus has to use fairly large gestures or body movements to interact with the computer. Another advantage is that the equipment needed is cheap and can be connected to any personal computer.

An application called City of News at MIT takes advantage of the infrared camera environment, as it projects a map on the floor where the user stands. Infrared cameras observe the user from the front and from above. City of News is an interactive environment where the user can explore a virtual world, move around and trigger actions based on movements. The concept of City of News is described in more detail later in this paper.
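Because the IR camera delivers a grayscale image in which the user appears bright against a background that is dark in infrared, the silhouette can be recovered with a plain threshold. A minimal sketch; the image size and intensity values are illustrative assumptions:

```python
import numpy as np

def silhouette(ir_image, threshold=128):
    """Boolean silhouette mask from a grayscale IR image via a fixed threshold."""
    return ir_image > threshold

ir = np.full((6, 6), 40, dtype=np.uint8)  # projected background: dark in IR
ir[2:5, 2:4] = 200                         # user's body: bright in IR
mask = silhouette(ir)

# Only the coarse outline survives: motion of the hands inside the silhouette
# is invisible, which is why fairly large gestures are required.
print(mask.sum())  # 6 silhouette pixels
```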
If greater accuracy is needed than what can be achieved with infrared sensitive cameras, an array of infrared light emitters and a camera can be used. This was done to realize the Personal Aerobic Trainer (PAT), where a projected screen was needed both in front of and behind the user to make the user interface convenient. The array of IR emitters generates an IR floodlight against which the silhouette of the user can be identified. The arrangement can be seen in figure 5.

Figure 5: IR Emitters behind Projected Screen. Pinhanez et al (2000, p. 447)

The PAT application, which led to the development of the IR emitter grid, needed greater accuracy to be able to identify the exact position of the user in order to correct errors in the pose or the aerobic movement the user exercised. The PAT application will be described later in this paper.

Developing a full 3D perceptive space or room would require projection on all four walls, the floor and the ceiling. This approach is a challenge for visual identification. Normal color cameras cannot be used because of the changes in lighting and background. The infrared cameras used so far have been black and white cameras. Today color infrared cameras are available at a reasonable price. These color IR cameras definitely have a future in visual identification, as different levels, in the form of colors, can be identified. We must however remember that the coloring of infrared pictures is based on temperature. Identifying hands and head can benefit from this, as they are normally warmer than clothes. There is, however, no guarantee that a person's arms, body or pants are identified as the same color. Future research should address the possibilities of color IR identification.
3 APPLICATIONS THAT USE VISUAL INTERFACE TECHNOLOGIES

There are a number of research projects at the MIT Media Laboratory that use the visual interfaces mentioned above. Only a few of them are presented here. The purpose is to show what kind of stories, applications and environments the technologies can be used for. The examples below address different audiences and have different types of stories and interaction.

3.1 KidsRoom

The KidsRoom is a multi-user experience in a fantasy world where children aged 6 to 12 can interact with computer produced monsters and take part in an adventure play which starts and ends in a bedroom. The children are taken to a mystical forest, on a boat ride on a river, and finally to the monster world. To advance in the story the children have to perform tasks such as shouting, walking, running, hiding, paddling and dancing. The whole adventure takes about 12 minutes if the children follow the narrator's advice. The children have no control over the overall story development; they just interact in the scenes and perform the given tasks as smoothly as possible. The detailed story is described by Bobick et al (1999).

Figure 5: Users Experiencing the KidsRoom. Pinhanez et al (2000, p. 443)

KidsRoom takes place in the multi camera color vision environment described above. This environment is needed because the children are supposed to move around the room, and their tasks are observed as they perform them in different physical locations. As the environment is a closed rectangular room of 8 × 6 meters, it is simple to keep the lighting conditions constant, and projection is done only on the two front walls. It is, however, not possible to dim the lights in the mystical forest to achieve greater excitement, or to make a projected path for the children to follow on the floor.
3.2 Personal Aerobic Trainer

An application created mainly for an adult audience is the Personal Aerobic Trainer (PAT). PAT is a virtual aerobics trainer that lets the user select which aerobic moves, which music and which instructor personality the user prefers, and then guides the user through a workout while the trainer supervises and motivates the user. The application can be compared with a workout video, but it has real interactivity, much as a workout in a gym with a personal trainer would.  The application is described in detail by Davis and Bobick (1998).

Figure 6: Interaction with PAT. Pinhanez et al (2000, p. 449)

The first version of this application was based on a camera-below-the-TV-set configuration, as shown in figure 6. This was a product aimed at the commercial market, using only one cheap color camera. Further development of PAT at MIT led to a configuration with one projected screen in front of the user and one behind the user, as described earlier in this paper. To realize this configuration, some changes in the principles of the space had to be made.

This application has special demands on the visual interface. The camera must identify exactly how the user performs the movements shown by PAT. The large scale movements are easy to interpret, but the poses which are more static or require only small movements are more complex to detect. Still, these static and small movements must be exercised in the right way to ensure a proper workout. The infrared camera used, because of the projected background, was not accurate enough for the purpose, so an infrared emitter grid was developed and placed behind the user to improve the interpretation of the user's movements. This setup is not usable for home users but might be an alternative for gym users.
3.3 DanceSpace

Several efforts have been made in the last decades to create interactive stages that produce graphical and musical effects from the movements of the human body. By identifying a dancer's hands, head, feet and torso, and following their movements, graphical and acoustic effects can be realized. DanceSpace was designed to visualize the movements of a dancer and to map the movements to music and sound. The movements of the parts of the body leave a multicolored trail. This is realized by drawing two Bezier curves to represent the dancer's body. The first curve is drawn according to the placement of the left foot, head and right foot. The second curve is drawn from the positions of the left hand, center of torso and right hand. In this way a shadow of the dancer can be produced. The duration of the shadow can be prolonged, and as time flows the color of the shadow changes, producing a multicolored trail of the dancer (Sparacino 2001, p.3).

Figure 7: User Dancing with Generated Shadow. Wren et al. (1999, pp.1 and 9)

The equipment used for this application is the Pfinder single frontal color camera solution described earlier in this paper. Using this equipment it is naturally possible to realize any visualization the programmer is able to calculate.
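The shadow drawing described above can be sketched as follows. The source does not state the degree of the Bezier curves, so this sketch assumes quadratic curves whose three control points are the tracked body features; the coordinates are made up:

```python
# One of DanceSpace's two body curves, e.g. (left foot, head, right foot),
# sketched as a quadratic Bezier curve through made-up coordinates.
def quad_bezier(p0, p1, p2, t):
    """Point on a quadratic Bezier curve at parameter t in [0, 1]."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

left_foot, head, right_foot = (0.0, 0.0), (0.5, 1.8), (1.0, 0.0)
curve = [quad_bezier(left_foot, head, right_foot, t / 10) for t in range(11)]

# The curve starts and ends at the feet and is pulled up toward the head;
# redrawing it with fading colors as the points move yields the trail.
print(curve[0], curve[-1], curve[5])
```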
3.4 City of News

This application is actually an immersive, interactive web browser that lets the user exploit his capability for 3D spatial layout. Most people can remember where things are situated in their homes, but this ability is not used when people browse the Internet. City of News connects web addresses to physical representations in a virtual 3D world. The user starts from a freely chosen home page which has links to other websites. This home page is converted to a city where each link is represented by a building, identified by pictures or text from the webpage. As the user navigates his way through different links, new buildings are raised in the city. Streets and alleys separate the buildings, and the user can move in the streets and find his way to the buildings and to the sites they are linked to. The 3D city forms a virtual representation of the links visited, and thus the user can use his ability to navigate in a city to create a 3D representation of the websites visited. The mapping of the buildings takes place according to certain simple rules that associate the links with areas of the city. Each time a new building is raised, the user is moved in front of that building. City of News is described in detail in Sparacino et al (1999).

Navigation in the city takes place using previously specified gestures and sounds for different actions. The user can move around the city by raising the left or right arm to point straight out. Raising both arms straight up lifts the user from the plane and makes it possible to observe the city from above to get a total picture of it. By pointing at a building and saying "there", the user opens a new webpage. He can scroll down the page by pointing down and scroll up the page by pointing up. Every needed action has a corresponding gesture.

Figure 8: User Navigating in the City of News. Wren et al.
(1999, p.7)

The user can navigate in the City of News by sitting at a desktop, as shown in figure 8, or by standing in a room with projection on the front wall and the floor. The equipment used for identifying the user's gestures is one color camera in front of the user.
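The gesture vocabulary just described lends itself to a simple dispatch table from recognized gestures (and optional spoken words) to navigation actions. A hedged sketch; the gesture and action names are illustrative, not taken from the actual system:

```python
# Hypothetical mapping of (gesture, spoken word) pairs to City of News actions.
GESTURES = {
    ("raise_left_arm", None): "move_left",
    ("raise_right_arm", None): "move_right",
    ("raise_both_arms", None): "fly_up",          # bird's-eye view of the city
    ("point_at_building", "there"): "open_webpage",
    ("point_down", None): "scroll_down",
    ("point_up", None): "scroll_up",
}

def dispatch(gesture, spoken_word=None):
    """Resolve a recognized gesture (plus optional word) to an action name."""
    return GESTURES.get((gesture, spoken_word), "no_action")

print(dispatch("raise_both_arms"))             # fly_up
print(dispatch("point_at_building", "there"))  # open_webpage
print(dispatch("wave"))                        # no_action
```

Keeping the mapping in one table makes it easy to ensure, as the text requires, that every needed action has a corresponding gesture.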
3.5 SURVIVE

SURVIVE (Simulated Urban Recreational Violence Interactive Virtual Environment) is an application where the user's movements, detected by color vision cameras, are mapped directly to the game controls of the popular shooting game Doom. The user stands in a perceptive space in front of the screen holding a large toy gun. The Doom game is played by moving, turning, looking up or down, and pointing with the gun, instead of using the keyboard controls or a game pad. Some instructions are given with voice commands, such as changing weapons and firing the chosen weapon. The user moves around the 3D worlds and tries to kill as many enemies as possible.

Figure 9: Playing Doom with SURVIVE. Wren et al. (1999, p.5)

The user interface is much more intuitive than using the keyboard, and it forgives small mistakes better than the finger based keyboard; the risk of making erroneous moves is also reduced. The game is more immersive and physically demanding than the original keyboard controlled one, according to Wren et al. (1999, p.5). SURVIVE has also been played as a multiplayer game between MIT and British Telecom's research center in the U.K.
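The direct mapping from tracked body state to game controls can be sketched as follows. The thresholds, input readings and control names are illustrative assumptions, not the actual SURVIVE mapping:

```python
# Hypothetical translation of tracked body state into Doom control events.
def body_to_controls(x_offset, lean_angle_deg, gun_pitch_deg):
    """x_offset: normalized sideways position; lean and gun pitch in degrees."""
    controls = []
    if x_offset < -0.2:
        controls.append("strafe_left")
    elif x_offset > 0.2:
        controls.append("strafe_right")
    if lean_angle_deg > 10:            # leaning forward moves the player forward
        controls.append("move_forward")
    if gun_pitch_deg > 15:             # the toy gun's pitch steers the view
        controls.append("look_up")
    elif gun_pitch_deg < -15:
        controls.append("look_down")
    return controls

print(body_to_controls(0.3, 12, -20))  # ['strafe_right', 'move_forward', 'look_down']
```

The dead zones around zero are what makes the interface forgiving of small mistakes: minor body sway produces no control events at all.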
4 CONCLUSIONS

The examples illustrated above show that perceptive spaces, interactive virtual environments or smart rooms, as they are called today, are challenging environments with innumerable possibilities for realizing visual communication between the computer and the human being. They have proved to be more intuitive and more immersive, giving an intense, artistic or realistic form of interaction with virtual worlds. All of the projects above have been very successful, and they have been presented at conferences, exhibitions and museums all over the world. Many of the projects have been developed further over the years from the first prototypes produced.

The equipment used for the projects is still in research and demonstration use. The projects have, however, remained at this demonstration stage, and no noteworthy commercial products based on them have been released on the market. It seems that the breakthrough of perceptive spaces is still to come.

Infrared color camera technology seems to be a promising new area in the detection of the gestures of the human body. One can extrapolate that these cameras can also be used as stereo cameras, giving the possibility to produce 3D models of the person observed. In the near future it will be possible to map the gestures of a person to control 3D avatars in virtual 3D environments in real time. Fully understanding, interpreting and modeling the actions of a human body is a very complex task which leads to a very large number of possible states, according to Pinhanez (1999). We are far from that goal at the moment. We must however remember that the screen, keyboard and mouse used for interaction with the user today can, with the use of cameras, easily be extended in the direction of better understanding the intentions of the human being.

REFERENCES

Bobick, A.; Intille, S.; Davis, J.; Baird, F.; Pinhanez, C.; Campbell, Y.; Ivanov, Y.; Shutte, A.
and Wilson, A. (1999). The KidsRoom: A Perceptually-Based Interactive Immersive Story Environment. PRESENCE: Teleoperators and Virtual Environments, 8(4). Cambridge, MA, USA.

Davis, J. and Bobick, A. (1998). Virtual PAT: A Virtual Personal Aerobics Trainer. Proceedings of the Workshop on Perceptual User Interfaces, San Francisco, CA, USA.

Pinhanez, C. S.; Davis, J. W.; Intille, S.; Johnson, M. P.; Wilson, A. D.; Bobick, A. F.; Blumberg, B. (2000). Physically Interactive Story Environments. IBM Systems Journal, 39(3&4). IBM.

Pinhanez, C. S. (1999). Representation and Recognition of Action in Interactive Spaces. Doctoral thesis, Media Arts and Sciences, Massachusetts Institute of Technology, MA, USA.

Pinhanez, C. S. and Bobick, A. F. (1998). It/I: A Theater Play Featuring an Autonomous Computer Graphics Character. Proceedings of the ACM Multimedia 98 Workshop on Technologies for Interactive Movies. ACM, New York, USA.
Sparacino, F. (2001). (Some) computer vision based interfaces for interactive art and entertainment installations. INTER_FACE Body Boundaries, issue editor Emanuele Quinz, Anomalie, n.2. Paris, France: Anomos.

Sparacino, F.; DeVaul, R.; Wren, C.; MacNeil, R.; Davenport, G.; Pentland, A. (1999). City of News. SIGGRAPH 99, Visual Proceedings, Emerging Technologies, Los Angeles, USA.

Wren, C. R.; Sparacino, F.; Azarbayejani, A. J.; Darrell, T. J.; Davis, J. W.; Starner, T. E.; Kotani, A.; Chao, C. M.; Hlavac, M.; Russell, K. B.; Bobick, A.; Pentland, A. P. (1999). Perceptive Spaces for Performance and Entertainment. Perceptual Computing Section, The MIT Media Laboratory, Cambridge, MA, USA.

WEB REFERENCES

MIT Media:
Interactive Cinema at MIT Media:
Vision and Modeling at MIT Media:
Smart Rooms:
Bobick, A.:
Pinhanez, C.:
Sparacino, F.:
Wren, C.:
Sensing Places:
More informationSIMULATION MODELING WITH ARTIFICIAL REALITY TECHNOLOGY (SMART): AN INTEGRATION OF VIRTUAL REALITY AND SIMULATION MODELING
Proceedings of the 1998 Winter Simulation Conference D.J. Medeiros, E.F. Watson, J.S. Carson and M.S. Manivannan, eds. SIMULATION MODELING WITH ARTIFICIAL REALITY TECHNOLOGY (SMART): AN INTEGRATION OF
More informationRemote Media Immersion (RMI)
Remote Media Immersion (RMI) University of Southern California Integrated Media Systems Center Alexander Sawchuk, Deputy Director Chris Kyriakakis, EE Roger Zimmermann, CS Christos Papadopoulos, CS Cyrus
More informationWhat was the first gestural interface?
stanford hci group / cs247 Human-Computer Interaction Design Studio What was the first gestural interface? 15 January 2013 http://cs247.stanford.edu Theremin Myron Krueger 1 Myron Krueger There were things
More informationPhysically Interactive Story Environments
Physically Interactive Story Environments Claudio Pinhanez *, James Davis, Stephen Intille, Michael Johnson, Andrew Wilson, Aaron Bobick, Bruce Blumberg * IBM TJ Watson Research Center, MIT Media Laboratory,
More informationVideo Games and Interfaces: Past, Present and Future Class #2: Intro to Video Game User Interfaces
Video Games and Interfaces: Past, Present and Future Class #2: Intro to Video Game User Interfaces Content based on Dr.LaViola s class: 3D User Interfaces for Games and VR What is a User Interface? Where
More informationPRODUCTION. in FILM & MEDIA MASTER OF ARTS. One-Year Accelerated
One-Year Accelerated MASTER OF ARTS in FILM & MEDIA PRODUCTION The Academy offers an accelerated one-year schedule for students interested in our Master of Arts degree program by creating an extended academic
More informationCSE 190: Virtual Reality Technologies LECTURE #7: VR DISPLAYS
CSE 190: Virtual Reality Technologies LECTURE #7: VR DISPLAYS Announcements Homework project 2 Due tomorrow May 5 at 2pm To be demonstrated in VR lab B210 Even hour teams start at 2pm Odd hour teams start
More informationThe SCD Architecture and its Use in the Design of Story-Driven Interactive Spaces
The SCD Architecture and its Use in the Design of Story-Driven Interactive Spaces Claudio S. Pinhanez MIT Media Laboratory, 20 Ames Street Cambridge, Massachusetts, MA 02139, USA pinhanez@media.mit.edu
More informationThe Use of Avatars in Networked Performances and its Significance
Network Research Workshop Proceedings of the Asia-Pacific Advanced Network 2014 v. 38, p. 78-82. http://dx.doi.org/10.7125/apan.38.11 ISSN 2227-3026 The Use of Avatars in Networked Performances and its
More informationBODILY NON-VERBAL INTERACTION WITH VIRTUAL CHARACTERS
KEER2010, PARIS MARCH 2-4 2010 INTERNATIONAL CONFERENCE ON KANSEI ENGINEERING AND EMOTION RESEARCH 2010 BODILY NON-VERBAL INTERACTION WITH VIRTUAL CHARACTERS Marco GILLIES *a a Department of Computing,
More informationPerceptual Interfaces. Matthew Turk s (UCSB) and George G. Robertson s (Microsoft Research) slides on perceptual p interfaces
Perceptual Interfaces Adapted from Matthew Turk s (UCSB) and George G. Robertson s (Microsoft Research) slides on perceptual p interfaces Outline Why Perceptual Interfaces? Multimodal interfaces Vision
More informationInteracting within Virtual Worlds (based on talks by Greg Welch and Mark Mine)
Interacting within Virtual Worlds (based on talks by Greg Welch and Mark Mine) Presentation Working in a virtual world Interaction principles Interaction examples Why VR in the First Place? Direct perception
More informationInternational Journal of Informative & Futuristic Research ISSN (Online):
Reviewed Paper Volume 2 Issue 6 February 2015 International Journal of Informative & Futuristic Research An Innovative Approach Towards Virtual Drums Paper ID IJIFR/ V2/ E6/ 021 Page No. 1603-1608 Subject
More informationControlling Humanoid Robot Using Head Movements
Volume-5, Issue-2, April-2015 International Journal of Engineering and Management Research Page Number: 648-652 Controlling Humanoid Robot Using Head Movements S. Mounica 1, A. Naga bhavani 2, Namani.Niharika
More informationExploring 3D in Flash
1 Exploring 3D in Flash We live in a three-dimensional world. Objects and spaces have width, height, and depth. Various specialized immersive technologies such as special helmets, gloves, and 3D monitors
More informationAir Marshalling with the Kinect
Air Marshalling with the Kinect Stephen Witherden, Senior Software Developer Beca Applied Technologies stephen.witherden@beca.com Abstract. The Kinect sensor from Microsoft presents a uniquely affordable
More information.VP CREATING AN INVENTED ONE POINT PERSPECTIVE SPACE
PAGE ONE Organize an invented 1 point perspective drawing in the following order: 1 Establish an eye level 2 Establish a Center Line Vision eye level vision Remember that the vanishing point () in one
More information4 HUMAN FIGURE. Practical Guidelines (Secondary Level) Human Figure. Notes
4 HUMAN FIGURE AIM The study of Human figure concerns in capturing the different characters and emotional expressions. Both of these could be achieved with gestures and body languages. INTRODUCTION Human
More informationDESN2270 Final Project Plan
DESN2270 Final Project Plan Contents Website Content... 1 Theme... 1 Narrative... 1 Intended Audience... 2 Audio/ Animation Sequences... 2 Banner... 2 Main Story... 2 Interactive Elements... 4 Game...
More informationBrowsing 3-D spaces with 3-D vision: body-driven navigation through the Internet city
To be published in: 3DPVT: 1 st International Symposium on 3D Data Processing Visualization and Transmission, Padova, Italy, June 19-21, 2002 Browsing 3-D spaces with 3-D vision: body-driven navigation
More informationA Quick Spin on Autodesk Revit Building
11/28/2005-3:00 pm - 4:30 pm Room:Americas Seminar [Lab] (Dolphin) Walt Disney World Swan and Dolphin Resort Orlando, Florida A Quick Spin on Autodesk Revit Building Amy Fietkau - Autodesk and John Jansen;
More informationMECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL REALITY TECHNOLOGIES
INTERNATIONAL CONFERENCE ON ENGINEERING AND PRODUCT DESIGN EDUCATION 4 & 5 SEPTEMBER 2008, UNIVERSITAT POLITECNICA DE CATALUNYA, BARCELONA, SPAIN MECHANICAL DESIGN LEARNING ENVIRONMENTS BASED ON VIRTUAL
More informationNovember 30, Prof. Sung-Hoon Ahn ( 安成勳 )
4 4 6. 3 2 6 A C A D / C A M Virtual Reality/Augmented t Reality November 30, 2009 Prof. Sung-Hoon Ahn ( 安成勳 ) Photo copyright: Sung-Hoon Ahn School of Mechanical and Aerospace Engineering Seoul National
More informationChristopher R. Wren Flavia Sparacino Ali J. Azarbayejani. Michal Hlavac Kenneth B. Russell Alex P. Pentland
M.I.T Media Laboratory Perceptual Computing Section Technical Report No. 372 Appears in Applied Articial Intelligence, Vol. 11, No. 4, June 1997 Perceptive Spaces for Performance and Entertainment: Untethered
More informationShopping Together: A Remote Co-shopping System Utilizing Spatial Gesture Interaction
Shopping Together: A Remote Co-shopping System Utilizing Spatial Gesture Interaction Minghao Cai 1(B), Soh Masuko 2, and Jiro Tanaka 1 1 Waseda University, Kitakyushu, Japan mhcai@toki.waseda.jp, jiro@aoni.waseda.jp
More informationDirect gaze based environmental controls
Loughborough University Institutional Repository Direct gaze based environmental controls This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation: SHI,
More informationThumbsUp: Integrated Command and Pointer Interactions for Mobile Outdoor Augmented Reality Systems
ThumbsUp: Integrated Command and Pointer Interactions for Mobile Outdoor Augmented Reality Systems Wayne Piekarski and Bruce H. Thomas Wearable Computer Laboratory School of Computer and Information Science
More informationComposition. And Why it is Vital to Understand Composition for Artists
Composition And Why it is Vital to Understand Composition for Artists Composition in painting is much the same as composition in music, and also ingredients in recipes. The wrong ingredient a discordant
More informationDiscussion on Different Types of Game User Interface
2017 2nd International Conference on Mechatronics and Information Technology (ICMIT 2017) Discussion on Different Types of Game User Interface Yunsong Hu1, a 1 college of Electronical and Information Engineering,
More informationShort Course on Computational Illumination
Short Course on Computational Illumination University of Tampere August 9/10, 2012 Matthew Turk Computer Science Department and Media Arts and Technology Program University of California, Santa Barbara
More informationA system for creating virtual reality content from make-believe games
A system for creating virtual reality content from make-believe games Adela Barbulescu, Maxime Garcia, Antoine Begault, Laurence Boissieux, Marie-Paule Cani, Maxime Portaz, Alexis Viand, Romain Dulery,
More informationEnvironmental control by remote eye tracking
Loughborough University Institutional Repository Environmental control by remote eye tracking This item was submitted to Loughborough University's Institutional Repository by the/an author. Citation: SHI,
More informationTouch & Gesture. HCID 520 User Interface Software & Technology
Touch & Gesture HCID 520 User Interface Software & Technology Natural User Interfaces What was the first gestural interface? Myron Krueger There were things I resented about computers. Myron Krueger
More informationInteractive and Immersive 3D Visualization for ATC
Interactive and Immersive 3D Visualization for ATC Matt Cooper & Marcus Lange Norrköping Visualization and Interaction Studio University of Linköping, Sweden Summary of last presentation A quick description
More informationA Multimodal Locomotion User Interface for Immersive Geospatial Information Systems
F. Steinicke, G. Bruder, H. Frenz 289 A Multimodal Locomotion User Interface for Immersive Geospatial Information Systems Frank Steinicke 1, Gerd Bruder 1, Harald Frenz 2 1 Institute of Computer Science,
More informationResearch on product design and application based on virtual reality. technology and media interactive art
International Conference on Computational Science and Engineering (ICCSE 2015) Research on product design and application based on virtual reality technology and media interactive art Gang Liu 1,a,* and
More informationVirtual Reality Calendar Tour Guide
Technical Disclosure Commons Defensive Publications Series October 02, 2017 Virtual Reality Calendar Tour Guide Walter Ianneo Follow this and additional works at: http://www.tdcommons.org/dpubs_series
More informationInterface Design V: Beyond the Desktop
Interface Design V: Beyond the Desktop Rob Procter Further Reading Dix et al., chapter 4, p. 153-161 and chapter 15. Norman, The Invisible Computer, MIT Press, 1998, chapters 4 and 15. 11/25/01 CS4: HCI
More informationUser Interfaces. What is the User Interface? Player-Centric Interface Design
User Interfaces What is the User Interface? What works is better than what looks good. The looks good can change, but what works, works UI lies between the player and the internals of the game. It translates
More informationControlling vehicle functions with natural body language
Controlling vehicle functions with natural body language Dr. Alexander van Laack 1, Oliver Kirsch 2, Gert-Dieter Tuzar 3, Judy Blessing 4 Design Experience Europe, Visteon Innovation & Technology GmbH
More informationARMY RDT&E BUDGET ITEM JUSTIFICATION (R2 Exhibit)
Exhibit R-2 0602308A Advanced Concepts and Simulation ARMY RDT&E BUDGET ITEM JUSTIFICATION (R2 Exhibit) FY 2005 FY 2006 FY 2007 FY 2008 FY 2009 FY 2010 FY 2011 Total Program Element (PE) Cost 22710 27416
More informationCREATING. Digital Animations. by Derek Breen
CREATING Digital Animations by Derek Breen ii CREATING DIGITAL ANIMATIONS Published by John Wiley & Sons, Inc. 111 River Street Hoboken, NJ 07030 5774 www.wiley.com Copyright 2016 by John Wiley & Sons,
More informationVisual information is clearly important as people IN THE INTERFACE
There are still obstacles to achieving general, robust, high-performance computer vision systems. The last decade, however, has seen significant progress in vision technologies for human-computer interaction.
More informationThe User Activity Reasoning Model Based on Context-Awareness in a Virtual Living Space
, pp.62-67 http://dx.doi.org/10.14257/astl.2015.86.13 The User Activity Reasoning Model Based on Context-Awareness in a Virtual Living Space Bokyoung Park, HyeonGyu Min, Green Bang and Ilju Ko Department
More informationPolytechnical Engineering College in Virtual Reality
SISY 2006 4 th Serbian-Hungarian Joint Symposium on Intelligent Systems Polytechnical Engineering College in Virtual Reality Igor Fuerstner, Nemanja Cvijin, Attila Kukla Viša tehnička škola, Marka Oreškovica
More informationThe future of illustrated sound in programme making
ITU-R Workshop: Topics on the Future of Audio in Broadcasting Session 1: Immersive Audio and Object based Programme Production The future of illustrated sound in programme making Markus Hassler 15.07.2015
More informationVision for a Smart Kiosk
Appears in Computer Vision and Pattern Recognition, San Juan, PR, June, 1997, pages 690-696. Vision for a Smart Kiosk James M. Rehg Maria Loughlin Keith Waters Abstract Digital Equipment Corporation Cambridge
More informationINTELLIGENT GUIDANCE IN A VIRTUAL UNIVERSITY
INTELLIGENT GUIDANCE IN A VIRTUAL UNIVERSITY T. Panayiotopoulos,, N. Zacharis, S. Vosinakis Department of Computer Science, University of Piraeus, 80 Karaoli & Dimitriou str. 18534 Piraeus, Greece themisp@unipi.gr,
More informationImmersive Real Acting Space with Gesture Tracking Sensors
, pp.1-6 http://dx.doi.org/10.14257/astl.2013.39.01 Immersive Real Acting Space with Gesture Tracking Sensors Yoon-Seok Choi 1, Soonchul Jung 2, Jin-Sung Choi 3, Bon-Ki Koo 4 and Won-Hyung Lee 1* 1,2,3,4
More information6Visionaut visualization technologies SIMPLE PROPOSAL 3D SCANNING
6Visionaut visualization technologies 3D SCANNING Visionaut visualization technologies7 3D VIRTUAL TOUR Navigate within our 3D models, it is an unique experience. They are not 360 panoramic tours. You
More information*Which code? Images, Sound, Video. Computer Graphics Vocabulary
*Which code? Images, Sound, Video Y. Mendelsohn When a byte of memory is filled with up to eight 1s and 0s, how does the computer decide whether to represent the code as ASCII, Unicode, Color, MS Word
More informationEXPERIMENTAL BILATERAL CONTROL TELEMANIPULATION USING A VIRTUAL EXOSKELETON
EXPERIMENTAL BILATERAL CONTROL TELEMANIPULATION USING A VIRTUAL EXOSKELETON Josep Amat 1, Alícia Casals 2, Manel Frigola 2, Enric Martín 2 1Robotics Institute. (IRI) UPC / CSIC Llorens Artigas 4-6, 2a
More informationStereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays. Habib Abi-Rached Thursday 17 February 2005.
Stereo-based Hand Gesture Tracking and Recognition in Immersive Stereoscopic Displays Habib Abi-Rached Thursday 17 February 2005. Objective Mission: Facilitate communication: Bandwidth. Intuitiveness.
More informationAugmented Reality 3D Pop-up Book: An Educational Research Study
Augmented Reality 3D Pop-up Book: An Educational Research Study Poonsri Vate-U-Lan College of Internet Distance Education Assumption University of Thailand poonsri.vate@gmail.com Abstract Augmented Reality
More informationTouch & Gesture. HCID 520 User Interface Software & Technology
Touch & Gesture HCID 520 User Interface Software & Technology What was the first gestural interface? Myron Krueger There were things I resented about computers. Myron Krueger There were things I resented
More information3D and Sequential Representations of Spatial Relationships among Photos
3D and Sequential Representations of Spatial Relationships among Photos Mahoro Anabuki Canon Development Americas, Inc. E15-349, 20 Ames Street Cambridge, MA 02139 USA mahoro@media.mit.edu Hiroshi Ishii
More informationThe Ultimate Career Guide
Career Guide www.first.edu The Ultimate Career Guide For The Film & Video Industry Learn about the Film & Video Industry, the types of positions available, and how to get the training you need to launch
More informationISCW 2001 Tutorial. An Introduction to Augmented Reality
ISCW 2001 Tutorial An Introduction to Augmented Reality Mark Billinghurst Human Interface Technology Laboratory University of Washington, Seattle grof@hitl.washington.edu Dieter Schmalstieg Technical University
More informationPhantom-X. Unnur Gretarsdottir, Federico Barbagli and Kenneth Salisbury
Phantom-X Unnur Gretarsdottir, Federico Barbagli and Kenneth Salisbury Computer Science Department, Stanford University, Stanford CA 94305, USA, [ unnurg, barbagli, jks ] @stanford.edu Abstract. This paper
More informationCOLLABORATION WITH TANGIBLE AUGMENTED REALITY INTERFACES.
COLLABORATION WITH TANGIBLE AUGMENTED REALITY INTERFACES. Mark Billinghurst a, Hirokazu Kato b, Ivan Poupyrev c a Human Interface Technology Laboratory, University of Washington, Box 352-142, Seattle,
More informationTRADITIONAL PHOTOGRAPHY; THE SPOTTING MICROSCOPE
TRADITIONAL PHOTOGRAPHY; THE SPOTTING MICROSCOPE FROM THE jbhphoto.com BLOG Collection #09-A 10/2013 MUSINGS, OPINIONS, COMMENTARY, HOW-TO AND GENERAL DISCUSSION ABOUT TRADITIONAL WET DARKROOM PHOTOGRAPHY
More informationCOPYRIGHTED MATERIAL. Overview
In normal experience, our eyes are constantly in motion, roving over and around objects and through ever-changing environments. Through this constant scanning, we build up experience data, which is manipulated
More informationHaptic Camera Manipulation: Extending the Camera In Hand Metaphor
Haptic Camera Manipulation: Extending the Camera In Hand Metaphor Joan De Boeck, Karin Coninx Expertise Center for Digital Media Limburgs Universitair Centrum Wetenschapspark 2, B-3590 Diepenbeek, Belgium
More informationHUMAN COMPUTER INTERFACE
HUMAN COMPUTER INTERFACE TARUNIM SHARMA Department of Computer Science Maharaja Surajmal Institute C-4, Janakpuri, New Delhi, India ABSTRACT-- The intention of this paper is to provide an overview on the
More informationUltrasonic Calibration of a Magnetic Tracker in a Virtual Reality Space
Ultrasonic Calibration of a Magnetic Tracker in a Virtual Reality Space Morteza Ghazisaedy David Adamczyk Daniel J. Sandin Robert V. Kenyon Thomas A. DeFanti Electronic Visualization Laboratory (EVL) Department
More informationGamescape Principles Basic Approaches for Studying Visual Grammar and Game Literacy Nobaew, Banphot; Ryberg, Thomas
Downloaded from vbn.aau.dk on: april 05, 2019 Aalborg Universitet Gamescape Principles Basic Approaches for Studying Visual Grammar and Game Literacy Nobaew, Banphot; Ryberg, Thomas Published in: Proceedings
More informationSMART GUIDE FOR AR TOYS AND GAMES
SMART GUIDE FOR AR TOYS AND GAMES Table of contents: WHAT IS AUGMENTED REALITY? 3 AR HORIZONS 4 WHERE IS AR CURRENTLY USED THE MOST (INDUSTRIES AND PRODUCTS)? 7 AR AND CHILDREN 9 WHAT KINDS OF TOYS ARE
More information