A Classification for User Embodiment in Collaborative Virtual Environments


A Classification for User Embodiment in Collaborative Virtual Environments

Katerina MANIA, Alan CHALMERS
Department of Computer Science, University of Bristol, Bristol, UK

Abstract

The general goal of Collaborative Virtual Environments (CVEs) is to provide a space within which people may interact. CVEs are increasingly being used to support collaborative work between geographically separated participants. User embodiment is concerned with providing users with a representation of their choice so as to make others (and themselves) aware of their presence in a virtual space. The taxonomy presented in this paper covers many of the existing networked virtual environments and examines the fundamental interaction interfaces which these systems provide. After first discussing the features of communication that should be supported regardless of the medium available, the investigation reveals incomplete support for non-verbal communication cues across the range of environments examined.

1. Introduction

Most networked virtual communities have until recently been text-based. However, such environments are now increasingly using 3D graphics to represent, in real time, the space and the people that inhabit it. Users, after connecting to a networked system, choose a graphical representation of themselves, termed an embodiment or an avatar [15]. They can then explore the environment by controlling their graphical representation and also interact with other avatars. Although these systems have evolved graphically, communication is still predominantly based on text or audio links. User embodiment serves to indicate the presence of a user in a particular location, while the interface provides limited support for non-verbal communication features [16]. This also affects the self-representation of participants in a CVE and the scope for personalising their chosen body images; these images should convey information about the identity, the personality or even the availability of each user. The basic premise of this paper is, therefore, that incorporating such fundamental behaviour and features is crucial for the credibility of the virtual interaction.

2. Background

The first application of networked computer graphics appeared in 1972 on ARPANET, the computer network developed by the Advanced Research Projects Agency [13]. This network was mainly intended for co-operative work and for sharing information. Today, multi-user virtual environments are used for a variety of purposes, including shared scientific visualisation, training, co-operative work, battlefield simulation and entertainment games. Several platforms exist for building multi-user virtual worlds, some of them free and easily accessible through the Internet. Naturally, the performance of these systems differs from that of high-end applications, which are specialised, expensive and mostly run on dedicated networks. Although this gap is shrinking, the future of networked environments able to accommodate a large number of users and provide complex interfaces and rich user embodiments depends on aligning a number of technical issues (networks, computer graphics capabilities, etc.) and social issues (telephone companies, government regulators, etc.) [14].

3. Non-Verbal Communication

New media, such as distributed virtual environments, force researchers to analyse what is fundamental about communication [17]. We follow Abercrombie [1] in thinking of conversation as relying on all the channels of communication through which information is exchanged by individuals during face-to-face interaction. Language is closely linked with, and supported by, non-verbal communication, which adds to the meaning of utterances, provides feedback, controls synchronisation and also plays a central role in human social behaviour [2].

Facial expressions: The face is one of the most important areas for non-verbal signalling. In general, facial expressions are indicators of personality and emotions, serving also as interaction signals [8]. Facial expressions provide feedback and information about the listener's level of understanding while revealing interest, puzzlement or disbelief. In addition, affective expressions allow listeners to infer the speaker's current emotional state and communicate their audience's emotional reaction to what is being said.

Gaze: Gaze [3] is a general indicator of attention and can be directed at other conversational participants in face-to-face interaction as well as at features of the physical environment. Gaze is closely coordinated with verbal communication. It is used to obtain feedback on the other's responses while talking and to gain extra information about what is being said while listening. In addition, shifts of gaze are used to regulate the synchronisation of speech. Gaze is also used as a signal in starting encounters, in greetings, as a reinforcer and to indicate that a point is understood.

Gestures: The hands, and to a lesser extent the head and feet, can produce a wide range of gestures. Gestures are closely coordinated with speech and support multiple communication functions. They are used to co-ordinate conversational content, achieve reference and assist in turn-taking. Conventional gestures are usually intended to communicate and are normally given and received with full awareness.

Posture: This is the information supplied by the orientation of a conversational participant's body. Posture is an important means of conveying interpersonal attitudes and is associated with emotional states. Posture accompanies speech in a way similar to that of gesture and provides feedback to the speaker about how the message is being received. Body position and orientation can also be used to include or exclude people from the conversation.

Self-representation: Self-representation can be regarded as a special kind of non-verbal communication. In general, the main purpose of manipulating appearance is to send messages about oneself. Thus, people send messages about their social status, their occupation, their personality or their mood. Appearance is also used to signal attitudes towards other people, for example aggression, rebelliousness and formality.

Bodily contact: Physical touch seems to have a primitive significance of heightened intimacy and it produces increased emotional arousal. Some forms of bodily contact are used as interaction signals, like greetings and farewells, or as attention signals. However, the precise meaning of a particular form of touch depends on the culture.
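The channels above form the backbone of the classification applied to the platforms in the next section. Purely as an illustration, the following minimal Python sketch encodes this taxonomy as a data structure; the channel names follow the paper, while the platform entry, the support levels and all identifiers are hypothetical placeholders rather than data taken from the systems reviewed below.

```python
from dataclasses import dataclass, field
from enum import Enum


class Channel(Enum):
    """Non-verbal communication channels discussed in Section 3."""
    FACIAL_EXPRESSION = "facial expression"
    GAZE = "gaze"
    GESTURE = "gesture"
    POSTURE = "posture"
    SELF_REPRESENTATION = "self-representation"
    BODILY_CONTACT = "bodily contact"


class Support(Enum):
    """Coarse level of support a platform offers for a channel."""
    NONE = 0
    PREDEFINED = 1   # fixed buttons or menus, e.g. a gesture panel
    USER_DRIVEN = 2  # driven directly by the user, e.g. tracked motion


@dataclass
class PlatformProfile:
    """Classification record for one multi-user platform."""
    name: str
    support: dict = field(default_factory=dict)  # Channel -> Support

    def unsupported(self):
        """Channels for which the platform offers no support at all."""
        return [c for c in Channel
                if self.support.get(c, Support.NONE) is Support.NONE]


# Hypothetical example entry (values are illustrative, not survey results).
example = PlatformProfile(
    name="ExampleCVE",
    support={Channel.GESTURE: Support.PREDEFINED,
             Channel.SELF_REPRESENTATION: Support.PREDEFINED},
)
print(example.unsupported())  # channels the platform leaves uncovered
```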
4. Platforms

DIVE: The Distributed Interactive Virtual Environment (DIVE) is an Internet-based multi-user virtual reality system developed by the Distributed Systems Laboratory of the Swedish Institute of Computer Science (SICS) [7].

DIVE supports the development of shared multi-user virtual environments, user interfaces and applications. The DIVE platform is experimental and available free for non-commercial use. A participant in a DIVE world is either a human user or an automated application process and is represented by an avatar. The simplest form of embodiment in DIVE is called a 'blockie' and consists of a set of primitive 3D boxes which give a sense of presence and orientation [5]. In some DIVE applications, more sophisticated humanoid avatars are used, incorporating texture-mapped photographs in order to give a stronger sense of identity. Personalisation is available through a set of default options. Embodiments in DIVE have head-movement capabilities, thus directing gaze while navigating. If a user is absent, the relevant embodiment is moved below the ground plane. Touching or grabbing is allowed and this may be used to determine, after a possible angry reaction, whether a user is there [5]. DIVE supports audio as well as the display of real-time video streams.

MASSIVE: MASSIVE (Model, Architecture and System for Spatial Interaction in Virtual Environments) [11], a laboratory prototype from the University of Nottingham, UK, is a virtual reality conferencing system which scales to large numbers of participants. Its users interact in the same virtual world through a variety of different equipment, media and user interfaces (2D, 3D, text, audio) [6]. As in DIVE, the simplest embodiment is a 'blockie'. MASSIVE encourages the use of different colours for the avatars and name labelling to strengthen identity. Users who connect through the text interface are represented by the first character of their names. Each embodiment varies according to the medium through which the participant connects to the system (a user connected through a text interface has a 'T' embossed on his/her head, an audio-capable user has ears, etc.). MASSIVE allows users to personalise their embodiments; however, limited use of that capability has been recorded because of a shortage of modelling tools. Head-movement capabilities are available, as well as a selection of simple pre-programmed gestures such as sleeping (which is also used to indicate the user's presence) and blushing.

VLNet: VLNet (Virtual Life Network) is a networked virtual environment developed by the MIRALab of the University of Geneva and the Computer Graphics Lab of the Swiss Federal Institute of Technology [12]. The system uses 3D human figures for avatar representations. The VLNet creators divide virtual humans according to the methods used to control them: directly controlled, where the face and joint representation is modified using sensors attached to the user's body; user guided, where the user defines tasks for the embodiment to perform; and autonomous, which are self-governing and incorporate internal states of action. In particular, an attempt was recently made to incorporate a non-verbal communication interface in the VLNet system. The interface includes two main windows, the posture panel and the gesture panel. Each panel consists of buttons displaying the actual action available and a textual label. The posture panel includes one section for the body (neutral, attentive, determined, relaxed, insecure, puzzled) and one section for the face (neutral, happy, caring, unhappy, sad, angry). The sections are divided into three columns: positive, negative, neutral. The gesture panel consists of one section for the head/face (yes, no, nod, wink, smile), one for the hand/arm (salute, mockery, alert, insult, good, bad) and one for the body (incomprehension, rejection, welcoming, anger, joy, bow).
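To make the panel layout concrete, here is a minimal sketch of how such a button-driven non-verbal interface could be organised. The option lists are taken from the description above, but the function name, the message format and the way an action would reach the avatar animation engine are illustrative assumptions, not the actual VLNet API.

```python
# Minimal sketch of a button-driven non-verbal communication interface,
# modelled on the VLNet posture and gesture panels described above.
# The option lists follow the paper; everything else is hypothetical.

POSTURE_PANEL = {
    "body": ["neutral", "attentive", "determined", "relaxed", "insecure", "puzzled"],
    "face": ["neutral", "happy", "caring", "unhappy", "sad", "angry"],
}

GESTURE_PANEL = {
    "head/face": ["yes", "no", "nod", "wink", "smile"],
    "hand/arm": ["salute", "mockery", "alert", "insult", "good", "bad"],
    "body": ["incomprehension", "rejection", "welcoming", "anger", "joy", "bow"],
}


def trigger(panel: dict, section: str, action: str) -> dict:
    """Validate a button press and return the message that would be sent
    to the avatar animation engine (a placeholder, not the VLNet protocol)."""
    if action not in panel.get(section, []):
        raise ValueError(f"'{action}' is not available in section '{section}'")
    return {"section": section, "action": action}


# Example: the user clicks 'wink' in the head/face section of the gesture panel.
print(trigger(GESTURE_PANEL, "head/face", "wink"))
```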
Other methods were explored for integrating facial expressions into VLNet, such as video texturing of the face, model-based coding of facial expressions and lip-movement synthesis from speech.

dVS: This is the commercial VR system from DIVISION Ltd., UK, which supports multi-user VR applications. A user can incorporate into the system any preferred embodiment (3D geometry), which is defined in the registry file of the platform on start-up. If a 2D mouse is used to navigate in the world, then the whole body follows the viewpoint around. If the user is wearing a head-mounted display and uses a 3D mouse, then the visual embodiment represents the movement of the tracked joints; inverse kinematics is available to represent elbow position, for example.

If the user's movements are tracked with head and arm sensors, then the user can gesture with these limbs as in the real world. In addition, if the user wears a cyberglove, finger gestures can be tracked. Voice can be used to transmit emotion.

Active Worlds: Active Worlds is client/server software developed by Circle of Fire Studios Inc. The Active Worlds browser (the client software), which is free, allows the user to move around the 3D universe and navigate from one world to another. In addition, the Active Worlds server, which allows the user to own land in the Active Worlds universe, is free for a 30-day trial period. There are also servers (not free) which can be used to build independent multi-user shared environments. The software provides a selection of humanoid avatars of various sexes and ethnicities for the user to choose from. The avatars communicate by means of text, which is displayed in the text window as well as above the avatar that 'speaks' for 30 seconds or until the next message appears. Messages from the closest 12 users are displayed. The interface also provides some predefined action buttons above the world window, such as 'happy', 'angry', 'wave', 'jump', 'fight' and 'dance'. Each body randomly executes a distinctive set of idle motion sequences; for example, some avatars check their watches once in a while.

Blaxxun: Blaxxun was the first company to produce a multi-user VRML-compliant client, called Cybergate. The Blaxxun Community Server evaluation copy is free and enables multi-user web capability for three users. The avatars can communicate through a text window which is placed below the 3D world. The system provides a set of cartoon-like avatars. There are eight gesture options, which are displayed as buttons under the text window or activated in the text area using the G button: hello, hey!, dislike, no, not, bye. Participants can also provide as much information about themselves as they wish; different levels of privacy are available. Users may also define their own custom avatars by changing the initialisation file of the world. These avatars can also incorporate the default gestures by altering the avatar VRML file.

OnLive!: OnLive! Technologies is a company that offers commercial multi-user voice client/server software that enables groups of people to communicate with their own voices over the Internet. The participants use 3D embodiments which consist of only a head, without a body [18]. The OnLive! worlds require the OnLive! Traveler browser in order to visit the world. Once inside, a participant simply speaks via a microphone connected to the PC in order to engage in conversations with anyone in the environment. As in a real room full of people, chat participants who are close seem louder than those whose avatars are further away in the space. There is also a text-based interface which allows the user to choose an avatar through a pull-down menu and type a message. Eye blinking, lip synch, basic face layout and four basic expressions (happy, sad, surprise, anger) are available. Initially, the user chooses an avatar from two pre-defined sets which include three groups of heads to choose from: animals, fantasy characters and people. In addition to changing the emotional state of each selected avatar, users can change the colour of groups of polygons, as well as their size and shape. Avatars may also be personalised through voice modification and disguising options.
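OnLive!'s proximity-dependent loudness can be thought of as a simple distance-based attenuation of each speaker's voice. The sketch below illustrates the idea only; the actual attenuation curve, radii and parameter names used by OnLive! are not documented here, so the linear falloff and the two radius parameters are assumptions.

```python
import math

def voice_gain(speaker_pos, listener_pos,
               full_volume_radius=1.0, audible_radius=20.0):
    """Return a gain in [0, 1] for a speaker as heard by a listener.

    Inside full_volume_radius the voice is at full volume; beyond
    audible_radius it is silent; in between the gain falls off linearly.
    (Illustrative model, not OnLive!'s actual algorithm.)
    """
    distance = math.dist(speaker_pos, listener_pos)
    if distance <= full_volume_radius:
        return 1.0
    if distance >= audible_radius:
        return 0.0
    return 1.0 - (distance - full_volume_radius) / (audible_radius - full_volume_radius)

# Example: a nearby avatar sounds louder than a distant one.
print(voice_gain((0, 0, 0), (0, 2, 0)))   # close  -> high gain
print(voice_gain((0, 0, 0), (15, 0, 0)))  # far    -> low gain
```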
OZ Virtual: OZ Interactive develops Internet software and applications that enable real-time collaborative communication in shared spaces on the Internet, with a strong focus on creative content production. The company released OZ Virtual, a 3D world viewer that plugs into Netscape Navigator or Internet Explorer and is based on proprietary technology. The multi-user capabilities come via servers which are not commercially available. OZ developed its own format, which is a blend of VRML and OZ's own motion control format, while audio is supplied by Voxware. OZ Virtual comes with an avatar editor that offers a selection of pre-defined avatars and then allows the user to modify their appearance. When the user chooses the initial avatar, there are several sections of modifications that can be applied.

The first, labelled 'Pieces', presents the user with a list of avatar parts that can be modified: torso, head or feet. There is no choice for arms. The colours of all the body parts of the avatar can be changed, and scaling can be set for any of the three body areas. The last section is labelled 'Motions' and provides a set of gestures, steps or dances. There are also some expressions available, as well as movement of the eyelashes and a lip-synch mechanism. All OZ avatar movements are created using motion capture data. OZ also built an intranet environment for Ericsson incorporating avatars with animated facial expressions that could even use mobile phones inside the virtual world [18].

Community Place: Sony offers free multi-user server software as well as a browser and an authoring tool. Community Place is based on the VRML specification. Certain objects in the world have pre-programmed "actions" and users can share the experience of manipulating them. A chat window is also provided. In a Community Place world, the user chooses an avatar from a set provided by the author of that world. The user can change the colours of certain parts of the selected avatar. However, the browser accepts any custom-made VRML avatar modelled in any authoring tool. There is also a limited set of gestures/postures provided, although a user with experience in VRML/Java can add further gestures. The default posture set includes: normal, hello, smile, wao (excited), woo (rejection), umm (sceptical), bye and sleep [18].

Quake: Quake is a multi-user virtual reality game. The player navigates very fast through texture-mapped environments using a humanoid avatar (male or female) initially selected from a default set. Facial expressions are not available; however, a set of actions can be triggered using the keyboard: walk slowly, walk, run, jump, crawl, hello, aggression, this way. There is also communication through a chat line; however, messages reach all the players and not just a selected one.

Worlds Chat: Worlds Chat from Worlds Inc. is a 3D social environment where the user can explore individual platforms and rooms on a space station and communicate with other visitors through text. It is the first graphical chat system to incorporate 3D. There is a free demo version of the software with limited features. The full version provides a choice of 40 avatars, the ability for visitors to use custom avatars and unlimited session length. In addition, password protection lets visitors have a permanent identity.

SPLINE: SPLINE (Scalable Platform for Large Interactive Networked Environments) is a multi-user platform developed at the Mitsubishi Electric Research Laboratories. SPLINE provides support for multiple users communicating with each other using natural spoken language, while navigation is based on users cycling around the world. Users also interact with computer simulations which range from the very simple (e.g., a revolving door) to the very complex (e.g., a human-like robot).

5. A glance at virtual physical touch

Physical touch, as a particular strand of non-verbal communication, is used when an interpersonal bond is being offered or established. And, similar to a direct gaze, physical contact seems to strengthen other messages, for example persuasion [2]. However, touch also carries the implication of invasion of privacy. Researchers are starting to address the issue of touch and haptics in the virtual world. Some initial experiments were undertaken dealing with the influence of haptic communication on the sense of being together.
This work concluded that haptic feedback adds significantly to the sense of togetherness [10]. However, incidents have been reported where personal space was invaded by avatars without permission, indicating that many social conventions are transferred from the physical world to the virtual world [4]. In general, there is limited support for bodily contact in existing systems; however, there are many technical as well as social issues to be examined further as this capability is incorporated into the virtual world.

6. Discussion

The premise of this paper is that communication is accomplished as a combination of speech/language and non-verbal communication features [2]. In addition, face-to-face interaction is accompanied by involuntary expressions which make communication lively and more natural. We examined the way that existing multi-user platforms incorporate non-verbal communication and the respective interfaces concerned. Most of the systems provide a limited set of gestures, facial expressions or actions which are activated by mouse clicks on the relevant buttons. Although VLNet, for example, incorporates non-verbal communication cues for the face, the body and the hands, in general there is a significant overlap between the user's actions and the mouse clicks [9]. The non-verbal communication interface of the text-based systems, such as Active Worlds, Blaxxun and Worlds Chat, tends not to be used at all because the user is busy typing away in the text window. Active Worlds includes some involuntary expressions, and OnLive! as well as OZ incorporate lip-synch, but these actions are automatic and are not directed by the user. Motion tracking, on the other hand, as used by systems such as dVS, can capture movements or gestures directly from the user, but it requires specialised hardware. Gaze, for instance, is not mapped correctly because the user needs to look at the screen. In addition, emotional displays were limited in all the systems. Voice/audio support can help to convey emotion in DIVE, MASSIVE, OnLive! and OZ; however, research has shown [6] that troublesome audio can affect turn-taking in a conversation; again, non-verbal communication cues are invaluable. A novel approach was presented by Vilhjalmsson in his thesis [16]. His system, 'BodyChat', treats the avatar as an autonomous agent whose face is animated based on a set of parameters. Still, the avatar is partially controlled by the system and not directly by the user. Bearing in mind that the interface should be driven by the task, the challenge for the future is to create an interface which incorporates all aspects of non-verbal communication.
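The split between system-driven and user-driven control discussed above for BodyChat can be illustrated with a minimal sketch: the user supplies only a high-level intent, while the system generates the low-level non-verbal behaviour. The rules, parameter names and probabilities below are invented for illustration and are not taken from Vilhjalmsson's system.

```python
import random

def automatic_behaviour(intent: str, partner_is_speaking: bool) -> dict:
    """Map a high-level user intent to system-generated non-verbal cues
    (an illustrative sketch of mixed control, not BodyChat's algorithm)."""
    behaviour = {"gaze": "away", "expression": "neutral", "gesture": None}
    if intent == "available":          # willing to be approached
        behaviour["gaze"] = "at partner"
        behaviour["expression"] = "smile"
    elif intent == "engaged":          # currently in conversation
        behaviour["gaze"] = "at partner" if partner_is_speaking else "away"
        if partner_is_speaking and random.random() < 0.3:
            behaviour["gesture"] = "nod"   # occasional back-channel feedback
    return behaviour

# The user only states an intent; the avatar's face and gaze follow automatically.
print(automatic_behaviour("engaged", partner_is_speaking=True))
```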
References

[1] Abercrombie D., Paralanguage, in Communication in Face-to-Face Interaction, ed. Laver J., Hutcheson S.
[2] Argyle M., Bodily Communication, Methuen & Co Ltd, London.
[3] Argyle M., Cook M., Gaze and Mutual Gaze, Cambridge University Press, UK, 1976.
[4] Becker B., Mark G., Social Conventions in Collaborative Environments, Proc. of CVE'98, Manchester, UK.
[5] Benford S., Bowers J., Fahlen L.E., Greenhalgh C., Snowdon D., Embodiments, avatars, clones and agents for multi-user, multi-sensory virtual worlds, Multimedia Systems, Springer-Verlag, 1997.
[6] Bowers J., Pycock J., O'Brien J., Talk and Embodiment in Collaborative Virtual Environments, Proc. of CHI'96.
[7] Carlsson C., Hagsand O., DIVE - A Multi User Virtual Reality System, Proc. of IEEE VRAIS, September 1993.
[8] Ekman P., Friesen W.V., Unmasking the Face, Prentice-Hall Inc., 1975.
[9] Henne P., Mark G., Voss A., Gestures for Social Communication for Virtual Environments, BT Presence Workshop.
[10] Ho C., Basdogan C., Slater M., Durlach N., Srinivasan M.A., An Experiment on the Influence of Haptic Communication on the Sense of Being Together, BT Presence Workshop.
[11] Greenhalgh C., Benford S., Virtual Reality Tele-conferencing: Implementation and Experience, Proc. of ECSCW'95.
[12] Guye-Vuilleme A., Capin T.K., Pandzic I.S., Magnenat Thalmann N., Thalmann D., Nonverbal Communication Interface for Collaborative Virtual Environments, Proc. of CVE'98, Manchester, UK.
[13] Norberg A., O'Neill J., Transforming Computer Technology: Information Processing at the Pentagon, Johns Hopkins University Press, Baltimore.
[14] Schroeder R., Networked Worlds: Social Aspects of Multi-User Virtual Reality Technology, Sociological Research Online, vol. 2, no. 4, 1997.
[15] Stephenson N., Snow Crash, Bantam Books, 1993.
[16] Vilhjalmsson H.H., Autonomous Communicative Behaviors in Avatars, M.Sc. thesis, MIT, 1997.
[17] Whittaker S., O'Connaill B., The Role of Vision in Face-to-Face and Mediated Communication, in Video-Mediated Communication, ed. Finn K.E., Sellen A.J., Wilbur S.B.
[18] Wilcox S.K., Guide to 3D Avatars, John Wiley and Sons Inc., 1998.
