
KTH DT2140

Home Sweet Virtual Home

Niklas Blomqvist (nblomqvi@kth.se)
Carlos Egusquiza (carlosea@kth.se)
Annika Strålfors (stralf@kth.se)

Supervisor: Christopher Peters

January 20, 2015

ABSTRACT

Multimodal interaction is an important area in human-computer interaction. This project explores how the Oculus Rift works together with voice recognition and standard controls in a virtual environment. Users are able to walk around in two rooms and change certain parameters of the furniture. A study was conducted and user experience data was collected through a survey in order to evaluate the application. None of the users experienced motion sickness, and all found that navigating the menu and the rooms with Oculus Rift head tracking combined with standard controls worked well. Voice input was not perceived as a completely natural modality for interacting with the program.

CONTENTS

1 Introduction
2 Related work
3 Implementation
  3.1 Initial thoughts
  3.2 Change of plans
  3.3 Final implementation
4 Results
  4.1 Application
  4.2 Evaluation
5 Discussion
6 Conclusion

1 INTRODUCTION

Multimodal interaction is an expanding area in human-computer interaction in which users can interact with technology through several input and output modalities. This project aimed to combine head tracking with voice recognition and standard controls in an application where users could explore a virtual room through an Oculus Rift. A virtual representation of a room in a house environment was created in Unity together with a GUI. Through the Oculus Rift, the user could explore and manipulate certain objects in order to see how they would look in the real-world room. The objects consisted of different pieces of furniture for which the user could change textures, colors and models. A future use of the application might be commercial, in web furniture shops where users can download models of furniture and import them into a virtual representation of a room in their house, in order to explore how the furniture will look in the context of the real-world room.

The Oculus Rift is a head-mounted display (HMD) which will most likely become available for commercial use sometime in 2015. It has great potential in many usability areas, but not much research has been done on it since it is fairly new. Motion sickness is a big issue when developing virtual environments used with an HMD. In scenarios where the user's eyes are occupied, voice input has shown superiority over, for example, keyboard entry in many applications [1]. Furthermore, voice recognition has proved useful in certain environments, e.g. to store and forward messages, for alerts in busy environments and for blind or motor-impaired users. However, some researchers argue that voice input is inefficient compared to physical controls in tasks that require problem solving, due to how the brain processes information [2]. This project wanted to explore how well Oculus Rift head tracking and voice recognition could be used together for navigation in the virtual room. This report consists of a description of the application, followed by an evaluation in which users tested the application and answered a survey. The results are presented together with a subsequent discussion.

2 RELATED WORK

The role that voice input plays in human-computer interaction is significant under certain circumstances: when the user is disabled, when pronunciation itself is the subject of the computer use, when natural language interaction is preferred, when only a limited keyboard and/or screen is available, or when the user's hands or eyes are busy. When using an Oculus Rift, the user's eyes are looking into the virtual environment and the user cannot see a standard control such as the keyboard. Voice could therefore be a better input method than the keyboard in this scenario. In many applications where the user's input is constrained, voice leads to faster task performance and fewer errors than keyboard entry [1]. However, the use of voice recognition as an input modality in human-computer interaction is a challenging area with slow development compared to visual interfaces. Spoken language is a very effective and natural way for most people to interact. As addressed in the article The limits of speech recognition, voice recognition can

create severe limitations when implemented in human-computer systems [2]. Speech is difficult to review and edit, it is a slow way of representing information, and it interferes significantly with other cognitive tasks. As discussed in the article, one reason might be that humans find it more difficult to talk and think at the same time than to use standard controls, i.e. keyboard or mouse, in parallel with thought processes. The reason is that physical activity is processed in other parts of the brain than problem solving, while speaking and listening are processed in the same brain area as short-term and working memory. Using speech as an input modality in applications that demand a lot of attention can therefore be problematic. In order to create effective interaction, designers need to set realistic goals for speech-based human-computer interaction and acknowledge the limitations of voice recognition when it is performed in parallel with problem solving.

The Oculus Rift is probably the leading low-cost HMD at this point. There are, as mentioned before, several other HMDs, but most of these are intended for use in labs and are much more expensive. A study comparing the Oculus Rift development kit 1 (DK1) with the high-cost Nvis SX60 was conducted in 2014 [3]. The Oculus Rift has a lower resolution (640 x 800 per eye compared to the Nvis's 1280 x 1024 per eye) but a higher field of view (FoV). The Oculus Rift is also lighter, weighing about 480 g against the Nvis's 1250 g. There are some other differentiating factors between the two systems. Two tests were conducted, followed by a survey about motion sickness. The first test compared the Oculus Rift to the Nvis SX60 in a task of egocentric distance estimation. The second test was divided into three tasks: sorting, searching and viewing. The Oculus Rift was used with two Razer Hydra controllers, with movement controlled by the analog stick on the right controller, and the environment was built with the pro license of the Unity3D game engine. The Nvis SX60 system was used with a Cyberglove II for grasping and a Logitech Freedom wireless joystick for movement. The results showed that the low-cost Oculus Rift outperformed the high-cost Nvis SX60 in both the distance estimation task and the sorting, searching and viewing tasks. In the egocentric distance test, the results were obtained by calculating the ratio of distance walked to true distance (a ratio of 0.9 means participants walked 90 % of the true distance). The Oculus Rift users scored about 0.9 (where 1 is perfect estimation) below 10 meters and about 1.1 beyond that, whilst the Nvis SX60 users scored around 0.55 for all distances. The task completion time for the sorting and searching tasks was lower with the Oculus Rift, and the participants felt more present in the virtual environment viewed in the Oculus Rift. Two participants had to withdraw from the test due to motion sickness when using the Oculus Rift. The authors suggest this might be due to the low resolution, but state that there are many factors at play in determining visual discomfort in 3D displays. The new Oculus Rift (DK2) that is available now has a higher resolution of 960 x 1080 per eye.

The development of new low-cost interface technologies like the Oculus Rift has renewed the interest in virtual environments, especially for private entertainment use. Side effects such as nausea from cybersickness, the sickness experienced by users of head-steered virtual reality systems, are a major issue [4].
Simulator sickness and motion sickness have been well studied previously, yet many of the issues are still unresolved. One study suggests that a big factor in why they are still unresolved is the way cybersickness is measured, and that more precise tools to objectively measure cybersickness are needed in order to solve some of these issues. Most methods used to measure cybersickness today are subjective

and based on questionnaires; while useful for providing a snapshot of the participant experience, this approach is prone to problems and does not provide uninterrupted real-time monitoring while the participant is in the virtual environment.

Studies have shown that distances in virtual reality environments are often underestimated compared to distance perception in the real world [5]. In the real world, distances up to 25 meters have been shown to be perceived quite accurately. In one of these studies, two main explanations for why distance perception in virtual environments tends to be underestimated were proposed. The first is that graphical information is missing from the rendering of the scene. The second is that it depends on the immersive display technology. In the study, an experiment with action tasks related to egocentric distance estimation was conducted. The participants viewed targets on the ground at different distances (2 m, 3.5 m and 5 m) and were then instructed to walk towards the previously seen targets without vision. The participants were instructed to memorize the target and its surroundings before their vision was blacked out. There were three different environments: a real hallway, a 360-degree panorama photograph of the hallway displayed in an HMD, and a computer graphics rendering of the hallway displayed in an HMD. After the participants had observed the target, their vision was blacked out (either with a blindfold or by clearing the HMD display) and they were instructed to walk towards the target and stop when they perceived that they had covered the distance to it. The results showed a significant difference between distance perception in the real world and in the two virtual representations of the hallway. Distance perception was most accurate under the real-world conditions. The panorama photograph of the hallway resulted in slightly better distance perception than the computer graphics rendering, but the difference was very small compared to results from previous research. A possible explanation is the complexity of the graphical model and the high resolution of the display system. The results confirmed that distance perception is more accurate under real-world conditions and suggest that, because of the small differences between the estimations under the two virtual conditions, the compression of egocentric distance in virtual environments is likely caused by the HMD system itself.

When reconstructing virtual environments, cameras with a built-in depth sensor can be used. The consumer-grade Kinect camera has the potential to be used in mapping applications where the accuracy requirements are less strict [6]. It determines the depth of different objects by triangulation, performed by the sensor's main components: an IR laser emitter, an IR camera and an RGB camera. The random errors in the depth data are smallest when the object is close to the sensor; increasing the distance to the sensor makes the random error grow quadratically. The depth resolution also decreases quadratically, reaching its lowest value at the maximum range of 5 m.
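To make the quadratic growth concrete: the depth recovered by such a triangulating sensor is inversely proportional to the measured disparity, so a small disparity error is amplified quadratically with distance. The standard back-of-the-envelope derivation is sketched below; the symbols f (focal length), b (baseline), d (disparity) and σ_d (disparity noise) follow the usual stereo convention and are not defined in this report.

\[
Z = \frac{f\,b}{d}
\quad\Rightarrow\quad
\frac{\partial Z}{\partial d} = -\frac{f\,b}{d^{2}} = -\frac{Z^{2}}{f\,b}
\quad\Rightarrow\quad
\sigma_Z \approx \frac{Z^{2}}{f\,b}\,\sigma_d .
\]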

3 IMPLEMENTATION

3.1 INITIAL THOUGHTS

Our goals were different in the planning stage of the project. We had planned to create a virtual environment by scanning an existing room with the help of the Kinect. The virtual room would be navigated with a Wii Remote in combination with voice commands. The main goal was multimodality: it is important to let the user decide which modality he or she wants to use. The user would have the possibility to change the color, model or texture of the room's furniture. One important aspect of this setup was that, since the modeled room would be an existing room, we would be able to analyse distance perception in the virtual environment.

3.2 CHANGE OF PLANS

We found it difficult to find free scanning software for the Kinect. The ones we found were very limited and required a lot of computational power, preferably computers with GPU acceleration. The laptops available could not perform these tasks efficiently. These difficulties led to a change of plans. Instead, we chose to use existing 3D models downloaded from the online 3D model source Turbosquid. This solved part of the problem and allowed us to focus on functionality. Another difference is that the Wii Remote was not used for navigation, mostly due to lack of time. Navigation is instead done with the keyboard: the keys W, A, S and D walk in each direction, and Q and E rotate the view.

3.3 FINAL IMPLEMENTATION

The 3D models were imported into Unity3D, a well-known game engine, where they were put together into the virtual environment. Unity3D makes it easier to implement physics (such as collision and gravity) in the game, so the developers don't have to code everything from scratch. The virtual environment uses the Microsoft Speech API for the voice commands. There are currently four voice commands that can be used: select, bedroom, kitchen, and close. Their functionality is self-explanatory. Environments created in Unity can be visualized with the help of the Oculus Rift. This is done by attaching two cameras (one for each eye) to the player character. The player character is just an invisible capsule-shaped collider, which makes it collide with the physical objects in the room rather than pass through them.
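As an illustration of how the navigation described in section 3.2 might be wired up in Unity, the following is a minimal C# sketch. The report does not include source code, so the class name, the speed values and the use of a CharacterController (itself a capsule-shaped collider) are assumptions; the actual project may have structured this differently.

    using UnityEngine;

    // Minimal sketch of the navigation scheme from section 3.2: W/A/S/D walk,
    // Q/E rotate. Attach to the player object that carries the two eye cameras.
    // A CharacterController is assumed so that movement respects collisions
    // with the furniture instead of passing through it.
    [RequireComponent(typeof(CharacterController))]
    public class PlayerNavigation : MonoBehaviour
    {
        public float moveSpeed = 2f;   // metres per second (illustrative value)
        public float turnSpeed = 60f;  // degrees per second (illustrative value)

        private CharacterController controller;

        void Start()
        {
            controller = GetComponent<CharacterController>();
        }

        void Update()
        {
            // Translation with W/A/S/D relative to the player's current facing.
            Vector3 move = Vector3.zero;
            if (Input.GetKey(KeyCode.W)) move += transform.forward;
            if (Input.GetKey(KeyCode.S)) move -= transform.forward;
            if (Input.GetKey(KeyCode.A)) move -= transform.right;
            if (Input.GetKey(KeyCode.D)) move += transform.right;
            controller.SimpleMove(move.normalized * moveSpeed);

            // Rotation with Q/E around the vertical axis.
            if (Input.GetKey(KeyCode.Q)) transform.Rotate(0f, -turnSpeed * Time.deltaTime, 0f);
            if (Input.GetKey(KeyCode.E)) transform.Rotate(0f, turnSpeed * Time.deltaTime, 0f);
        }
    }

The four voice commands could be registered along the lines of the sketch below. It uses Unity's KeywordRecognizer (UnityEngine.Windows.Speech), which sits on top of the same Windows speech platform as the Microsoft Speech API mentioned above; the report does not say exactly how the API was integrated into Unity, so this class and its menu callbacks are hypothetical.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;
    using UnityEngine.Windows.Speech;

    // Hypothetical wiring of the four voice commands (select, kitchen, bedroom,
    // close) to menu actions; the Debug.Log calls stand in for the real menu code.
    public class VoiceCommands : MonoBehaviour
    {
        private KeywordRecognizer recognizer;
        private Dictionary<string, Action> commands;

        void Start()
        {
            commands = new Dictionary<string, Action>
            {
                { "select",  () => Debug.Log("Activate the highlighted menu item") },
                { "kitchen", () => Debug.Log("Open the kitchen menu") },
                { "bedroom", () => Debug.Log("Open the bedroom menu") },
                { "close",   () => Debug.Log("Close the current menu") },
            };

            // The recognizer listens only for this small, fixed vocabulary,
            // which keeps false positives down compared to free-form dictation.
            recognizer = new KeywordRecognizer(commands.Keys.ToArray());
            recognizer.OnPhraseRecognized += OnPhraseRecognized;
            recognizer.Start();
        }

        private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
        {
            Action action;
            if (commands.TryGetValue(args.text, out action))
                action();
        }

        void OnDestroy()
        {
            if (recognizer == null) return;
            if (recognizer.IsRunning) recognizer.Stop();
            recognizer.Dispose();
        }
    }

Restricting the grammar to a handful of keywords, as above, is also one way to mitigate the false positives discussed in sections 5 and 6.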

4 RESULTS

This section presents the results from the data collected in the evaluation, together with snapshots of the final product.

4.1 APPLICATION

Below are some snapshots from the application. The end product consisted of two rooms in which the user could navigate using standard controls and furnish the rooms using both standard controls and voice recognition. A full video presentation of the application is available in the reference section [7].

4.2 EVALUATION

Below are the results from the survey evaluation. Each diagram corresponds to one of the questions in the survey. The total number of participants was five, and the participants were instructed to answer the survey directly after they had tried the application. As seen in the first question, none of the participants experienced any nausea or motion sickness. The participants were also consistent in feeling that navigation through head tracking in the menu worked well. All participants felt that the combination of Oculus Rift head tracking and standard controls for navigation also worked well. Furthermore, 40 % of the participants felt that voice worked as a natural modality to interact with

the program, while 60 % were unsure. In the question regarding which modality the participants preferred to use, standard controls or voice recognition, 60 % preferred standard controls while 40 % favoured voice recognition for interaction with the program.

5 DISCUSSION

This project did not put much emphasis on the evaluation, but focused more on implementing a multimodal program. Nevertheless, we did a smaller evaluation to get some insight into the feel of the program and how the different modalities worked with each other.

None of our participants felt motion sick during testing, although this is normally a big problem with HMDs. We used the latest Oculus Rift (DK2), which has a high resolution and field of view, which have been shown to decrease motion sickness, so this could be one factor in why no one felt motion sick. Participants tested the program for only about two to three minutes, which is not very long, and being in a calm environment could also be a factor.

Navigation in the menu using head tracking worked well for all participants, which was somewhat expected; many current Oculus Rift applications use this technique. Navigating the room with the Oculus Rift and the keyboard also worked well for all participants, but would probably not work as well in more complex scenarios. The Oculus Rift occupies your eyes, so finding keys on the keyboard is a big problem for someone who is not used to a keyboard. Because of this we only added one button for selecting things, and our initial idea of integrating the program with a Wii Remote or some other controller would probably have been the better option, since it is easier to find buttons on a controller than on a keyboard without looking. Also, if you remove your hand from the designated

control area (around the WASD keys), you would probably need to remove the headset to find it again; this is not a problem with a Wii Remote.

Most participants also preferred the standard control (the keyboard) over voice input. This could be because of our scenario, where selecting something could be done with either the keyboard or voice. Saying "select" does not feel as natural as saying, for example, "change texture" or "change color to red". When we used too many word inputs, the Windows speech service recognized them poorly, which resulted in us only having four voice inputs: select, kitchen, bedroom and close. Another factor in why the keyboard was preferred could be that choosing something in a menu by clicking its box feels natural with a keyboard and less natural by voice, though in another scenario the feeling could be reversed. Movement was also done with the keyboard, so the participants were already using it before entering a menu. Lastly, most participants did not know whether voice felt like a natural modality for interacting with the program, while the rest felt it was natural. This is again probably because of our specific scenario, where the keyboard is already used for navigation and clicking in a menu feels more natural with a keyboard than with voice, because that is how we normally do it. Having more natural voice commands, or increasing the complexity of what the user can do in the environment, would probably make voice feel more natural when interacting with the program.

During the course of this project we hit some bumps in the road, but nothing we could not manage to solve. Initially, it was hard for us to figure out what kind of project we wanted to do, and to help us narrow it down we started by choosing what technology we wanted to use. Early on we knew that we wanted to work with the Oculus Rift in some kind of environment. We had thoughts of re-creating a real environment and then comparing it with the real one, thus also having the opportunity to get some valuable human perception data from the survey. We did some research and had a meeting with our supervisor Chris, after which we decided to try to integrate a Kinect into our project, to make a virtual environment from a real one with the help of the Kinect's built-in depth sensor. After much research and experimentation, we decided to skip these parts: we would not have time to both build an environment and focus on the multimodality aspects of the project, which should be the center of the project. We also had intentions of making all models ourselves in Maya, but decided to skip this too because of the project's time limit, as well as skipping the Wii Remote integration in favour of the keyboard. We were very optimistic from the start about what we wanted to accomplish and had to scale it down as the project progressed. Nevertheless, what we accomplished in the end is something we are proud of. We worked with tools and environments we had not worked with before and successfully created something of value. We got to create something with Unity3D using keyboard, voice and head motion tracking as inputs, and have learnt a great deal about these techniques.

In addition to being optimistic about the goals of the project and pressed for time, we encountered another unforeseen problem. We had talked with another group that had access to the Visualization studio during the winter holiday, so that we could also have access when it was closed.
But the person with the valid passcard went on holiday, and neither we nor the rest of their group could get access to the Visualization studio during those two weeks. Only

having access to an Oculus Rift in the studio, we had to do most of the work without one and then, in the last couple of days, integrate it and try it out with an Oculus Rift. This left us with very little time to polish the program beyond the intended functionality.

What worked well within the group was our cooperation. We met up when we needed to and talked on Facebook or Skype to update each other on progress and goals. It was never a problem for us to change the goals of the project when needed. For a small group these kinds of quick changes are easier to carry out than for a bigger group, so another strength was our ability to be agile.

It is hard to come up with things we would have done differently if given the opportunity, but one would probably be to have a backup plan for getting access to the Visualization studio. We lost valuable testing time with the Oculus Rift because of the two weeks without access to it during the winter holiday. If we had had a functioning program earlier than the last couple of days before submitting the report, we would also have had more time to evaluate the program and test it on more people. Another thing, in hindsight, would have been to spend less time on research and more time on brainstorming and coming up with ideas in the beginning. We lost a lot of time researching and trying technologies we did not use in the end. We used up a couple of days trying out the Kinect, researching its functionality and how to reconstruct real environments. We did learn a lot in this area, but because of the project's time limit, it took time away from the rest of the work. We did spend time brainstorming, but after we had come up with something of an idea we pursued it instead of coming up with more ideas to use as backups. We went from re-creating a shopping mall, to re-creating the Visualization studio, to re-creating a group member's house, and then decided to skip constructing a virtual environment from a real one. Instead of going from one idea to the next, we could have worked on several ideas in parallel so that we would not waste as much time on each one.

When researching environments and ideas for the Oculus Rift we found that a lot is under development. The Oculus Rift will probably become commercially available this year, and many companies want to have products usable with it. We noticed that our idea to recreate a real environment from a house and then use it for house viewings or re-furnishing is not unique; products and programs like these are under development. It was fun to work on and develop something so new, and to discuss and come up with ideas for the potential obstacles these environments and technologies struggle with.

Probably the biggest thing we will take with us from this project is the insight into many different technologies and platforms. We had barely worked with any of the tools and technologies before, so we learnt a lot about them. If we were to continue with our work, we would try to integrate some sort of reconstruction technique for real environments to make our program more up to date and usable in scenarios like house viewings and re-furnishing. We would also add functionality and more natural voice commands, and research a better voice recognition service. Most, if not all, of our weaknesses could be addressed if we had more time to work on them, but some require a lot more time; working with reconstruction of real environments, for example, would be a project in itself.
This being a multimodality course, we focused as much as possible on the multimodal aspects rather than on making our program usable as a commercial product or on getting good data from a human perception point of view.

6 CONCLUSION

We are certain that virtual environments will become increasingly popular in the future. In order to increase usability, developers need to focus on multimodality. We have tried to do so by letting the user decide between voice recognition and a keyboard, both of which can accomplish the same tasks in the virtual environment. One important detail to keep in mind is that voice recognition (or at least the Microsoft Speech API) is not very precise at times and can return a lot of false positives. These limitations led to us only having four voice commands. This is something we could have worked on if we had had more time, perhaps by changing the voice recognition API.

We are all very happy with the result. It was fun developing and testing the virtual environment for the Oculus Rift, and we all agree that our product could be useful for a lot of people if we continued working on it.

REFERENCES

[1] Philip R. Cohen, Sharon L. Oviatt. The role of voice input for human-machine interaction. 1995. http://www.pnas.org/content/92/22/9921.short

[2] Ben Shneiderman. The limits of speech recognition. Communications of the ACM, Vol. 43, No. 9, 2000. http://www.cs.umd.edu/~ben/p63-shneidermansept2000cacmf.pdf

[3] Mary K. Young, Graham B. Gaylor, Scott M. Andrus, Bobby Bodenheimer. A Comparison of Two Cost-Differentiated Virtual Reality Systems for Perception and Action Tasks. 2014. http://dl.acm.org/citation.cfm?id=2628261

[4] Simon Davis, Keith Nesbitt, Eugene Nalivaiko. A Systematic Review of Cybersickness. 2014. http://dl.acm.org/citation.cfm?id=2677780

[5] Peter Willemsen, Amy A. Gooch. Perceived Egocentric Distance in Real, Image-based and Traditional Virtual Environments. 2002. http://www.duluth.umn.edu/~willemsn/pubs/vr2002_willemsen_distance.pdf

[6] Kourosh Khoshelham, Sander Oude Elberink. Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications. 2012. http://www.mdpi.com/1424-8220/12/2/1437/htm

[7] Video presentation of the program. https://vimeo.com/117303155