Social Viewing in Cinematic Virtual Reality: Challenges and Opportunities


Sylvia Rothe 1, Mario Montagud 2, Christian Mai 1, Daniel Buschek 1 and Heinrich Hußmann 1
1 Ludwig Maximilian University of Munich, Germany
2 University of Valencia & i2cat Foundation, Spain

Abstract. Cinematic Virtual Reality (CVR) has been growing in popularity over the last years. However, viewers can feel isolated when watching 360° movies with a Head-Mounted Display (HMD). Since watching movies is a social experience for most people, we investigate whether HMDs are suitable for enabling shared CVR experiences. In this context, even if viewers watch the movie simultaneously, they do not automatically see the same field of view, since each of them can freely choose the viewing direction. Based on the literature and experiences from past user studies, we identify seven challenges. To address these challenges, we present and discuss design ideas for a CVR social movie player and highlight directions for future work.

Keywords: Cinematic Virtual Reality, 360° video, social viewing

1 Introduction

360° movies are attracting widespread interest in a number of application areas, such as education, entertainment and news. Users benefit greatly from the possibility to freely look around and explore the presented scenes, either to entertain themselves or to gain a better understanding of the movie content. In Cinematic Virtual Reality (CVR), viewers watch 360° movies via Virtual Reality (VR) devices. Using an HMD, the viewer can feel immersed in the scenes and freely choose the viewing direction. In contrast to traditional cinema or TV, each CVR viewer has their own display and is isolated from the surrounding environment when watching a movie via HMD. The drawback of these systems is the associated visual and mental separation from other people, i.e. social isolation. Natural interaction, such as pointing at interesting objects in the video or staying aware of what the others are focusing on, is impeded by the HMD. In this work, we identify key challenges and related design aspects that are crucial for effectively supporting social awareness and interaction when watching movies together remotely. We provide an overview of the current state of research and identify seven open challenges. While further challenges may exist, these seven are important for a first design approach. For each of these challenges, we propose potential approaches and directions for future work.

2 Challenges and Approaches for Social Viewing in CVR

2.1 Challenge 1: Viewport Sharing

One of the main problems for social viewing via HMDs is that the users' FoVs differ and that awareness of the other's viewport is missing. Being unaware of where co-watchers are looking within the 360° scene can lead to difficulties of understanding. A first approach is to frame the viewport of the co-watcher [1]; the viewport is then visible whenever the two viewports overlap (Fig. 1 left). To find a co-watcher's viewport that is off-screen, an arrow can be used [1].

Fig. 1. Approaches to show the viewport of a co-watcher. Left: viewport shown by a frame; Right: viewport shown by a display.

Another approach is the picture-in-picture (PiP) method [2], where a small screen shows the co-watcher's FoV (Fig. 1 right). This has the advantage of visually showing the other's viewport regardless of the own viewing direction, but the disadvantage of covering a larger area of the display. Therefore, the possibility of switching it off should be explored. The PiP screen can be placed on the side of the display that is closer to the target.

2.2 Communication

A key issue in social viewing is communication. In CVR, a viewer does not know where the others are looking. Why is he or she laughing? How can a viewer point out details in the movie which are not necessarily in the FoV of the other viewers? Voice chat is one way to communicate in social viewing. Although voice chat increases social awareness [3], it can reduce the viewing experience because of distraction. Chatting could be replaced or extended by a simple sign language, realized by gestures or controllers, to show emoticons to the co-watcher. To inform the co-watcher about PoIs outside the screen, we plan to transfer methods used by glider pilots for collision avoidance. An example is shown in Fig. 2 left: the slide bar at the bottom shows whether the PoI is to the right or to the left, and the slider on the right shows whether the PoI is above or below the own viewing direction. Another example can be seen in Fig. 2 right, where the direction is shown by a circle and the height by a slider. In this way, participants will be able to distinguish between viewport awareness and signalling a PoI.
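The viewport and PoI indicators described above can be driven by simple angular computations. The following TypeScript sketch illustrates one possible way to decide whether a co-watcher's viewport overlaps the own FoV (so a frame can be drawn, Fig. 1 left) or lies off-screen (so an arrow or the slide bars of Fig. 2 should point towards it). It assumes both viewing directions are available as yaw/pitch angles in degrees; all type and function names are illustrative, not part of any existing player.

// Sketch (illustrative): visualizing a co-watcher's viewport / PoI from yaw-pitch angles.
interface ViewDirection {
  yaw: number;   // horizontal angle in degrees, 0-360, 0 = front of the 360° video
  pitch: number; // vertical angle in degrees, -90 (down) to +90 (up)
}

// Wrap an angular difference into [-180, 180] so that directions straddling
// the 0/360 boundary are still reported as "slightly left" or "slightly right".
function angularOffset(from: number, to: number): number {
  return ((to - from + 540) % 360) - 180;
}

// If both viewports have the same angular size, they overlap as soon as their
// centres are less than one FoV width/height apart: draw a frame (Fig. 1 left);
// otherwise show an arrow or the glider-style indicators (Fig. 2).
function viewportsOverlap(own: ViewDirection, other: ViewDirection,
                          hFov = 90, vFov = 90): boolean {
  const dYaw = Math.abs(angularOffset(own.yaw, other.yaw));
  const dPitch = Math.abs(other.pitch - own.pitch);
  return dYaw < hFov && dPitch < vFov;
}

// Normalized positions for the two sliders of Fig. 2 left:
// x in [-1, 1] = PoI left/right (±1 = directly behind), y in [-1, 1] = below/above.
function sliderPositions(own: ViewDirection, poi: ViewDirection): { x: number; y: number } {
  const clamp = (v: number) => Math.max(-1, Math.min(1, v));
  return {
    x: angularOffset(own.yaw, poi.yaw) / 180,
    y: clamp((poi.pitch - own.pitch) / 90),
  };
}

The same offset computation could also place the off-screen arrow of [1] or choose the display side on which the PiP window is shown.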

Fig. 2. Collision avoidance methods of gliders transferred to indicate the PoIs of the co-watcher. Left: the PoI is on the left side behind the viewer, below the viewing direction; Right: the same PoI position, with the direction shown by a circle and the height by a slider.

2.3 Social Awareness

Another challenge for social viewing via HMDs is to evoke the feeling of being together. When watching a movie together in the cinema or on TV, the companion is perceived in the periphery of one's view. Even if their feelings cannot be heard, they can be recognized through posture or gestures. Voice chat and visualization of the co-watcher's viewport enable awareness of the other person. Another way to increase social awareness is to include video chat windows via PiP. Figure 3 shows two examples. In the left one, the front view of the co-watcher is displayed in the middle of the screen, even if the viewer turns their head. The right one is very similar to the situation of viewing a movie together in the cinema or on TV: the PiP is placed at the side of the viewer and shows the co-watcher from the side.

Fig. 3. Left: PiP of the co-watcher in front of the viewer; the co-watcher is always in the FoV. Right: PiP of the co-watcher at the side; the co-watcher can be seen by turning the head to the right.

2.4 Synchronization/Navigation

Synchronization of the media playout across the involved devices is a key requirement in social viewing [4, 5]. By providing it, all distributed users perceive the same events at the same time. This involves designing and adopting appropriate communication and control protocols, monitoring algorithms, reference selection strategies and adjustment techniques. Likewise, media synchronization must be preserved after navigation control commands (e.g. play, pause, seek) are issued in a shared session.
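As an illustration of such adjustment techniques, the following TypeScript sketch keeps a follower's playout aligned with a reference client (e.g. the guide introduced in Sect. 2.6). The message format, the thresholds and the use of an HTMLVideoElement are assumptions made for this example, not a prescribed protocol.

// Sketch (illustrative): adjusting local playout to a periodically broadcast reference position.
interface SyncMessage {
  mediaTime: number; // reference playback position in seconds
  sentAt: number;    // wall-clock send time in seconds, assumed roughly synchronized (e.g. via NTP)
}

const SEEK_THRESHOLD = 1.0;  // seconds: beyond this, jump to the reference position
const RATE_THRESHOLD = 0.08; // seconds: beyond this, slightly speed up / slow down

function adjustPlayout(video: HTMLVideoElement, msg: SyncMessage, now: number): void {
  // Where the reference should be "now", compensating for message delay.
  const referenceTime = msg.mediaTime + (now - msg.sentAt);
  const drift = video.currentTime - referenceTime;

  if (Math.abs(drift) > SEEK_THRESHOLD) {
    video.currentTime = referenceTime;            // hard correction: seek
    video.playbackRate = 1.0;
  } else if (Math.abs(drift) > RATE_THRESHOLD) {
    video.playbackRate = drift > 0 ? 0.95 : 1.05; // soft correction: adapt rate
  } else {
    video.playbackRate = 1.0;                     // in sync: play normally
  }
}

A hard seek is only applied for large drift, since frequent seeks are more disruptive to the shared experience than a brief, barely noticeable change of playback rate.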

2.5 Input Device

For navigation as well as for communication, non-disturbing input methods need to be adopted. We think that graphical elements on the display or control via speech disturb the viewing experience. One approach is to use gestures, since gestures are a natural method of interaction [6]. Other approaches are head/view-based and controller-based methods. In [7], some of these methods were compared; head-based and controller-based methods achieved the best results, since there were some problems with gesture recognition. For our first test we used controllers, to be sure that actions were triggered on purpose.

2.6 Role Concept

To define the relation between the viewers, two role concepts are conceivable: non-guided and guided. The non-guided approach assigns the same role and permissions to all viewers. This can lead to conflicts in case of highly active communication. Likewise, the display can become overloaded when information about all users is shown. The guided approach distinguishes two roles: the guide and the follower. The guide is taken as the reference for communication and synchronization and is the only participant with navigation functionality enabled. To allow more interactive and flexible sessions, the roles of guide and follower can be exchanged dynamically. A slave mode, where the follower is synchronized in time and viewing direction to the guide, causes simulator sickness [8]. However, it can be helpful in asymmetric environments with non-VR collaborators.

2.7 Asymmetric Environments

Social viewing should also be possible for participants using different devices. Gugenheimer et al. [9] implemented ShareVR, which enables users in the real world to interact with users in a virtual world. They studied asymmetry in visualization and interaction. A novel social viewing concept is considered in [10], consisting of a multi-screen scenario in which different users play different roles: observer (TV), assistant (tablet) and inspector (HMD). The inspector's viewport is streamed to the TV so that the remaining users stay aware of the 360° scenes, thus overcoming isolation and stimulating interaction. This and the guidelines in [9] will be taken into account in our work.

3 Conclusion and Future Work

In this work, we have identified and addressed key challenges for enabling social viewing in CVR. For a shared experience, viewers need new methods of communication and viewport awareness. The field of social viewing in CVR is relatively new and requires more research and knowledge about viewer behaviour. The described approaches are one step towards exploring this field. Based on this work, we will compare the approaches with respect to these and other relevant aspects. The aim is to define a design space for social CVR.
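As a starting point for this comparison, the guided role concept of Sect. 2.6 can be made concrete with a minimal TypeScript sketch; its names and structure are assumptions for illustration, not an existing implementation.

// Sketch (illustrative): guide/follower roles with dynamic role exchange.
type Role = "guide" | "follower";

interface Participant {
  id: string;
  role: Role;
  slaveMode: boolean; // follower optionally locked to the guide's viewing direction (Sect. 2.6)
}

class SharedSession {
  constructor(private participants: Participant[]) {}

  // Only the guide may issue navigation commands (play, pause, seek; cf. Sect. 2.4).
  canNavigate(p: Participant): boolean {
    return p.role === "guide";
  }

  // Dynamically hand the guide role over to another participant.
  transferGuide(toId: string): void {
    for (const p of this.participants) {
      p.role = p.id === toId ? "guide" : "follower";
    }
  }
}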

References

1. Nguyen C, DiVerdi S, Hertzmann A, Liu F (2017) CollaVR. In: Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology - UIST '17. ACM Press, New York, NY, USA, pp 267-277
2. Lin Y-T, Liao Y-C, Teng S-Y, et al (2017) Outside-In. In: Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology - UIST '17. ACM Press, New York, NY, USA, pp 255-265
3. Geerts D, Vaishnavi I, Mekuria R, et al (2011) Are we in sync?: Synchronization requirements for watching online video together. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp 311-314
4. Montagud M, Boronat F, Stokking H, van Brandenburg R (2012) Inter-destination multimedia synchronization: schemes, use cases and standardization. Multimed Syst 18:459-482. doi: 10.1007/s00530-012-0278-9
5. Boronat F, Montagud M, Marfil D, Luzon C (2018) Hybrid Broadcast/Broadband TV Services and Media Synchronization: Demands, Preferences and Expectations of Spanish Consumers. IEEE Trans Broadcast 64:52-69. doi: 10.1109/TBC.2017.2737819
6. O'Hagan RG, Zelinsky A, Rougeaux S (2002) Visual gesture interfaces for virtual environments. Interact Comput 14:231-250. doi: 10.1016/S0953-5438(01)00050-9
7. Pakkanen T, Hakulinen J, Jokela T, et al (2017) Interaction with WebVR 360° video player: Comparing three interaction paradigms. In: 2017 IEEE Virtual Reality (VR). IEEE, pp 279-280
8. Nguyen C, DiVerdi S, Hertzmann A, Liu F (2017) CollaVR: Collaborative In-Headset Review for VR Video. In: Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, pp 267-277
9. Gugenheimer J, Stemasov E, Frommel J, Rukzio E (2017) ShareVR. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI '17. ACM Press, New York, NY, USA, pp 4021-4033
10. Núñez A, Montagud M, Fraile I, Gómez D, Fernández S: ImmersiaTV: an end-to-end toolset to enable customizable and immersive multi-screen TV experiences. In: Workshop on Virtual Reality, co-located with ACM TVX 2018, Seoul (South Korea)