Rendering Challenges of VR


Lecture 27: Rendering Challenges of VR Computer Graphics CMU 15-462/15-662, Fall 2015

Virtual reality (VR) vs. augmented reality (AR)
VR = virtual reality: the user is completely immersed in a virtual world (sees only light emitted by the display).
AR = augmented reality: the display is an overlay that augments the user's normal view of the real world (e.g., the Terminator).
Image credit: Terminator 2 (naturally)

VR headsets: Oculus Rift (Crescent Bay prototype), Sony Morpheus, HTC Vive, Google Cardboard

AR headset: Microsoft HoloLens

Today: rendering challenges of VR
Since you are now all experts in rendering, today we will talk about the unique challenges of rendering in the context of modern VR headsets.
VR presents many other difficult technical challenges:
- display technologies
- accurate tracking of face, head, and body position
- haptics (simulation of touch)
- sound synthesis
- user interface challenges (inability of the user to walk around the environment, how to manipulate objects in the virtual world)
- content creation challenges
- and on and on

VR gaming: Bullet Train demo (Epic)

VR video: Jaunt VR (Paul McCartney concert)

VR video

VR teleconference / video chat http://vrchat.com/

Oculus Rift DK2. The Rift DK2 is the best documented of the modern prototypes, so I'll use it for discussion here.

Oculus Rift DK2 headset Image credit: ifixit.com

Oculus Rift DK2 display: 5.7", 1920 x 1080 OLED, 75 Hz refresh rate (same display as the Galaxy Note 3). Image credit: ifixit.com
Note: the upcoming 2016 Rift consumer product features two 1080 x 1200 displays at 90 Hz.

Role of optics
1. Create a wide field of view
2. Place the focal plane several meters away from the eye (close to infinity)
Note: parallel lines reaching the eye converge to a single point on the display (the eye accommodates to a plane near infinity).
[Diagram: eye, lens, OLED display] Lens diagram from the Open Source VR Project (OSVR), http://www.osvr.org/ (not the lens system from the Oculus Rift)

Accommodation and vergence
Accommodation: changing the optical power of the eye to focus at different distances. [Figure: eye accommodated at a far distance vs. at a near distance]
Vergence: rotation of the eyes to ensure the projection of an object falls in the center of each retina.

Accommodation-vergence conflict
Given the design of current VR displays, consider what happens when an object is up close to the eye in the virtual scene:
- The eyes must remain accommodated to near infinity (otherwise the image on the screen won't be in focus).
- But the eyes must converge in an attempt to fuse the stereoscopic images of the up-close object.
- The brain receives conflicting depth cues (discomfort, fatigue, nausea).
This problem stems from the nature of the display design. If you could make a display that emits the light field that would be produced by a virtual scene, then you could avoid the accommodation-vergence conflict.
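
A quick back-of-envelope calculation makes the conflict concrete. This sketch assumes a typical interpupillary distance of 64 mm (an illustrative value, not from the slides):

```python
import math

# Back-of-envelope illustration of the accommodation-vergence conflict.
# Assumes a typical interpupillary distance (IPD) of 64 mm; the headset's
# optics place the focal plane near optical infinity (~0 diopters).

IPD_M = 0.064  # interpupillary distance in meters (assumed typical value)

def vergence_angle_deg(object_distance_m):
    """Angle between the two eyes' lines of sight when fixating an object."""
    return math.degrees(2 * math.atan((IPD_M / 2) / object_distance_m))

def accommodation_diopters(focal_distance_m):
    """Focusing demand on the eye's lens, in diopters (1/m)."""
    return 1.0 / focal_distance_m

# Virtual object 0.5 m away: the eyes must converge strongly...
print(f"vergence at 0.5 m: {vergence_angle_deg(0.5):.1f} deg")  # ~7.3 deg
# ...but the display keeps accommodation pinned near infinity (~0 D),
# while a real object at 0.5 m would demand 2 D of accommodation.
print(f"accommodation demand for a real 0.5 m object: "
      f"{accommodation_diopters(0.5):.1f} D vs ~0 D on the headset")
```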

Aside: near-eye light field displays. Recreate the light field in front of the eye.

Oculus DK2 IR camera and IR LEDs
Headset contains:
- 40 IR LEDs
- gyro + accelerometer (1000 Hz)
- 60 Hz IR camera
Image credit: ifixit.com

Name of the game, part 1: low latency
The goal of a VR graphics system is to achieve presence: tricking the brain into thinking what it is seeing is real.
Achieving presence requires an exceptionally low-latency system: what you see must change when you move your head!
End-to-end latency: the time from moving your head to the time new photons hit your eyes:
- Measure the user's head movement
- Update the scene/camera position
- Render a new image
- Transfer the image to the headset, then to the display in the headset
- Actually emit light from the display (photons hit the user's eyes)
Latency goal of VR: 10-25 ms. This requires exceptionally low-latency head tracking, and exceptionally low-latency rendering and display.
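
As a rough illustration, here is a schematic of where that end-to-end latency accrues in a VR frame loop. All object and method names are hypothetical placeholders, not a real headset API:

```python
# Schematic VR frame loop illustrating where end-to-end latency accrues.
# All functions here are hypothetical placeholders, not a real headset API.

def vr_frame_loop(tracker, scene, renderer, display):
    while True:
        pose = tracker.sample_head_pose()     # low-latency head tracking
        t_emit = display.next_photon_time()   # when photons will appear
        pose = tracker.predict(pose, t_emit)  # predict pose at photon time
        image = renderer.render(scene, pose)  # render for the predicted pose
        display.submit(image)                 # scan-out + persistence delay
```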

Thought experiment: effect of latency
Consider a 1,000 x 1,000 display spanning a 100° field of view: 10 pixels per degree.
Assume:
- You move your head 90° in 1 second (only a modest speed)
- End-to-end latency of the system is 50 ms (1/20 sec)
Therefore: displayed pixels are off by 4.5°, or about 45 pixels, from where they would be in an ideal system with 0 latency.
Example credit: Michael Abrash
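
The slide's arithmetic, as a checkable snippet:

```python
# Reproduces the slide's arithmetic: pixel error caused by end-to-end latency.

display_px = 1000     # pixels across the field of view
fov_deg    = 100      # field of view spanned by those pixels
head_deg_s = 90       # head rotation speed (degrees/second)
latency_s  = 0.050    # end-to-end latency (50 ms)

px_per_deg = display_px / fov_deg    # 10 pixels per degree
error_deg  = head_deg_s * latency_s  # 4.5 degrees of stale rotation
error_px   = error_deg * px_per_deg  # 45 pixels of error
print(f"{error_deg} deg -> {error_px} pixels off")  # 4.5 deg -> 45.0 pixels
```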

Name of the game, part 2: high resolution
Human: ~160° field of view per eye (~200° overall). (Note: this does not account for the eye's ability to rotate in its socket.)
Future retina VR display: 57 ppd covering 200° = 11K x 11K display per eye = 220 MPixel.
For comparison, the iPhone 6's 4.7-inch retina display is 1.3 MPixel at 326 ppi.
This strongly suggests the need for eye tracking and foveated rendering (the eye can only perceive detail in a ~5° region about the gaze point).
Eyes designed by SuperAtic LABS from thenounproject.com
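
The display sizing arithmetic, as a checkable snippet (the final MPixel count is a naive square estimate; see the comment):

```python
# Reproduces the "retina VR display" sizing from the slide.

ppd, fov_deg = 57, 200   # pixels per degree, field of view to cover
px = ppd * fov_deg       # 11,400 pixels across, i.e., the ~11K figure
print(f"~{px} x {px} per eye")                 # ~11K x 11K
print(f"~{px * px / 1e6:.0f} MPixel per eye")  # ~130 MPixel by this naive
# square count; the slide's 220 MPixel figure presumably includes extra
# margin (e.g., both eyes or over-rendering for the lens warp).
```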

Foveated rendering
Idea: track the user's gaze, then render with increasingly lower resolution farther away from the gaze point. Three images (high-res, medium-res, low-res) are blended into one for display.
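
A minimal sketch of the compositing step, assuming the three layers have already been rendered and upsampled to display resolution. The 5°/15° eccentricity thresholds are illustrative assumptions, and a real system would blend smoothly across the boundaries rather than switching hard:

```python
import numpy as np

def foveated_composite(hi, med, lo, gaze_xy, px_per_deg):
    """hi/med/lo: (H, W, 3) layers already upsampled to display resolution."""
    h, w, _ = hi.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # angular distance (eccentricity) of each pixel from the gaze point
    ecc = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / px_per_deg
    return np.where(ecc[..., None] < 5.0, hi,         # fovea: full detail
           np.where(ecc[..., None] < 15.0, med, lo))  # periphery: cheaper

# Usage: the three layers would come from three render passes at full,
# half, and quarter resolution.
hi = np.zeros((1080, 1200, 3)); med = hi.copy(); lo = hi.copy()
frame = foveated_composite(hi, med, lo, gaze_xy=(600, 540), px_per_deg=10)
```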

Requirement: wide field of view
[Figure: view of a checkerboard through the Oculus Rift lens, ~100°]
The lens introduces distortion:
- Pincushion distortion
- Chromatic aberration (different wavelengths of light refract by different amounts)
Icon credit: Eyes designed by SuperAtic LABS from thenounproject.com. Image credit: Cass Everitt

Rendered output must compensate for the distortion of the lens in front of the display
Step 1: render the scene using the traditional graphics pipeline at full resolution for each eye.
Step 2: warp the images and composite them into a frame so the rendering is viewed correctly after lens distortion. (A unique distortion can be applied to R, G, and B to approximate a correction for chromatic aberration.)
Image credit: Oculus VR developer guide
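
A minimal sketch of the pre-warp idea follows. The radial polynomial coefficients k1, k2 and the per-channel chromatic scales are made-up placeholders; real values come from calibrating the specific lens, and a production warp would use filtered (not nearest-neighbor) resampling:

```python
import numpy as np

def barrel_warp(img, k1=0.22, k2=0.24, chroma=(0.996, 1.0, 1.004)):
    """Pre-distort a rendered eye image so it looks correct through the lens."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - w / 2) / (w / 2)          # normalized coords, lens axis at 0
    y = (ys - h / 2) / (h / 2)
    r2 = x * x + y * y
    out = np.zeros_like(img)
    for c, scale in enumerate(chroma):  # slightly different radial scale per
        f = scale * (1 + k1 * r2 + k2 * r2 * r2)  # channel (chromatic fix)
        sx = np.clip((x * f + 1) * w / 2, 0, w - 1).astype(int)
        sy = np.clip((y * f + 1) * h / 2, 0, h - 1).astype(int)
        out[..., c] = img[sy, sx, c]    # nearest-neighbor resample
    return out
```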

Challenge: rendering via planar projection
Recall: rasterization-based graphics is based on perspective projection to a plane.
- Reasonable for a modest FOV, but distorts the image at high FOV
- Recall: VR rendering spans a wide FOV
Pixels span a larger angle in the center of the image (lowest angular resolution in the center).
Future investigations may consider: curved displays, ray casting to achieve uniform angular resolution, or rendering with a piecewise-linear projection plane (a different plane per tile of the screen).
Image credit: Cass Everitt
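
A small calculation showing the non-uniform angular resolution of a planar projection (the FOV and pixel counts are illustrative):

```python
import math

# Angle subtended by each pixel under planar perspective projection.
# For a plane at focal distance f, pixel n spans atan((n+1)/f) - atan(n/f),
# which is largest at the image center: angular resolution is worst there.

fov_deg, half_px = 160, 500  # wide FOV, 1000-pixel-wide display
f = half_px / math.tan(math.radians(fov_deg / 2))  # focal length in pixels

def pixel_angle_deg(n):
    return math.degrees(math.atan((n + 1) / f) - math.atan(n / f))

print(f"center pixel: {pixel_angle_deg(0):.3f} deg")           # ~0.65 deg
print(f"edge pixel:   {pixel_angle_deg(half_px - 1):.3f} deg") # ~0.02 deg
```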

Consider object position relative to eye
[Spacetime diagrams: X = position of object relative to eye, plotted against time]
Case 1: object stationary relative to eye (eye still and red object still, OR red object moving left-to-right and eye moving to track it, OR red object stationary in the world but head moving and eye moving to track it).
Case 2: object moving relative to eye (red object moving from left to right but eye stationary, i.e., focused on a different stationary point in the world).
NOTE: these graphs plot object position RELATIVE TO THE EYE. Rapid head motion with the eyes tracking a moving object is a form of case 1!
Spacetime diagrams adapted from presentations by Michael Abrash. Eyes designed by SuperAtic LABS from thenounproject.com

Effect of latency: judder
[Spacetime diagrams: continuous motion vs. light emitted by a display updated each frame (frames 0-3)]
Case 2: object moving from left to right, eye stationary (eye stationary with respect to the display): the discrete frames approximate the continuous motion reasonably well.
Case 1: object moving from left to right, eye moving continuously to track the object (eye moving relative to the display!): judder.
Explanation: since the eye is moving, the object's position should be nearly constant relative to the eye (as it should be; the eye is tracking it). But due to the discrete frame rate, the object falls behind the eye, causing a smearing/strobing effect (choppy motion blur).
Recall from an earlier slide: 90° of motion with 50 ms latency results in a 4.5° smear.
Spacetime diagrams adapted from presentations by Michael Abrash

Reducing judder: increase frame rate
[Spacetime diagrams: continuous ground truth vs. display updated at the base frame rate (frames 0-3) vs. at double the frame rate (frames 0-7)]
Case 1: continuous ground truth: red object moving left-to-right with the eye moving to track it, OR red object stationary but the head moving and the eye moving to track it.
A higher frame rate results in a closer approximation to the ground truth.
Spacetime diagrams adapted from presentations by Michael Abrash

Reducing judder: low-persistence display
[Spacetime diagrams: continuous ground truth vs. full-persistence display vs. low-persistence display (frames 0-3)]
Case 1: continuous ground truth: red object moving left-to-right with the eye moving to track it, OR red object stationary but the head moving and the eye moving to track it.
Full-persistence display: pixels emit light for the entire frame.
Low-persistence display: pixels emit light for a small fraction of the frame.
Oculus DK2 OLED low-persistence display: 75 Hz frame rate (~13 ms per frame), pixel persistence of 2-3 ms.
Spacetime diagrams adapted from presentations by Michael Abrash

Artifacts due to rolling OLED backlight
Image rendered based on scene state at time t0; image sent to display, ready for output at time t0 + Δt.
A rolling-backlight OLED display lights up rows of pixels in sequence. Let r be the amount of time to scan out a row:
- Row 0 photons hit the eye at t0 + Δt
- Row 1 photons hit the eye at t0 + Δt + r
- Row 2 photons hit the eye at t0 + Δt + 2r
Implication: photons emitted from the bottom rows of the display are more stale than photons from the top!
Consider the eye moving horizontally relative to the display (e.g., due to head movement while tracking a square object that is stationary in the world). Result: perceived shear! Recall rolling-shutter effects on modern digital cameras.
[Figure axes: X = position of object relative to eye, Y = display pixel row]

Compensating for rolling backlight
Perform a post-process shear on the rendered image:
- Similar to the previously discussed barrel-distortion and chromatic warps
- Predict head motion; assume fixation on a static object in the scene
- Only compensates for shear due to head motion, not object motion
Alternatively, render each row of the image at a different time (the predicted time its photons will hit the eye). This suggests exploring rendering algorithms that are more amenable to fine-grained temporal sampling, e.g., a ray caster (each row of camera rays samples the scene at a different time).
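
A minimal sketch of the per-row shear compensation, under stated assumptions (constant predicted head velocity and fixation on a static object; the numbers are illustrative):

```python
import numpy as np

# Sketch of the per-row shear: each display row emits photons slightly later,
# so counter-shift it by the head rotation predicted over that extra time.

def rolling_shear(img, head_deg_s, row_time_s, px_per_deg):
    """Shift row k horizontally by the motion accrued over k scan-out rows."""
    h, w, _ = img.shape
    out = np.empty_like(img)
    for k in range(h):
        stale_s = k * row_time_s                  # extra staleness of row k
        shift = int(round(head_deg_s * stale_s * px_per_deg))
        out[k] = np.roll(img[k], -shift, axis=0)  # counter-shift the row
    return out

# e.g., a 13 ms frame scanned over 1080 rows, head turning at 90 deg/s:
img = np.zeros((1080, 1920, 3), dtype=np.uint8)
fixed = rolling_shear(img, head_deg_s=90, row_time_s=0.013 / 1080,
                      px_per_deg=10)
```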

Increasing frame rate using re-projection
Goal: maintain as high a frame rate as possible under challenging rendering conditions:
- Stereo rendering: both left- and right-eye views
- High-resolution outputs
- Must render extra pixels due to the barrel-distortion warp
- Many rendering hacks (bump mapping, billboards, etc.) are less effective in VR, so rendering must use more expensive techniques
Researchers are experimenting with reprojection-based approaches to improve frame rate (e.g., Oculus "Time Warp"):
- Render using conventional techniques at 30 fps, then reproject (warp) the image to synthesize new frames at 75 fps based on predicted head movement
- Potential for image-processing hardware on future VR headsets to perform high-frame-rate reprojection based on gyro/accelerometer data
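
A sketch of rotation-only reprojection, the core idea behind time-warp-style techniques: for a pure head rotation R, the old frame can be re-aligned with the homography H = K R K⁻¹. The intrinsics and yaw angle here are illustrative assumptions, and a real implementation would resample with filtering:

```python
import numpy as np

def reproject(img, yaw_rad, fx, fy, cx, cy):
    """Re-align a rendered frame to a newer head pose (pure yaw rotation)."""
    K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1.0]])
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    R = np.array([[c, 0, s], [0, 1.0, 0], [-s, 0, c]])  # yaw about vertical
    H = K @ R @ np.linalg.inv(K)    # pixel mapping valid for pure rotation
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    p = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ np.linalg.inv(H).T
    sx = np.clip(p[..., 0] / p[..., 2], 0, w - 1).astype(int)
    sy = np.clip(p[..., 1] / p[..., 2], 0, h - 1).astype(int)
    return img[sy, sx]              # nearest-neighbor resample of old frame

# Render full frames at 30 fps; synthesize in-between frames at 75 fps by
# reprojecting the last rendered frame with the head rotation predicted
# since it was rendered.
```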

Near-future VR system components
- Low-latency image processing for subject tracking
- High-resolution, high-frame-rate, wide-field-of-view display
- Massive parallel computation for high-resolution rendering
- In-headset motion/acceleration sensors + eye tracker
- Exceptionally high-bandwidth connection between renderer and display: e.g., 4K x 4K per eye at 90 fps!
- On-headset graphics processor for sensor processing and reprojection
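
To see why the renderer-to-display link is demanding, here is a back-of-envelope bandwidth estimate for the slide's 4K x 4K per-eye, 90 fps figure, assuming uncompressed 24-bit RGB (an assumption; real links would compress):

```python
# Back-of-envelope link bandwidth for "4K x 4K per eye at 90 fps",
# assuming uncompressed 24-bit RGB.

w = h = 4096; eyes = 2; fps = 90; bytes_per_px = 3
gb_per_s = w * h * eyes * fps * bytes_per_px / 1e9
print(f"~{gb_per_s:.0f} GB/s uncompressed")  # ~9 GB/s
```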

Interest in acquiring VR content
Google's Jump VR video rig: 16 4K GoPro cameras. Consider the challenge of registering/3D-aligning the video streams (on site), then broadcasting the encoded video stream across the country to 50 million viewers.
Lytro Immerge (leveraging light field camera technology to acquire VR content)

Summary: virtual reality presents many new challenges for graphics systems developers
Major goal: minimize latency from head movement to photons.
- Requires low-latency tracking (not discussed today): a combination of external-camera image processing (vision) and high-rate headset sensors, with heavy use of prediction
- Requires high-performance rendering: high-resolution, wide field-of-view output at a high frame rate
- Rendering must compensate for constraints of the display system: optical distortion (geometric, chromatic) and temporal offsets in rows of pixels
There is significant research interest in display technologies that are alternatives to flat screens with lenses in front of them.

Course wrap up

Student project demo reel! yyuan2 mplamann jmrichar

Student project demo reel! kcma paluri hongyul yyuan2 aperley jianfeil

Student project demo reel! chunyenc hongyul jianfeil sohils

Other cool graphics-related courses 15-869: Discrete Differential Geometry (Keenan Crane) 15-463: Computational Photography 15-467: Simulation Methods for Animation and Digital Fabrication (Stelian Coros) 15-465: Animation Art and Technology (Hodgins/Duesing) 15-661: Interaction and Expression using the Pausch Bridge Lighting 15-418/618: Parallel Computer Architecture and Programming (Kayvon Fatahalian)

TAs and independent study!
15-462 next semester is looking for TAs! Email us if interested, and we'll direct you to Prof. Pollard.
Students that did well in 462 have a great foundation for moving on to independent study or research in graphics. Come talk to Keenan and me!

Beyond assignments and exams
Come talk to Keenan or me (or other professors) about participating in research! Consider a senior thesis! Pitch a seed idea to Project Olympus. Get involved with organizations like Hackathon or ScottyLabs.

Thanks for being a great class! See you on Monday! (Study hard, but don't stress too much.) Credit: Inside Out (Pixar)