Visual Imaging in the Electronic Age: An Interdisciplinary Course Bridging Art, Architecture, Computer Science, and Engineering (Offered in Fall 2016)


Candice Zhao, a student in the ART 2907 Fall 2015 course, tries an Oculus headset. A 2-D version of the immersive 3-D scene is shown on the screen behind her.

The following is a brief description of some of the topics covered in the course, Visual Imaging in the Electronic Age. The material is presented because I have received so many questions about the content of this novel multidisciplinary course. I hope the short explanations help. -Don Greenberg

Professor Greenberg discussing the technology used for generating virtual reality images. In the background, a student uses the Oculus DK2 headset; the images for each eye are shown on the large interactive display.

Each year, Professor Don Greenberg's interdisciplinary course, Visual Imaging in the Electronic Age, is updated and modified to reflect the radical impacts of an exponentially changing digital world. The incorporation of new and emerging technologies is one aspect of the course that explains both its popularity and longevity. For instance, in Fall 2015, roughly one hundred undergraduates from various schools enrolled in the course; they not only experienced new virtual reality technology firsthand, but also built their own virtual reality environments for their final project. This opportunity is, of course, vastly different from the assignments used for the first rendition of the course, taught more than three decades ago! VR technology, however, is just one aspect of this concepts-and-theory course, which each year covers a range of topics and disciplines related to visual imaging. The main goal of the course is to provide students from different colleges with a foundational knowledge of the concepts behind digital pictorial representation, image capture, and image display. Recently, the course has highlighted perspective representations, color perception, display technology, how television works, bandwidth and printing concepts, digital photography, computer graphics modeling and rendering, user interfaces and touch panel displays, and 2D, 3D, and stereo animation. The latter part of the course has focused on future technologies, including digital photorealism and photographic, laser, and infrared 3D geometry capture.

Grounded in a historical understanding of these technological developments, the course begins by describing how Renaissance architects and artists learned to make perspective drawings that accurately represented three-dimensional scenes. This can now, of course, be done using modern computer graphics methods, but it is important to note that these early geometric projections and current computer graphics algorithms yield exactly the same perspective images. Thus, the first assignment includes drawing exercises as well as the mathematical concepts underlying today's digital computations.

The Science of Art (Chapter 1) by Martin Kemp. Brunelleschi (among others in the Renaissance period) is credited with inventing the methods of linear perspective. Above are illustrations of Brunelleschi's peep-hole and mirror system and how he proved that his technique was valid. The figures below demonstrate how he painted the famous Baptistry in Florence using this system, alongside an actual photograph of the Baptistry for comparison.
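To illustrate the claim that a Renaissance perspective construction and a modern graphics pipeline compute the same image, here is a minimal sketch of a pinhole perspective projection. It is not taken from the course assignment; the camera convention, focal length, and sample points are illustrative assumptions.

```python
# A minimal sketch of perspective projection: a pinhole camera at the origin
# looking down -z, with an image plane at focal distance f.

def project(point, f=1.0):
    """Project a 3D point (x, y, z) onto the image plane at distance f."""
    x, y, z = point
    if z >= 0:
        raise ValueError("point must be in front of the camera (z < 0)")
    # Similar triangles: the screen coordinate is the world coordinate scaled
    # by focal length over depth -- the same rule Brunelleschi's geometric
    # construction encodes.
    return (f * x / -z, f * y / -z)

# Two points at the same world x but different depths: the farther one lands
# closer to the vanishing point, exactly as in a perspective drawing.
print(project((1.0, 0.5, -2.0)))   # (0.5, 0.25)
print(project((1.0, 0.5, -8.0)))   # (0.125, 0.0625)
```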

Below are two examples of student work submitted for the first portion of the first assignment on perspective drawing, attempting to mimic Brunelleschi's experiments.

Following the segment above, which describes the means for representing three-dimensional geometries with two-dimensional drawings, the course transitions to color science. This topic requires a knowledge of the physical behavior of light and color at all wavelengths. Although this behavior can be accurately simulated, perhaps more important is the way the human visual system perceives color. This part of the course explores the limitations not only of our perceptual system but of current display devices, ranging from computer screens to theater projection systems to inkjet printers and now to virtual reality. Various portions of this topic require a deeper understanding of the human visual system, the physiology of our eyes, and how the brain interprets the information sent from our eyes through the optic nerve. Examples of our perceptual responses, mastered by the artist Josef Albers, are shown below.

Josef Albers, the famous Bauhaus painter, had many exhibits illustrating the behavior of human visual color perception. In the images shown above, the two small squares in each painting are identical but look substantially different. This perceptual phenomenon can be explained by examining the chromatic interactions of the receptive fields of the fovea.

Closely related to color science, the course then deals with the technology inside today's omnipresent digital cameras, the limitations of printing technology, and how new devices such as the Lytro camera can extract three-dimensional information. Today, an image can no longer be considered just a representation on a two-dimensional surface, as it is the third dimension (depth) which is necessary for the brain to correctly interpret being in a virtual space.

The course then segues to presenting new technologies for extracting geometric information. Many methods, exemplified by the sensors on Google's autonomous driving vehicles, are currently being used. These include sonar, radar, digital photography, and time-of-flight (ToF) devices, all of which extract depth information. Within the laboratory of the Program of Computer Graphics, we have infrared and laser scanning devices, and we demonstrate how we can extract 3D geometries from multiple photographs.
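As a minimal sketch of the time-of-flight idea mentioned above (not the firmware of any particular sensor): a light pulse travels to the surface and back, so depth is half the round-trip time multiplied by the speed of light. The pulse timing below is a made-up illustrative value.

```python
# Depth from a time-of-flight measurement: depth = c * t_round_trip / 2.

C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_seconds):
    """Depth in meters from a measured round-trip pulse time."""
    return C * round_trip_seconds / 2.0

# A return after about 13.3 nanoseconds corresponds to a surface ~2 m away.
print(tof_depth(13.3e-9))  # ~1.99 m
```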

On the left, a student is being scanned with a Cyberware laser head scanner. Using a camera which rotates around the student's head, the depth of approximately 250,000 (512 x 512) points can be calculated. The figure on the right shows the teaching assistant being scanned and the resultant head model being displayed on the background terminal.

New cameras such as the Lytro Illum 40 camera, pictured below, can capture three-dimensional information in one shot. Multiple-camera rigs are now being developed to create 360-degree panoramas. The most recent Lytro camera, pictured above, captures sufficient information that all items in the photograph can be brought into focus. Instead of "point, focus, shoot," this capability changes the photography paradigm to "point, shoot, and focus later."
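A hedged sketch of the "focus later" idea: once a light-field capture has been decomposed into sub-aperture views, refocusing can be approximated by shifting each view in proportion to its aperture offset and averaging. This is a textbook shift-and-add illustration under assumed inputs, not Lytro's actual processing pipeline; the view grid and shift factor below are made up.

```python
import numpy as np

def refocus(views, shift_per_unit):
    """views: dict mapping (du, dv) aperture offsets to H x W grayscale arrays.
    shift_per_unit: pixel shift applied per unit of aperture offset;
    varying it moves the synthetic focal plane."""
    acc = None
    for (du, dv), img in views.items():
        # Shift each sub-aperture view toward the chosen focal plane, then sum.
        shifted = np.roll(img, (int(round(dv * shift_per_unit)),
                                int(round(du * shift_per_unit))), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# Toy usage: random 64 x 64 "views" on a 3 x 3 aperture grid.
rng = np.random.default_rng(0)
views = {(du, dv): rng.random((64, 64))
         for du in (-1, 0, 1) for dv in (-1, 0, 1)}
print(refocus(views, shift_per_unit=2.0).shape)  # (64, 64)
```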

Google's Jump 16-camera rig (there are other competitive devices) simultaneously captures 360-degree scenery. Images from each camera can be stitched together to create dynamic backgrounds for virtual and augmented reality systems.

But to see this voluminous data, we must have adequate means of display. Resolution must be sufficiently high to allow accurate human interpretation. Images must be displayed fast enough to allow us to perceive motion. And images can no longer be considered just a representation on a two-dimensional surface. Can liquid crystals (LCDs) twist fast enough? Will future OLED (organic light-emitting diode) displays be sufficient? Since we are migrating to a 3D imaging world with stereo movies and virtual reality, what technology is necessary? We need to simulate all of the monoscopic and stereoscopic depth cues that our brains expect, including the blur (lack of focus) that occurs when an object is not at the focal depth yet the image is displayed on a flat 2D surface. The algorithms involved and the display technology required are demonstrated in the laboratory.

The Program of Computer Graphics classroom uses three Microsoft Surface large-screen, high-resolution interactive touch panel displays for teaching and interactive presentations. Designs and simulations can be controlled by touch, pen, or gestural interfaces. Teaching assistant and M.S. of Architecture student Nicholas Cassab-Gheta draws on the 80-inch Microsoft Surface display.
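On the stereoscopic side described above, the core requirement is simply that each eye sees the scene from a slightly different position. Below is a minimal sketch, assuming a conventional stereo-rendering setup rather than the specific code used in the lab: each eye's camera is displaced half the interpupillary distance (IPD) along the camera's horizontal axis, and the scene is rendered once per eye. The 63 mm IPD and the camera pose are illustrative assumptions.

```python
import numpy as np

def eye_positions(camera_pos, right_axis, ipd_m=0.063):
    """Return (left_eye, right_eye) world positions for stereo rendering."""
    camera_pos = np.asarray(camera_pos, dtype=float)
    right_axis = np.asarray(right_axis, dtype=float)
    right_axis = right_axis / np.linalg.norm(right_axis)  # unit horizontal axis
    half = 0.5 * ipd_m * right_axis                       # half the eye separation
    return camera_pos - half, camera_pos + half

# Viewer standing at eye height 1.6 m, facing down -z, x to the right.
left, right = eye_positions(camera_pos=(0.0, 1.6, 0.0), right_axis=(1.0, 0.0, 0.0))
print(left, right)  # [-0.0315  1.6  0.]  [0.0315  1.6  0.]
```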

Each year, an increasing amount of pictorial content, be it video on cellphones, digital television, or theater entertainment, is created electronically. Physically based rendering algorithms, whose results are now almost perceptually indistinguishable from real-world scenes, are frequently used in animation or combined with live-action photography. An entire field of computer graphics has evolved over the past four decades not only to model the physics correctly, but to display the images fast enough to enable the beginnings of virtual reality. Understanding these technologies will be important for the next generation, not only for entertainment and virtual and augmented reality, but also because the techniques will most likely be involved in future modes of communication. For better or for worse, images are replacing or augmenting words. As voice replaced the telegram, and as Skype and video are starting to replace voice-only calls, so will future means of communication replace our current methods.

Given the recent publicity surrounding virtual and augmented reality, the final project of the course last semester was dedicated to giving students the opportunity to explore these new technologies. Projects were assigned in groups: each group created a model and inserted it into a virtual environment. Students were asked to design a virtual pavilion to be placed at the center of Cornell's Arts Quadrangle. In addition, the pavilion needed to house artifacts, which the students designed or chose from 3D online repositories. The assignment allowed students to explore and master skills in geometry and texturing on all of the visible surfaces in their virtual environment. Below are some pictures illustrating some of the group submissions. The entire class was able to put on Oculus goggles and see all submissions. This will certainly be continued this year.

Teaching assistant and Computer Science student Kenneth Lim engages students in how virtual reality algorithms work during one of the interactive laboratory sessions.
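As a single, hedged illustration of the simplest ingredient in physically based rendering: Lambertian diffuse reflection from one point light. This falls far short of the global-illumination algorithms developed at the Program of Computer Graphics; the albedo, light intensity, and geometry below are made-up illustrative values.

```python
import numpy as np

def lambert(normal, light_dir, albedo, light_intensity):
    """Outgoing diffuse radiance: (albedo / pi) * intensity * max(N.L, 0)."""
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    l = np.asarray(light_dir, float)
    l /= np.linalg.norm(l)
    cos_theta = max(float(np.dot(n, l)), 0.0)  # no light below the horizon
    return (np.asarray(albedo, float) / np.pi) * light_intensity * cos_theta

# A surface facing +z, lit from 45 degrees above, with an RGB albedo.
print(lambert((0, 0, 1), (0, 1, 1), albedo=(0.8, 0.6, 0.4), light_intensity=3.0))
```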

Below: The final projects were submitted during the last week of the semester. All students in the class were able to see the designs displayed on the large touch panel displays in the laboratory (above), although they could not get the same experience of presence that the user of the virtual reality glasses would have. Virtual Pavilion. Hallie Black and Ainslie Cullen.

Virtual Pavilion. Cole Norgaarden, Kevin Beaulieu, and Rachel DiPirro.

Explaining all of these topics cannot easily be done in a lecture format; each of the above segments requires hands-on interactive laboratory experiments for both demonstration and comprehension. For this reason, in the fall semester of 2017 there will be approximately eight laboratory sessions at the Program of Computer Graphics to illustrate the technologies. One of these will be specifically dedicated to mathematical explanations for more advanced engineering and computer science students.

Cornell's Program of Computer Graphics has had a long history of developing physically based global illumination rendering algorithms, both for realistic simulations and for animation. Starting with early work at Hanna-Barbera in the 1970s and culminating in recent SIGGRAPH papers, we have been successful in this specific area for more than four decades. Former students founded Pixar, have won 13 Hollywood technical Oscars, and now teach computer graphics in many computer science departments, including Cornell's. Recently, the Program of Computer Graphics at Cornell has been fortunate enough to procure funding and the newest VR technologies, giving undergraduate students the opportunity to engage with cutting-edge developments in this nascent field. The research has been supported by major companies in the field, including Microsoft, Valve, Nvidia, Oculus, Pixar, and Autodesk. This support has allowed students to get first-hand, hands-on experience with state-of-the-art technologies during class and lab sessions.

Accomplishing the segments described above requires knowledge from many different fields, hence the rationale for this interdisciplinary course. Although in the short and limited time of one semester it is not possible to present all of this material in depth, the course will give undergraduate students knowledge of what might be available in many different departments and colleges at Cornell. Thus, one of the purposes of this course is to introduce all of the relevant topics so that students might choose specific majors or minors for continuing study.

Pre-enrollment for the course will take place in mid-April (see the Cornell registrar for dates based on class year). Lectures will be held on Tuesdays and Thursdays from 11:15 am to 12:05 pm, with one recitation at least every other week. Recitations will be given at two different times on Tuesdays and Thursdays, with half of the sections dedicated to right-brainers and the other half to left-brainers. Attendance is mandatory. The one-hour lab sessions will be augmented by experiments and demonstrations at the state-of-the-art laboratory of the Program of Computer Graphics. There are no exams. Note that enrollment is somewhat limited by the seating capacity of the lecture hall, and thus students may not drop the course after the first week of classes, so that the maximum number of students on the waiting list may enroll. Interested students should look for ART 2907, ARCH 3702, CS 1620, and ENGRI 1620 to pre-enroll.