A Virtual Reality approach to progressive lenses simulation


Jose Antonio Rodríguez Celaya¹, Pere Brunet Crosa¹, Norberto Ezquerra², J. E. Palomar³

¹ Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya, {jcelaya, pere}@lsi.upc.edu
² College of Computing, Georgia Institute of Technology, norberto@cc.gatech.edu
³ R+D+i Department, Lens Division, Industrias de Optica S.A., joan.palomar@indo.es

Abstract

Progressive lenses are lenses that allow focusing on objects at any distance. Their main problem is the appearance of marginal zones that present aberrations, in which vision is defective. The interest of our research is to study the perceptual effect of these distortions using Virtual Reality (VR) techniques. In this paper we describe the phases and goals of our research, and we present a lens simulator that will help us obtain a correct perception of the aberrations produced by these marginal zones.

1. Introduction

Presbyopia is the optical condition in which the eye's accommodation power irreversibly decreases with age. It appears in people around 40 years old, and older people suffer from it to a greater or lesser degree. One solution for correcting presbyopia is the use of progressive lenses. A progressive lens has three zones: one for far vision (the upper one), another for reading or near vision (the lower one), and a progressive path between them (figure 1). Lens power varies continuously from the far vision zone to the near vision zone, allowing clear vision at any distance. The main problem of these lenses is the appearance of marginal zones that present aberrations, in which vision is defective.

Figure 1. Progressive lens design.

In section 2 we describe the goals and the phases that make up our research. In section 3 we review relevant previous work in the field of optical system simulation and the methods used to model such systems.

In section 4 we present the main techniques and algorithms used in the development of our lens simulator. Section 5 shows some images of the results obtained by the developed system. Finally, in section 6 we present our conclusions and the work that remains in this research.

2. The use of VR in progressive lens simulation

As previously mentioned, the main interest of our research is to study the perceptual effect of the aberrations produced by progressive lenses using VR techniques. To achieve this goal we need to develop a lens simulator that permits a correct perception of the lens aberrations. We decided to develop two different simulators that generate pictures of a static model:

- A flat display simulator, which allows perceiving how a scene is seen through a particular lens model. This is the result we present in this article.
- A second simulator, similar to the previous one but using a passive stereo VR system for visualizing the scenes. This is ongoing research and will be presented in a future paper.

The most important task in this stage is the usability test, which will let us find out whether users have the same perception with real lenses as with the simulation. The test will be done with N users, always with the same scene. For each user, the following tests will be performed:

1. The user watches the real scene with real lenses.
2. The user watches the simulated scene on the display, using lenses that allow focusing at the display's distance.
3. The user watches the scene in a VR system, using lenses that allow focusing at the display's distance, together with passive stereo glasses.

Another goal of these tests is to compare the two simulations and decide which one is better: the flat display simulation or the simulation in the VR system, which suffers from convergence and accommodation problems. The simulator and/or the lens models will be improved until the usability test shows that perception in the simulator is acceptably close to real perception.

In a second stage, both simulators should admit dynamic models with movement. The usability test would be the same as the one in the first stage. The final goal, if the usability tests yield positive results, is to use the simulator for testing lenses that do not yet exist and for tuning their parameters.

The present paper focuses on the design and implementation of the flat display simulator, which allows a correct perception of a lens's aberrations. After a discussion of previous work, the algorithm is presented in section 4 and the results are discussed in section 5.

3. Previous work

One of the biggest problems the optical industry faces when providing solutions to particular patients is the difficulty of knowing the effect of a particular lens on a particular patient. In recent years this need has led to the development of optical system simulators that generate distorted images from real lens data, from modeled lenses [1], or from a particular patient's eye model, allowing one to see an image equal to the one seen through that optical system.

One of the most interesting projects in this area is the one developed by John Bastian et al. in collaboration with Sola Optical [2]. In this project, a lens simulator for an immersive VR system (in this case, a CAVE system) was developed. This simulator takes as input a lens model built from a vector map that represents the distortion of the light rays through the lens.
To simulate the distortion originated by the lens, a technique called the Partitioned Blur Algorithm (PBA) is used, which applies a Gaussian blur to each pixel of the image by means of a convolution. To compute this blur, a technique based on the circle of least confusion, developed by Potmesil and Chakravarty [3], is used. This technique uses blur discs whose size depends on the distance of the object to which the pixel belongs.
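Not part of the original paper, the following Python sketch illustrates the idea behind this kind of depth-dependent blur: a thin-lens circle-of-confusion diameter is computed per pixel from a depth map, and pixels are partitioned by blur size so that each partition is blurred once. The partitioning strategy, the function names, and the default parameters are illustrative assumptions, not details of Bastian et al.'s actual PBA implementation.

    # Minimal sketch of depth-dependent (circle-of-confusion) blur.
    # Assumptions: image is an H x W x 3 array; the quantize-and-blur
    # partitioning is ours, not the authors' exact algorithm.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def circle_of_confusion(depth_m, focus_m, focal_mm=50.0, f_number=2.8):
        """Blur-disc diameter (mm) for points at depth_m when focused at focus_m
        (standard thin-lens formula)."""
        f = focal_mm / 1000.0       # focal length in meters
        aperture = f / f_number     # aperture diameter in meters
        coc = aperture * f * np.abs(depth_m - focus_m) / (depth_m * (focus_m - f))
        return coc * 1000.0         # back to millimeters

    def partitioned_blur(image, depth, focus_m, mm_per_pixel=0.05, levels=8):
        """Partition pixels by blur-disc radius and Gaussian-blur each partition."""
        radius_px = circle_of_confusion(depth, focus_m) / (2.0 * mm_per_pixel)
        edges = np.linspace(0.0, radius_px.max() + 1e-6, levels)
        bins = np.clip(np.digitize(radius_px, edges), 1, levels)
        out = np.zeros_like(image, dtype=float)
        for b in np.unique(bins):
            sigma = radius_px[bins == b].mean() / 2.0  # heuristic sigma from disc radius
            blurred = (gaussian_filter(image.astype(float), sigma=(sigma, sigma, 0))
                       if sigma > 0 else image.astype(float))
            out = np.where((bins == b)[..., None], blurred, out)
        return out.astype(image.dtype)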

The major drawback of this technique comes from the lens modeling. By using a vector map, they assume that the distortion at each point is exactly a circle whose color does not take neighboring pixels into account, but is simply graded from its center to its border. This makes object colors interfere with one another. Moreover, since these vector maps are in fact a simplification of ray tracing, they do not provide depth of field information, so all parts of a rendered image are equally in focus.

Another interesting project is the one developed by the computer graphics group at the University of California, Berkeley. This project, called OPTICAL (OPtics and Topography Involving the Cornea And Lens) and led by Brian Barsky [4], has as its primary goal the realistic simulation of vision through an optical system. This system can be a lens, a real patient's eye, or both together. The simulation produces an accurate image of what a particular patient sees [5]. The starting point of the simulation is the data obtained by a Hartmann-Shack device [6][7], which gives an accurate measurement of the wavefront aberrations of an optical system. Wavefronts, unlike ray tracing, provide depth of field information. One way of representing those wavefronts is with Point Spread Functions (PSFs), which are two-dimensional energy histograms [5]. A PSF describes how deformed and blurred a point appears, at a particular distance and direction, when seen through an optical system. PSFs are used as focal two-dimensional filters: the color of each pixel in the final image is calculated by a convolution. The result is a good approximation of the real image seen through an optical system, which is especially useful for simulating the real vision of patients with visual defects [3]. One of the main limitations of this approach is the use of discrete depth values, which can result in images in which focus differences can be perceived. It is also a slow algorithm, not suitable for real-time simulation.

4. Algorithms for lens simulation

For the development of our lens simulator, we chose Barsky's approach, using PSFs to model the lenses we want to simulate. In this section we describe the process followed to achieve this simulation.

4.1. Modeling a lens

A lens is modeled by a three-dimensional PSF matrix, oriented in accordance with the viewer coordinate system. Imagine we are looking at an object through a lens at a particular distance and direction. The direction is modeled by two angles, horizontal and vertical, measured from the optical axis. The PSF for a particular distance and direction models how deformed and blurred we see the object. The ideal lens model would store a specific PSF for every possible direction and distance. As this is not possible, a lens model is approximated by a three-dimensional matrix of PSFs (figure 2).

Figure 2. A lens model. Each grid point corresponds to a PSF; there are N x N x M PSFs, with depth as the third dimension.
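To make this data layout concrete, the following Python sketch shows one possible representation of such a PSF grid. The class name, the method names, and the cell-lookup logic are hypothetical illustrations consistent with the structure described above; the paper does not specify an implementation.

    # Hypothetical sketch of the N x N x M lens model of section 4.1.
    import numpy as np

    class PSFLensModel:
        def __init__(self, h_angles, v_angles, depths, psf_size=80):
            """h_angles, v_angles: sample directions in degrees from the optical
            axis; depths: sample distances in meters. Each grid cell holds one
            normalized PSF (an energy histogram whose values sum to 1)."""
            self.h_angles = np.asarray(h_angles, dtype=float)
            self.v_angles = np.asarray(v_angles, dtype=float)
            self.depths = np.asarray(depths, dtype=float)
            self.psfs = np.zeros((len(h_angles), len(v_angles), len(depths),
                                  psf_size, psf_size), dtype=float)

        def set_psf(self, i, j, k, psf):
            psf = np.asarray(psf, dtype=float)
            self.psfs[i, j, k] = psf / psf.sum()   # normalize so values sum to 1

        def cell_indices(self, h, v, d):
            """Lower corner (i, j, k) of the grid cell enclosing direction (h, v)
            and depth d; the 8 PSFs at this cell's corners feed the trilinear
            interpolation of section 4.3."""
            i = np.clip(np.searchsorted(self.h_angles, h) - 1, 0, len(self.h_angles) - 2)
            j = np.clip(np.searchsorted(self.v_angles, v) - 1, 0, len(self.v_angles) - 2)
            k = np.clip(np.searchsorted(self.depths, d) - 1, 0, len(self.depths) - 2)
            return int(i), int(j), int(k)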

4.2. Preprocess

The PSFs we use in our simulator are two-dimensional arrays of 80 x 80 values. Each PSF is normalized so that its values sum to 1. Ideally, there should be as many PSFs in x and y as pixels on the screen. As mentioned before, this is not possible due to the huge amount of data required. As we have fewer values than needed, an approximation of the real PSF has to be computed for each pixel. This can be done by applying a trilinear interpolation. Even so, the interpolation is a costly process, because an 80 x 80 matrix (a PSF) would have to be calculated for each pixel. A way to reduce these calculations is to convert the PSFs to pixel-based PSFs: we determine how many pixels are affected by a PSF and create a PSF with one value per affected pixel. The resulting PSFs are much smaller but perceptually equivalent to the previous ones. Due to PSF normalization, a 0 value does not contribute to the final color, so we can compute the maximal PSF size in pixels (over all given PSFs, see figure 2) and convert all the initial PSFs to these pixel-sized values, simplifying the computations.

To calculate the new PSFs we adjust the field of view to the sum of the angles between PSFs on the x axis (the screen FOV is thus 2α, see figure 3). This way we can use formula (1) to deduce the number of pixels covered by a PSF:

\[ \mathrm{pix}_x = \frac{x \, w}{2 d \, \tan(\alpha)} \qquad (1) \]

where x is the size in millimeters of the PSF along the x axis (equal to its size in y), w is the number of pixels of the screen's horizontal resolution, α is the angle from the optical axis to the PSF at the edge of the screen (half the screen) on the x axis, and d is the distance at which the image is projected (figure 3). This preprocess has to be done only once, when loading a lens model or when a parameter such as the projection distance changes.

Figure 3. Calculating the size in pixels of a PSF. For simplicity we use square screens (width = height). The FOV of the screen is adjusted to 2α.

4.3. Calculating PSF values for each pixel on the screen

To render a particular image, we need to calculate the PSF values for each pixel on the screen. This has to be done every time we render an image, so in a real-time simulation it would be done every frame. As previously mentioned, this process applies a trilinear interpolation. To perform this interpolation, it is necessary to know the 8 PSFs enclosing the point P(x, y, z) for which we are calculating the PSF (figure 4). Choosing these PSFs requires knowing the depth of the object in the scene that corresponds to the current pixel. We obtain it by reading the depth information for that pixel from the depth buffer; with the x, y (screen coordinates) and depth values of the current pixel, we know the 8 enclosing PSFs and can perform a trilinear interpolation among them, obtaining another PSF as the result.

Figure 4. White circles are points at which we have PSF data. P is the point for which we want to calculate a PSF. The algorithm computes the PSF from the 8 PSFs at the vertices of the box containing P, using trilinear interpolation.

4.4. Convolution

Knowing the PSF corresponding to the current pixel, we only have to perform a convolution in order to find out how the surrounding pixels affect the final color of the pixel being processed. The convolution is applied following formula (2):

\[ I(x, y) = \sum_{i = x - \Delta}^{x + \Delta} \; \sum_{j = y - \Delta}^{y + \Delta} \mathrm{color}(i, j) \, \mathrm{PSF}(x - i, \, y - j) \qquad (2) \]

In this formula, Δ is half the size in pixels of a PSF, and x, y are the screen coordinates of the pixel being computed. After performing these calculations for every pixel on the screen, we obtain a blurred scene, which is the simulation of the original scene seen through the modeled lens. The top-level algorithm of the blurring process is:

For each pixel on screen {
    Get the depth value from the depth buffer.
    With (x, y, depth), choose the 8 surrounding PSFs (see figure 4).
    Trilinearly interpolate the 8 PSFs at (x, y, depth).
    Apply the convolution (formula 2) using the interpolated PSF values and the color values of the pixels close to (x, y).
}
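The following deliberately unoptimized Python sketch ties sections 4.3 and 4.4 together, assuming the hypothetical PSFLensModel grid sketched in section 4.1. The pixel_to_angles helper, which maps screen coordinates to angles from the optical axis, is an assumption; a real implementation would use the precomputed pixel-sized PSFs of section 4.2 and, as noted in section 6, could move the convolution to graphics hardware.

    # Sketch of the per-pixel pipeline: trilinear PSF interpolation (4.3)
    # followed by the convolution of formula (2) (4.4). Not optimized.
    import numpy as np

    def trilinear_psf(model, i, j, k, fh, fv, fd):
        """Blend the 8 corner PSFs of cell (i, j, k) with fractions in [0, 1]."""
        p = model.psfs
        psf = np.zeros_like(p[i, j, k])
        for di, wi in ((0, 1 - fh), (1, fh)):
            for dj, wj in ((0, 1 - fv), (1, fv)):
                for dk, wk in ((0, 1 - fd), (1, fd)):
                    psf += wi * wj * wk * p[i + di, j + dj, k + dk]
        return psf / psf.sum()

    def blur_image(image, depth, model, pixel_to_angles):
        """Apply formula (2) at every pixel. pixel_to_angles(x, y) maps screen
        coordinates to (h, v) angles from the optical axis (hypothetical helper)."""
        h_img, w_img = depth.shape
        out = np.zeros_like(image, dtype=float)
        for y in range(h_img):
            for x in range(w_img):
                h, v = pixel_to_angles(x, y)
                i, j, k = model.cell_indices(h, v, depth[y, x])
                # Fractional position of (h, v, depth) inside the enclosing cell.
                fh = (h - model.h_angles[i]) / (model.h_angles[i + 1] - model.h_angles[i])
                fv = (v - model.v_angles[j]) / (model.v_angles[j + 1] - model.v_angles[j])
                fd = (depth[y, x] - model.depths[k]) / (model.depths[k + 1] - model.depths[k])
                psf = trilinear_psf(model, i, j, k, np.clip(fh, 0, 1),
                                    np.clip(fv, 0, 1), np.clip(fd, 0, 1))
                h_psf, w_psf = psf.shape
                # Convolution (formula 2) over the pixels around (x, y).
                for pj in range(h_psf):
                    for pi in range(w_psf):
                        yy = y + (h_psf // 2) - pj
                        xx = x + (w_psf // 2) - pi
                        if 0 <= yy < h_img and 0 <= xx < w_img:
                            out[y, x] += image[yy, xx] * psf[pj, pi]
        return out.astype(image.dtype)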

5. Results

For testing our simulator we decided to use a very simple lens model. The PSFs used correspond to a 5 diopter lens and are calculated for an object at optical infinity (from 5 meters) in 9 directions, 20° apart. They form a 3 x 3 matrix in which the center corresponds to the optical axis and the other directions form a 20° angle with the axis (figure 5).

Figure 5. Lens model used in the test.

For this simulation, since at least one more PSF plane is needed for interpolation, we decided to set a perfect-vision PSF plane (PSFs with a very high value at their central point) at a near distance; a sketch of this configuration follows below. This distance can be modified in the application. The results are shown in figures 6-10.
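For illustration only, the test configuration just described could be assembled as follows, reusing the hypothetical PSFLensModel sketch from section 4.1. The Gaussian far-plane PSF is merely a placeholder (the paper's PSFs are computed for an actual 5 diopter lens), and the near plane at 1 m matches the depth planes mentioned in the caption of figure 9.

    # Illustrative assembly of the test lens model of section 5.
    import numpy as np

    def gaussian_psf(size=80, sigma=6.0):
        """Placeholder blurred PSF; the real far-plane PSFs come from lens data."""
        ax = np.arange(size) - (size - 1) / 2.0
        g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
        return g / g.sum()

    def delta_psf(size=80):
        """Perfect-vision PSF: all energy concentrated at the central point."""
        p = np.zeros((size, size))
        p[size // 2, size // 2] = 1.0
        return p

    angles = [-20.0, 0.0, 20.0]                 # 3 x 3 directions, 20 degrees apart
    model = PSFLensModel(angles, angles, depths=[1.0, 5.0])  # near plane at 1 m (adjustable)
    for i in range(3):
        for j in range(3):
            model.set_psf(i, j, 0, delta_psf())      # near plane: perfect vision
            model.set_psf(i, j, 1, gaussian_psf())   # far plane: 5 D lens at optical infinity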

Figure 6. Engine model: original and blurred image.

Figure 7. Two cubes: original and blurred image.

Figure 8. Interior of a ship: original and blurred image.

Figure 9. Depth values of the engine and cubes scenes. Green values indicate depths between the two planes (one at 1 m and the other at 5 m). Red values indicate depths greater than the depth of the 2nd depth plane (5 m). Blue values indicate depths of less than 1 m.

Figure 10. Lens simulator interface.

6. Conclusions and future work

The developed simulator allows us to see the behavior of different lens models and to try different parameters in the simulation, but several tasks remain:

- Try other lens types, e.g. progressive lenses, with more depth planes.
- Implement the effects of magnification, which will improve the quality of the simulation.
- Adapt the simulator to generate stereoscopic images, which will allow us to simulate lenses in VR devices.
- Perform a usability test as described in section 2, which will determine how accurate the final images of our simulator are.

In a second stage, we will try to implement the convolution described in section 4.4 on graphics hardware, which could allow real-time simulations. Finally, another usability test will be performed on the final simulator to determine the advantages and disadvantages of using VR techniques.

7. Acknowledgements

This work has been developed in the framework of the INDO-UPC agreement for the study of the usability of VR tools in the improvement of visual defects. It has been partially funded by Industrias de Optica S.A., by CDTI FIT-030000-2004-108, by PROFIT 04-0308, and by research project TIN-2004-08065-C02-01. The authors would like to thank Javier Vegas, Enric Fontdecaba and Juan C. Dürsteler from INDO for their help and cooperation.

References

[1] W. Heidrich, P. Slusallek, H.-P. Seidel. An image-based model for realistic lens systems in interactive computer graphics. Graphics Interface '97, pp. 68-75, 1997.

[2] J. Bastian, A. van den Hengel, K. Hawick, F. Vaughan. Modelling the perceptual effects of lens distortion via an immersive virtual reality system. Technical Report DHPC-086, Department of Computer Science, University of Adelaide, 1999.

[3] M. Potmesil, I. Chakravarty. A lens and aperture camera model for synthetic image generation. In SIGGRAPH '81 Conference Proceedings, Computer Graphics, ACM Press, pp. 297-305, 1981.

[4] D. García, B. A. Barsky, S. A. Klein. The OPTICAL project at UC Berkeley: simulating visual acuity. Medicine Meets Virtual Reality: 6 (Art, Science, Technology: Healthcare (r)evolution), San Diego, January 28-31, 1998.

[5] D. García. CWhatUC: Software Tools for Predicting, Visualizing and Simulating Corneal Visual Acuity. PhD thesis, Computer Science Division, University of California, Berkeley, CA, May 2000.

[6] B. A. Barsky, D. García, S. A. Klein. Computer Simulation of Vision-Based Synthetic Images Using Hartmann-Shack-Derived Wavefront Aberrations. Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, 29 April - 4 May 2001. Abstract in Investigative Ophthalmology & Visual Science, Vol. 42, No. 4, March 15, 2001, p. S162.

[7] B. A. Barsky, D. García, S. A. Klein, W. M. Yu, B. P. Chen, S. S. Dalal. RAYS (Render As You See): Vision-Realistic Rendering Using Hartmann-Shack Wavefront Aberrations. Internal report, OPTICAL project, UC Berkeley, March 2001.

[8] P. Artal, J. Santamaría, J. Bescós. Retrieval of wave aberration of human eyes from actual point-spread-function data. J. Opt. Soc. Am. A, 5:1201-1206, 1988.

[9] B. A. Barsky. Vision-Realistic Rendering: Simulation of the Scanned Foveal Image from Wavefront Data of Human Subjects. First Symposium on Applied Perception in Graphics and Visualization, co-located with ACM SIGGRAPH, Los Angeles, 7-8 August 2004, pp. 73-81.

[10] B. A. Barsky, B. P. Chen, A. C. Berg, M. Moutet, D. García, S. A. Klein. Incorporating Camera Models, Ocular Models, and Actual Patient Eye Data for Photo-Realistic and Vision-Realistic Rendering. Abstract in the Fifth International Conference on Mathematical Methods for Curves and Surfaces, Oslo, Norway, June 29 - July 4, 2000.

[11] J. Loos, Ph. Slusallek, H.-P. Seidel. Using Wavefront Tracing for the Visualization and Optimization of Progressive Lenses. Proc. of Eurographics 1998, Computer Graphics Forum, Vol. 17, No. 3.