A Virtual Reality approach to progressive lenses simulation

Jose Antonio Rodríguez Celaya¹, Pere Brunet Crosa¹, Norberto Ezquerra², J. E. Palomar³
¹ Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya {jcelaya, pere}@lsi.upc.edu
² College of Computing, Georgia Institute of Technology norberto@cc.gatech.edu
³ R+D+i Department, Lens Division, Industrias de Optica S.A. joan.palomar@indo.es

Abstract

Progressive lenses are lenses that allow focusing objects at any distance. Their main problem is the appearance of marginal zones that present aberrations, in which vision is defective. The aim of our research is to study the perceptual effect of these distortions using Virtual Reality (VR) techniques. In this paper we describe the different phases and goals of our research and present a lens simulator that helps us obtain a correct perception of the aberrations produced by these marginal zones.

1. Introduction

Presbyopia is the optical condition in which the eye's accommodation power irreversibly decreases due to age. It appears in people around 40 years old, and older people suffer from it to a greater or lesser degree. One solution for presbyopia correction is the use of progressive lenses. A progressive lens has three zones: one for far vision (the upper one), one for reading or near vision (the lower one), and a progressive corridor between them (figure 1). Lens power varies continuously from the far vision zone to the near vision zone, allowing clear vision at any distance. The main problem of these lenses is the appearance of marginal zones that present aberrations, in which vision is defective.

Figure 1. Progressive lens design

In section 2 we describe the goals and phases that make up our research.
We review in section 3 relevant previous work in the field of optical systems simulation and the methods used to model them.
In section 4 we present the main techniques and algorithms used in the development of our lens simulator. Section 5 shows some images of the results obtained with the developed system. Finally, in section 6 we present our conclusions and the future work of this research.

2. The use of VR in progressive lens simulation

As previously mentioned, the main interest of our research is to study the perceptual effect of the aberrations produced by progressive lenses using VR techniques. To achieve this goal we need to develop a lens simulator that permits a correct perception of the lens aberrations. We decided to develop two different simulators that generate pictures of a static model. The first is a flat display simulator, which allows perceiving how a scene is seen through a particular lens model; this is the result we present in this article. The second simulator will be similar to the first one but will use a VR passive stereo system for visualizing the scenes; this is ongoing research and will be presented in a future paper.

The most important task in this stage is the usability test, which will let us find out whether users have the same perception with real lenses as with the simulation. The test will be done with N users, always with the same scene. For each user, the following tests will be performed:
1. The user watches the real scene with real lenses.
2. The user watches the simulated scene on the display using lenses that allow focusing at the display's distance.
3. The user watches the scene in a VR system using lenses that allow focusing at the display's distance and passive stereo glasses.

Another goal of these tests is to compare the simulations and decide which one is better: the flat display simulation or the simulation in the VR system, which has convergence and accommodation problems. The simulator and/or the lens models will be improved until the usability test shows that perception in the simulator is acceptably close to real perception.
In a second stage both simulators should admit dynamic models with movement. The usability test would be the same as the one in the first stage. The final goal, if the usability tests yield positive results, is to use the simulator for testing non-existing lenses and for tuning their parameters.

The present paper focuses on the design and implementation of the flat display simulator, which allows a correct perception of a lens's aberrations. After a discussion of previous work, the algorithm is presented in section 4 and the results are discussed in section 5.

3. Previous work

One of the biggest problems the optical industry faces when providing solutions for particular patients is the difficulty of knowing the effect of a particular lens on a particular patient. In recent years these needs have led to the development of optical system simulators that generate distorted images from real lens data, from modeled lenses [1], or from a particular patient's eye model, allowing one to see an image equal to the one seen through that optical system.

One of the most interesting projects in this area is the one developed by John Bastian et al. in collaboration with Sola Optical [2]. In this project, a lens simulator for an immersive VR system, in this case a CAVE, was developed. This simulator takes as input a lens model built from a vector map that represents the distortion of the light rays through the lens. For simulating the distortion originated by the lens, a technique called the Partitioned Blur Algorithm (PBA) is used, which applies a Gaussian blur to each pixel in the image by means of a convolution. To achieve this blur, a technique based on the circle of least confusion, developed by Potmesil and Chakravarty [3], is used. This technique uses blur discs whose size depends on the distance of the object to which the pixel belongs.
The major drawback of this technique comes from the lens modeling. By using a vector map, they assume that the distortion at each point is exactly a circle whose color does not take neighboring pixels into account but is simply degraded from its center to its border. This makes object colors interfere with each other. Moreover, these vector maps, being in fact a simplification of ray tracing, do not provide depth of field information, so all parts of a rendered image are equally in focus.

Another interesting project is the one developed by the computer graphics group at the University of California, Berkeley. This project, called OPTICAL (OPtics and Topography Involving the Cornea And Lens) and led by Brian Barsky [4], has as its primary goal a realistic simulation of vision through an optical system. This system can be a lens, a real patient's eye, or both together. The simulation produces an accurate image of what a particular patient sees [5]. The starting point is the data obtained by a Hartmann-Shack device [6][7], which gives an accurate measurement of the wavefront aberrations of an optical system. Wavefronts, unlike ray tracing, provide depth of field information. A way of representing those wavefronts is by using PSFs (Point Spread Functions), which are two-dimensional energy histograms [5]. These PSFs describe how deformed and blurred a point is seen, at a particular distance and direction, through an optical system. PSFs are used as focal two-dimensional filters, and for each pixel in the final image its color is calculated through a convolution. The result is a good approximation to the real image seen through an optical system, which is especially useful for simulating the real vision of patients with vision defects [3]. One of the main limitations of this approach is the use of discrete depth values, which can result in images in which focus differences can be perceived. It is also a slow algorithm, not suitable for real-time simulations.

4.
Algorithms for lens simulation

For the development of our lens simulator, we chose to follow Barsky's approach, using PSFs to model the lenses we want to simulate. In this section we describe the process followed to achieve this simulation.

4.1. Modeling a lens

A lens is modeled from a three-dimensional PSF matrix, oriented in accordance with the viewer's coordinate system. Imagine we are looking at an object through a lens at a particular distance and direction. This direction is modeled by 2 angles, horizontal and vertical, measured from the optical axis. The PSF for a particular distance and direction models how deformed and blurred we see the object. The ideal lens model would store a specific PSF for each possible direction and distance. As this is not possible, a lens model is approximated as a three-dimensional matrix of PSFs (fig 2).

Figure 2. A lens model. Each point corresponds to a PSF. There are NxNxM PSFs, indexed by two direction angles and depth.
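As a concrete illustration, the three-dimensional PSF matrix of figure 2 can be held in a single array. This is only a sketch of the data layout; the variable names and grid sizes are our assumptions (the paper fixes only the 80x80 PSF resolution and the NxNxM grid of figure 2):

```python
import numpy as np

# Hypothetical container for the lens model of figure 2: a grid of PSFs
# indexed by horizontal angle, vertical angle, and depth plane.
N, M, PSF_SIZE = 3, 2, 80  # N x N directions, M depth planes (illustrative)

# One 80x80 PSF per (direction_x, direction_y, depth) grid point.
lens_model = np.zeros((N, N, M, PSF_SIZE, PSF_SIZE))

# Example: an ideal ("perfect vision") PSF concentrates all energy at its
# central sample, as used for the near depth plane in section 5.
ideal = np.zeros((PSF_SIZE, PSF_SIZE))
ideal[PSF_SIZE // 2, PSF_SIZE // 2] = 1.0
lens_model[:, :, 0] = ideal  # assign the ideal PSF to the first depth plane

# Each stored PSF is normalized so its values sum to 1 (section 4.2).
assert np.allclose(lens_model[0, 0, 0].sum(), 1.0)
```

Looking up `lens_model[i, j, k]` then yields the measured PSF for one sampled direction and distance; intermediate points are handled by the interpolation of section 4.3.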
4.2. Preprocess

The PSFs we use in our simulator are two-dimensional arrays of 80x80 values. These PSFs are normalized so that all the values in a particular PSF sum to 1. Ideally, there should be as many PSFs, in x and y, as pixels on the screen. As mentioned before, this is not possible due to the huge amount of data required. Since we have fewer values than needed, an approximation to the real PSF has to be computed for each pixel. This can be done by applying a tri-linear interpolation. Even so, the interpolation is a costly process, because for each pixel an 80x80 value matrix (PSF) would have to be calculated. A way to reduce these calculations is to convert the PSFs to pixel-based PSFs: we determine how many pixels are affected by a PSF and create a PSF with one value for each affected pixel. The resulting PSFs are much smaller but perceptually identical to the previous ones. Due to PSF normalization, a 0 value does not contribute to the final color, so we can calculate the maximal PSF size in pixels (over all given PSFs, see figure 2) and convert all the initial PSFs to these pixel-sized values, making computations easier.

For calculating the new PSFs we adjust the field of view to the sum of the angles between PSFs on the x axis. This way we can use formula (1) to deduce the number of pixels covered by a PSF:

pix_x = (x * w) / (2 * d * tan(α))   (1)

where x is the size in millimeters of the x axis of the PSF (equal to that of y), w is the screen resolution in pixels, α is the angle from the optical axis to the PSF at the edge of the screen (half screen) on the x axis, and d is the distance at which the image is projected (fig 3). This preprocess has to be done only once, when loading a lens model or when a parameter such as the projection distance changes.

Figure 3. Calculating the size in pixels of a PSF. For simplification we use square screens (width = height). The FOV of the screen is adjusted to 2α.

4.3.
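Formula (1) can be checked with a short function. The reasoning: the screen half-width d·tan(α) maps to w/2 pixels, so the millimeter-to-pixel scale factor is w / (2·d·tan(α)). The numbers in the example are illustrative assumptions, not values from the paper:

```python
import math

def psf_size_in_pixels(x_mm, w_pixels, alpha_rad, d_mm):
    """Number of pixels covered by a PSF of physical size x_mm, per
    formula (1): pix_x = (x * w) / (2 * d * tan(alpha))."""
    return x_mm * w_pixels / (2.0 * d_mm * math.tan(alpha_rad))

# Illustrative example (assumed values): a 1024-pixel-wide screen viewed
# from 1 m, half-FOV of 20 degrees, PSF physical size of 10 mm.
pix = psf_size_in_pixels(10.0, 1024, math.radians(20.0), 1000.0)
```

With these assumed values the PSF covers roughly 14 pixels, which is the order of reduction (from 80x80) that makes the per-pixel convolution affordable.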
Calculate PSF values for each pixel on the screen

To render a particular image, we need to calculate the PSF values for each pixel on the screen. This is done every time we render an image, so in a real-time simulation it would have to be done for every frame. As previously mentioned, this process applies a trilinear interpolation. In order to perform this interpolation, it is necessary to know the 8 PSFs enclosing the point P(x,y,z) for which we are calculating the PSF (fig 4). Choosing these PSFs requires knowing the depth of the object in the scene that corresponds to the current pixel. We do this by grabbing the depth information for that pixel from the depth buffer; with the x, y (screen coordinates) and depth values of the current pixel, we know these 8 PSFs and can perform a trilinear interpolation between them, obtaining another PSF as a result.
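The tri-linear interpolation step described above can be sketched as follows. The (2, 2, 2, S, S) array layout for the 8 surrounding PSFs and the function name are our assumptions:

```python
import numpy as np

def trilinear_psf(corner_psfs, fx, fy, fz):
    """Tri-linearly interpolate the 8 PSFs at the vertices of the cell
    containing point P. corner_psfs has shape (2, 2, 2, S, S); fx, fy, fz
    in [0, 1] are P's fractional coordinates inside the cell."""
    # Collapse one axis at a time: x, then y, then z.
    cx = corner_psfs[0] * (1 - fx) + corner_psfs[1] * fx  # (2, 2, S, S)
    cy = cx[0] * (1 - fy) + cx[1] * fy                    # (2, S, S)
    return cy[0] * (1 - fz) + cy[1] * fz                  # (S, S)
```

Because the result is a convex combination of normalized PSFs, it is itself normalized (its values sum to 1), so no renormalization step is needed before the convolution of section 4.4.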
Figure 4. White circles are points at which we have PSF data. P is the point for which we want to calculate the PSF. The algorithm computes the PSF from the 8 PSFs at the vertices of the box containing P, using tri-linear interpolation.

4.4. Convolution

Knowing the PSF corresponding to the current pixel, we only have to perform a convolution in order to find out how the surrounding pixels affect the final color of the pixel being processed. The convolution is applied following this formula:

I(x, y) = Σ_{i=x-δ}^{x+δ} Σ_{j=y-δ}^{y+δ} color(i, j) · PSF(x-i, y-j)   (2)

In this formula, δ is half the number of pixels of a PSF and x, y are the screen coordinates of the computed pixel. After performing these calculations for every pixel on the screen, we get a blurred scene, which is the simulation of the original scene seen through the modeled lens. The top-level algorithm of the blurring process is:

For each pixel on screen {
    Get depth value from depth buffer.
    With (x, y, depth) choose the 8 surrounding PSFs (see Figure 4)
    Tri-linear interpolation of the 8 PSFs at (x, y, depth)
    Apply convolution (Formula 2) using the PSF values and the color values of the pixels close to (x, y)
}

5. Results

For testing our simulator we decided to use a very simple lens model. The PSFs used correspond to a 5 diopter lens and are calculated for an object at optical infinity (from 5 meters) and in 9 directions, 20º apart. They form a 3x3 matrix where the center corresponds to the optical axis and the other directions form 20º angles with the axis (fig 5).

Figure 5. Lens model used in the test.

For this simulation, since at least another PSF plane is needed for interpolating, we decided to set a perfect vision PSF plane (a PSF with a very high value at its central point) at a near distance. This distance can be modified by the application. The results are shown in the pictures (figs 6-10).
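The per-pixel convolution of formula (2) can be sketched as a direct double loop. This is an illustrative greyscale version written by us (the paper gives no code and works per color channel); out-of-screen neighbors are assumed to contribute nothing:

```python
import numpy as np

def blur_pixel(image, psf, x, y):
    """Color of pixel (x, y) after formula (2): sum color(i, j) * PSF(x-i, y-j)
    over the neighborhood of (x, y). `image` is a 2D greyscale array;
    `psf` is the pixel-sized PSF for this pixel, of side 2*delta + 1."""
    delta = psf.shape[0] // 2
    acc = 0.0
    for i in range(x - delta, x + delta + 1):
        for j in range(y - delta, y + delta + 1):
            if 0 <= i < image.shape[0] and 0 <= j < image.shape[1]:
                # Index the PSF by the offset (x - i, y - j), shifted so the
                # kernel center corresponds to offset zero.
                acc += image[i, j] * psf[x - i + delta, y - j + delta]
    return acc
```

Since the PSF is normalized, a uniform region keeps its color under this blur; only regions with depth or color variation change, which matches the blurred results shown in figures 6-8.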
Figure 6. Original and blurred engine model. Figure 7. Two cubes. Original and blurred image.
Figure 8. Interior of a ship. Original and blurred image. Figure 9. Depth values of the engine and cubes. Green values indicate depths between the 2 planes (one at 1 m and the other at 5 m). Red values indicate depths greater than the depth of the 2nd depth plane (5 m). Blue values are depths < 1 m.
Figure 10. Lens simulator interface.

6. Conclusions and future work

The developed simulator allows us to see the behavior of different lens models and to try different parameters in the simulation, but there are still some tasks to accomplish:
- Try other lens types, e.g. progressive lenses, with more depth planes.
- Implement the effects of magnification. This will improve the quality of the simulation.
- Adapt the simulator to generate stereoscopic images. This will allow us to simulate lenses in VR devices.
- Perform a usability test as described in section 2, which will determine how accurate the final images of our simulator are.

In a second stage, we will try to implement the convolution described in section 4.4 using graphics hardware. This could allow us to perform real-time simulations. Finally, another usability test will be performed to evaluate the final simulator and determine the advantages and disadvantages of the use of VR techniques.

7. Acknowledgements

This work has been developed in the framework of the INDO-UPC agreement for the study of the usability of VR tools in the improvement of visual defects. It has been partially funded by Industrias de Optica S.A., by CDTI FIT-030000-2004-108, by PROFIT 04-0308, and by the TIN research project TIN-2004-08065-C02-01. The authors would like to thank Javier Vegas, Enric Fontdecaba and Juan C. Dürsteler from INDO for their help and cooperation.

References

[1] Heidrich, W., Slusallek, P., and Seidel, H. An image-based model for realistic lens
systems in interactive computer graphics. Graphics Interface '97, pp. 68-75, 1997.
[2] J. Bastian, A. van den Hengel, K. Hawick, F. Vaughan. Modelling the perceptual effects of lens distortion via an immersive virtual reality system. Technical report DHPC 086, Department of Computer Science, University of Adelaide, 1999.
[3] Potmesil, M., and Chakravarty, I. A lens and aperture camera model for synthetic image generation. In SIGGRAPH '81 conf. proc., Computer Graphics, ACM Press, pp. 297-305, 1981.
[4] D. García, B. A. Barsky, S. A. Klein. The OPTICAL project at UC Berkeley: simulating visual acuity. Medicine Meets Virtual Reality: 6 (Art, Science, Technology: Healthcare (r)evolution), San Diego, January 28-31, 1998.
[5] D. García. CwhatUC: Software Tools for Predicting, Visualizing and Simulating Corneal Visual Acuity. PhD thesis, Computer Science Division, University of California, Berkeley, CA, May 2000.
[6] B. A. Barsky, D. García, S. A. Klein. Computer Simulation of Vision-Based Synthetic Images Using Hartmann-Shack-Derived Wavefront Aberrations. Association for Research in Vision and Ophthalmology, Fort Lauderdale, Florida, 29 April - 4 May 2001. Abstract in Investigative Ophthalmology & Visual Science, Vol. 42, No. 4, March 15, 2001, p. S162.
[7] B. A. Barsky, D. García, S. A. Klein, W. M. Yu, B. P. Chen, S. S. Dalal. RAYS (Render As You See): Vision-Realistic Rendering Using Hartmann-Shack Wavefront Aberrations. Internal report, OPTICAL project, UC Berkeley, March 2001.
[8] P. Artal, J. Santamaría, J. Bescós. Retrieval of wave aberration of human eyes from actual point-spread-function data. J. Opt. Soc. Am. A, 5:1201-1206, 1988.
[9] B. A. Barsky. Vision-Realistic Rendering: Simulation of the Scanned Foveal Image from Wavefront Data of Human Subjects. First Symposium on Applied Perception in Graphics and Visualization, co-located with ACM SIGGRAPH, Los Angeles, 7-8 August 2004, pp. 73-81.
[10] B. A. Barsky, B. P. Chen, A. C. Berg, M. Moutet, D. García, S. A. Klein.
Incorporating Camera Models, Ocular Models, and Actual Patient Eye Data for Photo-Realistic and Vision-Realistic Rendering. Abstract in the Fifth International Conference on Mathematical Methods for Curves and Surfaces, June 29 - July 4, 2000, Oslo, Norway.
[11] J. Loos, Ph. Slusallek, H.-P. Seidel. Using Wavefront Tracing for the Visualization and Optimization of Progressive Lenses. Proc. of Eurographics 1998, Computer Graphics Forum, Vol. 17, No. 3.