Wavefront Sensor For Eye Aberrations Measurements


University of Central Florida
STARS — Electronic Theses and Dissertations
Doctoral Dissertation (Open Access)

STARS Citation: Curatu, Costin, "Wavefront Sensor For Eye Aberrations Measurements" (2009). Electronic Theses and Dissertations.

This Doctoral Dissertation (Open Access) is brought to you for free and open access by STARS. It has been accepted for inclusion in Electronic Theses and Dissertations by an authorized administrator of STARS.

WAVEFRONT SENSOR FOR EYE ABERRATIONS MEASUREMENTS

by

COSTIN E. CURATU
B.A.Sc. University of Toronto, 2001
M.S. Laval University, 2003
M.S. University of Central Florida, 2005

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the College of Optics and Photonics at the University of Central Florida, Orlando, Florida

Summer Term 2009

Major Professor: James Harvey

© 2009 Costin Curatu

ABSTRACT

Ocular wavefront sensing is vital to improving our understanding of the human eye and to developing advanced vision correction methods, such as adaptive optics, customized contact lenses, and customized laser refractive surgery. It is also a necessary technique for high-resolution imaging of the retina. The most commonly used wavefront sensing method is based on the Shack-Hartmann wavefront sensor. Since Junzhong Liang's first application of Shack-Hartmann wavefront sensing to the human eye in 1994 [1], the method has quickly gained acceptance and popularity in the ophthalmic industry. Several commercial Shack-Hartmann eye aberrometers are currently available. While the existing aberrometers offer reasonable measurement accuracy and reproducibility, they have a limited dynamic range. Although rare, highly aberrated eyes do exist (corneal transplant, keratoconus, post-LASIK) that cannot be measured with the existing devices. Clinicians as well as optical engineers agree that there is room for improvement in the performance of these devices: "Although the optical aberrations of normal eyes have been studied by the Shack-Hartmann technique, little is known about the optical imperfections of abnormal eyes. Furthermore, it is not obvious that current Shack-Hartmann aberrometers are robust enough to successfully measure clinically abnormal eyes of poor optical quality" (Larry Thibos, School of Optometry, Indiana University [2]).

The ultimate goal for ophthalmic aberrometers, and the main objective of this work, is to increase the dynamic range of the wavefront sensor without sacrificing its sensitivity or accuracy. In this dissertation, we attempt to review and integrate knowledge and techniques from previous studies, as well as to propose our own analytical approach to optimizing the optical design of the sensor in order to achieve the desired dynamic range. We present the underlying theory that governs the relationship between the performance metrics of the sensor: dynamic range, sensitivity, spatial resolution, and accuracy. We study the design constraints and trade-offs, and present our system optimization method in detail. To validate the conceptual approach, a comprehensive simulation model was developed. The model was able to predict the performance of the sensor as a function of the system design parameters for a wide variety of ocular wavefronts, and it confirmed the results obtained with our analytical approach. The simulator itself can now be used as a standalone tool for other Shack-Hartmann sensor designs. Finally, we were able to validate our theoretical work by designing and building an experimental prototype. We present some of the more practical design aspects, such as illumination choices and tolerance analysis methods. The prototype validated the conceptual approach used in the design and was able to demonstrate a vast increase in dynamic range while maintaining accurate and repeatable measurements.

ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. James Harvey, for his help and support throughout my PhD journey. His guidance during the past three years was crucial to my success. I would also like to acknowledge the involvement of my PhD committee members, Dr. Shin-Tson Wu, Dr. Aristide Dogariu, and Dr. Ronald Phillips, whose valuable suggestions contributed to the success of this dissertation. I would like to acknowledge all my CREOL professors and colleagues, from whom I learned a great deal during my graduate years. A special thank you goes to my office buddies Ricky and Vesko, for maintaining my sanity as well as their own with nothing but a great sense of humor. I would also like to acknowledge all my talented Alcon co-workers for believing in the success of this project and for their hands-on involvement in building the experimental prototype. I would like to thank my parents for their sacrifices and continuous support throughout the years. I would also like to thank my brother for always leading the way and leveling the path for me. Thanks for the inspiration and the smooth ride, George!

Finally, I am most thankful to my wife, Mihaela, for helping and encouraging me through times of stress and doubt. She inspired and motivated me to finally succeed.

To my Dad, Dr. Eugene O. Curatu, Professor of Optics.

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
1 INTRODUCTION
  1.1 Motivation and Objectives of the Dissertation
  1.2 Dissertation Synopsis
  1.3 The Human Eye
    1.3.1 The Physiological Structure of the Eye
    1.3.2 Optical Eye Models
  1.4 Human Eye Aberrations and Vision Quality
    1.4.1 Low and High Order Aberrations
    1.4.2 Zernike Representation
    1.4.3 The Effect of Optical Aberrations on Vision Quality
2 EYE ABERRATIONS MEASUREMENTS
  2.1 Methods Review
    2.1.1 Spatially Resolved Refractometry
    2.1.2 Tscherning Aberrometry
    2.1.3 Laser Ray Tracing
    2.1.4 Shack-Hartmann
    2.1.5 Comparison between Methods
  2.2 Literature Review
3 CONCEPTUAL APPROACH
  3.1 Metrics and Trade-offs
    3.1.1 Dynamic Range Definition
    3.1.2 Sensitivity Definition
    3.1.3 Spatial Resolution Definition
    3.1.4 Trade-offs
  3.2 Analytical Method
    3.2.1 Minimum Spot Size Constraint
    3.2.2 Maximum Spot Size Constraint
    3.2.3 Sensitivity Constraint
    3.2.4 Spatial Resolution Constraint
    3.2.5 Dynamic Range Constraint
    3.2.6 Optimization Step
  3.3 Analytical Method Applied
4 SIMULATION MODEL
  4.1 Input Wavefronts
    4.1.1 Defocus Wavefronts
    4.1.2 Simulated Normal Eye Wavefronts
    4.1.3 Simulated Post-Lasik Wavefronts
  4.2 Raytracing Model
    4.2.1 1st and 2nd Order Aberrations
    4.2.2 Lenslet Processing
  4.3 Simulation Results
5 EXPERIMENTAL PROTOTYPE
  5.1 Illumination Path
    5.1.1 Wavelength Selection
    5.1.2 Corneal Reflection Mitigation
  5.2 Tolerances, Calibration and Alignment
    5.2.1 Tolerance Analysis Method Using a Raytracing Software
  5.3 Experimental Results
    5.3.1 Accuracy Measurements
    5.3.2 Repeatability Measurements
    5.3.3 Sensitivity Preservation
    5.3.4 Dynamic Range Improvement
    5.3.5 Human Eye Measurements
6 CONCLUSIONS
APPENDIX A: ZEMAX MACRO AND IDL CODE FOR SHACK-HARTMANN SIMULATION
APPENDIX B: KODAK KAI-4022 DETECTOR SPECIFICATIONS
APPENDIX C: HUMAN TRIALS - MEASUREMENT RESULTS
REFERENCES

LIST OF FIGURES

Figure 1: Structures of the eye.
Figure 2: Schematic eye.
Figure 3: Myopia: uncorrected (above), corrected (below) [8].
Figure 4: Hyperopia: uncorrected (above), corrected (below) [8].
Figure 5: Astigmatism: uncorrected [8].
Figure 6: Pseudo-3D representation of Zernike polynomials up to the 4th order.
Figure 7: Simulated retinal images of a standard letter chart. (a) Zero RMS wavefront error. (b) 1 micron RMS wavefront error. (c) 1 micron RMS wavefront error.
Figure 8: Spatially Resolved Refractometer functioning principle.
Figure 9: Tscherning aberrometry functioning principle.
Figure 10: Laser Ray Tracing functioning principle.
Figure 11: Scheiner's disk functioning principle.
Figure 12: Smirnov aberrometer functioning principle.
Figure 13: Hartmann aberrometer functioning principle.
Figure 14: Shack-Hartmann wavefront sensor schematic setup.
Figure 15: Shack-Hartmann aberrometer functioning principle.
Figure 16: Traditional ophthalmic Shack-Hartmann aberrometer setup.
Figure 17: Post-LASIK eye showing potential spot crossover at the edge of the pupil.
Figure 18: Spot distribution in a plane 12 mm behind a microlens array with astigmatic microlenses of 10- and 15-mm focal length along the two principal axes and 400-µm pitch. (a) Divergent spherical wave with 40-mm radius of curvature; (b) plane wave; (c) convergent spherical wave with 40-mm radius of curvature [49].
Figure 19: Experimental setup for recording and investigating a nonlinear holographic lenslet array. He-Ne laser, λ = 0.6328 µm; M1, M2, M3: 100% mirrors; BS: beamsplitter; L1-L5: lenses; NDF: neutral-density filter; SM: semitransparent mirror; HLA: nonlinear holographic lenslet array; CCD1, CCD2: CCD area sensors [56].
Figure 20: (a) Optical layout of the prototype large-dynamic-range wavefront sensor having a translatable plate blocking every other lenslet. (b) Schematic diagram of capturing the spot pattern at each position of the plate after translations, assuming the aberration is measured with a 5x5 lenslet array [57].
Figure 21: Schematic diagram of a modified SH WFS incorporating (1) a 4-F relay system, (2) a singlet lens used to alter the lenslet array's effective focal length, and (3) a spot pattern relay lens. Note that the inversions caused by the 4-F system and the spot pattern relay lens cancel [64].
Figure 22: Schematic illustration of the dynamic range limitation due to adjacent spot collision.
Figure 23: Schematic illustration of the sensor sensitivity.
Figure 24: Schematic illustration of the sensor spatial resolution; (a) low spatial resolution yields a poor wavefront reconstruction; (b) high spatial resolution yields a more accurate wavefront reconstruction.
Figure 25: Relationship and trade-offs between SHWS performance metrics as a function of lenslet focal length and sampling density; the directions of the arrows indicate performance metric increase.
Figure 26: Schematic layout of the SHWS to be used as the model in our analytical approach.
Figure 27: Minimum Spot Size constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.
Figure 28: Maximum Spot Size constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.
Figure 29: Sensitivity constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.
Figure 30: Spatial resolution constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.
Figure 31: Two-spot overlap limitation on the Dynamic Range.
Figure 32: Dynamic Range constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.
Figure 33: Design solution space. The shaded area corresponds to the solution sub-space that satisfies all the system constraints.
Figure 34: Original LADARWave™ sensor path layout.
Figure 35: Effect of demagnifying objective lens on sensor sensitivity.
Figure 36: LADARWave™ initial performance metric limits.
Figure 37: Optimization space for the SHWS redesign. The shaded area corresponds to the solution space that satisfies all the constraints.
Figure 38: Schematic rendering of the sensor simulation process.
Figure 39: Comparison of simulated Zernike coefficients with the real data. The inset represents the higher-order coefficients on a smaller scale, for better visualization.
Figure 40: Simulated post-LASIK wavefront.
Figure 41: ZEMAX layout of the new proposed sensor path design.
Figure 42: Shack-Hartmann patterns for the same input aberration, but different lenslet focal lengths (f).
Figure 43: RMS error for the reconstructed wavefronts. This example is for the human eye wavefronts and a lenslet focal length of 15 mm.
Figure 44: Wavefront reconstruction duration.
Figure 45: Simulated post-LASIK Shack-Hartmann pattern.
Figure 46: The average RMS reconstruction error for different lenslet focal lengths.
Figure 47: The schematic layout of the experimental prototype. Both the sensor path and the illumination (probe beam) path are shown. W: protective window; PBS: polarizing beam-splitting cube; L1, L2: afocal relay lenses, f = 60 mm; M1: hot mirror; FT: subject fixation target; LA: lenslet array, f = 8 mm; CCD: detector; M2: mirror; L3: collimator lens; SLD: superluminescent diode.
Figure 48: Experimental prototype. The brass-colored fixture houses the lenslet array; the CCD camera sits on top; to the right, the black barrel houses the afocal relay.
Figure 49: ZEMAX simulation of the focal shift dependence on wavelength in a model eye.
Figure 50: Linearly polarized light scheme. (a) The horizontally polarized corneal reflection is blocked by the vertical transmission polarizer of the PBS. (b) The portion of the retinal reflection that maintains its horizontal polarization is blocked by the PBS; the only signal let through is the depolarized portion of the retinal scatter.
Figure 51: Preventing the corneal reflection from entering the sensor path by decentering the probe beam relative to the corneal vertex. The dotted lines represent the retinal scatter, and subsequently the eye wavefront, not being affected by the decenter.
Figure 52: Decentered probe beam and circular polarization scheme. (a) The corneal reflection is diverted away from the sensor path. (b) The portion of retinal scatter that maintains its polarization state is let through along with the depolarized part.
Figure 53: Shack-Hartmann spot image of the same eye measured on the same device. Centered, linearly polarized light (left). Decentered, circularly polarized light (right).
Figure 54: Shack-Hartmann spot image of the same eye, exhibiting a bow-tie intensity pattern, measured on the same device. Centered, linearly polarized light (left). Decentered, circularly polarized light (right).
Figure 55: The 25 lenslet apertures chosen for the tolerance analysis calculation.
Figure 56: Schematic rendering of the misaligned system and the variable Zernike Phase Surface.
Figure 57: Schematic illustration of the PSA. Source-to-lens distance adjusted to hyperopic wavefront (left). Source-to-lens distance adjusted to myopic wavefront (right).
Figure 58: Prototype spot image for the PSA tool at -6 D.
Figure 59: Measured aberrations versus expected values, PSA tool at -6 D.
Figure 60: High-order (no defocus and astigmatism terms) measured aberrations versus expected values, PSA tool at -6 D.
Figure 61: Prototype spot image for the PSA tool at -12 D.
Figure 62: Measured aberrations versus expected values, PSA tool at -12 D.
Figure 63: High-order (no defocus and astigmatism terms) measured aberrations versus expected values, PSA tool at -6 D.
Figure 64: Measured sphere error using the PSA tool.
Figure 65: Measured sphere error using the PSA tool.
Figure 66: Defocus coefficient expressed as spherical equivalent. Standard deviation for three independent measurements.
Figure 67: Spherical aberration C4^0 coefficient. Standard deviation for three independent measurements.
Figure 68: High-order RMS wavefront value. Standard deviation for three independent measurements.
Figure 69: Sensitivity measurement. Comparison between LADARWave™ and our prototype.
Figure 70: Spot size. (a) LADARWave™. (b) Experimental prototype.
Figure 71: Spot image for a D wavefront, exhibiting microns of spherical aberration. (a) LADARWave™. (b) Experimental prototype.
Figure 72: Subject #1, OD, low-order aberrations.
Figure 73: Subject #1, OD, high-order aberrations.
Figure 74: Subject #1, OD, total and high-order RMS wavefront values; the error bars represent the standard deviation of the three prototype measurements.

LIST OF TABLES

Table 1: LeGrand Eye Model.
Table 2: Arizona Eye Model.
Table 3: Navarro Eye Model.
Table 4: Zernike polynomials up to 4th order.
Table 5: Quantities and symbols to be used in the analytical method.
Table 6: LADARWave™ relevant specifications.
Table 7: LADARWave™ relevant specifications.
Table 8: SHWS1.zpl input file.
Table 9: SHWS1.zpl output file.
Table 10: SHSpot.pro input file.
Table 11: PSA tool aberrations as a function of micrometer position.
Table 12: LADARWave™ accuracy requirement chart.
Table 13: Human subject list.

1 INTRODUCTION

The work presented in this dissertation covers the theory, design, simulation, and prototyping of an ocular wavefront sensor. This work was done in part in collaboration with Alcon Laboratories, Inc., Orlando, FL, maker of the commercially available LADARWave™ ophthalmic aberrometer. The author of this dissertation is currently employed by Alcon Research, Ltd., in Fort Worth, TX.

1.1 Motivation and Objectives of the Dissertation

The initial motivation for this work was to lay the foundation for the development of the next-generation LADARWave™ device. The current device, based on a Shack-Hartmann sensor, had been very successful in the field. However, in rare cases of eyes exhibiting extreme wavefronts, it suffered from limited dynamic range. The primary goal was to develop a new and improved device featuring an increased dynamic range, without losing measurement sensitivity or accuracy. We first researched several established methods for ocular wavefront sensing in the hope of finding a method inherently predisposed to high-dynamic-range measurements. We eventually came back to the Shack-Hartmann method, proven as one of the more robust and accurate techniques for eye aberration measurements. The goal became finding a simple, robust solution to increase the dynamic range of the Shack-Hartmann sensor. Existing literature on improving the Shack-Hartmann dynamic range, reviewed in greater detail in Section 2.2, offered relatively complex solutions, more suitable for optical laboratory benches than for a commercial clinical device. We formulated the hypothesis that, given the current advancements in custom lenslet array fabrication, the availability of large high-resolution CCD detectors, and computing power, a large-dynamic-range sensor was possible based on the simple and elegant fundamental optical principles of the Shack-Hartmann sensor. Since Shack-Hartmann wavefront sensing is quite a mature technology, used in many metrology applications, we were optimistic about finding helpful resources to lead us through the design process. Surprisingly, the existing literature offered few details on the optical design and optimization of Shack-Hartmann sensors. The great majority of the existing literature fell short of the level of detail we needed for our application. We attribute the shortage of clear instructions on ocular Shack-Hartmann sensor optimization to two reasons. Firstly, the limited availability of custom-made lenslet arrays may have forced many early designs to use off-the-shelf components, thus virtually eliminating the need for thorough optimization. Secondly, due to the fierce competition in the ophthalmic industry, details on the optimization methods and design choices for Shack-Hartmann aberrometers have often been considered proprietary information and have seldom been disclosed.
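The dynamic-range limitation at the heart of this work follows from simple geometry: a local wavefront slope θ over a lenslet of focal length f displaces that lenslet's focal spot by roughly f·θ, and the spot must not wander into a neighboring sub-aperture. The sketch below illustrates the trade-off; the 8 mm lenslet focal length matches the prototype described in later chapters, but the 400 µm pitch and 830 nm probe wavelength are illustrative assumptions, not the actual design values.

```python
import math

def sh_dynamic_range(pitch_m, focal_m, wavelength_m):
    """Maximum measurable local wavefront slope (radians) for one
    Shack-Hartmann lenslet, using the common criterion that the spot
    must stay within its own sub-aperture (no spot crossover)."""
    # Diffraction-limited spot radius (Airy) at the lenslet focus
    spot_radius = 1.22 * wavelength_m * focal_m / pitch_m
    # Slope that pushes the spot edge to the sub-aperture boundary
    theta_max = (pitch_m / 2 - spot_radius) / focal_m
    return theta_max

# Hypothetical lenslet: 400 um pitch, 8 mm focal length, 830 nm probe light
theta = sh_dynamic_range(400e-6, 8e-3, 830e-9)
print(f"max slope ~ {theta * 1e3:.2f} mrad")
```

Note the conflict the sketch makes explicit: a longer focal length improves sensitivity (larger spot displacement per unit slope) but shrinks θ_max, which is exactly the trade-off the analytical method of Chapter 3 formalizes.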

The objective of this dissertation is to document our conceptual approach and design steps, from fundamental theory, to simulation modeling, to prototype demonstration. We believe that each of these sections of our work makes a significant contribution to the scientific body of knowledge. We attempt to describe our conceptual approach unambiguously, to present the simulation model method and share the simulation code, and finally, to present the early results obtained with our high-dynamic-range eye aberrometer prototype.

1.2 Dissertation Synopsis

The research work covered in this dissertation is organized in six chapters. In the first chapter we introduce the reader to the basic notions of eye physiology as well as the optical properties of the eye. We then present the optical aberrations of the eye, classified into low-order aberrations and high-order aberrations. Low-order aberrations are linked to traditional eye disorders, such as myopia and astigmatism, while high-order aberrations give rise to more irregular vision loss, such as night-time halos. We show how the eye aberrations are expressed mathematically, and we attempt to show the relation between aberration metrics and vision quality.

In the second chapter we review the main methods for ocular aberration measurement. We compare the existing methods and discuss some of their limitations. We present the Shack-Hartmann method in greater detail, from historical developments, to functioning principles and limitations, followed by a review of the relevant literature pertaining to our work.

In Chapter 3 we present our theoretical approach to Shack-Hartmann sensor design. We define and discuss the performance metrics constraining the sensor design, as well as the trade-offs between these constraints. We show how a design solution space can be constructed given a set of constraints, and how an optimization of the design can be achieved based on particular requirements. We then apply the conceptual method to our own design problem and present the results obtained.

Chapter 4 contains the details of a numerical simulation we developed and employed in order to validate the results obtained in the preceding chapter. The simulation model is presented in great detail, and the results of the simulation are discussed.

In Chapter 5 we present the culmination of our theoretical and simulation work, in the form of an experimental prototype device. We discuss several technical issues related to the illumination path, as well as the calibration and alignment of the sensor. We follow by showing actual results obtained with our prototype from both model eyes and human eyes. We analyze and discuss the data obtained.

Chapter 6 concludes this dissertation, briefly summarizing the main contributions of the dissertation to the ocular aberrometry field, as well as some of the results and findings of this work.

1.3 The Human Eye

The human eye is much like a simple camera. It forms a real image of an object onto a photosensitive surface, the retina. Although it is a relatively simple optical system, it is a very complex device from a structural point of view. It is well protected against natural hazards, yet it is delicate and easily damaged. The eye has been optically modeled with various levels of sophistication; we show three such examples below.

1.3.1 The Physiological Structure of the Eye

The eye is formed from several different tissues, each with a different refractive index and transmittance. Some of them must be completely opaque, while others are as transparent as possible. Despite all this diversification, the eye can be thought of (as any other real optical system) as a simple positive thin lens casting a real image onto a surface (the retina). The total optical power of this system is about +60 diopters (D), of which about +48 D are provided by the cornea (refractive index n ≈ 1.376). This is the frontal part of the eye, so it is in contact with the air. It is the air-cornea interface that gives the strong optical power. The eye is self-contained in an almost spherical bag of opaque white tissue, called the sclera, except at the cornea. The cornea is oval in shape, with a 12.6 mm horizontal diameter and an 11.7 mm vertical diameter. It absorbs most UV radiation, with a peak at 270 nm, but transmits radiation with wavelengths from approximately 310 to 2500 nm.
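The roughly +48 D corneal contribution follows directly from the single-surface refraction equation P = (n' - n)/R. A quick check using textbook values (n ≈ 1.376 for the cornea and an anterior radius of about 7.8 mm, both stated here as assumptions rather than values taken from this chapter):

```python
def surface_power(n1, n2, radius_m):
    """Paraxial refracting power (diopters) of a single spherical
    interface: P = (n2 - n1) / R, with R in meters."""
    return (n2 - n1) / radius_m

# Air-cornea interface with textbook values: n_cornea ~ 1.376, R ~ 7.8 mm
p_cornea = surface_power(1.000, 1.376, 7.8e-3)
print(f"{p_cornea:.1f} D")  # roughly +48 D, consistent with the text
```

The large index step from air (1.000) to cornea (≈1.376) is why this one interface dominates the eye's total power; the crystalline lens, immersed in media of similar index, contributes far less per surface.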

Immediately behind the cornea is a chamber known as the anterior chamber, filled with a watery fluid: the aqueous humor. This fluid is not static, but flows from the posterior chamber through the pupil aperture to the anterior chamber. This constant flow replenishes nutrients and carries away metabolic wastes from the posterior cornea, crystalline lens, and anterior vitreous. It also helps to keep the intraocular pressure stable. Its refractive index is n ≈ 1.336. The iris divides the front part of the eye into two chambers. It is the aperture stop of the eye. In a normal young eye, the iris can expand or contract the pupil diameter from 2 to 7 or 8 mm. This reduces or increases the amount of light passing through the system. Contraction of the pupil also helps to focus when doing close work. The iris is easily identified: it gives the eye its distinctive color: blue, green, brown, etc.

Figure 1: Structures of the eye.

Next to the iris is the crystalline lens of the eye. It is remarkably transmissive despite its complex layered structure. It completely lacks blood supply and innervation. None of its cells are shed, so it grows in size and mass throughout life. Its refractive index varies from approximately 1.41 at the inner core to about 1.39 at the outer cortex. The lens is capable of changing its shape in order to provide a fine focusing mechanism. Combining the cornea and the lens into a single optical system, it is possible to treat the system as a thin lens in a medium. Its nodal point then lies 17 mm from the retina. The posterior principal point of this system lies on the cornea, 5.6 mm from the nodal point. A schematic eye representing this system is shown in Figure 2.

Figure 2: Schematic eye.

The back part of the lens is embedded in the vitreous body. With a volume of almost 4 ml, it represents about 80% of the total volume of the eye. It is a transparent collagen-based substance with a refractive index of about 1.336. The vitreous plays an important role in several metabolic eye processes. It is involved in the oxygenation process within the eye. It also serves as a depository for metabolic wastes, such as lactic acid, and as a medium for active transport of different substances throughout the eye. Despite its importance, it is not understood how all its processes are accomplished, nor even what its internal structure is.

Immediately behind the vitreous is the retina, an externalized portion of the brain. The optic nerve, connecting the retina with the brain, is a tract of the central nervous system. The retina is a thin layer of cells covering most of the choroid, a layer which, in combination with the retinal pigment epithelium, absorbs light not captured by the photoreceptors: rods and cones. These cells are connected to the optic nerve via the bipolar and ganglion cells. Rods, of about 2 μm diameter, are very sensitive to low levels of light but unable to distinguish color. Most of them are on the periphery of the retina. They are used in night vision. Cones, of about 6 μm diameter each, are able to distinguish color but are only sensitive to higher levels of light, and are consequently only used in day vision. Close to the centre of the retina there is a small depression of about 2.5 to 3 mm in diameter, called the macula. In its centre there is a region of about 200 μm in diameter with highly packed cones and no rods at all, called the fovea, or fovea centralis. The nerve layers exiting the eye through the optic nerve form a radial grid around it. The arrangement of the grid has an impact on the polarization changes of the light scattered off the retina, an aspect which will be discussed in subsequent chapters. Cones in the fovea are about 2 μm in diameter. From this region the brain obtains the sharpest and most detailed information of an image. When light saturates them, their sensitivity is reduced. This is why the eye keeps moving when observing an object. The most important features of the object must be kept focused on the fovea constantly and registered by the brain. One other thing worth mentioning is the blind spot. This is a small area located on the retina that, contrary to the fovea, has no photoreceptors at all. It is from this spot that the optic nerve exits the eye.

The eye is a dynamic mechanical system. It is in permanent motion, even when the gaze is fixated on a target. To avoid fading of the image due to photoreceptor saturation, the eye keeps making fast movements. It is possible to split these movements into two categories: microsaccades and slow drift. Microsaccades occur 2 to 3 times per second, with amplitudes of 1 to 2 arc min (i.e., 5 to 10 μm on the retina). Slow drift occurs with a velocity of about 1 arc min/sec and an amplitude of about 2 to 5 arc min (10 to 25 μm). In addition to these two movements, there is a high-frequency tremor superimposed on them. It has a frequency of about 90 Hz and an amplitude of up to 40 sec of arc (3 μm), but normally its frequency oscillates between 30 and 80 Hz and its amplitude between 10 and 30 sec of arc (1 to 2.5 μm). These movements have to be taken into account when trying to measure or correct ocular wavefronts in real time. In this work no such attempt is made; however, this could be an important issue to be addressed in the future.

1.3.2 Optical Eye Models

A variety of eye models exist that are used to examine the optical properties of the eye, as well as to design ophthalmic accouterments such as spectacle and contact lenses. Different levels of sophistication exist in eye models, ranging from paraxial spherical models to

wide-angle aspheric models. In general, spherical-surface models can only match the first-order properties of the eye. They do a poor job of matching the higher-order aberration content or off-axis properties of real eyes. Consequently, they should only be used to examine cardinal points, pupils, magnification, and other first-order effects, such as the locations of the Purkinje images (reflections of a point source off the cornea and crystalline lens of the eye). Aspheric eye models are much better suited for illustrating clinical levels of aberration, both on- and off-axis. A common paraxial model is the LeGrand full theoretical eye [3], sometimes called the Gullstrand-LeGrand schematic eye. This model approximates the gradient index of the crystalline lens with a uniform effective index of 1.42. The model is composed of four surfaces, presented in Table 1.

Table 1: LeGrand Eye Model.

Name               Radius of Curvature   Index    Thickness
Cornea             7.8 mm                1.3771   0.55 mm
Aqueous            6.5 mm                1.3374   3.05 mm
Crystalline lens   10.2 mm               1.4200   4.00 mm
Vitreous           -6.0 mm               1.3360   16.60 mm

The anterior cornea of this model has a power of +48.35 D. The posterior cornea has a power of -6.11 D. The total corneal power is +42.36 D. The anterior and posterior lens surfaces have powers of +8.10 D and +14.00 D, respectively. The total crystalline lens power is +21.78 D. The total power of the eye model is +59.94 D.
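The surface powers and the roughly 60 D total for the LeGrand model can be checked by cascading paraxial refraction and transfer matrices through its four surfaces. The sketch below uses the standard published LeGrand values (stated here as assumptions, since they come from the literature rather than this chapter's table):

```python
import numpy as np

def refraction(n1, n2, R):
    """Paraxial ray-transfer matrix for refraction at a spherical surface."""
    return np.array([[1.0, 0.0], [-(n2 - n1) / R, 1.0]])

def translation(t, n):
    """Paraxial ray-transfer matrix for propagation t through index n."""
    return np.array([[1.0, t / n], [0.0, 1.0]])

# LeGrand full theoretical eye (standard textbook values, in meters)
surfaces = [  # (R, n_after_surface, t_to_next_surface)
    (7.8e-3,  1.3771, 0.55e-3),   # anterior cornea
    (6.5e-3,  1.3374, 3.05e-3),   # posterior cornea -> aqueous
    (10.2e-3, 1.4200, 4.00e-3),   # anterior lens
    (-6.0e-3, 1.3360, 0.0),       # posterior lens -> vitreous
]

M = np.eye(2)
n = 1.0  # start in air
for R, n_next, t in surfaces:
    M = refraction(n, n_next, R) @ M
    if t:
        M = translation(t, n_next) @ M
    n = n_next

power = -M[1, 0]  # equivalent power of the whole system, in diopters
print(f"total power ~ {power:.1f} D")
```

The matrix approach handles the principal-plane shifts automatically, which is why the result is not the simple sum of the individual surface powers.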

A more sophisticated eye model is the Arizona Eye Model [4]. This eye model is designed to match clinical levels of aberration at both on- and off-axis fields. The eye model can also accommodate, by varying parameters with the accommodation level A, in diopters. Table 2 defines the Arizona Eye Model.

Table 2: Arizona Eye Model (A is the accommodation level in diopters).

Name               Radius of Curvature    Conic                 Index                          Abbe   Thickness
Cornea             7.8 mm                 -0.25                 1.377                          57.1   0.55 mm
Aqueous            6.5 mm                 -0.25                 1.337                          61.3   (2.97 - 0.04A) mm
Crystalline lens   (12.0 - 0.4A) mm       -7.5188 + 1.2857A     1.42 + 0.00256A - 0.00022A^2   51.9   (3.767 + 0.04A) mm
Vitreous           (-5.2246 + 0.2A) mm    -1.3540 - 0.4317A     1.336                          61.1   16.713 mm
Retina             -13.4 mm

The parameters and dimensions of the eye model have been chosen to be consistent with average human data. The crystalline lens has a uniform index, and consequently does not model the true gradient-index structure of the human lens. However, the crystalline lens index, dispersion, and conic constants can be used to make the eye model match clinical levels of aberration. The Arizona Eye has been designed to match a fit to the longitudinal chromatic aberration of the eye given by Atchison and Smith [5]. Furthermore, the eye model approaches the average longitudinal spherical aberration found by Porter et al. for a 5.7 mm pupil [6]. The total power of the unaccommodated eye model is approximately 60.6 diopters. The third eye model we would like to mention is the Navarro Eye Model [7]. It is based on the Gullstrand-LeGrand model eye, with aspheric surfaces added in order to match the

clinically observed performance of the eye. In our optical modeling we use the Navarro Eye Model defined in Table 3.

Table 3: Navarro Eye Model.

Name               Radius of Curvature   Conic     Index    Thickness
Cornea             7.72 mm               -0.26     1.376    0.55 mm
Aqueous            6.5 mm                0         1.3374   3.05 mm
Crystalline lens   10.2 mm               -3.1316   1.42     4.0 mm
Vitreous           -6.0 mm               -1.0      1.336    16.40 mm

The resulting total power of this eye model is 60.4 diopters.

1.4 Human Eye Aberrations and Vision Quality

The human eye, just like any other optical system, has optical aberrations. In a perfect eye, light coming from optical infinity would focus to a perfect point on the retina. Optical aberrations, which can be described as departures from a perfectly spherical focusing wavefront, create a blurred focal spot on the retina, thus decreasing vision quality. Optical eye aberrations can be classified into two main categories: low-order aberrations, defocus and astigmatism, also referred to by optometrists as sphere and cylinder; and high-order aberrations, e.g., spherical aberration, trefoil, and coma. All of these aberrations, and the interactions between them, describe the wavefront quality of the eye, which can in turn be associated with vision quality.
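The low-order Zernike terms map directly onto the optometrist's sphere and cylinder. As one common example (the standard ANSI-normalization conversion, not a formula taken from this dissertation), the Zernike defocus coefficient c(2,0) converts to a spherical-equivalent refractive error via M = -4*sqrt(3)*c20 / r^2, where r is the pupil radius:

```python
import math

def defocus_to_diopters(c20_um, pupil_radius_mm):
    """Spherical-equivalent refractive error (diopters) from the Zernike
    defocus coefficient c(2,0), assuming ANSI normalization. With c20 in
    microns and the pupil radius in mm, the units yield diopters directly:
        M = -4 * sqrt(3) * c20 / r^2
    """
    return -4.0 * math.sqrt(3.0) * c20_um / pupil_radius_mm**2

# Example: 1 um of defocus measured over a 6 mm pupil (3 mm radius)
m = defocus_to_diopters(1.0, 3.0)
print(f"{m:.2f} D")
```

Note the pupil-size dependence: the same wavefront coefficient corresponds to a much larger dioptric error over a small pupil, which is one reason aberrometry results are always reported together with the analysis pupil diameter.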

1.4.1 Low- and High-Order Aberrations

Classically, only two kinds of optical aberrations have been corrected by the optometrist: sphere and cylinder. They constitute the traditional refractive errors of the eye: myopia, hyperopia, and astigmatism. Myopia, also called nearsightedness, means that the cornea and the lens refract the light more than necessary. Because of this excess refraction the focal point misses the retina and lies somewhere in front of it, Figure 3.

Figure 3: Myopia: uncorrected (above), corrected (below) [8].

The image on the retina is therefore blurred and it is difficult to resolve distant objects. Myopia arises when the eye is too elongated relative to the power of the eye's optics. Negative lenses can be used to diverge the light before it enters the eye and thus correct the myopia. Hyperopia, also known as farsightedness, means that the cornea and the lens

do not refract the light enough. The focal point therefore misses the retina and ends up behind it, Figure 4.

Figure 4: Hyperopia: uncorrected (above), corrected (below) [8].

The image on the retina will be blurred and it is difficult to resolve close objects. Hyperopia arises because the eye is too short relative to the power of the eye's optics. Positive lenses, which converge the light before it enters the eye, can be used to correct hyperopia. Astigmatism often occurs together with myopia or hyperopia. It arises from irregularities in the structure of the optics of the eye. In general the surfaces of the eye are uniformly curved in all meridians, so light entering the cornea is focused symmetrically. An astigmatic eye has one surface, most often the anterior surface of the cornea, steeper in one meridian than in the other. This results in different focal lengths in different planes and gives rise to two focal planes, Figure 5.

Figure 5: Astigmatism: uncorrected [8].

Astigmatism causes blurred images on the retina at all object distances. It can often be corrected with toric lenses, which have additional power (called cylinder) along one of the astigmatism meridians. Optometrists routinely perform eye exams involving a complete subjective refraction. During the exam, defocus as well as astigmatism and its axis are analyzed. Even though these two components, sphere and cylinder, are responsible for the majority of the eye's aberrations, other refractive aberrations of the eye are also responsible for poor quality of vision. Thus, even with full refractive correction, a large portion of patients still complain of poor night vision. This is often referred to as night myopia. Night myopia is

partly caused by chromatic aberration due to a shift in retinal wavelength sensitivity, but also partly by other higher-order monochromatic aberrations, such as spherical aberration, which become significant at larger pupils. For many ordinary patients, these high-order aberrations do not have a significant impact. Nevertheless, for post-LASIK surgery patients or patients with abnormal corneal shapes, the higher-order aberrations can provoke visual disorders that can be elucidated only by analyzing the entire spectrum of eye aberrations. Chromatic focus shift (also known as longitudinal chromatic aberration) will not be discussed in this chapter. We will focus our discussion on the monochromatic aberrations of the eye. Studies have shown that high-order ocular aberrations do not change drastically with wavelength; however, all wavelengths in the visible spectrum are affected by the optical aberrations. In summary, most normal eyes, even if perfectly corrected during subjective refraction, will not form a diffraction-limited point on the retina. Higher-order optical aberrations such as spherical aberration and coma are responsible for further blurring of the image spot. From a clinical standpoint these aberrations can provoke vision issues such as glare, halos, and decreased contrast sensitivity. Ocular aberrations need to be measured and quantified in a meaningful way in order for clinicians to attempt to restore high-quality vision with tools such as wavefront-guided laser ablation of the cornea.

1.4.2 Zernike Representation

To efficiently analyze the measured wavefront in a meaningful quantitative way, one must express it in terms understandable to clinicians and patients. Traditional subjective refraction expresses the wavefront in only two essential terms: sphere and cylinder. To represent the higher-order aberrations of the wavefront, a more complex description is necessary. Scientists and clinicians have traditionally favored Zernike polynomials to fit the ocular wavefront data, in order to split the wavefront surface into meaningful orthogonal terms representing individual higher-order aberrations. The Zernike polynomials are an orthogonal set of functions that have found application in visual optics in representing wavefront error as well as corneal shape. There exist several distinct nomenclatures and normalization conventions for these polynomials, so care must be taken when comparing data and results from different sources. In this work we will use the standard notation for the ophthalmic optics representation of Zernike polynomials [9], as presented in Table 4. The Zernike polynomials are typically defined in polar coordinates (ρ,θ), where ρ is the normalized radial pupil coordinate and θ is the angular component. Each of the polynomials has a normalization factor, a radially dependent part, and an angularly dependent part. To unambiguously identify the functions, a double-indexing scheme is preferred: the index n represents the highest power of the radial component and the index m represents the angular frequency of the angular component. In general, the Zernike polynomials can be defined as:

Z_n^m(ρ,θ) = N_n^m R_n^|m|(ρ) cos(mθ),   for m ≥ 0
Z_n^m(ρ,θ) = N_n^m R_n^|m|(ρ) sin(|m|θ), for m < 0        (1-1)

Table 4: Zernike polynomials up to 4th order (graphical representations omitted).

Zernike term                          Optical aberration
Z_1^-1 = √4 ρ sin(θ)                  Vertical tilt
Z_1^1  = √4 ρ cos(θ)                  Horizontal tilt
Z_2^-2 = √6 ρ² sin(2θ)                Astigmatism
Z_2^0  = √3 (2ρ² − 1)                 Defocus
Z_2^2  = √6 ρ² cos(2θ)                Astigmatism
Z_3^-3 = √8 ρ³ sin(3θ)                Trefoil
Z_3^-1 = √8 (3ρ³ − 2ρ) sin(θ)         Coma
Z_3^1  = √8 (3ρ³ − 2ρ) cos(θ)         Coma
Z_3^3  = √8 ρ³ cos(3θ)                Trefoil
Z_4^-4 = √10 ρ⁴ sin(4θ)               Tetrafoil
Z_4^-2 = √10 (4ρ⁴ − 3ρ²) sin(2θ)      Secondary astigmatism
Z_4^0  = √5 (6ρ⁴ − 6ρ² + 1)           Spherical aberration
Z_4^2  = √10 (4ρ⁴ − 3ρ²) cos(2θ)      Secondary astigmatism
Z_4^4  = √10 ρ⁴ cos(4θ)               Tetrafoil

where N_n^m is the normalization factor:

N_n^m = √[ 2(n + 1) / (1 + δ_m0) ]        (1-2)

with δ_m0 the Kronecker delta (i.e., δ_m0 = 1 for m = 0 and δ_m0 = 0 for m ≠ 0), while the radial polynomial R_n^|m|(ρ) is given by:

R_n^|m|(ρ) = Σ_{s=0}^{(n−|m|)/2} { (−1)^s (n − s)! / ( s! [0.5(n + |m|) − s]! [0.5(n − |m|) − s]! ) } ρ^(n−2s)        (1-3)

When a surface such as a wavefront is decomposed into Zernike polynomials, it can be written as follows:

W(r,θ) = Σ_{n,m} C_n^m Z_n^m(r/r_max, θ)        (1-4)

where the C_n^m are the polynomial expansion coefficients and r_max is the maximum radial extent of the surface. The coordinates in the Zernike polynomial terms are normalized as ρ = r/r_max. To report ocular wavefront aberrations, the expansion coefficients C_n^m are usually given in units of microns or millimeters, together with the pupil radius over which the normalization was carried out. Normalization of the individual modes enables the

observer to rapidly quantify the effect of each individual aberration on the entire wavefront error. It is worth mentioning that Zernike coefficients are very sensitive to the pupil size over which the wavefront is analyzed. Therefore, utmost care needs to be taken when comparing and reporting Zernike coefficients from measurements taken over different pupil sizes. Mathematical methods of scaling the Zernike coefficients between different pupil sizes have already been studied [10,11]. Let us look at the expansion terms again, this time in a pseudo-3D rendering, Figure 6. The first-order terms are not as relevant to an ocular wavefront, for they correspond to wavefront tilt and can be corrected by a prism. Defocus and astigmatism, corresponding to sphere and cylinder, are represented by the second-order terms.

Figure 6: Pseudo-3D representation of Zernike polynomials up to the 4th order.
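The double-index Zernike terms of Eqs. (1-1)-(1-3), and the pupil-size sensitivity just mentioned, can be illustrated numerically. The sketch below is our own (function names and the refitting approach are not from [10,11]): it evaluates Z_n^m directly from the defining formulas, then rescales a pure spherical-aberration wavefront to a smaller pupil by resampling and refitting.

```python
# Sketch: evaluate Zernike terms per Eqs. (1-1)-(1-3) and rescale a
# coefficient set to a smaller pupil by least-squares refitting.
import math
import numpy as np

def zernike(n, m, rho, theta):
    """Z_n^m: sqrt-normalized, cosine terms for m >= 0, sine terms for m < 0."""
    am = abs(m)
    norm = math.sqrt(2.0 * (n + 1) / (2.0 if m == 0 else 1.0))
    radial = np.zeros_like(rho, dtype=float)
    for s in range((n - am) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + am) // 2 - s)
                * math.factorial((n - am) // 2 - s)))
        radial = radial + c * rho ** (n - 2 * s)
    return norm * radial * (np.cos(m * theta) if m >= 0 else np.sin(am * theta))

def fit_zernike(modes, rho, theta, w):
    """Least-squares Zernike coefficients of wavefront samples w, per Eq. (1-4)."""
    A = np.column_stack([zernike(n, m, rho, theta) for n, m in modes])
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    return coeffs

# Rescaling demo: 1 um of spherical aberration Z_4^0, re-analyzed over a
# pupil reduced to 75% of its original radius (k = 0.75)
rr, tt = np.meshgrid(np.linspace(0.05, 1.0, 40), np.linspace(0.0, 2 * np.pi, 40))
rho, theta = rr.ravel(), tt.ravel()
k = 0.75
modes = [(0, 0), (2, 0), (4, 0)]
w_small = 1.0 * zernike(4, 0, k * rho, theta)  # same surface, smaller pupil
c = fit_zernike(modes, rho, theta, w_small)
print(round(c[2], 6))   # C_4^0 scales as k**4 = 0.316406
```

Note the induced defocus term c[1] is negative and non-trivial: spherical aberration over the full pupil reappears partly as defocus over the smaller pupil, which is exactly why coefficients measured over different pupil sizes cannot be compared directly.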

Using the three lower-order terms, any sphere/cylinder refraction error can be expressed. Thus traditional spectacles or contact lenses can correct any wavefront error exhibiting only defocus and astigmatism. When reading the lower-order terms from an objective wavefront map, care must be taken, since higher-order terms can interact destructively or constructively with the lower-order terms, affecting vision quality. As a result, the traditional subjective refraction error cannot be fully derived from the defocus and astigmatism terms alone. However, a spherical equivalent in diopters can be derived from the defocus coefficient C_2^0 in microns and r_max in millimeters using Eq. (1-5):

Sph = 4√3 C_2^0 / (r_max)²        (1-5)

Above the second order, all modes are referred to as high-order aberrations. The higher-order aberrations can contribute to poor vision quality, especially in low-light conditions, when the pupil is enlarged. The higher-order coefficients cannot be measured by simple refraction or auto-refraction. The sole method of measuring the high-order aberrations is by utilizing an ocular wavefront sensor.

1.4.3 The Effect of Optical Aberrations on Vision Quality

In order to summarize the wavefront error, clinicians and scientists have tried to develop metrics that quantify the wavefront error with a single numeric value. Presently, the most popular metric used is the root-mean-square (RMS) error of the wavefront. Commercial aberrometers always display an RMS error both for the high-order aberrations and for

the low-order aberrations. The RMS value represents a weighted average of the discrete Zernike modes. Otherwise put, the RMS illustrates the overall departure of the wavefront from a perfect shape and offers a quantitative feel for the aberration level of a patient. The RMS error is simply:

RMS = √[ (C_2^-2)² + (C_2^0)² + (C_2^2)² + (C_3^-3)² + (C_3^-1)² + ... ]        (1-6)

There are several limitations in using the RMS value as a wavefront quality metric. The most obvious one is that a high RMS value does not necessarily cause a decrease in visual image quality. For instance, there are cases where spherical aberration is likely to cancel out some of the blurring effects created by defocus. Due to the intrinsic method by which the RMS is computed, the potential constructive or destructive interference between individual aberrations is overlooked. Figure 7 illustrates how the same value of RMS wavefront error can yield very different visual quality, cases (b) and (c).

Figure 7: Simulated retinal images of a standard letter chart. (a) Zero RMS wavefront error. (b) 1 micron RMS wavefront error. (c) 1 micron RMS wavefront error.
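Because the normalized Zernike modes are orthonormal over the unit pupil, the RMS of Eq. (1-6) is just the root-sum-square of the coefficients. A minimal sketch (the helper names and the example coefficient set are ours; the low/high-order split at n = 2 follows the text):

```python
import math

def rms(coeffs):
    """Eq. (1-6): root-sum-square of Zernike coefficients {(n, m): C_n^m}."""
    return math.sqrt(sum(c * c for c in coeffs.values()))

# Hypothetical coefficient set in microns, for illustration only
eye = {(2, 0): 0.8, (2, -2): 0.3, (3, -1): 0.15, (4, 0): 0.1}
low = rms({k: v for k, v in eye.items() if k[0] <= 2})    # sphere/cylinder part
high = rms({k: v for k, v in eye.items() if k[0] >= 3})   # high-order part
print(round(low, 4), round(high, 4))   # → 0.8544 0.1803
```

Reporting the two values separately, as commercial aberrometers do, avoids letting the (usually much larger) low-order terms mask the high-order content.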

Several studies have measured the normal clinical values of ocular aberration. Chalita et al. [12] measured the ocular aberrations of 30 patients and found average magnitudes of 0.35 ± 0.29 um of coma, 0.36 ± 0.31 um of spherical aberration, and 0.31 ± 0.14 um of other higher-order terms for a 7 mm pupil diameter. Studies have also shown that 91% of the RMS value can be attributed to the second-order terms, while 99% can be accounted for by the first four orders, for a 5 mm pupil diameter [13]. Research is being done to find better predictors of visual function that can be associated with the measured wavefront [14-16]. The RMS value, especially when separated into low-order RMS and high-order RMS, remains a useful metric to establish the degree of success of ophthalmic procedures such as wavefront-guided LASIK surgery.
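Before moving on, the defocus-to-diopters conversion of Eq. (1-5) is easy to sanity-check numerically. A hedged sketch (the sign convention follows the equation as printed; the function name is ours):

```python
import math

def spherical_equivalent(c20_um, r_max_mm):
    """Eq. (1-5): sphere in diopters from the defocus coefficient C_2^0."""
    return 4.0 * math.sqrt(3.0) * c20_um / r_max_mm ** 2

# e.g. 1 um of defocus over a 3 mm pupil radius (6 mm pupil diameter)
print(round(spherical_equivalent(1.0, 3.0), 3))   # → 0.77
```

The units work out because um/mm² reduces to m⁻¹, i.e., diopters, so 1 um of C_2^0 over a 3 mm pupil radius corresponds to roughly three-quarters of a diopter.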

2 EYE ABERRATIONS MEASUREMENTS

Accurate measurement of the eye's wavefront aberration is becoming an important goal in clinical ophthalmology, with applications in corneal refractive surgery (LASIK), diagnosis of anomalies of the tear film, and corneal disease (keratoconus) [2]. Over the years, several devices have been designed to measure these aberrations in clinical settings, using different principles and aiming to make wavefront measurement part of everyday clinical practice. This chapter first reviews some of the currently employed methods for ocular wavefront measurement as well as the current commercially available systems, together with some of their respective limitations, advantages, and disadvantages. The Shack-Hartmann method is presented in greater detail: we present the historical development of the method, its functioning principles, and its limitations. In the last part of the chapter we discuss relevant existing literature on Shack-Hartmann wavefront sensor design and dynamic range improvement.

2.1 Methods Review

Some of the methods used for the determination of ocular wavefront aberrations analyze an image on the retina (ingoing), whereas others place a source of light on the retina and

analyze the wavefront as it leaves the eye (outgoing). Almost all methods used for the clinical determination of the wavefront aberrations of the eye are based on ray tracing and the reconstruction of the original wavefront from local slopes [17]. The aberrations are derived at various points in the pupil and, typically using least squares, are converted into wave aberration functions. Interferometric methods have the inherent difficulty associated with stability, and for this reason they are not used for the clinical measurement of ocular aberration. Ocular aberrometry techniques can be subjective, requiring the individual to report his or her observations, such as spatially resolved refractometry [18,19], or objective, such as Tscherning aberrometry [20], Laser Ray Tracing [21], and Shack-Hartmann Wavefront Sensing [22].

2.1.1 Spatially Resolved Refractometry

The spatially resolved refractometer, based on Scheiner's principle as described by Smirnov [23], allows a psychophysical measurement of ocular wavefront aberrations. The technique is unique in that it is based on the subjective response of the patient. The principle is simple: a narrow beam is directed into the eye at a given pupil entry location. In the presence of aberrations this beam does not intercept the fovea. The subject typically views a crosshair pattern and is asked to adjust the tilt of the incident beam until the spot coincides with the center of the crosshair target. The same principle can have another embodiment, with two beams entering the eye: one through a centered reference aperture and another through a peripheral sub-aperture. Depending on the aberration at the measured point, the retinal spots will be separated. Changing the angle of the peripheral illumination source brings the retinal spots together, Figure 8.
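The least-squares slope-to-wavefront conversion mentioned above can be sketched in a few lines: given measured slope pairs (∂W/∂x, ∂W/∂y) at a set of pupil points, one solves a linear system whose columns are the analytic derivatives of the basis functions. The sketch below uses a tiny monomial basis for brevity (a real aberrometer would use Zernike derivatives); all names and the synthetic data are ours.

```python
import numpy as np

# Toy basis: W(x, y) = a*x^2 + b*y^2 + c*x*y  (stand-in for a Zernike basis)
# Analytic partial derivatives of each basis term:
dZdx = [lambda x, y: 2 * x, lambda x, y: 0 * x, lambda x, y: y]
dZdy = [lambda x, y: 0 * y, lambda x, y: 2 * y, lambda x, y: x]

def reconstruct(x, y, sx, sy):
    """Least-squares modal fit of measured slopes (sx, sy) at points (x, y)."""
    A = np.vstack([np.column_stack([f(x, y) for f in dZdx]),
                   np.column_stack([f(x, y) for f in dZdy])])
    b = np.concatenate([sx, sy])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

# Synthetic test: true wavefront W = 0.5*x^2 + 0.2*y^2 - 0.1*x*y
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 40), rng.uniform(-1, 1, 40)
sx = 1.0 * x - 0.1 * y          # dW/dx of the true wavefront
sy = 0.4 * y - 0.1 * x          # dW/dy of the true wavefront
print(np.round(reconstruct(x, y, sx, sy), 3))   # coefficients ≈ [0.5, 0.2, -0.1]
```

The same structure, with many more sample points and Zernike-derivative columns, is what all the slope-based aberrometers in this review ultimately solve.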

Figure 8: Spatially Resolved Refractometer functioning principle.

By measuring the angle needed to obtain a single retinal spot, or to overlap the crosshair target with the retinal spot, the aberration components at that pupil location can be calculated. By sampling a number of points, the wavefront can be reconstructed. Typically, a rectangular array of 37 pupil entry locations is sampled over a 6 mm pupil. The results obtained with this subjective technique have been reported to be similar to those obtained with other, objective methods [24]. Currently, the InterWave scanner (InView, Atlanta, GA) is the only clinical wavefront sensor using this principle. It requires the patient to perform the alignment of the two points in the presence of local aberrations using a joystick. It has been successfully used in combination with surgical lasers to correct ocular aberrations [25,26]. This principle also set the basis for the ray-tracing objective aberrometers described later in this review.

2.1.2 Tscherning Aberrometry

Tscherning aberrometry dates back to the late 1800s, when Tscherning placed a grid of equi-spaced lines over a positive-power lens [27]. Subjects viewing a distant point source through the lens would perceive a distorted shadow of the grid on their retinas. By drawing the distorted grid, a subjective analysis of individual wavefront aberrations could be performed. In the modern-day system, a collimated beam of laser light is passed through a mask with a regular array of holes, Figure 9.

Figure 9: Tscherning Aberrometry functioning principle.

The effect of the mask is to create a series of discrete collimated pencils of light. Normally, collimated light entering the emmetropic eye would all focus to a point on the retina. However, a positive-power lens is added in the Tscherning aberrometer to effectively make the eye myopic. This added power causes the collimated beams to go

through focus and then spread out again prior to striking the retina. As a result, a projection, or shadow, of the mask is formed on the retina. Aberrations of the ocular surfaces cause a distortion in the spacing between the holes in the shadow. The Tscherning aberrometer is essentially the Hartmann screen test [28,29] applied to the eye, although Hartmann's testing of large mirrors post-dated Tscherning's method for the eye. The array of spots falling on the retina is analogous to a spot diagram. To create an objective measurement of the wavefront error, a fundus camera is used to capture the image of the spot pattern projected on the retina. Modern image processing techniques are then used to locate the spots and determine the amount of spot shift relative to an ideal eye. The distorted hole pattern is related to the transverse ray error, which is used to reconstruct the shape of the aberrated wavefront within the eye. Typically a 1 mm spacing between holes is used, and five to ten frames of the image formed on the retina are captured. The method has shown good reproducibility for sphero-cylindrical refraction and total RMS values for higher-order aberrations on human eyes [20]. WaveLight Laser Technologie AG (Erlangen, Germany) is, to our knowledge, the only manufacturer of sensors using this principle for clinical purposes. It was developed by Mierdel et al. [30] and is successfully used for performing customized ablations [31].

2.1.3 Laser Ray Tracing

From a technical perspective, this technique falls between the spatially resolved refractometer and Tscherning methods: a laser beam is projected onto the eye's retina, parallel to the visual axis, through several points of the eye's entrance pupil, Figure 10.

Figure 10: Laser Ray Tracing functioning principle.

Normally this collimated beam would focus to a point on the fovea. Aberrations in the eye cause the beam to deflect and strike the retina away from the fovea. An imaging system captures the position of the spot on the retina relative to the fovea, giving the transverse ray error. Unlike the spatially resolved refractometer method, in which the retinal spot is displaced towards the reference by modifying the entrance angle, here the distance to the retinal reference position, or ideal spot location, is measured and used to compute the aberration. In contrast to the Tscherning devices in which, as mentioned earlier, a grid is projected on the retina providing simultaneous distortion information at every point, in ray-tracing techniques the laser beam is projected sequentially, making a rapid scan over several points across the pupil, each providing a corresponding projection on the patient's retina. With the data acquired, a complete pattern is obtained in just a few milliseconds; the transverse ray error information is then used to reconstruct the wavefront error of the eye. In laboratory settings, this method showed results in good agreement with other

techniques such as the Shack-Hartmann or the spatially resolved refractometer [24], and proved to be reliable in the evaluation of changes in ocular wavefront due to surgery [32]. Tracey Technologies Inc (Houston, TX) is, to our knowledge, the only manufacturer of sensors using this principle for clinical purposes. This system has shown good performance for sphero-cylindrical determination, in close match with clinical Hartmann-Shack sensors [33]. In studies with earlier versions of the Tracey device, it was found to provide accurate, highly reproducible measurements [34,35]. It has a multiple-acquisition mode that allows rapid screening of changes such as those occurring during accommodation [36].

2.1.4 Shack-Hartmann

Perhaps the most common technique for measuring eye aberrations is the Shack-Hartmann wavefront sensor (SHWS).

Historical Development

In 1619, Christopher Scheiner, philosopher and astronomy professor at the University of Ingolstadt, verified the human eye's focusing ability using an apparatus that is commonly recognized as the Scheiner disk [37]. Scheiner demonstrated how, if an aberrated eye gazes through two pinholes of an opaque disk, a distant light source appears as two separate images on the retina, Figure 11. If defocus is the only aberration present, a simple lens that corrects the ametropia will bring the two retinal images together, forming only one image. On the other hand, for optical aberrations other than defocus, a more complex method was needed for measuring the optical aberrations of the eye at multiple pupil locations.

Figure 11: Scheiner's disk functioning principle.

In 1961, Smirnov described a subjective aberrometer based on Scheiner's disk. He fixed the reference light source corresponding to the central pinhole while adjusting the position of the light source corresponding to the marginal pinhole. By adjusting the movable source vertically and horizontally, the marginal ray direction is varied until its retinal image intersects the retinal image formed by the reference ray and the patient perceives one single point of light, Figure 12. After the movable source has been adjusted, the displacement increments Δx and Δy express the eye's ray aberration for the specified pupil location [23]. The Smirnov technique was later adopted in ophthalmology as the spatially resolved refractometer, presented earlier.

Figure 12: Smirnov aberrometer functioning principle.

In the early 1900s, Johannes Hartmann described another method for measuring the aberrations of lenses and mirrors based on Scheiner's idea. The Hartmann screen test was developed as a support tool to test the optical performance of an observatory telescope. Hartmann placed a perforated mask over the aperture of the telescope, creating a discrete set of ray bundles passing through specific locations in the entrance pupil. He recorded the spot diagrams of the bundles on photographic plates placed on either side of the lens focal point. By connecting corresponding spots, and knowing the positions of the plates, he was able to better identify the specific wavefront aberrations induced by the primary lens that were responsible for blurring the focal image [28,29]. The Hartmann screen, as applied to ophthalmology, inverted the optical path of earlier aberrometers by moving the light source to the retina, successfully switching the Scheiner/Smirnov methods from subjective measurements to objective measurements.

Figure 13: Hartmann aberrometer functioning principle.

By creating additional holes in Scheiner's disk, a map of the eye's aberrations across the pupil could be constructed from the vertical and horizontal displacements of the ray pencils emerging from the screen, captured on a detector, Figure 13.

The one limitation of the astronomic Hartmann screen test was the high illumination level needed to form detectable spots on the photographic plates. For low-illumination applications the discrete bundles of rays had to somehow be refocused. During the late 1960s, Roland Shack, of the Optical Sciences Center at the University of Arizona, was given the task of helping with efforts to image satellites from ground-based telescopes. The efforts had been hindered mainly by atmospheric turbulence, which distorted the images. It was therefore proposed to branch off the optical path of the telescope, adding a Hartmann screen test that could determine the atmospheric aberrations, and to use that information for image post-processing. Shack soon found out that the illumination budget allocated to the Hartmann path was a small fraction of an already low light signal, and the Hartmann mask reduced the amount of light arriving in the image plane even further. Shack and Platt's first improvement was to position lenses in the holes of the Hartmann mask [38]. That way, the little light that was passing through would be concentrated into focal spots. The second innovation was to replace the Hartmann mask altogether with a lenslet array, thus eliminating the need to block any incident light and obtaining wavefront information throughout the entire pupil. In the mid-1980s, Dr. Josef Bille began working with Shack to use the lens array to measure the corneal profile. Bille was the first to employ the new sensor for ophthalmic purposes [39]. The first utilization of the Shack-Hartmann technique to measure ocular aberrations in humans was successfully recorded by Liang et al. in 1994 [1]. The setup is schematically illustrated in Figure 14.

Figure 14: Shack-Hartmann Wavefront Sensor schematic setup.

Liang described the method as the Hartmann-Shack technique, which is chronologically correct. In the literature, both Hartmann-Shack and Shack-Hartmann nomenclatures can be found, largely depending on geographical partiality. The ownership argument is, however, irrelevant, since the initial idea goes back to the 1600s. Today, the Shack-Hartmann technique is undoubtedly the most popular wavefront sensing technique, both for correcting astronomical telescopes in tandem with adaptive optics and for measuring ocular aberrations.

Functioning Principle

The Shack-Hartmann technique captures the wavefront emanating from the eye with a two-dimensional array of lenses and focuses the sampled wavefront into discrete spots on a detector plane, positioned usually in the focal plane of the lenslet array. For a perfect eye, the outgoing wavefront is planar and the image on the detector is an equidistant grid

of spots coinciding with the optical axes of the lenslets, usually called the reference grid. For an aberrated eye, the wavefront is distorted. Figure 15 illustrates this concept.

Figure 15: Shack-Hartmann aberrometer functioning principle.

The local slope of the aberrated wavefront varies at each lenslet, and thus the wavefront is imaged into a distorted grid of spots. The spatial displacement of each spot relative to its reference grid location is a direct measurement of the corresponding wavefront slope at the lenslet entrance pupil:

∂W(x,y)/∂x = Δx / f_lenslet   and   ∂W(x,y)/∂y = Δy / f_lenslet        (2-1)

where W(x,y) is the wavefront error, Δx and Δy are the lateral shifts of the spots, and f_lenslet is the focal length of the array. After the array of spots is captured on the detector, an image processing routine is employed to find the centroids of the spots and to calculate their displacements. The slope information obtained is then used to reconstruct the wavefront through various integration techniques [40]. It is imperative to measure the ocular wavefront as it leaves the entrance pupil of the eye. Therefore it is customary to use an afocal relay system, as shown in Figure 16, to optically conjugate the entrance pupil of the eye with the principal plane of the lenslet array.

Figure 16: Traditional ophthalmic Shack-Hartmann aberrometer setup.

Commercially Available Shack-Hartmann Aberrometers

To date there are five main ophthalmic systems commercially available that utilize the Shack-Hartmann method. The LADARWave system developed by Alcon (Fort Worth, TX) captures approximately 170 spots within a 6.5 mm pupil aperture; it can measure up to eighth-order aberrations and has a dynamic range capable of measuring spherical errors in the range of -15 to

D and cylindrical errors in the range of 0 to -8 D. The device can measure wavefronts with a maximum curvature in any meridian lying between +8 and -14 D for an 8 mm pupil. The WaveScan system from VISX/AMO (Santa Clara, CA) captures about 180 spots within a 6 mm pupil aperture. It can measure up to sixth-order aberrations and has a dynamic range capable of measuring from -8 to +6 D. The Zywave wavefront device designed by Bausch & Lomb (Rochester, NY) captures 60 spots within a 6 mm pupil aperture, measuring up to fifth-order aberrations. The system can measure refractive errors from +6 to -12 D of sphere and up to -5 D of astigmatism. The Topcon KR9000 PW wavefront analyzer from Topcon America Corp (Paramus, NJ) measures up to sixth-order aberrations with 85 spots within a 6 mm pupil. The system can measure refractive errors from +15 to -15 D of sphere and up to -7 D of astigmatism. The COAS system from Wavefront Sciences/AMO (Albuquerque, NM) captures 1017 spots within a 6 mm pupil and can measure up to 16th-order aberrations. This device has a dynamic range capable of measuring -14 to +7 D, including -5 D of cylinder.
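The dynamic ranges quoted above are ultimately set by the lenslet geometry. A back-of-envelope sketch of the scaling (all numbers below are our own illustrative assumptions, not the specification of any commercial device, and the classical spot-stays-in-its-sub-aperture rule is assumed):

```python
# Back-of-envelope SHWS trade-off, assuming the classical rule that a
# spot must stay within its own lenslet sub-aperture.
pitch_mm = 0.4             # lenslet pitch d (assumed)
f_mm = 24.0                # lenslet focal length f (assumed)
pupil_radius_mm = 3.0      # pupil radius at the lenslet plane (6 mm pupil)
centroid_noise_mm = 0.001  # assumed 1 um centroiding precision

# Maximum measurable local slope: a spot shift of d/2 at focal length f
theta_max = (pitch_mm / 2.0) / f_mm                     # radians
# Pure defocus of D diopters produces an edge-of-pupil slope of D * r
# (r in meters), so the classical defocus dynamic range is roughly:
defocus_range_D = theta_max / (pupil_radius_mm * 1e-3)
# Sensitivity: smallest detectable slope, set by centroiding precision
theta_min = centroid_noise_mm / f_mm

print(round(defocus_range_D, 2))   # → 2.78 (i.e. about +/-2.8 D)
print(round(theta_min, 7))         # → 4.17e-05 rad minimum detectable slope
```

Halving f doubles the defocus range but also doubles the smallest detectable slope, which is the sensitivity/dynamic-range trade-off discussed later in this chapter.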

Limitations

There are two main limitations to the Shack-Hartmann sensor. The first has to do with the inherent approximation in which the individual wavefront pieces sampled by each lenslet are assumed to be plane waves. The data analysis thus does not take into account the sub-lenslet aberrations that degrade the quality of the spot; only the spot's displacement is used for computing the local slope of the wavefront. A large enough sampling density, i.e., a large number of lenslets across the pupil, is needed in order for the plane-wave approximation to hold.

Figure 17: Post-LASIK eye showing potential spot crossover at the edge of the pupil.

The second main limitation has to do with spot overlap or spot crossover in the case of large amounts of aberration. This directly affects the reliability of the image processing and centroiding algorithms, effectively limiting the dynamic range of the sensor.
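The centroiding step whose reliability is at stake here is usually a simple intensity-weighted mean over each lenslet's sub-window, after which Eq. (2-1) converts displacements into slopes. A minimal sketch on a synthetic spot (function names and all numbers are ours):

```python
import numpy as np

def centroid(window):
    """Intensity-weighted centroid (row, col) of one lenslet sub-window."""
    total = window.sum()
    rows, cols = np.indices(window.shape)
    return (rows * window).sum() / total, (cols * window).sum() / total

def slopes(spot_rc, ref_rc, f_lenslet, pixel_size):
    """Eq. (2-1): local wavefront slopes from spot displacement (in pixels)."""
    dy = (spot_rc[0] - ref_rc[0]) * pixel_size
    dx = (spot_rc[1] - ref_rc[1]) * pixel_size
    return dx / f_lenslet, dy / f_lenslet

# Synthetic 9x9 sub-window with a Gaussian spot centered at (5.0, 3.5)
r, c = np.indices((9, 9))
spot = np.exp(-((r - 5.0) ** 2 + (c - 3.5) ** 2) / 2.0)
cy, cx = centroid(spot)
print(round(cy, 2), round(cx, 2))   # recovers roughly 5.0 3.5

# One-pixel vertical shift from reference, f = 24 mm, 10 um pixels:
sx, sy = slopes((cy, cx), (4.0, 3.5), 0.024, 10e-6)  # sy about 4.2e-4 rad
```

When two spots fall inside the same sub-window, this weighted mean lands between them, which is exactly the false-measurement failure mode described above.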

Shorter lenslet focal lengths decrease the spot displacement rate, increasing the dynamic range at the expense of sensitivity. The dynamic range limitation becomes especially apparent in the case of highly aberrated eyes, such as post-LASIK eyes, post corneal transplant eyes, and keratoconus eyes, Figure 17. The crossover can create a false measurement or an inability of the device to calculate the aberration.

2.1.5 Comparison Between Methods

There are advantages, drawbacks, and limitations associated with each of the methods presented in this chapter. Since spatially resolved refractometry (SRR) is the only one based on a psychophysical test, it incorporates the subject's perceptions, which include retinal and neural processing. Other wavefront sensing techniques do not take this effect into account. But that very aspect is also SRR's primary drawback: since it interacts with the subject, it takes several minutes to sample sufficient points in the pupil. Both Laser Ray Tracing (LRT) and SRR can probe multiple pupil entry locations with a high spatial sampling density. Sequential acquisition of wavefront aberrations has the advantage of avoiding the possibility of overlapping or crossover phenomena, one of the main drawbacks of simultaneous acquisition measurements such as Shack-Hartmann and Tscherning. Simultaneous acquisition methods are also limited by the spatial resolution arising from the distribution of the micro-lenses (Shack-Hartmann) or the mask configuration (Tscherning), making it difficult to adjust the sampling pattern. In sequential acquisition methods, such as laser ray tracing, the sampling pattern can be easily modified. On the other hand, simultaneous acquisition measurements allow higher reliability in assessing wavefront error even during short acquisition periods and enable

continuous readings over short periods of time. This also makes them more robust against temporal factors affecting the wavefront measurement, such as fluctuations in accommodation or micro-movements of the eye. Ingoing methods such as Tscherning aberrometry, which measure the retinal image of a screen projection, are not affected by the double-pass-induced bias of outgoing methods such as the Shack-Hartmann. However, studies have found the same results for ingoing and outgoing aberrometry methods, substantiating light reversibility in the eye and the equivalence of both methods [41]. The Shack-Hartmann method gained popularity over the Tscherning method because of its less complex image processing and simple setup. Also, due to tight illumination budgeting, the Tscherning aberrometer usually requires high-power flashes of visible light, which may be disturbing for patients. The Shack-Hartmann sensor needs only low-power, invisible near-infrared light. The Shack-Hartmann technique is at the moment the most established method for measuring ocular aberrations.

2.2 Literature Review

The Shack-Hartmann method is a relatively mature technique for measuring wavefront aberration. A Shack-Hartmann keyword search in any optics database will return hundreds of related technical papers and publications. As a result of the numerous applications of this technique, the range of topics discussed in the related literature is very broad, from more general papers on the historical development of the Shack-Hartmann technique [42] to very specific papers dealing with SHWS detector noise

[43], and everything in between. In this section we will attempt to review a few of the papers most relevant to our work. The trade-off between spatial resolution, sensitivity, and dynamic range must be managed in the design of the sensor by choosing the f-number of the lenslet and the resolution of the detector that match the requirements of the specific application. The dynamic range is mainly limited by spot confusion due to overlapping or crossover. Correct assignment of each spot to the microlens that refracted it is essential. We first review a few of the proposed methods to reduce the spot assignment uncertainty by either software or hardware means, once the lenslet/detector specifications have been fixed. We then review relevant literature on the design trade-offs influencing lenslet choice. In the classical SHWS, spots were required to lie in the detector area corresponding to a given lenslet, effectively limiting the dynamic range of the sensor. A great deal of effort has been devoted to developing software schemes for extending the dynamic range by allowing the spot to move freely beyond the projection of its corresponding lenslet onto the detector. An early approach was to use a modified unwrapping algorithm to assign the spots to their respective sub-apertures as long as the difference of the spot displacement between two adjacent sub-apertures was smaller than half of the pitch between two sub-apertures [44]. This method, first introduced by Lindlein, Pfund, and Schwider, is now widely used in Shack-Hartmann image processing software. Another software approach is to sort the positions of the focal spots in ascending or descending order along the X and Y directions respectively. This method was first proposed by Lee et al. [45]
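The classical assignment rule that these software schemes relax can be sketched as follows. This is a minimal, hypothetical illustration of nearest-reference spot assignment with an ambiguity flag, not the actual software of refs. [44-48]; function and variable names are ours:

```python
import numpy as np

def assign_spots(spot_xy, ref_xy, pitch):
    """Assign each detected spot to the nearest reference (lenslet) position.

    In the classical SHWS this nearest-neighbour rule is only trusted while a
    spot stays within half a pitch of its reference; displacements beyond that
    are flagged as ambiguous, which is exactly the limitation the unwrapping
    and sort-based algorithms cited in the text are designed to overcome.
    """
    spot_xy = np.asarray(spot_xy, dtype=float)
    ref_xy = np.asarray(ref_xy, dtype=float)
    # distance matrix: rows = spots, columns = reference positions
    dist = np.linalg.norm(spot_xy[:, None, :] - ref_xy[None, :, :], axis=2)
    idx = dist.argmin(axis=1)
    # flag assignments whose displacement exceeds half a pitch
    ambiguous = dist[np.arange(len(spot_xy)), idx] > 0.5 * pitch
    return idx, ambiguous
```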

and later generalized by Smith et al. [46]. A slightly different search-and-sort algorithm was proposed by Kim et al. [47], while an algorithm based on predicting spot locations by extrapolating neighboring spot data was proposed by Li et al. [48]. All software methods do increase the dynamic range of the system beyond the classical setup. However, they are still essentially limited by spot overlap or spot crossover. The image processing and wavefront reconstruction software we had at our disposal already had the capability of allowing spots to travel outside of their corresponding lenslet area. Several hardware-based schemes have been developed in order to reduce spot confusion in the case of highly aberrated wavefronts, and to allow an unambiguous spot-to-lenslet association. Lindlein, Pfund, and Schwider proposed a method for solving the ambiguity problem by giving the focal spots a kind of label to identify them even if they left their sub-aperture, using lenses with well-defined astigmatism [49,50]. Figure 18 illustrates the concept. A similar method was developed further by Ares et al., where the astigmatic lenslets were replaced by cylindrical lenslets, effectively transforming the Shack-Hartmann spots into Shack-Hartmann lines [51].

Figure 18: Spot distribution in a plane 12 mm behind a microlens array with astigmatic microlenses of 10- and 15-mm focal length along the two principal axes and 400-µm pitch. (a) divergent spherical wave with 40-mm radius of curvature; (b) plane wave; (c) convergent spherical wave with 40-mm radius of curvature [49].

Another clever method of labeling the spots was also introduced by Lindlein, Pfund, and Schwider, and called for the use of a spatial light modulator (SLM) in front of the lenslet array [52]. The technique consisted of switching the individual sub-apertures on and off and recording independent images of individual spots. In theory a spot could then have any location on the detector without interference. This method can become very slow in practice. The use of an SLM together with the lenslet array had other embodiments in the methods proposed by Rha [53,54] and later by Zhao [55]. Rha proposed an adaptive SHWS by using the liquid crystal SLM as a compensator for wavefront tilt across each

lenslet. Zhao used the SLM to encode a customized array of diffractive microlenses to sample the wavefront. The diffractive lenslet array approach was also studied by Podanchuk et al. as a solution for increasing the dynamic range [56]. A nonlinear holographic array was proposed in order to essentially form a bi-focal lenslet array. The set-up, while complex, is very ingenious. It effectively creates two sensors employing one lenslet array with two focal lengths: a short focal length for high dynamic range, and a long focal length for high sensitivity. Two CCDs were used in order to avoid cross-talk between the two diffractive orders, Figure 19.

Figure 19: Experimental setup for recording and investigating the nonlinear holographic lenslet array. He-Ne laser, λ = 0.6328 µm; M1, M2, M3: 100% mirrors; BS: beam-splitter; L1, L2, L3, L4, L5: lenses; NDF: neutral-density filter; SM: semitransparent mirror; HLA: nonlinear holographic lenslet array; CCD1, CCD2: CCD area sensors [56].

Another promising method of identifying the spot's lenslet source was proposed in a white paper, High-Quality Microlenses and High-Performance Systems For Optical Microelectromechanical Systems, by Muller, Choo, and Gupta, from the Berkeley Sensor & Actuator Center (BSAC), University of California, Berkeley. They built a unique

MEMS microlens array in which each individual lens image can be identified by means of its pre-assigned vibrational frequency. In other words, each lenslet has a unique signature retrievable by image processing of each individual spot on the detector. It is unclear, however, whether the retrieving algorithm would work in the case of spot overlap. Several other hardware methods of solving the spot origin ambiguity problem have been proposed, involving moving parts. Two similar approaches [57,58] essentially switch designated rows of spots on and off by translating a mask across the lenslet array and blocking successive rows of sub-apertures, as shown in Figure 20.

Figure 20: (a) Optical layout of the prototype large-dynamic-range wavefront sensor having a translatable plate blocking every other lenslet. (b) Schematic diagram of capturing the spot pattern at each position of the plate after translations, assuming the aberration is measured with a 5x5 lenslet array [57].

The method has the same disadvantage as other scanning methods, in being slow and susceptible to temporal wavefront variations. One last method guarantees correct spot labeling in case of overlap or crossover, by shifting the detector plane along the optical axis of the sensor [59]. By defocusing the detector and tracking spot movement, correct spot-to-lenslet assignment is achieved. The method seems quite straightforward, but raises questions of robustness and the need for frequent re-calibration. In fact, while many of the methods presented above have tremendous technical merit, their implementation may be better suited for experimental setups, rather than compact, cost-effective, maintenance-free, commercial devices. This very issue was the driving reason behind investigating a more traditional approach to dynamic range increase. Our design objective was to find the lenslet-detector combination that would allow the greatest increase in dynamic range without sacrificing existing sensitivity and spatial resolution. Maybe not so surprisingly, there are not many papers on the specifics of SHWS design. Many of the early sensors were built using off-the-shelf components, customized lenslet arrays not being readily available. The choice of detectors was limited as well. Therefore precise optimization techniques were not necessarily needed; loose guidelines for lenslet-detector choice were more appropriate. Several publications give a top-level presentation of the construction of ophthalmic aberrometers based on the SHWS, followed by a presentation of the applications and measurement results of the sensor [60,61,62].

Sheehan et al. present the design of their aberrometer in more detail [63]. Their design calls for correction of defocus prior to the wavefront reaching the lenslet array, in order to alleviate some of the dynamic range requirements of the sensor. They use a 10-millimeter focal length, 190-micron pitch lenslet. No scientific basis was given for this choice of lenslet parameters. Widiker et al. present a more comprehensive study of the SHWS metric trade-offs [64]. A semi-qualitative trade-off matrix based on the parameters of the system is presented. However, the authors make two over-restrictive assumptions: a spot is not allowed to travel beyond its sub-aperture area, and the spot size is diffraction limited. The objective of the paper was to find the best off-the-shelf re-sizing optics, the pupil relay system (the 4-F system), and the objective lens that images the spot image plane onto a detector (the relay lens), Figure 21, in order to improve the performance of the sensor given a predetermined lenslet array. The paper is of great interest to us because it gave us a valuable glimpse at the mindset of the original LADARWave TM designers, who used both a magnifying pupil relay and a de-magnifying objective lens in the original design in order to ensure a high level of performance given the limited lenslet-detector choice. Our task would eventually develop into optimizing the lenslet-detector choice in order to increase the dynamic range, and possibly make the use of auxiliary optics obsolete.

Figure 21: Schematic diagram of a modified SH WFS incorporating (1) a 4-F relay system, (2) a singlet lens used to alter the lenslet array's effective focal length, and (3) a spot pattern relay lens. Note that the inversions caused by the 4-F system and the spot pattern relay lens cancel [64].

Neal and colleagues, from Wavefront Sciences, Inc., published several good papers on the performance metrics of the SHWS and the trade-offs between them. One of the papers [65] discusses the details of how the density of the lenslet array affects the accuracy of the wavefront measurement. Another paper presents a standard methodology for measuring the repeatability, accuracy, and dynamic range of different wavefront sensor designs [66]. In two other papers the methodology for designing instruments based on Shack-Hartmann sensors is discussed [67,68]. In both cases the authors make the same assumptions as Widiker when defining the dynamic range of the system: not allowing the spots to travel beyond their respective sub-aperture areas, and assuming that the spot size is diffraction limited. The first assumption underestimates the dynamic range of a SHWS, while the second assumption tends to underestimate the size of the spot. For ophthalmic

aberrometers the spot size is usually dominated by the paraxial magnification of the scatter spot produced by the probe beam on the retina. Finally, Neal et al. present "Resolution effects on the dynamic range and accuracy of ophthalmic wavefront sensing" [69], in which, again, the performance metrics are tied to the lenslet and detector parameters. The assumptions discussed above are once again made. In the conclusion of the paper the following statement is made: "Increasing the resolution increases both the dynamic range and accuracy of the system." This is misleading. As we present in the next chapter, spatial resolution and dynamic range are inversely proportional. A smaller lenslet pitch will create less spacing between spots and possibly a larger spot size, increasing the chance of spot overlap for a given wavefront slope differential. What the authors probably meant was that for a given wavefront, the local tilt differential is likely to decrease as finer sampling is performed. However, dynamic range should be independent of the input wavefront. It should be noted that Wavefront Sciences, Inc. is the maker of COAS, the ophthalmic aberrometer with the highest spatial resolution relative to other commercial aberrometers. In terms of dynamic range COAS is average. See section for more details.

3 CONCEPTUAL APPROACH

The SHWS, through its simplicity, represents a very popular alternative for clinicians and scientists studying the aberrations of the eye. The fundamental functioning principles of the SHWS for ocular measurements were presented in the previous chapter, and can be briefly summarized as follows. A narrow light beam is projected on the retina to form a scatter spot. The scatter spot becomes a light source, and light re-emerges back through the eye, amassing the eye's optical aberrations in the process before exiting through the cornea and the entrance pupil of the eye. The entrance pupil is conjugated to a lenslet array using a relay system (not necessarily of unit magnification). Individual lenslets then sample the wavefront, forming multiple images of the retinal spot on a detector. This grid of spot images is evaluated against a reference grid previously produced by a planar wavefront measurement. Thus, optical aberrations and imperfections of the device itself are for the most part cancelled out by the use of the reference wavefront. The departure of the spots relative to their reference locations permits the computation of the local slope of the wavefront at each lenslet. Local slope integration over the entire pupil enables a wavefront reconstruction. The reconstructed wavefront is then decomposed into a set of Zernike polynomials. Individual polynomial coefficients represent the contribution of distinct aberrations to the total wavefront error.
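The slope-to-Zernike step summarized above can be sketched as a least-squares modal fit. This is a generic textbook formulation, not the reconstruction code of the device discussed in this work, and only a handful of low-order modes are included for brevity:

```python
import numpy as np

# Analytic x/y gradients of a few low-order Zernike modes (Cartesian form,
# unit-radius pupil).  An illustrative subset; a real sensor fits many more.
GRADIENTS = {
    "tilt_x":  (lambda x, y: 2 * np.ones_like(x), lambda x, y: np.zeros_like(x)),
    "tilt_y":  (lambda x, y: np.zeros_like(x),    lambda x, y: 2 * np.ones_like(x)),
    "defocus": (lambda x, y: 4 * np.sqrt(3) * x,  lambda x, y: 4 * np.sqrt(3) * y),
    "astig45": (lambda x, y: 2 * np.sqrt(6) * y,  lambda x, y: 2 * np.sqrt(6) * x),
    "astig0":  (lambda x, y: 2 * np.sqrt(6) * x,  lambda x, y: -2 * np.sqrt(6) * y),
}

def fit_zernike_from_slopes(x, y, sx, sy):
    """Least-squares modal fit of Zernike coefficients to measured slopes.

    x, y   : lenslet centre coordinates, normalized to the pupil radius
    sx, sy : measured wavefront slopes dW/dx, dW/dy at each lenslet
    """
    cols = [np.concatenate([gx(x, y), gy(x, y)]) for gx, gy in GRADIENTS.values()]
    A = np.stack(cols, axis=1)            # (2 * n_lenslets, n_modes)
    b = np.concatenate([sx, sy])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(GRADIENTS.keys(), coeffs))
```

Feeding the fit synthetic slopes generated from a single mode returns that mode's coefficient and near-zero values for the others, which is a convenient sanity check for any such reconstructor.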

In the previous chapter we also presented the main limitations of clinical aberrometers, namely the dynamic range limitation due to spot overlap in the case of highly aberrated wavefronts, and the reconstruction accuracy limitation due to low sampling in the presence of exceptionally high order aberrations. Both of these limitations are tied to the choice of the lenslet array's main parameters, focal length and pitch. In this chapter we first attempt to describe the performance metrics of a SHWS and the trade-offs between them. We then formulate an analytical approach to connect these metrics together and to establish an optimization space for the lenslet parameters based on performance constraints. We finally show the application of our analytical method in the re-design of an ophthalmic SHWS optimized for high dynamic range with equal or better sensitivity and spatial resolution than its predecessor.

3.1 Metrics and Trade-offs

Dynamic Range Definition

Typically the dynamic range of an ophthalmic aberrometer is given as the maximum measurable wavefront defocus, for a given pupil size. Other definitions exist, depending on the sensor application. Regardless of how it is specified, the fundamental definition of the SHWS dynamic range can be reduced to: the largest slope difference that can be measured between two adjacent lenslets. The dynamic range is limited by the collision of adjacent spots in the image plane, as illustrated in Figure 22, or even the potential crossover of adjacent spots.

Figure 22: Schematic illustration of the dynamic range limitation due to adjacent spot collision.

In both of these cases the lenslet-spot correspondence becomes ambiguous, the processing of the spot image is thus compromised, and the wavefront reconstruction is therefore aborted or incorrectly estimated. The dynamic range is thus dependent on the focal length of the array and the spacing (pitch) of the lenslets.

Sensitivity Definition

Sensitivity can be defined as the minimum detectable slope over each lenslet. The measurement sensitivity is governed mainly by the focal length of the lenslet array, the signal-to-noise ratio, and the number of pixels covered by a spot, which in turn affects the precision of the centroiding algorithm. It can be equivalently represented as the minimum slope difference that can be detected by one lenslet. In Figure 23 we illustrate this concept by showing a small slope departure from a reference plane wave.
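The two definitions above can be captured numerically. The helper below is our own illustration, anticipating the analytical expressions of Section 3.2; the argument values in the usage comments are hypothetical:

```python
def max_slope_difference(pitch, spot_diameter, f_lenslet, kappa=0.0):
    """Dynamic range: the largest wavefront slope difference (radians)
    between two adjacent lenslets before their spots collide; kappa is a
    tolerated spot-overlap fraction (0 = no overlap allowed)."""
    return (pitch - spot_diameter * (1.0 - kappa)) / f_lenslet

def min_detectable_slope(pixel, f_lenslet, centroid_fraction=0.1):
    """Sensitivity: the smallest slope (radians) one lenslet can detect,
    assuming the centroiding algorithm resolves `centroid_fraction` of a
    pixel (0.1 is an assumed, commonly quoted value)."""
    return centroid_fraction * pixel / f_lenslet

# e.g. a 0.6 mm pitch, 0.25 mm spot, 40 mm focal length lenslet (SI units):
# max_slope_difference(0.6e-3, 0.25e-3, 40e-3) -> 8.75e-3 rad
# min_detectable_slope(10e-6, 40e-3)           -> 2.5e-5 rad
```

Both quantities scale inversely with the lenslet focal length, which is the root of the dynamic-range/sensitivity trade-off developed below.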

Figure 23: Schematic illustration of the sensor sensitivity.

It is almost immediately evident that a high-resolution detector can be quite advantageous in increasing the sensitivity of the sensor.

Spatial Resolution Definition

There are two components of spatial sampling. First of all, the lenslet diameter has to be kept small enough to ensure that the wavefront can locally be approximated by a plane wave over each lenslet. Any high order aberration modes occurring at sub-lenslet frequencies will cause the spots to blur, but they will not be reconstructed, and therefore will be lost. Secondly, the wavefront sampling has to be done at a high enough rate to ensure that the Nyquist criterion is satisfied for accurate high-order aberration reconstruction. An empirically derived rule is that to reliably reconstruct a wavefront up to a certain Zernike order, the number of lenslets across the pupil needs to be at least equal to the number of Zernike terms up to that order. For instance, to effectively

reconstruct aberrations up to the 5th Zernike order, at least 18 lenslets across the pupil are needed. The concept of spatial resolution is shown in Figure 24.

Figure 24: Schematic illustration of the sensor spatial resolution; (a) low spatial resolution yields a poor wavefront reconstruction; (b) high spatial resolution yields a more accurate wavefront reconstruction.

Trade-offs

Provided that the sensor is not dynamic range limited, the accuracy is given by the sensitivity and spatial resolution. Therefore the accuracy of the sensor relies heavily on the number of lenslets sampling the pupil, the focal length of the lenslet array, the number of pixels covered by a spot, and the SNR of the spot image. We can conclude that all four metrics, dynamic range, sensitivity, spatial resolution, and accuracy, are interconnected and do cause design trade-offs. Thus, all four metrics need to be balanced together for a given design requirement.
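As an aside on the sampling rule of thumb above (18 lenslets for 5th-order reconstruction), the term count can be made explicit. The total number of Zernike terms through radial order n is (n + 1)(n + 2)/2, which gives 21 for n = 5; the figure of 18 matches this count with piston and the two tilt terms excluded, which is our reading rather than something stated in the source:

```python
def zernike_term_count(max_order, exclude_piston_tilt=True):
    """Number of Zernike terms up to and including radial order `max_order`.

    Total terms through order n is (n + 1)(n + 2) / 2.  With piston and the
    two tilts removed (an assumed convention that reproduces the 18-lenslet
    rule quoted in the text), three terms are subtracted.
    """
    total = (max_order + 1) * (max_order + 2) // 2
    return total - 3 if exclude_piston_tilt else total
```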

The number of lenslets across the pupil and the focal length of the lenslet array are the two main design variables affecting the trade-off between the sensor parameters. Thus, if we assume the pupil size and the pixel size of the detector constant, we can make a series of statements relating the four main performance metrics of the sensor. Increasing the number of lenslets across the pupil will increase the spatial resolution, but it will decrease the pitch of the array and, as a result, the dynamic range of the system. Lowering the sampling density presents the reward of increased dynamic range, increased light per lenslet, and thus a larger SNR. Since the focal length of the lenslet array is linearly proportional to the measurement sensitivity, reducing the focal length leads to a decrease in sensitivity. Moreover, reducing the focal length will reduce the spot size in the image plane, effectively reducing the number of pixels covered by the spot. On the other hand, reducing the focal length of the lenslet array will most surely increase the dynamic range of the system, since the spot displacement due to a slope variation will lessen and the spot size will be reduced, making spot overlap less likely to occur. We therefore have an inherent inverse relationship between dynamic range and sensitivity, as well as between dynamic range and spatial resolution. In most cases, when the sensor operates well within its dynamic range, as is the case for most eye aberration

measurements, the same inverse relationship exists between dynamic range and accuracy. The trends are illustrated in Figure 25.

Figure 25: Relationship and trade-offs between SHWS performance metrics as a function of lenslet focal length and sampling density; the directions of the arrows indicate performance metric increase.

The only exception (the reason for the double arrow on the accuracy line) occurs when the dynamic range becomes too small and the lack thereof can yield inaccurate measurement results. Moreover, in cases of reduced algorithm sophistication or detector efficiency, a small lenslet spacing can also affect the image SNR to the point where the centroiding accuracy may suffer.

3.2 Analytical Method

After understanding the definition of each governing performance metric and the trade-offs between them, we can now express them mathematically, which will help us define a design solution space for our SHWS. As seen in the previous section, the design of the lenslet array largely determines the performance of a Shack-Hartmann sensor. In its simplest form, there are two available design parameters: the number of lenslets across the pupil diameter of the wavefront, and the focal length of the array. Spatial resolution, sensitivity, accuracy, and dynamic range are all affected by one or both of these parameters. In this particular approach the following assumptions are made. The detector is placed in the focal plane of the lenslet array. The spots are shifted on the detector plane according to the average phase gradient, or wavefront tilt, across their respective subapertures. We will also assume that the lenslets are placed in a square grid and have a circular subaperture. It is also useful at this time to tabulate the quantities, and the symbols associated with them, that we will be using for the next set of mathematical expressions, Table 5. For this approach we also assume the SHWS layout from Figure 26.

Figure 26: Schematic layout of the SHWS to be used as a model in our analytical approach.

Table 5: Quantities and symbols to be used in the analytical method.

δ_spot — spot diameter
δ_ret — retinal scatter spot size (0.15 mm in our case)
f_lenslet — focal length of the lenslet array
f_eye — focal length of the eye (17 mm for an emmetropic eye)
f_r1 — focal length of the first relay lens
f_r2 — focal length of the second relay lens
M_p = f_r2 / f_r1 — relay pupil magnification
N — number of lenslets per pupil diameter
D — eye exit pupil diameter (10 mm in our case)
d = M_p D / N — pitch (diameter) of the lenslets
p — CCD pixel size

We first write expressions for the size of the spots on the detector. It is important to note that the size of the spot can be limited by diffraction, but it can also be limited by the paraxial magnification of the retinal scatter spot through the optics of the sensor. Thus we can write:

$$\delta_{spot} = \begin{cases} \dfrac{f_{lenslet}}{f_{eye}} \dfrac{f_{r1}}{f_{r2}}\, \delta_{ret} & \text{given by geometrical magnification} \\[2ex] \dfrac{2.44\,\lambda\, f_{lenslet}}{d} & \text{given by diffraction} \end{cases} \tag{3-1}$$

We can re-write this and obtain the spot size expression:

$$\delta_{spot} = \begin{cases} \dfrac{f_{lenslet}}{f_{eye} M_p}\, \delta_{ret} & \text{given by geometrical magnification} \\[2ex] \dfrac{2.44\,\lambda\, f_{lenslet}\, N}{M_p D} & \text{given by diffraction} \end{cases} \tag{3-2}$$

In the case of an ophthalmic SHWS, the spot size is usually dominated by the geometrical magnification of the source (in this case the scatter spot on the retina produced by the probe beam). At a certain lenslet F-number, diffraction becomes dominant and the size will equal the Airy disk diameter of the diffraction limited spot. The domination switch occurs when the following inequality is satisfied:

$$\frac{f_{lenslet}}{f_{eye} M_p}\, \delta_{ret} \geq \frac{2.44\,\lambda\, f_{lenslet}\, N}{M_p D} \tag{3-3}$$

Re-writing this we obtain:

$$\delta_{ret}\, D \geq 2.44\,\lambda\, f_{eye}\, N \tag{3-4}$$

We can now begin to put constraints on the design solution space.

3.2.1 Minimum Spot Size Constraint

The spot location is usually determined by a simple center-of-mass calculation. The accuracy of this calculation is improved when the spot covers a lot of pixels. The number

of pixels required for satisfactory centroiding accuracy will depend on the type of detector, illumination conditions, and noise. However, when the system is not light-starved, the pixelation effect is strong when the spot is only a few pixels wide. A constraint on the minimum spot size removes the portion of the design space that does not meet this requirement. If we set the minimum spot size to be equal to or greater than 8 pixels we obtain:

$$\delta_{spot} \geq 8p \;\Rightarrow\; \begin{cases} \dfrac{f_{lenslet}}{f_{eye} M_p}\,\delta_{ret} \geq 8p & \Rightarrow\; f_{lenslet} \geq \dfrac{8p\, f_{eye} M_p}{\delta_{ret}} \\[2ex] \dfrac{2.44\,\lambda\, f_{lenslet}\, N}{M_p D} \geq 8p & \Rightarrow\; f_{lenslet} \geq \dfrac{8p\, M_p D}{2.44\,\lambda\, N} \end{cases} \tag{3-5}$$

Setting the pixel size to 10 microns for illustration purposes, we can then trace a contour of the design solution space that satisfies the minimum spot size requirement, Figure 27.

Figure 27: Minimum Spot Size constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.
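Equations (3-2) and (3-5) can be checked numerically. The defaults below follow Table 5 and the 10-µm illustrative pixel; the near-infrared wavelength is our assumption, since λ is not fixed at this point in the text:

```python
def spot_diameter(f_lenslet, n_lenslets, delta_ret=0.15e-3, f_eye=17e-3,
                  pupil=10e-3, mag_pupil=1.4, wavelength=0.83e-6):
    """Spot diameter on the detector per Eq. (3-2): the larger of the
    geometric image of the retinal scatter spot and the diffraction-limited
    (Airy) diameter.  SI units throughout; wavelength is an assumed NIR value."""
    geometric = f_lenslet * delta_ret / (f_eye * mag_pupil)
    diffraction = 2.44 * wavelength * f_lenslet * n_lenslets / (mag_pupil * pupil)
    return max(geometric, diffraction)

def meets_min_spot(f_lenslet, n_lenslets, pixel=10e-6, min_pixels=8):
    """Minimum spot size constraint of Eq. (3-5)."""
    return spot_diameter(f_lenslet, n_lenslets) >= min_pixels * pixel
```

For a 40-mm focal length and 19 lenslets across the pupil, the geometric branch dominates (about 0.25 mm, i.e. roughly 25 pixels of 10 µm), so this corner of the design space easily meets the 8-pixel floor.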

3.2.2 Maximum Spot Size Constraint

We could also set a maximum spot size constraint, requiring the spot size to be smaller than a certain fraction of the lenslet pitch, in order to ensure sufficient spot separation in the absence of aberration:

$$\delta_{spot} \leq \frac{d}{s} = \frac{M_p D}{sN} \;\Rightarrow\; \begin{cases} \dfrac{f_{lenslet}}{f_{eye} M_p}\,\delta_{ret} \leq \dfrac{M_p D}{sN} & \Rightarrow\; f_{lenslet} \leq \dfrac{D\, f_{eye}\, M_p^2}{sN\,\delta_{ret}} \\[2ex] \dfrac{2.44\,\lambda\, f_{lenslet}\, N}{M_p D} \leq \dfrac{M_p D}{sN} & \Rightarrow\; f_{lenslet} \leq \dfrac{(M_p D)^2}{2.44\, s\,\lambda\, N^2} \end{cases} \tag{3-6}$$

If we let the maximum spot size be no larger than two thirds of the lenslet pitch, i.e. s = 1.5, we can trace a second constraint contour in our design space diagram, Figure 28.

Figure 28: Maximum Spot Size constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.

Note the inflection point at about 35 lenslets per pupil, where the spot size domination changes from geometrical magnification to diffraction.

3.2.3 Sensitivity Constraint

The sensitivity of the system depends mainly on the focal length of the lenslet array and can be written as:

$$\alpha_{min} = \frac{q\, p\, M_p}{f_{lenslet}} \;\;\text{in radians} \tag{3-7}$$

where q·p is the smallest spot displacement detectable by the algorithm, expressed as the fraction q of a pixel p. In Figure 29 we show the sensitivity constraint contour corresponding to a minimum detectable slope of 5.5E-3 degrees, a sensitivity value that is clinically acceptable. We also make the reasonable assumption that the centroiding can be done to within a tenth of a pixel as long as the spot covers 8 pixels.
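Equation (3-7) can be inverted to find the shortest focal length that still meets the sensitivity target; this helper and its defaults (q = 0.1, the 10-µm illustrative pixel, M_p = 1.4) are our own packaging of the values quoted in this section:

```python
import math

def min_focal_length(target_slope_rad, pixel=10e-6, mag_pupil=1.4, q=0.1):
    """Shortest lenslet focal length satisfying the sensitivity constraint
    of Eq. (3-7): alpha_min = q * p * M_p / f_lenslet <= target."""
    return q * pixel * mag_pupil / target_slope_rad

# For the clinically acceptable 5.5e-3 degrees quoted in the text:
f_min = min_focal_length(math.radians(5.5e-3))  # ~14.6 mm
```

Any focal length above roughly 15 mm therefore clears this particular contour, which is why the sensitivity constraint bounds the design space from below in focal length.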

Figure 29: Sensitivity constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.

3.2.4 Spatial Resolution Constraint

For a given measurement there is typically a minimum spatial resolution requirement, and the magnitude of this constraint can be determined in several ways. If the system is supposed to estimate a number of Zernike modes or terms, there should be more lenslets than modes. Alternatively, prior knowledge of the wavefront spatial frequency content can help choose the minimum lenslet spacing. In any case, a constraint on minimum spatial resolution removes the area of the design space to the left of the minimum number of lenslets per pupil. The illustrative case considered here assumes a minimum of 18 lenslets across the diameter, Figure 30.

Figure 30: Spatial resolution constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.

3.2.5 Dynamic Range Constraint

Finally, we consider the dynamic range of the sensor, defined in the previous section as the largest detectable slope difference between two lenslets. The dynamic range limit in which we are most interested is directly related to the maximum distance a spot can travel before it starts overlapping an adjacent spot. The concept is illustrated in Figure 31.

Figure 31: Two-spot overlap limitation on the Dynamic Range.

Note that we assume both spots are displaced in the same direction, i.e. if α₁ ≥ 0, then α₂ ≥ 0 as well. If we allowed adjacent spots to travel in opposite directions for a given wavefront, we would infringe on the slowly varying envelope assumption; in that case it is very likely the sensor is limited by spatial resolution rather than dynamic range. The following inequality can then be written to express the spot separation condition:

$$f_{lenslet}\tan(\alpha_1) + \frac{\delta_{spot}}{2} \leq f_{lenslet}\tan(\alpha_2) + d - \frac{\delta_{spot}}{2} + \kappa\,\delta_{spot} \tag{3-8}$$

where κ is the allowable overlapping factor between two spots, as a fraction of the spot size, before the algorithm begins to fail to discriminate between the two. Re-writing, we obtain:

$$f_{lenslet}\left[\tan(\alpha_1) - \tan(\alpha_2)\right] + \delta_{spot}(1-\kappa) \leq d \tag{3-9}$$

By definition, the slope difference on a surface between two close points can be approximated by means of the second derivative of that surface. In our case the surface is the wavefront itself:

$$\tan(\alpha_1) - \tan(\alpha_2) \approx d\,\frac{\partial^2 W(\rho,\theta)}{\partial \rho^2} \tag{3-10}$$

Then, Eq. (3-9) can be re-written as:

$$f_{lenslet}\,\frac{\partial^2 W(\rho,\theta)}{\partial \rho^2}\, d + \delta_{spot}(1-\kappa) - d \leq 0 \tag{3-11}$$

Or,

$$\frac{\partial^2 W(\rho,\theta)}{\partial \rho^2} \leq \frac{d - \delta_{spot}(1-\kappa)}{d\, f_{lenslet}} \tag{3-12}$$

Which becomes,

$$\frac{\partial^2 W(\rho,\theta)}{\partial \rho^2} \leq \frac{1}{f_{lenslet}} - \frac{N\,\delta_{spot}(1-\kappa)}{D M_p\, f_{lenslet}} \tag{3-13}$$

Going back to the spot size expression, Eq. (3-2), we can re-write the dynamic range expression as:

$$\frac{\partial^2 W(\rho,\theta)}{\partial \rho^2} \leq \begin{cases} \dfrac{1}{f_{lenslet}} - \dfrac{N}{D M_p\, f_{lenslet}}\,\dfrac{f_{lenslet}\,\delta_{ret}}{f_{eye} M_p}\,(1-\kappa) & \text{spot given by geometrical magnification} \\[2ex] \dfrac{1}{f_{lenslet}} - \dfrac{N}{D M_p\, f_{lenslet}}\,\dfrac{2.44\,\lambda\, f_{lenslet}\, N}{M_p D}\,(1-\kappa) & \text{spot given by diffraction} \end{cases} \tag{3-14}$$

Finally, the second-derivative-defined Dynamic Range can be neatly expressed as:

$$\frac{\partial^2 W(\rho,\theta)}{\partial \rho^2} \leq \begin{cases} \dfrac{1}{f_{lenslet}} - \dfrac{N}{D M_p}\,\dfrac{\delta_{ret}}{f_{eye} M_p}\,(1-\kappa) & \text{spot given by geometrical magnification} \\[2ex] \dfrac{1}{f_{lenslet}} - \left(\dfrac{N}{D M_p}\right)^2 2.44\,\lambda\,(1-\kappa) & \text{spot given by diffraction} \end{cases} \tag{3-15}$$

We now have a convenient expression relating the maximum second derivative of the wavefront to the parameters of the system. If, for instance, we would like to quantify the maximum dynamic range as the maximum measurable defocus at the edge of the pupil, we can take the second derivative of the wavefront expressed as a Zernike polynomial expansion.

$$W(\rho,\theta) = \sum_{n,m} C_n^m Z_n^m(\rho,\theta) = C_2^0\,\sqrt{3}\,(2\rho^2 - 1) + C_2^{\pm 2}\,\sqrt{6}\,\rho^2 \cos(2\theta) + \cdots \tag{3-16}$$

where $\rho = r / r_{max}$ and $r_{max} = \dfrac{M_p D}{2}$. If we are interested only in defocus, the second derivative becomes:

$$\frac{\partial^2 W}{\partial r^2} = \frac{4\sqrt{3}\, C_2^0}{r_{max}^2} = \frac{16\sqrt{3}\, C_2^0}{(M_p D)^2} \tag{3-17}$$

Since the ophthalmic refraction error known as Sphere, linked to myopia and hyperopia, can be expressed in terms of the defocus Zernike coefficient, as seen in section 1.4,

$$Sph = -\frac{4\sqrt{3}\, C_2^0}{r_{max}^2} \tag{3-18}$$

we can combine Eq. (3-15), Eq. (3-17), and Eq. (3-18) and directly relate the maximum measurable Sphere (in diopters) to the parameters of the system.

$$|Sph| \leq \begin{cases} \dfrac{1}{f_{lenslet}} - \dfrac{N}{D M_p}\,\dfrac{\delta_{ret}}{f_{eye} M_p}\,(1-\kappa) & \text{spot given by geometrical magnification} \\[2ex] \dfrac{1}{f_{lenslet}} - \left(\dfrac{N}{D M_p}\right)^2 2.44\,\lambda\,(1-\kappa) & \text{spot given by diffraction} \end{cases} \tag{3-19}$$

We can thus trace the constraint contour for a dynamic range given as a maximum measurable Sphere. In our illustrative diagram we use -20 D of Sphere as our Dynamic Range constraint, and we let κ = 0.3, Figure 32.

Figure 32: Dynamic Range constraint. The shaded area corresponds to the possible solution space that satisfies the constraint requirement.

3.2.6 Optimization Step

With the solution space separately limited by each of the constraint contours, we can verify whether a solution sub-space exists that satisfies all the constraints simultaneously. Should such an area not exist, one would need to either loosen the performance

specifications of the system or employ non-traditional methods of increasing either the dynamic range or the sensitivity of the system. In our case, for the constraints chosen to illustrate the method, there is an area bounded by the constraint contours traced in the previous sections, Figure 33.

Figure 33: Design solution space. The shaded area corresponds to the solution sub-space that satisfies all system constraints.

This area now represents the optimization space of the design; the larger the area, the more choice for the designer. If we were to virtually superimpose Figure 25 on Figure 33, we could infer that for the highest dynamic range, the solution chosen, in terms of lenslet f-number, should be in the lower left corner of the optimization area. Should optimizing for sensitivity be the objective, one would choose the highest available corner, while for the highest accuracy and spatial resolution the solution should be chosen from the right-hand-side region of the optimization space.
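The whole construction can be condensed into a feasibility test over the (N, f_lenslet) plane. This is an illustrative sketch using the section's example values (10-µm pixel, 8-pixel minimum spot, s = 1.5, an 18-lenslet sampling floor, and 20 D of Sphere with κ = 0.3); the near-infrared wavelength is our assumption and the helper names are ours:

```python
import math

# Illustrative parameters from this section (SI units; lam is assumed NIR):
D, Mp, f_eye, d_ret, lam = 10e-3, 1.4, 17e-3, 0.15e-3, 0.83e-6
p, q, s, kappa = 10e-6, 0.1, 1.5, 0.3

def spot(N, f):
    """Spot diameter, Eq. (3-2): larger of geometric and diffraction sizes."""
    return max(f * d_ret / (f_eye * Mp), 2.44 * lam * f * N / (Mp * D))

def max_sphere(N, f):
    """Dynamic range as maximum measurable |Sphere| in diopters: Eq. (3-13)
    with the actual spot size substituted, equivalent to Eq. (3-19)."""
    return 1.0 / f - N * spot(N, f) * (1.0 - kappa) / (D * Mp * f)

def feasible(N, f):
    """True if (N, f_lenslet) satisfies all constraint contours at once."""
    pitch = Mp * D / N
    return (spot(N, f) >= 8 * p                         # Eq. (3-5), min spot
            and spot(N, f) <= pitch / s                 # Eq. (3-6), max spot
            and q * p * Mp / f <= math.radians(5.5e-3)  # Eq. (3-7), sensitivity
            and N >= 18                                 # spatial resolution
            and max_sphere(N, f) >= 20.0)               # dynamic range target
```

With these illustrative numbers the point (N = 19, f_lenslet = 40 mm) evaluates to a range of about 19 D, narrowly missing the 20 D target, while shortening the focal length toward 30 mm moves it inside the solution space; this is a property of the sketch above, not a result quoted in the text.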

3.3 Analytical Method Applied

In the previous section we proposed an analytical approach to determine the system parameter solution space, based on mapping the constraints of the performance metrics of the sensor. In this section we apply the analytical method to our particular setup, in order to determine the parameter solution space that would create a high dynamic range sensor without sacrificing the existing aberrometer sensitivity, spatial resolution, and accuracy. We first needed to establish the current aberrometer specifications and create the boundaries of the solution space that would offer equivalent performance. The original design employed resizing optics in order to better manage the trade-off between dynamic range and sensitivity. A sketch of the original LADARWave TM aberrometer sensor path is shown in Figure 34.

Figure 34: Original LADARWave TM sensor path layout.

The demagnifying objective lens's primary goal was to take the 10+ millimeter intermediary spot image created by the lenslet array and image it onto a 1/3 inch CCD, one of the few available high-resolution detector sizes at the time. While the image

demagnification M_obj does not have an impact on dynamic range, it does have an effect on sensitivity. Equation (3-7) now becomes:

α_min = q·p / (M_obj · f_lenslet) ; in radians        (3-20)

The virtual pixel size used to express sensitivity in the intermediary image plane has to be multiplied by the magnification of the objective in order to obtain the real pixel size of the detector; equivalently, each detector pixel corresponds to a larger virtual pixel in the intermediary plane. The concept is illustrated in Figure 35.

Figure 35: Effect of the demagnifying objective lens on sensor sensitivity.

As a result of the demagnifying objective lens, the intermediary spot image no longer had a size constraint. Thus, the pupil size could be expanded so that the localized slopes

of the wavefront become more moderate. Assuming a constant spatial resolution over the pupil, for a pupil magnification of M_p we can expect the dynamic range to increase quadratically with M_p, per Eq. (3-15), while the sensitivity is sacrificed by only a single factor of M_p, per Eq. (3-7). Knowing the relevant specifications of the LADARWave aberrometer, given in Table 6, a map of the system limitations can be drawn based on the analytical approach developed in the previous section.

Table 6: LADARWave™ relevant specifications.

Symbol      Description                             Value
δ_ret       Retinal scatter spot size               δ_ret = 0.15 mm
f_lenslet   Focal length of the lenslet array       f_lenslet = 40 mm
f_eye       Focal length of the eye                 f_eye = 17 mm (emmetropic eye)
M_p         Relay pupil magnification               M_p = 1.4
M_obj       Objective magnification                 M_obj = 0.3
N           Number of lenslets per pupil diameter   N ≈ 19
D           Eye exit pupil diameter                 D = 8 mm
d           Pitch (diameter) of the lenslet         d = 0.6 mm
p           CCD pixel size                          p = 8.2 μm

Note there is no shaded area on this map simply because this is not an optimization map. It is merely a graphical representation of our design starting point, Figure 36. The initial values of the dynamic range, sensitivity, and spatial resolution are given by the intersection of the lenslet focal length and the lenslet density, f_lenslet = 40 mm and N = 19, respectively.
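The effect of the two magnifications can be sketched numerically from the Table 6 values. This is a hedged reading of Eq. (3-20): the factor q (the fraction of a pixel resolvable by the centroiding algorithm) is an assumption here, not a value taken from the text.

```python
# Hedged numeric sketch of Eq. (3-20), using the Table 6 values. The factor
# q (fraction of a pixel resolvable by the centroiding algorithm) is an
# assumption, not a value from the text.

def min_detectable_slope(p_um, f_lenslet_mm, m_obj=1.0, q=1.0):
    """Minimum detectable wavefront slope (radians): the detector pixel p
    maps back to a virtual pixel p / M_obj in the intermediary image."""
    return q * (p_um * 1e-3) / (m_obj * f_lenslet_mm)

p, f_lenslet, m_obj, m_p = 8.2, 40.0, 0.3, 1.4   # Table 6 values

with_objective = min_detectable_slope(p, f_lenslet, m_obj)
without_objective = min_detectable_slope(p, f_lenslet)

# The 0.3x objective coarsens the minimum detectable slope by 1/0.3,
# while the pupil relay trades sensitivity (one factor of M_p) for
# dynamic range (a factor of M_p squared), as stated in the text.
print(with_objective / without_objective)   # ≈ 3.33
print(m_p, round(m_p**2, 2))                # 1.4 1.96
```

The ratio shows why the objective lens, while necessary for the small detector, costs roughly a factor of three in sensitivity.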

Figure 36: LADARWave™ initial performance metrics limits.

Since large, 1-inch, high-resolution cameras were now readily available, we wanted to confirm whether we could find a solution to increase the dynamic range by discarding both the pupil magnification relay and the demagnifying objective lens. Discarding the objective lens had the potential of increased sensitivity, enough to offset the decreased dynamic range caused by the absence of the pupil magnification.

Table 7: Redesigned sensor relevant specifications.

Symbol      Description                             Value
δ_ret       Retinal scatter spot size               δ_ret = 0.15 mm
f_lenslet   Focal length of the lenslet array       TBD
f_eye       Focal length of the eye                 f_eye = 17 mm (emmetropic eye)
M_p         Relay pupil magnification               M_p = 1
M_obj       Objective magnification                 M_obj = 1
N           Number of lenslets per pupil diameter   N ≈ 19
D           Eye exit pupil diameter                 D = 8 mm
d           Pitch (diameter) of the lenslet         d ≈ 0.42 mm (= D/N)
p           CCD pixel size                          p = 7.4 μm

We set up another table to reflect the new design conditions, expressed in Table 7. Our hard constraints were given by the sensitivity and the spatial resolution, which had to be no worse than those of the LADARWave™ system. We recreated the optimization map with the new system parameters, bounded by these constraints, Figure 37.

Figure 37: Optimization space for the SHWS redesign. The shaded area corresponds to the solution space that satisfies all the constraints.

From the figure above, if we set the spatial resolution to 19, along the green line, we observe that for focal lengths between approximately 8 mm and 30 mm the new design will have no worse dynamic range and no worse sensitivity than the original system. Since our initial goal was to increase the dynamic range as much as possible, the solution lay at the bottom left corner of the optimization area. A lenslet with a focal length of 8 mm was chosen. Thus the final lenslet-detector specifications were:

f_lenslet = 8 mm, d ≈ 0.42 mm, p = 7.4 μm, resolution = 2048 × 2048 pixels
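As a sanity check on the chosen design point, the comparison can be sketched numerically. The formulas below are simplified stand-ins for the dissertation's Eqs. (3-7) and (3-15) (eye-referred sensitivity ∝ M_p·p/(M_obj·f), eye-referred dynamic range ∝ M_p²·d/2f); the exact expressions in the text differ in constants, so the numbers are illustrative only.

```python
# Hedged check of the chosen f = 8 mm design point against the original
# LADARWave system, using simplified stand-ins for Eqs. (3-7) and (3-15).
# Exact constants differ in the thesis; treat these as illustrative.

def eye_sensitivity(p_um, f_mm, m_p=1.0, m_obj=1.0, q=1.0):
    # minimum detectable eye-wavefront slope (rad); smaller is better
    return m_p * q * p_um * 1e-3 / (m_obj * f_mm)

def eye_dynamic_range(d_mm, f_mm, m_p=1.0):
    # largest measurable eye-wavefront slope (rad); larger is better
    return m_p**2 * d_mm / (2.0 * f_mm)

# Original: f = 40 mm, d = 0.6 mm, p = 8.2 um, M_p = 1.4, M_obj = 0.3
s_old = eye_sensitivity(8.2, 40.0, m_p=1.4, m_obj=0.3)
r_old = eye_dynamic_range(0.6, 40.0, m_p=1.4)

# New: f = 8 mm, d = 8/19 mm (N = 19 across the 8 mm pupil), p = 7.4 um,
# no relay and no objective
s_new = eye_sensitivity(7.4, 8.0)
r_new = eye_dynamic_range(8.0 / 19, 8.0)

print(s_new <= s_old)  # True: sensitivity no worse than the original
print(r_new >= r_old)  # True: dynamic range improved
```

Under these simplified relations the 8 mm choice indeed keeps sensitivity at least as good as the original while increasing the measurable slope range.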

4 SIMULATION MODEL

The next step of our design and optimization undertaking was to validate the results obtained in the previous section using a numerical simulation process. If successful, the validation would extend not only to the results but to the analytical method itself. The main objective of the simulation process was to investigate the performance of the SHWS with several sets of design parameters for a given set of input wavefronts. The output wavefronts of the modeling process were compared to the input wavefronts, and the measurement RMS error was computed for each case. Several software packages were used in combination in order to recreate a model of the real-life measurement. These included Zemax, for raytracing the optical system of the aberrometer from the corneal plane to the lenslet plane (several macros were written in Zemax's ZPL language to organize the input and output tracings of the aberrometer into text files). IDL (Research Systems Inc.) was then used to write custom routines to analyze these text files and convert them into a simulated image obtained on the image sensor. Finally, the simulated image was analyzed and processed by an Alcon Laboratories, Inc. proprietary algorithm for reconstructing the wavefront based on the Shack-Hartmann pattern on the sensor (very similar to the software used for LADARWave™ analysis). The output of this process was a set of Zernike coefficients

that could then be related back to the input of the raytracing portion of the modeling. Figure 38 outlines the simulation process.

Figure 38: Schematic rendering of the sensor simulation process (input wavefronts: defocus, Zernike phase, grid sag → Zemax raytrace to the lenslet plane, 1st- and 2nd-order aberrations at each lenslet → IDL generates and stores spots → Alcon software reconstructs the wavefront → comparison with the input).

In the next sections, we present each step of the simulation in more detail, as well as some of the results obtained.

4.1 Input Wavefronts

In our simulation runs we used three artificially generated wavefront categories. The first category consisted of a series of wavefronts exhibiting defocus only. The main objective for using this wavefront category was to investigate the dynamic range limitation of the system in terms of Sphere error. The second category was a set of simulated human eye wavefronts based on extrapolated real patient wavefront data. This category was used to investigate the system behavior when measuring normal eye wavefronts.

Finally, the third category was a set of wavefronts approximating the irregular phase generated by a post-Lasik eye. We explain the three categories in more detail in the next sections.

4.1.1 Defocus Wavefronts

A set of 60 defocused wavefronts was created, from -30 diopters to +30 diopters, in increments of 1 diopter. These wavefronts were generated in Zemax by simply adding a paraxial lens of varying power in the corneal plane of the eye.

4.1.2 Simulated Normal Eye Wavefronts

A set of 100 wavefronts was created in such a way as to mimic the aberrations found in real eye measurements. To create the simulated wavefronts we needed to extrapolate existing patient data (most of whom were normal myopes). To do so, we first needed to find the means and the covariance of the coefficients used to represent the human wavefront data. Each wavefront was already decomposed into a set of Zernike polynomials, and coefficients for each subject were available. To create the simulated wavefronts, we first calculate the mean vector and covariance matrix of the Zernike coefficients. Given the Zernike expansion coefficients of N subjects, the covariance between two expansion coefficients is given by

cov(a_j, a_j') = [ Σ_{i=1}^{N} (a_ji − ā_j)(a_j'i − ā_j') ] / (N − 1)        (4-1)

where a_ji is the j-th Zernike coefficient of the i-th subject and ā_j is the average of the j-th Zernike coefficient across all subjects. Note that a square covariance matrix is formed by comparing all combinations of coefficients, and that the diagonal components of this matrix are the variances of each coefficient. This matrix is also symmetrical about its diagonal. Denote this covariance matrix as Σ. The mean vector μ can be defined as {ā_0, ā_1, ā_2, …}. One method for obtaining a vector of coefficients that follows the statistics of the subject population is to first use Cholesky decomposition to find the lower triangular matrix C such that C·C^T = Σ. Next, create a random vector of normal (i.e., Gaussian) variables n = {n_0, n_1, …}. Finally, create a new set of Zernike coefficients {b_0, b_1, b_2, …} = μ + C·n. For each new set of Gaussian random variables n, a new set of {b_j} with the appropriate statistics is generated. The preceding technique was used to simulate 100 human wavefronts, expressed as a Zernike decomposition over a 6.0 mm pupil. The original coefficients were based on patient data measured with the LADARWave™ system from a study performed previously on 30 eyes undergoing myopic Lasik. The plots in Figure 39 show the distributions of Zernike coefficients along with their standard deviations (error bars). Clearly, the generated

exams closely match the actual data in terms of the magnitudes and variances. This technique is useful because it also preserves the relationships between coefficients that may be linked to one another.

Figure 39: Comparison of simulated Zernike coefficients with the real data. The inset represents the higher-order coefficients on a smaller scale, for better visualization.

In Zemax, these input wavefronts are controlled by a Zernike Phase Surface in the corneal plane. Note that the coefficients for the Zernike Phase Surface are ordered differently than the OSA Standard Zernikes and are given in units of waves. Refer to the Zemax user's manual for a more thorough description of using this surface [70].
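The mean-plus-Cholesky recipe above can be sketched with numpy. The three-coefficient "population" below is fabricated purely for illustration; the actual study used the full Zernike set from 30 measured eyes.

```python
# Sketch of the simulated-eye-wavefront generator described above: given the
# mean vector and covariance of measured Zernike coefficients, draw new
# coefficient sets b = mu + C n, with C the Cholesky factor of the
# covariance. The 3-coefficient "population" is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pretend measured data: N subjects x J Zernike coefficients (microns)
measured = rng.normal([0.5, -0.2, 0.1], [0.3, 0.1, 0.05], size=(30, 3))

mu = measured.mean(axis=0)                # mean vector
sigma = np.cov(measured, rowvar=False)    # covariance matrix, Eq. (4-1)
C = np.linalg.cholesky(sigma)             # C @ C.T == sigma

def simulate_wavefronts(n):
    """Draw n synthetic coefficient sets with the population statistics."""
    return mu + rng.standard_normal((n, len(mu))) @ C.T

sims = simulate_wavefronts(100)
print(sims.shape)   # (100, 3)
# For large n, the simulated means and covariances approach the measured
# ones, preserving the correlations between linked coefficients.
```

Because the draw is μ + C·n, correlated coefficient pairs in the measured data stay correlated in the simulated exams, which is the property the text highlights.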

4.1.3 Simulated Post-Lasik Wavefronts

The wavefront emerging from a post-refractive-surgery eye can be simulated, to first order, as a flat wavefront over the optical zone and a spherical wavefront outside of the optical zone, where the spherical wavefront converges to the pre-operative far point. This simulation assumes only defocus is present in the pre-operative eye and that this defocus is completely eliminated by the procedure. Figure 40 illustrates the shape of this wavefront.

Figure 40: Simulated post-Lasik wavefront.

Mathematically, the phase φ of this wavefront is given by

φ(x, y) = 0                                       for r ≤ r_oz
φ(x, y) = Sph·π·(x² + y² − r_oz²) / (1000·λ)      for r_oz < r ≤ r_p        (4-2)

where Sph is the pre-operative refractive Sphere error in diopters, x and y are the Cartesian coordinates with x² + y² = r², λ is the wavelength of light, r_oz is the radius of the treatment zone, and r_p is the radius of the pupil. We generated 20 wavefronts, for pre-operative Sphere errors from -5 to -25 diopters. This wavefront shape can be simulated in Zemax using the Grid Phase Surface type. This surface type requires an array of phase values along with the derivatives ∂φ/∂x and ∂φ/∂y, as well as the cross-derivative ∂²φ/∂x∂y. From Eq. (4-2) these derivatives are simply

∂φ/∂x = 0                          for r ≤ r_oz
∂φ/∂x = Sph·π·x / (500·λ)          for r_oz < r ≤ r_p        (4-3)

∂φ/∂y = 0                          for r ≤ r_oz
∂φ/∂y = Sph·π·y / (500·λ)          for r_oz < r ≤ r_p        (4-4)

∂²φ/∂x∂y = 0                                                 (4-5)

When implementing this surface in Zemax, we set both the Diffraction Order and Interpolate to 1, in Parameters 1 and 2 for the surface. We chose a sampling density equivalent to the lenslet spacing. Zemax uses bicubic interpolation between points,

so the abrupt discontinuity at the edge of the treatment zone will get smoothed out somewhat during raytracing.

4.2 Raytracing Model

We first set up the aberrometer sensor path model in Zemax. The layout is illustrated in Figure 41. The Zemax file was slightly modified for the generation of the Shack-Hartmann grid pattern. The second-to-last surface, originally the lenslet array, was changed to a dummy surface. This dummy surface was set as the system aperture. A Coordinate Break surface was added before the dummy surface to allow decentration of the aperture, in order to raytrace through each individual lenslet.

Figure 41: Zemax layout of the new proposed sensor path design (eye corneal plane, protective window, afocal relay, beam splitter, lenslet array, CCD).

To ensure this aperture is correctly traced, the system aperture is set to Float by Stop Size and Ray Aiming is enabled in Zemax. Finally, Afocal Image Space is set in Zemax for calculating the wavefront across the lenslet aperture relative to a plane. The semi-diameter of the dummy surface can be set to any small value, as it will be modified automatically in the ensuing macros. The initial steps for generating the local wavefront slopes at each lenslet in the grid are as follows:

1. Create a text file that contains the x and y coordinates of each lenslet, as well as the x width and y width of each individual lenslet. This file allows custom lenslet arrays to be modeled, with different pitches and aperture sizes.

2. The preceding text file is read into Zemax via a macro. Each line contains a description of an individual lenslet in the array. The system aperture is decentered to the position of the lenslet and sized according to the dimensions of the lenslet. Since the system aperture in Zemax is always circular, the diameter of the aperture is set to the diagonal of the lenslet.

3. The 1st- and 2nd-order Zernike polynomial coefficients are then extracted from Zemax and stored in an output text file.

Steps 1 through 3 are repeated for each individual lenslet. A sample input text file is shown below.

Table 8: SHWS1.zpl input file (header line "Basic Square Lenslet: dx= dy= Nx=3 Ny=3", then the lenslet count, then one line per lenslet).

The first line is an arbitrary header that can contain pertinent information regarding the lenslet configuration. The second line is the number of lenslets in the array. The remaining lines of the text file contain the x position of each lenslet in column 1, the y position in column 2, the x width in column 3, and the y width in column 4. The preceding example describes a 3x3 lenslet array with a pitch of 1 mm and a square aperture for each lenslet. Arbitrary positions and numbers of lenslets can be entered here, as well as rectangular or square apertures for each lenslet. The user must take care that the defined lenslets do not overlap. A macro, presented in APPENDIX A, is then run within Zemax to process this file and create an output file. The macro file, SHWS1.zpl, should be placed in the Zemax/macros directory and the macro list refreshed in Zemax. The path and filenames in the macro need to be edited to values appropriate to the user's computer.

4.3 1st- and 2nd-Order Aberrations

The output of the Zemax raytracing represents the sampled portion of the input wavefront at each individual lenslet aperture. The 1st-order aberration information (tilt) was used to represent the local wavefront slope, ultimately responsible for spot displacement in the image plane. The 2nd-order aberration information (defocus, astigmatism) was also collected, in order to add its contribution to the final spot size in the image plane. Higher-order aberrations within the lenslet aperture were assumed to be insignificant. The output for our example file is presented in Table 9. The file is identical to the input file except that columns 5-9 have been added to the main part of the file. These columns correspond

to the OSA Standard Zernike coefficients C_1^-1, C_1^1, C_2^-2, C_2^0, and C_2^2, in microns, respectively. The deviation of each spot can be calculated from the preceding tilt information. Each SH spot moves a distance Δx in the x direction and Δy in the y direction in the presence of wavefront error. These movements are given by Eqs. (4-6) and (4-7).

Table 9: SHWS1.zpl output file (the input file of Table 8 with the coefficient columns 5-9 appended for each lenslet).

Δx (mm) = C_1^1 · f_lenslet / (1000 · d_x) = (Column 6)·(lenslet focal length) / (1000 · Column 3)        (4-6)

Δy (mm) = C_1^-1 · f_lenslet / (1000 · d_y) = (Column 5)·(lenslet focal length) / (1000 · Column 4)        (4-7)
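Eqs. (4-6) and (4-7) amount to a one-line conversion from a tilt coefficient in microns to a spot displacement in millimeters. A sketch, with the column bookkeeping reduced to a single function (the function name and example numbers are ours):

```python
# Sketch of Eqs. (4-6)/(4-7): converting a per-lenslet OSA tilt coefficient
# exported by the macro (microns) into a spot displacement on the sensor
# (mm). The 1/1000 factor converts microns to millimeters.

def spot_shift_mm(c_tilt_um, f_lenslet_mm, width_mm):
    """Displacement of one Shack-Hartmann spot along one axis."""
    return c_tilt_um * f_lenslet_mm / (1000.0 * width_mm)

# e.g. a 0.5 um x-tilt coefficient over a 1 mm lenslet with f = 8 mm:
dx = spot_shift_mm(0.5, 8.0, 1.0)
print(dx)   # 0.004 mm, i.e. about half of a 7.4 um pixel
```

The same function serves both axes; only the tilt coefficient (column 5 or 6) and the lenslet width (column 3 or 4) change.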

4.4 Lenslet Processing

The next step in the simulation process was to generate the Shack-Hartmann grid pattern by processing each lenslet and generating a spot. Thus, we added an additional column to the output file from the previous step, containing the focal length of each lenslet. In addition, the parameters of the camera sensor needed to be defined. We intentionally created this two-step process for defining the lenslet array because we wanted the liberty of varying the focal length of the lenslets. Since the Zemax macro is relatively slow, we wanted to generate its output only once. That output depends on the wavefront error being examined and on the sizes and positions of the lenslets. Consequently, for a fixed wavefront error, the macro only needs to be executed once for a given lenslet geometry. The focal lengths of the lenslets can then be varied separately from the Zemax modeling to create a variety of Shack-Hartmann patterns. An example of the modified output file is shown below. The example in Table 10 assumes that all of the lenslets have the same focal length (17 mm, in column 10) and that a 640x480 image sensor with 10 micron square pixels is used (lines 2 and 3 of the file, respectively). Line 4 is the location of the image sensor relative to the lenslet array (17 mm, i.e., the focal point of a lenslet). Line 5 now becomes the number of lenslets in the array.

Table 10: SHSpot.pro input file (the Table 9 output file with the camera sensor lines and a per-lenslet focal length column added; header "Basic Square Lenslet: dx=3 dy=3 Nx=3 Ny=3", 640x480 sensor, 10x10 μm pixels).

This modified raytracing output file becomes the input file for the spot-generating routine. The steps for determining these spots are:

1. Read in the modified output file. Extract the camera sensor information and the number of lenslets.
2. Create an empty image array that has the dimensions of the camera sensor.
3. For a lenslet, calculate Δx and Δy from Eqs. (4-6) and (4-7) above.
4. Calculate the Point Spread Function (PSF) for the lenslet based on a Fresnel transform of the aperture defined by columns 3 and 4 and the 2nd-order wavefront error in columns 7-9.
5. Add this PSF to the image array at the position given by (column 1 + Δx, column 2 + Δy).
6. Repeat steps 3-5 for each of the lenslets.
7. Normalize the image array to a fixed peak value.
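Steps 3-7 can be condensed into a sketch for a single lenslet, using numpy in place of the IDL routine. The parameter values (820 nm, f = z = 8 mm, 7.4 μm pixels, 0.42 mm square lenslet) are illustrative, and the FFT sampling is chosen so that the output sample spacing equals the camera pixel pitch.

```python
# Condensed sketch of one lenslet's PSF (step 4) via an FFT whose output
# sample spacing matches the camera pixels. All values are illustrative.
import numpy as np

lam, z, p, d = 820e-9, 8e-3, 7.4e-6, 0.42e-3   # metres
Nfft = 32
dx1 = lam * z / (Nfft * p)     # lenslet-plane sample spacing
width = Nfft * dx1             # total lenslet-plane array width

# Aperture mask: a centred square lenslet; it must stay at or below half
# the array width to avoid aliasing.
x = (np.arange(Nfft) - Nfft / 2) * dx1
X, Y = np.meshgrid(x, x)
aperture = (np.abs(X) <= d / 2) & (np.abs(Y) <= d / 2)
assert d <= width / 2, "lenslet too large for this FFT array (aliasing)"

# Field = aperture x exp(i * phase). A flat wavefront is used here for
# simplicity; the 2nd-order Zernike terms of Eq. (4-9) would be added to
# `phase` for a real lenslet.
phase = np.zeros((Nfft, Nfft))
field = aperture * np.exp(1j * phase)

psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.max()               # step 7: normalise to a fixed peak value
print(psf.shape)               # (32, 32) spot image, pixel pitch 7.4 um
```

Each such PSF would then be pasted into the full sensor-sized image array at the (Δx, Δy)-shifted lenslet position, as in steps 5 and 6.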

A subtlety of Step 4 above is that we need to sample the lenslet aperture such that, when a Fourier transform is used, the sample spacing in Fourier space corresponds to the pixel size of the camera sensor. The field U(x_1, y_1) in the lenslet aperture is related to the irradiance pattern I(x_0, y_0) on the image sensor by the Fresnel integral:

I(x_0, y_0) = (1/(λz))² · | F_ξη{ U(x_1, y_1) · exp[ iπ(x_1² + y_1²)/(λz) ] } |²        (4-8)

where F_ξη{} is the Fourier transform operation, ξ = x_0/(λz) and η = y_0/(λz) are the spatial frequency coordinates of the transform, x_0 and y_0 are the coordinates in the plane of the camera sensor, x_1 and y_1 are the coordinates in the plane of the lenslet array, and z is the distance between the lenslet array and the camera sensor. The field U(x_1, y_1) in the lenslet aperture consists of two elements. The first element is a quadratic phase factor that depends on the focal length of the lenslet, and the second element is a linear combination of Zernike polynomials based on the coefficients exported from Zemax. Consequently, the field in the lenslet plane is given by

U(x_1, y_1) = exp[ −iπ(x_1² + y_1²)/(λf) ] · exp[ i(2π/λ)·( a_2^-2·Z_2^-2(r/r_max, θ) + a_2^0·Z_2^0(r/r_max, θ) + a_2^2·Z_2^2(r/r_max, θ) ) ]        (4-9)

where f is the lenslet focal length, r = √(x_1² + y_1²), and 2·r_max is the width of the diagonal of the lenslet aperture. Since we had discrete elements in the camera sensor (and since we were modeling the problem numerically), Fast Fourier Transforms (FFTs) were used to process the discrete data arrays. An N x N array in the lenslet plane was Fourier transformed to an N x N array in the camera sensor plane. The size of the sensor pixels defined the size of the array elements in the sensor plane. If each pixel of the camera sensor has dimensions p_x x p_y, then these dimensions correspond to spatial frequencies of ξ = p_x/(λz) and η = p_y/(λz). From the properties of FFTs, the total width of the array in the lenslet plane therefore needs to be λz/p_x by λz/p_y. To avoid aliasing, the size of each lenslet should be less than or equal to half the size of its corresponding FFT array. We are free to choose the number of elements in the lenslet array. However, since an FFT needs to be performed for each lenslet, the size of the array should be small, to keep calculation times reasonable. N should also be a power of 2, to take advantage of the speed of the FFT. For this application, 32 x 32 element arrays are used. The creation of the simulated Shack-Hartmann grid image is carried out in IDL code. The routine code file, SHSpot.pro, can be found in APPENDIX A. The first part of the program simply reads in the header of the text file and extracts the relevant values for subsequent processing. The next section loops through the individual lenslets and determines the PSF for each. First, it is determined whether the lenslet is completely obscured,

partially obscured, or not obscured by the pupil edge. Completely obscured lenslets are ignored. For lenslets that are not obscured by the pupil, the phase function over the aperture of the lenslet is determined according to Eq. (4-9) above. For partially obscured lenslets, Eq. (4-9) is again used to determine the phase function over the lenslet aperture, but this function is masked by the pupil, such that the region of the lenslet outside of the pupil is set to zero. For both of the latter two cases, the decentrations of the spot, Δx and Δy, are calculated from Eqs. (4-6) and (4-7). The squared modulus of the Fourier transform for each lenslet is calculated, and the resultant PSF is inserted into a final image array at a point depending upon the lenslet position and the values of Δx and Δy. The final step in creating the simulated Shack-Hartmann image is to blur the spots calculated above by the irradiance distribution of the source on the retina. The PSF for a given lenslet calculated above can be thought of as the spot formed by an infinitesimal spot on the retina. For a finite retinal spot size, the resultant spot in the Shack-Hartmann image is the convolution of the source spot with the lenslet PSF. The size of the spot on the retina needs to be scaled by the magnification of the eye/aberrometer combination. This magnification is simply the ratio of the lenslet focal length to the focal length of the eye. The code models the retinal spot as a Gaussian and, following convolution with the PSF image, saves the image to a bitmap file.

4.5 Simulation Results

For our numerical simulation run, we first generated 1st- and 2nd-order aberration output files with Zemax for every simulated wavefront discussed in section 4.1. We then ran the output files through the IDL code, specifying multiple lenslet focal lengths via an array of

focal lengths. One Shack-Hartmann spot image was generated for each focal length and each input wavefront. Figure 42 illustrates the images obtained for the same input wavefront with five different lenslet focal lengths. Note the increase in spot size with longer focal lengths.

Figure 42: Shack-Hartmann patterns for the same input aberration, but different lenslet focal lengths (f).

A simulated Shack-Hartmann image was generated for each of the virtual input wavefronts. As established in section 3.3, the camera sensor was assumed to have 2048 x 2048 pixels, with each pixel subtending 7.4 x 7.4 μm. These images in turn were run through a reconstruction algorithm to fit the measured wavefront. The reconstruction algorithm provides a set of Zernike coefficients for the reconstructed wavefront. The reconstructed coefficients can be compared directly to the input coefficients to determine the accuracy of the fit. A measure of the fit quality is the RMS of the difference between the two wavefronts. The RMS is easily determined as

RMS = √( Σ_i (a_i − b_i)² )        (4-10)

where a_i is an input Zernike coefficient and b_i is the corresponding reconstructed Zernike coefficient. The piston and tilt coefficients are ignored in the RMS calculation. In addition to the quality of the fit, we also measured the time taken by the reconstruction algorithm. Since the image sensor is much larger than that of the original LADARWave™ device, there was some concern that the processing time would dramatically increase. The time measurement was meant to determine an order of magnitude for the processing length. Figure 43 shows an example of this modeling. Each of the 100 simulated human eye wavefronts was used to create a simulated Shack-Hartmann image. The lenslet focal length in this case was 7.6 mm. The resulting image was then processed and the expansion coefficients compared to the original values. For the vast majority of cases, the RMS error is below 1 micron. Several spikes in the RMS error are seen in this example. The spikes are

primarily due to spots at the edge of the Shack-Hartmann pattern either overlapping or being incorrectly linked to their reference locations.

Figure 43: RMS error for the reconstructed wavefronts. This example is for the human eye wavefronts and a lenslet focal length of 15 mm.

Figure 44 shows the wavefront reconstruction time for the 100 wavefronts shown in Figure 43. The reconstructions were performed on a 3.39 GHz Pentium 4 computer, which is currently a fairly standard clock speed. The average time for all of the reconstructions was approximately 4.6 seconds. The reconstruction algorithm was not optimized for speed, and there are several easy steps that would further diminish the reconstruction time. A time frame of 4.6 seconds for processing is acceptable at this point and does not represent a marked increase in computation time over the earlier device.
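The fit-quality metric is straightforward to sketch. The example coefficient vectors below are invented for illustration; the calculation assumes the orthonormal OSA Zernike convention, under which the RMS wavefront difference equals the root-sum-square of the coefficient differences, with piston and tilt (indices 0-2 in single-index ordering) skipped as described above.

```python
# Sketch of the fit-quality metric: RMS of the difference between the input
# and reconstructed Zernike coefficient vectors, skipping piston and tilt
# (single-index coefficients 0-2). Example numbers are invented.
import math

def rms_error(a, b, skip=3):
    """RMS wavefront difference (same units as the coefficients)."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a[skip:], b[skip:])))

a_in  = [0.0, 0.1, -0.1, 1.50, 0.30, -0.20]   # input coefficients (um)
b_out = [0.2, 0.0,  0.0, 1.45, 0.32, -0.25]   # reconstructed (um)
print(rms_error(a_in, b_out))   # ≈ 0.0735 um
```

Note that the piston and tilt entries differ substantially in the example yet do not contribute, matching the convention used for Figure 43.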

Figure 44: Wavefront reconstruction duration.

It is also important to show an example of the post-Lasik wavefront image. Figure 45 is an example of a 5.00 diopter correction over a 6 mm optical zone. The lenslet focal length is 8 mm, at the lenslet pitch of the final design. The spots inside and outside of the treatment region are circular, but have a different grid spacing due to the difference in power between these regions. Spots falling on the transition of the optical zone are elongated in the radial direction, primarily due to the rapid change of curvature that occurs in this region. Longer focal lengths would amplify the elongation of the spots and their spacing, potentially causing spot overlap or crossover. Lenslets with shorter focal lengths reduce the elongation of these spots and facilitate the image processing required for reconstruction.
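The piecewise wavefront of Eq. (4-2) that drives this pattern can be sketched as a grid of phase values like the one fed to Zemax's Grid Phase Surface. The pupil, zone, and wavelength values follow the text; the sampling at the lenslet pitch and the -5 D example are illustrative.

```python
# Sketch of the piecewise post-Lasik phase of Eq. (4-2), sampled on a grid
# such as the one fed to Zemax's Grid Phase Surface. Units: mm for
# coordinates and wavelength, diopters for Sph; the 1000*lam factor
# follows Eq. (4-2).
import numpy as np

def post_lasik_phase(x, y, sph=-5.0, r_oz=3.0, lam=820e-6):
    """Phase (waves): flat inside the optical zone, spherical outside."""
    r2 = x**2 + y**2
    outside = r2 > r_oz**2
    return np.where(outside, sph * np.pi * (r2 - r_oz**2) / (1000 * lam), 0.0)

# 6 mm optical zone (r_oz = 3 mm) inside an 8 mm pupil, sampled at the
# ~0.42 mm lenslet pitch
coords = np.arange(-4.0, 4.01, 8.0 / 19)
X, Y = np.meshgrid(coords, coords)
phi = post_lasik_phase(X, Y)
print(phi[len(coords) // 2, len(coords) // 2])   # 0.0 at the pupil centre
```

The gradient of this array (Eqs. 4-3/4-4) is what sets the different spot spacings inside and outside the treatment zone seen in Figure 45.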

Figure 45: Simulated post-Lasik Shack-Hartmann pattern.

Finally, Figure 46 shows the average RMS error for different lenslet focal lengths, given the three input wavefront categories. There appears to be an optimum value of the lenslet focal length at approximately 8 mm, especially for the human eye wavefronts and the post-Lasik wavefronts. For longer focal lengths, the spot size increases and the spots can interact or merge, limiting the accuracy of the reconstructions. The number of spikes increased with longer focal lengths, suggesting that the sensor reached its dynamic range limit. There is a slight increase in RMS error for shorter focal lengths. This increase is most likely caused by the sensitivity limitation of the sensor, due to the small lenslet focal length. It can also be attributed to the reduced spot size yielding an insufficient number of pixels per spot. Reduced spot sampling limits the accuracy to which the spot centroid can be determined.

Figure 46: The average RMS reconstruction error versus lenslet focal length, for the defocus, human eye, and post-Lasik wavefront categories.

In summary, a lenslet focal length of 8 mm appears ideal for the 2048x2048, 7.4 micron pixel detector chosen. This geometry strikes a balance between the size of the spot formed by each lenslet and the sampling of this spot by the camera sensor. From Figure 46, we can infer that for measuring the defocus of most normal eyes, a lenslet focal length of up to 15 mm would yield reasonable accuracy. In order to measure the most extreme aberrations, i.e., post-Lasik wavefronts, we need to choose the lenslet focal length that yields the most dynamic range for the given detector. Reconstruction time for this sensor is easily under 5 seconds and can be further optimized. Following the numerical simulation validation, the parameters of the sensor determined previously by the analytical approach remained unchanged:

f_lenslet = 8 mm, d ≈ 0.42 mm, p = 7.4 μm, resolution = 2048 × 2048 pixels

5 EXPERIMENTAL PROTOTYPE

Following the conceptual approach for optimization and the numerical simulation validation, the third main step in developing the high-dynamic-range aberrometer consisted of building an experimental prototype according to the design parameters determined in the earlier sections. The experimental prototype was built at Alcon Laboratories, Inc. The optical layout is shown in Figure 47.

Figure 47: The schematic layout of the experimental prototype. Both the sensor path and the illumination (probe beam) path are shown. W - protective window; PBS - polarizing beam-splitting cube; L1, L2 - afocal relay lenses, f = 60 mm; M1 - hot mirror; FT - subject fixation target; LA - lenslet array, f = 8 mm; CCD - detector; M2 - mirror; L3 - collimator lens; SLD - superluminescent diode.

A good portion of the mechanical design layout and some of the prototype components were borrowed from existing LADARWave™ machines available in the laboratory. A photograph of the prototype is shown in Figure 48.

Figure 48: Experimental prototype. The brass-colored fixture houses the lenslet array; the CCD camera sits on top; to the right, the black barrel houses the afocal relay.

The new parts relevant to this work are the custom-made lenslet array, from MEMS Optical Inc., and the semi-custom 1-inch high-resolution camera from Lumenera Corp. (Ottawa, Canada), with a Kodak KAI-4022 CCD detector. The relevant lenslet and detector specifications were:

f_lenslet = 8 mm, d ≈ 0.42 mm, p = 7.4 μm, resolution = 2048 × 2048 pixels

More detector specifications can be found in APPENDIX B.

Presenting the details of the mechanical design and the construction of the prototype is not within the scope of this dissertation. However, in the next two sections we briefly present a simple design improvement we made to the illumination path of the sensor, as well as a tolerance analysis method employed early in the design process. We then present the actual results obtained with the experimental prototype.

5.1 Illumination Path

The illumination path of the system, also called the probe beam path, is responsible for delivering a narrow beam of light to the eye, forming a scatter spot on the retina, which in turn becomes the source for the sensing path of the system. The probe beam path starts with a light source, in our case a superluminescent diode with a wavelength of 820 nm, a collimating lens, and a polarizing beam-splitting cube. The probe beam is usually centered on the line of sight of the subject by being aligned with the fixation target axis.

5.1.1 Wavelength Selection

The wavelength selection for the probe beam took several aspects into account. The near-infrared band was favored because it offers a safety benefit and is nearly invisible to humans; it is therefore more relaxing for the eye, ensuring better subject fixation during the measurements. It was also important to choose a source wavelength close enough to the visible spectrum, where the CCD still has reasonable efficiency. A near-infrared wavelength of 820 nm was chosen. A longer wavelength also enhances the reflectivity of the retina [71]. However, since the human eye exhibits longitudinal

chromatic aberration (Figure 49), near-infrared (NIR) measurement results will have to be converted into visible-wavelength estimates. An accurate transformation of the NIR wavefront measurements of the human eye is vital to compute the proper correction that should be performed. Studies have revealed that the higher-order aberrations are not affected by this chromatic shift [72]. The only affected aberration is the defocus. There are optical and statistical models that predict the defocus error due to chromatic shift in human eyes, and empirical results show very good correlation with these models.

Figure 49: Zemax simulation of the focal shift dependence on wavelength in a model eye.

A reasonable image signal-to-noise ratio (SNR) was achievable while maintaining safe eye exposure levels. The probe beam power entering the eye was kept at 10 μW. This constituted roughly 6% of the maximum permissible exposure specified by the American National Standards Institute [73].
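Returning to the NIR-to-visible conversion discussed above, the chromatic defocus shift can be sketched with a published reduced-eye chromatic model of the form D(λ) = p − q/(λ − c), as popularized by Thibos et al. The constants below are quoted from that model and should be verified against the original publication; the function names and wavelength choices are ours, not taken from the dissertation's software.

```python
# Sketch: shifting a NIR defocus measurement to a visible-wavelength
# estimate with a reduced-eye chromatic model, D(lam) = p - q/(lam - c).
# Constants from the Thibos et al. reduced-eye model (quoted, verify);
# 555 nm is chosen here only as a representative visible wavelength.

P, Q, C = 1.68524, 633.46, 214.102  # lam in nm, D in diopters

def chromatic_defocus(lam_nm: float) -> float:
    """Refractive error of the model eye at wavelength lam_nm (diopters)."""
    return P - Q / (lam_nm - C)

def nir_to_visible_defocus(defocus_nir_d: float,
                           lam_nir: float = 820.0,
                           lam_vis: float = 555.0) -> float:
    """Shift a defocus measured at lam_nir to its estimate at lam_vis.
    Higher-order terms are left untouched, per the cited studies."""
    return defocus_nir_d - (chromatic_defocus(lam_nir) - chromatic_defocus(lam_vis))

shift = chromatic_defocus(820.0) - chromatic_defocus(555.0)
print(f"chromatic shift 820 -> 555 nm: {shift:.2f} D")
```

The model predicts a shift somewhat under a diopter between 820 nm and mid-visible wavelengths, consistent with the defocus-only correction described in the text.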

5.1.2 Corneal reflection mitigation

To prevent the unwanted corneal reflection from entering the sensor path, the probe beam was originally linearly polarized by the PBS cube, so that the identically polarized corneal reflection would not be transmitted through the same cube into the sensor path. While this was an effective way of blocking the corneal reflection, the PBS cube also blocked the portion of the retinal scattered light that maintained its linear polarization, thus limiting the passing signal to a smaller percentage of depolarized scattered light, as shown in Figure 50.

Figure 50: Linearly polarized light scheme. (a) The horizontally polarized corneal reflection is blocked by the vertical transmission polarizer of the PBS. (b) The portion of the retinal reflection that maintains its horizontal polarization is blocked by the PBS; the only signal let through is the depolarized portion of the retinal scatter.

In order to increase the overall SNR, we wanted a scheme for blocking the corneal reflection that did not involve filtering the polarization state of the signal. The solution was to decenter the axis of the probe beam arm relative to the pupil center, and consequently to the vertex of the cornea, so that the reflection from the cornea would take place at a slant angle and be blocked by the aperture of the system. The concept is illustrated in Figure 51.

Figure 51: Preventing the corneal reflection from entering the sensor path by decentering the probe beam relative to the corneal vertex. The dotted lines represent the retinal scatter, and subsequently the eye wavefront, not being affected by the decenter.

We calculated that for most normal corneal curvatures, a decenter of 1 mm would be enough to effectively divert the unwanted reflection. It was hypothesized that a 1 mm decentration would not affect the aberration measurements, since the retinal spot would still be formed within the eye's foveola for most normal eyes. For an emmetropic eye, the decenter should not affect the location of the retinal spot. For a -10 D myopic eye, the 1 mm decenter translates into a 170-micron off-axis displacement of the retinal spot. However, the spot itself has a size of 150 microns on average, rendering such small displacements less significant.
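The 170-micron figure quoted above can be reproduced with a back-of-the-envelope reduced-eye calculation. The sketch below assumes generic reduced-eye parameters (n ≈ 1.336, internal focal length ≈ 22.2 mm), not the dissertation's exact eye model:

```python
# Sketch of the decenter geometry: a collimated probe beam decentered by h
# focuses at the eye's focal point; for a myopic eye the focus lies in
# front of the retina, and the pencil walks off-axis by h * (dz / f')
# between the focus and the retina. Reduced-eye values are assumptions.

def retinal_spot_displacement(decenter_mm: float,
                              refractive_error_d: float,
                              f_eye_mm: float = 22.2,
                              n_eye: float = 1.336) -> float:
    """Approximate lateral displacement (mm) of the retinal spot."""
    # Axial distance (mm) between the beam focus and the retina:
    dz_mm = abs(refractive_error_d) * (f_eye_mm / 1000.0) ** 2 / n_eye * 1000.0
    # Similar triangles from focus to retina:
    return decenter_mm * dz_mm / f_eye_mm

print(retinal_spot_displacement(1.0, 0.0))    # emmetrope: no displacement
print(retinal_spot_displacement(1.0, -10.0))  # ~0.17 mm, i.e. ~170 microns
```

With these assumed parameters the -10 D case lands near 0.17 mm, matching the estimate in the text, and the emmetropic case gives zero displacement as stated.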

In case our hypothesis was refuted by our real-eye measurements, we wanted the possibility of quickly returning to the polarization scheme, with minimal adjustments and without interchanging parts. We therefore proposed to replace the front window cover of the system (see Figure 47) with a quarter-wave plate (QWP) of the same dimensions. If the axis of the QWP is aligned with the axis of the PBS cube, there is virtually no change. If we turn the QWP by 45 degrees, we essentially obtain circularly polarized light entering the eye. The portion of the light reflected by the cornea that maintains its polarization will come back circularly polarized with the opposite handedness, pass once again through the QWP, become linearly polarized again, but this time in the vertical direction, and pass through the PBS cube at close to 100% transmission into the sensor path. The concept is shown in Figure 52.

Figure 52: Decentered probe beam and circular polarization scheme. (a) The corneal reflection is diverted away from the sensor path. (b) The portion of retinal scatter that maintains its polarization state is let through along with the depolarized part.

By employing the decenter and circular polarization scheme, we were able to show SNR improvements of approximately 40% on average. Figure 53 illustrates the improved SNR.

Figure 53: Shack-Hartmann spot image of the same eye measured on the same device. Centered, linearly polarized light (left). Decentered, circularly polarized light (right).

An added benefit of using circularly polarized light for our illumination path was the improvement in uniformity across the spot image. For certain eyes, linearly polarized light creates non-uniform intensity patterns across the pupil (bow-tie, crosshatch). These intensity modulations are due to the specific spatial distributions of corneal birefringence, corneal shape, and corneal thickness. Circularly polarized light seems to alleviate this problem to a certain degree, yielding a more uniform intensity over the pupil and consequently over the Shack-Hartmann image. Similar findings were reported by Marcos et al. [74]. We show such an example in Figure 54. Using a different polarization for eye aberration measurements should pose no threat to the accuracy of the results: it has been reported in the literature that no detectable differences exist between measurements with unpolarized, linearly polarized, or circularly polarized light [74,75].

Figure 54: Shack-Hartmann spot image of the same eye, exhibiting a bow-tie intensity pattern, measured on the same device. Centered, linearly polarized light (left). Decentered, circularly polarized light (right).

5.2 Tolerances, Calibration and Alignment

For the experimental prototype we were able to have a much larger number of degrees of freedom in adjustability than a commercial device would under normal circumstances. The elements in our sensor optical path, especially the lenslet array and the CCD camera, were mounted so as to permit extensive and precise freedom of movement in all dimensions. For the lenslet array we were able to adjust the tilt in x and y, the translation in x, y, and z, and the clocking about the z-axis, while for the detector we had freedom of movement in all tilts and translations. This unprecedented adjustability, coupled with the availability of several validated alignment and calibration tools, such as wavefront generators and model eyes of known specifications, as well as proven wavefront reconstruction software, made the alignment and calibration tasks significantly easier.

5.2.1 Tolerance analysis method using raytracing software

For educational purposes, however, we briefly present a tolerance analysis method we used early in the design process. The analysis is performed at the raytracing level (in Zemax) and assumes the unavailability of Shack-Hartmann image processing and reconstruction software (which was our situation in the beginning). The objective of this early analysis was to judge the severity of the effect of optical misalignments on wavefront measurement accuracy, even after sensor calibration. Our hypothesis was that even though sensor calibration using a reference wavefront minimizes these errors, the impact of the misalignments at the extremes of the dynamic range has to be accounted for. Ideally, if possible, the sensor should be realigned and re-referenced until the extreme wavefronts fall within the error margins as well. The main hurdle we attempted to overcome with this tolerancing technique was translating the spot displacement error due to misalignments into RMS wavefront measurement error, without any reconstruction software. The method we propose consists of a Monte Carlo tolerance analysis based on a lenslet chief-ray error merit function, followed by a re-optimization of the worst-case trial using a variable dummy phase surface at the entrance pupil of the system. For clarity, we present the tolerance analysis technique as five distinct steps. It should be noted that most of the tolerancing parameters, such as the number of Monte Carlo runs, the number of effective lenslet sub-apertures employed in the calculation, and the tolerance values, were selected to illustrate the concept. They can and should be varied depending on the system, the fabrication limitations, and the engineer's objective.

Step 1: Range of measurement determination

The desired dynamic range of wavefront measurement for the sensor is determined. In our example the dynamic range was set from -15 to +15 diopters. Within this range we chose three wavefronts for our calculation:

Myopic wavefront (WF_M), a converging spherical wavefront of +15 D.
Reference wavefront (WF_0), a plane wavefront.
Hyperopic wavefront (WF_H), a diverging spherical wavefront of -15 D.

Step 2: Nominal design reference coordinates computation

The CCD-plane intercept coordinates of the lenslet chief rays passing through 25 selected lenslet sub-apertures were computed and stored, using Zemax, for the three above-mentioned input wavefronts.

Figure 55: The 25 lenslet apertures chosen for the tolerance analysis calculation.

We also calculated the displacement vectors between the reference coordinates and the two extreme wavefront coordinates and stored them separately. We selected only 25 representative sub-apertures in order to expedite the simulation runtime. The ray spots corresponding to the selected sub-apertures are shown in Figure 55.
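The bookkeeping of Step 2 (and the departure criterion used in the Monte Carlo step) can be sketched in a few lines. The toy coordinates below are invented for illustration; the real values come from the Zemax raytrace through the 25 selected sub-apertures.

```python
# Sketch of the Step 2 bookkeeping: store nominal CCD intercepts per
# wavefront and the displacement vectors (extreme minus reference). The
# root-sum-square of spot departures is the merit criterion of Step 3.
import math

def displacement_vectors(ref_xy, extreme_xy):
    """Per-lenslet displacement vectors between two sets of CCD intercepts."""
    return [(xe - xr, ye - yr)
            for (xr, yr), (xe, ye) in zip(ref_xy, extreme_xy)]

def rss_departure(expected_xy, actual_xy):
    """Square root of the sum of squared spot departures (merit criterion)."""
    return math.sqrt(sum((xa - xe) ** 2 + (ya - ye) ** 2
                         for (xe, ye), (xa, ya) in zip(expected_xy, actual_xy)))

# Toy data: 3 of the 25 sub-apertures (coordinates in mm on the CCD).
ref  = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]
wf_m = [(0.0, 0.0), (0.25, 0.0), (0.0, 0.25)]  # myopic: spots pulled inward

print(displacement_vectors(ref, wf_m))  # -> [(0.0, 0.0), (-0.25, 0.0), (0.0, -0.25)]
print(round(rss_departure(ref, wf_m), 4))
```

In Step 4 the stored displacement vectors are added to the recalibrated reference grid to obtain the expected spot locations for the extreme wavefronts, and `rss_departure` measures the discrepancy against the actual raytrace.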

Step 3: Monte Carlo tolerance analysis

Using commercial values for the tilts and decenters of the optical elements to perturb the system, and using the CCD image plane defocus, decenter, and tilt as compensators, a 100-run Monte Carlo tolerance analysis was performed in Zemax, using WF_0 as input and the square root of the sum of squared departures from the reference intercept coordinates as the merit function criterion. All individual Monte Carlo trials were saved during the run.

Step 4: Worst-case coordinates computation

Following the Monte Carlo run, the worst-case trial was chosen as being representative of a poorly fabricated, misaligned system. For this worst-case system, a new set of reference coordinates was computed using WF_0 as input, effectively accounting for a system calibration. The new expected spot locations for WF_M and WF_H were computed using the new reference grid together with the displacement vectors calculated in Step 2. After actually running WF_M and WF_H through the worst-case system and obtaining the real intercept coordinates, we compared them to the expected ones. The discrepancies between the two sets can cause measurement error.

Step 5: System re-optimization

In order to express the degradation of the system performance due to ray intercept inconsistency as wavefront measurement error, we performed a re-optimization of the worst-case Monte Carlo trial using a variable Zernike Phase Surface. We first removed all variables from the compensated system and introduced a dummy Zernike Phase

Surface (ΔWF) at the entrance pupil of the system. This concept is schematically illustrated in Figure 56.

Figure 56: Schematic rendering of the misaligned system and the variable Zernike Phase Surface.

We set the first 15 Zernike coefficients C_n^m as variables and optimized the worst-case system with respect to the same merit function as in Step 3, using WF_M and WF_H as inputs. The optimized Zernike coefficients of the ΔWF surface represent the departure from the plane wave needed to compensate for the misalignment errors in order to bring the spot coordinates to their expected locations. Thus, ΔWF represents the wavefront measurement error of the system when measuring at the extremes of the dynamic range. The RMS wavefront error is given by:

RMS WF Error = √( Σ (C_n^m)² )     (5-1)
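Equation (5-1) relies on the Zernike terms being orthonormal over the pupil, so the RMS is simply the root-sum-square of the coefficients. A minimal sketch (the coefficient values are invented for illustration):

```python
# Equation (5-1) as code: RMS wavefront error from orthonormal Zernike
# coefficients (piston excluded). Coefficient values below are invented.
import math

def rms_wavefront_error(zernike_coeffs):
    """RMS error from a list of normalized Zernike coefficients C_n^m,
    all expressed in the same length unit (e.g., waves or microns)."""
    return math.sqrt(sum(c * c for c in zernike_coeffs))

coeffs_waves = [0.8, -0.3, 1.2, 0.05]  # hypothetical C_n^m values, in waves
print(round(rms_wavefront_error(coeffs_waves), 3))  # -> 1.474
```

This is the quantity reported below (on the order of 3 waves for the worst-case misaligned system at the extremes of the dynamic range).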

For reasonable misalignment values we obtained RMS measurement errors on the order of 3 waves for our given wavefronts, which, converted into spherical equivalent, translated into a 0.5 D error. Our calibration and alignment iterations enabled us to achieve much better measurement accuracy, as presented in the next section. As pointed out at the beginning of this section, the tolerance analysis presented here was used only as an exercise, before more sophisticated simulation techniques were developed.

5.3 Experimental Results

Following careful alignment and calibration, we proceeded to test our experimental prototype for accuracy, repeatability, sensitivity, and dynamic range. The testing was intended as a preliminary proof of concept and not as a full clinical validation of our system. During testing we used both calibrated standard tools with previously known aberrations and human eye measurements. In order to compare the performance of our prototype, we used the LADARWave™ device as a benchmark. When possible, we used the device specification sheet as a performance requirement, while in other cases (i.e., human eye trials) we actually performed the respective measurements on both our prototype and a stock LADARWave™ device.

5.3.1 Accuracy Measurements

For accuracy tests we first used a calibrated tool, which allowed us to generate wavefronts of known aberrations. The tool, named the PSA (positive spherical aberration) tool, consisted of a light source and a plano-convex lens (Figure 57). The

spacing between the source and the lens was adjustable by means of a digital micrometer. By adjusting the source-to-lens distance, the vergence of the generated wavefront could be varied. We were careful not to use the PSA tool in any of the calibration procedures that preceded the testing, to ensure objectivity.

Figure 57: Schematic illustration of the PSA. Source-to-lens distance adjusted for a hyperopic wavefront (left). Source-to-lens distance adjusted for a myopic wavefront (right).

For set micrometer positions, the defocus term C_2^0 and the spherical aberration term C_4^0, as well as the spherical equivalent in diopters, are shown in Table 11. All other coefficients are smaller than half a micron. All coefficients are measured and calculated over an 8 mm pupil.

Table 11: PSA tool aberrations as a function of micrometer position.
Micrometer Position (mm) | Spherical Equivalent (D) | C_2^0 (mm) | C_4^0 (mm)
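The source-to-lens adjustment follows simple thin-lens vergence arithmetic: for a thin lens of power P with the point source a distance s behind it, the emerging vergence is V = P − 1/s. The sketch below uses an assumed lens power for illustration; it is not the actual PSA lens specification.

```python
# Thin-lens vergence sketch for the PSA tool: the source contributes a
# diverging vergence of -1/s at the lens, so the output is V = P - 1/s.
# The 20 D lens power is an assumed value, not the real PSA lens.

def generated_vergence(p_lens_d: float, source_dist_m: float) -> float:
    """Vergence (diopters) of the wavefront leaving a thin lens of power
    p_lens_d with a point source source_dist_m behind it (paraxial)."""
    return p_lens_d - 1.0 / source_dist_m

P = 20.0  # assumed lens power (diopters); focal length f = 1/P = 50 mm
print(generated_vergence(P, 0.050))   # source at the focal point -> ~0 D (plane wave)
print(generated_vergence(P, 0.040))   # source inside focus -> -5 D (diverging, hyperopic)
print(generated_vergence(P, 0.0625))  # source beyond focus -> +4 D (converging, myopic)
```

Moving the source closer than the focal point simulates a hyperopic wavefront, and moving it farther simulates a myopic one, matching the two configurations of Figure 57.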

We mounted the PSA tool on our prototype and took three runs through the whole dioptric range of the tool. The LADARWave™ accuracy limits we were attempting to meet are shown in Table 12.

Table 12: LADARWave™ accuracy requirement chart.
Spherical Equivalent (D)  | Dioptric Error (D) | High-Order RMS wavefront error (μm)
Less than -8 D            | ±1% SphEq          | 0.15
Between -8 D and -5 D     | ±1% SphEq          | 0.10
Between -5 D and -3 D     | ± …                | …
Between -3 D and +3 D     | ± …                | …
Between +3 D and +5 D     | ± …                | …
Between +5 D and +8 D     | ±1% SphEq          | 0.10
Greater than +8 D         | ±1% SphEq          | 0.15

The results show a very good correlation between expected and measured aberration values. For illustration purposes we show two measurement results. The first measurement was taken with the micrometer position set for -6 D.

Figure 58: Prototype spot image for PSA tool at -6 D.

Figure 59: Measured aberrations versus expected values, PSA tool at -6 D.

Figure 60: High-order (defocus and astigmatism terms excluded) measured aberrations versus expected values, PSA tool at -6 D.

The measured defocus term C_2^0 was … mm, very close to the expected value of … mm. Therefore, the dioptric error in this case was very small, … D. The spherical aberration term C_4^0 was measured to be … mm, compared to the expected value of … mm. The high-order RMS wavefront error between the measured and expected values was calculated to be 0.11 microns. The next example is the measurement taken with the micrometer position set for -12 D. Notice in Figure 61 that the spots become more closely packed, especially around the edge of the pupil.

Figure 61: Prototype spot image for PSA tool at -12 D.

Figure 62: Measured aberrations versus expected values, PSA tool at -12 D.

Figure 63: High-order (defocus and astigmatism terms excluded) measured aberrations versus expected values, PSA tool at -12 D.

For the -12 D case, the measured defocus term C_2^0 was … mm, close to the expected value of … mm. The difference translated into a dioptric error of … D. The spherical aberration term C_4^0 was measured to be … mm, compared to the expected value of … mm. The high-order RMS wavefront error between the measured and expected values was calculated to be 0.29 microns. The same measurements were repeated for the entire range of the PSA tool, from +8 to -14 D. The defocus measurements proved to be very accurate, as the dioptric error, plotted in Figure 64, remained within the allowable error margins.

Figure 64: Measured sphere error using the PSA tool.

From the figure above we notice that the sensor has a tendency to overestimate the sphere error in both the positive and negative directions. This could indicate a very slight error in the positioning of the CCD camera relative to the lenslet array.
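The CCD-positioning hypothesis can be made quantitative: a Shack-Hartmann reconstructor converts each spot displacement into a slope by dividing by the lenslet-to-detector spacing, so an error in the assumed spacing scales every slope, and with it the defocus estimate, by f_true/f_assumed, producing a symmetric over- or under-estimation of sphere. The sketch below uses assumed numbers (both the focal length and the spacing error are illustrative, not measured values for the prototype).

```python
# Sketch of the CCD-spacing hypothesis: slopes are computed as
# displacement / f_assumed, but the spots were formed at f_true, so each
# reconstructed slope, and the defocus estimate with it, is scaled by
# f_true / f_assumed. All numbers here are assumptions for illustration.

def measured_defocus(true_defocus_d: float,
                     f_true_mm: float,
                     f_assumed_mm: float) -> float:
    """Defocus the sensor would report if it assumes the wrong
    lenslet-to-detector spacing (pure-defocus wavefront, small angles)."""
    return true_defocus_d * f_true_mm / f_assumed_mm

f_assumed = 8.0  # calibrated lenslet focal length, mm (assumed)
f_true = 8.1     # hypothetical 0.1 mm spacing error
for d_true in (-12.0, -6.0, 6.0, 12.0):
    print(d_true, "->", round(measured_defocus(d_true, f_true, f_assumed), 3))
```

With f_true > f_assumed the reported magnitude is inflated for both myopic and hyperopic wavefronts, which is consistent with the symmetric overestimation seen in Figure 64.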

In terms of high-order RMS wavefront error, the sensor accuracy satisfies the specification requirements up to a -6 D myopic wavefront. Beyond that limit, the high-order RMS error increases almost linearly with the wavefront vergence. Since the sphere error was accurate over the entire range, we can speculate that the increased high-order error was caused by a slight misalignment between the tool and the sensor. The overall high-order aberration measurement error is still relatively small.

Figure 65: Measured high-order RMS wavefront error using the PSA tool.

5.3.2 Repeatability measurements

As previously mentioned, we took three measurements at each of the dioptric increments of the PSA tool. In order to assess the repeatability of the prototype measurements, we computed the standard deviation of the defocus term (expressed as Spherical Equivalent),

the standard deviation of the spherical aberration term, as well as the standard deviation of the high-order aberration RMS value.

Figure 66: Defocus coefficient expressed as Spherical Equivalent. Standard deviation for three independent measurements.

Figure 67: Spherical aberration C_4^0 coefficient. Standard deviation for three independent measurements.

Figure 68: High-order RMS wavefront value. Standard deviation for three independent measurements.

From Figures 66 through 68 it can be inferred that the sensor measurements have very high repeatability. The defocus measurement standard deviation is within … D.

5.3.3 Sensitivity preservation

In order to assess the sensitivity performance of the prototype, the micrometer was moved between two very close positions: position #1, … mm, corresponding to 0 D, and position #2, … mm, corresponding to 0.02 D. Three measurements were taken at each position on both the LADARWave™ device and our prototype. The results are shown in Figure 69.

Figure 69: Sensitivity measurement. Comparison between the LADARWave™ device and our prototype.

The results show very good agreement in the differential defocus measured between the two micrometer positions. The standard deviation for the three measurements was also similar between the devices: … D for the LADARWave™ device and … D for our prototype. The sensitivity of the two devices was expected to be very similar, since the focal length of the prototype lenslet was chosen to yield equivalent sensitivity relative to the LADARWave™ device, and the spot size covered a very similar number of pixels on both devices, as shown in Figure 70.

Figure 70: Spot size. (a) LADARWave™. (b) Experimental prototype.

5.3.4 Dynamic Range improvement

The dynamic range measurements were the most important for our case, because they represented the main objective of our work. In order to demonstrate the dynamic range gain of our prototype in a dramatic way, we wanted to measure a wavefront positioned beyond the limit of the LADARWave™ device. Although the PSA tool was only calibrated between +8 and -14 diopters, the micrometer knob had room to move beyond -14 D. We therefore mounted the PSA tool on the LADARWave™ device and rotated the micrometer knob until the system errored out. Just before the measurement collapsed, the last available reading was … D, with … mm of spherical aberration (C_4^0). We placed the PSA tool on our experimental prototype and performed the same measurement. The spot images of the respective devices for this highly aberrated limiting wavefront are shown in Figure 71.

Figure 71: Spot image for a … D wavefront, exhibiting … microns of spherical aberration. (a) LADARWave™. (b) Experimental prototype.


More information

Lecture 8. Lecture 8. r 1

Lecture 8. Lecture 8. r 1 Lecture 8 Achromat Design Design starts with desired Next choose your glass materials, i.e. Find P D P D, then get f D P D K K Choose radii (still some freedom left in choice of radii for minimization

More information

OPTI-201/202 Geometrical and Instrumental Optics Copyright 2018 John E. Greivenkamp. Section 16. The Eye

OPTI-201/202 Geometrical and Instrumental Optics Copyright 2018 John E. Greivenkamp. Section 16. The Eye 16-1 Section 16 The Eye The Eye Ciliary Muscle Iris Pupil Optical Axis Visual Axis 16-2 Cornea Right Eye Horizontal Section Zonules Crystalline Lens Vitreous Sclera Retina Macula And Fovea Optic Nerve

More information

November 14, 2017 Vision: photoreceptor cells in eye 3 grps of accessory organs 1-eyebrows, eyelids, & eyelashes 2- lacrimal apparatus:

November 14, 2017 Vision: photoreceptor cells in eye 3 grps of accessory organs 1-eyebrows, eyelids, & eyelashes 2- lacrimal apparatus: Vision: photoreceptor cells in eye 3 grps of accessory organs 1-eyebrows, eyelids, & eyelashes eyebrows: protection from debris & sun eyelids: continuation of skin, protection & lubrication eyelashes:

More information

Representation of Wavefront Aberrations

Representation of Wavefront Aberrations 1 4th Wavefront Congress - San Francisco - February 2003 Representation of Wavefront Aberrations Larry N. Thibos School of Optometry, Indiana University, Bloomington, IN 47405 thibos@indiana.edu http://research.opt.indiana.edu/library/wavefronts/index.htm

More information

SCIENCE 8 WORKBOOK Chapter 6 Human Vision Ms. Jamieson 2018 This workbook belongs to:

SCIENCE 8 WORKBOOK Chapter 6 Human Vision Ms. Jamieson 2018 This workbook belongs to: SCIENCE 8 WORKBOOK Chapter 6 Human Vision Ms. Jamieson 2018 This workbook belongs to: Eric Hamber Secondary 5025 Willow Street Vancouver, BC Table of Contents A. Chapter 6.1 Parts of the eye.. Parts of

More information

EE119 Introduction to Optical Engineering Spring 2002 Final Exam. Name:

EE119 Introduction to Optical Engineering Spring 2002 Final Exam. Name: EE119 Introduction to Optical Engineering Spring 2002 Final Exam Name: SID: CLOSED BOOK. FOUR 8 1/2 X 11 SHEETS OF NOTES, AND SCIENTIFIC POCKET CALCULATOR PERMITTED. TIME ALLOTTED: 180 MINUTES Fundamental

More information

OPTICAL IMAGING AND ABERRATIONS

OPTICAL IMAGING AND ABERRATIONS OPTICAL IMAGING AND ABERRATIONS PARTI RAY GEOMETRICAL OPTICS VIRENDRA N. MAHAJAN THE AEROSPACE CORPORATION AND THE UNIVERSITY OF SOUTHERN CALIFORNIA SPIE O P T I C A L E N G I N E E R I N G P R E S S A

More information

Chapter 6 Human Vision

Chapter 6 Human Vision Chapter 6 Notes: Human Vision Name: Block: Human Vision The Humane Eye: 8) 1) 2) 9) 10) 4) 5) 11) 12) 3) 13) 6) 7) Functions of the Eye: 1) Cornea a transparent tissue the iris and pupil; provides most

More information

OpenStax-CNX module: m Vision Correction * OpenStax

OpenStax-CNX module: m Vision Correction * OpenStax OpenStax-CNX module: m42484 1 Vision Correction * OpenStax This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 3.0 Abstract Identify and discuss common vision

More information

Lens Design I. Lecture 3: Properties of optical systems II Herbert Gross. Summer term

Lens Design I. Lecture 3: Properties of optical systems II Herbert Gross. Summer term Lens Design I Lecture 3: Properties of optical systems II 207-04-20 Herbert Gross Summer term 207 www.iap.uni-jena.de 2 Preliminary Schedule - Lens Design I 207 06.04. Basics 2 3.04. Properties of optical

More information

GIST OF THE UNIT BASED ON DIFFERENT CONCEPTS IN THE UNIT (BRIEFLY AS POINT WISE). RAY OPTICS

GIST OF THE UNIT BASED ON DIFFERENT CONCEPTS IN THE UNIT (BRIEFLY AS POINT WISE). RAY OPTICS 209 GIST OF THE UNIT BASED ON DIFFERENT CONCEPTS IN THE UNIT (BRIEFLY AS POINT WISE). RAY OPTICS Reflection of light: - The bouncing of light back into the same medium from a surface is called reflection

More information

Section 22. The Eye The Eye. Ciliary Muscle. Sclera. Zonules. Macula And Fovea. Iris. Retina. Pupil. Optical Axis.

Section 22. The Eye The Eye. Ciliary Muscle. Sclera. Zonules. Macula And Fovea. Iris. Retina. Pupil. Optical Axis. Section 22 The Eye 22-1 The Eye Optical Axis Visual Axis Pupil Iris Cornea Right Eye Horizontal Section Ciliary Muscle Zonules Crystalline Lens Vitreous Sclera Retina Macula And Fovea Optic Nerve 22-2

More information

10/8/ dpt. n 21 = n n' r D = The electromagnetic spectrum. A few words about light. BÓDIS Emőke 02 October Optical Imaging in the Eye

10/8/ dpt. n 21 = n n' r D = The electromagnetic spectrum. A few words about light. BÓDIS Emőke 02 October Optical Imaging in the Eye A few words about light BÓDIS Emőke 02 October 2012 Optical Imaging in the Eye Healthy eye: 25 cm, v1 v2 Let s determine the change in the refractive power between the two extremes during accommodation!

More information

EE119 Introduction to Optical Engineering Spring 2003 Final Exam. Name:

EE119 Introduction to Optical Engineering Spring 2003 Final Exam. Name: EE119 Introduction to Optical Engineering Spring 2003 Final Exam Name: SID: CLOSED BOOK. THREE 8 1/2 X 11 SHEETS OF NOTES, AND SCIENTIFIC POCKET CALCULATOR PERMITTED. TIME ALLOTTED: 180 MINUTES Fundamental

More information

IMAGE SENSOR SOLUTIONS. KAC-96-1/5" Lens Kit. KODAK KAC-96-1/5" Lens Kit. for use with the KODAK CMOS Image Sensors. November 2004 Revision 2

IMAGE SENSOR SOLUTIONS. KAC-96-1/5 Lens Kit. KODAK KAC-96-1/5 Lens Kit. for use with the KODAK CMOS Image Sensors. November 2004 Revision 2 KODAK for use with the KODAK CMOS Image Sensors November 2004 Revision 2 1.1 Introduction Choosing the right lens is a critical aspect of designing an imaging system. Typically the trade off between image

More information

Multiwavelength Shack-Hartmann Aberrometer

Multiwavelength Shack-Hartmann Aberrometer Multiwavelength Shack-Hartmann Aberrometer By Prateek Jain Copyright Prateek Jain 26 A Dissertation Submitted to the Faculty of the COMMITTEE ON OPTICAL SCIENCES (GRADUATE) In Partial Fulfillment of the

More information

Physics Chapter Review Chapter 25- The Eye and Optical Instruments Ethan Blitstein

Physics Chapter Review Chapter 25- The Eye and Optical Instruments Ethan Blitstein Physics Chapter Review Chapter 25- The Eye and Optical Instruments Ethan Blitstein The Human Eye As light enters through the human eye it first passes through the cornea (a thin transparent membrane of

More information

Lenses- Worksheet. (Use a ray box to answer questions 3 to 7)

Lenses- Worksheet. (Use a ray box to answer questions 3 to 7) Lenses- Worksheet 1. Look at the lenses in front of you and try to distinguish the different types of lenses? Describe each type and record its characteristics. 2. Using the lenses in front of you, look

More information

ECEN 4606, UNDERGRADUATE OPTICS LAB

ECEN 4606, UNDERGRADUATE OPTICS LAB ECEN 4606, UNDERGRADUATE OPTICS LAB Lab 2: Imaging 1 the Telescope Original Version: Prof. McLeod SUMMARY: In this lab you will become familiar with the use of one or more lenses to create images of distant

More information

11/23/11. A few words about light nm The electromagnetic spectrum. BÓDIS Emőke 22 November Schematic structure of the eye

11/23/11. A few words about light nm The electromagnetic spectrum. BÓDIS Emőke 22 November Schematic structure of the eye 11/23/11 A few words about light 300-850nm 400-800 nm BÓDIS Emőke 22 November 2011 The electromagnetic spectrum see only 1/70 of the electromagnetic spectrum The External Structure: The Immediate Structure:

More information

3.0 Alignment Equipment and Diagnostic Tools:

3.0 Alignment Equipment and Diagnostic Tools: 3.0 Alignment Equipment and Diagnostic Tools: Alignment equipment The alignment telescope and its use The laser autostigmatic cube (LACI) interferometer A pin -- and how to find the center of curvature

More information

Chapters 1 & 2. Definitions and applications Conceptual basis of photogrammetric processing

Chapters 1 & 2. Definitions and applications Conceptual basis of photogrammetric processing Chapters 1 & 2 Chapter 1: Photogrammetry Definitions and applications Conceptual basis of photogrammetric processing Transition from two-dimensional imagery to three-dimensional information Automation

More information

Lecture 4: Geometrical Optics 2. Optical Systems. Images and Pupils. Rays. Wavefronts. Aberrations. Outline

Lecture 4: Geometrical Optics 2. Optical Systems. Images and Pupils. Rays. Wavefronts. Aberrations. Outline Lecture 4: Geometrical Optics 2 Outline 1 Optical Systems 2 Images and Pupils 3 Rays 4 Wavefronts 5 Aberrations Christoph U. Keller, Leiden University, keller@strw.leidenuniv.nl Lecture 4: Geometrical

More information

By Dr. Abdelaziz Hussein

By Dr. Abdelaziz Hussein By Dr. Abdelaziz Hussein Light is a form of radiant energy, consisting of electromagnetic waves a. Velocity of light: In air it is 300,000 km/second. b. Wave length: The wave-length of visible light to

More information

Pantoscopic tilt induced higher order aberrations characterization using Shack Hartmann wave front sensor and comparison with Martin s Rule.

Pantoscopic tilt induced higher order aberrations characterization using Shack Hartmann wave front sensor and comparison with Martin s Rule. Research Article http://www.alliedacademies.org/ophthalmic-and-eye-research/ Pantoscopic tilt induced higher order aberrations characterization using Shack Hartmann wave front sensor and comparison with

More information

Why is There a Black Dot when Defocus = 1λ?

Why is There a Black Dot when Defocus = 1λ? Why is There a Black Dot when Defocus = 1λ? W = W 020 = a 020 ρ 2 When a 020 = 1λ Sag of the wavefront at full aperture (ρ = 1) = 1λ Sag of the wavefront at ρ = 0.707 = 0.5λ Area of the pupil from ρ =

More information

Vision 1. Physical Properties of Light. Overview of Topics. Light, Optics, & The Eye Chaudhuri, Chapter 8

Vision 1. Physical Properties of Light. Overview of Topics. Light, Optics, & The Eye Chaudhuri, Chapter 8 Vision 1 Light, Optics, & The Eye Chaudhuri, Chapter 8 1 1 Overview of Topics Physical Properties of Light Physical properties of light Interaction of light with objects Anatomy of the eye 2 3 Light A

More information

30 Lenses. Lenses change the paths of light.

30 Lenses. Lenses change the paths of light. Lenses change the paths of light. A light ray bends as it enters glass and bends again as it leaves. Light passing through glass of a certain shape can form an image that appears larger, smaller, closer,

More information

Science 8 Unit 2 Pack:

Science 8 Unit 2 Pack: Science 8 Unit 2 Pack: Name Page 0 Section 4.1 : The Properties of Waves Pages By the end of section 4.1 you should be able to understand the following: Waves are disturbances that transmit energy from

More information

Lecture 26. PHY 112: Light, Color and Vision. Finalities. Final: Thursday May 19, 2:15 to 4:45 pm. Prof. Clark McGrew Physics D 134

Lecture 26. PHY 112: Light, Color and Vision. Finalities. Final: Thursday May 19, 2:15 to 4:45 pm. Prof. Clark McGrew Physics D 134 PHY 112: Light, Color and Vision Lecture 26 Prof. Clark McGrew Physics D 134 Finalities Final: Thursday May 19, 2:15 to 4:45 pm ESS 079 (this room) Lecture 26 PHY 112 Lecture 1 Introductory Chapters Chapters

More information

SCIENCE 8 WORKBOOK Chapter 6 Human Vision Ms. Jamieson 2018 This workbook belongs to:

SCIENCE 8 WORKBOOK Chapter 6 Human Vision Ms. Jamieson 2018 This workbook belongs to: SCIENCE 8 WORKBOOK Chapter 6 Human Vision Ms. Jamieson 2018 This workbook belongs to: Eric Hamber Secondary 5025 Willow Street Vancouver, BC Table of Contents A. Chapter 6.1 Parts of the eye.. Parts of

More information

J. C. Wyant Fall, 2012 Optics Optical Testing and Testing Instrumentation

J. C. Wyant Fall, 2012 Optics Optical Testing and Testing Instrumentation J. C. Wyant Fall, 2012 Optics 513 - Optical Testing and Testing Instrumentation Introduction 1. Measurement of Paraxial Properties of Optical Systems 1.1 Thin Lenses 1.1.1 Measurements Based on Image Equation

More information

Chapter 23 Study Questions Name: Class:

Chapter 23 Study Questions Name: Class: Chapter 23 Study Questions Name: Class: Multiple Choice Identify the letter of the choice that best completes the statement or answers the question. 1. When you look at yourself in a plane mirror, you

More information

Lecture 9. Lecture 9. t (min)

Lecture 9. Lecture 9. t (min) Sensitivity of the Eye Lecture 9 The eye is capable of dark adaptation. This comes about by opening of the iris, as well as a change in rod cell photochemistry fovea only least perceptible brightness 10

More information

2mm pupil. (12) Patent Application Publication (10) Pub. No.: US 2006/ A1. (19) United States. (43) Pub. Date: Sep. 14, 2006.

2mm pupil. (12) Patent Application Publication (10) Pub. No.: US 2006/ A1. (19) United States. (43) Pub. Date: Sep. 14, 2006. (19) United States (12) Patent Application Publication (10) Pub. No.: US 2006/0203198A1 Liang US 20060203198A1 (43) Pub. Date: Sep. 14, 2006 (54) (75) (73) (21) (22) (60) ALGORTHMS AND METHODS FOR DETERMINING

More information

Optics of Wavefront. Austin Roorda, Ph.D. University of Houston College of Optometry

Optics of Wavefront. Austin Roorda, Ph.D. University of Houston College of Optometry Optics of Wavefront Austin Roorda, Ph.D. University of Houston College of Optometry Geometrical Optics Relationships between pupil size, refractive error and blur Optics of the eye: Depth of Focus 2 mm

More information

Statistical Analysis of Hartmann-Shack Images of a Pre-school Population

Statistical Analysis of Hartmann-Shack Images of a Pre-school Population Statistical Analysis of Hartmann-Shack Images of a Pre-school Population by Damber Thapa A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Master

More information

Chapter 3 Optical Systems

Chapter 3 Optical Systems Chapter 3 Optical Systems The Human Eye [Reading Assignment, Hecht 5.7.1-5.7.3; see also Smith Chapter 5] retina aqueous vitreous fovea-macula cornea lens blind spot optic nerve iris cornea f b aqueous

More information

PHY 431 Homework Set #5 Due Nov. 20 at the start of class

PHY 431 Homework Set #5 Due Nov. 20 at the start of class PHY 431 Homework Set #5 Due Nov. 0 at the start of class 1) Newton s rings (10%) The radius of curvature of the convex surface of a plano-convex lens is 30 cm. The lens is placed with its convex side down

More information

Use of Computer Generated Holograms for Testing Aspheric Optics

Use of Computer Generated Holograms for Testing Aspheric Optics Use of Computer Generated Holograms for Testing Aspheric Optics James H. Burge and James C. Wyant Optical Sciences Center, University of Arizona, Tucson, AZ 85721 http://www.optics.arizona.edu/jcwyant,

More information

Optical System Design

Optical System Design Phys 531 Lecture 12 14 October 2004 Optical System Design Last time: Surveyed examples of optical systems Today, discuss system design Lens design = course of its own (not taught by me!) Try to give some

More information

THE EYE. People of Asian descent have an EPICANTHIC FOLD in the upper eyelid; no functional difference.

THE EYE. People of Asian descent have an EPICANTHIC FOLD in the upper eyelid; no functional difference. THE EYE The eye is in the orbit of the skull for protection. Within the orbit are 6 extrinsic eye muscles, which move the eye. There are 4 cranial nerves: Optic (II), Occulomotor (III), Trochlear (IV),

More information

E X P E R I M E N T 12

E X P E R I M E N T 12 E X P E R I M E N T 12 Mirrors and Lenses Produced by the Physics Staff at Collin College Copyright Collin College Physics Department. All Rights Reserved. University Physics II, Exp 12: Mirrors and Lenses

More information

Be aware that there is no universal notation for the various quantities.

Be aware that there is no universal notation for the various quantities. Fourier Optics v2.4 Ray tracing is limited in its ability to describe optics because it ignores the wave properties of light. Diffraction is needed to explain image spatial resolution and contrast and

More information

Properties of Structured Light

Properties of Structured Light Properties of Structured Light Gaussian Beams Structured light sources using lasers as the illumination source are governed by theories of Gaussian beams. Unlike incoherent sources, coherent laser sources

More information

Slide 4 Now we have the same components that we find in our eye. The analogy is made clear in this slide. Slide 5 Important structures in the eye

Slide 4 Now we have the same components that we find in our eye. The analogy is made clear in this slide. Slide 5 Important structures in the eye Vision 1 Slide 2 The obvious analogy for the eye is a camera, and the simplest camera is a pinhole camera: a dark box with light-sensitive film on one side and a pinhole on the other. The image is made

More information

INTRODUCTION THIN LENSES. Introduction. given by the paraxial refraction equation derived last lecture: Thin lenses (19.1) = 1. Double-lens systems

INTRODUCTION THIN LENSES. Introduction. given by the paraxial refraction equation derived last lecture: Thin lenses (19.1) = 1. Double-lens systems Chapter 9 OPTICAL INSTRUMENTS Introduction Thin lenses Double-lens systems Aberrations Camera Human eye Compound microscope Summary INTRODUCTION Knowledge of geometrical optics, diffraction and interference,

More information

Choices and Vision. Jeffrey Koziol M.D. Thursday, December 6, 12

Choices and Vision. Jeffrey Koziol M.D. Thursday, December 6, 12 Choices and Vision Jeffrey Koziol M.D. How does the eye work? What is myopia? What is hyperopia? What is astigmatism? What is presbyopia? How the eye works How the Eye Works 3 How the eye works Light rays

More information

Astigmatism. image. object

Astigmatism. image. object TORIC LENSES Astigmatism In astigmatism, different meridians of the eye have different refractive errors. This results in horizontal and vertical lines being focused different distances from the retina.

More information

Big League Cryogenics and Vacuum The LHC at CERN

Big League Cryogenics and Vacuum The LHC at CERN Big League Cryogenics and Vacuum The LHC at CERN A typical astronomical instrument must maintain about one cubic meter at a pressure of

More information

25 cm. 60 cm. 50 cm. 40 cm.

25 cm. 60 cm. 50 cm. 40 cm. Geometrical Optics 7. The image formed by a plane mirror is: (a) Real. (b) Virtual. (c) Erect and of equal size. (d) Laterally inverted. (e) B, c, and d. (f) A, b and c. 8. A real image is that: (a) Which

More information

Applications of Optics

Applications of Optics Nicholas J. Giordano www.cengage.com/physics/giordano Chapter 26 Applications of Optics Marilyn Akins, PhD Broome Community College Applications of Optics Many devices are based on the principles of optics

More information

Optical Design with Zemax

Optical Design with Zemax Optical Design with Zemax Lecture : Correction II 3--9 Herbert Gross Summer term www.iap.uni-jena.de Correction II Preliminary time schedule 6.. Introduction Introduction, Zemax interface, menues, file

More information

The eye & corrective lenses

The eye & corrective lenses Phys 102 Lecture 20 The eye & corrective lenses 1 Today we will... Apply concepts from ray optics & lenses Simple optical instruments the camera & the eye Learn about the human eye Accommodation Myopia,

More information

Testing Aspherics Using Two-Wavelength Holography

Testing Aspherics Using Two-Wavelength Holography Reprinted from APPLIED OPTICS. Vol. 10, page 2113, September 1971 Copyright 1971 by the Optical Society of America and reprinted by permission of the copyright owner Testing Aspherics Using Two-Wavelength

More information

Author Contact Information: Erik Gross VISX Incorporated 3400 Central Expressway Santa Clara, CA, 95051

Author Contact Information: Erik Gross VISX Incorporated 3400 Central Expressway Santa Clara, CA, 95051 Author Contact Information: Erik Gross VISX Incorporated 3400 Central Expressway Santa Clara, CA, 95051 Telephone: 408-773-7117 Fax: 408-773-7253 Email: erikg@visx.com Improvements in the Calculation and

More information

Refraction of Light. Refraction of Light

Refraction of Light. Refraction of Light 1 Refraction of Light Activity: Disappearing coin Place an empty cup on the table and drop a penny in it. Look down into the cup so that you can see the coin. Move back away from the cup slowly until the

More information

The Aberration Structure of the Keratoconic Eye

The Aberration Structure of the Keratoconic Eye The Aberration Structure of the Keratoconic Eye Geunyoung Yoon, Ph.D. Department of Ophthalmology Center for Visual Science Institute of Optics Department of Biomedical Engineering University of Rochester

More information

Introduction. The Human Eye. Physics 1CL OPTICAL INSTRUMENTS AND THE EYE SPRING 2010

Introduction. The Human Eye. Physics 1CL OPTICAL INSTRUMENTS AND THE EYE SPRING 2010 Introduction Most of the subject material in this lab can be found in Chapter 25 of Serway and Faughn. In this lab, you will make images of images using lenses and the optical bench (Experiment A). IT

More information

INTRODUCTION TO ABERRATIONS IN OPTICAL IMAGING SYSTEMS

INTRODUCTION TO ABERRATIONS IN OPTICAL IMAGING SYSTEMS INTRODUCTION TO ABERRATIONS IN OPTICAL IMAGING SYSTEMS JOSE SASIÄN University of Arizona ШШ CAMBRIDGE Щ0 UNIVERSITY PRESS Contents Preface Acknowledgements Harold H. Hopkins Roland V. Shack Symbols 1 Introduction

More information

Warren J. Smith Chief Scientist, Consultant Rockwell Collins Optronics Carlsbad, California

Warren J. Smith Chief Scientist, Consultant Rockwell Collins Optronics Carlsbad, California Modern Optical Engineering The Design of Optical Systems Warren J. Smith Chief Scientist, Consultant Rockwell Collins Optronics Carlsbad, California Fourth Edition Me Graw Hill New York Chicago San Francisco

More information

Digital Wavefront Sensors Measure Aberrations in Eyes

Digital Wavefront Sensors Measure Aberrations in Eyes Contact: Igor Lyuboshenko contact@phaseview.com Internet: www.phaseview.com Digital Measure Aberrations in Eyes 1 in Ophthalmology...2 2 Analogue...3 3 Digital...5 Figures: Figure 1. Major technology nodes

More information

Computer Generated Holograms for Optical Testing

Computer Generated Holograms for Optical Testing Computer Generated Holograms for Optical Testing Dr. Jim Burge Associate Professor Optical Sciences and Astronomy University of Arizona jburge@optics.arizona.edu 520-621-8182 Computer Generated Holograms

More information

The Special Senses: Vision

The Special Senses: Vision OLLI Lecture 5 The Special Senses: Vision Vision The eyes are the sensory organs for vision. They collect light waves through their photoreceptors (located in the retina) and transmit them as nerve impulses

More information

Yokohama City University lecture INTRODUCTION TO HUMAN VISION Presentation notes 7/10/14

Yokohama City University lecture INTRODUCTION TO HUMAN VISION Presentation notes 7/10/14 Yokohama City University lecture INTRODUCTION TO HUMAN VISION Presentation notes 7/10/14 1. INTRODUCTION TO HUMAN VISION Self introduction Dr. Salmon Northeastern State University, Oklahoma. USA Teach

More information