CHARACTERIZATION OF A PUPILLOMETRIC CAMERA


POSTGRADUATE SEMINAR ON ILLUMINATION ENGINEERING, SPRING 2008
LIGHTING UNIT, DEPARTMENT OF ELECTRONICS, HELSINKI UNIVERSITY OF TECHNOLOGY (TKK)

OPTICAL PERFORMANCE: CHARACTERIZATION OF A PUPILLOMETRIC CAMERA

Petteri Teikari, petteri.teikari@gmail.com
Emmi Rautkylä, emmi.rautkylä@tkk.fi

ABSTRACT

The object of this work is to help the reader understand the process of characterizing a camera used in pupillometric research. In our case, the characterization consists of measuring geometric aberrations, sharpness (MTF) and noise, and determining the dynamic range. Finally, some ways to compensate for the observed flaws are presented.

TABLE OF CONTENTS

ABSTRACT
TABLE OF CONTENTS
1 INTRODUCTION
2 OPTICS & IMAGING
2.1 Structure of lenses and their optical characteristics
2.1.1 Focal length
2.1.2 Aperture
2.1.3 Image formation
2.1.4 Depth of field (DOF)
2.1.5 Modulation transfer function (MTF) and contrast
2.2 Noise
2.3 Dynamic range
2.4 Optical aberrations
2.4.1 Chromatic aberration
2.4.2 Geometric aberrations
2.4.3 Vignetting
2.4.4 Diffraction
2.5 Aberration correction
3 APPLIED LENS DESIGN
3.1 Measurement science & machine vision
3.2 Photography
4 CHARACTERIZING OPTICAL PERFORMANCE IN PRACTICE
4.1 Pupillometry & overview of the setup
4.2 Methods
4.3 Results of the measurements
4.3.1 Modulation transfer function and sharpness
4.3.2 Geometric aberrations
4.3.3 Dynamic range
4.3.4 Noise
4.4 Image restoration
4.4.1 Sharpness
4.4.2 Geometric aberrations
4.4.3 Dynamic range
4.4.4 Noise
4.5 Conclusions
5 DISCUSSION
APPENDICES
Appendix 1: The Matlab code for an ideal image
Appendix 2: The Matlab code for an image with coma
Appendix 3: The Matlab code for averaging monochrome images
Appendix 4: The Matlab code for image quality calculation
Appendix 5: The Matlab code for calculating variance of one row in the image
REFERENCES

1 INTRODUCTION

Cameras do not 'see' in the same way that human beings do. They are not equivalent to human optics, because their lens design defines how the image is formed. Therefore, if cameras are to be used for research purposes, it is important to know how to minimize the effect of the lens system on the research data. The object of this work is to help the reader understand the process of characterizing a camera. Under special examination is a pupillometric camera, that is, a camera used for providing data about the autonomic nervous system by recording pupil size and dynamics. The experimental part of the paper gives a practical example of characterizing such a pupillometric camera, which is very sensitive to aberrations and noise, and discusses possible ways to improve the image quality. That, together with the discussion, forms the core of the paper and raises questions for experiments to come. The work is meant for people not very familiar with optics or pupillometry. It takes a simple approach to optics and imaging in Chapter 2. Chapter 3, for its part, gives more insight into metrology with a review of the more specific lens designs used in machine vision and measurement science applications.

2 OPTICS & IMAGING

In this chapter the basic concepts of optical systems are reviewed in the detail needed for this work.

2.1 STRUCTURE OF LENSES AND THEIR OPTICAL CHARACTERISTICS

A lens or lens system is the optical structure that defines how the image is formed. In practice, a lens that can be bought from a store always contains several lens elements (basic types illustrated in Figure 1 [1]), and therefore in this work the word lens refers to the lens system. Lenses can be categorized roughly into wide-angle and telephoto lenses, where wide-angle lenses have a larger angle of view (with a smaller focal length) and telephoto lenses a smaller angle of view (with a larger focal length). Lenses can either have a fixed focal length (prime lenses) or the focal length can be changed, for example from wide-angle to telephoto, in which case they are commonly referred to as zoom lenses. Lenses could also be characterized according to their lens design and the number of elements, but that kind of characterization is beyond the scope of this work. Figure 2 demonstrates the typical structure of a lens for commercial digital cameras.

The lens is mounted on the camera using specific bayonet mounts [2], which are poorly intercompatible in commercial cameras, even though adapters exist to fit different bayonets to a given camera. In machine vision cameras there are typically three mount types [3]: 1) CS-mount, 2) C-mount, and 3) M12x0.5 (metric threads). C-mount is most commonly found in mid-range to high-end optics, whereas CS-mount is a bit rarer; a C-mount lens can be fitted onto a CS-mount camera using a proper spacer. The flange (back focal) distance differs between C- and CS-mounts: for C-mount it is 17.52 mm from flange to sensor, whereas for CS-mount it is 12.52 mm [4]. C-mounts are often found on microscopes too. M12x0.5 is found in cheaper cameras.

A zoom wheel is used to change the focal length if it is not fixed, which it however is in many machine vision lenses. The aperture controls the amount of light reaching the image-forming surface (film or sensor). An aperture is found in practically all lenses; in commercial lenses it can be adjusted, but again in many machine vision lenses the aperture is fixed. Machine vision lenses rarely have an image stabilizer, which would allow longer exposures with image sharpness comparable to a shorter exposure without the stabilizer. In low light level situations an image stabilizer can significantly enhance the image quality. A focus wheel or similar structure is then used to make the image sharp. In the following chapters the basic optical characteristics of lenses are reviewed.

Figure 1. Lenses classified by the curvature of the two optical surfaces [1].

Figure 2. Illustration of the typical structure of an objective used in SLR cameras. It should be noted that in most machine vision lenses there is no possibility to adjust the focus, the focal length (zoom adjustment) or the aperture, as they are all fixed. Also, an image stabilizer is hardly ever found in objectives intended for machine vision applications. (Picture: Tuomas Sauliala)

2.1.1 FOCAL LENGTH

The focal length of an optical system is a measure of how strongly it converges (focuses) or diverges (diffuses) light. A system with a shorter focal length has greater optical power than one with a long focal length, as illustrated in Figure 3. For a thin lens in air, the focal length is the distance from the center of the lens to the principal foci (or focal points) of the lens. For a converging lens (for example a convex lens), the focal length is positive, and is the distance at which a beam of collimated light will be focused to a single spot. For a diverging lens (for example a concave lens), the focal length is negative, and is the distance to the point from which a collimated beam appears to be diverging after passing through the lens. The focal length of an ideal lens can be calculated in the following manner [5]:

\frac{1}{f} = (n-1)\left(\frac{1}{R_1} - \frac{1}{R_2}\right)    (1)

where
f = focal length
n = refractive index
R_1 = radius of curvature 1 (object side of the lens)
R_2 = radius of curvature 2 (imaging side of the lens)

However, real-life lenses are not infinitely thin; therefore, for the case of a lens of thickness d in air, and surfaces with radii of curvature R_1 and R_2, the effective focal length f is given by [5]:

\frac{1}{f} = (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)\,d}{n R_1 R_2}\right]    (2)

where
f = focal length
n = refractive index
d = lens thickness
R_1 = radius of curvature 1 (object side of the lens)
R_2 = radius of curvature 2 (imaging side of the lens)

From Eq. (2) it can be seen that if the lens is convex the focal length will increase as the thickness increases. An estimate of the focal length of a lens system comprised of two infinitely thin lenses can also be derived from Eq. (2):

\frac{1}{f} = \frac{1}{f_1} + \frac{1}{f_2} - \frac{d_{ist}}{f_1 f_2}    (3)

where
f = focal length
d_ist = distance between the lenses
f_1 = focal length of lens 1
f_2 = focal length of lens 2

Figure 3. Focal length. The light rays transmitted through an infinitely thin lens meet at point P, making the focal length of the lens f. (Picture: Tuomas Sauliala)
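As a sanity check on Eqs. (1)-(3), the following short Matlab sketch evaluates them for a biconvex lens and a two-lens system; the glass parameters and radii below are made-up example values, not those of any lens discussed in this paper.

% Lensmaker's equation for a thick biconvex lens, Eq. (2).
n  = 1.52;        % refractive index (illustrative crown-glass value)
R1 = 50e-3;       % front radius of curvature [m], convex -> positive
R2 = -50e-3;      % back radius of curvature [m], convex -> negative
d  = 5e-3;        % center thickness [m]
f_thick = 1 / ((n-1) * (1/R1 - 1/R2 + (n-1)*d/(n*R1*R2)));
% Thin-lens limit, Eq. (1), for comparison:
f_thin  = 1 / ((n-1) * (1/R1 - 1/R2));
% Two thin lenses separated by d_ist, Eq. (3):
f1 = 100e-3; f2 = 200e-3; d_ist = 20e-3;
f_sys = 1 / (1/f1 + 1/f2 - d_ist/(f1*f2));
fprintf('f_thick = %.1f mm, f_thin = %.1f mm, f_sys = %.1f mm\n', ...
        1e3*f_thick, 1e3*f_thin, 1e3*f_sys);

Running this shows the behavior stated above: the thick-lens focal length (about 49 mm here) is slightly longer than the thin-lens value, and grows with the thickness d.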

As mentioned earlier, the focal length relates to the angle of view of the lens. Due to the popularity of the 35 mm standard (135 film, full-frame digital cameras), camera and lens combinations are often described in terms of their 35 mm equivalent focal length, that is, the focal length of a lens that would have the same angle of view, or field of view, if used on a full-frame 35 mm camera. Use of a 35 mm equivalent focal length is particularly common with digital cameras, which often use sensors smaller than 35 mm film, and so require correspondingly shorter focal lengths to achieve a given angle of view, by a factor known as the crop factor. In machine vision cameras the imaging sensor is often smaller than the 135 film frame; thus, for a corresponding angle of view, machine vision lenses have a smaller focal length than their 35 mm counterparts. Table 1 shows the diagonal, horizontal, and vertical angles of view, in degrees, for lenses producing rectilinear images when used with the 36 mm x 24 mm format (that is, 135 film or full-frame 35 mm digital, width 36 mm, height 24 mm) [6]. The same comparison can easily be done for a machine vision imaging system simply by multiplying the focal length by a constant (the crop factor) when the sensor size is known.

Table 1. Common lens angles of view for 135 film or a full-frame 35 mm digital camera [6]. Columns: focal length (mm); diagonal, vertical, and horizontal angle of view (degrees).

2.1.2 APERTURE

The aperture is the circular opening at the center of a lens that admits light. It is generally specified by the f-stop (the f-number), which is the focal length divided by the aperture diameter. It is a dimensionless number that is a quantitative measure of lens speed, an important concept in imaging [7]. Hence, a large aperture corresponds to a small f-stop. A change of one unit in f-stop corresponds to halving or doubling the light exposure. The f-number accurately describes the light-gathering ability of a lens only for objects an infinite distance away. In optical design, an alternative is often needed for systems where the object is not far from the lens. In these cases the working f-number is used. Because the human eye responds to relative luminance differences, exposure differences are often measured in f-stops. The f-number of the human eye varies from about f/8.3 in a very brightly lit place to about f/2.1 in the dark [8]. Toxic substances and poisons (like atropine) can significantly reduce this range. Pharmaceutical products such as eye drops may also cause similar side effects.

Figure 4. Diagram of decreasing apertures, that is, increasing f-numbers, in one-stop increments; each aperture has half the light-gathering area of the previous one. The actual size of the aperture will depend on the focal length of the lens. Increasing the f-number will also increase the depth of field, as discussed later in more detail; thus the aperture does not simply regulate the amount of light entering the image sensor [7].
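The angle-of-view and crop-factor relations are straightforward to compute directly. The following Matlab sketch uses a 50 mm lens and an assumed 1/4" machine vision sensor (3.2 x 2.4 mm active area) purely as illustrative values:

% Rectilinear angle of view for a 50 mm lens on the 36 x 24 mm format.
f = 50e-3;                         % focal length [m]
w = 36e-3;  h = 24e-3;             % full-frame sensor width and height [m]
d = hypot(w, h);                   % sensor diagonal [m]
aov = @(s) 2 * atand(s ./ (2*f));  % angle of view [deg] for sensor dimension s
fprintf('diagonal %.1f, horizontal %.1f, vertical %.1f degrees\n', ...
        aov(d), aov(w), aov(h));
% Crop factor of a smaller sensor, e.g. an assumed 1/4" machine vision CCD:
crop = d / hypot(3.2e-3, 2.4e-3);
fprintf('crop factor %.1f -> a %g mm lens acts like a %.0f mm lens on 35 mm\n', ...
        crop, 1e3*f, crop * 1e3*f);

For these values the 50 mm lens gives a 46.8-degree diagonal angle of view on full frame, while on the small sensor (crop factor about 10.8) it frames like a 540 mm telephoto would on 35 mm.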

2.1.3 IMAGE FORMATION

The lens forms the image on the imaging plane (or focal plane). The front focal point (tip of the y arrow in Figure 5) of an optical system, by definition, has the property that any ray that passes through it will emerge from the system parallel to the optical axis. The rear (or back) focal point (tip of the y' arrow in Figure 5) of the system has the reverse property: rays that enter the system parallel to the optical axis are focused such that they pass through the rear focal point. The front and rear (or back) focal planes are defined as the planes, perpendicular to the optical axis, which pass through the front and rear focal points. An object an infinite distance away from the optical system forms an image at the rear focal plane. For objects a finite distance away, the image is formed at a different location, but rays that leave the object parallel to one another cross at the rear focal plane [9].

Figure 5. Image formation on an image plane. (Picture: Tuomas Sauliala)

The following relation exists between the focal length and the object image formation [10]:

f = \frac{D h_2}{h_1 + h_2}    (4)

where
f = focal length
D = imaging distance
h_1 = image height
h_2 = object height

Magnification m is the relationship between the physical size of the object and the size of the image on the sensor or plane. It should be noted that a magnification of 1:1 is a rather high magnification in photography and in machine vision, and such magnifications are only found in macro lenses capable of focusing relatively close [11]. In Figure 5 the magnification can be calculated from the following equation:

m = \frac{y'}{y} = \frac{q}{f} = \frac{f}{p} = \frac{b}{a}    (5)

where
m = magnification
y' = height of the formed image
y = height of the object

and when a - f = p:

\frac{1}{f} = \frac{1}{a} + \frac{1}{b}    (6)

where
f = focal length
a = distance of the object from the lens
b = distance of the imaging plane from the lens

It can also be seen from Figure 5 that there is a relation between the illuminance of the image plane, the focal length and the distance of the object. If the same object is imaged at a close distance with a wide-angle focal length at the same F-number of the lens as with a telephoto focal length, so that the formed image y' has the same size, the so-called effective F-number is then larger than the F-number of the optics (aperture). This reduction of light can be expressed with the following equation:

F' = \frac{f + q}{D} = F(1 + m)    (7)

where
F' = effective F-number
F = F-number of the aperture
D = aperture diameter
m = magnification

2.1.4 DEPTH OF FIELD (DOF)

In optics, particularly as it relates to film and photography, the depth of field (DOF) is the portion of a scene that appears sharp in the image. Although a lens can precisely focus at only one distance, the decrease in sharpness is gradual on either side of the focused distance, so that within the DOF, the unsharpness is imperceptible under normal viewing conditions. The DOF is determined by the subject distance (that is, the distance to the plane that is perfectly in focus), the lens focal length, and the lens f-number (relative aperture). Except at close-up distances, DOF is approximately determined by the subject magnification and the lens f-number. For a given f-number, increasing the magnification, either by moving closer to the subject or using a lens of greater focal length, decreases the DOF; decreasing magnification increases DOF. For a given subject magnification, increasing the f-number (decreasing the aperture diameter) increases the DOF; decreasing the f-number decreases DOF, as illustrated in Figure 6 [12]. Depth of field obeys the following relation:

\Delta = \delta F    (8)

where
\Delta = depth of the sharp area on the image plane
\delta = diameter of the circle of confusion
F = F-number of the aperture

Figure 6. The role of the aperture (diaphragm) in image formation. When reducing the aperture (increasing the f-number), some of the light rays are clipped away, which changes the circle of confusion. With the aperture position illustrated in the figure, the circle of confusion is b; with the biggest aperture the circle of confusion would be a. (Picture: Tuomas Sauliala)
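As a worked example chaining Eqs. (5)-(8) together, consider the following Matlab sketch. All the numbers are illustrative; the 38 cm object distance merely anticipates the setup of Chapter 4, and the circle-of-confusion value is an assumption of roughly two pixel widths.

% Thin-lens imaging example using Eqs. (5)-(8).
a = 0.38;               % object distance [m]
f = 0.050;              % focal length [m]
b = 1 / (1/f - 1/a);    % image distance [m], from Eq. (6)
m = b / a;              % magnification, Eq. (5)
F = 2.5;                % F-number of the aperture
F_eff = F * (1 + m);    % effective F-number, Eq. (7)
delta = 11e-6;          % circle of confusion, ~2 pixel widths [m] (assumed)
Delta = delta * F;      % depth of the sharp area on the image plane, Eq. (8)
fprintf('b = %.1f mm, m = %.3f, F_eff = %.2f, Delta = %.0f um\n', ...
        b*1e3, m, F_eff, Delta*1e6);

With these values the image forms about 57.6 mm behind the lens at a magnification of roughly 0.15, and the effective F-number rises from 2.5 to about 2.9.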

When focus is set to the hyperfocal distance, the DOF extends from half the hyperfocal distance to infinity, and is the largest DOF possible for a given f-number. There are two commonly used definitions of hyperfocal distance, leading to values that differ only slightly. The first definition: the hyperfocal distance is the closest distance at which a lens can be focused while keeping objects at infinity acceptably sharp; that is, the focus distance with the maximum depth of field. When the lens is focused at this distance, all objects at distances from half of the hyperfocal distance out to infinity will be acceptably sharp. The second definition: the hyperfocal distance is the distance beyond which all objects are acceptably sharp, for a lens focused at infinity. The distinction between the two meanings is rarely made, since they are practically interchangeable and have almost identical values. The value computed according to the first definition exceeds that from the second by just one focal length [13]. The following relation exists for the hyperfocal distance:

D_d = \frac{f^2}{F \delta}    (9)

where
D_d = hyperfocal distance
f = focal length
F = F-number of the aperture
\delta = diameter of the circle of confusion

Since F = f/A, where A is the aperture diameter, it can be further derived from Eq. (9):

D_d = \frac{A f}{\delta}    (10)

where
D_d = hyperfocal distance
A = aperture (lens) diameter
\delta = diameter of the circle of confusion

Thus a larger lens diameter leads to a larger hyperfocal distance. For example, the old Soviet Helios f/1.5 lens has a hyperfocal distance of about 250 m at full aperture (f/1.5) and 48 m at f/8, due to its huge aperture [14].

It is also possible to increase the DOF by taking multiple images of the same object with different focus distances. Focus stacking is a digital image processing technique which combines multiple images taken at different focus distances to give a resulting image with a greater depth of field than any of the individual source images. Available programs for multi-shot DOF enhancement include Syncroscopy AutoMontage, PhotoAcute Studio, the Extended Depth of Field plugin for ImageJ, Helicon Focus and CombineZM. Getting sufficient depth of field can be particularly challenging in microscopy and macro photography [12]. Focus stacking can also be used to create topological maps of structures, as has been done with the freely downloadable Extended Depth of Field plug-in and a fly's eye in Figure 7 [15].

Figure 7. Image stack of a fly's eye. The whole stack is composed of 32 images of 1280x1024 pixels at 14 bits [15].
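The core idea of focus stacking can be sketched in a few lines of Matlab: for each pixel, take the value from the frame that is locally sharpest, with sharpness estimated from the Laplacian response. The sketch below assumes a set of already-aligned monochrome frames; the file names and the 9x9 smoothing window are illustrative choices, not what any of the programs listed above actually do internally.

% Focus stacking by per-pixel selection of the locally sharpest frame.
files = {'focus1.png', 'focus2.png', 'focus3.png'};   % aligned input frames
N   = numel(files);
lap = [0 1 0; 1 -4 1; 0 1 0];                  % Laplacian kernel
for k = 1:N
    im = double(imread(files{k})) / 255;       % monochrome frame in [0,1]
    s  = abs(conv2(im, lap, 'same'));          % local sharpness measure
    s  = conv2(s, ones(9)/81, 'same');         % smooth to suppress noise
    if k == 1
        stack = zeros([size(im) N]); sharp = stack;
    end
    stack(:,:,k) = im;  sharp(:,:,k) = s;
end
[~, idx] = max(sharp, [], 3);                  % sharpest frame per pixel
[r, c] = ndgrid(1:size(idx,1), 1:size(idx,2));
fused = stack(sub2ind(size(stack), r, c, idx));
imwrite(fused, 'focus_stacked.png');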

2.1.5 MODULATION TRANSFER FUNCTION (MTF) AND CONTRAST

The optical transfer function (OTF) describes the spatial (angular) variation of an imaging system as a function of spatial (angular) frequency. When the image is projected onto a flat plane, such as photographic film or a solid-state detector, spatial frequency is the preferred domain, but when the image is referred to the lens alone, angular frequency is preferred. The OTF may be broken down into magnitude and phase components as follows [16]:

OTF(\xi, \eta) = MTF(\xi, \eta) \cdot PTF(\xi, \eta)    (11)

where

MTF(\xi, \eta) = |OTF(\xi, \eta)|
PTF(\xi, \eta) = e^{-i 2\pi \lambda(\xi, \eta)}

and (\xi, \eta) are the spatial frequencies in the x- and y-plane, respectively. The OTF accounts for aberration. The magnitude is known as the modulation transfer function (MTF) and the phase portion is known as the phase transfer function (PTF). In imaging systems, the phase component is typically not captured by the sensor; thus, the important measure with respect to imaging systems is the MTF. Phase is critically important in adaptive optics and holographic systems. The OTF is the Fourier transform of the point spread function (PSF). The sharpness of a photographic imaging system, or of a component of the system (lens, film, image sensor, scanner, enlarging lens, etc.), can thus be characterized by the MTF, as illustrated in Figure 8 [17]. A related quantity is the contrast transfer function (CTF): MTF describes the response of an optical system to an image decomposed into sine waves, whereas CTF describes the response to an image decomposed into square waves [16].

Contrast levels from 100% to 2% are illustrated in Figure 9 for a variable-frequency sine pattern. Contrast is moderately attenuated for MTF = 50% and severely attenuated for MTF = 10%. The 2% pattern is visible only because the viewing conditions are favorable: it is surrounded by neutral gray, it is noiseless (grainless), and the display contrast of CRTs and most LCD displays is relatively high. It could easily become invisible under less favorable conditions.

Many photographic and machine vision lenses produce a superior image in the center of the frame compared to the edges, as illustrated in Figure 11. When using a lens designed to expose a 35 mm film frame with a smaller-format sensor, only the central "sweet spot" of the image is used; a lens that is unacceptably soft or dark around the edges when used in the 35 mm format may produce acceptable results on a smaller sensor [18]. Typically, when decreasing the aperture (increasing the F-number), the difference between center and border sharpness becomes less prominent.

The plot in Figure 10 illustrates the response of a virtual target to the combined effects of an excellent lens (a simulation of the highly regarded Canon 28-70mm f/2.8L) and film (a simulation of Fuji Velvia). Both the sine and bar patterns (original and response) are shown. The red curve is the spatial response of the bar pattern to the film + lens. The blue curve is the combined MTF, i.e., the spatial frequency response of the film + lens, expressed as a percentage of the low-frequency response, indicated on the scale on the left. (It goes over 100%.) The thin blue dashed curve is the MTF of the lens only. The edges in the bar pattern have been broadened, and there are small peaks on either side of the edges. The shape of the edge is inversely related to the MTF response: the more extended the MTF response, the sharper (or narrower) the edge. The mid-frequency boost of the MTF response is related to the small peaks on either side of the edges.
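The slanted-edge (SFR) measurement used later in Chapter 4 builds on the fact that the MTF is the Fourier transform of the line spread function. A stripped-down sketch of the idea, assuming a monochrome region of interest edge_roi containing a near-vertical black-to-white edge, is:

% Edge-based MTF estimate: ESF -> LSF (derivative) -> |FFT| (normalized).
esf = mean(edge_roi, 1);              % edge spread function, averaged over rows
lsf = diff(esf);                      % line spread function
n   = numel(lsf);
w   = 0.5 - 0.5*cos(2*pi*(0:n-1)/(n-1));   % Hann window against truncation ripple
mtf = abs(fft(lsf .* w));
mtf = mtf(1:floor(n/2)) / mtf(1);     % normalize to DC, keep positive frequencies
freq = (0:floor(n/2)-1) / n;          % spatial frequency in cycles/pixel
plot(freq, mtf), xlabel('cycles/pixel'), ylabel('MTF')

A production implementation such as Imatest's SFR module additionally exploits the slant of the edge to supersample the ESF beyond the pixel pitch, but the pipeline above is the essence of the method.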

Figure 8. Visual explanation of MTF and how it relates to image quality. The top is a target composed of bands of increasing spatial frequency, representing 2 to 200 line pairs per mm (lp/mm) on the image plane. Below it are the cumulative effects of the lens, film, lens+film, scanner and sharpening algorithm [17].

Figure 9. Contrast levels from 100% to 2% illustrated for a variable-frequency sine pattern. Contrast is moderately attenuated for MTF = 50% and severely attenuated for MTF = 10%. The 2% pattern is visible only because viewing conditions are favorable: it is surrounded by neutral gray, it is noiseless (grainless), and the display contrast of CRTs and most LCD displays is relatively high. It could easily become invisible under less favorable conditions [17].

Figure 10. The plot illustrates the response of the virtual target to the combined effects of an excellent lens (Canon 28-70mm f/2.8L) and film (a simulation of Velvia) [17].

Figure 11. Illustration of MTF curves for the Canon 28-70 mm f/2.8L USM zoom lens. The graphs show MTF in percent for the three line frequencies of 10 lp/mm, 20 lp/mm and 40 lp/mm, from the center of the image (shown at left) all the way to the corner (shown at right). The top two lines represent 10 lp/mm, the middle two lines 20 lp/mm and the bottom two lines 40 lp/mm. The solid lines represent sagittal MTF (lp/mm aligned like the spokes of a wheel). The broken lines represent tangential MTF (lp/mm arranged like the rim of a wheel, at right angles to the sagittal lines). On the scale at the bottom, 0 represents the center of the image (on axis), 3 represents 3 mm from the center, and 21 represents 21 mm from the center, or the very corner of a 35 mm film image. Separate graphs show results at f/8 and at full aperture [17].

2.2 NOISE

Digital imaging records visual information via a sensor placed at the focal plane of a camera's optics to measure the light gathered during an exposure. The sensor is constructed as an array of pixels, each of which is tasked to gather the light arriving within a small patch of sensor area. The efficiency with which the sensor and its pixels gather light, and the accuracy with which it determines the amount gathered by each pixel, are crucial for the quality of the recorded image. The incoming light is the signal the photographer wishes the camera to transcribe faithfully; inaccuracies in the recording process constitute noise, and distort the scene being photographed. In order to extract the best performance from digital imaging, it is helpful to have an understanding of the various contributions to image noise, how various design choices in digital cameras affect this noise, how choices in photographic exposure can help mitigate noise, and how to ameliorate the visual effect of noise post-capture. Possible noise sources in digital imaging are [19]:

1. Photon shot noise (Figure 12)
2. Read noise (Figure 13)
3. Pattern noise (Figure 14)
4. Thermal noise (Figure 15)
5. Pixel response non-uniformity (Figure 18)
6. Quantization error (Figure 17)

Photon shot noise: Light is made up of discrete bundles of energy called photons -- the more intense the light, the higher the number of photons per second that illuminate the scene. The stream of photons arriving at a given area of the sensor has an average flux (number per second); there are also fluctuations around that average. The statistical laws which govern these fluctuations are called Poisson statistics and are rather universal, encountered in diverse circumstances. The fluctuation in photon counts is visible in images as noise -- Poisson noise, also called photon shot noise; an example is shown in Figure 12. The term "shot noise" arises from an analogy between the discrete photons that make up a stream of light and the tiny pellets that compose the stream of buckshot fired from a shotgun [19].

Read noise: Photons collected by the sensels (the photosensitive part of a pixel) stimulate the emission of electrons, one for each captured photon. After the exposure, the accumulated photo-electrons are converted to a voltage in proportion to their number; this voltage is then amplified by an amount proportional to the ISO gain set in the camera, and digitized in an analog-to-digital converter (ADC). The digital numbers representing the photon counts for all the pixels constitute the RAW data for the image (raw units are sometimes called analog-to-digital units, ADU, or data numbers, DN). In the real world, the raw level does not precisely reflect the photon count. Each electronic circuit component in the signal processing chain -- from sensel readout, to ISO gain, to digitization -- suffers voltage fluctuations that contribute to a deviation of the raw value from the ideal value proportional to the photon count.

Figure 12. Photon shot noise in an image of the sky from a Canon 1D3 (in the green channel). In the histogram at right, the horizontal coordinate is the raw level (raw units are sometimes called analog-to-digital units, ADU, or data numbers, DN); the vertical axis plots the number of pixels in the sample having that raw level. The photon noise was isolated by taking the difference of two successive images; the raw values for any one pixel then differ only by the fluctuations in the photon count due to Poisson statistics (apart from a much smaller contribution from read noise) [19].

Figure 13. Read noise of a Canon 1D3 at ISO 800. The histogram of the noise is approximately Gaussian. The average value of 1024 is due to an offset Canon applies to raw data [19].

The fluctuations in the raw value due to the signal processing electronics constitute the read noise of the sensor (Figure 13) [19].

Pattern noise: In terms of its spatial variation, read noise is not quite white. Upon closer inspection, there are one-dimensional patterns in the fluctuations, as in Figure 14. Because the human eye is adapted to perceive patterns, this pattern or banding noise can be visually more apparent than white noise, even if it comprises a smaller contribution to the overall noise. Pattern noise can have both a fixed component that does not vary from image to image (and can be easily fixed with a proper pattern template), as well as a variable component that, while not random from pixel to pixel, is not the same from image to image (and is much harder to eliminate) [19].

Thermal noise: Thermal agitation of electrons in a sensel can liberate a few electrons; these thermal electrons are indistinguishable from the electrons freed by photon (light) absorption, and thus cause a distortion of the photon count represented by the raw data. Thermal electrons are freed at a relatively constant rate per unit time, thus thermal noise increases with exposure time, as illustrated in Figure 15. Another thermal contribution to image degradation is amplifier glow (Figure 16), which is caused by infrared radiation (heat) emitted by the readout amplifier. For exposures of less than a second or so, read noise is relatively constant and thermal noise constitutes a negligible contribution to overall image noise [19].

Pixel response non-uniformity (PRNU): Not all pixels in a sensor have exactly the same efficiency in capturing and counting photons; even if there were no read noise, photon noise, etc., there would still be a variation in the raw counts from this non-uniformity in pixel response, or PRNU. PRNU "noise" grows in proportion to the exposure level -- different pixels record differing percentages of the photons incident upon them, and so the contribution to the standard deviation of raw values from PRNU rises in direct proportion to the exposure level. On the other hand, photon shot noise grows as the square root of exposure, and read noise is independent of exposure level. Thus PRNU is most important at the highest exposure levels. At lower exposure levels, photon noise is the dominant contribution, until one gets into deep shadows where read noise becomes important [19].

Figure 14. Pattern noise in a 20D at ISO 800. Fixed pattern noise can be removed: by making a template from the average of 16 identical blackframes and subtracting it from the image, most of the fixed pattern noise is removed. The residual variable component of pattern noise consists in this example largely of horizontal banding noise [19].

Figure 15. Thermal noise in 20D blackframes at ISO 400. The knee in the data at an exposure time of 15 s is due to the maximum pixel raw level reaching 4095 (the maximum possible value on this camera), indicating that the rise in standard deviation is largely due to a few outliers in the distribution [19].

Figure 16. Amplifier glow (lower right) in a 612 s exposure of a Canon 20D [19].
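The scaling behaviors described above (shot noise growing as the square root of the signal, PRNU growing linearly, read noise constant) are easy to reproduce numerically. The sketch below uses made-up but plausible parameter values and a Gaussian approximation to Poisson statistics, which is accurate at the photon counts involved:

% Simulated pixel values with shot noise, PRNU and read noise (toy values).
rng(1);
sz        = [480 640];
signal_e  = 1000;                          % mean photoelectrons per pixel
prnu_gain = 1 + 0.006*randn(sz);           % ~0.6% pixel-to-pixel gain spread
mu        = signal_e * prnu_gain;          % per-pixel mean count
shot      = mu + sqrt(mu).*randn(sz);      % Gaussian approx. of Poisson noise
raw       = shot + 4*randn(sz);            % add read noise, sigma = 4 e-
fprintf('measured std %.1f e- vs. shot noise alone %.1f e-\n', ...
        std(raw(:)), sqrt(signal_e));

At 1000 electrons the PRNU term (0.6% of 1000 = 6 electrons) is already comparable to the assumed read noise, and unlike shot noise it keeps growing linearly with exposure, as stated above.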

Quantization error: When the analog voltage signal from the sensor is digitized into a raw value, it is rounded to a nearby integer value. Due to this rounding off, the raw value misstates the actual signal by a slight amount; the error introduced by the digitization is called quantization error, and is sometimes referred to as quantization noise. In practice, this is a rather minor contribution to the noise. Figure 17 shows the effect of quantization on the noise histogram. Averaging the quantization error over a uniformly distributed set of input values yields an average quantization error of about 0.3 of the quantization step. Thus, quantization error is negligible in digital imaging provided the noise exceeds the quantization step [19].

In practice, the following components of an imaging system can influence the abovementioned noise components: 1) small physical pixel size (small sensor), which does not allow a sufficient number of photons to hit the sensor, 2) sensor technology and manufacturing, 3) high ISO speed, 4) long exposure time, 5) digital processing: sampling and quantization, and 6) raw conversion: noise reduction and sharpening. In pupillometry, the infrared radiation used is not utilized very efficiently by CMOS/CCD sensors, whose sensitivity peaks around 550 nm, corresponding to the sensitivity of the human eye. Therefore the camera should have as sensitive a sensor as possible, which is usually given in the technical specifications as the minimum illuminance (lux) needed for image formation (e.g. at F1.4 for the Watec WAT DM2S [20], S/N ratio: 52 dB). Thermal noise can be reduced by cooling the sensor with a Peltier element (see e.g. the modification in [21], commercial Peltier elements [22], and a more theoretical paper from Hamamatsu on CCD sensors for estimating the effect of cooling on dark current [23]), but it should be noted that this cooling can introduce transient-type disturbances [Juha Peltonen, TKK, pers. comm.] to sensitive measurement devices if, for example, simultaneous electroencephalography (EEG) recording is done.

Figure 17. The error introduced by quantization of a noisy signal is rather small. On the left, noise of width eight levels; on the right, the quantization step is increased to eight levels, but the width of the histogram increases by less than 10%.

Figure 18. Noise due to pixel response non-uniformity (PRNU) of a Canon 20D at ISO 100, as a function of raw value. Fluctuations in the response from pixel to pixel are about 0.6% [19].

2.3 DYNAMIC RANGE

Dynamic range (also called exposure range) is the range of brightness over which a camera responds. It is usually measured in f-stops. Cameras with a large dynamic range are able to capture shadow detail and highlight detail at the same time. Practical dynamic range is limited by noise, which tends to be worst in the darkest regions. Dynamic range can be specified as the total range over which noise remains under a specified level, i.e. the lower the level, the higher the image quality [24]. If the imaged scene is static (not the case with the pupil), it is possible to use several exposure values for multiple images and increase the dynamic range with a technique called high dynamic range (HDR) imaging [25,26]. Figure 19 shows the real-world range of luminances, from 10^8 cd/m^2 down to 10^-5 cd/m^2 [27]. This wide range cannot be captured by modern commercial digital cameras with a single exposure. The latest commercial digital cameras, such as the Canon EOS 450D [28], use 14-bit analog-to-digital conversion, but this is not the actual dynamic range of the camera, as there is noise in the sensor output. In digital imaging, a way to maximize the S/N ratio (signal/noise) is to expose to the right of the histogram while avoiding the saturation of white [29]. The Rose criterion (named after Albert Rose)

states that an SNR of at least 5 is needed to be able to distinguish image features with 100% certainty. An SNR of less than 5 means less than 100% certainty in identifying image details [30]. Dynamic range is not a very critical parameter in pupillometry, as the main idea is to isolate the dark pupil from its lighter surroundings, and in theory only two light levels are needed. In practice some additional reserve is needed, but typically the image quality is degraded more by excessive noise than by lack of dynamic range.

Figure 19. Typical range of real-world luminances (cd/m^2). If an imaging device needed to capture all the different shades in a scene with sun and starlight at the same time, it would need a dynamic range of 13 orders of magnitude (10^8/10^-5), i.e. about 43 f-stops or 130 dB [27].

2.4 OPTICAL ABERRATIONS

In an ideal optical system, all rays of light from a point in the object plane converge to the same point in the image plane, forming a clear image (ideal Matlab representation in Figure 20). However, since lens systems are never perfect, they may produce distortions in the image called aberrations. There are many types of aberrations, categorized into 1st order, 3rd order, 5th order, etc. The most common types are the 3rd order aberrations: chromatic, spherical, coma, field curvature, distortion, and astigmatism. In addition to those, the physical structure of the optics may cause vignetting or diffraction of light.

2.4.1 CHROMATIC ABERRATION

The velocity of light changes when it passes through different media. Short wavelengths travel more slowly in glass than long wavelengths. This causes the colors to disperse. The phenomenon is called chromatic aberration and can be seen especially clearly in a prism.

Figure 20. An ideal image from a point source with no aberration present. The source code of the image is presented in Appendix 1. The Matlab code used for this and all subsequent figures includes the constant values H = 1 and W_IJK = 10.2, and r is a function of x and y. The code also sets r*cos(theta) equal to x.

Figure 21. Chromatic aberration is caused by a lens having a different refractive index for different wavelengths of light. (Picture: Tuomas Sauliala)

Chromatic aberration has both longitudinal and lateral components along the optical axis. The phenomenon creates "fringes" of color around the image, because each color in the optical spectrum cannot be focused at a single common point on the optical axis [31]. Instead, a series of differently colored focal points becomes arranged behind one another on the optical axis, as presented in Figure 21. Chromatic aberration can best be seen in the border areas of the picture, where the light refracts the most. It is a common error of lenses having a long focal length and is therefore considered a problem in telescope design. It is also found disturbing in luminaire optics. Chromatic aberration can be corrected by combining a positive (convex) low-dispersion lens with a negative (concave) high-dispersion lens. Also, many types of glass have been developed to reduce chromatic aberration, most notably glasses containing fluorite [32].

2.4.2 GEOMETRIC ABERRATIONS

Geometric aberrations are monochromatic aberrations that, as the name implies, are characterized by changes in the shape of the imaged object. In this chapter they are briefly reviewed.

Spherical Aberration

Refracted light rays from an ideal, infinitely thin spherical lens meet at the focus of the lens. However, all spherical lenses in real life have thickness, which results in the over-refraction of the light beams coming from the edge of the lens, as can be seen in Figure 23. These rays miss the focus point, blurring the image. This type of error is called spherical aberration, and it is a problem of lenses that have small F-numbers. It is difficult to correct the error caused by spherical aberration. The correction may partially be done with a group of two spherical lenses; however, the best result is achieved with aspheric lenses [33].

Figure 23. Spherical aberration is caused by the spherical shape of the lens. (Picture: Tuomas Sauliala)

Coma

Coma is an aberration that occurs when the object is not on the optical axis [34]. The light rays enter the lens at an angle relative to the axis. This causes the system magnification to vary with the pupil position. The image is distorted to resemble the shape of a comet, as can be seen in Figure 22. Coma is a common error in telescope lenses that have a long focal length. It depends on the shape of the lens and can therefore be corrected with a set of lenses of different shapes.

Figure 22. Matlab representation of coma. Coma causes the image to take on a comet shape because the object or light source is not on the optical axis. The Matlab code for coma is presented in Appendix 2 as an example of the source code. The coefficient W_131 was set to 20.2 in this calculation.
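The Appendix 2 code itself is not reproduced here, but the general recipe behind figures like Figure 22 can be sketched as follows: build a Seidel coma wavefront term W_131 * r^3 * cos(theta) over a circular pupil and take the squared magnitude of its Fourier transform as the point spread function. The grid size and coefficient below are illustrative, not the appendix's values.

% PSF of a pupil with Seidel coma, computed via FFT.
N = 256;
[x, y]     = meshgrid(linspace(-1, 1, N));
[theta, r] = cart2pol(x, y);
pupil = double(r <= 1);                    % circular aperture
W131  = 2;                                 % coma coefficient in waves
W     = W131 .* r.^3 .* cos(theta);        % Seidel coma wavefront error
P     = pupil .* exp(1i * 2*pi * W);       % generalized pupil function
psf   = abs(fftshift(fft2(P, 2*N, 2*N))).^2;   % PSF = |FT of pupil|^2
imagesc(psf .^ 0.3), axis image off, colormap(gray)  % gamma for visibility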

The effect of coma can also be reduced by restricting the height of the light rays, by adjusting the size and location of the aperture [35].

Astigmatism

Like coma, astigmatism also occurs off-axis. The farther off-axis the object is, the greater the astigmatism. The lens is unable to focus horizontal (tangential) and vertical (sagittal) lines in the same plane [36]. Instead of focusing rays to a point, they meet in two line segments perpendicular to each other. These are the sagittal and tangential focal lines. The light rays in these two planes are imaged at different focal distances. This results in an image with either a sharp horizontal or a sharp vertical line, as seen in Figure 24. Astigmatism can be corrected by placing the tangential and sagittal lines on top of each other. In case there is a set of lenses, this can be done by adjusting the distance between them. Another option is to use different radii of curvature in different planes (e.g. a cylindrical lens).

Figure 24. Matlab representation of astigmatism. Astigmatism occurs when the lens is unable to focus horizontal and vertical lines in the same place.

Field curvature

Curvature of field occurs when the image produced by a lens is focused on a curved plane while the film plane is flat. Figure 25 demonstrates the phenomenon in practice, and Figure 26 is a Matlab representation of field curvature.

Figure 25. Field curvature is the aberration that makes a planar object look curved in the image. Adapted from [37].

Figure 26. Field curvature as a Matlab representation.

Various patents have been filed to help correct field curvature in an optical system with either one or more lenses. One patent, for example, suggests using a non-planar correction surface, shaped such that the focal points of the focusing elements lie closer to a single plane than with a planar correction surface. Another patent, designed to remove field curvature in a system of two imaging elements, proposes a fiber-optic element that provides an input object to the second imaging element with a reversed field curvature, so that a correct output image can be obtained from the second imaging element [38].

In microscopy, one method that has been used to reduce field curvature is to insert a field stop in order to remove light rays at the edge [39]. This method unfortunately greatly decreases the light-collecting power of the lens. It can also increase distortion [40].

Pincushion and barrel distortion

Distortion, yet another type of aberration, shifts the image position. The images of lines that pass directly through the origin appear straight, but the images of any surrounding straight lines appear curved [41]. There are two types of distortion: pincushion and barrel. In pincushion distortion, the lines at the edge of the image bow inward. In barrel distortion, the edges bow out (Figure 27).

Figure 27. Barrel distortion on the left and pincushion distortion on the right [42].

Pincushion distortion is most common in telephoto lenses (lenses with a field of view of 25 degrees or less [43]) and barrel distortion in wide-angle lenses (lenses with a field of view of 65 degrees or more [43]). Both types of distortion often occur in low-cost wide-angle zoom lenses, because the enlargement ratio changes when moving from the edge towards the center of the lens. Lenses with fixed focal lengths can be optimized for their particular focal length, which makes it possible to keep the enlargement ratio constant throughout the entire picture area. Distortion is difficult to correct for in zoom lenses, and in practice possible only with expensive aspheric lenses. The aberration can, however, be minimized by a symmetrical lens design, which is orthoscopic (a ray leaves the lens at the same angle at which it entered) [44]. The aperture has no effect on distortion, and neither has the position of the diaphragm [33]. However, distortion depends on the focusing distance: infinity focus and close focus may yield different amounts of distortion with the same lens [44].

2.4.3 VIGNETTING

In some cases the center of the picture area is brighter than the edge of the picture area. This phenomenon is caused both by the physical structure of the optics (vignetting) and by the so-called cosine law. In vignetting, the physical structure of the lens partially restricts the light from arriving at the surface where the image is formed. The phenomenon is most common in objectives that have long focal lengths. Vignetting can be reduced by stopping the lens down, in other words by increasing the F-number of the lens. Vignetting is also reduced on many crop-sensor cameras [45], as the lenses are typically designed for full-frame / 35 mm cameras (see Figure 28, right [46]). Sometimes vignetting is, however, applied to an otherwise un-vignetted photograph for an artistic effect [47].

For its part, the cosine law is a physical fact that cannot be altered (Figure 28, left). According to the cosine law, the amount of light arriving at the picture area decreases in proportion to the fourth power of the cosine of the angle of the incoming light beam [33].

Figure 28. (left) The cosine law is based on the geometry of light's behavior: a ray of light arriving at a steep angle is spread over a larger area than light arriving perpendicularly. (Picture: Tuomas Sauliala). (right) Example of a vignetting profile (light falloff) at f/2.8 @ 200 mm of the Nikon AF-S VR Nikkor 70-200 mm F2.8G on a 35 mm full-frame sensor. The red rectangle indicates the falloff on crop (1.5x) sensors. The first band outside the central area indicates 2/3 stop of falloff, and the remaining bands are at 1/3 stop intervals [45].

2.4.4 DIFFRACTION

As lens structures and sensor cells become smaller, a physical phenomenon called diffraction becomes increasingly important. Diffraction reveals the dualistic nature of light. Traditional geometric optics treats light as rays which refract and reflect according to optical principles. However, light also has a wave nature, which can be seen as a ray of light passes through a small hole. In diffraction, a light beam passing through a small hole acquires rings around itself. The first-order ring is the strongest and its radius obeys the equation [48]:

r = 1.22 \lambda F    (12)

where
r = radius of the ring
\lambda = wavelength of light
F = F-number

As can be seen in Figure 29, the intensity of diffraction depends mainly on the lens's F-number. By stopping down the lens, most of the aberrations of the lens can be reduced, but on the other hand this augments the diffraction phenomenon. Diffraction thus prevents the manufacturing of an ideal lens; there is no way to completely avoid diffraction from occurring [43].

Figure 29. The effect of diffraction on the MTF with different F-numbers. The resolution limit in the picture refers to the Rayleigh limit for the resolution of the picture [48].
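Equation (12) makes it easy to check when diffraction starts to matter for a given sensor. The sketch below evaluates the first-ring radius for green and near-infrared light at a few F-numbers and compares it with an assumed 5.6 um pixel pitch (an illustrative value for a small machine vision CCD):

% First diffraction ring radius, Eq. (12), vs. an assumed pixel pitch.
lambda = [550e-9; 880e-9];          % green light, near-IR LED [m]
F      = [1.4 2.8 8];               % F-numbers
r      = 1.22 * lambda * F;         % 2x3 matrix of ring radii [m]
disp(r * 1e6)                       % radii in micrometers
pixel_pitch = 5.6e-6;               % assumed sensor pixel pitch [m]
% At f/8 the ring radius (~5.4 um at 550 nm, ~8.6 um at 880 nm) already
% approaches or exceeds the assumed pixel pitch, so stopping further down
% costs resolution even though it reduces the aberrations discussed above.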

2.5 ABERRATION CORRECTION

As described in Chapter 2.4, aberrations can often be corrected even before the lenses are manufactured. That is done by exploiting calibration information to choose the most suitable material (based on e.g. optical, chemical, mechanical and thermal characteristics, processability and price) and lens design for the purpose. Another option is to intervene at the stage when the lens system is built, and pay attention to the refractive indices of the components. No matter how well the lens or lens system is chosen, there may, however, remain errors that can only be corrected afterwards with specific software. In photography, for example, aberrations such as barrel and pincushion distortion cannot be avoided when an image is taken with a wide-angle or telephoto lens.

PTLens [49] is software that corrects pincushion and barrel distortion, vignetting, chromatic aberration and perspective. It can be installed either as a Photoshop plug-in or as a stand-alone program. Either way, it only needs the profile information of the camera with which the images were taken. The user interface is presented in Figure 30. The menus on the left-hand side are for correcting the purple fringing caused by chromatic aberration and the darkening of the edges caused by vignetting. The menu on the right-hand side is for choosing the image file. The Distortion field is for defining the parameters for correcting distortion.

Figure 30. (left) User interface of PTLens for correcting distortion, vignetting, chromatic aberration and perspective. (right) Magnification of the menus on the left-hand side [49].

Correcting chromatic aberration is demonstrated in Figure 31. Purple fringing can be seen in the window frames in the original image on the left. PTLens works well in removing the lateral aberration; longitudinal aberration cannot be removed.

Figure 31. Example of chromatic aberration correction with PTLens. On the left the original image, on the right the corrected image [49].

As can be seen in the left-hand image in Figure 32, there is barrel distortion caused by the wide-angle objective. In the enhanced image on the right the distortion is corrected rather well. However, the image would still need perspective distortion correction, which is also available in PTLens.

Figure 32. Example of barrel distortion correction with PTLens. On the left the original image, on the right the corrected image [49].

Another piece of software that can be used to correct aberrations is DxO Optics Pro. However, it is compatible only with the optics of rather expensive digital SLR cameras. Another drawback is that it does not support the raw image format; hence JPEG images are the only images taken with cheaper cameras that the software is able to analyze. Therefore, it is reasonable to say that DxO Optics Pro is intended for professional photographers. All in all, DxO Optics Pro is a good program, but it is expensive (around 100 euros) compared to PTLens, which is around 10 euros. Photoshop's own repair tools do not reach their level in image restoration, and PTLens can be used as a Photoshop plug-in. A good alternative to graphical software is Matlab, a numerical computing environment and programming language. Using Matlab for image enhancement requires some knowledge of signal processing, but there are many scripts available that can easily be modified for one's own purposes. Reference [50] gives an example of Peter Kovesi's code for distortion correction.
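As an illustration of what such a script does (this is a generic single-coefficient radial model, not Kovesi's actual code), barrel or pincushion distortion can be undone by resampling the image along radially displaced coordinates. The file name and the coefficient k are made-up examples:

% Simple radial distortion correction: sample the distorted image at
% radially shifted coordinates, r -> r*(1 + k*r^2), and interpolate.
im = double(imread('grid_distorted.png')) / 255;   % monochrome test image
[h, w]   = size(im);
[xi, yi] = meshgrid((1:w) - w/2, (1:h) - h/2);     % centered output grid
r2 = (xi.^2 + yi.^2) / (max(w, h)/2)^2;            % normalized radius^2
k  = -0.15;                 % tune sign and magnitude to the lens (here: barrel)
xs = xi .* (1 + k*r2) + w/2;       % where to sample the distorted image
ys = yi .* (1 + k*r2) + h/2;
corrected = interp2(im, xs, ys, 'linear', 0);      % inverse-map and resample
imwrite(corrected, 'grid_corrected.png');

In practice k would be estimated from a grid target such as the one used in Chapter 4, e.g. by minimizing the residual curvature of the imaged grid lines.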

3 APPLIED LENS DESIGN

In this chapter specific lens designs are presented with regard to machine vision and measurement science applications, as well as photographic lenses. Special designs are typically neglected in physics and optics textbooks, and this chapter serves as an introduction to some of the special cases.

3.1 MEASUREMENT SCIENCE & MACHINE VISION

A telecentric lens is a compound lens with an unusual geometric property in how it forms images. The defining property of a telecentric system is the location of the entrance pupil or exit pupil at infinity (Figure 33). This means that the chief rays (oblique rays which pass through the center of the aperture stop) are parallel to the optical axis in front of or behind the system, respectively [51,52]. If the entrance pupil is at infinity, the lens is object-space telecentric. If the exit pupil is at infinity, the lens is image-space telecentric. Such lenses are used with image sensors that do not tolerate a wide range of angles of incidence. For example, a 3-CCD color beamsplitter prism assembly works best with a telecentric lens, and many digital image sensors have a minimum of color crosstalk and shading problems when used with telecentric lenses. If both pupils are at infinity, the lens is double telecentric. Such lenses are used in machine vision systems to achieve dimensional and geometric invariance of images within a range of different distances from the lens (e.g. Figure 35) and across the whole field of view (e.g. Figure 34).

Because their images have constant magnification and geometry, telecentric lenses are used for determining the precise size of objects independently of their position within the FOV, even when their distance is affected by some degree of unknown variation. These lenses are also commonly used in optical lithography for forming patterns in semiconductor chips. In pupillometry, a telecentric lens would make the measurement data less sensitive to changes in gaze, since the magnification stays constant. Telecentric lenses tend to be larger, heavier, and more expensive than normal lenses of similar focal length and f-number.

Figure 33. Working principle of different types of lenses [51].

Figure 34. Perspective error due to common optics (left image) and the absence of perspective error (right image) with a telecentric lens [51].

Figure 35. a) This image shows different dimensions for pins on a PCB. b) This image, which was taken through a telecentric lens, provides accurate information. Courtesy of Edmund Optics [52].

This is partly due to the extra components needed to achieve telecentricity, and partly because the object- or image-side lens elements of an object- or image-space telecentric lens must be at least as large as the largest object to be photographed or image to be formed. These lenses can range in cost from hundreds to thousands of euros, depending on quality. Because of their intended applications, telecentric lenses often have higher resolution and transmit more light than normal photographic lenses [53]. It is also possible to transform commercially available lenses into telecentric ones by adding an extra aperture (Figure 36), as shown by Watanabe and Nayar (1999) [54]. The authors analytically derived the aperture placement for a variety of off-the-shelf lenses and demonstrated that their approach actually works in eliminating magnification variations (Table 2).

Figure 36. Telecentric optics achieved by adding an aperture to a conventional lens. This simple modification causes the image magnification to be invariant to the position of the sensor plane, i.e., the focus setting [54].

Table 2. Magnification variations for four widely used lenses and their telecentric versions [54].

In summary, telecentric lenses should be used in metrology [51]:
- whenever a thick object (thickness > 1/10 of the FOV diagonal) must be measured
- when different measurements must be carried out on different object planes
- when the object-to-lens distance is not exactly known or cannot be predicted
- when holes must be inspected or measured
- when the profile of a piece must be extracted
- when the image brightness must be almost perfectly even
- when defects can be detected using directional illumination and a directional point of view

In practice, however, some compromises have to be made due to the larger size (extreme example in Figure 37) and higher cost of telecentric lenses compared to traditional lenses. Acquiring a telecentric lens will definitely make the measurement more accurate, but without further experimentation it is hard to estimate how much more accurate, and whether that would be worth the invested money.

Figure 37. Extreme example of a large telecentric lens capable of a field of view of over 400 mm (diagonal) [51].

3.2 PHOTOGRAPHY

One special design aspect found in photographic lenses is bokeh. Bokeh (derived from Japanese boke-aji ボケ味, "blur") is a photographic term referring to the appearance of out-of-focus areas in an image (Figure 38) produced by a camera lens using a shallow depth of field. Different lenses produce bokeh of different aesthetic quality in out-of-focus backgrounds, which is often used to reduce distractions and emphasize the primary subject [55]. It is important to note that, bokeh aside, the existing literature on photographic lenses can be used for understanding machine vision optics as well.

The shape of the aperture has a great influence on the subjective quality of bokeh. When a lens is stopped down to something other than its maximum aperture size (minimum f-number), out-of-focus points are blurred into the polygonal shape of the aperture rather than into perfect circles. This is most apparent when a lens produces undesirable, hard-edged bokeh; therefore some lenses have aperture blades with curved edges to make the aperture more closely approximate a circle rather than a polygon. Lens designers can also increase the number of blades to achieve the same effect. Traditional "portrait" lenses, such as the "fast" 85 mm focal length models for 35 mm cameras, often feature almost circular aperture diaphragms. Mirror (catadioptric) lenses, on the other hand, are known for unpleasant bokeh [56]. A comparison of the bokeh patterns of three different 50 mm lenses with different price tags is shown in Figure 39, with the most expensive one producing the most pleasant and circular bokeh [57].

Bokeh can be simulated by convolving the image with a kernel corresponding to the image of an out-of-focus point source taken with a real camera. Diffraction may alter the effective shape of the blur. Some graphics editors have a filter to do this, usually called "Lens Blur", though Gaussian blur is often used instead to save time or when realistic bokeh is not required.

Figure 38. (left) Picture without bokeh, (right) the same picture with synthetic bokeh [55].

Figure 39. Comparison of three different 50 mm Canon prime lenses: (left) Canon EF 50mm f/1.2 L USM [~1200 €], (center) Canon EF 50mm f/1.4 USM [~350 €], and (right) Canon EF 50mm f/1.8 II [~100 €] [57].
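A minimal sketch of the convolution approach described above, assuming the Matlab Image Processing Toolbox is available (the file name and blur radii are illustrative):

% Synthetic bokeh by convolution with a disc-shaped PSF (circular aperture).
im = double(imread('portrait.png')) / 255;
k_disc = fspecial('disk', 15);                 % out-of-focus disc kernel
bokeh  = imfilter(im, k_disc, 'replicate');
% Cheaper Gaussian approximation, as used when realism is not required:
k_gauss = fspecial('gaussian', 31, 5);
soft    = imfilter(im, k_gauss, 'replicate');
imwrite([bokeh, soft], 'bokeh_comparison.png');

The disc kernel produces the hard-edged circular highlights of an ideal round aperture; a polygonal kernel would mimic the bladed diaphragms discussed above.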

4 CHARACTERIZING OPTICAL PERFORMANCE IN PRACTICE

4.1 PUPILLOMETRY & OVERVIEW OF THE SETUP

Under characterization was a Unibrain Fire-i monochrome board camera (M12x0.5 lens base with the possibility to use C-mount lenses with an adapter) [58] with Sony's ICX098BL CCD sensor [59] and a Unibrain 50 mm telephoto lens (view angle 6°, f/2.5, part no. 4370) [58]. The camera was set on automatic exposure and placed in the middle of a Goldman perimeter with a diameter of 60 cm. The targets were attached one by one to a stand (used for holding the subject's chin in visual field measurements) at a horizontal distance of 38 cm from the image plane of the camera. Focusing was done by hand. The Goldman perimeter was illuminated with blue LEDs (λmax = 468 nm, hbw = 26 nm) giving an illuminance of ~40 lux at the target prints. Ideally, infrared radiation should have been used to illuminate the target prints, but their surface was so reflective in the infrared that the measurement would have been impossible with it.

Figure 40. (left) Goldman perimeter in use, lit by 5 mm blue LEDs which provide an even luminance over the inner sphere of the perimeter. (right) Rear of the Goldman perimeter showing the power control box of the LEDs on the left and the actual pupil camera mounted on the back of the sphere (gray box with the red cord).

4.2 METHODS

Three targets were used for characterizing the camera; they are presented in Figure 41, Figure 43 and Figure 45. The SFR quadrants in Figure 41 were used for determining the MTF and thus the sharpness of the imaging system. The grid in Figure 43 acted as the target for the distortion measurements. The target in Figure 45 was a noise target in accordance with the ISO standard, and it was also used for determining the dynamic range. The original targets were vector graphics of 8320 x 6400 pixels. For the tests they were printed on standard dull coated photo paper (10 x 15 cm, Zoomi Kuvakauppa, Helsinki, Finland) with a target size of 2.1 x 1.6 cm. Each target was recorded on video for 10 seconds at a frame rate of 15 frames per second (fps). From each video, 128 frames were averaged into one for further analysis in comparison to a single frame (a sketch of such averaging is shown below). The averaged frames are presented in Figure 42, Figure 44 and Figure 46. All measurements were analyzed in Matlab using Imatest [60], a software package for measuring key image quality factors.
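The frame averaging can be done along the following lines. This is a minimal Matlab sketch in the spirit of Appendix 3; the file name, and the use of VideoReader rather than the exact toolchain employed, are assumptions:

  % Averaging monochrome video frames to suppress temporal noise.
  v = VideoReader('target.avi');       % hypothetical 10 s recording
  N = 128;                             % number of frames to average
  acc = zeros(v.Height, v.Width);
  for k = 1:N
      frame = im2double(readFrame(v));                 % next frame in [0,1]
      if ndims(frame) == 3, frame = frame(:,:,1); end  % monochrome source
      acc = acc + frame;
  end
  avg = acc / N;                       % averaged frame for analysis
  imwrite(avg, 'target_avg.png');

Averaging N frames reduces the uncorrelated temporal noise by a factor of roughly sqrt(N), here about 11.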

Figure 41. Target used for measuring the MTF. SFR quadrants adapted from Imatest [60].
Figure 42. Recorded image used to assess the sharpness.
Figure 43. Target used for recording distortion. Grid adapted from Imatest [60].
Figure 44. Recorded image used to define the distortion.
Figure 45. Target used for recording noise and dynamic range. ISO chart adapted from Imatest [60].
Figure 46. Recorded image used to define the noise and dynamic range.

4.3 RESULTS OF THE MEASUREMENTS

4.3.1 MODULATION TRANSFER FUNCTION AND SHARPNESS

To define the MTF of the pupil camera, a region of interest (ROI) of 33 x 87 pixels was chosen from the averaged frame in Figure 42. The ROI is presented on the right in the screenshot of the Imatest SFR program output in Figure 47.

Figure 47. Imatest SFR program output for defining the modulation transfer function. The MTF response plots show that the black and white edges in the image are not very sharp.

The screenshot consists of two response plots, one in the frequency and one in the spatial domain. They convey similar information in different forms: a narrow edge in the spatial domain corresponds to a broad spectrum in the frequency domain, and vice versa. As can be seen from the spatial domain plot (upper left), the edge rises from 10% to 90% of its final value over 3.82 pixels. The edge is thus rather broad, meaning that it appears blurry in the image. The spatial frequency response plot (lower left) confirms that the camera does not deliver images of good quality when it comes to sharpness.

According to the Imatest tutorials [61], the best indicators of image sharpness are the spatial frequencies where the MTF is 50% of its low-frequency value (MTF50) or 50% of its peak value (MTF50P). This is because (1) image contrast is half its low-frequency or peak value, so detail is still quite visible; (2) the eye is relatively insensitive to detail at spatial frequencies where the MTF is low, 10% or less; and (3) the response of virtually all cameras falls off rapidly in the vicinity of MTF50 and MTF50P. Imatest also states that for images to be good, the MTF50 should be over 0.3 Cy/Pxl. As can be seen in Figure 47, the MTF50 of the camera under testing falls below this limit, so it does not fulfil the criterion. Not even the standardized sharpening, marked with the red, bold, dashed line in the plots, is enough to raise the MTF response to a good level. However, since the MTF is high below the Nyquist frequency and low at and above it, the response should be free of aliasing problems.

It is worth noticing that the horizontal and vertical resolution can differ for CCD sensors and should have been measured separately; within this work it was, however, not considered relevant to repeat the evaluation for horizontal sharpness. It is also possible that the MTF varies over the lens area, so a value measured in the middle of the image could be better than one measured at the edge. For an all-inclusive analysis the MTF should be measured more thoroughly.
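The 10-90 % rise distance quoted above can be estimated directly from the edge profile. The following Matlab sketch is a simplified stand-in for the Imatest SFR computation; the ROI file is hypothetical and a dark-to-bright edge running from left to right is assumed:

  % Estimating the 10-90 % edge rise and a raw MTF from an edge ROI.
  roi = im2double(imread('edge_roi.png'));          % hypothetical grayscale ROI
  esf = mean(roi, 1);                               % edge spread function (1D)
  esf = (esf - min(esf)) / (max(esf) - min(esf));   % normalize to 0..1
  x10 = find(esf >= 0.1, 1, 'first');
  x90 = find(esf >= 0.9, 1, 'first');
  fprintf('10-90%% rise: %d pixels\n', x90 - x10);
  % The MTF is the magnitude of the Fourier transform of the line
  % spread function, i.e. the derivative of the ESF:
  lsf = diff(esf);
  mtf = abs(fft(lsf));
  mtf = mtf / mtf(1);                               % normalize to the DC value

Imatest additionally uses the slanted-edge method, which supersamples the edge across several rows; the sketch above ignores that refinement.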

4.3.2 GEOMETRIC ABERRATIONS

The distortion output is shown in Figure 48. Corrected vertical lines are deep magenta and corrected horizontal lines blue. There is hardly any visible difference between them and the grey lines, meaning that the distortion is very small. The negative value of SMIA TV Distortion, displayed below the image on the left, indicates a small barrel distortion according to the following definition: SMIA TV Distortion > 0 is pincushion, SMIA TV Distortion < 0 is barrel. The distortion is calculated with the formula

SMIA TV Distortion = 100% × ((A1 + A2)/2 − B) / B,    (13)

where A1, A2 and B refer to the geometry in Figure 49. The fact that k1 < 0.01 tells that the distortion is insignificant and does not need to be corrected.

Figure 49. Illustration of SMIA TV Distortion. SMIA = Standard Mobile Imaging Architecture.

Figure 48. Imatest Distortion program output for assessing the distortion. The results show a small amount of barrel distortion in the image, but it is insignificant and difficult to see.
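Equation (13) is straightforward to evaluate. In the following Matlab sketch the grid heights A1, A2 and B are hypothetical example values, chosen to produce a small barrel distortion of the kind measured here:

  % SMIA TV distortion (equation 13): A1 and A2 are the heights of the
  % grid at the left and right image edges, B the height at the center.
  A1 = 478; A2 = 479; B = 481;    % hypothetical measured heights (px)
  smia_tv = 100 * ((A1 + A2)/2 - B) / B;
  if smia_tv > 0
      fprintf('SMIA TV distortion: %.2f %% (pincushion)\n', smia_tv);
  else
      fprintf('SMIA TV distortion: %.2f %% (barrel)\n', smia_tv);
  end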

4.3.3 DYNAMIC RANGE

Imatest Stepchart [62] was used for analyzing the dynamic range. To perform the analysis it was fundamental to understand the term f-stop (also known as zone or exposure value, EV) defined in Chapter 2. The camera was set to automatic exposure mode (exposure time range 1/3400–1/31 s).

In Figure 50 the upper left plot shows the density response, marked with gray squares, as well as the first and second order fits, marked with dashed blue and green lines. The dynamic range figure itself is grayed out because the printed target has too small a dynamic range to measure the camera's total dynamic range.

Figure 50. Stepchart produces detailed results about the dynamic range and noise.
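For reference, a density range read from the chart converts into f-stops by dividing by log10(2) ≈ 0.301, since one f-stop corresponds to a doubling of exposure. The Matlab sketch below uses an assumed density range of 1.9, typical of a reflective print, which corresponds to roughly 6.3 f-stops:

  % Converting a measured density range to dynamic range in f-stops.
  density_range = 1.9;                    % assumed chart density range
  dr_fstops = density_range / log10(2);   % one f-stop = 0.301 density units
  fprintf('Dynamic range: %.2f f-stops\n', dr_fstops);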

Imatest calculates the dynamic range for several maximum noise levels, from RMS noise = 0.1 f-stop (high image quality) to 1 f-stop (relatively low quality). The upper right box contains the dynamic range results, i.e. the total dynamic range and the ranges for several quality levels, based on luminance noise. The total dynamic range is 6.31 f-stops, and a medium-high quality image can be achieved over 3.81 f-stops of that total. When a high quality image is required (maximum noise = 0.1 f-stop), the dynamic range is reduced to 2.44 f-stops, indicated by the yellow line in the middle plot. These results mean that the practical dynamic range is limited by noise.

4.3.4 NOISE

The Stepchart output in Figure 50 was also used for analyzing the noise. The middle plot shows the RMS noise in f-stops, i.e. the noise scaled to the difference in pixel levels between f-stops, which increases as brightness decreases: in general, the darkest levels (large negative values of log exposure) have the highest f-stop noise. In the lower left plot the noise is scaled to the difference in pixel levels between the maximum density level and the patch corresponding to a density of 1.5; noise measured in pixels is obtained by multiplying the percentage noise by this pixel difference. The lower plot also contains the single number used to characterize the overall noise performance: the average luminance-channel noise (Y = 1.88%) is fairly high, which corresponds to poor image quality.

A more analytical way to characterize the noise is to determine how much the pixel value varies within each tone of the stepchart. For that, the variance of the pixel values was calculated in Matlab. As the code presented in Appendix 5 shows, the 315 pixel values of row 255 were chosen as source data and their variance was computed.

The lower right plot shows the noise spectrum. The descending spectrum indicates that neighboring pixels are correlated and that the spectrum and the image are the result of blurring (also called smoothing or low-pass filtering). In general, noise reduction performed by the camera is a likely cause of this kind of spectrally non-white noise. However, no data was available about the post-processing performed by the camera.

4.4 IMAGE RESTORATION

4.4.1 SHARPNESS

The test showed that the pupil camera did not record very sharp images, mostly due to the moderate quality of the lens. Hence, one option to improve the sharpness would be to change the lens to one that can maintain a good MTF at the required focusing distance. If it is enough to enhance the images instead of improving the camera system, the details can be boosted by setting a threshold for the pixel levels. However, improving the MTF by sharpening the image can easily result in increased noise.

4.4.2 GEOMETRIC ABERRATIONS

According to the test results there was no need for aberration correction, because the error caused by the barrel distortion was insignificantly small. Larger errors could be removed from the images with PTLens or another post-processing program, as described in Chapter 2.4. Another option would be to focus the camera carefully and to avoid cheap wide-angle and telephoto lenses when choosing the optics.

4.4.3 DYNAMIC RANGE

As mentioned before, the results did not give the whole picture of the pupil camera's dynamic range, because the dynamic range of the target was not wide enough.

Printed media has a maximum dynamic range of only a little over 6 f-stops, so the real range of the camera would have been somewhat larger. More light could also have been used to increase the difference between light and dark. For comparison, the dynamic range of the Canon EOS-10D at ISO 400 has been measured as well, but comparing the dynamic range of a digital still camera with that of a video camera recording the pupil for scientific purposes is not very meaningful or even possible. For example, in still imaging it is possible to take several photos with different exposure levels and in that way capture the full dynamic range of the scene (a method called high dynamic range imaging, HDRI, in image processing, computer graphics and photography [63]). In videography the exposure cannot be altered between frames, so widening the range has to be done in some other way.

In still imaging it may become important to limit the amount of light entering the camera so that the brightness does not exceed the dynamic range of the camera; otherwise highlight details can burn out and be lost. In pupillary measurements, for one, the light levels are not so high that the camera would need to be prepared for that. And should the study setup be especially brightly lit, it is still possible to limit the light afterwards, frame by frame in Photoshop CS2 or as a whole in a video editing program. Another thing related to high illuminances is the ISO speed, that is, how sensitive the sensor is to light. In still imaging, lowering the ISO speed is used to maximize the dynamic range in bright surroundings, where exposure times stay short enough that the camera does not need to be mounted on a tripod to obtain a good depth of field [64]. In pupil measurements the camera is in a fixed position in any case, so the video camera needs no special adjustment mechanisms to capture the maximum range of tones between light and dark areas.

In practice camera manufacturers face a classic trade-off: contrast versus dynamic range. Most images look best with enhanced contrast, but if contrast is increased, dynamic range suffers; dynamic range can be increased by decreasing contrast, but then images tend to look flat. In pupil size measurements the most essential thing is a clear contrast between the black pupil and the grayish iris so that the pupil can be distinguished from its background. That is why dynamic range can be given only little attention in the evaluation of this imaging system.

4.4.4 NOISE

Noise is difficult to quantify because it has a spectral distribution instead of being a single number. The most essential single source of noise in this test was most likely the CCD itself. Because the number of photons detected by the CCD varies, the CCD always produces some random noise in the image. This noise is often called shot noise, and it has a white-noise (also called salt-and-pepper) appearance. In addition to shot noise, the CCD also causes so-called thermal noise, which emerges from the heating of the cell and increases with exposure time. In a 10-second test the increased temperature was hardly a great source of noise; in recordings of several minutes or hours, however, the CCD would require cooling with a fan or a cooling rib [65].

As seen in the test results, the amount of noise was highest at low light levels. The non-white shape of the noise spectrum, in turn, points to the noise reduction software built into the camera, which low-pass filters the image and thus reduces the high-frequency components of the noise. The noise reduction operates with a threshold that prevents portions of the image near contrast boundaries from blurring.
This technique works well, but it can obscure low contrast details and result in the loss of high spatial frequencies. To reverse this, sharpening or unsharp masking is often applied to the image. Sharpening can, however, increase the amount of noise, creating a vicious circle of noise reduction and reproduction. That is why most of the noise removal is, and should be, done by post-processing the image [66]. There is a wide range of software that can remove noise: Michael Almond has evaluated 22 such programs and found PictureCode Noise Ninja to be the best noise reduction tool available [67]. However, he praised in particular the program's ability to retain color saturation, which is not very relevant from the perspective of the black and white pupil camera under testing. Applying different filters to remove noise is also fairly easy in Matlab, as explained by Peter Kovesi [68].
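A minimal sketch of such filtering (our illustration, not Kovesi's code; the input file is hypothetical) could compare a median filter against a locally adaptive Wiener filter:

  % Simple spatial noise reduction in Matlab (a sketch, not the
  % camera's own processing): median filtering suppresses impulsive
  % "salt-and-pepper" noise, wiener2 adapts to the local variance.
  img   = im2double(imread('pupil_frame.png'));   % hypothetical frame
  med   = medfilt2(img, [3 3]);                   % 3 x 3 median filter
  adapt = wiener2(img, [5 5]);                    % adaptive Wiener filter
  subplot(1,3,1), imshow(img),   title('original');
  subplot(1,3,2), imshow(med),   title('median 3x3');
  subplot(1,3,3), imshow(adapt), title('wiener2 5x5');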

As a conclusion it should be noted that in this test the evaluation of the noise covered spatial noise only.

4.5 CONCLUSIONS

The goal of this work was to characterize the optical performance of a pupillometric camera used in research at the Lighting Unit of Helsinki University of Technology (TKK). The approach was similar to that of Iacoviello and Lucchetti (2005) [69], who used a similar checkerboard target image (Figure 51) to characterize the noise and the sharpness (blur identification). In addition, those authors used a synthetic benchmark image (Figure 52) to simulate the pupillometric setup as closely as possible. This was not done here due to time constraints and the belief that it would have brought only little extra information about the performance of our setup.

Compared to more commercial cameras, the Unibrain did not prove to take images of very good quality. Table 1 presents the results of similar image quality measurements conducted previously by one of the authors (Rautkylä, 2006 [70]). As can be seen, an SLR camera (Canon EOS 300D), a compact camera (Canon PowerShot A510) and even a cell phone camera (Nokia N90) took over 100 times sharper images when the MTF50 value was used as the indicator of sharpness. The dynamic range of the Unibrain was only about 6 f-stops, compared to 13, 10 and 11 for the other cameras. There was also over 100 times more noise present in the test image. The only category in which the Unibrain scored better than the other cameras was the amount of distortion, owing to its narrow-angle telephoto lens.

Figure 51. Identification of parameters of degradation for a 2 × 2 checkerboard target image shot by the CCD camera: (A) the image; (B) horizontal line: white-to-black blur identification; (C) horizontal line: black-to-white blur identification [69].

Figure 52. Identification of the blur by means of a benchmark image: (A) target image shot by the CCD camera in the same conditions as Fig. 5 of [69]; (B) a suitable line of (A) plotted together with the estimated blurred step [69].

Table 1. Comparison of the Unibrain to more commercial digital cameras [70].

                    Unibrain    Canon EOS 300D    Canon PowerShot A510    Nokia N90
  MTF50             …           …                 …                       …
  Distortion (%)    …           …                 …                       …
  Dynamic range     6           13                10                      11
  Noise             …           …                 …                       …

However, no comparison data was available for video cameras. It is therefore important to consider that recording video instead of taking still images can be both beneficial and detrimental to image quality. The good thing about video is that it offers more data for analysis and filters out time-dependent errors; on the other hand, digital video cameras are not technically as good and adjustable as digital still cameras.

In addition it should be noted that, in the end, the quality of the measurement depends on the ability to detect edges in the image, namely the edge between the pupil and the iris. For the pupil detection program to run optimally, the hardware should provide images whose intensity gradients are as noise-free and sharp as possible. This has not always been the case in our previous recordings [71], as can be seen in the center of Figure 53, where the left part of the plot is rather flat (corresponding to the upper portion of the pupil), making edge detection rather difficult. However, that particular problem was due to the improper placement of the infrared lighting, which caused shading of the pupil by the eyelashes.

Figure 53. Intensity profiles (1D) of the eye image using one fixed y-coordinate (green line) and one fixed x-coordinate (red line) [71].
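Such profiles are easy to extract for a quick check of the gradient quality. In the following Matlab sketch the file name and the pupil-center coordinates are assumptions:

  % 1D intensity profiles through the pupil, as in Figure 53.
  im = im2double(imread('eye_frame.png'));   % hypothetical eye image
  yc = 240; xc = 320;                        % assumed pupil center (px)
  xprof = im(yc, :);                         % profile along fixed y
  yprof = im(:, xc);                         % profile along fixed x
  subplot(2,1,1), plot(xprof), title('intensity along fixed y');
  subplot(2,1,2), plot(yprof), title('intensity along fixed x');
  % Sharp, noise-free gradients at the pupil-iris border make
  % threshold- or derivative-based edge detection reliable.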
