Handbook of practical camera calibration methods and models


CHAPTER 2: CAMERA CALIBRATION MODEL SELECTION

Executive summary

The interface between object space and image space in a camera is the lens. A lens can be modeled by a pin-hole or by a parametric function. This chapter discusses how distortions introduced by the lens can be estimated and, if necessary, compensated for. Given good calibration, solid state sensors or film can deliver object space accuracies of up to 1 part in 100,000. Put another way, sensors can make full use of feature location precision in image space of 0.1 to 1 micron. Not all tasks require the ultimate performance, but correction for gross distortions is nevertheless still required. At the end of the chapter a table is given that indicates the appropriate camera model for a given task. Later chapters address how camera calibration parameters are estimated.

2.1 Introduction

With the possible exception of astronomy, experts in the area of photogrammetry have made the greatest geometric use of lenses over the past hundred years. They have required lenses to perform aerial and satellite mapping of vast areas of the world, as well as measurements with a precision of up to 1 part in a million of the object space. As a result, the models for camera calibration can be relied upon to represent the best that is likely to be achieved with a lens for all but the most exotic applications. This handbook seeks to make this knowledge accessible to a wider audience, with the objective of allowing the reader to pick an appropriate level of sophistication for an application requirement. This chapter provides a description of the geometric parameters of lenses and their mathematical models. The calibration procedures detailed in this report also make reference to the position and orientation of the camera, as one method of calibration makes use of a 3-D method. Issues concerned with the sensor and its part in the image collection process are dealt with in chapter 3.
It is useful to be aware of some of the terms used by photogrammetrists when dealing with camera-lens systems. Interior orientation and exterior orientation are terms employed to describe the internal and external geometric configurations of a camera and lens system; others use the terms intrinsic and extrinsic parameters. Camera calibration is the process of estimating the parameters that best describe what happens to a bundle of rays coming from the object when they pass through the lens and onto

the image plane. The geometric configuration of the passage of a bundle of light rays through the lens to the image plane can be described mathematically by a set of parameters. This chapter explains the parameters relating to radial and decentering distortion, the location of the principal point, the focal length (more correctly known as the principal distance) and the interrelationship of these parameters with translation and rotation of the camera itself. As this handbook is primarily designed for operators of digital cameras, a couple of extra parameters that refer to the shape and orientation of the sensor (array of pixels) are also detailed. It should be noted that camera calibration refers here to the determination of the parameters that allow a camera to be used as an angle-measuring device. The computer vision community also refers to the process of estimating the external orientation of the camera as camera calibration. This chapter does deal with the external parameters, because these are necessary for some methods of determining the interior parameters. It is recommended that the term camera calibration be used to refer to the process of estimating parameters belonging to a camera, and perhaps the term system calibration might be appropriate for a collection of cameras that make up a measurement system. The camera model developed through this chapter will eventually not only compensate for gross effects such as radial lens distortion but will also take into account differences that occur at differing object distances. At the end of the chapter the usefulness of this model will be put into perspective with a practical guide linking the desired accuracy with the appropriate components of the model.

2.2 Principal point

The location of an image on the image plane formed by the direct axial ray passing through the centre of the lens system is known as the principal point.
The focal plane should be perpendicular to the optical axis, but a parameter to correct for any misalignment is usually necessary. This is particularly so for the electronic sensor camera, where the requirements for geometric alignment of the lens with the sensor array are minimal. The principal point can also be thought of as the foot of the perpendicular from the nominal centre of the lens system (more exactly, the rear node of the lens) to the plane of focus. It represents the point that would be the ideal origin for a co-ordinate system on the image plane. When images are being used for 3-D measurement purposes it is normal to convert from pixel related co-ordinates to image co-ordinates in millimetres. To achieve this the following equations are used:

x_i = (x_sensor − x_centre) s_x
y_i = −(y_sensor − y_centre) s_y          (2.1)

where x_i, y_i are the new co-ordinates of the image in mm; x_sensor, y_sensor are the co-ordinates of the image as given by the camera/framegrabber in pixels; x_centre, y_centre is the notional centre of the image in pixels; and s_x, s_y are the pixel sizes in mm.
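The conversion of equation 2.1 is straightforward to sketch in code. The following is a minimal illustration; the function name and the pixel sizes used in the example are this sketch's own, not from the handbook.

```python
def pixels_to_image_mm(x_sensor, y_sensor, x_centre, y_centre, s_x, s_y):
    """Convert framegrabber pixel co-ordinates to image co-ordinates in mm
    (equation 2.1); the y axis is negated so that image y increases upwards."""
    x_i = (x_sensor - x_centre) * s_x
    y_i = -(y_sensor - y_centre) * s_y
    return x_i, y_i

# Example: a 640 x 480 sensor with 0.01 mm square pixels and the notional
# centre taken as half the pixel counts, as described in the text.
x_i, y_i = pixels_to_image_mm(420.0, 140.0, 320.0, 240.0, 0.01, 0.01)
# the point is 1.0 mm to the right of, and 1.0 mm above, the notional centre
```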

The principal point should ideally be at the centre of the image. The centre of the image format may be found by dividing the number of x and y axis pixels in the image by two, or by intersecting the diagonal fiducial marks if they are present (in a film camera). Any differences in the x and y directions between the centre of the image plane and the base of this perpendicular are known as offsets of the principal point, and are conventionally expressed as x_pp, y_pp or sometimes as x_o, y_o. The principal distance, c, and the offsets of the principal point (Figure 2.1) are key elements in the set of parameters defining camera calibration. The principal point location was not included in equation 2.1 because it is included as an unknown parameter in a calibration procedure. For applications where the principal point is known a priori, x_centre and y_centre can be substituted by x_pp and y_pp.

Figure 2.1. The definition of the principal distance, principal point, and image centre

To correctly "centre" the image co-ordinates, the offsets (x_pp, y_pp) from the principal point to the notional image centre or fiducial origin must be added to the image co-ordinates. These offsets may not always be known initially, so they may be estimated, often at zero, and solved for in the calibration procedure with all the other unknowns. The magnitudes of the offsets (x_pp, y_pp) are usually under one millimetre for a film based camera, and less than twenty pixels for a digital or video camera. The relationship between the principal point and the image centre will usually remain stable for long periods of time for medium precision purposes. In some cases cameras used for 3-D measurement are only checked every six months or so. However, the relationship cannot be guaranteed when the lens is adjusted or moved in any way. This is especially true with zoom lenses, which are particularly prone to principal point shifts as the focal distance is changed.
In these situations the unknowns for these values should be included in the photogrammetric solution for each occasion the focus has been altered. Recently, cases of unstable principal point location in Kodak DCS digital cameras have been encountered. This is because these cameras have been made for studio photographic purposes where the principal point location is not an issue.

However, when multiple images are taken with the camera in various orientations the sensor has been shown to move with respect to the camera body.

2.3 Principal distance

The perpendicular distance from the perspective centre of the lens system to the image plane is termed the principal distance and is usually denoted by c (Figure 2.1). In aerial mapping, where the camera lens is fixed at infinity focus, the terms focal length and principal distance are often used synonymously. In other applications, where the lenses are usually of variable focus, the principal distance will not equal the focal length and will often need to be determined. In some industrial applications the lens will be refocused for each image collection phase, so the principal distance will vary from one image to the next. It is good practice not to confuse the focal length of the lens, f, with c. The values for the principal distance and the offsets of the principal point can be determined in a laboratory if required. However, other methods (discussed later) provide a direct means of determining their values, so an exact value of the principal distance does not have to be known a priori. The method of least squares can be applied to solve for the unknown parameters that model the relationship between the image and object co-ordinates, given reasonably close initial approximations. The same holds for the values of the offsets of the principal point noted in section 2.2. The calculated, or a posteriori, values of the principal distance or offsets of the principal point are often of little interest to the user, who primarily requires co-ordinates of features on the object. But such calculations can provide evidence of errors, depending on the closeness of the values to previous calibrations.

2.4 Camera position and orientation

The position and orientation of a camera may require defining for some applications, and for the calibration methods described later a definition is required.
Figure 2.2 illustrates the common photogrammetric definition, where the camera position is described by X_c, Y_c, Z_c and the orientation by ω, φ, κ, defined with reference to a world co-ordinate system. The rotation parameters ω, φ, κ are applied with respect to the world co-ordinate system in the order in which they are written (this is important, as other orders will produce different results). The image co-ordinates x_i, y_i are aligned parallel to the camera X and Y co-ordinate axes respectively.
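The order dependence of ω, φ, κ can be demonstrated with a few lines of code. This is an illustrative sketch only: sign and axis conventions for the elementary rotation matrices vary between texts, so the matrices below should not be read as the handbook's definitive convention.

```python
import math

def rot_x(w):  # elementary rotation about the X axis (omega)
    c, s = math.cos(w), math.sin(w)
    return [[1, 0, 0], [0, c, s], [0, -s, c]]

def rot_y(p):  # elementary rotation about the Y axis (phi)
    c, s = math.cos(p), math.sin(p)
    return [[c, 0, -s], [0, 1, 0], [s, 0, c]]

def rot_z(k):  # elementary rotation about the Z axis (kappa)
    c, s = math.cos(k), math.sin(k)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_matrix(omega, phi, kappa):
    """Apply omega first, then phi, then kappa."""
    return matmul(rot_z(kappa), matmul(rot_y(phi), rot_x(omega)))

# The same three angles applied in the reverse order give a different
# matrix, which is why the omega-phi-kappa order must always be stated.
m_opk = rotation_matrix(0.1, 0.2, 0.3)
m_rev = matmul(rot_x(0.1), matmul(rot_y(0.2), rot_z(0.3)))
```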

Figure 2.2. Definition of the exterior orientation parameters for a camera

The six parameters of exterior orientation are the only ones required. The parameters of inner orientation discussed so far are supplemented by further parameters that are described in the later sections of this chapter. If traditional laboratory methods of camera calibration (described briefly in chapter 4) are followed, then it is usually the case that parameters are determined in isolation from one another, with problems occurring because each method will usually require physical changes to the camera settings. The methods used for the calibration of cameras to measure objects at distances less than a few hundred metres have evolved over the last few decades. Initially the methods used for aerial cameras, where the application was essentially parallel-axis stereoscopic photography, were mimicked and the same equipment was used. Nowadays techniques use the favourable geometric conditions of convergent camera configurations to simultaneously extract all the required calibration parameters. In general, close-range photogrammetric applications require a quick and cost-effective solution to an immediate problem, unlike the situation with aerial work where a camera system may be scheduled to work on mapping projects for years and the relative "leisure" of an annual calibration in an expensive laboratory can be afforded. In both cases there has been an evolution in the mathematical modeling used in camera calibration that has coincided with the ability to solve large, redundant sets of equations and the gradual improvement of film and solid state sensors to provide high geometric accuracy results.
The collinearity equations are a set of equations that describe the geometric passage of a straight ray of light from an object point (subscripted by p) through the perspective centre of the lens (usually referred to as the location of the camera and subscripted by c) to the image plane (subscripted by i). These equations use the exterior orientation parameters to describe the direction of the principal ray through the lens with respect to the world co-ordinate system X_w, Y_w, Z_w. In addition the interior

parameters are used to define the image location corresponding to the object point with which it is (in an ideal situation) collinear.

Figure 2.3. The central configuration described by the collinearity equations

The collinearity equations may be expressed as:

x_i = x_pp − c_x · [m_11(X_p − X_c) + m_12(Y_p − Y_c) + m_13(Z_p − Z_c)] / [m_31(X_p − X_c) + m_32(Y_p − Y_c) + m_33(Z_p − Z_c)]

y_i = y_pp − c_y · [m_21(X_p − X_c) + m_22(Y_p − Y_c) + m_23(Z_p − Z_c)] / [m_31(X_p − X_c) + m_32(Y_p − Y_c) + m_33(Z_p − Z_c)]          (2.2)

where x_i, y_i are the image co-ordinates of an object point X_p, Y_p, Z_p and X_c, Y_c, Z_c is the location of the camera in object space. The m_ij (i = 1..3; j = 1..3) terms are the elements of a rotation matrix, M, and contain trigonometric functions of the angles ω, φ and κ. The principal distance, c, may be represented by two components, one in each of the x and y axis directions. This is a realistic model for an anamorphic lens, but it is usual to replace c_x and c_y with a common value c for most photogrammetric applications. Given sufficient image observations and diversity of viewpoints it is possible to use the collinearity equations to solve for the unknown 3-D co-ordinates of the object points. A useful by-product is that not only can the parameters relating to the object co-ordinates be estimated but also all of the other parameters, such as the camera exterior and interior parameters. In other words, the cameras can be fully calibrated. This method is described later in the handbook. The collinearity equations are non-linear, so it is necessary to linearise them by a Taylor's theorem expansion for solution by iterative least squares techniques. Reasonable approximations to all unknowns are needed before a convergent solution can be guaranteed.
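A direct implementation of equations 2.2 can be sketched as below. The rotation matrix, camera position and principal distance in the example are illustrative values only (an identity rotation with the camera looking along the Z axis), not a real calibration.

```python
def collinearity(Xp, Yp, Zp, Xc, Yc, Zc, m, c, x_pp=0.0, y_pp=0.0):
    """Project object point (Xp, Yp, Zp) into image co-ordinates using the
    collinearity equations (2.2), with a common principal distance c."""
    dX, dY, dZ = Xp - Xc, Yp - Yc, Zp - Zc
    denom = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    x_i = x_pp - c * (m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ) / denom
    y_i = y_pp - c * (m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ) / denom
    return x_i, y_i

# Identity rotation, camera at Z = 10 m looking at the object plane Z = 0,
# principal distance 0.05 m (50 mm): image co-ordinates scale roughly as c/Z,
# giving approximately (0.005, 0.01) m for the point (1, 2, 0).
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
x_i, y_i = collinearity(1.0, 2.0, 0.0, 0.0, 0.0, 10.0, identity, c=0.05)
```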

It is important for the reader to have a basic understanding of the collinearity equations: to realise that they form the basis for most modern photogrammetry and that they provide a well tried and trusted mathematical model to describe a ray of light passing through a camera's lens. In a later section of this chapter the collinearity equations will be modified to account for the deviations from linearity which occur due to lens distortions and other effects.

2.5 Radial distortion

Introduction

With a perfect lens system, light rays would pass without deviation from object space to image space and form a sharp image on the plane of focus. Unfortunately, such an ideal system does not exist. Aberrations, or deviations from theoretically exact models, are the reality facing the user of a camera system. Aberrations can be grouped into two categories: those that reduce image quality and those that alter the location of the image. Image quality aberrations may have an influence on geometric location and are briefly discussed. Radial and decentering distortions comprise the primary aberrations that affect image geometry, and measurement and modeling of these distortions are crucial to good results. The equations for radial and decentering distortion are derived from the Seidel aberrations, named after the 19th century German mathematician who developed the relevant equations. These aberrations may be expressed in terms of a polynomial curve. Only the lower order terms in the series are relevant for most lenses. For extremely wide-angle or "fish-eye" lenses, another one or two terms in the series may be of value.

Radial lens distortion

If the image of an off-axis target is displaced radially, either closer to or further from the principal point, then it has been radially distorted. The terms barrel and pincushion are used, respectively, to describe the image shape of a rectangle which has been radially distorted closer to or farther from the principal point.

Figure 2.4. Radial lens distortion vectors for pin-cushion distortion; the grid represents the corrected image and the ends of the vectors the observed positions. In a map for barrel distortion the vectors would point from the grid towards the principal point.

Gaussian radial distortion describes the magnitude of radial distortion when the nominal principal distance is used as the basis for calculations. Figure 2.5 illustrates that the magnitude of these distortions varies with radial distance and may change with focus. Lens distortion graphs typically show the distortion in micrometres against the radial distance in millimetres, although it is reasonable to replace the horizontal scale in millimetres by a distance in pixels for some tasks.

Figure 2.5. Radial distortion calibration curves for three object distances

Balanced radial distortion is the term used where the Gaussian curve has been mathematically transformed by shifting the principal distance by an amount Δc, usually chosen such that the mean value of the transformed distortion curve out to a certain radial distance is zero. Gaussian radial distortion can be expressed as a series of odd powered terms,

δr = K_1 r^3 + K_2 r^5 + K_3 r^7 + …          (2.3)

where K_1, K_2, K_3 are the coefficients of radial distortion corresponding to infinity focus and δr is in micrometres, with

r^2 = (x − x_pp)^2 + (y − y_pp)^2          (2.4)

where r is the radial distance to the point with co-ordinates (x, y), and x_pp and y_pp are the offsets of the principal point from an indicated (or assumed) centre of the image. All values are usually expressed in millimetres. Balanced radial distortion, δr_b, can be expressed as

δr_b = δr + K_0 r = K_0 r + K_1 r^3 + K_2 r^5 + K_3 r^7 + …          (2.5)
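Equations 2.3 to 2.5 can be evaluated with a few lines of code. The following sketch uses illustrative coefficient values; real K values come from a calibration.

```python
import math

def radial_distortion(x, y, x_pp, y_pp, K, K0=0.0):
    """Gaussian radial distortion (2.3), or the balanced form (2.5) when a
    non-zero K0 is supplied.  K is the sequence (K1, K2, K3, ...)."""
    r = math.hypot(x - x_pp, y - y_pp)          # from equation 2.4
    dr = K0 * r
    for i, Ki in enumerate(K):
        dr += Ki * r ** (2 * i + 3)             # K1 r^3 + K2 r^5 + K3 r^7 + ...
    return dr

# A point 5 mm from the principal point with only K1 significant, as is
# typical of simple C-mount lenses (the K1 value is illustrative):
dr = radial_distortion(3.0, 4.0, 0.0, 0.0, K=(1.0e-4,))
# dr = 1e-4 * 5**3 = 0.0125 mm
```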

Figure 2.6. Balanced radial lens distortion graph (radial distortion in mm against radius in mm)

The difference between the two graphs (figures 2.5 and 2.6) can be thought of as a simple tilting of the horizontal axis to the angle required to achieve the desired mathematical condition. The angle of tilt is incorporated into the formula for radial distortion (equation 2.3) through the constant term K_0, as shown in equation 2.5. Although the curve in Figure 2.6 would appear to have a smaller maximum magnitude of radial distortion than its equivalent unbalanced curve, there is no computational advantage in such a mathematical transformation. It can be shown that the compensatory changes in principal distance and radial distortion will not affect the results for the co-ordinates of the object points. The reason the model was introduced lies in the use of aerial camera lenses, where the lens designer and manufacturer sought to minimise the radial distortion about a straight line. Lenses could be designed to achieve this to within +/- a few microns. As a result the cameras were given a calibrated focal length that enabled the radial distortion to be effectively ignored even though it was still of significant magnitude. For most lenses of the type found on non-metric 35mm and 70mm cameras, and the simple C-mount lenses on most video cameras, only the terms K_0 and K_1 will be significant in describing radial distortion to the micrometre level. For wide angle lenses, or those on metric cameras, the K_2 or even the K_3 term may be required to

accurately describe the radial distortion all the way out to the edge of the image format.

Variation of radial distortion with focussing and depth of field

Radial distortion varies with focussing of the lens and also within the photographic depth of field. The latter phenomenon is relevant to just a few applications (say at camera to object distances of under 30 focal lengths), and even in those cases it is only significant if there is considerable variation in the depth of some target points on the object. The variation with focussing is the major effect. If the radial distortion coefficients can be determined at two distinct focus settings, preferably one close to the camera and the other at infinity, then the formulae shown below as equations 2.6 and 2.7 allow the calculation of the radial distortion coefficients at any other focus setting. Let S_1 and S_2 be the two distances from the camera to the object planes at which the parameters of radial distortion K_1S1, K_2S1, K_3S1, … and K_1S2, K_2S2, K_3S2, … have been determined. If S is the distance from the camera at which the lens is now focused, then the parameters of radial distortion for objects at that distance S from the camera will be

K_1S = α_S [(1 − c/S_1)^3 / (1 − c/S)^3] K_1S1 + (1 − α_S) [(1 − c/S_2)^3 / (1 − c/S)^3] K_1S2

K_2S = α_S [(1 − c/S_1)^5 / (1 − c/S)^5] K_2S1 + (1 − α_S) [(1 − c/S_2)^5 / (1 − c/S)^5] K_2S2          (2.6)

and so on, where c is the principal distance and

α_S = (S_2 − S)(S_1 − c) / [(S_2 − S_1)(S − c)]          (2.7)

The variation of radial distortion within the depth of field requires the evaluation of a further coefficient γ_SS1, where S refers to the distance of the plane of focus from the camera and S_1 is the distance to the object point under consideration:

γ_SS1 = [(S − c)/(S_1 − c)] · (S_1/S)          (2.8)

The final form of the radial distortion δr_SS1 for an object point at a distance S_1 from the camera, when the camera is focused at a distance S, is

δr_SS1 = γ_SS1^2 K_1S1 r^3 + γ_SS1^4 K_2S1 r^5 + γ_SS1^6 K_3S1 r^7 + …          (2.9)

The total amount of radial distortion can be decomposed into its δx and δy components as

δx = (x/r) δr_SS1,   δy = (y/r) δr_SS1          (2.10)

Summary

While the above formulae (2.6) through (2.10) are comprehensive, in most situations only the basic formula for radial distortion is ever used. Only equation 2.3 or 2.5 is required in a model for determining the parameters of radial distortion during the calibration process; the other details have been supplied for completeness. When using the equations for the variation of radial distortion with focussing, if the distance for determining one of the sets of radial distortion parameters is infinity (S_2 = ∞), then there is a considerable simplification. One minor complication in the application of these equations is that an exact value for the radial distortion δr_SS1 cannot be calculated until the distance S_1 to the object point is known, so any solution must be iterative. Usually one iteration will suffice, unless exceptional circumstances of very close imaging and large depth of field are present. Consideration of the above equations in conjunction with a typical radial distortion curve for a close range camera indicates that for camera to object distances greater than approximately 100 times the focal length, the difference between an 'exact' radial distortion calculation and the value from an infinity calibration will be very small. In general the minor effect of variations within the depth of field and with focussing can usually be ignored, but the effect of radial distortion itself can only be disregarded in cases where it is small or where geometric accuracy is not important. Typically radial distortion can be as large as 10 pixels (say 100 microns) towards the edge of the image format of a digital camera, and is usually an order of magnitude larger than decentering distortion.
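Equations 2.8 to 2.10 can be sketched as follows. The sketch assumes the coefficients K are those valid for objects at distance S_1, and all numeric values are illustrative.

```python
import math

def gamma(S, S1, c):
    """Depth-of-field scale factor (equation 2.8) for an object at distance
    S1 when the lens is focused at distance S."""
    return ((S - c) / (S1 - c)) * (S1 / S)

def radial_components(x, y, x_pp, y_pp, K, g):
    """delta_x, delta_y of radial distortion (equations 2.9 and 2.10)."""
    xb, yb = x - x_pp, y - y_pp
    r = math.hypot(xb, yb)
    if r == 0.0:
        return 0.0, 0.0                      # no distortion at the principal point
    dr = sum(Ki * g ** (2 * (i + 1)) * r ** (2 * i + 3)
             for i, Ki in enumerate(K))      # gamma^2 K1 r^3 + gamma^4 K2 r^5 + ...
    return (xb / r) * dr, (yb / r) * dr

# When the object lies in the plane of focus (S1 == S), gamma is 1 and
# equation 2.9 reduces to the basic series of equation 2.3.
g = gamma(S=2.0, S1=2.0, c=0.05)
dx, dy = radial_components(3.0, 4.0, 0.0, 0.0, K=(1.0e-4,), g=g)
```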
2.6 Decentering distortion

All elements in a lens system ideally should be aligned, at the time of manufacture, to be collinear with the optical axis of the entire lens system. Any displacement or rotation of a lens element from perfect alignment with the optical axis will cause the geometric displacement of images known as decentering distortion. Literally, this title refers to the "off-centering" of the lens elements (figure 2.7).

Figure 2.7. Decentering of a lens element

Decentering distortion was initially represented as the same effect that could be achieved by placing a thin prism in front of the lens system. The typical decentering angle expected for ordinary lenses is approximately 1/diameter of the lens in minutes. The lens itself may not be perfectly made and can have a difference in thickness between opposite sides, creating a residual wedge effect (Smith, 1990). It is usual for the lens decentering parameters to provide an improvement of between 1/7 and 1/10 of the magnitude of radial lens distortion. The decentering distortion effect contains both a radial and a tangential component (figure 2.8).

Figure 2.8. Tangential distortion error vectors

The commonly accepted mathematical model for decentering distortion includes a term that allows for variation within the depth of field and for different focus settings. In practice these refinements to the basic quadratic formula (equations 2.11, 2.12) are seldom used, apart from extremely close range situations, as the decentering distortion is often an order of magnitude less than radial distortion and the secondary effects of variation within the field are also very small. A graphical representation of

decentering distortion can be made in a manner analogous to radial distortion (see Figure 2.9).

Figure 2.9. Tangential distortion plot, P(r) (tangential distortion in µm against radial distance in mm)

The function that is graphed is called the "profile function" and is represented by P(r),

P(r) = (P_1^2 + P_2^2)^(1/2) r^2          (2.11)

where the parameters P_1 and P_2 refer to values at infinity focus. The effect of decentering distortion can be represented to sufficient accuracy in a truncated polynomial form as

Δx = (1 − c/S) [P_1 (r^2 + 2(x − x_pp)^2) + 2 P_2 (x − x_pp)(y − y_pp)]

Δy = (1 − c/S) [P_2 (r^2 + 2(y − y_pp)^2) + 2 P_1 (x − x_pp)(y − y_pp)]          (2.12)

where Δx, Δy are the components of the decentering distortion at an image point x, y; r is the radial distance as described in equation (2.4); and c is the principal distance for a lens focused on an object plane at a distance S from the lens.
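Equations 2.12, together with the angle φ_0 quoted later in this section (equation 2.13), can be sketched in code. P_1, P_2 and the focus distance here are illustrative; for distant focus the (1 − c/S) factor tends to 1.

```python
import math

def decentering(x, y, x_pp, y_pp, P1, P2, c=0.0, S=float("inf")):
    """Decentering distortion components (equation 2.12)."""
    xb, yb = x - x_pp, y - y_pp
    r2 = xb * xb + yb * yb
    scale = 1.0 - c / S                      # -> 1.0 as S -> infinity
    dx = scale * (P1 * (r2 + 2.0 * xb * xb) + 2.0 * P2 * xb * yb)
    dy = scale * (P2 * (r2 + 2.0 * yb * yb) + 2.0 * P1 * xb * yb)
    return dx, dy

def phi_0(P1, P2):
    """Axis of maximum tangential distortion (equation 2.13).  Poorly
    determined in practice because P1 and P2 are themselves uncertain."""
    return math.atan2(P1, P2)

# A point 1 mm from the principal point along the x axis, P1 dominant:
dx, dy = decentering(1.0, 0.0, 0.0, 0.0, P1=1.0e-5, P2=0.0)
# dx = P1 * (1 + 2) = 3e-5 mm, dy = 0
```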

For the case of an object which lies at a distance S_1 from the lens, equations (2.12) must be multiplied by the factor γ_SS1 (see equation 2.8). Decentering distortion is usually an order of magnitude smaller than radial distortion and rarely exceeds 30 µm at the edge of the image format for large format film cameras, or a few microns for small format CCD cameras. Since decentering distortion is basically a quadratic function, its magnitude will only be one-quarter of this size in the middle regions of the image. Consequently, unless the image scale is large, say greater than 1:30, and the depth of field is extensive, equations (2.12) can be adopted for direct use without introducing significant errors. One feature of decentering distortion that is sometimes quoted in calibration reports is the angle φ_0. This represents the angle in the image plane from the x-axis to the axis of maximum tangential distortion on the image. It can be represented as

φ_0 = tan⁻¹(P_1/P_2)          (2.13)

The practical applications of φ_0 are limited because it cannot be determined to high precision, due to inherent uncertainties in the values of P_1 and P_2 (and their interdependence, in turn, with x_pp and y_pp and some rotational aspects of the camera's orientation). It is prudent to examine φ_0 from each camera calibration with a particular camera/lens combination to check for stability. Decentering distortion has been known to vary over time as the result of pressure changes, vibration or shock.

2.7 A model for camera calibration

A mathematical model for camera calibration can be formed by the addition of the equations for radial and decentering lens distortion (equations (2.10) and (2.12)) to the fundamental collinearity equations (equations (2.2)). That is, the equations describing the ideal straight line path from the object through the lens to the image are corrupted by the terms describing how light rays are deviated from linearity.
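The combined model just described, including the affine sensor terms introduced below, can be sketched as a single function. Everything numeric here is illustrative, the function and parameter names are this sketch's own shorthand rather than the handbook's notation, and with all distortion parameters zero it reduces to the plain collinearity equations (2.2).

```python
def extended_collinearity(Xp, Yp, Zp, Xc, Yc, Zc, m, c, x_pp=0.0, y_pp=0.0,
                          K=(), P1=0.0, P2=0.0, a1=0.0, b1=0.0, b2=0.0):
    """Collinearity projection plus radial, decentering and affine terms."""
    dX, dY, dZ = Xp - Xc, Yp - Yc, Zp - Zc
    denom = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    x = x_pp - c * (m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ) / denom
    y = y_pp - c * (m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ) / denom
    xb, yb = x - x_pp, y - y_pp
    r2 = xb * xb + yb * yb
    # radial terms: (x/r) dr with dr = K1 r^3 + ... equals x (K1 r^2 + K2 r^4 + ...)
    s = sum(Ki * r2 ** (i + 1) for i, Ki in enumerate(K))
    dx_rad, dy_rad = xb * s, yb * s
    # decentering terms (equation 2.12, distant focus assumed)
    dx_dec = P1 * (r2 + 2.0 * xb * xb) + 2.0 * P2 * xb * yb
    dy_dec = P2 * (r2 + 2.0 * yb * yb) + 2.0 * P1 * xb * yb
    return (x + dx_rad + dx_dec + a1 * x,
            y + dy_rad + dy_dec + b1 * y + b2 * x * y)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
xy = extended_collinearity(1.0, 2.0, 0.0, 0.0, 0.0, 10.0, identity, c=0.05)
```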
If some additional parameters are added to account for any shearing effect in the sensor array (shown as a_1 x and b_1 y below) and any non-perpendicularity of the image plane to the optical axis (shown as b_2 xy below), then a model exists for describing all aspects of lens and camera calibration, namely:

x_i = x_pp − c_x · [m_11(X_p − X_c) + m_12(Y_p − Y_c) + m_13(Z_p − Z_c)] / [m_31(X_p − X_c) + m_32(Y_p − Y_c) + m_33(Z_p − Z_c)] + δx + Δx + a_1 x

y_i = y_pp − c_y · [m_21(X_p − X_c) + m_22(Y_p − Y_c) + m_23(Z_p − Z_c)] / [m_31(X_p − X_c) + m_32(Y_p − Y_c) + m_33(Z_p − Z_c)] + δy + Δy + b_1 y + b_2 xy          (2.14)

where δx, δy are the radial distortion components (equation 2.10) and Δx, Δy the decentering components (equation 2.12). The inclusion of the lens and camera calibration parameters (such as radial and decentering distortion, etc.) into the collinearity equations used for the general solution for the co-ordinates of the targeted points and camera locations led to the term additional parameters (APs) being coined. The addition of various sets of APs to bundle adjustments became very popular in the late 1970s and early 1980s as a means of reducing the magnitude of the observation residuals on the image frame. The assumption at the time was that if the image residuals were decreased then suspected

systematic errors had been eliminated. It has since been demonstrated that the indiscriminate or excessive use of APs may actually lead to a deterioration in the final accuracy of the object co-ordinates while yielding better precision estimates. A more reasoned and sensible approach to the incorporation of APs is that only those parameters that can be shown to have definite physical justification should be used. This is the approach recommended in this handbook. To determine the six exterior orientation parameters defining the camera's spatial location and orientation and the eleven camera calibration parameters, consisting of c, x_pp, y_pp, K_1, K_2, K_3, P_1 and P_2 and the additional parameters a_1, b_1 and b_2, it would theoretically be possible to use only nine targets. This would provide two observations (x and y) for each target, a total of 18 observed values. The subsequent solution for the 17 unknowns would contain a slight redundancy. Experience has shown that a more desirable arrangement is a three-dimensional array of targets imaged across the entire format area of the camera. Thirty targets may suffice, but 50 to 100 are more commonly used to provide ample redundancy and a reliable solution. Instead of using one known (fixed) camera location from which the targets are imaged, it is usual to take images from 4 to 9 camera locations (sometimes called stations). The geometrical arrangement of the cameras, the intersection angles of rays from object points to cameras, the number of targeted points seen from a diversity of camera locations and the spread of targeted points across the image format are all important factors influencing both the precision of the co-ordinates of the targets on the object and the parameters of camera calibration. Certain tricks-of-the-trade have been devised to ensure that the correlation which is known to exist amongst the required parameters is kept to a minimum, if not eliminated.
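The counting argument above can be checked in a couple of lines. This sketch assumes the single-station case described in the text, with the targets' object co-ordinates known and two image observations per target; the function name is this sketch's own.

```python
def redundancy(n_targets, n_stations=1, n_interior=11):
    """Observations minus unknowns for a self-calibration in which the
    target co-ordinates are known: 6 exterior parameters per station plus
    the interior/additional parameters (c, x_pp, y_pp, K1-K3, P1, P2,
    a1, b1, b2), against 2 observations per target per station."""
    unknowns = 6 * n_stations + n_interior
    observations = 2 * n_targets * n_stations
    return observations - unknowns

print(redundancy(9))    # 18 observations against 17 unknowns: redundancy 1
print(redundancy(50))   # ample redundancy with 50 targets
```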
For example, to improve the estimation of the values of x_pp and y_pp and to reduce correlations, it is useful to roll the camera through approximately 90°, either between camera stations or at each camera station, such that two exposures are captured at each view-point. Convergent images are crucial for the successful recovery of the principal distance if the object under consideration is planar, and convergence is also recommended for non-planar objects as it enhances the strength of the geometric intersection of rays. A feature known by photogrammetrists as projective equivalence must be acknowledged by those undertaking camera calibration. Correlation clearly exists between the camera station co-ordinates (X_c, Y_c, Z_c) and the interior orientation elements (x_pp, y_pp, c) for certain geometric configurations. Stereoscopic photography for architectural facade recording provides a good example. In this situation it does not matter, for the determination of co-ordinates on the facade, if the principal distance is slightly in error, as a compensating change in the distance to the wall will accommodate this error. This is because architectural facades are essentially planar in nature. The rotations about the X and Y axes (ω and φ) and the offsets of the principal point are clearly correlated (hence the reason for camera rotations to cancel out the effect). There is also a similarity in effect between small amounts of decentering distortion and the offsets of the principal point, which shows up as correlations. Clearly a

Clearly a situation arises, when all parameters are to be derived simultaneously, in which inconsistencies in the results of some parameters occur owing to over-parameterisation. The design of the layout of camera stations and targets for a successful calibration requires some insight. The scheme proposed in this manual has given due consideration to these correlations.

A note about zoom lenses

One type of lens rarely used with photogrammetric film cameras is the zoom lens. It is inherently unstable, as its operation relies on the movement of one or more lens elements along a rack (or screw thread) relative to other lens components. Such movements or rotations can easily lead to changes in the parameters for decentering distortion and, of course, the principal distance will vary every time the lens is focused. Changes in the principal distance also cause severe variations in radial distortion. Nevertheless, many video and digital cameras are fitted with zoom lenses as standard: given the small dimensions of the imaging sensor in most electronic cameras, a zoom lens enhances the flexibility of these cameras. Various researchers have studied the changes in interior orientation of electronic cameras resulting from variation in the principal distance of the zoom lens. They have observed some significant changes, with translations of up to 100 pixels in the position of the principal point, attributable to a tilting of the optical axis with respect to the sensor. Such a lens may still perform acceptably for the purposes for which it was made, and the user may be unaware of the defect. The care and consideration in manufacturing associated with a 3,000 to 6,000 semi-metric film camera, or even a good 1,500 non-metric camera, will be missing due to the mass production techniques employed.
Quite clearly, electronic cameras are often being used for tasks that were not envisaged by the camera or lens manufacturer, so special care must be taken: basic assumptions, such as the optical axis being perpendicular to the sensor plane, cannot be made. Another problem found with zoom cameras is non-linear variation in radial distortion at shorter principal distances, together with significant variations in the decentering distortion parameters (up to 5 pixels on a 525 x 350 pixel array!). This result is over an order of magnitude larger than for film cameras. Zoom lenses can, however, be calibrated and their performance improved; calibration results have been shown to be stable over a period of weeks. In some cases it is necessary with a motorised zoom to approach the desired focal length from the same direction each time, due to hysteresis in the lens assembly. All of these factors mean that zoom lenses are not likely to give such good results as single focal length lenses.

2.9 Radiometric aspects of cameras

Radiometric aspects of lenses can affect the geometric accuracy of results if not properly understood. This section gives some results from practical experiments carried out to investigate these effects, but does not purport to address this area in any depth.

2.9.1 Intensity variations between cameras

Figure 2.10 illustrates image intensity variations between different camera and lens permutations. For this experiment, an area of white card was evenly illuminated by a pair of lamps positioned at 45° with respect to the card. Small RMS image intensity differences of ±2 grey levels occurred between the different lenses mounted on the same camera body. Different camera bodies demonstrated differences of up to 14 grey levels. Whilst such variations could be removed by adjusting the camera gain, the settings should be determined with respect to signal saturation levels. During this experiment, possible image illumination fall-off at the edge of the format at wide apertures was found to be indistinguishable from the ±2 grey value variations present in all Pulnix images. No significant fall-off in intensity was found, since the Fujinon 25 mm f/1.4 lenses are of standard construction, unlike short focal length lenses, which are often of retrofocus construction to allow for the mm spacing of the C mount standard.

Figure 2.10. RMS grey level for three camera/lens combinations

Figure 2.11 demonstrates some results of varying the lens aperture whilst imaging a uniform white card with two different lenses. The only significant difference occurred at f/5.6.

Figure 2.11. RMS grey level against lens aperture (f/4.5 to f/16, plotted against log relative exposure) for two lenses; error bars show the RMS grey level σ
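The RMS grey-level figures quoted in this section can be reproduced with a few lines of code. A minimal sketch, assuming a flat-field image of the evenly lit white card (the helper name is ours):

```python
import numpy as np

def rms_grey_variation(image):
    """RMS deviation of pixel grey levels from the frame mean,
    a simple measure of flat-field intensity variation."""
    img = np.asarray(image, dtype=float)
    return float(np.sqrt(np.mean((img - img.mean()) ** 2)))

# A frame whose pixels sit within about 2 grey levels of the mean
# corresponds to the small lens-to-lens differences reported above.
```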

Use of test charts

To evaluate the resolution of a camera system, a lens test chart can be imaged, as in the example shown in figure 2.12. The resolution will differ across the field of view, so if the test chart does not have multiple test sections that can be viewed at one time, it is necessary to view the same chart in many locations.

Figure 2.12. An image of a lens test resolution chart

Visual evaluation of the patterns produced at high magnification demonstrated that there was no significant difference between the centre and the edges of the format for all permutations. Figure 2.13 illustrates a set of intensity profiles through three sets of line pairs. It can be seen that the spatial resolution of the system is somewhere between 38 and 60 l/mm. Such a value would agree with the 58 l/mm theoretical maximum resolution given by the Nyquist theorem for the sensor. The variations in resolving power between the three optics tested were found to be insignificant.

Figure 2.13. Intensity profiles (grey value against pixel number) through three sets of line pairs

2.10 Physical characteristics of photographic film

Despite the rapid and continuing advances and advantages of digital image technology, there are still uses for film-based imaging: high speed cameras and ultra high accuracy photogrammetry are two examples. The following notes may assist those constrained to use such technology.
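The 58 l/mm Nyquist figure quoted above follows directly from the pixel pitch of the sensor: one resolvable line pair requires at least two pixels. A minimal sketch; the 8.6 µm pitch used in the comment is our assumption, chosen because it reproduces the quoted figure:

```python
def nyquist_lp_per_mm(pixel_pitch_mm):
    """Theoretical maximum sensor resolution in line pairs per mm:
    a resolvable line pair needs at least two pixels (Nyquist)."""
    return 1.0 / (2.0 * pixel_pitch_mm)

# A pixel pitch of about 8.6 um (0.0086 mm) yields roughly 58 lp/mm.
```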

Spectral sensitivity. Films can be classified, in part, by the wavelengths of light to which they are sensitive. The more common classifications include those sensitive only to short wavelengths; ortho-chromatic (most sensitive around 420 nm); panchromatic (covering almost the whole visible spectrum); and infrared (red-infrared, often also simultaneously sensitive to blue). Note that with infrared film a special lens should be used to accommodate the longer than usual wavelengths, and a heavy blue filter should be used to eliminate the blue rays to which the film may also be sensitive.

Resolution. The grain size of the crystals of silver bromide, which are sensitive to light and change chemically, may be thought of as the primary element of resolution. The traditional way of expressing the resolving power of film was to determine the number of line pairs per mm which could be discerned on an image, each line pair consisting of alternate black and white stripes of similar width. Typically the resolving power expressed this way varies from 10 to 100 line pairs per mm, with an average value for good quality colour transparency film of around 40. High contrast black and white films with resolutions up to 400 line pairs per mm are commercially available (Kodak Tech Pan, for example), while military and other defence users report up to an order of magnitude improvement on that quality again. A more complete definition of resolution is in terms of a modulation transfer function (MTF). MTF curves may also be derived for lenses, and so the MTF of the complete imaging system may be determined. Suffice to state here that, as a general rule, as the speed of a film increases (for example from ASA100 to ASA400), the grain size increases, the period of exposure decreases and the resolution becomes worse (the image gets grainier and cannot withstand magnification).

Speed. The speed of a film is defined in a quaint old-fashioned way by its ASA rating.
ASA stands for American Standards Association, and ASA100 means that "on a fine sunny day the exposure should be 1/100th of a second at an aperture stop of f/16". Films with speeds as low as 2.5 may be commercially purchased (extremely small grain size, useful for high definition work at slow shutter speeds). The higher the film speed, the lower the exposure time required, but usually at the price of poorer resolution.

Developing. One of the real drawbacks of film used for photogrammetric purposes is that results can never be real-time. It usually takes about an hour to develop a roll of film once it reaches the processing laboratory. The steps in film development are basically simple. The developer reduces the exposed silver salts to grains of metallic silver; this is a time critical process, as the developer is aggressive and will ruin the film if left in contact for too long. The film is then put in a stop bath to prevent further development before being placed in a fixing solution. The fixer dissolves out unreduced silver salts, and finally the film must be thoroughly washed to remove all traces of the chemicals.
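The ASA definition above embodies the photographer's "sunny 16" rule, and the arithmetic extends to other apertures: exposure varies inversely with the square of the f-number, so the f/16 shutter time of 1/ASA seconds can be rescaled. A sketch (the helper name is ours):

```python
def sunny16_shutter_s(asa, f_number):
    """Shutter time (seconds) on a fine sunny day by the 'sunny 16'
    rule: 1/ASA at f/16, scaled by (N/16)^2 since exposure ~ 1/N^2."""
    return (1.0 / asa) * (f_number / 16.0) ** 2

# ASA100 at f/16 gives 1/100 s; opening up to f/8 quarters the time.
```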

Film stretching. The image is present in the film's emulsion, which is, in effect, glued to the carrier material. This carrier material can vary tremendously in physical properties with respect to stability under temperature and humidity, and when wet during the developing process. Older style film bases such as cellulose acetate are easily susceptible to dimensional change when subjected to atmospheric and mechanical stress. The commercial processing of film, where it is drawn through the processing tanks and then force dried with hot air, can leave the best photogrammetric procedure in tatters. The author has actually experienced some 35 mm colour transparencies of original size 36 by 24 mm returning from the one-hour fast commercial processor measuring 38 by 24 mm. An accuracy of under 1 part in 20 was not part of the photogrammetric design. Modern films use carriers with names like Cronaflex, made from ester plastics. They are truly stable, but are not used for most small format commercial films, only specialist or aerial survey camera films. One warning if using these modern films: you must have scissors in the dark-room, as they are impossible to tear off a roll by biting, as is the conventional practice.

Film deformation is, of course, the very reason why fiducial marks, reseau or grid plates were invented and added to, or in front of, the image plane of cameras used for serious photogrammetry. A further note of caution must be added here when applying a two-dimensional transformation to allow for film stretch or deformation based on measurements at fiducial or grid crosses. A conformal (4-parameter), rather than affine (6-parameter), transformation should be selected: although the residuals will always be less with the latter, some undesirable warping may be introduced as a result of applying the affine transformation.
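The recommended conformal (4-parameter) transformation can be fitted to matched fiducial measurements by linear least squares. A minimal sketch under our own naming (not code from the handbook):

```python
import numpy as np

def fit_conformal(measured, reference):
    """Least-squares 4-parameter conformal (similarity) transform:
        x' = a*x - b*y + tx
        y' = b*x + a*y + ty
    fitted to matched fiducial co-ordinates; returns (a, b, tx, ty)."""
    m = np.asarray(measured, dtype=float)
    r = np.asarray(reference, dtype=float)
    n = len(m)
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = m[:, 0]; A[0::2, 1] = -m[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = m[:, 1]; A[1::2, 1] = m[:, 0];  A[1::2, 3] = 1.0
    rhs = r.reshape(-1)  # interleaved x'0, y'0, x'1, y'1, ...
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return params
```

Unlike the affine form, this preserves shape (one scale and one rotation for both axes), so it cannot introduce the warping warned about above.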
A projective (8-parameter) transformation should never be applied.

Holding the Film Flat During Exposure

There are basically three options for film on an image plane:

(1) Do nothing. This is the existing situation in most 35 and 70 mm cameras. The film tends to bow away from the pressure backing plate by approximately 0.5 to 1.0 mm for these types of camera respectively. This makes no discernible difference to your holiday snap-shots, but can completely ruin any high accuracy task requiring accuracies greater than about 1:500. The principal distance takes different values across the image and the geometric fidelity of the photogrammetric solution is severely compromised.

(2) Glass plate. A glass plate added just in front of the image plane does two things: it constrains the amount of flexing that the film can do, and it is an excellent medium to accept fiducial marks or grid crosses. Two problems remain: the film is still not perfectly flat (despite even a pressure plate in the film wind-on mechanism pushing the film against the glass, the film buckles in a so-called orange-peel fashion), and the presence of the glass plate acts like an extra lens element, causing the normal focussing to change and more

radial distortion to be present. This latter effect may be overcome by calibration procedures.

(3) Vacuum. This is the most expensive and complicated option. It is also the only one that satisfies all high accuracy constraints. One word of caution: the backing plate must be flattened to precise engineering tolerances, or else the concept of holding the film tightly against it becomes a nonsense.

Film unflatness is considered to be the main factor limiting the attainment of higher accuracies in non-metric small format photogrammetry. The deviation of the film away from its backing plate has been determined (for example, Donnelly, 1990) to be up to 0.5 mm in 35 mm cameras (image format 24 x 36 mm) and estimated to be as high as 1.0 mm in 70 mm cameras (image format 55 x 55 mm). Bulges cannot be considered central or symmetric, although for a particular film type and camera the bulge tends to be highly consistent over a sequence of exposures. One problem associated with film unflatness is that during the later measurement of the film, the bulge existing at the time of exposure cannot be re-created. In fact, the film will almost certainly be flattened out with a glass plate in the set-up phase prior to measuring, causing the film's dimensions to be 50 to 100 µm longer than when in the back of the camera. Whilst this would intuitively suggest a radial movement (distortion) of image locations, which would be included in any radial distortion parameters derived from the film measurement process, it has in fact been shown that this film bulge does not influence the radial distortion parameters of the lens. The bulge effect is very largely removed from consideration if a conformal or affine transformation is used in the interior orientation procedure before the lens distortion calculations commence. Localised expansion or shrinkage will not be effectively dealt with by corner or side fiducial marks; an extensive grid of reseau crosses is required to counteract such effects.
Much investigation has taken place as to the most appropriate formula to use for interpolation inside such a reseau. A transformation based on the nearest surrounding four crosses seems most appropriate, and Kotowski and Weber (1984) proposed a bilinear interpolation

x' = a1 + a2·x + a3·y + a4·xy
y' = a5 + a6·x + a7·y + a8·xy    (4.12)

that provides a unique solution and avoids discontinuity problems across the edges of the reseau cells.

Robson (1990) conducted a study into the dimensional stability of films and concluded that humidity was a major factor. Changes in moisture content cause the emulsion to expand or contract, and this change must be resisted by the base material. Robson's work demonstrates many of the uncertainties inherent in analogue camera systems and provides reasons why accuracies greater than 1:5000 are extremely difficult to obtain with simple non-metric cameras which do not possess vacuum, reseau or other film flattening devices.
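The bilinear interpolation of equation (4.12) can be applied per reseau cell by solving an exactly determined 4x4 system at the four surrounding crosses. A minimal sketch (function names are ours):

```python
import numpy as np

def bilinear_coefficients(measured, reference):
    """Solve the eight coefficients of the bilinear correction
        x' = a1 + a2*x + a3*y + a4*x*y
        y' = a5 + a6*x + a7*y + a8*x*y
    from the four reseau crosses surrounding a point (exact solution)."""
    m = np.asarray(measured, dtype=float)   # four measured crosses
    r = np.asarray(reference, dtype=float)  # their calibrated positions
    A = np.column_stack([np.ones(4), m[:, 0], m[:, 1], m[:, 0] * m[:, 1]])
    ax = np.linalg.solve(A, r[:, 0])  # a1..a4
    ay = np.linalg.solve(A, r[:, 1])  # a5..a8
    return ax, ay

def apply_bilinear(ax, ay, x, y):
    """Correct a point inside the cell using the solved coefficients."""
    basis = np.array([1.0, x, y, x * y])
    return float(ax @ basis), float(ay @ basis)
```

Because the interpolation is exact at the four crosses, adjacent cells agree along their shared edges, which is the continuity property noted above.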

2.11 Selection of the appropriate model for a given measurement task

The selection of an appropriate model for lens distortion will depend on the application. If a wide-angle lens (of short focal length) is used, then distortion is likely to be greater than for a normal lens. If the quantity of distortion is unknown, a straight line can be imaged at the edge of the format and the deviation of the imaged line from straightness measured using any simple graphics program. Remember that this only indicates the difference in distortion between the middle of the line and its ends, not the level of gross distortion.

Table 2.1 gives a rough idea of which lens distortion modelling parameters might be applied to achieve a given level of accuracy. It is only approximate, as the requirements change considerably depending on the focal length and design of the lens, but at least the progression of the general scheme is indicated.

Level of accuracy required across whole image | Model parameters required                | Comment
5 pixels    | K1                                       | Gross lens distortion removed
1 pixel     | x_pp, y_pp, K1                           | Improvement due to principal point location
0.5 pixels  | x_pp, y_pp, K1, P1, P2                   | Decentring distortion added
0.1 pixels  | x_pp, y_pp, c, K1, P1, P2                | More accurate calibration needed
0.05 pixels | x_pp, y_pp, c, K1, K2, K3, P1, P2        | Higher order lens distortion terms
0.01 pixels | x_pp, y_pp, c, K1, K2, K3, P1, P2, a, b  | Sensor parameters required

Table 2.1. Approximate indication of which lens parameters to use

This table only gives a rough guide. Using more parameters is not detrimental if the calibration method is capable of estimating each parameter properly, but over-parameterisation can lead to unrealistic precision estimates compared to the accuracy of calibration actually achieved.
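The parameter progression in Table 2.1 corresponds to the standard radial (K1, K2, K3) and decentering (P1, P2) correction terms. The sketch below applies these corrections to one image point; sign conventions vary between texts, so treat it as illustrative rather than the handbook's definitive formulation:

```python
def correct_image_point(x, y, p):
    """Apply radial (K1, K2, K3) and decentering (P1, P2) corrections
    about the principal point (x_pp, y_pp); additive-correction form."""
    xb = x - p["x_pp"]
    yb = y - p["y_pp"]
    r2 = xb * xb + yb * yb
    radial = p["K1"] * r2 + p["K2"] * r2**2 + p["K3"] * r2**3
    dx = xb * radial + p["P1"] * (r2 + 2 * xb * xb) + 2 * p["P2"] * xb * yb
    dy = yb * radial + p["P2"] * (r2 + 2 * yb * yb) + 2 * p["P1"] * xb * yb
    return x + dx, y + dy

# Setting higher-order terms to zero (e.g. K2 = K3 = P1 = P2 = 0)
# reproduces the simpler rows of Table 2.1.
```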
Hence, for many applications a reasonably full model can be used without problems, although more computations will be required and the calibration may be more difficult.

2.12 Summary

Knowledge of the geometry of the bundle of light rays that pass through a lens is essential for the accurate application of high accuracy techniques. The formulae described in this chapter are capable of correcting the geometric location of an image to within a few tenths of a micrometre. It is often important that the user does not become too concerned or involved with the mechanics of calibration techniques, since it is really the accurate determination of object co-ordinates in 2-D or 3-D that is usually the primary aim. In routine tasks,


More information

CAMERA BASICS. Stops of light

CAMERA BASICS. Stops of light CAMERA BASICS Stops of light A stop of light isn t a quantifiable measurement it s a relative measurement. A stop of light is defined as a doubling or halving of any quantity of light. The word stop is

More information

Sample Copy. Not For Distribution.

Sample Copy. Not For Distribution. Photogrammetry, GIS & Remote Sensing Quick Reference Book i EDUCREATION PUBLISHING Shubham Vihar, Mangla, Bilaspur, Chhattisgarh - 495001 Website: www.educreation.in Copyright, 2017, S.S. Manugula, V.

More information

Ch 24. Geometric Optics

Ch 24. Geometric Optics text concept Ch 24. Geometric Optics Fig. 24 3 A point source of light P and its image P, in a plane mirror. Angle of incidence =angle of reflection. text. Fig. 24 4 The blue dashed line through object

More information

Image Formation. Light from distant things. Geometrical optics. Pinhole camera. Chapter 36

Image Formation. Light from distant things. Geometrical optics. Pinhole camera. Chapter 36 Light from distant things Chapter 36 We learn about a distant thing from the light it generates or redirects. The lenses in our eyes create images of objects our brains can process. This chapter concerns

More information

NON-METRIC BIRD S EYE VIEW

NON-METRIC BIRD S EYE VIEW NON-METRIC BIRD S EYE VIEW Prof. A. Georgopoulos, M. Modatsos Lab. of Photogrammetry, Dept. of Rural & Surv. Engineering, National Technical University of Athens, 9, Iroon Polytechniou, GR-15780 Greece

More information

Chapter 29/30. Wave Fronts and Rays. Refraction of Sound. Dispersion in a Prism. Index of Refraction. Refraction and Lenses

Chapter 29/30. Wave Fronts and Rays. Refraction of Sound. Dispersion in a Prism. Index of Refraction. Refraction and Lenses Chapter 29/30 Refraction and Lenses Refraction Refraction the bending of waves as they pass from one medium into another. Caused by a change in the average speed of light. Analogy A car that drives off

More information

Cameras. Steve Rotenberg CSE168: Rendering Algorithms UCSD, Spring 2017

Cameras. Steve Rotenberg CSE168: Rendering Algorithms UCSD, Spring 2017 Cameras Steve Rotenberg CSE168: Rendering Algorithms UCSD, Spring 2017 Camera Focus Camera Focus So far, we have been simulating pinhole cameras with perfect focus Often times, we want to simulate more

More information

TSBB09 Image Sensors 2018-HT2. Image Formation Part 1

TSBB09 Image Sensors 2018-HT2. Image Formation Part 1 TSBB09 Image Sensors 2018-HT2 Image Formation Part 1 Basic physics Electromagnetic radiation consists of electromagnetic waves With energy That propagate through space The waves consist of transversal

More information

Lecture 2: Geometrical Optics. Geometrical Approximation. Lenses. Mirrors. Optical Systems. Images and Pupils. Aberrations.

Lecture 2: Geometrical Optics. Geometrical Approximation. Lenses. Mirrors. Optical Systems. Images and Pupils. Aberrations. Lecture 2: Geometrical Optics Outline 1 Geometrical Approximation 2 Lenses 3 Mirrors 4 Optical Systems 5 Images and Pupils 6 Aberrations Christoph U. Keller, Leiden Observatory, keller@strw.leidenuniv.nl

More information

Understanding Optical Specifications

Understanding Optical Specifications Understanding Optical Specifications Optics can be found virtually everywhere, from fiber optic couplings to machine vision imaging devices to cutting-edge biometric iris identification systems. Despite

More information

Performance Comparison of Spectrometers Featuring On-Axis and Off-Axis Grating Rotation

Performance Comparison of Spectrometers Featuring On-Axis and Off-Axis Grating Rotation Performance Comparison of Spectrometers Featuring On-Axis and Off-Axis Rotation By: Michael Case and Roy Grayzel, Acton Research Corporation Introduction The majority of modern spectrographs and scanning

More information

Geometry of Aerial Photographs

Geometry of Aerial Photographs Geometry of Aerial Photographs Aerial Cameras Aerial cameras must be (details in lectures): Geometrically stable Have fast and efficient shutters Have high geometric and optical quality lenses They can

More information

Colour correction for panoramic imaging

Colour correction for panoramic imaging Colour correction for panoramic imaging Gui Yun Tian Duke Gledhill Dave Taylor The University of Huddersfield David Clarke Rotography Ltd Abstract: This paper reports the problem of colour distortion in

More information

ABC Math Student Copy. N. May ABC Math Student Copy. Physics Week 13(Sem. 2) Name. Light Chapter Summary Cont d 2

ABC Math Student Copy. N. May ABC Math Student Copy. Physics Week 13(Sem. 2) Name. Light Chapter Summary Cont d 2 Page 1 of 12 Physics Week 13(Sem. 2) Name Light Chapter Summary Cont d 2 Lens Abberation Lenses can have two types of abberation, spherical and chromic. Abberation occurs when the rays forming an image

More information

E X P E R I M E N T 12

E X P E R I M E N T 12 E X P E R I M E N T 12 Mirrors and Lenses Produced by the Physics Staff at Collin College Copyright Collin College Physics Department. All Rights Reserved. University Physics II, Exp 12: Mirrors and Lenses

More information

Robert B.Hallock Draft revised April 11, 2006 finalpaper2.doc

Robert B.Hallock Draft revised April 11, 2006 finalpaper2.doc How to Optimize the Sharpness of Your Photographic Prints: Part II - Practical Limits to Sharpness in Photography and a Useful Chart to Deteremine the Optimal f-stop. Robert B.Hallock hallock@physics.umass.edu

More information

Volume 1 - Module 6 Geometry of Aerial Photography. I. Classification of Photographs. Vertical

Volume 1 - Module 6 Geometry of Aerial Photography. I. Classification of Photographs. Vertical RSCC Volume 1 Introduction to Photo Interpretation and Photogrammetry Table of Contents Module 1 Module 2 Module 3.1 Module 3.2 Module 4 Module 5 Module 6 Module 7 Module 8 Labs Volume 1 - Module 6 Geometry

More information

Camera Calibration Certificate No: DMC III 27542

Camera Calibration Certificate No: DMC III 27542 Calibration DMC III Camera Calibration Certificate No: DMC III 27542 For Peregrine Aerial Surveys, Inc. #201 1255 Townline Road Abbotsford, B.C. V2T 6E1 Canada Calib_DMCIII_27542.docx Document Version

More information

CS 443: Imaging and Multimedia Cameras and Lenses

CS 443: Imaging and Multimedia Cameras and Lenses CS 443: Imaging and Multimedia Cameras and Lenses Spring 2008 Ahmed Elgammal Dept of Computer Science Rutgers University Outlines Cameras and lenses! 1 They are formed by the projection of 3D objects.

More information

Image Fusion. Pan Sharpening. Pan Sharpening. Pan Sharpening: ENVI. Multi-spectral and PAN. Magsud Mehdiyev Geoinfomatics Center, AIT

Image Fusion. Pan Sharpening. Pan Sharpening. Pan Sharpening: ENVI. Multi-spectral and PAN. Magsud Mehdiyev Geoinfomatics Center, AIT 1 Image Fusion Sensor Merging Magsud Mehdiyev Geoinfomatics Center, AIT Image Fusion is a combination of two or more different images to form a new image by using certain algorithms. ( Pohl et al 1998)

More information

Criteria for Optical Systems: Optical Path Difference How do we determine the quality of a lens system? Several criteria used in optical design

Criteria for Optical Systems: Optical Path Difference How do we determine the quality of a lens system? Several criteria used in optical design Criteria for Optical Systems: Optical Path Difference How do we determine the quality of a lens system? Several criteria used in optical design Computer Aided Design Several CAD tools use Ray Tracing (see

More information

Invited Review Paper Digital camera self -calibration

Invited Review Paper Digital camera self -calibration ISPRS Journal of Photogrammetry & Remote Sensing 5 (997) 49-59 Invited Review Paper Digital camera self -calibration Clive S. Fraser * Department of Geomatics, The University of Melbourne, Parkville, Vic.

More information

CALIBRATION OF AN AMATEUR CAMERA FOR VARIOUS OBJECT DISTANCES

CALIBRATION OF AN AMATEUR CAMERA FOR VARIOUS OBJECT DISTANCES CALIBRATION OF AN AMATEUR CAMERA FOR VARIOUS OBJECT DISTANCES Sanjib K. Ghosh, Monir Rahimi and Zhengdong Shi Laval University 1355 Pav. Casault, Laval University QUEBEC G1K 7P4 CAN A D A Commission V

More information

The suitability of the Pulnix TM6CN CCD camera for photogrammetric measurement. S. Robson, T.A. Clarke, & J. Chen.

The suitability of the Pulnix TM6CN CCD camera for photogrammetric measurement. S. Robson, T.A. Clarke, & J. Chen. The suitability of the Pulnix TM6CN CCD camera for photogrammetric measurement S. Robson, T.A. Clarke, & J. Chen. School of Engineering, City University, Northampton Square, LONDON, EC1V OHB, U.K. ABSTRACT

More information

Waves & Oscillations

Waves & Oscillations Physics 42200 Waves & Oscillations Lecture 33 Geometric Optics Spring 2013 Semester Matthew Jones Aberrations We have continued to make approximations: Paraxial rays Spherical lenses Index of refraction

More information

Why select a BOS zoom lens over a COTS lens?

Why select a BOS zoom lens over a COTS lens? Introduction The Beck Optronic Solutions (BOS) range of zoom lenses are sometimes compared to apparently equivalent commercial-off-the-shelf (or COTS) products available from the large commercial lens

More information

not to be republished NCERT Introduction To Aerial Photographs Chapter 6

not to be republished NCERT Introduction To Aerial Photographs Chapter 6 Chapter 6 Introduction To Aerial Photographs Figure 6.1 Terrestrial photograph of Mussorrie town of similar features, then we have to place ourselves somewhere in the air. When we do so and look down,

More information

Lenses- Worksheet. (Use a ray box to answer questions 3 to 7)

Lenses- Worksheet. (Use a ray box to answer questions 3 to 7) Lenses- Worksheet 1. Look at the lenses in front of you and try to distinguish the different types of lenses? Describe each type and record its characteristics. 2. Using the lenses in front of you, look

More information

SFR 406 Spring 2015 Lecture 7 Notes Film Types and Filters

SFR 406 Spring 2015 Lecture 7 Notes Film Types and Filters SFR 406 Spring 2015 Lecture 7 Notes Film Types and Filters 1. Film Resolution Introduction Resolution relates to the smallest size features that can be detected on the film. The resolving power is a related

More information

1.6 Beam Wander vs. Image Jitter

1.6 Beam Wander vs. Image Jitter 8 Chapter 1 1.6 Beam Wander vs. Image Jitter It is common at this point to look at beam wander and image jitter and ask what differentiates them. Consider a cooperative optical communication system that

More information

Comparison of FRD (Focal Ratio Degradation) for Optical Fibres with Different Core Sizes By Neil Barrie

Comparison of FRD (Focal Ratio Degradation) for Optical Fibres with Different Core Sizes By Neil Barrie Comparison of FRD (Focal Ratio Degradation) for Optical Fibres with Different Core Sizes By Neil Barrie Introduction The purpose of this experimental investigation was to determine whether there is a dependence

More information

Lecture 2: Geometrical Optics. Geometrical Approximation. Lenses. Mirrors. Optical Systems. Images and Pupils. Aberrations.

Lecture 2: Geometrical Optics. Geometrical Approximation. Lenses. Mirrors. Optical Systems. Images and Pupils. Aberrations. Lecture 2: Geometrical Optics Outline 1 Geometrical Approximation 2 Lenses 3 Mirrors 4 Optical Systems 5 Images and Pupils 6 Aberrations Christoph U. Keller, Leiden Observatory, keller@strw.leidenuniv.nl

More information

Unit 1: Image Formation

Unit 1: Image Formation Unit 1: Image Formation 1. Geometry 2. Optics 3. Photometry 4. Sensor Readings Szeliski 2.1-2.3 & 6.3.5 1 Physical parameters of image formation Geometric Type of projection Camera pose Optical Sensor

More information

Applied Optics. , Physics Department (Room #36-401) , ,

Applied Optics. , Physics Department (Room #36-401) , , Applied Optics Professor, Physics Department (Room #36-401) 2290-0923, 019-539-0923, shsong@hanyang.ac.kr Office Hours Mondays 15:00-16:30, Wednesdays 15:00-16:30 TA (Ph.D. student, Room #36-415) 2290-0921,

More information

Chapter 36. Image Formation

Chapter 36. Image Formation Chapter 36 Image Formation Notation for Mirrors and Lenses The object distance is the distance from the object to the mirror or lens Denoted by p The image distance is the distance from the image to the

More information

Evaluation of Distortion Error with Fuzzy Logic

Evaluation of Distortion Error with Fuzzy Logic Key Words: Distortion, fuzzy logic, radial distortion. SUMMARY Distortion can be explained as the occurring of an image at a different place instead of where it is required. Modern camera lenses are relatively

More information

The principles of CCTV design in VideoCAD

The principles of CCTV design in VideoCAD The principles of CCTV design in VideoCAD 1 The principles of CCTV design in VideoCAD Part VI Lens distortion in CCTV design Edition for VideoCAD 8 Professional S. Utochkin In the first article of this

More information

Physics 11. Unit 8 Geometric Optics Part 2

Physics 11. Unit 8 Geometric Optics Part 2 Physics 11 Unit 8 Geometric Optics Part 2 (c) Refraction (i) Introduction: Snell s law Like water waves, when light is traveling from one medium to another, not only does its wavelength, and in turn the

More information

Laboratory 7: Properties of Lenses and Mirrors

Laboratory 7: Properties of Lenses and Mirrors Laboratory 7: Properties of Lenses and Mirrors Converging and Diverging Lens Focal Lengths: A converging lens is thicker at the center than at the periphery and light from an object at infinity passes

More information

Industrial quality control HASO for ensuring the quality of NIR optical components

Industrial quality control HASO for ensuring the quality of NIR optical components Industrial quality control HASO for ensuring the quality of NIR optical components In the sector of industrial detection, the ability to massproduce reliable, high-quality optical components is synonymous

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Filter Design Circularly symmetric 2-D low-pass filter Pass-band radial frequency: ω p Stop-band radial frequency: ω s 1 δ p Pass-band tolerances: δ

More information

Evaluating Commercial Scanners for Astronomical Images. The underlying technology of the scanners: Pixel sizes:

Evaluating Commercial Scanners for Astronomical Images. The underlying technology of the scanners: Pixel sizes: Evaluating Commercial Scanners for Astronomical Images Robert J. Simcoe Associate Harvard College Observatory rjsimcoe@cfa.harvard.edu Introduction: Many organizations have expressed interest in using

More information

Some of the important topics needed to be addressed in a successful lens design project (R.R. Shannon: The Art and Science of Optical Design)

Some of the important topics needed to be addressed in a successful lens design project (R.R. Shannon: The Art and Science of Optical Design) Lens design Some of the important topics needed to be addressed in a successful lens design project (R.R. Shannon: The Art and Science of Optical Design) Focal length (f) Field angle or field size F/number

More information

PRINCIPLE PROCEDURE ACTIVITY. AIM To observe diffraction of light due to a thin slit.

PRINCIPLE PROCEDURE ACTIVITY. AIM To observe diffraction of light due to a thin slit. ACTIVITY 12 AIM To observe diffraction of light due to a thin slit. APPARATUS AND MATERIAL REQUIRED Two razor blades, one adhesive tape/cello-tape, source of light (electric bulb/ laser pencil), a piece

More information

ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES

ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES Petteri PÖNTINEN Helsinki University of Technology, Institute of Photogrammetry and Remote Sensing, Finland petteri.pontinen@hut.fi KEY WORDS: Cocentricity,

More information

Optical Coherence: Recreation of the Experiment of Thompson and Wolf

Optical Coherence: Recreation of the Experiment of Thompson and Wolf Optical Coherence: Recreation of the Experiment of Thompson and Wolf David Collins Senior project Department of Physics, California Polytechnic State University San Luis Obispo June 2010 Abstract The purpose

More information

Lab 10. Images with Thin Lenses

Lab 10. Images with Thin Lenses Lab 10. Images with Thin Lenses Goals To learn experimental techniques for determining the focal lengths of positive (converging) and negative (diverging) lenses in conjunction with the thin-lens equation.

More information

Radial Polarization Converter With LC Driver USER MANUAL

Radial Polarization Converter With LC Driver USER MANUAL ARCoptix Radial Polarization Converter With LC Driver USER MANUAL Arcoptix S.A Ch. Trois-portes 18 2000 Neuchâtel Switzerland Mail: info@arcoptix.com Tel: ++41 32 731 04 66 Principle of the radial polarization

More information

Image Formation by Lenses

Image Formation by Lenses Image Formation by Lenses Bởi: OpenStaxCollege Lenses are found in a huge array of optical instruments, ranging from a simple magnifying glass to the eye to a camera s zoom lens. In this section, we will

More information

Tangents. The f-stops here. Shedding some light on the f-number. by Marcus R. Hatch and David E. Stoltzmann

Tangents. The f-stops here. Shedding some light on the f-number. by Marcus R. Hatch and David E. Stoltzmann Tangents Shedding some light on the f-number The f-stops here by Marcus R. Hatch and David E. Stoltzmann The f-number has peen around for nearly a century now, and it is certainly one of the fundamental

More information

Properties of Structured Light

Properties of Structured Light Properties of Structured Light Gaussian Beams Structured light sources using lasers as the illumination source are governed by theories of Gaussian beams. Unlike incoherent sources, coherent laser sources

More information

Handbook of practical camera calibration methods and models CHAPTER 5 CAMERA CALIBRATION CASE STUDIES

Handbook of practical camera calibration methods and models CHAPTER 5 CAMERA CALIBRATION CASE STUDIES CHAPTER 5 CAMERA CALIBRATION CASE STUDIES Executive summary This chapter discusses a number of calibration procedures for determination of the focal length, principal point, radial and tangential lens

More information

Camera Calibration Certificate No: DMC II

Camera Calibration Certificate No: DMC II Calibration DMC II 230 015 Camera Calibration Certificate No: DMC II 230 015 For Air Photographics, Inc. 2115 Kelly Island Road MARTINSBURG WV 25405 USA Calib_DMCII230-015_2014.docx Document Version 3.0

More information

Image Formation and Capture. Acknowledgment: some figures by B. Curless, E. Hecht, W.J. Smith, B.K.P. Horn, and A. Theuwissen

Image Formation and Capture. Acknowledgment: some figures by B. Curless, E. Hecht, W.J. Smith, B.K.P. Horn, and A. Theuwissen Image Formation and Capture Acknowledgment: some figures by B. Curless, E. Hecht, W.J. Smith, B.K.P. Horn, and A. Theuwissen Image Formation and Capture Real world Optics Sensor Devices Sources of Error

More information

Geometric optics & aberrations

Geometric optics & aberrations Geometric optics & aberrations Department of Astrophysical Sciences University AST 542 http://www.northerneye.co.uk/ Outline Introduction: Optics in astronomy Basics of geometric optics Paraxial approximation

More information