© 2007 by Joseph Donald Coombs. All rights reserved.

A NOVEL DEFOCUS BLURRING MODEL OF LAYERED DEPTH SCENES FOR COMPUTATIONAL PHOTOGRAPHY

BY

JOSEPH DONALD COOMBS

B.S., University of Tulsa, 2005

THESIS

Submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical and Computer Engineering in the Graduate College of the University of Illinois at Urbana-Champaign, 2007

Urbana, Illinois

ABSTRACT

Image blurring due to optical defocus is problematic in the presence of occlusion. The interaction between occluding and occluded objects may be described through ray optics, but this solution often proves intractable in application. Conversely, mathematically concise models (i.e., convolution) offer computational ease at the expense of accuracy. This paper presents a novel blur model that boasts both applicability and accuracy. Two existing blur models are reviewed in order to derive this compromise model. The rigorous geometric derivations that lead to reversed projection blur are simplified to allow a formulation similar to the basic, convolutional multicomponent blur model. The three models are then tested using real and synthetic images. The novel model is shown to perform favorably in terms of subjective quality and objective metrics.

ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Minh Do, whose tireless support and boundless enthusiasm made this thesis possible. Credit is also due the many friends and colleagues who have made the past two years more manageable. Though too numerous to list in so brief a space, they have my sincere gratitude for their diligence and friendship. My thanks also extend to my family, whose love has been a constant comfort and inspiration.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
CHAPTER 1  INTRODUCTION
CHAPTER 2  BACKGROUND
    2.1 Optical Imaging and Defocus
    2.1.1 Simple Convolution Model
    2.2 Multicomponent Blur Model
    2.3 Reversed Projection Blur Model
    2.4 Motivation for a New Model
CHAPTER 3  A NOVEL BLUR MODEL
    3.1 Discrete Approximation to RPB
    3.2 A Novel Blur Model
        Modification for superior performance near image boundary
        Generalization to three or more object planes
    3.3 Image Synthesis
CHAPTER 4  APPLICATION TO REAL PHOTOGRAPHS
    4.1 Seeing Through Occlusions
    4.2 Image Synthesis
CHAPTER 5  CONCLUSION
    5.1 Future Work
REFERENCES
APPENDIX A  ENERGY-PRESERVING CONVOLUTION

LIST OF TABLES

Table 4.1: MSE and PSNR for the first experiment using three blur models.
Table 4.2: MSE/PSNR (dB) for the second experiment using three blur models and three focus settings.

LIST OF FIGURES

Figure 2.1: Simple optical imaging system. Distances d_o, d_i, and f related by imaging equation.
Figure 2.2: Example foreground and background objects. Left: 3D view. Right: Imaged with no blur.
Figure 2.3: One row of result obtained through MCB model. Left: d_f < d_o < d_b. Right: d_o = d_f < d_b. Dashed lines indicate individual contributions from foreground and background. These results do not change if the occluded background region is altered.
Figure 2.4: Left: Flux cone extending from circular lens aperture (base) to focused object point (tip). Right: Reverse-projected flux cone impinging upon foreground and background objects.
Figure 2.5: One row of result obtained through RPB model. Left: d_f < d_o < d_b. Right: d_o = d_f < d_b. Dashed lines indicate individual contributions from foreground and background.
Figure 2.6: Ray tracing diagram showing the RPB model's ability to see through occlusions.
Figure 2.7: Result under RPB model for d_f < d_o < d_b. Note that the bright, occluded background region is partially visible near the occlusion edge.
Figure 3.1: Heights y_o and y_i vary with d_o and d_i. If variation in d_i is negligible, then all y_o arriving at the same y_i fall on a single line. Approximation holds for d_o ≫ d_i.
Figure 3.2: Applied filter masks in foreground (left) and background (right). Note that the applied background segment is the same portion of its overall mask as the mask remainder in the foreground.
Figure 3.3: Left to right: Background image, simple occlusion edge, and arbitrary occlusion edge.
Figure 3.4: Images synthesized by three blur models. Left to right: MCB, discretized RPB, and novel blur model. (a) d_o = d_f, r_f = 0; (b) d_f < d_o < d_b, r_b = r_f; (c) d_o = d_b, r_b = 0; (d) d_o > d_b, r_f > r_b.
Figure 3.5: Left to right: Images synthesized by MCB and novel models using arbitrary occlusion edge and d_o = d_b (r_b = 0).
Figure 4.1: Objects used in the first experiment. Left: Foreground object. Center: Background object. Right: All-in-focus image of overall scene; foreground partially transparent to show extent of occlusion.

Figure 4.2: Image and one-row intensity profile in (a) MCB, (b) discretized RPB, (c) novel model, and (d) real image.
Figure 4.3: Objects used in the second experiment. Left: Foreground object. Center: Background object. Right: All-in-focus image of overall scene; foreground partially transparent to show extent of occlusion.
Figure 4.4: Images for d_o = d_b, r_f = 24.5, and r_b = 0 using (a) MCB, (b) discretized RPB, (c) novel model, and (d) real image.
Figure 4.5: Images for d_o < d_f, r_f = 15.6, and r_b = 36.0 using (a) MCB, (b) discretized RPB, (c) novel model, and (d) real image.
Figure 4.6: Images for d_o > d_b, r_f = 45.6, and r_b = 20.9 using (a) MCB, (b) discretized RPB, (c) novel model, and (d) real image.
Figure A.1: Optical image and rectangular sampling grid.
Figure A.2: One-dimensional example showing the origin and extent of blur darkening near a signal's original boundaries.
Figure A.3: Left to right: Unblurred image, image blurred with zero-padding, and image blurred with zero-padding and EPC.

CHAPTER 1

INTRODUCTION

Modeling image blurring due to optical defocus is an important problem in many image processing and computer vision applications. Often, defocus blur is treated simply as image degradation; it is the unwanted filter in an elementary inverse problem. Conversely, defocus blur can provide information about the observed scene. Multichannel image restoration uses two or more differently focused images to estimate the underlying image more accurately. Another application makes use of the optical origins of defocus blur: the distance of an object relative to the properly focused object plane determines the severity of its blur. In depth from defocus, two or more defocused images are used to estimate the physical distance between camera and object [1], [2]. Defocus blur may also be used to lend realism to synthetic images; superimposed objects may appear more natural when one exhibits defocus.

Universal to all applications is the desire to express optical defocus via an accurate yet easily applicable model. This thesis reviews two existing models, the Multicomponent Blur and Reversed Projection Blur models [3], [4], and analyzes the strengths and shortcomings of each. A novel blur model is then presented, and its application to image synthesis and segmentation is explored. This new model is shown to offer an attractive combination of physical accuracy and computational simplicity. Finally, all three blur models are applied to real photographs for objective comparison.

CHAPTER 2

BACKGROUND

In order to accurately model defocus blur, an understanding of optical imaging and defocus is essential. This paper limits its scope to the field of ray optics; diffractive effects are not considered in any derivations. This decision is justified for general photographic applications, but it precludes certain microscopy or astronomical imaging applications.

2.1 Optical Imaging and Defocus

A simple lens system is shown in Figure 2.1. An object at distance d_o creates an image at d_i. These distances are related by the focal length, f, of the lens in the well-known imaging equation (2.1). Note that this relationship is one-to-one; each d_i corresponds to one and only one d_o, and vice versa [5].

$$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f} \qquad (2.1)$$

$$R_B = R_A \, d_i' \left| \frac{1}{f} - \frac{1}{d_o} - \frac{1}{d_i'} \right| \qquad (2.2)$$
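As a quick numerical illustration of (2.1) and the blur radius (2.2) as reconstructed above, the following MATLAB fragment evaluates both; the specific values (a 50 mm lens at roughly f/2, focused at 2 m, sensor misplaced by 0.5 mm) are made up for illustration only.

```matlab
% Sanity check of the imaging equation (2.1) and blur radius (2.2).
f   = 0.050;                      % focal length (m)
d_o = 2.0;                        % object distance (m)
d_i = 1 / (1/f - 1/d_o);          % (2.1): d_i is about 51.3 mm

R_A       = 0.0125;               % aperture radius (m), roughly f/2
d_i_prime = d_i + 0.5e-3;         % sensor misplaced by 0.5 mm
R_B = R_A * d_i_prime * abs(1/f - 1/d_o - 1/d_i_prime);   % (2.2)
fprintf('d_i = %.2f mm, R_B = %.3f mm\n', 1e3*d_i, 1e3*R_B);
```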

Figure 2.1: Simple optical imaging system. Distances d_o, d_i, and f related by imaging equation.

Optical defocus arises when d_o and d_i are mismatched. Consider the case when the imaging plane is moved to some d_i' ≠ d_i. It can be shown through simple ray tracing that this causes each point in the focused image to swell to a circular region. Geometric derivation reveals that this circle's radius, R_B, is proportional to the aperture radius, R_A, and the magnitude of the focusing error, d_i' - d_i (2.2). The circular region itself is called the point spread function (PSF).

2.1.1 Simple Convolution Model

For fixed d_o and d_i, it is widely accepted that defocus blur can be modeled as the 2D convolution of the focused image and the PSF. While the precise form of the PSF is complicated and wavelength-dependent, it is commonly approximated either as uniform (the so-called pillbox PSF) (2.3) or as Gaussian (2.4) in broad-spectrum imaging applications [1].

To obtain the blurred image, one simply convolves the focused image i(x, y) with an appropriate PSF (2.5). The operator ∗ denotes 2D convolution, which is defined in the discrete domain as (2.6) [6].

$$h_p(x, y) = \begin{cases} \dfrac{1}{\pi R_B^2}, & x^2 + y^2 \le R_B^2 \\ 0, & \text{else} \end{cases} \qquad (2.3)$$

$$h_g(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{x^2 + y^2}{2\sigma^2}} \qquad (2.4)$$

$$i'(x, y) = i(x, y) \ast h(x, y) \qquad (2.5)$$

$$i(x, y) \ast h(x, y) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} i(m, n) \, h(x - m, y - n) \qquad (2.6)$$

However, this linear, shift-invariant (LSI) model breaks down for objects with varying depth; the object must be confined to a plane at z = d_o. Some variation in object depth is permissible in practice due to a real camera's depth of field; image sampling renders minor defocus not merely negligible but completely imperceptible. For this reason, it is possible to approximate many objects as flat for the purpose of defocus blur: for any d_o' ∈ [d_o - ε, d_o + ε], the corresponding image distance remains effectively d_i. An image containing two or more flat objects at different depths is called a layered depth image.
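As a concrete sketch of (2.3)-(2.6) in MATLAB (the tool named in Chapter 3), using the Image Processing Toolbox; the test image and blur parameters here are arbitrary:

```matlab
% Blur a single-depth (flat) image by convolution with a PSF.
i_focused = im2double(imread('cameraman.tif'));  % any grayscale image

R_B = 7;                                  % blur-circle radius in pixels
h_p = fspecial('disk', R_B);              % pillbox PSF (2.3); sums to one
h_g = fspecial('gaussian', 19, 3);        % Gaussian PSF (2.4), sigma = 3

i_pill  = conv2(i_focused, h_p, 'same');  % (2.5): i' = i * h
i_gauss = conv2(i_focused, h_g, 'same');
```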

2.2 Multicomponent Blur Model

While minor variation in object depth is tolerated by the convolution model, real objects commonly exhibit large depth discontinuities (e.g., transitions between foreground and background). The elementary case is a scene consisting of two approximately flat objects: one object at d_o = d_f superimposed over another object at d_o = d_b. This scene is said to have layered depth; either object, imaged in isolation, could be blurred using the convolution model. Indeed, the composite image contains regions in which simple convolution yields the proper result. These regions lie far from the objects' boundary, or occlusion edge. However, the overall LSI representation is no longer valid: using two separate blurring filters depending on (x, y) is at best a linear shift-varying (LSV) formulation. In fact, for pixel locations near the occlusion edge, the application of either blurring filter is inappropriate. The support of the foreground object is limited, and intuition dictates that the occluded portion of the background object should be omitted.

One representation of defocus in the presence of depth discontinuity is the multicomponent blur (MCB) model proposed by Nguyen and Huang [3]. This model uses focused images of each object to construct a composite at arbitrary focus. The support of the background image is limited such that o_b(x, y) ≠ 0 only for (x, y) ∉ X_f, where X_f is the support of the foreground image. In other words, the occluded portion of the background image is set to zero. Each image is convolved independently with its own PSF, and the results are added together (2.7). Note that here o_f(x, y) and o_b(x, y) are well-focused images of each object, not the objects themselves.

$$\begin{aligned} i_f(x, y) &= o_f(x, y) \ast h_f(x, y) \\ i_b(x, y) &= o_b(x, y) \ast h_b(x, y) \\ i(x, y) &= i_f(x, y) + i_b(x, y) \end{aligned} \qquad (2.7)$$

This derivation is pleasingly intuitive and allows straightforward application to discrete data, but it yields visual artifacts in practice.
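A minimal MATLAB sketch of (2.7), assuming focused layer images o_f, o_b, a binary map u that is 1 where the foreground is opaque, and pillbox radii r_f, r_b (all names are illustrative):

```matlab
function i_c = mcb(o_f, o_b, u, r_f, r_b)
    % Multicomponent blur (2.7): blur each layer independently and sum.
    o_b(u == 1) = 0;                     % zero out occluded background
    i_f = conv2(o_f, pillbox(r_f), 'same');
    i_b = conv2(o_b, pillbox(r_b), 'same');
    i_c = i_f + i_b;
end

function h = pillbox(r)
    % Energy-preserving circular mask; a scalar 1 acts as a delta PSF.
    if r < 1, h = 1; else, h = fspecial('disk', r); end
end
```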

Figure 2.2: Example foreground and background objects. Left: 3D view. Right: Imaged with no blur.

For instance, consider the simple object definitions given in Equation (2.8) and shown in Figure 2.2. (Here, let the foreground be transparent for o_f = 0 and opaque otherwise.) The first step in applying MCB to this scene is to zero out the occluded region of o_b(x, y), which encompasses all pixels with x > 0. Suppose that the focused object distance d_o exactly equals d_f. In other words, the foreground object is well-focused. Intuitively, one expects the resulting image to exactly match the all-in-focus image shown in Figure 2.2: the only object subject to blurring is the background, which is constant-valued and thus unaltered by any energy-preserving filter. However, when the background is blurred under MCB, its support expands through basic convolution, and the result gains nonzero values in the region x > 0. Similarly, its DC value falls off for negative x values near zero. This behavior introduces an unpleasant sharpening artifact when the convolved background image is added to the foreground image. This artifact is shown in Figure 2.3. Its appearance is similar to the Mach band optical illusion, but its origin is completely different [3]. It should also not be mistaken for a diffractive effect; no part of the MCB derivation departs from the ray optics regime.

Figure 2.3: One row of result obtained through MCB model. Left: d_f < d_o < d_b. Right: d_o = d_f < d_b. Dashed lines indicate individual contributions from foreground and background. These results do not change if the occluded background region is altered.

Rather, this artifact and others like it are caused by the overlap between foreground and background convolutions. While some interaction between foreground and background is expected near an occlusion edge, the MCB model does not capture the correct behavior.

$$o_f(x, y) = \begin{cases} 0.75, & x \le 0 \\ 0, & \text{else} \end{cases} \qquad o_b(x, y) = 0.25 \qquad (2.8)$$

2.3 Reversed Projection Blur Model

The MCB model provides a pleasingly simple tool for blurring depth-variant scenes, but its results near occlusion edges are inaccurate. Asada, Fujiwara, and Matsuyama present the reversed projection blur (RPB) model to address this issue [4]. In contrast to the previous models, RPB does not rely on convolution. Rather, fundamental geometric arguments are used to express the intensity at each image pixel as surface integrals over the objects o_f and o_b (2.9). (Note: here, o_f and o_b are objects, not images.)

Figure 2.4: Left: Flux cone extending from circular lens aperture (base) to focused object point (tip). Right: Reverse-projected flux cone impinging upon foreground and background objects.

The derivation of RPB begins by fixing d_i and finding the corresponding d_o via the imaging equation (2.1). As shown in Figure 2.4, simple ray tracing creates a flux cone defined by the lens aperture and (x_o, y_o, d_o), the focused object point. If an object is present at d_o, the flux cone contains all light rays that contribute to image pixel (x_i, y_i, d_i). This geometric construction is not new in RPB; similar reasoning derives the basic convolution blur model. However, RPB allows rays in the flux cone to individually stop or extend beyond (x_o, y_o, d_o). Each ray is traced away from the lens until it strikes an object, as in Figure 2.4. This creates the reverse-projected flux cone that gives the model its name. The portion of the cone intersected by various objects creates surfaces, S, over which luminous intensity is integrated to obtain i(x_i, y_i) (2.9). (The normalization term affixed to each integral is related to the flux cone's solid angle.)

$$i(x, y) = \frac{\pi R_A^2 \cos^4\theta}{d_i^2} \int_{S_f} \frac{o_f(x, y)}{\pi R_F^2} \, do_f + \frac{\pi R_A^2 \cos^4\theta}{d_i^2} \int_{S_b} \frac{o_b(x, y)}{\pi R_B^2} \, do_b \qquad (2.9)$$

$$R_F = R_A \frac{|d_F - d_o|}{d_o}, \qquad R_B = R_A \frac{|d_B - d_o|}{d_o} \qquad (2.10)$$
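The radii in (2.10) follow from similar triangles: along the cone whose base is the aperture (radius R_A at the lens, d = 0) and whose apex is the focused point (d = d_o), the cross-section radius varies linearly, which gives a one-line check:

$$r(d) = R_A \, \frac{|d - d_o|}{d_o}, \qquad r(0) = R_A, \quad r(d_o) = 0, \quad r(d_F) = R_F, \quad r(d_B) = R_B.$$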

Figure 2.5: One row of result obtained through RPB model. Left: d_f < d_o < d_b. Right: d_o = d_f < d_b. Dashed lines indicate individual contributions from foreground and background.

Revisiting the previous example allows one to contrast RPB and MCB qualitatively. Suppose the observed scene again consists of the foreground and background objects defined in (2.8) (Figure 2.2). Again, let d_o = d_f; the foreground object is in focus. For each image pixel corresponding to x_o ≤ 0, the flux cone terminates neatly on a single point (x_o, y_o) on the z = d_f plane. Thus, the foreground object is faithfully recreated on the imaging plane. Similarly, when x_o > 0, the flux cone is projected beyond the foreground to a complete circle in the z = d_b plane. Thus, the image intensity at those points depends wholly on a blurred version of the background object. This result, pictured in Figure 2.5, produces a sharp, focused edge free of visual artifacts. It has been demonstrated experimentally that this subjectively pleasing result is in fact correct [4].

RPB offers another interesting, though initially confusing, advantage over MCB. Consider the case when d_f < d_o < d_b; that is, the focused object plane lies between the foreground and background objects. As seen in Figures 2.3 and 2.5, both MCB and RPB yield similar results. However, there is a key difference hidden away within the calculations. Consider the image point (x_i, y_i) = (x_o, y_o) = 0. Under the MCB model, this pixel intensity is calculated as in (2.11), where o_b0 is zero-valued for x ≤ 0 (2.12).

Figure 2.6: Ray tracing diagram showing the RPB model's ability to see through occlusions.

The RPB model is computed as in (2.9), with S_f, S_b defined as in (2.13), (2.14). The definition of S_b is of particular interest: it contains only (x, y) locations that are occluded by the foreground! In other words, occluded portions of the background object contribute to image intensity under RPB. This important result is illustrated in Figure 2.6.

$$i_{MCB}(0, 0) = (o_f \ast h_f)(0, 0) + (o_{b0} \ast h_b)(0, 0) \qquad (2.11)$$

$$o_{b0}(x, y) = \begin{cases} 0.25, & x > 0 \\ 0, & \text{else} \end{cases} \qquad (2.12)$$

$$S_f = \left\{ (x, y) : x^2 + y^2 \le R_F^2, \; x \le 0 \right\} \qquad (2.13)$$

$$S_b = \left\{ (x, y) : x^2 + y^2 \le R_B^2, \; x < 0 \right\} \qquad (2.14)$$
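To make the difference concrete, the following back-of-the-envelope MATLAB fragment evaluates the center pixel under both models for the toy scene, using the bright occluded background of (2.15) below; since exactly half of each pillbox mask falls on either side of the straight edge, every term carries a factor of 1/2.

```matlab
% Center-pixel intensities under MCB (2.11) and RPB (2.9) for the toy
% scene with d_f < d_o < d_b.
o_f_val = 0.75;   % foreground intensity, per (2.8)
o_b_occ = 1.00;   % occluded (hidden) background intensity, per (2.15)
o_b_vis = 0.25;   % visible background intensity

% MCB: the occluded background is zeroed, so the visible side leaks in.
i_mcb = 0.5 * o_f_val + 0.5 * o_b_vis    % = 0.500

% RPB: S_b (2.14) covers only occluded locations, so the hidden bright
% region contributes instead.
i_rpb = 0.5 * o_f_val + 0.5 * o_b_occ    % = 0.875
```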

The RPB model's ability to see through occlusions can be demonstrated with another example. Consider the case where the background object is instead defined as in (2.15). (The foreground object is defined as before.)

Figure 2.7: Result under RPB model for d_f < d_o < d_b. Note that the bright, occluded background region is partially visible near the occlusion edge.

The occluded portion of the background now has relatively high intensity, but is not visible when the foreground object is focused. When d_f < d_o < d_b, the RPB model allows the bright, occluded region of the background to contribute to image intensity (Figure 2.7). This property of the RPB model has been verified experimentally [4], and has sparked considerable interest in occlusion-removal research [7], [8].

$$o_b(x, y) = \begin{cases} 1, & x \le 0 \\ 0.25, & \text{else} \end{cases} \qquad (2.15)$$

2.4 Motivation for a New Model

The RPB model allows significant improvement over MCB near occlusion edges, but this gain is not without cost. The process of reverse projection can be thought of as splitting a uniform, circular PSF (2.3) across multiple surfaces. However, when simple convolution suffices, it is widely held that the Gaussian PSF (2.4) yields superior results [2].

Additionally, the relative complexity of computation and the painstaking geometric knowledge of both camera and scene required under RPB render the model inapplicable to discrete images. These problems often preclude the use of RPB, despite its superior accuracy [9]. Therein lies the motivation for the development of a new blur model. The new model is formulated to combine the simplicity and applicability of MCB with the proper occlusion handling of RPB. The succeeding chapters will present the development and testing of such a model.

CHAPTER 3

A NOVEL BLUR MODEL

3.1 Discrete Approximation to RPB

The goal of the new blur model is to capture the accuracy of RPB while retaining MCB's applicability to discrete images with no specific geometric knowledge. Since RPB is formulated in the continuous domain, a natural first step is to develop a discrete approximation. This discretized RPB model must replace fundamental geometry and surface integrals with discrete operations that can be applied to independent foreground and background images with minimal prior information. Ideally, the application of this model would require only what MCB requires: two images and a PSF for each.

The geometry of optical imaging offers a method to greatly simplify RPB. As intimated in the previous chapter, an image formed by a simple lens system is magnified by the ratio of the image and object distances (3.1). (The negative sign indicates that the image is also inverted.) This implies that, for a fixed location on the imaging plane, changing the object distance alters the height of the focused object point (3.2).

Figure 3.1: Heights y_o and y_i vary with d_o and d_i. If variation in d_i is negligible, then all y_o arriving at the same y_i fall on a single line. Approximation holds for d_o ≫ d_i.

In other words, the image gets taller as object distance decreases, and object features appear in different image pixels accordingly. This effect is compounded by the fact that decreasing d_o leads to increasing d_i under the imaging equation (2.1). These relationships are entirely accurate under the regime of ray optics; no approximation has yet been made.

$$y_i = -\frac{d_i}{d_o} \, y_o \qquad (3.1)$$

$$d_o' = a \, d_o, \quad y_o' = a \, y_o \;\Longrightarrow\; y_i' = y_i \qquad (3.2)$$

Now suppose that d_i is approximately constant for a wide range of d_o. This is not such a stretch; a typical camera may adjust d_i on the order of millimeters to change the focused d_o on the order of meters. Under this approximation, it is readily seen that y_i corresponds to a set of y_o on a straight line drawn through the center of the lens (Figure 3.1). This line corresponds to the central ray in RPB's flux cone. This is an important result; it centers the circular cross section of the flux cone about the same pixel in both foreground and background images.
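A quick numerical check of this claim, using an assumed 50 mm focal length:

```matlab
% Sweep the focused object distance and watch how little d_i moves.
f   = 0.050;                         % focal length (m), illustrative
d_o = linspace(1, 10, 50);           % focused object distances (m)
d_i = 1 ./ (1/f - 1./d_o);           % imaging equation (2.1)
fprintf('d_i spans %.2f mm to %.2f mm\n', 1e3*min(d_i), 1e3*max(d_i));
% Prints roughly 50.25 mm to 52.63 mm: millimeters of sensor travel
% against meters of object travel.
```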

Having determined the location of the flux cone, the surface integrals required under RPB must be discretized. The mean of pixels within the circular cross section may suffice, but the normalization terms from equation (2.9) must be included. Suppose that the known PSFs from MCB are replaced by their radii, r_b and r_f, which also equal the conic section radius in each image. Now generate a filter mask for each image similar to the pillbox PSF (2.3) using r_f and r_b. In MATLAB, suitable filter masks are generated by the fspecial command in the Image Processing Toolbox. Since these filters are energy-preserving (i.e., sum to one), they satisfy the normalization required under RPB. For instance, one half of h_f sums to the same value as one half of h_b for any r_f and r_b.

At each pixel, only a fraction of each PSF actually contributes to the overall image intensity. In the foreground, it is the portion of the filter mask that falls within X_f, the support of the foreground image. The mask remainder, or unused portion of the foreground PSF, determines the applied background filter mask. For the sake of simplicity, suppose the occlusion edge is a straight line. The mask remainder and applied background mask comprise the same fraction or percentage of their overall filter masks (Figure 3.2). Note that this relationship is difficult to enforce when the foreground and background masks are unequally sized (i.e., r_b ≠ r_f). In this case, it is often necessary to scale the borderline values in the applied background mask in order to approximate the true edge, which falls on a noninteger location. This is also illustrated in Figure 3.2. Once the appropriate filter mask portions have been applied, the results are summed together. This process is then repeated for all pixels in the image.

Figure 3.2: Applied filter masks in foreground (left) and background (right). Note that the applied background segment is the same portion of its overall mask as the mask remainder in the foreground.

Unfortunately, these steps must be repeated for all pixel locations; the mask remainder changes depending on each pixel's proximity to the occlusion edge. Furthermore, the applied background segment must be flipped about an axis parallel to the occlusion edge for certain combinations of d_o, d_f, and d_b. Figure 2.6 shows that, when d_f < d_o < d_b, the applied background segment occupies the opposite side of the flux cone's cross section. Thus, to accurately emulate RPB, it is necessary to know whether or not d_o falls between the foreground and background planes. It is not possible to uniquely determine d_o, d_f, and d_b given only r_f and r_b using (2.2); the problem is underdetermined.

Let i_f(x, y), i_b(x, y), and i_RPB(x, y) be the focused foreground image, focused background image, and synthesized composite image, respectively. The discretized RPB algorithm can be summarized as follows:

1. Find mask remainder: segment of foreground mask that is cut off by occlusion edge.
2. Use mask remainder to determine appropriate segment of background filter mask.

3. If d_f < d_o < d_b, flip the applied background segment.
4. Apply full foreground filter mask and partial background filter mask to find i_f(x_a, y_a) and i_b(x_a, y_a), where (x_a, y_a) is the current pixel location.
5. i_RPB(x_a, y_a) = i_f(x_a, y_a) + i_b(x_a, y_a).
6. Repeat for all (x, y).

Even given the limiting assumption that the occlusion edge is a straight line, this algorithm is relatively cumbersome. There are a few tricks that can ease the computational burden. For instance, if the occlusion edge is a vertical line, then all pixels with the same x_a share the same mask remainder and applied background filter mask. However, the algorithm is not easily generalized to arbitrary occlusion shapes. Additionally, it requires knowledge of the scene geometry: the object distances relative to the focused object distance must be known. Thus, the overall goal of matching MCB's ease of use is not achieved by the discretized RPB model.

3.2 A Novel Blur Model

Three problems hinder the discretized RPB model: it requires geometric information beyond the blur radii of the foreground and background PSFs, it requires that the occluding edge be limited to a straight line, and it cannot be expressed in terms of LSI convolution. The novel blur method presented in this paper will address each of these in turn.

First, the required knowledge of d_o compared to d_f and d_b must be dispensed with. Meeting this requirement calls for another approximation. Instead of laboring to carve out the correct portion of the background filter mask, consider simply multiplying the entire background filter mask by an appropriate scalar value. For instance, if the foreground object occupies half of its filter mask, the background filter is scaled by half. In general, if the foreground object occupies a ratio α of its filter mask, the background filter is scaled by (1 - α). This approximation allows contributions from background pixels that should technically be excluded, but it solves the first problem posed by discretized RPB. In fact, it solves the remaining two problems as well.

The quantity α must be computed for every pixel location, but its solution is straightforward. Consider a function u(x, y) that describes the support of the foreground object (3.3). (Let X_f be the set of all opaque (x, y) within i_f(x, y).) This function is called the occlusion map of the overall image; it is zero in the region where the background is visible when both objects are sharply focused. Assuming that the foreground PSF h_f is energy-preserving and positive-valued, α is simply the convolution of h_f and u (3.4).

$$u(x, y) = \begin{cases} 1, & (x, y) \in X_f \\ 0, & \text{else} \end{cases} \qquad (3.3)$$

$$\alpha(x, y) = h_f(x, y) \ast u(x, y) \qquad (3.4)$$

Already, the new blur model is nearing completion. This formulation of α solves both remaining problems with discretized RPB: it allows the occlusion map, u(x, y), to be arbitrarily shaped, and it allows every step of the process to be written in terms of LSI convolution. The foreground and background images are convolved with their respective PSFs, and the results are weighted according to α and summed (3.5). This notation is similar to image compositing or matting [10]-[12], but the underlying problems are exact opposites. Video matting seeks to solve for α given an image, while this paper's blur model creates α as it generates the image itself.

This blending of independently blurred foreground and background also appears in [7], but here it is both more concise and more general.

One element of (3.5) remains unexplained. The foreground object is darkened near the occlusion edge during convolution. (This familiar effect is depicted in Figure A.3 in the Appendix.) This is a natural consequence of LSI convolution, but it interacts unfavorably with the weighted-average formulation of (3.5). Many methods may be applied to preserve energy in h_f ∗ i_f as (x, y) moves outside the opaque foreground region; symmetric extension of the foreground is one well-known option. Instead, this thesis poses an energy-preserving convolution (EPC) scheme (3.6), which is further developed in the Appendix. The purpose of EPC is simply to prevent the zero values surrounding an image of finite extent from bleeding into the convolution result. Intuitively, EPC modifies the convolution result to simply equal the mean (assuming a pillbox PSF) of the foreground pixels still overlapped by the filter mask. Fewer pixels are included nearer (and beyond) the occlusion, but the result is always a simple average of foreground pixels. The EPC operator is denoted as ∗_i, where i indicates the support function about which normalization is performed. Again, refer to the Appendix for a more thorough development.

$$i_c(x, y) = \alpha(x, y)\left[h_f(x, y) \ast_f i_f(x, y)\right] + \left(1 - \alpha(x, y)\right)\left[h_b(x, y) \ast i_b(x, y)\right] \qquad (3.5)$$

$$h_f(x, y) \ast_f i_f(x, y) = \left[h_f(x, y) \ast i_f(x, y)\right] / \alpha(x, y) \qquad (3.6)$$
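A MATLAB sketch of (3.3)-(3.6); variable names are illustrative, and the division implementing EPC is guarded where α = 0:

```matlab
function i_c = novel_blur(i_f, i_b, u, h_f, h_b)
    % Novel blur model, standard formulation (3.5).
    alpha = conv2(double(u), h_f, 'same');       % (3.4): alpha = h_f * u
    alpha = min(alpha, 1);                       % clip numeric overshoot

    k  = conv2(i_f, h_f, 'same');                % plain LSI convolution
    fg = zeros(size(k));
    nz = alpha > 0;
    fg(nz) = k(nz) ./ alpha(nz);                 % EPC (3.6): h_f *_f i_f

    bg  = conv2(i_b, h_b, 'same');               % background convolution
    i_c = alpha .* fg + (1 - alpha) .* bg;       % weighted sum (3.5)
end
```

Usage mirrors the MCB sketch earlier; only the occlusion map u and the two PSFs are required, with no knowledge of d_o, d_f, or d_b.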

Modification for superior performance near image boundary

The novel blur model's standard formulation (3.5), like MCB and discretized RPB, yields poor results near the periphery of i_c. Using EPC, it is possible to greatly improve the quality of i_c near its outer bounds. (Appendix A demonstrates the performance of EPC near image boundaries.)

$$\alpha(x, y) = h_f(x, y) \ast_c u(x, y) \qquad (3.7)$$

$$i_c(x, y) = \alpha(x, y)\left[h_f(x, y) \ast_f i_f(x, y)\right] + \left(1 - \alpha(x, y)\right)\left[h_b(x, y) \ast_c i_b(x, y)\right] \qquad (3.8)$$

Compared to (3.4) and (3.5), this new formulation adds two EPC operators. This increases the computational cost from four LSI convolutions to six and adds two divisions per pixel. Note that (3.8) need only be evaluated within a distance of max(r_f, r_b) from the image boundary. Even with this reduction, the extra EPCs represent a substantial increase in computation to correct an artifact addressed by neither MCB nor discretized RPB. As such, this step is omitted from the standard formulation in (3.5).

Generalization to three or more object planes

It has been assumed during derivation that the scene consists of only two objects: a foreground and a background. The model may be generalized to more complicated scenes through an iterative, back-to-front process reminiscent of layered depth image (LDI) or simple z-buffer rendering [13] (a sketch of this loop follows the list):

1. Combine the farthest image and its nearest neighbor into a composite image as in (3.5).
2. Treat the composite image as a sharply focused background image; combine with the next nearest image to generate a new composite.
3. Repeat Step 2 until all images are integrated with the overall composite.
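A hypothetical loop over an ordered set of layers (farthest first), reusing the novel_blur sketch above; note how the background PSF collapses to a delta after the first pass, which is the savings quantified next:

```matlab
% layers{n}.img  : focused image of layer n (farthest first)
% layers{n}.mask : binary occlusion map u for layer n
% layers{n}.r    : pillbox blur radius for layer n (assumed >= 1 here)
composite = layers{1}.img;
h_b = fspecial('disk', layers{1}.r);      % real background PSF, used once
for n = 2:numel(layers)
    h_f = fspecial('disk', layers{n}.r);
    composite = novel_blur(layers{n}.img, composite, ...
                           layers{n}.mask, h_f, h_b);
    h_b = 1;   % composite is now treated as focused: h_b = delta
end
```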

Note that, since each composite image is treated as a focused background image, each subsequent application uses h_b(x, y) = δ(x, y). Thus, background convolution need only be calculated in the first step. This implies that, for N > 2 objects, the overall number of convolutions is 1 + 3(N - 1), not 4(N - 1). (If enhanced performance is desired near image boundaries, the cost rises to 1 + 5(N - 1) convolutions.) Additionally, if two images share a common d_o, they may be incorporated simultaneously. It is implicitly assumed that the order of these objects is known. That is, one must know which object of any given pair is farther from the camera.

3.3 Image Synthesis

The following examples demonstrate the qualitative differences between MCB, discretized RPB, and the novel blur model developed in this paper. A series of arbitrarily blurred images are synthesized using the foreground and background images shown in Figure 3.3. The results for all three models using the simple foreground (i.e., a linear occlusion edge) are shown in Figure 3.4. The images generated using discretized RPB and the novel model appear remarkably similar. The distortion visible near the occlusion in Figures 3.4(b) and 3.4(d) is likely caused by imprecision in selecting the appropriate background filter mask. MCB fares poorly in comparison, but yields relatively good results in the case when both foreground and background are equally blurred (Figure 3.4(b)). Figure 3.4(c) demonstrates the occlusion transparency predicted by the discretized RPB and novel blur models when the foreground is defocused. This behavior will be verified experimentally later in this thesis.

Figure 3.5 shows the application of MCB and novel blur models in the presence of an arbitrary occlusion map.

Figure 3.3: Left to right: Background image, simple occlusion edge, and arbitrary occlusion edge.

(Discretized RPB is omitted due to its requirement that the occlusion edge be perfectly linear.) The novel model yields subjectively pleasing results, while MCB exhibits a familiar artifact due to overlap in its two convolutions. These results favor the new blur model presented in this paper, but it remains to be seen whether this verdict is echoed by real photographic images.

Figure 3.4: Images synthesized by three blur models. Left to right: MCB, discretized RPB, and novel blur model. (a) d_o = d_f, r_f = 0; (b) d_f < d_o < d_b, r_b = r_f; (c) d_o = d_b, r_b = 0; (d) d_o > d_b, r_f > r_b.

Figure 3.5: Left to right: Images synthesized by MCB and novel models using arbitrary occlusion edge and d_o = d_b (r_b = 0).

CHAPTER 4

APPLICATION TO REAL PHOTOGRAPHS

This paper has presented a new blur algorithm and demonstrated its ability to synthesize arbitrarily blurred images using two or more objects. While its performance is subjectively preferable to MCB and its computation significantly easier than RPB, a number of approximations were made in its derivation. First, its derivation resides wholly within the regime of ray optics; diffractive effects are neglected. Additionally, blur due to optical defocus is approximated as an LSI convolution for fixed object depth. This equivalence requires an ideal lens; it neglects the numerous aberrations common in real lenses [14]. Finally, the new blur model assumes that changes in sensor position, d_i, are negligible between differently focused images, and it includes some background pixels that would be omitted under RPB. Recall that, unlike RPB, the new blur model does not reshape the background filter mask to exclude inappropriate pixels. Rather, it simply scales the output of the entire filter mask. Given these numerous assumptions and approximations, one may rightly question the new model's validity. This chapter will verify its accuracy through real photographs of a simple, two-object scene.

Figure 4.1: Objects used in the first experiment. Left: Foreground object. Center: Background object. Right: All-in-focus image of overall scene; foreground partially transparent to show extent of occlusion.

The camera employed features a fixed focal length lens with a wide aperture (f-number: 1.2) and a pixel CMOS sensor.

4.1 Seeing Through Occlusions

The first experiment tests one important prediction of RPB: the ability to see through defocused occlusions. Two flat objects comprise the scene: the background is a dark board with two stripes (white tape) on its surface. One stripe serves as a focusing aid, while the other is wholly occluded by the foreground. The foreground consists of a dark board with a simple printed pattern to ease focusing. These objects are shown independently and superimposed in Figure 4.1. (While the foreground appears translucent to demonstrate the occlusion boundary, it is in fact wholly opaque.) This scene contains a background object, the left-hand white stripe, that is occluded yet close enough to the occlusion edge to be visible under RPB and the new blur model for moderate foreground defocus.

The experiment is simple: a photograph is taken with the background plane sharply in focus. Then, independent images of the foreground and background are used to generate synthetic images using MCB, discretized RPB, and the novel blur model. The results are then compared both subjectively and in terms of MSE.

Table 4.1: MSE and PSNR for the first experiment using three blur models.

Blur Model      | MSE | PSNR (dB)
MCB             |     |
Discretized RPB |     |
Novel           |     |

Before any blur model may be applied, it is necessary to estimate the blurring filter for foreground and background independently. Since the background is identically focused in all images, its PSF is simply the delta function (i.e., identity). The foreground PSF is determined to be a pillbox function (2.3) with r = 21.8; this radius minimizes MSE within the foreground region. The results predicted by each blur model, as well as the actual photograph, are shown in Figure 4.2. Note that MCB and the novel blur model also allow the application of a Gaussian PSF, and this opportunity has been duly investigated. However, the MSE obtained via Gaussian PSF (σ = 11.8) is actually worse than that obtained via pillbox. For this reason, and for the sake of brevity, only the results using the pillbox PSF are depicted in Figure 4.2 and tabulated in Table 4.1.
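The radius fit just described can be reproduced with a brute-force search; a sketch, assuming i_sharp (the focused foreground image), i_real (the photograph), and a logical mask fg_mask covering the interior foreground region are available:

```matlab
% Grid search for the pillbox radius minimizing MSE over the foreground.
best_r = NaN;  best_mse = Inf;
for r = 1:0.1:40
    i_hat = conv2(i_sharp, fspecial('disk', r), 'same');
    e = i_hat(fg_mask) - i_real(fg_mask);
    m = mean(e.^2);
    if m < best_mse, best_mse = m; best_r = r; end
end
fprintf('best radius: %.1f (MSE %.2f)\n', best_r, best_mse);
```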

Figure 4.2: Image and one-row intensity profile in (a) MCB, (b) discretized RPB, (c) novel model, and (d) real image.

Interestingly, the images generated by discretized RPB and by the novel blur model are nearly identical. In fact, returning to the formulation of discretized RPB, it is apparent that these models converge in the case when the background PSF is identity. In this case, only one pixel from the background image is considered at each pixel of the composite image; both models scale this pixel according to its occlusion proximity and sum it with the result of the foreground convolution (3.5). In this experiment, these models' result surpasses that of MCB, both subjectively and in terms of MSE. The MCB result shown in Figure 4.2 is the only image in which the occluded white stripe is not visible. The smaller peak visible in its one-row result (also Figure 4.2) is not due to the occluded stripe, but rather to the addition of the blurred foreground to the unblurred background. (See Figure 2.3 for another example of this phenomenon.)

This experiment validates the claim that defocus can cause occlusion transparency, as predicted in RPB. Furthermore, the novel blur model developed in this thesis is shown to accurately model physical reality.

4.2 Image Synthesis

The novel blur model has been successfully applied to a simple scene, but real objects are typically more complicated (i.e., have more spectral content) than these elementary shapes. Additionally, there are several more cases to consider: the previous experiment assumed d_o = d_b, but this is obviously not always true. Thus, the second experiment introduces two major changes. First, the background scene is replaced by a much more complicated object: a detailed photograph of sea shells. Second, real images are captured using a variety of focus settings. The foreground, background, and an all-in-focus image are shown in Figure 4.3.

The first set of trials revisits the situation in the previous experiment: d_o = d_b and r_f = 24.5 (r_b = 0). The result using all three blur models is shown in Figure 4.4. The second and third sets of trials, shown in Figures 4.5 and 4.6, respectively, represent cases in which neither object is well-focused. In Figure 4.5, d_o < d_f, and thus r_f < r_b. In Figure 4.6, the opposite is true: d_o > d_b and r_f > r_b. The MSE and PSNR in each case are summarized in Table 4.2.

Perhaps the most remarkable result is the closeness to which the novel blur model matches discretized RPB. When d_o = d_b, the models are equivalent. In the other cases, discretized RPB and the novel model achieve roughly equal results in terms of MSE and PSNR. This result is encouraging; it validates the assumptions used to express the novel blur model in terms of LSI convolution.

Figure 4.3: Objects used in the second experiment. Left: Foreground object. Center: Background object. Right: All-in-focus image of overall scene; foreground partially transparent to show extent of occlusion.

Figure 4.4: Images for d_o = d_b, r_f = 24.5, and r_b = 0 using (a) MCB, (b) discretized RPB, (c) novel model, and (d) real image.

Figure 4.5: Images for d_o < d_f, r_f = 15.6, and r_b = 36.0 using (a) MCB, (b) discretized RPB, (c) novel model, and (d) real image.

Figure 4.6: Images for d_o > d_b, r_f = 45.6, and r_b = 20.9 using (a) MCB, (b) discretized RPB, (c) novel model, and (d) real image.

Table 4.2: MSE/PSNR (dB) for the second experiment using three blur models and three focus settings.

Blur Model      | d_o = d_b | d_o < d_f | d_o > d_b
MCB             | 305.9/    | /         | /24.8
Discretized RPB | 64.9/     | /         | /29.3
Novel           | 64.9/     | /         | /29.5

Furthermore, the great difference in quality between these two models and the MCB model clearly demonstrates the value of these more advanced models.

CHAPTER 5

CONCLUSION

This thesis has presented a novel blur model capable of generating arbitrarily blurred images at variable depth. The model incorporates geometrical knowledge as in the reversed projection blur (RPB) model, but can be expressed in terms of LSI convolution, as in multicomponent blur (MCB). As seen in Chapter 4, the simplifying assumptions used in derivation do not adversely affect performance; the new model surpasses MCB and rivals a discrete approximation to RPB in terms of subjective quality and MSE.

5.1 Future Work

The novel blur model has proven well-suited to image synthesis, but it may prove applicable to additional problems. Still within the problem of image synthesis, it may be desirable to synthesize arbitrarily blurred images using two or more occlusion-containing photographs instead of prior foreground and background images [9].

Another applicable problem is occlusion mapping and removal. Recent work has been devoted to automatically removing highly defocused occlusions (e.g., a finger in front of a camera lens) without in-painting [7]. This problem relies upon the results of RPB, but is hindered by that model's complexity and inapplicability to discrete images. The attractiveness of a new model that combines RPB's power with a simple, convolutional framework is readily apprehended.

This blur model may also find application in depth from defocus (DFD). Previous efforts in DFD have presented the interaction between image segments with different object depths in a coarse, probabilistic light [2]. This model allows a more rigorous deterministic analysis, which may help in understanding the interaction between segments.

REFERENCES

[1] A. P. Pentland, "A new sense for depth of field," IEEE Trans. Pattern Anal. Machine Intell., vol. 9, no. 4, pp. 523-531, July 1987.
[2] S. Chaudhuri and A. N. Rajagopalan, Depth from Defocus. New York: Springer-Verlag, 1999.
[3] T. C. Nguyen and T. S. Huang, "Image blurring effects due to depth discontinuities: Blurring that creates emergent image details," Image Vision Comput., vol. 10, no. 10, December 1992.
[4] N. Asada, H. Fujiwara, and T. Matsuyama, "Seeing behind the scene: Analysis of photometric properties of occluding edges by the reversed projection blurring model," IEEE Trans. Pattern Anal. Machine Intell., vol. 20, no. 2, pp. 155-167, February 1998.
[5] B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics. New York: John Wiley & Sons, 1991.
[6] A. Bovik et al., Handbook of Image & Video Processing. Boston, MA: Elsevier Academic Press, 2005.
[7] S. McCloskey, M. Langer, and K. Siddiqi, "Seeing around occluding objects," in Proc. 18th International Conf. on Pattern Recog., 2006.
[8] P. Favaro and S. Soatto, "Seeing beyond occlusions," in IEEE Comp. Soc. Conf. on Comp. Vision and Pattern Recog., June 2003, vol. 2.
[9] A. Kubota and K. Aizawa, "Reconstructing arbitrarily focused images from two differently focused images using linear filters," IEEE Trans. Image Proc., vol. 14, no. 11, November 2005.
[10] Y. Chuang, B. Curless, D. Salesin, and R. Szeliski, "A Bayesian approach to digital matting," in IEEE Comp. Soc. Conf. on Comp. Vision and Pattern Recog., 2001, vol. 2.
[11] J. Sun, J. Jia, C. Tang, and H. Shum, "Poisson matting," ACM Trans. Graphics, vol. 23, no. 3, pp. 315-321, August 2004.
[12] M. McGuire, W. Matusik, H. Pfister, J. F. Hughes, and F. Durand, "Defocus video matting," ACM Trans. Graphics, vol. 24, no. 3, pp. 567-576, July 2005.

[13] J. Shade, S. Gortler, L. He, and R. Szeliski, "Layered depth images," in International Conf. on Comp. Graphics and Interactive Techniques, 1998, pp. 231-242.
[14] A. Cox, Photographic Optics. London: Focal Press.

APPENDIX A

ENERGY-PRESERVING CONVOLUTION

Convolution in 2D is problematic for images of finite support, or cropped images. Digital photographs fall within this category; a rectangular grid of sensors is typically used to digitize the optical image. As shown in Figure A.1, the coverage of the sampling grid may not equal the extent of the optical image. More generally, a digital image has some limited domain, X_i, over which it is nonzero (A.1). Standard convolution (A.2) leads to undesirable darkening near the boundaries of this domain. A simple 1D example is shown in Figure A.2. Methods used to address this problem typically take the form of filling in the space outside X_i, as in symmetric extension (mirroring). What follows is an alternative methodology that is novel to the best of the author's knowledge.

$$X_i = \operatorname{domain}(i(x, y)) \qquad (A.1)$$

$$i(x, y) \ast h(x, y) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} i(m, n) \, h(x - m, y - n) \qquad (A.2)$$

For a given image and filter mask, let k(x, y) be the result of their convolution, limited to the image's original support.

Figure A.1: Optical image and rectangular sampling grid.

Figure A.2: One-dimensional example showing the origin and extent of blur darkening near a signal's original boundaries.

$$k(x, y) = \begin{cases} i(x, y) \ast h(x, y), & (x, y) \in X_i \\ 0, & \text{else} \end{cases} \qquad (A.3)$$

Now define u(x, y) as a binary image equal to one only within the support of i(x, y), and let β(x, y) be the convolution of u and h, also limited to X_i:

$$u(x, y) = \begin{cases} 1, & (x, y) \in X_i \\ 0, & \text{else} \end{cases} \qquad (A.4)$$

$$\beta(x, y) = \begin{cases} u(x, y) \ast h(x, y), & (x, y) \in X_i \\ 0, & \text{else} \end{cases} \qquad (A.5)$$

It is readily apparent that the value of β(x, y) is equal to the summation of filter taps in h that fall within X_i when the convolution is evaluated at location (x, y). In other words, β(x, y) is the weight given to i by the convolution. For an energy-preserving, positive filter (i.e., Σ_{x,y} h(x, y) = 1 and h(x, y) ≥ 0 for all (x, y)), β(x, y) is the fraction of the filter mask that is applied to the image. For interior values of (x, y), β = 1, while for (x, y) nearer the limit of support, β ∈ (0, 1). Note that β cannot equal zero within X_i for positive-valued h.

If h(x, y) is constrained to be energy-preserving and positive-valued, the convolution k(x, y) is a weighted average of pixels in i(x, y), with weights given by the filter taps in h. The fraction β(x, y) is then simply the weight given to the image itself; (1 - β(x, y)) is the weight given to the image's zero padding (A.6). In other words, darkening is caused by assigning filter weight to zero, and it can be undone by increasing the weight given to k(x, y) to one. Since β ∈ (0, 1] within and near X_i, the zero-weighting can be removed by simple division (A.7).

$$k(x, y) = \beta(x, y) \, k'(x, y) + (1 - \beta(x, y)) \cdot 0 \qquad (A.6)$$

$$k'(x, y) = \frac{k(x, y)}{\beta(x, y)} \qquad (A.7)$$
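A MATLAB sketch of (A.3)-(A.7), assuming that zero marks out-of-support pixels and that h is energy-preserving and nonnegative:

```matlab
function k_prime = epc(i, h)
    % Energy-preserving convolution: renormalize a zero-padded
    % convolution so the support boundary is not darkened.
    u    = double(i ~= 0);             % support indicator (A.4)
    k    = conv2(i, h, 'same');        % zero-padded convolution
    beta = conv2(u, h, 'same');        % weight given to the image (A.5)
    k_prime = zeros(size(k));
    nz = beta > 0;
    k_prime(nz) = k(nz) ./ beta(nz);   % undo zero-weighting (A.7)
end
```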

Figure A.3: Left to right: Unblurred image, image blurred with zero-padding, and image blurred with zero-padding and EPC.

The quantity k'(x, y) is simply the convolution of h and i, adaptively reweighted to prevent zero-padding from bleeding into X_i. For this reason, the process is termed energy-preserving convolution (EPC). Note that the name EPC refers to the convolution-weighting scheme, not to the energy-preserving nature of the filter h(x, y). When h is a PSF, EPC produces a blurred image that does not visibly darken near its extremities (Figure A.3). This is a vital component of the new blurring model developed in this thesis. The notation in (A.8) is used to denote EPC; the subscript i indicates that u(x, y) is one within the domain X_i and zero elsewhere.

$$h(x, y) \ast_i i(x, y) = k'(x, y) \qquad (A.8)$$

The EPC k'(x, y) may also be computed for (x, y) outside X_i if k(x, y) is not support-limited as in (A.3), but care must be exercised as β(x, y) may be zero-valued. In this case, simply set β(x, y) = 1 and k'(x, y) = k(x, y) = 0. This idea is used in the novel blur method presented in (3.5); it allows EPC to extend the blurred image a distance equal to the radius of the PSF without darkening.


More information

The popular conception of physics

The popular conception of physics 54 Teaching Physics: Inquiry and the Ray Model of Light Fernand Brunschwig, M.A.T. Program, Hudson Valley Center My thinking about these matters was stimulated by my participation on a panel devoted to

More information

Declaration. Michal Šorel March 2007

Declaration. Michal Šorel March 2007 Charles University in Prague Faculty of Mathematics and Physics Multichannel Blind Restoration of Images with Space-Variant Degradations Ph.D. Thesis Michal Šorel March 2007 Department of Software Engineering

More information

Intorduction to light sources, pinhole cameras, and lenses

Intorduction to light sources, pinhole cameras, and lenses Intorduction to light sources, pinhole cameras, and lenses Erik G. Learned-Miller Department of Computer Science University of Massachusetts, Amherst Amherst, MA 01003 October 26, 2011 Abstract 1 1 Analyzing

More information

THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS

THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS 1 LUOYU ZHOU 1 College of Electronics and Information Engineering, Yangtze University, Jingzhou, Hubei 43423, China E-mail: 1 luoyuzh@yangtzeu.edu.cn

More information

Spherical Mirrors. Concave Mirror, Notation. Spherical Aberration. Image Formed by a Concave Mirror. Image Formed by a Concave Mirror 4/11/2014

Spherical Mirrors. Concave Mirror, Notation. Spherical Aberration. Image Formed by a Concave Mirror. Image Formed by a Concave Mirror 4/11/2014 Notation for Mirrors and Lenses Chapter 23 Mirrors and Lenses The object distance is the distance from the object to the mirror or lens Denoted by p The image distance is the distance from the image to

More information

PHYSICS FOR THE IB DIPLOMA CAMBRIDGE UNIVERSITY PRESS

PHYSICS FOR THE IB DIPLOMA CAMBRIDGE UNIVERSITY PRESS Option C Imaging C Introduction to imaging Learning objectives In this section we discuss the formation of images by lenses and mirrors. We will learn how to construct images graphically as well as algebraically.

More information

Image Formation and Capture

Image Formation and Capture Figure credits: B. Curless, E. Hecht, W.J. Smith, B.K.P. Horn, A. Theuwissen, and J. Malik Image Formation and Capture COS 429: Computer Vision Image Formation and Capture Real world Optics Sensor Devices

More information

Chapter 23. Mirrors and Lenses

Chapter 23. Mirrors and Lenses Chapter 23 Mirrors and Lenses Mirrors and Lenses The development of mirrors and lenses aided the progress of science. It led to the microscopes and telescopes. Allowed the study of objects from microbes

More information

EFFECTS OF PHASE AND AMPLITUDE ERRORS ON QAM SYSTEMS WITH ERROR- CONTROL CODING AND SOFT DECISION DECODING

EFFECTS OF PHASE AND AMPLITUDE ERRORS ON QAM SYSTEMS WITH ERROR- CONTROL CODING AND SOFT DECISION DECODING Clemson University TigerPrints All Theses Theses 8-2009 EFFECTS OF PHASE AND AMPLITUDE ERRORS ON QAM SYSTEMS WITH ERROR- CONTROL CODING AND SOFT DECISION DECODING Jason Ellis Clemson University, jellis@clemson.edu

More information

Photo-Consistent Motion Blur Modeling for Realistic Image Synthesis

Photo-Consistent Motion Blur Modeling for Realistic Image Synthesis Photo-Consistent Motion Blur Modeling for Realistic Image Synthesis Huei-Yung Lin and Chia-Hong Chang Department of Electrical Engineering, National Chung Cheng University, 168 University Rd., Min-Hsiung

More information

Chapter 2 Fourier Integral Representation of an Optical Image

Chapter 2 Fourier Integral Representation of an Optical Image Chapter 2 Fourier Integral Representation of an Optical This chapter describes optical transfer functions. The concepts of linearity and shift invariance were introduced in Chapter 1. This chapter continues

More information

Chapter 23. Mirrors and Lenses

Chapter 23. Mirrors and Lenses Chapter 23 Mirrors and Lenses Notation for Mirrors and Lenses The object distance is the distance from the object to the mirror or lens Denoted by p The image distance is the distance from the image to

More information

Geometrical Optics. Have you ever entered an unfamiliar room in which one wall was covered with a

Geometrical Optics. Have you ever entered an unfamiliar room in which one wall was covered with a Return to Table of Contents HAPTER24 C. Geometrical Optics A mirror now used in the Hubble space telescope Have you ever entered an unfamiliar room in which one wall was covered with a mirror and thought

More information

4.5 Fractional Delay Operations with Allpass Filters

4.5 Fractional Delay Operations with Allpass Filters 158 Discrete-Time Modeling of Acoustic Tubes Using Fractional Delay Filters 4.5 Fractional Delay Operations with Allpass Filters The previous sections of this chapter have concentrated on the FIR implementation

More information

Image Formation. Light from distant things. Geometrical optics. Pinhole camera. Chapter 36

Image Formation. Light from distant things. Geometrical optics. Pinhole camera. Chapter 36 Light from distant things Chapter 36 We learn about a distant thing from the light it generates or redirects. The lenses in our eyes create images of objects our brains can process. This chapter concerns

More information

Restoration of Motion Blurred Document Images

Restoration of Motion Blurred Document Images Restoration of Motion Blurred Document Images Bolan Su 12, Shijian Lu 2 and Tan Chew Lim 1 1 Department of Computer Science,School of Computing,National University of Singapore Computing 1, 13 Computing

More information

Simulated Programmable Apertures with Lytro

Simulated Programmable Apertures with Lytro Simulated Programmable Apertures with Lytro Yangyang Yu Stanford University yyu10@stanford.edu Abstract This paper presents a simulation method using the commercial light field camera Lytro, which allows

More information

BROADCAST ENGINEERING 5/05 WHITE PAPER TUTORIAL. HEADLINE: HDTV Lens Design: Management of Light Transmission

BROADCAST ENGINEERING 5/05 WHITE PAPER TUTORIAL. HEADLINE: HDTV Lens Design: Management of Light Transmission BROADCAST ENGINEERING 5/05 WHITE PAPER TUTORIAL HEADLINE: HDTV Lens Design: Management of Light Transmission By Larry Thorpe and Gordon Tubbs Broadcast engineers have a comfortable familiarity with electronic

More information

Defocus Map Estimation from a Single Image

Defocus Map Estimation from a Single Image Defocus Map Estimation from a Single Image Shaojie Zhuo Terence Sim School of Computing, National University of Singapore, Computing 1, 13 Computing Drive, Singapore 117417, SINGAPOUR Abstract In this

More information

Introduction. Chapter Time-Varying Signals

Introduction. Chapter Time-Varying Signals Chapter 1 1.1 Time-Varying Signals Time-varying signals are commonly observed in the laboratory as well as many other applied settings. Consider, for example, the voltage level that is present at a specific

More information

Sharpness, Resolution and Interpolation

Sharpness, Resolution and Interpolation Sharpness, Resolution and Interpolation Introduction There are a lot of misconceptions about resolution, camera pixel count, interpolation and their effect on astronomical images. Some of the confusion

More information

Notation for Mirrors and Lenses. Chapter 23. Types of Images for Mirrors and Lenses. More About Images

Notation for Mirrors and Lenses. Chapter 23. Types of Images for Mirrors and Lenses. More About Images Notation for Mirrors and Lenses Chapter 23 Mirrors and Lenses Sections: 4, 6 Problems:, 8, 2, 25, 27, 32 The object distance is the distance from the object to the mirror or lens Denoted by p The image

More information

DIGITAL IMAGE PROCESSING Quiz exercises preparation for the midterm exam

DIGITAL IMAGE PROCESSING Quiz exercises preparation for the midterm exam DIGITAL IMAGE PROCESSING Quiz exercises preparation for the midterm exam In the following set of questions, there are, possibly, multiple correct answers (1, 2, 3 or 4). Mark the answers you consider correct.

More information

Chapter 17. Shape-Based Operations

Chapter 17. Shape-Based Operations Chapter 17 Shape-Based Operations An shape-based operation identifies or acts on groups of pixels that belong to the same object or image component. We have already seen how components may be identified

More information

Developing the Model

Developing the Model Team # 9866 Page 1 of 10 Radio Riot Introduction In this paper we present our solution to the 2011 MCM problem B. The problem pertains to finding the minimum number of very high frequency (VHF) radio repeaters

More information

Section 2 concludes that a glare meter based on a digital camera is probably too expensive to develop and produce, and may not be simple in use.

Section 2 concludes that a glare meter based on a digital camera is probably too expensive to develop and produce, and may not be simple in use. Possible development of a simple glare meter Kai Sørensen, 17 September 2012 Introduction, summary and conclusion Disability glare is sometimes a problem in road traffic situations such as: - at road works

More information

10.2 Images Formed by Lenses SUMMARY. Refraction in Lenses. Section 10.1 Questions

10.2 Images Formed by Lenses SUMMARY. Refraction in Lenses. Section 10.1 Questions 10.2 SUMMARY Refraction in Lenses Converging lenses bring parallel rays together after they are refracted. Diverging lenses cause parallel rays to move apart after they are refracted. Rays are refracted

More information

Topic 6 - Optics Depth of Field and Circle Of Confusion

Topic 6 - Optics Depth of Field and Circle Of Confusion Topic 6 - Optics Depth of Field and Circle Of Confusion Learning Outcomes In this lesson, we will learn all about depth of field and a concept known as the Circle of Confusion. By the end of this lesson,

More information

8.2 IMAGE PROCESSING VERSUS IMAGE ANALYSIS Image processing: The collection of routines and

8.2 IMAGE PROCESSING VERSUS IMAGE ANALYSIS Image processing: The collection of routines and 8.1 INTRODUCTION In this chapter, we will study and discuss some fundamental techniques for image processing and image analysis, with a few examples of routines developed for certain purposes. 8.2 IMAGE

More information

Mutually Optimizing Resolution Enhancement Techniques: Illumination, APSM, Assist Feature OPC, and Gray Bars

Mutually Optimizing Resolution Enhancement Techniques: Illumination, APSM, Assist Feature OPC, and Gray Bars Mutually Optimizing Resolution Enhancement Techniques: Illumination, APSM, Assist Feature OPC, and Gray Bars Bruce W. Smith Rochester Institute of Technology, Microelectronic Engineering Department, 82

More information

IMAGE ENHANCEMENT IN SPATIAL DOMAIN

IMAGE ENHANCEMENT IN SPATIAL DOMAIN A First Course in Machine Vision IMAGE ENHANCEMENT IN SPATIAL DOMAIN By: Ehsan Khoramshahi Definitions The principal objective of enhancement is to process an image so that the result is more suitable

More information

This experiment is under development and thus we appreciate any and all comments as we design an interesting and achievable set of goals.

This experiment is under development and thus we appreciate any and all comments as we design an interesting and achievable set of goals. Experiment 7 Geometrical Optics You will be introduced to ray optics and image formation in this experiment. We will use the optical rail, lenses, and the camera body to quantify image formation and magnification;

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION SUPPLEMENTARY INFORMATION doi:0.038/nature727 Table of Contents S. Power and Phase Management in the Nanophotonic Phased Array 3 S.2 Nanoantenna Design 6 S.3 Synthesis of Large-Scale Nanophotonic Phased

More information

Image Enhancement using Histogram Equalization and Spatial Filtering

Image Enhancement using Histogram Equalization and Spatial Filtering Image Enhancement using Histogram Equalization and Spatial Filtering Fari Muhammad Abubakar 1 1 Department of Electronics Engineering Tianjin University of Technology and Education (TUTE) Tianjin, P.R.

More information

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES 4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES Abstract: This paper attempts to undertake the study of deblurring techniques for Restored Motion Blurred Images by using: Wiener filter,

More information

Optical transfer function shaping and depth of focus by using a phase only filter

Optical transfer function shaping and depth of focus by using a phase only filter Optical transfer function shaping and depth of focus by using a phase only filter Dina Elkind, Zeev Zalevsky, Uriel Levy, and David Mendlovic The design of a desired optical transfer function OTF is a

More information

Computer Generated Holograms for Testing Optical Elements

Computer Generated Holograms for Testing Optical Elements Reprinted from APPLIED OPTICS, Vol. 10, page 619. March 1971 Copyright 1971 by the Optical Society of America and reprinted by permission of the copyright owner Computer Generated Holograms for Testing

More information

Deconvolution , , Computational Photography Fall 2018, Lecture 12

Deconvolution , , Computational Photography Fall 2018, Lecture 12 Deconvolution http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 12 Course announcements Homework 3 is out. - Due October 12 th. - Any questions?

More information

FIBER OPTICS. Prof. R.K. Shevgaonkar. Department of Electrical Engineering. Indian Institute of Technology, Bombay. Lecture: 4

FIBER OPTICS. Prof. R.K. Shevgaonkar. Department of Electrical Engineering. Indian Institute of Technology, Bombay. Lecture: 4 FIBER OPTICS Prof. R.K. Shevgaonkar Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture: 4 Modal Propagation of Light in an Optical Fiber Fiber Optics, Prof. R.K. Shevgaonkar,

More information

INTRODUCTION THIN LENSES. Introduction. given by the paraxial refraction equation derived last lecture: Thin lenses (19.1) = 1. Double-lens systems

INTRODUCTION THIN LENSES. Introduction. given by the paraxial refraction equation derived last lecture: Thin lenses (19.1) = 1. Double-lens systems Chapter 9 OPTICAL INSTRUMENTS Introduction Thin lenses Double-lens systems Aberrations Camera Human eye Compound microscope Summary INTRODUCTION Knowledge of geometrical optics, diffraction and interference,

More information

Chapter 23. Mirrors and Lenses

Chapter 23. Mirrors and Lenses Chapter 23 Mirrors and Lenses Notation for Mirrors and Lenses The object distance is the distance from the object to the mirror or lens Denoted by p The image distance is the distance from the image to

More information

High Dynamic Range (HDR) Photography in Photoshop CS2

High Dynamic Range (HDR) Photography in Photoshop CS2 Page 1 of 7 High dynamic range (HDR) images enable photographers to record a greater range of tonal detail than a given camera could capture in a single photo. This opens up a whole new set of lighting

More information

ABC Math Student Copy. N. May ABC Math Student Copy. Physics Week 13(Sem. 2) Name. Light Chapter Summary Cont d 2

ABC Math Student Copy. N. May ABC Math Student Copy. Physics Week 13(Sem. 2) Name. Light Chapter Summary Cont d 2 Page 1 of 12 Physics Week 13(Sem. 2) Name Light Chapter Summary Cont d 2 Lens Abberation Lenses can have two types of abberation, spherical and chromic. Abberation occurs when the rays forming an image

More information

2.1 BASIC CONCEPTS Basic Operations on Signals Time Shifting. Figure 2.2 Time shifting of a signal. Time Reversal.

2.1 BASIC CONCEPTS Basic Operations on Signals Time Shifting. Figure 2.2 Time shifting of a signal. Time Reversal. 1 2.1 BASIC CONCEPTS 2.1.1 Basic Operations on Signals Time Shifting. Figure 2.2 Time shifting of a signal. Time Reversal. 2 Time Scaling. Figure 2.4 Time scaling of a signal. 2.1.2 Classification of Signals

More information

Optical Performance of Nikon F-Mount Lenses. Landon Carter May 11, Measurement and Instrumentation

Optical Performance of Nikon F-Mount Lenses. Landon Carter May 11, Measurement and Instrumentation Optical Performance of Nikon F-Mount Lenses Landon Carter May 11, 2016 2.671 Measurement and Instrumentation Abstract In photographic systems, lenses are one of the most important pieces of the system

More information

Chapter 34. Images. Copyright 2014 John Wiley & Sons, Inc. All rights reserved.

Chapter 34. Images. Copyright 2014 John Wiley & Sons, Inc. All rights reserved. Chapter 34 Images Copyright 34-1 Images and Plane Mirrors Learning Objectives 34.01 Distinguish virtual images from real images. 34.02 Explain the common roadway mirage. 34.03 Sketch a ray diagram for

More information

Appendix III Graphs in the Introductory Physics Laboratory

Appendix III Graphs in the Introductory Physics Laboratory Appendix III Graphs in the Introductory Physics Laboratory 1. Introduction One of the purposes of the introductory physics laboratory is to train the student in the presentation and analysis of experimental

More information

Nonuniform multi level crossing for signal reconstruction

Nonuniform multi level crossing for signal reconstruction 6 Nonuniform multi level crossing for signal reconstruction 6.1 Introduction In recent years, there has been considerable interest in level crossing algorithms for sampling continuous time signals. Driven

More information

Thin Lenses * OpenStax

Thin Lenses * OpenStax OpenStax-CNX module: m58530 Thin Lenses * OpenStax This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 4.0 By the end of this section, you will be able to:

More information

Image Processing by Bilateral Filtering Method

Image Processing by Bilateral Filtering Method ABHIYANTRIKI An International Journal of Engineering & Technology (A Peer Reviewed & Indexed Journal) Vol. 3, No. 4 (April, 2016) http://www.aijet.in/ eissn: 2394-627X Image Processing by Bilateral Image

More information

The Noise about Noise

The Noise about Noise The Noise about Noise I have found that few topics in astrophotography cause as much confusion as noise and proper exposure. In this column I will attempt to present some of the theory that goes into determining

More information

Coded Computational Photography!

Coded Computational Photography! Coded Computational Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 9! Gordon Wetzstein! Stanford University! Coded Computational Photography - Overview!!

More information

Removing Temporal Stationary Blur in Route Panoramas

Removing Temporal Stationary Blur in Route Panoramas Removing Temporal Stationary Blur in Route Panoramas Jiang Yu Zheng and Min Shi Indiana University Purdue University Indianapolis jzheng@cs.iupui.edu Abstract The Route Panorama is a continuous, compact

More information

Single Digital Image Multi-focusing Using Point to Point Blur Model Based Depth Estimation

Single Digital Image Multi-focusing Using Point to Point Blur Model Based Depth Estimation Single Digital mage Multi-focusing Using Point to Point Blur Model Based Depth Estimation Praveen S S, Aparna P R Abstract The proposed paper focuses on Multi-focusing, a technique that restores all-focused

More information

Copyright 1997 by the Society of Photo-Optical Instrumentation Engineers.

Copyright 1997 by the Society of Photo-Optical Instrumentation Engineers. Copyright 1997 by the Society of Photo-Optical Instrumentation Engineers. This paper was published in the proceedings of Microlithographic Techniques in IC Fabrication, SPIE Vol. 3183, pp. 14-27. It is

More information

GEOMETRICAL OPTICS Practical 1. Part I. BASIC ELEMENTS AND METHODS FOR CHARACTERIZATION OF OPTICAL SYSTEMS

GEOMETRICAL OPTICS Practical 1. Part I. BASIC ELEMENTS AND METHODS FOR CHARACTERIZATION OF OPTICAL SYSTEMS GEOMETRICAL OPTICS Practical 1. Part I. BASIC ELEMENTS AND METHODS FOR CHARACTERIZATION OF OPTICAL SYSTEMS Equipment and accessories: an optical bench with a scale, an incandescent lamp, matte, a set of

More information

Image Quality Assessment for Defocused Blur Images

Image Quality Assessment for Defocused Blur Images American Journal of Signal Processing 015, 5(3): 51-55 DOI: 10.593/j.ajsp.0150503.01 Image Quality Assessment for Defocused Blur Images Fatin E. M. Al-Obaidi Department of Physics, College of Science,

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION A full-parameter unidirectional metamaterial cloak for microwaves Bilinear Transformations Figure 1 Graphical depiction of the bilinear transformation and derived material parameters. (a) The transformation

More information

ME scope Application Note 01 The FFT, Leakage, and Windowing

ME scope Application Note 01 The FFT, Leakage, and Windowing INTRODUCTION ME scope Application Note 01 The FFT, Leakage, and Windowing NOTE: The steps in this Application Note can be duplicated using any Package that includes the VES-3600 Advanced Signal Processing

More information

Why learn about photography in this course?

Why learn about photography in this course? Why learn about photography in this course? Geri's Game: Note the background is blurred. - photography: model of image formation - Many computer graphics methods use existing photographs e.g. texture &

More information

Tangents. The f-stops here. Shedding some light on the f-number. by Marcus R. Hatch and David E. Stoltzmann

Tangents. The f-stops here. Shedding some light on the f-number. by Marcus R. Hatch and David E. Stoltzmann Tangents Shedding some light on the f-number The f-stops here by Marcus R. Hatch and David E. Stoltzmann The f-number has peen around for nearly a century now, and it is certainly one of the fundamental

More information

Wallace and Dadda Multipliers. Implemented Using Carry Lookahead. Adders

Wallace and Dadda Multipliers. Implemented Using Carry Lookahead. Adders The report committee for Wesley Donald Chu Certifies that this is the approved version of the following report: Wallace and Dadda Multipliers Implemented Using Carry Lookahead Adders APPROVED BY SUPERVISING

More information

Segmentation using Saturation Thresholding and its Application in Content-Based Retrieval of Images

Segmentation using Saturation Thresholding and its Application in Content-Based Retrieval of Images Segmentation using Saturation Thresholding and its Application in Content-Based Retrieval of Images A. Vadivel 1, M. Mohan 1, Shamik Sural 2 and A.K.Majumdar 1 1 Department of Computer Science and Engineering,

More information

Extended depth-of-field in Integral Imaging by depth-dependent deconvolution

Extended depth-of-field in Integral Imaging by depth-dependent deconvolution Extended depth-of-field in Integral Imaging by depth-dependent deconvolution H. Navarro* 1, G. Saavedra 1, M. Martinez-Corral 1, M. Sjöström 2, R. Olsson 2, 1 Dept. of Optics, Univ. of Valencia, E-46100,

More information

Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors

Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors In: M.H. Hamza (ed.), Proceedings of the 21st IASTED Conference on Applied Informatics, pp. 1278-128. Held February, 1-1, 2, Insbruck, Austria Evolving High-Dimensional, Adaptive Camera-Based Speed Sensors

More information

6.A44 Computational Photography

6.A44 Computational Photography Add date: Friday 6.A44 Computational Photography Depth of Field Frédo Durand We allow for some tolerance What happens when we close the aperture by two stop? Aperture diameter is divided by two is doubled

More information

lecture 24 image capture - photography: model of image formation - image blur - camera settings (f-number, shutter speed) - exposure - camera response

lecture 24 image capture - photography: model of image formation - image blur - camera settings (f-number, shutter speed) - exposure - camera response lecture 24 image capture - photography: model of image formation - image blur - camera settings (f-number, shutter speed) - exposure - camera response - application: high dynamic range imaging Why learn

More information

Diffraction. Interference with more than 2 beams. Diffraction gratings. Diffraction by an aperture. Diffraction of a laser beam

Diffraction. Interference with more than 2 beams. Diffraction gratings. Diffraction by an aperture. Diffraction of a laser beam Diffraction Interference with more than 2 beams 3, 4, 5 beams Large number of beams Diffraction gratings Equation Uses Diffraction by an aperture Huygen s principle again, Fresnel zones, Arago s spot Qualitative

More information