Space-Variant Approaches to Recovery of Depth from Defocused Images


COMPUTER VISION AND IMAGE UNDERSTANDING, Vol. 68, No. 3, December 1997, pp. 309–329

A. N. Rajagopalan and S. Chaudhuri*

Department of Electrical Engineering, Indian Institute of Technology, Bombay, India

Received September 25, 1995; accepted July 15, 1996

*To whom all correspondence should be addressed. E-mail: sc@ee.iitb.ernet.in.

The recovery of depth from defocused images involves calculating the depth of various points in a scene by modeling the effect that the focal parameters of the camera have on images acquired with a small depth of field. In the approach to depth from defocus (DFD), previous methods assume the depth to be constant over fairly large local regions and estimate the depth through inverse filtering by considering the system to be shift-invariant over those local regions. But a subimage, when analyzed in isolation, introduces errors in the estimate of the depth. In this paper, we propose two new approaches for estimating the depth from defocused images. The first approach proposed here models the DFD system as a block shift-variant one and incorporates the interaction of blur among neighboring subimages in an attempt to improve the estimate of the depth. The second approach looks at the depth from defocus problem in the space-frequency representation framework. In particular, the complex spectrogram and the Wigner distribution are shown to be likely candidates for recovering the depth from defocused images. The performances of the proposed methods are tested on both synthetic and real images. The proposed methods yield good results, and the quality of the estimates obtained using these methods is compared with the existing method. © 1997 Academic Press

1. INTRODUCTION

The image of a scene formed by an optical system, such as a lens, contains both photometric and geometric information about the scene. Brightness or radiance and chromaticity of objects in the scene are part of the photometric information, whereas distance and shape are parts of the geometric information. Recovering the depth of an object from a set of images sensed by a camera is an important problem in computer vision. It is a useful cue, for instance, for the purpose of navigation, object recognition, and scene interpretation [1]. Various methods such as stereo, depth from focus or defocus, structure from motion, and shape from shading have been proposed in the literature for recovering the depth [2].

The problem addressed in this paper is as follows: given a scene with object points at unknown depths from the camera, obtain the distances using the depth from defocus (DFD) technique. Depth from defocus is a passive ranging method that uses a single camera. It is a generalized version of the depth from focusing method [3] in the sense that it is not required to focus an object in order to find its depth. Instead, two images of the object (which may or may not be focused), acquired with different but known camera parameter settings, are processed to determine the depth. In comparison with stereo vision and structure from motion methods, the problem of feature correspondence does not arise in DFD. This is an important advantage, as the task of setting up the correspondence is usually a difficult one. The main problems associated with DFD are the requirement of proper modeling of defocusing in terms of the camera parameters and the need for precise camera calibration.

Pentland [4, 5] was perhaps the first person to investigate the DFD problem. His method was based on comparing two images, one formed with a very small (pinhole) aperture and the other formed with a normal aperture. The point spread function (PSF) of the blur due to camera parameters was approximated by a circularly symmetric 2D Gaussian function. The defocus operator was recovered through deconvolution in the frequency domain. Since his initial investigations, a wide variety of related techniques have been developed. In [6], Subbarao proposed a more general method in which he removed the constraint of one image being formed with a pinhole aperture and allowed several camera parameters to be varied simultaneously. A Gaussian defocus operator is assumed, and the blur is recovered in the frequency domain through inverse filtering. A rotationally symmetric circular defocus model instead of a Gaussian blur is assumed in [7]. In [8], Pentland et al. perform an inverse filtering in the spatial domain using Parseval's theorem. Hwang et al. estimate the blur parameter using a differential algorithm in the spatial domain [9].

Subbarao et al. have proposed a scheme that collapses the 2D image into a 1D image sequence and estimates the depth using the Fourier coefficients of the 1D sequence [10]. Bove poses the problem as one of signal estimation in which, given noisy observations of regions of two images, one attempts to recover a good estimate of the parameters of the defocusing process responsible for the structured differences between the two images [11]. Ens and Lawrence formulate the problem as a regularized deconvolution problem and argue that this can lead to greater accuracy [12]. They consider different sizes of the windows in the two defocused images while obtaining the solution. Surya and Subbarao have proposed a spatial domain approach in [13]. The method approximates the image over a local region by a cubic polynomial and recovers the depth using the S transform. Pentland et al. introduce the notion of active depth from defocus [14], in which a known pattern of light is projected onto the scene and the apparent blurring of the pattern is measured to estimate the depth.

In all of the above methods, the depth is assumed to be constant over a large local region (about … pixels), and the depth is estimated by considering the system to be shift-invariant over that local region. But splitting an image into independent subimages introduces errors due to improper selection of the boundary [6]. An image region cannot be analyzed in isolation because, due to blurring, the intensity at the border of the region is affected by the intensity immediately outside the region. This is also called the image overlap problem [6], because the intensity distributions produced by adjacent patches of visible surfaces in the scene overlap on the image detector. As will be shown, the DFD system is actually a linear but shift-variant system. Hence, the problem of estimating the blur parameter is one of space-variant (SV) filtering.

In this paper, we propose two new approaches for estimating the depth from defocused images. The first approach, called the block shift-variant blur model, attempts to tackle the problem of overlap in PSF over neighboring regions. This model approximates the shift-variant system by a block shift-variant system. The image is split into a number of blocks of small dimensions (about … pixels) within which the blur is assumed to be constant. However, the blurring parameter is allowed to change over neighboring blocks. In the second approach, we look at the DFD problem in the space-frequency representation (SFR) framework, since the problem of estimating the blur parameter is one of space-variant filtering. This difference in outlook on the DFD problem enables us to work in a more generalized setup than others have. First, in this framework, we are able to derive the relationship governing the two defocused images in terms of a space-variant filter. It will be shown that the nature of the resultant filter (e.g., whether it is linear or non-linear) depends on the choice of the particular SFR. Though numerous SFRs have been discussed in the literature [15], we shall consider only the complex spectrogram (CS) and the Wigner distribution (WD) here. It may be mentioned here that Gonzalo et al. have used the Wigner distribution for space-variant filtering, but there was no attempt to identify the blur parameter [16]. Due to the nature of the SFR, the estimate of the depth is now available at all points in the image.
We shall show that the conventional method of estimating the depth from defocus (such as that of Subbarao [6]) is just a special case of both the approaches to be presented here.

The paper is organized as follows. The theoretical basis of DFD is briefly presented in Section 2. In Section 3, we discuss the block shift-variant blur model for estimating the depth. The concept of space-frequency representation is discussed in Section 4. The estimation of the blur parameter using the complex spectrogram and the Wigner distribution is described in Section 5. Simulations and experimental results with real images are presented in Section 6. Section 7 concludes the paper.

2. MODELING THE DEFOCUS

Fundamental to the concept of recovering the depth by defocus is the relationship between the focused and the defocused images of a scene. In this section, we briefly review the image formation process and describe defocused images as blurred versions of the focused one. A detailed discussion of this topic can be found in [17].

FIG. 1. Geometry of the image formation process.

Figure 1 shows the geometry of the basic image formation process. When a point light source is in focus, all light rays that are radiated by the object point and intercepted by the lens are refracted by the lens to converge at a point on the image plane. For a thin lens, the relationship between the object distance $D$, the focal length $F_l$, and the image plane-to-lens distance $v$ is given by the lens law $1/D + 1/v = 1/F_l$.

FIG. 2. An example of images defocused due to different space-varying blurs: (a) original image and (b, c) defocused images obtained using the smoothly varying blurs $\sigma_1(i, j)$ and $\sigma_2(i, j) = 2\sigma_1(i, j)$, respectively.

Each point on the object plane is projected onto a single point on the image plane, thus causing a focused image $f(x, y)$ to be formed on the image plane. When the point light source is not in focus, its image on the image plane is not a point but a circular patch¹ of radius $r_b$. If the distance between the lens and the image plane is $v_0$ and the lens aperture is $r_0$, then from the lens geometry given in Fig. 1, one can show [6] that

$$r_b = r_0 v_0 \left( \frac{1}{F_l} - \frac{1}{v_0} - \frac{1}{D} \right).$$

It is possible to convince oneself that the radius $r_b$ of the patch is independent of the location of the point on the object plane. For the discrete image, the blur radius (also called the blur parameter) $\sigma$ in pixels is given by $\sigma = \rho r_b$, where $\rho > 0$ is a constant that depends on the particulars of the optics and the sampling resolution and has to be determined initially by an appropriate calibration procedure.

¹ The shape of the patch also depends on the aperture of the imaging system. We assume the aperture to be circular.
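As a numerical illustration of this geometry, the following sketch evaluates the blur circle and the blur parameter for a few depths. All values here (aperture radius, calibration constant $\rho$, focusing distance) are assumptions chosen for the example, not settings from the paper; only the 2.5 cm focal length echoes Section 6.

```python
# A minimal numerical sketch of the thin-lens defocus geometry.

def blur_radius(D, F_l, v0, r0):
    """Blur-circle radius r_b = r0 * v0 * (1/F_l - 1/v0 - 1/D) for an object
    at depth D; a negative value just means the rays converge behind the
    image plane, so only the magnitude matters for the blur size."""
    return r0 * v0 * (1.0 / F_l - 1.0 / v0 - 1.0 / D)

def blur_parameter(D, F_l, v0, r0, rho):
    """Blur parameter sigma = rho * |r_b| in pixels; rho is the
    camera/sampling calibration constant mentioned in the text."""
    return rho * abs(blur_radius(D, F_l, v0, r0))

if __name__ == "__main__":
    F_l = 0.025                               # 2.5 cm focal length
    D_focus = 1.0                             # assumed focusing distance (m)
    v0 = 1.0 / (1.0 / F_l - 1.0 / D_focus)    # lens law: 1/D + 1/v = 1/F_l
    r0, rho = 0.003, 30000.0                  # assumed aperture (m), pixels per meter
    for D in (0.70, 1.00, 1.25):
        print(f"D = {D:.2f} m  sigma = {blur_parameter(D, F_l, v0, r0, rho):.3f} px")
```

A point at the focusing distance gives $\sigma = 0$, and $\sigma$ grows as the object moves away from it in either direction, which is why two images with different settings are needed to disambiguate the depth.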

FIG. 3. (a) True values of $u(i, j)$ for the blur functions used in Fig. 2. Estimates of $u(i, j)$ using (b) the existing method, (c) the BSV model, (d) the CS method with a square window, (e) the CS method with a Gaussian window, and (f) the PWD method with a Gaussian window.

Thus $\sigma$ is a function of the depth at a point. For two different lens settings, we get

$$\sigma_m = \rho\, r_m = \rho\, r_0 v_m \left( \frac{1}{F_{l_m}} - \frac{1}{v_m} - \frac{1}{D} \right), \quad m = 1, 2. \tag{1}$$

By eliminating $D$ from (1), one can show that $\sigma_2 = \alpha \sigma_1 + \beta$, where $\alpha$ and $\beta$ are constants that depend on the camera parameter settings [6].

The distribution of light energy over the patch, or the blurring function, can be accurately modeled using physical optics [17]. For a diffraction-limited lens system, the PSF of the camera system may be approximately modeled [1, 6, 18, 19] as a circularly symmetric 2D Gaussian function

$$h(i, j) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{i^2 + j^2}{2\sigma^2} \right). \tag{2}$$

The blurred image of a point is the convolution of its focused image and the PSF of the system corresponding to that depth [5]. The defocused image $g(i, j)$ is given by

$$g(i, j) = f(i, j) * h(i, j),$$

where $*$ represents the convolution operation. From (1) and (2), we observe that the PSF is a function of $D$. Thus, the defocusing system is shift-variant. It may be assumed to behave like a shift-invariant system only for subimages over which $D$ is nearly constant, and this particular assumption has been used by most researchers, as discussed in the previous section. For the general situation, where the depth at various points in the scene may be continuously varying, $\sigma$ would vary all over the image. Hence, the intensity of the $(i, j)$th pixel in the defocused image is given by

$$g(i, j) = \sum_m \sum_n f(m, n)\, h(i - m, j - n; m, n),$$

where $f(\cdot)$ is the focused image and $h(\cdot\,; m, n)$ is the space-varying PSF at the pixel location $(m, n)$.
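The space-variant superposition above can be simulated directly by letting every scene point spread with its own Gaussian PSF. The following is a deliberately naive sketch (my construction, with an assumed truncation radius and unit-energy kernels), one plausible way to synthesize defocused test images like those of Section 6, not the authors' implementation.

```python
import numpy as np

def sv_gaussian_blur(f, sigma_map, radius=8):
    """Direct space-variant superposition
        g(i, j) = sum_{m,n} f(m, n) h(i - m, j - n; m, n),
    where h(.; m, n) is a truncated, unit-energy Gaussian whose spread is
    sigma_map[m, n]. O(N^2 * k^2): clarity over speed."""
    H, W = f.shape
    g = np.zeros((H, W))
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    for m in range(H):
        for n in range(W):
            s = max(sigma_map[m, n], 1e-6)
            psf = np.exp(-(x**2 + y**2) / (2.0 * s**2))
            psf /= psf.sum()                    # each point conserves energy
            i0, i1 = max(m - radius, 0), min(m + radius + 1, H)
            j0, j1 = max(n - radius, 0), min(n + radius + 1, W)
            g[i0:i1, j0:j1] += f[m, n] * psf[i0 - (m - radius):i1 - (m - radius),
                                             j0 - (n - radius):j1 - (n - radius)]
    return g
```

Note that the PSF is attached to the source pixel $(m, n)$, not the destination: this scatter formulation is exactly the superposition written above, and it is what makes the system shift-variant rather than a convolution.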

3. THE BLOCK SHIFT-VARIANT BLUR MODEL

In this section, we propose a block shift-variant blur model [20] that attempts to solve the problem of space-varying PSF over neighboring regions discussed in Section 1. An image is partitioned into a number of smaller subimages, and the depth (and hence the blur) is assumed to be constant over these subimages. But the blur is assumed to be varying over adjacent subimages while recovering the blur parameter associated with any subimage. It may be mentioned here that the proposed block shift-variant (BSV) technique is quite different from that of Ens and Lawrence [12]. While they use varying spatial supports in the two defocused images, they do not consider any spatial interaction of blur among neighboring subimages; the blur is assumed to be constant over the entire region of support. The technique proposed here assumes a piecewise constant variation in the blur parameter.

We consider 1D signals or 1D images for notational simplicity. We form focused subimages $f_i$, $i = 0, 1, \ldots, I - 1$, by partitioning the original image of size $N$ pixels into $I$ such smaller blocks ($I_1 \times I_2$ blocks in 2D images), each of size $d$ pixels. For a particular lens setting, the subimage $f_i$, which corresponds to a depth $D_i$ of the scene, will be blurred by the blur parameter given by (1) with $D = D_i$. The value of $\sigma$ for the $i$th focused region is denoted by $\sigma(i)$. The associated PSF is denoted by $h_i(m)$, and the corresponding defocused image due only to the focused subimage $f_i(m)$ is denoted by $f_{h_i}(m)$. Thus we have $f_{h_i}(m) = f_i(m) * h_i(m)$. It should be noted here that the observed (blurred) image of the $i$th focused subimage is not the same as $f_{h_i}(m)$. The contribution from neighboring subimages must also be considered while expressing the blurred image.

For simplicity, we assume a toroidal model for the image [21–23]. We define the neighborhood $N_{f_i}$ of subimage $f_i$ as $N_{f_i} = \{f_{i-J}, f_{i-J+1}, \ldots, f_i, \ldots, f_{i+J}\}$, where $J$ indicates the order of the neighborhood, i.e., $J = 1$ for first order and $J = 2$ for second order. If $y_i$ is the defocused image corresponding to the neighborhood-$N_{f_i}$-limited blur, then

$$y_i(m) = \sum_{n_i} f_{h_{n_i}}(m + a_i d), \quad \forall m \text{ and } i = 0, 1, \ldots, I - 1, \tag{3}$$

where $a_i = i - (I - 1)/2$ and $n_i = i - J, i - J + 1, \ldots, i, \ldots, i + J$. For example, if $I = 5$ and $J = 1$, then $y_1(m) = f_{h_0}(m - d) + f_{h_1}(m - d) + f_{h_2}(m - d)$. Thus, $a_i$ decides the amount of shift for the image $y_i$. (Note that (3) is actually an approximation, because the contribution from outside the neighborhood has been ignored.)

FIG. 4. Illustration of step blurring due to discontinuity in depth: (a) original Lena image and (b, c) blurred images due to two different settings of camera parameters.

Let $p_i(k) = \exp(-j(2\pi/N) k a_i d)$, where $N$ is the size of the image. Taking the $N$-point discrete Fourier transform (DFT) of both sides of (3), we get

$$Y_i(k) = p_i^*(k) \sum_{n_i} F_{n_i}(k)\, H_{n_i}(k), \quad \forall k \text{ and } i = 0, 1, \ldots, I - 1, \tag{4}$$

where $Y_i(k)$, $F_{n_i}(k)$, and $H_{n_i}(k)$ are the DFTs of $y_i(m)$, $f_{n_i}(m)$, and $h_{n_i}(m)$, respectively, while $^*$ represents the complex conjugate. An $N$-point DFT is taken to account for the effect of circular convolution. Now (4) can be written in matrix form as

$$\mathbf{Y}(k) = B_{I \times I}\, A_{I \times I}\, \mathbf{F}_H(k), \quad \text{for any } k, \tag{5}$$

where $\mathbf{Y}(k) = [Y_0(k), Y_1(k), \ldots, Y_{I-1}(k)]^T$ and $\mathbf{F}_H(k) = [F_0(k)H_0(k), F_1(k)H_1(k), \ldots, F_{I-1}(k)H_{I-1}(k)]^T$. The matrix $A$ is symmetric and circulant. For example, if $I = 5$ and $J = 1$, then

$$A = \begin{bmatrix} 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 1 & 1 \end{bmatrix}.$$
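A small sketch, under the toroidal-neighborhood reading of Eq. (3), of how $A$ and the diagonal matrix $B$ of Eq. (5) can be formed and inverted; the function and variable names are mine, not the paper's.

```python
import numpy as np

def neighborhood_matrix(I, J):
    """Circulant I x I matrix A of Eq. (5): row i has ones in columns
    i-J .. i+J (mod I), one per subimage in the toroidal neighborhood."""
    A = np.zeros((I, I))
    for i in range(I):
        for o in range(-J, J + 1):
            A[i, (i + o) % I] = 1.0
    return A

def B_matrix(I, k, d, N):
    """Diagonal B with b_ii = conj(p_i(k)), where
    p_i(k) = exp(-j 2 pi k a_i d / N) and a_i = i - (I - 1)/2."""
    a = np.arange(I) - (I - 1) / 2.0
    return np.diag(np.exp(1j * 2.0 * np.pi * k * a * d / N))

A = neighborhood_matrix(5, 1)
print(A[0])                                    # [1. 1. 0. 0. 1.]
C = np.linalg.inv(B_matrix(5, 3, 8, 40) @ A)   # C = (BA)^{-1}, used in Eq. (6)
```

Since $A$ is circulant, its eigenvalues are available by FFT of its first row, so the inversion can be done frequency by frequency for large $I$; the dense inverse above is fine at these block counts.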

The matrix $B$ is diagonal with entries $b_{ii} = p_i^*(k)$. Equivalently, from (5), we get

$$\mathbf{F}_H(k) = C_{I \times I}\, \mathbf{Y}(k), \tag{6}$$

where $C = (BA)^{-1}$. Since $A$ is a circulant matrix, it can be inverted efficiently. Further, the matrix $C$ can be precomputed for a given neighborhood structure. It may be noted that it is not necessary to assume a toroidal model for the image; however, the invertibility of the corresponding $A$ matrix must be ensured.

As we are interested in estimating only the blur parameters, we attempt to eliminate the focused image $f_i$ using the two defocused images. Henceforth, the quantities $H_i(k)$, $Y_i(k)$, and $\sigma(i)$ subscripted by another variable $j$ indicate that they correspond to the $j$th defocused image, $j = 1, 2$. To estimate a specific $\sigma_j(i)$, we have, from (6),

$$\Phi_{ij}(k) = F_i(k)\, H_{ij}(k) = \sum_{n=0}^{I-1} c_{in}\, Y_{nj}(k), \quad \forall k,\ j = 1, 2, \tag{7}$$

where $c_{in}$ is the $(i, n)$th element of the matrix $C$. If the PSF is assumed to be Gaussian, then by dividing (7) (for $j = 2$ by $j = 1$) and equating the square of the magnitude, we get

$$\exp(-\omega^2(k)\, s(i)) = \frac{|\Phi_{i2}(k)|^2}{|\Phi_{i1}(k)|^2}, \quad \forall k,$$

where $s(i) = \sigma_2^2(i) - \sigma_1^2(i)$ and $\omega(k)$ is the discrete frequency variable such that $\omega(k) = (2\pi/N)k$ for $k = 0$ to $N/2 - 1$ and $\omega(k) = (2\pi/N)(N - k)$ for $k = N/2$ to $N - 1$. We now pose the problem of estimating $s(i)$ as

$$\min_{s(i)} \sum_{k=0}^{N-1} \left( \exp(-\omega^2(k)\, s(i))\, |\Phi_{i1}(k)|^2 - |\Phi_{i2}(k)|^2 \right)^2. \tag{8}$$

The function in (8) is minimized with respect to $s(i)$ using the gradient descent method. Once $s(i)$ is known, the depth $D_i$ can be estimated using (1). Thus, all the values of $s(i)$, $i = 0$ to $I - 1$, can be estimated simultaneously. The above analysis carries over to the 2D case in a straightforward manner. It turns out that, for the 2D case, the matrix $A$ is block circulant.

For an $N$-point 1D image, one has to calculate $N/d$ $N$-point DFTs to obtain the quantities $\Phi_{i1}$ and $\Phi_{i2}$ in (8). Thus, the order of computation is $O((N^2/d) \log N)$. With an increase in the size $N$ of the image or in the number $I$ of focused subimages, the above scheme becomes computationally intensive. To overcome this problem, the image may be initially partitioned into a number of independent regions. All these partitions are processed independently with the assumption of a toroidal model, and the blur parameters are estimated using the above scheme. If $M$ represents the number of such partitions, the saving in computation is of the order of $(M \log N)/\log(N/M)$.

We now proceed to show that the formulations proposed by Pentland [5] or Subbarao [6] are a special case of the above. When the neighborhood is not considered for estimating the blur parameter, i.e., $J = 0$, we have from (3) that $y_i(m) = f_{h_i}(m + a_i d)$. The matrix $A$ in (5) is then the identity matrix. Therefore, in (5), the matrix $C = (BA)^{-1}$ is a diagonal matrix with entries $c_{ii} = p_i(k)$. To estimate a specific $\sigma_j(i)$, we have from (5) that $F_i(k)\, H_{ij}(k) = c_{ii}\, Y_{ij}(k)$, $j = 1, 2$. Assuming the PSF to be Gaussian, it can be shown that $\exp(-\omega^2(k)\, s(i)) = |Y_{i2}(k)|^2 / |Y_{i1}(k)|^2$, which is the model proposed by Pentland and Subbarao [5, 6]. In this model, the estimate of $s(i)$ (in the continuous domain) is given by

$$\hat{s}(i) = \frac{1}{A_r} \int_R \left( -\frac{1}{\omega^2} \right) \log \frac{|Y_{i2}(\omega)|^2}{|Y_{i1}(\omega)|^2}\, d\omega,$$

where $R$ is the region of the $\omega$ space not containing points where $|Y_{i2}(\omega)|^2 > |Y_{i1}(\omega)|^2$. It may be noted here that the order of computation required for this method is only $N \log d$, which is much less than the requirement of the proposed scheme. However, a significant improvement in accuracy is obtained with the proposed scheme.
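The scalar fit of Eq. (8) is easy to reproduce. Below is a hedged sketch of the minimization: the paper only says "gradient descent", and since the raw gradient of (8) is badly scaled, this version takes a normalized (sign) step, a robustness tweak of mine; step size, iteration count, and the synthetic check are likewise my choices.

```python
import numpy as np

def estimate_s(Phi1, Phi2, omega, s0=0.0, step=0.002, iters=3000):
    """Minimize Eq. (8), sum_k (exp(-omega_k^2 s)|Phi1(k)|^2 - |Phi2(k)|^2)^2,
    over the scalar s by normalized-step gradient descent."""
    P1, P2 = np.abs(Phi1) ** 2, np.abs(Phi2) ** 2
    s = s0
    for _ in range(iters):
        e = np.exp(-omega**2 * s)
        grad = np.sum(2.0 * (e * P1 - P2) * (-omega**2) * e * P1)  # d/ds of Eq. (8)
        s -= step * np.sign(grad)
    return s

# Synthetic check: build Phi2 from Phi1 with a known s and recover it.
N = 64
k = np.arange(N)
omega = np.where(k < N // 2, 2 * np.pi * k / N, 2 * np.pi * (N - k) / N)
Phi1 = np.sqrt(np.random.rand(N) + 0.5)
Phi2 = Phi1 * np.exp(-omega**2 * 0.8 / 2.0)   # so |Phi2|^2 = |Phi1|^2 exp(-w^2 0.8)
print(estimate_s(Phi1, Phi2, omega))          # close to 0.8
```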
4. SPACE-FREQUENCY PROCESSING

4.1. Representation

Linear space-invariant filtering of a signal $x_1(t)$ is based on obtaining its Fourier transform $X_1(\omega)$ and multiplying it by a chosen function $H(\omega)$, the transfer function of the filter, to obtain a new signal $x_2(t)$ whose Fourier transform is the product $X_1(\omega)H(\omega)$. An intuitive generalization for performing a space-variant filtering operation on $x_1(t)$ is to go through similar steps: obtain a space-frequency representation of the signal, $\mathcal{X}_1(\omega, t)$ (a space-varying spectrum), multiply it by an arbitrary function $H(\omega, t)$, which may be regarded as a space-varying transfer function, and then compute the signal $x_2(t)$ whose space-frequency representation (SFR) is the product $\mathcal{X}_2(\omega, t) = \mathcal{X}_1(\omega, t)\, H(\omega, t)$, assuming that it exists [24]. We shall now discuss the complex spectrogram and the Wigner distribution, in brief.

FIG. 5. Estimates of $u(i, j)$ for the step blur shown in Fig. 4 using (a) the existing method, (b) the BSV model, (c) the CS method with a square window, (d) the CS method with a Gaussian window, and (e) the PWD method with a Gaussian window. Figures 5a and 5b have been drawn with spikes for better depiction of the plots.

The complex spectrogram (CS) and the Wigner distribution (WD) of a signal $x(t)$ are defined, respectively, by

$$C_x(\omega, t) = \int x(t')\, u_w^*(t' - t)\, \exp(-j\omega t')\, dt'$$

and

$$W_x(\omega, t) = \int x\!\left(t + \frac{t'}{2}\right) x^*\!\left(t - \frac{t'}{2}\right) \exp(-j\omega t')\, dt',$$

where $u_w(t)$ is a chosen window function and $u_w^*(t)$ represents its complex conjugate. In a sense, the WD is a self-windowed version of the CS. The inverse relations for computing $x(t)$ from its CS and WD are, respectively,

$$x(t) = k_C \int C_x(\omega, t)\, \exp(j\omega t)\, d\omega/2\pi$$

and

$$x(t) = k_W \int W_x\!\left(\omega, \frac{t}{2}\right) \exp(j\omega t)\, d\omega/2\pi,$$

where $k_C = 1/u_w^*(0)$ and $k_W = 1/x^*(0)$. It may be noted that the signal $x(t)$ can be recovered from its WD only up to an unknown scale factor.

The discrete-space CS and WD are defined, respectively, as

$$C_x(\omega, n) = \sum_m x(m)\, u_w^*(m - n)\, \exp(-j\omega m)$$

and

$$W_x(\omega, n) = 2 \sum_m x(n + m)\, x^*(n - m)\, \exp(-j2\omega m).$$

The discrete-space CS is $2\pi$-periodic in $\omega$, while the discrete-space WD is $\pi$-periodic in $\omega$ [25]. The pseudo-Wigner distribution (PWD) is a finite-length windowed WD and is defined as

$$\widetilde{W}_x(\omega, n) = 2 \sum_{m=-(L-1)}^{L-1} x(n + m)\, x^*(n - m)\, u_w(m)\, u_w^*(-m)\, \exp(-j2\omega m). \tag{9}$$

The pseudo-WD is primarily used for computational benefit. The window $u_w(n)$ is usually a rectangular one, in order to truncate the length of the data; however, any other window may also be used.
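Both representations can be evaluated on an FFT grid. The sketch below is my construction (function names, the odd-length window convention, and the example window width are assumptions), computing the discrete CS and the PWD of Eq. (9) for a 1D signal. Only magnitudes are used for depth recovery later, so the pixel-dependent pure phase that the CS indexing leaves behind is harmless.

```python
import numpy as np

def complex_spectrogram(x, u_w):
    """Discrete CS on an FFT grid: for each n, the DFT of the windowed
    segment x(n + m) conj(u_w(m)), m = -L..L (odd window, length 2L + 1).
    Differs from C_x(omega, n) only by a pure phase, which cancels in |C|^2."""
    L = len(u_w) // 2
    xp = np.pad(x, L)                     # zero-pad so every window fits
    return np.array([np.fft.fft(xp[n:n + len(u_w)] * np.conj(u_w))
                     for n in range(len(x))])

def pseudo_wigner(x, u_w):
    """Discrete PWD of Eq. (9): for each n, the DFT over m of
    2 x(n + m) conj(x(n - m)) u_w(m) conj(u_w(-m))."""
    L = len(u_w) // 2
    xp = np.pad(x, L)
    m = np.arange(-L, L + 1)
    w2 = u_w * np.conj(u_w[::-1])         # u_w(m) * conj(u_w(-m))
    rows = []
    for n in range(len(x)):
        prod = 2.0 * xp[n + L + m] * np.conj(xp[n + L - m]) * w2
        rows.append(np.fft.fft(np.fft.ifftshift(prod)))   # put m = 0 first
    return np.array(rows)

# Example window: Gaussian, as favored in Section 6 (sigma = 16 assumed).
mg = np.arange(-16, 17)
u_w = np.exp(-mg**2 / (2.0 * 16.0**2))
```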

4.2. Filtering

The ability to generate an SFR of a signal from which the signal can be uniquely recovered (operations analogous to the Fourier transform) suggests space-variant filtering by manipulation of the SFR. Let signals $x_1(t)$ and $x_2(t)$ be such that

$$C_{x_2}(\omega, t) = C_{x_1}(\omega, t)\, H(\omega, t). \tag{10}$$

Using the definition of the CS in (10), and after some algebraic manipulations, the input–output relation can be shown to be

$$x_2(t) = \int \tilde{h}(t, t')\, x_1(t')\, dt',$$

where

$$\tilde{h}(t, t') = k_C \int u_w^*(t' - t)\, H(\omega, t)\, \exp(j\omega(t - t'))\, d\omega/2\pi.$$

Thus, the CS space-frequency filter is a linear space-variant filter with a kernel function $\tilde{h}(t, t')$.

If the signals $x_1(t)$ and $x_2(t)$ are such that

$$W_{x_2}(\omega, t) = W_{x_1}(\omega, t)\, H(\omega, t),$$

then one can show that

$$x_2(t) = \int \tilde{h}(t, t')\, x_1(t')\, x_1^*(t - t')\, dt',$$

where

$$\tilde{h}(t, t') = k_W \int H\!\left(\frac{\omega}{2}, \frac{t}{2}\right) \exp(j\omega(t - t'))\, d\omega/2\pi.$$

Hence, the WD space-frequency filter is a quadratic filter. It may be noted that, for the same $H(\omega, t)$, the corresponding spatial kernels $\tilde{h}(t, t')$ for the CS and the WD are different.

FIG. 6. A demonstration that there should be sufficient spectral information in the scene for proper recovery of depth: (a) example of a scene without much spectral content (obtained by severely blurring Fig. 2a) and (b, c) two of its defocused images.

5. SPACE-VARIANT FILTERING FOR RECOVERING THE DEPTH FROM DEFOCUS

In this section, we propose the use of the complex spectrogram and the pseudo-Wigner distribution for recovering the depth from defocused images [26]. The estimate of the blur parameter is obtained at all points in the image.

5.1. Depth Recovery Using the Complex Spectrogram

Let $f(t)$ be the focused image of the scene, while $g_1(t)$ and $g_2(t)$ are the corresponding defocused images obtained with different camera parameter settings. As discussed in Section 3, only a single variable $t$ is used to represent the two-dimensional function, for notational simplicity. Let

$$C_{g_1}(\omega, t) = C_f(\omega, t)\, H_1(\omega, t) \tag{11}$$

and

$$C_{g_2}(\omega, t) = C_f(\omega, t)\, H_2(\omega, t), \tag{12}$$

where $H_1(\omega, t)$ and $H_2(\omega, t)$ are the space-varying transfer functions of the DFD system corresponding to the two different camera parameter settings. From (11) and (12), it follows that $C_{g_2}(\omega, t) = C_{g_1}(\omega, t)\, H(\omega, t)$, where $H(\omega, t) = H_2(\omega, t)/H_1(\omega, t)$. In the discrete space domain, the corresponding expression is given by

$$C_{g_2}(\omega, n) = C_{g_1}(\omega, n)\, H(\omega, n). \tag{13}$$

For the $i$th pixel, i.e., $n = i$, we get $C_{g_2}(\omega)|_i = C_{g_1}(\omega)|_i\, H(\omega)|_i$, where $C(\omega)|_i$ implies evaluation of the function $C(\omega, n)$ at $n = i$. For the specific case when the PSF is a Gaussian function, we get

$$|C_{g_2}(\omega)|_i|^2 = |C_{g_1}(\omega)|_i|^2\, \exp(-\omega^2 s(i)),$$

where $s(i) = \sigma_2^2(i) - \sigma_1^2(i)$, while $\sigma_1(i)$ and $\sigma_2(i)$ correspond to the blur parameters of the first and the second defocused images at the $i$th pixel.

FIG. 7. Estimates of $u(i, j)$ for the scene in Fig. 6a using (a) the BSV model, (b) the CS method, and (c) the PWD method.

FIG. 8. Effect of sensor noise on accuracy is tested here. Additive, white Gaussian noise is added to the images in Figs. 2b and 2c. The SNR is 20 dB.

FIG. 9. The experiment in Fig. 8 is repeated for a very large sensor noise. The SNR is 5 dB.

FIG. 10. Results of the estimate of $u(i, j)$ for the images in Fig. 8 using (a) the BSV model, (b) the CS method, and (c) the PWD method.

FIG. 11. Results of the estimates of $u(i, j)$ with the noisy images of Fig. 9, when the sensor perturbation is large.

Now, $C_{g_j}(\omega)|_i$ is the Fourier transform of the function $g_j(m)\, u_w^*(m - i)$, $j = 1, 2$. Any suitable window $u_w(m)$ may be used for this study. Thus, one can find the samples of the spectra $C_{g_2}(\omega)|_i$ and $C_{g_1}(\omega)|_i$ efficiently using FFT algorithms. If $2L - 1$ is the length of the window $u_w(\cdot)$, then one needs to evaluate an $N'$-point DFT, where $N' \geq 2L - 1$, at each pixel. Thus, the problem of estimating $s(i)$ can be posed as

$$\min_{s(i)} \sum_{k=0}^{N'-1} \left( |C_{g_2}(k)|_i|^2 - |C_{g_1}(k)|_i|^2\, \exp(-\omega^2(k)\, s(i)) \right)^2,$$

where $\omega(k)$ is the discrete frequency variable. By using the estimated value of $s(i)$ and (1), one can estimate the depth. The algorithm is repeated for all the pixels in the image. It may be noted that if the window is chosen to be rectangular and if the complex spectrogram is evaluated only for specific values of $n$, we arrive at the model proposed by Subbarao [6].
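Putting Eqs. (11)–(13) to work per pixel is mechanical once the spectrogram samples are available. The sketch below reuses the hypothetical complex_spectrogram() and estimate_s() helpers from the earlier sketches (my constructions, not the authors' code) to produce an $s(i)$ estimate at every pixel of a 1D signal pair.

```python
import numpy as np

def depth_map_cs(g1, g2, u_w):
    """Per-pixel fit of |C_g2|^2 = |C_g1|^2 exp(-omega^2 s(i)) using the
    complex spectrogram; returns one s estimate per pixel."""
    C1 = complex_spectrogram(g1, u_w)     # shape: (num_pixels, N')
    C2 = complex_spectrogram(g2, u_w)
    Np = C1.shape[1]
    k = np.arange(Np)
    omega = np.where(k < Np // 2, 2 * np.pi * k / Np, 2 * np.pi * (Np - k) / Np)
    return np.array([estimate_s(C1[i], C2[i], omega) for i in range(len(g1))])
```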

FIG. 12. Two blurred images of a scene consisting of a textured planar object, for two different focusing ranges of the camera.

5.2. Depth Recovery Using the Wigner Distribution

In this section, we propose a formulation that uses the pseudo-Wigner distribution (PWD) for estimating the depth from defocused images. In the Wigner distribution, instead of using an arbitrary window $u_w(m)$ to localize the spectral estimation in space, the data itself is used as the window. Let us denote, as in the previous section,

$$W_{g_1}(\omega, t) = W_f(\omega, t)\, H_1(\omega, t) \tag{14}$$

and

$$W_{g_2}(\omega, t) = W_f(\omega, t)\, H_2(\omega, t). \tag{15}$$

From (14) and (15), we have $W_{g_2}(\omega, t) = W_{g_1}(\omega, t)\, H(\omega, t)$. The corresponding expression in the discrete space domain is given by

$$W_{g_2}\!\left(\frac{\omega}{2}, n\right) = W_{g_1}\!\left(\frac{\omega}{2}, n\right) H(\omega, n). \tag{16}$$

Note that $W_{g_2}(\omega/2, n)$, $W_{g_1}(\omega/2, n)$, and $H(\omega, n)$ are all $2\pi$-periodic functions of $\omega$. It is interesting to note that $W_{g_j}(\omega/2, n)$ is the Fourier transform of the signal $2 g_j(n + m)\, g_j^*(n - m)$, $j = 1, 2$. To bring down the computational complexity, one usually uses the pseudo-Wigner distribution of window size $2L - 1$. Thus, (16) becomes

$$\widetilde{W}_{g_2}\!\left(\frac{\omega}{2}, n\right) = \widetilde{W}_{g_1}\!\left(\frac{\omega}{2}, n\right) H(\omega, n), \tag{17}$$

where $\widetilde{W}_{g_j}(\cdot)$ is given by (9). For the $i$th pixel in the image, we have $\widetilde{W}_{g_2}(\omega/2)|_i = \widetilde{W}_{g_1}(\omega/2)|_i\, H(\omega)|_i$. One can find the samples of the spectra $\widetilde{W}_{g_2}(\omega/2)$ and $\widetilde{W}_{g_1}(\omega/2)$ efficiently using FFT algorithms. Thus, the problem of estimating $s(i)$ can be posed as

$$\min_{s(i)} \sum_{k=0}^{N'-1} \left( |\widetilde{W}_{g_2}(k)|_i|^2 - |\widetilde{W}_{g_1}(k)|_i|^2\, \exp(-\omega^2(k)\, s(i)) \right)^2.$$

The computational complexity of the PWD is of the order of $N'(\log_2 N' + 4)$ [25]. This complexity is of the same order of magnitude as that of the complex spectrogram. Once $s(i)$ is estimated, the depth can be estimated as discussed earlier. The algorithm is repeated for all pixels in the image.
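The PWD variant has the same per-pixel shape. Per my reading of the $\omega/2$ scaling in Eq. (16), stated here as an assumption, the $k$th FFT bin of the PWD sketched earlier pairs with $H$ at the ordinary DFT frequency $2\pi k/N'$, so the same frequency grid can be reused.

```python
import numpy as np

def depth_map_pwd(g1, g2, u_w):
    """Per-pixel fit of |W_g2|^2 = |W_g1|^2 exp(-omega^2 s(i)) using the
    pseudo_wigner() and estimate_s() sketches from Sections 4 and 3."""
    W1 = pseudo_wigner(g1, u_w)
    W2 = pseudo_wigner(g2, u_w)
    Np = W1.shape[1]
    k = np.arange(Np)
    omega = np.where(k < Np // 2, 2 * np.pi * k / Np, 2 * np.pi * (Np - k) / Np)
    return np.array([estimate_s(W1[i], W2[i], omega) for i in range(len(g1))])
```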

FIG. 13. Estimates of the depth for the scene in Fig. 12 using (a) the BSV model, (b) the CS method, and (c) the PWD method. The depth profile has been plotted only for points lying on the object (i.e., the textured region) and not for points in the background. The planar nature of the object is quite visible from these plots.

6. EXPERIMENTAL RESULTS

We present results on the recovery of the depth using the block shift-variant (BSV) model, the complex spectrogram (CS), and the pseudo-Wigner distribution (PWD) methods. We also compare the results of the proposed schemes with those of the existing method proposed by Subbarao [6]. In the discussion to follow, the following comments are in order. The positive square root of the blur parameter $s(i, j)$ (which is given by $\sigma_2^2(i, j) - \sigma_1^2(i, j)$) is denoted by $u(i, j)$. For the BSV model, to reduce computation, the defocused images were partitioned into regions of size 64 × 64 pixels each and were processed independently; thus, for an image of size 128 × 128 pixels, the number of partitions $M$ would be 4. A first-order neighborhood ($J = 1$) was chosen for each region. The size $d \times d$ of each focused subimage was chosen to be … pixels, because the maximum value of $u(i, j)$ in our simulations was 2.2 and the area under a Gaussian curve is negligible beyond $3\sigma$. When the value of $s(i, j)$ is estimated over all the blocks (subimages) in each of the regions using the BSV model, the estimate of $s(i, j)$ is treated to be the same for all the pixels within a block. For the CS and the PWD methods, the size of the window was chosen to be … pixels, and the blur parameter $s(i, j)$ was estimated at every pixel in the image. For the existing method, we present results using a window of size … pixels, to enable comparison with the corresponding estimates given by the BSV model and by the space-frequency representations. In all our simulations, the blurring operator was the 2D Gaussian function. In all 3D plots, the blur parameter values (or the depth values, as the case may be) are plotted against the coordinates of the image plane.

In the first set of simulations, a binary random-dot pattern image was blurred by a 2D Gaussian function with $\sigma = 1$; this was done to obtain a gray level image. The image $I_b$ was then used as the original focused image. Two space-variant defocused images were generated as follows. The first space-variant blurred image was generated by blurring $I_b$ with a space-varying blur of the form

$$\sigma_1(i, j) = a \exp\left( -\frac{(i - N_1/2)^2 + (j - N_2/2)^2}{2b^2} \right),$$

where the image size is $N_1 \times N_2$ and $a$ and $b$ are constants. The second defocused image was generated by blurring $I_b$ with $\sigma_2(i, j) = \gamma\, \sigma_1(i, j)$, where $\gamma$ is a constant. Such a linear relationship exists between $\sigma_1(i, j)$ and $\sigma_2(i, j)$ when defocused images of a scene are obtained using different values of the camera aperture. The values chosen for the various parameters were $N_1 = 128$, $N_2 = 128$, $a = 1.0$, $b = 42.0$, and $\gamma = 2.0$. The original image $I_b$ and its defocused versions $I_{b1}$ and $I_{b2}$ are shown in Figs. 2a, 2b, and 2c, respectively. The actual value of $u(i, j)$ is plotted in Fig. 3a.

The existing method, the BSV model, the CS, and the PWD methods were used to estimate the value of $u(i, j)$ from the defocused images. The estimates of $u(i, j)$ corresponding to these methods are plotted in Figs. 3b–3f, respectively. Figures 3d and 3e both correspond to the estimate of $u(i, j)$ using the CS method, but with a square window and a Gaussian window (with $\sigma = 16$), respectively. The corresponding root mean square (rms) errors are given in the figures. From the plots of Fig. 3, we note that the estimates given by the CS, the PWD, and the BSV models are quite accurate. The error in the estimation of the blur parameter is very large for the existing method compared to that for the proposed methods. The SFR-based methods offer higher accuracy compared to the BSV model.
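For reference, the blur maps of this first simulation are easy to reproduce. The sketch below uses the parameter values as reconstructed above; the symbol gamma for the aperture-ratio constant is my naming.

```python
import numpy as np

N1 = N2 = 128
a, b, gamma = 1.0, 42.0, 2.0
i, j = np.mgrid[0:N1, 0:N2]
sigma1 = a * np.exp(-((i - N1 / 2) ** 2 + (j - N2 / 2) ** 2) / (2 * b ** 2))
sigma2 = gamma * sigma1
u_true = np.sqrt(sigma2 ** 2 - sigma1 ** 2)   # the surface plotted in Fig. 3a
# The defocused pair would then be produced with the space-variant blur
# sketch of Section 2, e.g., g1 = sv_gaussian_blur(I_b, sigma1).
```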

FIG. 14. Example of a scene with step discontinuity in depth. Two images of the same scene with (a) less and (b) more blurring, obtained by setting different focusing ranges of the camera, are shown here.

The performance of the existing method can be improved by choosing a window of larger size (about … pixels). It may be noted here that the BSV model assumes a constant depth over $d \times d$ pixels only, whereas for the existing method the depth is assumed to be constant over the entire size of the window. The performance of the CS method is better with the Gaussian window than with the square window. For the PWD method, it was found that its performance using the Gaussian window was only marginally better than with a square window; hence, only the plot using the Gaussian window (with $\sigma = 16$) has been shown for the PWD in the rest of the simulations. From the plots of Fig. 3, we also note that the estimates given by the CS method are smoother than those given by the PWD method.

In the second set of simulation results, the original Lena image was used as the focused image. This image was blurred by a discontinuous blur function given by $\sigma_1(i, j) = 0.01$ for $j \leq N/2$ and $\sigma_1(i, j) = 1.0$ otherwise, to get the first defocused image. The second defocused image was generated by blurring the Lena image with a step blur given by $\sigma_2(i, j) = 1.0$ for $j \leq N/2$ and $\sigma_2(i, j) = 2.4$ otherwise. Hence, the actual value of $u(i, j)$ was 1.0 for $j \leq N/2$ and 2.2 otherwise. The original Lena image and its defocused versions are shown in Figs. 4a, 4b, and 4c, respectively. The estimates of $u(i, j)$ using the two defocused images were again obtained using the existing method, the BSV model, the CS, and the PWD methods. The corresponding estimates of $u(i, j)$ are plotted in Figs. 5a–5e, respectively. Figures 5c and 5d correspond to the estimates using the CS method with a square window and a Gaussian window (with $\sigma = 16$). From the plots of Fig. 5, we note that the BSV model, the CS, and the PWD methods are all able to capture the presence of the edge quite well, although the estimates tend to have a large variance. The rms errors for all the methods are quite large due to the presence of the discontinuity in the blur function. The performance of the CS method with the Gaussian window is again observed to be better than that with the square window; hence, the Gaussian window was used in the rest of the simulations. It may be noted that the edge captured by the PWD method is sharper than that given by the CS method.

In the third set of simulation results, we demonstrate the effect on the estimate of the blur parameter of a reduction in the spectral content of the original focused image. For this purpose, the random-dot pattern in Fig. 2a was blurred by a 2D Gaussian function with a large $\sigma$ value of 2.3. The severely blurred image, shown in Fig. 6a, was then taken to be the original image. Two space-variant defocused images were generated from this image using the blur values $\sigma_1(i, j)$ and $\sigma_2(i, j)$ discussed in the first set of simulations. The two defocused images are shown in Figs. 6b and 6c. Using these defocused images, the estimates of $u(i, j)$ were obtained using the BSV model, the CS, and the PWD methods. The corresponding estimates of $u(i, j)$ are plotted in Figs. 7a–7c, respectively. From the plots of Fig. 7, we observe that, in comparison to their counterparts in Fig. 3, the estimates are relatively poor. This example demonstrates the fact that the DFD scheme is quite sensitive to the presence of spectral content in the scene, a fact that has often been stressed by researchers in this area [5, 6]. An increase in the rms error for all the methods corroborates this fact.
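The quoted $u$ values in the step-blur experiment follow directly from $u(i, j) = \sqrt{\sigma_2^2 - \sigma_1^2}$, as the quick check below shows (placing the boundary at $j \leq N/2$ is my reading of the transcription).

```python
import numpy as np

N = 128
j = np.arange(N)
sigma1 = np.where(j <= N // 2, 0.01, 1.0)   # step blur of the first image
sigma2 = np.where(j <= N // 2, 1.0, 2.4)    # step blur of the second image
u = np.sqrt(sigma2 ** 2 - sigma1 ** 2)
print(u[0], u[-1])   # ~1.0 and ~2.18, i.e., the 1.0 and 2.2 quoted above
```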

FIG. 15. Estimates of the depth for the scene (only the textured region) in Fig. 14 using (a) the BSV model, (b) the CS method, and (c) the PWD method. These plots clearly bring out the presence of the step discontinuity of depth in the scene.

In the next set of simulations, we look at the effect of sensor noise on the accuracy of the estimates of the blur parameter. For this purpose, two noisy defocused images with a signal-to-noise ratio (SNR) of 20 dB were generated from $I_{b1}$ and $I_{b2}$. The noise was assumed to be additive, white Gaussian in nature. The noisy defocused images are shown in Figs. 8a and 8b. Using these noisy images, the estimates of $u(i, j)$ obtained with the BSV model, the CS, and the PWD methods are plotted in Figs. 10a–10c, respectively. From the plots of Fig. 10, one can note a marginal deterioration (the rms error is slightly higher) in the accuracy of the estimates of $u(i, j)$ in comparison to the corresponding estimates for the noiseless case given in Fig. 3. Next, we generate highly noisy defocused images from $I_{b1}$ and $I_{b2}$ with an SNR of only 5 dB. The noisy images are shown in Figs. 9a and 9b. Using these noisy images, the estimates of $u(i, j)$ obtained with the BSV model, the CS, and the PWD methods are plotted in Figs. 11a–11c, respectively. From the plots of Fig. 11, it is clear that the proposed methods have still been able to capture the variations in depth, though there is a definite loss in the accuracy of the estimate of the blur parameter due to the presence of high sensor noise in the data. This is also reflected in an increase in the rms error.
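A hedged sketch of the noise model used in these experiments; the SNR convention (signal power over noise power, in dB) is an assumption, since the paper does not spell out its definition.

```python
import numpy as np

def add_awgn(img, snr_db, seed=0):
    """Add additive white Gaussian noise at a target SNR in dB."""
    rng = np.random.default_rng(seed)
    img = np.asarray(img, dtype=float)
    p_signal = np.mean(img ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return img + rng.normal(0.0, np.sqrt(p_noise), img.shape)

# g1_20db = add_awgn(g1, 20.0)   # the 20 dB experiment (Fig. 8)
# g1_5db  = add_awgn(g1, 5.0)    # the 5 dB experiment (Fig. 9)
```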

FIG. 16. Another example of a scene where two planes are slanted inwards to simulate a triangular distribution of depth. The two images correspond to different amounts of defocusing of the scene.

Finally, the performances of the proposed methods were tested on a number of real images. Different types of scenes were constructed for this purpose. A Pulnix CCD camera with a focal length of 2.5 cm was used to grab the pictures. The lens aperture was kept constant at an f-number of 4. In the first experimental setup, the scene consisted of a planar object (with a dot pattern engraved on it to provide some texture). The farthest point on the object was at a distance of 125 cm, while the nearest point was about 115 cm from the camera. The variation in depth was along the horizontal direction. Blurred images of the scene were taken for two different focusing ranges of 115 and 90 cm, respectively. The corresponding defocused images are shown in Figs. 12a and 12b. In the following discussion, the depth was estimated only for points lying on the (textured) object in the two defocused images and not for points belonging to the background. The estimates of the depth were obtained using the BSV model, the CS, and the PWD methods, and the estimates are plotted in Figs. 13a–13c, respectively. It may be noted here that the camera was coarsely calibrated for the same setting of the camera parameters using another object at a known depth. The estimates of the depth were found to be reasonably accurate. From the plots, the planar nature of the object is quite evident. The rms values of the errors in estimating the depth for these methods are given in the corresponding plots.

In the second experimental setup, the scene contained abrupt (staircase) variations in the depth. The nearest and the farthest points on the object were at distances of 70 and 95 cm from the camera. Blurred images of the scene were taken at focusing ranges of 90 and 120 cm, and the images are shown in Figs. 14a and 14b, respectively. The depth was estimated using the proposed methods, and the estimates are plotted in Figs. 15a–15c. From the plots, we note that the abrupt variations in the depth of the scene have been well captured. The rms values of the errors obtained in these experiments are 7.43 cm for the BSV, 6.18 cm for the CS, and 6.41 cm for the PWD method. Also, compare the above results to those given in Fig. 5 for a simulated object having a step discontinuity in depth. The step edge becomes slanted due to the window that is needed to compute the local power spectrum.

Defocused images corresponding to another experimental setup are shown in Figs. 16a and 16b. These were taken for focusing ranges of 120 and 90 cm of the camera. The scene consists of two planar objects slanted inward to meet at a vertically oriented straight line. The nearest and the farthest points on the object were at distances of 105 and 120 cm from the camera. The estimates of the depths are plotted in Figs. 17a–17c and are found to be reasonably accurate. A triangular distribution of depth in the scene is quite apparent from these plots. The average ranging error in this study was 6.1% for the proposed methods.

Two defocused images of one more setup are shown in Figs. 18a and 18b. The focusing ranges were 90 and 120 cm. The scene consists of two planes, parallel to the image plane, connected by another slanted plane. A few objects, such as a blade, a key, and a ring, were hung on these planes at different points, marked A, B, and C in Fig. 18. It may be observed that both of the images are quite blurred, and even the identification of these objects in the images is difficult. The actual and the estimated values of the depth corresponding to the points A, B, and C are given in Table 1 for all three methods. The rms errors in the estimates of the depth for this scene were found to be 7.52 cm for the BSV model, 5.81 cm for the CS method, and 5.92 cm for the PWD method.

FIG. 17. Estimates of the depth for the scene in Fig. 16 using (a) the BSV model, (b) the CS method, and (c) the PWD method. The depth profile is plotted only for the textured region in the scene.

7. CONCLUSIONS

In this paper, we have proposed two new approaches for recovering the depth from defocused images. In the approach to recovering depth from defocused images, previous methods assume the depth to be constant over fairly large local regions and estimate the depth through inverse filtering by considering the system to be shift-invariant over those local regions. But it is known that analyzing a subimage in isolation introduces errors in the estimate, because the intensity at the border of the region is affected by the intensity immediately outside the region. The first approach proposed here models the DFD system as a block shift-variant system and incorporates the interaction of blurring among neighboring subimages in an attempt to solve this problem. In the second approach, we look at the DFD problem in the space-frequency representation framework. This enables us to work in a more generalized setup than others have. Due to the nature of the SFR, the estimate of the blur parameter is available at all points in the image. In particular, we show that the complex spectrogram and the pseudo-Wigner distribution are likely candidates for recovering the depth from defocused images.

The performance of all the proposed methods has been demonstrated on both synthetic and real images. Results show that, for the same size of the window over which the depth is assumed constant, the proposed BSV model outperforms the existing method in estimating the blur parameter. The performances of the CS and the PWD methods were found to be more accurate compared to the BSV model. We also note that the performance of the CS is marginally better than that of the PWD in estimating a smoothly varying blur. Simulation results have been presented to demonstrate the sensitivity of the estimate of the blur parameter to the spectral content in the scene and to the presence of sensor noise in the defocused images. The methods were shown to perform poorly if there is insufficient spectral content in the scene. The performance of the proposed methods on real images was also found to be quite satisfactory.

The proposed formulation is amenable to the incorporation of a smoothness constraint on the blur parameter. Because the change in the depth of a scene is usually gradual, this a priori information can be used to improve the performance of the proposed methods. Our work is currently directed to this end.

FIG. 18. Two defocused images of an experimental scene consisting of a blade (marked A), a key (marked B), and a ring (marked C) at different depths.

TABLE 1
Estimates of Depth at Various Locations

Location in scene | True depth | BSV model | CS method | PWD method
A                 | 75 cm      | 82.1 cm   | 70.4 cm   | 73.6 cm
B                 | 85 cm      | 91.3 cm   | 90.3 cm   | 89.8 cm
C                 | 95 cm      | —         | 93.3 cm   | 93.6 cm

ACKNOWLEDGMENTS

The authors thank the reviewers for their helpful and constructive suggestions that have greatly improved the presentation of the paper. Thanks are also due to Professors U. B. Desai and P. G. Poonacha of IIT, Bombay, for many useful discussions.

REFERENCES

1. B. K. P. Horn, Robot Vision, MIT Press, Cambridge, MA, 1986.
2. R. A. Jarvis, A perspective on range finding techniques for computer vision, IEEE Trans. Pattern Anal. Mach. Intell. 5, 1983.
3. E. P. Krotkov, Active Computer Vision by Cooperative Focus and Stereo, Springer-Verlag, New York.
4. A. P. Pentland, Depth of scene from depth of field, in Proc. Image Understanding Workshop.
5. A. P. Pentland, A new sense for depth of field, IEEE Trans. Pattern Anal. Mach. Intell. 9, 1987.
6. M. Subbarao, Parallel depth recovery by changing camera parameters, in Proc. IEEE Conference on Computer Vision, FL, 1988.
7. M. Subbarao, Efficient depth recovery through inverse optics, in Machine Vision for Inspection and Measurement (H. Freeman, Ed.), Academic Press, New York.
8. A. Pentland, T. Darell, M. Turk, and W. Huang, A simple real-time range camera, in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1989.
9. T. Hwang, J. J. Clark, and A. C. Yuille, A depth recovery algorithm using defocus information, in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1989.
10. M. Subbarao and T. Wei, Depth from defocusing and rapid autofocusing: A practical approach, in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1992.
11. V. M. Bove, Entropy-based depth from focus, J. Opt. Soc. Amer. A 10, 1993.
12. J. Ens and P. Lawrence, An investigation of methods for determining depth from focus, IEEE Trans. Pattern Anal. Mach. Intell., 1993.
13. G. Surya and M. Subbarao, Depth from defocus by changing camera aperture, in Proc. IEEE Conference on Computer Vision and Pattern Recognition, New York, 1993.
14. A. Pentland, S. Scherock, T. Darrell, and B. Girod, Simple range cameras based on focal error, J. Opt. Soc. Amer. A 11, 1994.
15. F. Hlawatsch and G. F. Boudreaux-Bartels, Linear and quadratic time-frequency signal representations, IEEE SP Magazine, 1992.
16. C. Gonzalo, J. Bescos, L. R. Berriel-Valdos, and J. Santamaria, Space-variant filtering through the Wigner distribution function, Appl. Opt. 28, 1989.
17. M. Born and E. Wolf, Principles of Optics, Pergamon, London.
18. J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, New York.
19. W. F. Schreiber, Fundamentals of Electronic Imaging Systems, Springer-Verlag, New York/Berlin.
20. A. N. Rajagopalan and S. Chaudhuri, A block shift-variant blur model for recovering depth from defocused images, in Proc. IEEE Conference on Image Processing, Washington, D.C., 1995.
21. J. E. Besag and P. A. Moran, On the estimation and testing of spatial interaction in Gaussian lattices, Biometrika 62, 1975.
22. R. L. Kashyap, Random field models on torus lattices, in Proc. IEEE Conference on Pattern Recognition, Miami, FL, 1980.
23. A. K. Katsaggelos, Digital Image Restoration, Springer-Verlag, Berlin.
24. N. S. Subotic and B. E. A. Saleh, Time-variant filtering of signals in the mixed time-frequency domain, IEEE Trans. Acoust. Speech Signal Process. 33, 1985.
25. T. A. C. M. Claasen and W. F. G. Mecklenbrauker, The Wigner distribution: A tool for time-frequency signal analysis, Philips J. Res. 35, 1980.
26. A. N. Rajagopalan and S. Chaudhuri, Recovery of depth from defocused images using space-frequency representation, in Proc. Indian Conference on Pattern Recognition, Image Processing, and Computer Vision, IIT Kharagpur, India, 1995.


More information

Optical transfer function shaping and depth of focus by using a phase only filter

Optical transfer function shaping and depth of focus by using a phase only filter Optical transfer function shaping and depth of focus by using a phase only filter Dina Elkind, Zeev Zalevsky, Uriel Levy, and David Mendlovic The design of a desired optical transfer function OTF is a

More information

Fig Color spectrum seen by passing white light through a prism.

Fig Color spectrum seen by passing white light through a prism. 1. Explain about color fundamentals. Color of an object is determined by the nature of the light reflected from it. When a beam of sunlight passes through a glass prism, the emerging beam of light is not

More information

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Ashill Chiranjan and Bernardt Duvenhage Defence, Peace, Safety and Security Council for Scientific

More information

Region Growing: A New Approach

Region Growing: A New Approach IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 7, NO. 7, JULY 1998 1079 [4] K. T. Lay and A. K. Katsaggelos, Image identification and restoration based on the expectation-maximization algorithm, Opt. Eng.,

More information

LENSLESS IMAGING BY COMPRESSIVE SENSING

LENSLESS IMAGING BY COMPRESSIVE SENSING LENSLESS IMAGING BY COMPRESSIVE SENSING Gang Huang, Hong Jiang, Kim Matthews and Paul Wilford Bell Labs, Alcatel-Lucent, Murray Hill, NJ 07974 ABSTRACT In this paper, we propose a lensless compressive

More information

Sensors and Sensing Cameras and Camera Calibration

Sensors and Sensing Cameras and Camera Calibration Sensors and Sensing Cameras and Camera Calibration Todor Stoyanov Mobile Robotics and Olfaction Lab Center for Applied Autonomous Sensor Systems Örebro University, Sweden todor.stoyanov@oru.se 20.11.2014

More information

Coded Aperture Pairs for Depth from Defocus

Coded Aperture Pairs for Depth from Defocus Coded Aperture Pairs for Depth from Defocus Changyin Zhou Columbia University New York City, U.S. changyin@cs.columbia.edu Stephen Lin Microsoft Research Asia Beijing, P.R. China stevelin@microsoft.com

More information

Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images

Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images Keshav Thakur 1, Er Pooja Gupta 2,Dr.Kuldip Pahwa 3, 1,M.Tech Final Year Student, Deptt. of ECE, MMU Ambala,

More information

Image Processing for feature extraction

Image Processing for feature extraction Image Processing for feature extraction 1 Outline Rationale for image pre-processing Gray-scale transformations Geometric transformations Local preprocessing Reading: Sonka et al 5.1, 5.2, 5.3 2 Image

More information

THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS

THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS 1 LUOYU ZHOU 1 College of Electronics and Information Engineering, Yangtze University, Jingzhou, Hubei 43423, China E-mail: 1 luoyuzh@yangtzeu.edu.cn

More information

SUPER RESOLUTION INTRODUCTION

SUPER RESOLUTION INTRODUCTION SUPER RESOLUTION Jnanavardhini - Online MultiDisciplinary Research Journal Ms. Amalorpavam.G Assistant Professor, Department of Computer Sciences, Sambhram Academy of Management. Studies, Bangalore Abstract:-

More information

Coded photography , , Computational Photography Fall 2018, Lecture 14

Coded photography , , Computational Photography Fall 2018, Lecture 14 Coded photography http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 14 Overview of today s lecture The coded photography paradigm. Dealing with

More information

Improved motion invariant imaging with time varying shutter functions

Improved motion invariant imaging with time varying shutter functions Improved motion invariant imaging with time varying shutter functions Steve Webster a and Andrew Dorrell b Canon Information Systems Research, Australia (CiSRA), Thomas Holt Drive, North Ryde, Australia

More information

Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing

Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing Ashok Veeraraghavan, Ramesh Raskar, Ankit Mohan & Jack Tumblin Amit Agrawal, Mitsubishi Electric Research

More information

Study of Turbo Coded OFDM over Fading Channel

Study of Turbo Coded OFDM over Fading Channel International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 3, Issue 2 (August 2012), PP. 54-58 Study of Turbo Coded OFDM over Fading Channel

More information

Preparing Remote Sensing Data for Natural Resources Mapping (image enhancement, rectifications )

Preparing Remote Sensing Data for Natural Resources Mapping (image enhancement, rectifications ) Preparing Remote Sensing Data for Natural Resources Mapping (image enhancement, rectifications ) Why is this important What are the major approaches Examples of digital image enhancement Follow up exercises

More information

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES 4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES Abstract: This paper attempts to undertake the study of deblurring techniques for Restored Motion Blurred Images by using: Wiener filter,

More information

Digital Image Processing 3/e

Digital Image Processing 3/e Laboratory Projects for Digital Image Processing 3/e by Gonzalez and Woods 2008 Prentice Hall Upper Saddle River, NJ 07458 USA www.imageprocessingplace.com The following sample laboratory projects are

More information

An Efficient Color Image Segmentation using Edge Detection and Thresholding Methods

An Efficient Color Image Segmentation using Edge Detection and Thresholding Methods 19 An Efficient Color Image Segmentation using Edge Detection and Thresholding Methods T.Arunachalam* Post Graduate Student, P.G. Dept. of Computer Science, Govt Arts College, Melur - 625 106 Email-Arunac682@gmail.com

More information

A Spatial Mean and Median Filter For Noise Removal in Digital Images

A Spatial Mean and Median Filter For Noise Removal in Digital Images A Spatial Mean and Median Filter For Noise Removal in Digital Images N.Rajesh Kumar 1, J.Uday Kumar 2 Associate Professor, Dept. of ECE, Jaya Prakash Narayan College of Engineering, Mahabubnagar, Telangana,

More information

Enhanced DCT Interpolation for better 2D Image Up-sampling

Enhanced DCT Interpolation for better 2D Image Up-sampling Enhanced Interpolation for better 2D Image Up-sampling Aswathy S Raj MTech Student, Department of ECE Marian Engineering College, Kazhakuttam, Thiruvananthapuram, Kerala, India Reshmalakshmi C Assistant

More information

Deconvolution , , Computational Photography Fall 2018, Lecture 12

Deconvolution , , Computational Photography Fall 2018, Lecture 12 Deconvolution http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 12 Course announcements Homework 3 is out. - Due October 12 th. - Any questions?

More information

Deconvolution , , Computational Photography Fall 2017, Lecture 17

Deconvolution , , Computational Photography Fall 2017, Lecture 17 Deconvolution http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 17 Course announcements Homework 4 is out. - Due October 26 th. - There was another

More information

A Study of Slanted-Edge MTF Stability and Repeatability

A Study of Slanted-Edge MTF Stability and Repeatability A Study of Slanted-Edge MTF Stability and Repeatability Jackson K.M. Roland Imatest LLC, 2995 Wilderness Place Suite 103, Boulder, CO, USA ABSTRACT The slanted-edge method of measuring the spatial frequency

More information

Defense Technical Information Center Compilation Part Notice

Defense Technical Information Center Compilation Part Notice UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADPO 11345 TITLE: Measurement of the Spatial Frequency Response [SFR] of Digital Still-Picture Cameras Using a Modified Slanted

More information

Defocusing and Deblurring by Using with Fourier Transfer

Defocusing and Deblurring by Using with Fourier Transfer Defocusing and Deblurring by Using with Fourier Transfer AKIRA YANAGAWA and TATSUYA KATO 1. Introduction Image data may be obtained through an image system, such as a video camera or a digital still camera.

More information

Quantification of glottal and voiced speech harmonicsto-noise ratios using cepstral-based estimation

Quantification of glottal and voiced speech harmonicsto-noise ratios using cepstral-based estimation Quantification of glottal and voiced speech harmonicsto-noise ratios using cepstral-based estimation Peter J. Murphy and Olatunji O. Akande, Department of Electronic and Computer Engineering University

More information

Computer Vision Slides curtesy of Professor Gregory Dudek

Computer Vision Slides curtesy of Professor Gregory Dudek Computer Vision Slides curtesy of Professor Gregory Dudek Ioannis Rekleitis Why vision? Passive (emits nothing). Discreet. Energy efficient. Intuitive. Powerful (works well for us, right?) Long and short

More information

Non Linear Image Enhancement

Non Linear Image Enhancement Non Linear Image Enhancement SAIYAM TAKKAR Jaypee University of information technology, 2013 SIMANDEEP SINGH Jaypee University of information technology, 2013 Abstract An image enhancement algorithm based

More information

Coded photography , , Computational Photography Fall 2017, Lecture 18

Coded photography , , Computational Photography Fall 2017, Lecture 18 Coded photography http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 18 Course announcements Homework 5 delayed for Tuesday. - You will need cameras

More information

Computer Vision. Howie Choset Introduction to Robotics

Computer Vision. Howie Choset   Introduction to Robotics Computer Vision Howie Choset http://www.cs.cmu.edu.edu/~choset Introduction to Robotics http://generalrobotics.org What is vision? What is computer vision? Edge Detection Edge Detection Interest points

More information

A DEVELOPED UNSHARP MASKING METHOD FOR IMAGES CONTRAST ENHANCEMENT

A DEVELOPED UNSHARP MASKING METHOD FOR IMAGES CONTRAST ENHANCEMENT 2011 8th International Multi-Conference on Systems, Signals & Devices A DEVELOPED UNSHARP MASKING METHOD FOR IMAGES CONTRAST ENHANCEMENT Ahmed Zaafouri, Mounir Sayadi and Farhat Fnaiech SICISI Unit, ESSTT,

More information

Chapter 2 Fourier Integral Representation of an Optical Image

Chapter 2 Fourier Integral Representation of an Optical Image Chapter 2 Fourier Integral Representation of an Optical This chapter describes optical transfer functions. The concepts of linearity and shift invariance were introduced in Chapter 1. This chapter continues

More information

Determining MTF with a Slant Edge Target ABSTRACT AND INTRODUCTION

Determining MTF with a Slant Edge Target ABSTRACT AND INTRODUCTION Determining MTF with a Slant Edge Target Douglas A. Kerr Issue 2 October 13, 2010 ABSTRACT AND INTRODUCTION The modulation transfer function (MTF) of a photographic lens tells us how effectively the lens

More information

Restoration of Motion Blurred Document Images

Restoration of Motion Blurred Document Images Restoration of Motion Blurred Document Images Bolan Su 12, Shijian Lu 2 and Tan Chew Lim 1 1 Department of Computer Science,School of Computing,National University of Singapore Computing 1, 13 Computing

More information

Defocus Map Estimation from a Single Image

Defocus Map Estimation from a Single Image Defocus Map Estimation from a Single Image Shaojie Zhuo Terence Sim School of Computing, National University of Singapore, Computing 1, 13 Computing Drive, Singapore 117417, SINGAPOUR Abstract In this

More information

ECC419 IMAGE PROCESSING

ECC419 IMAGE PROCESSING ECC419 IMAGE PROCESSING INTRODUCTION Image Processing Image processing is a subclass of signal processing concerned specifically with pictures. Digital Image Processing, process digital images by means

More information

Coded Aperture for Projector and Camera for Robust 3D measurement

Coded Aperture for Projector and Camera for Robust 3D measurement Coded Aperture for Projector and Camera for Robust 3D measurement Yuuki Horita Yuuki Matugano Hiroki Morinaga Hiroshi Kawasaki Satoshi Ono Makoto Kimura Yasuo Takane Abstract General active 3D measurement

More information

Toward Non-stationary Blind Image Deblurring: Models and Techniques

Toward Non-stationary Blind Image Deblurring: Models and Techniques Toward Non-stationary Blind Image Deblurring: Models and Techniques Ji, Hui Department of Mathematics National University of Singapore NUS, 30-May-2017 Outline of the talk Non-stationary Image blurring

More information

Computational approach for depth from defocus

Computational approach for depth from defocus Journal of Electronic Imaging 14(2), 023021 (Apr Jun 2005) Computational approach for depth from defocus Ovidiu Ghita* Paul F. Whelan John Mallon Vision Systems Laboratory School of Electronic Engineering

More information

Extended depth-of-field in Integral Imaging by depth-dependent deconvolution

Extended depth-of-field in Integral Imaging by depth-dependent deconvolution Extended depth-of-field in Integral Imaging by depth-dependent deconvolution H. Navarro* 1, G. Saavedra 1, M. Martinez-Corral 1, M. Sjöström 2, R. Olsson 2, 1 Dept. of Optics, Univ. of Valencia, E-46100,

More information

An Efficient Noise Removing Technique Using Mdbut Filter in Images

An Efficient Noise Removing Technique Using Mdbut Filter in Images IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p- ISSN: 2278-8735.Volume 10, Issue 3, Ver. II (May - Jun.2015), PP 49-56 www.iosrjournals.org An Efficient Noise

More information

Declaration. Michal Šorel March 2007

Declaration. Michal Šorel March 2007 Charles University in Prague Faculty of Mathematics and Physics Multichannel Blind Restoration of Images with Space-Variant Degradations Ph.D. Thesis Michal Šorel March 2007 Department of Software Engineering

More information

BASIC OPERATIONS IN IMAGE PROCESSING USING MATLAB

BASIC OPERATIONS IN IMAGE PROCESSING USING MATLAB BASIC OPERATIONS IN IMAGE PROCESSING USING MATLAB Er.Amritpal Kaur 1,Nirajpal Kaur 2 1,2 Assistant Professor,Guru Nanak Dev University, Regional Campus, Gurdaspur Abstract: - This paper aims at basic image

More information

SURVEILLANCE SYSTEMS WITH AUTOMATIC RESTORATION OF LINEAR MOTION AND OUT-OF-FOCUS BLURRED IMAGES. Received August 2008; accepted October 2008

SURVEILLANCE SYSTEMS WITH AUTOMATIC RESTORATION OF LINEAR MOTION AND OUT-OF-FOCUS BLURRED IMAGES. Received August 2008; accepted October 2008 ICIC Express Letters ICIC International c 2008 ISSN 1881-803X Volume 2, Number 4, December 2008 pp. 409 414 SURVEILLANCE SYSTEMS WITH AUTOMATIC RESTORATION OF LINEAR MOTION AND OUT-OF-FOCUS BLURRED IMAGES

More information

Fourier Transform. Any signal can be expressed as a linear combination of a bunch of sine gratings of different frequency Amplitude Phase

Fourier Transform. Any signal can be expressed as a linear combination of a bunch of sine gratings of different frequency Amplitude Phase Fourier Transform Fourier Transform Any signal can be expressed as a linear combination of a bunch of sine gratings of different frequency Amplitude Phase 2 1 3 3 3 1 sin 3 3 1 3 sin 3 1 sin 5 5 1 3 sin

More information

Chapter 4 SPEECH ENHANCEMENT

Chapter 4 SPEECH ENHANCEMENT 44 Chapter 4 SPEECH ENHANCEMENT 4.1 INTRODUCTION: Enhancement is defined as improvement in the value or Quality of something. Speech enhancement is defined as the improvement in intelligibility and/or

More information

Image De-Noising Using a Fast Non-Local Averaging Algorithm

Image De-Noising Using a Fast Non-Local Averaging Algorithm Image De-Noising Using a Fast Non-Local Averaging Algorithm RADU CIPRIAN BILCU 1, MARKKU VEHVILAINEN 2 1,2 Multimedia Technologies Laboratory, Nokia Research Center Visiokatu 1, FIN-33720, Tampere FINLAND

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 22: Computational photography photomatix.com Announcements Final project midterm reports due on Tuesday to CMS by 11:59pm BRDF s can be incredibly complicated

More information

Fourier transforms, SIM

Fourier transforms, SIM Fourier transforms, SIM Last class More STED Minflux Fourier transforms This class More FTs 2D FTs SIM 1 Intensity.5 -.5 FT -1.5 1 1.5 2 2.5 3 3.5 4 4.5 5 6 Time (s) IFT 4 2 5 1 15 Frequency (Hz) ff tt

More information

EEL 6562 Image Processing and Computer Vision Image Restoration

EEL 6562 Image Processing and Computer Vision Image Restoration DEPARTMENT OF ELECTRICAL & COMPUTER ENGINEERING EEL 6562 Image Processing and Computer Vision Image Restoration Rajesh Pydipati Introduction Image Processing is defined as the analysis, manipulation, storage,

More information

Image Enhancement in spatial domain. Digital Image Processing GW Chapter 3 from Section (pag 110) Part 2: Filtering in spatial domain

Image Enhancement in spatial domain. Digital Image Processing GW Chapter 3 from Section (pag 110) Part 2: Filtering in spatial domain Image Enhancement in spatial domain Digital Image Processing GW Chapter 3 from Section 3.4.1 (pag 110) Part 2: Filtering in spatial domain Mask mode radiography Image subtraction in medical imaging 2 Range

More information

Efficient Color Object Segmentation Using the Dichromatic Reflection Model

Efficient Color Object Segmentation Using the Dichromatic Reflection Model Efficient Color Object Segmentation Using the Dichromatic Reflection Model Vladimir Kravtchenko, James J. Little The University of British Columbia Department of Computer Science 201-2366 Main Mall, Vancouver

More information

Image and Video Processing

Image and Video Processing Image and Video Processing () Image Representation Dr. Miles Hansard miles.hansard@qmul.ac.uk Segmentation 2 Today s agenda Digital image representation Sampling Quantization Sub-sampling Pixel interpolation

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part : Image Enhancement in the Spatial Domain AASS Learning Systems Lab, Dep. Teknik Room T9 (Fr, - o'clock) achim.lilienthal@oru.se Course Book Chapter 3-4- Contents. Image Enhancement

More information

Edge-Raggedness Evaluation Using Slanted-Edge Analysis

Edge-Raggedness Evaluation Using Slanted-Edge Analysis Edge-Raggedness Evaluation Using Slanted-Edge Analysis Peter D. Burns Eastman Kodak Company, Rochester, NY USA 14650-1925 ABSTRACT The standard ISO 12233 method for the measurement of spatial frequency

More information

Photo-Consistent Motion Blur Modeling for Realistic Image Synthesis

Photo-Consistent Motion Blur Modeling for Realistic Image Synthesis Photo-Consistent Motion Blur Modeling for Realistic Image Synthesis Huei-Yung Lin and Chia-Hong Chang Department of Electrical Engineering, National Chung Cheng University, 168 University Rd., Min-Hsiung

More information

Advanced Digital Signal Processing Part 2: Digital Processing of Continuous-Time Signals

Advanced Digital Signal Processing Part 2: Digital Processing of Continuous-Time Signals Advanced Digital Signal Processing Part 2: Digital Processing of Continuous-Time Signals Gerhard Schmidt Christian-Albrechts-Universität zu Kiel Faculty of Engineering Institute of Electrical Engineering

More information

PROCEEDINGS OF SPIE. Measurement of low-order aberrations with an autostigmatic microscope

PROCEEDINGS OF SPIE. Measurement of low-order aberrations with an autostigmatic microscope PROCEEDINGS OF SPIE SPIEDigitalLibrary.org/conference-proceedings-of-spie Measurement of low-order aberrations with an autostigmatic microscope William P. Kuhn Measurement of low-order aberrations with

More information

Stochastic Image Denoising using Minimum Mean Squared Error (Wiener) Filtering

Stochastic Image Denoising using Minimum Mean Squared Error (Wiener) Filtering Stochastic Image Denoising using Minimum Mean Squared Error (Wiener) Filtering L. Sahawneh, B. Carroll, Electrical and Computer Engineering, ECEN 670 Project, BYU Abstract Digital images and video used

More information

Enhanced Shape Recovery with Shuttered Pulses of Light

Enhanced Shape Recovery with Shuttered Pulses of Light Enhanced Shape Recovery with Shuttered Pulses of Light James Davis Hector Gonzalez-Banos Honda Research Institute Mountain View, CA 944 USA Abstract Computer vision researchers have long sought video rate

More information

Acquisition Basics. How can we measure material properties? Goal of this Section. Special Purpose Tools. General Purpose Tools

Acquisition Basics. How can we measure material properties? Goal of this Section. Special Purpose Tools. General Purpose Tools Course 10 Realistic Materials in Computer Graphics Acquisition Basics MPI Informatik (moving to the University of Washington Goal of this Section practical, hands-on description of acquisition basics general

More information

Method for out-of-focus camera calibration

Method for out-of-focus camera calibration 2346 Vol. 55, No. 9 / March 20 2016 / Applied Optics Research Article Method for out-of-focus camera calibration TYLER BELL, 1 JING XU, 2 AND SONG ZHANG 1, * 1 School of Mechanical Engineering, Purdue

More information

2D Discrete Fourier Transform

2D Discrete Fourier Transform 2D Discrete Fourier Transform In these lecture notes the figures have been removed for copyright reasons. References to figures are given instead, please check the figures yourself as given in the course

More information

Measurement of Texture Loss for JPEG 2000 Compression Peter D. Burns and Don Williams* Burns Digital Imaging and *Image Science Associates

Measurement of Texture Loss for JPEG 2000 Compression Peter D. Burns and Don Williams* Burns Digital Imaging and *Image Science Associates Copyright SPIE Measurement of Texture Loss for JPEG Compression Peter D. Burns and Don Williams* Burns Digital Imaging and *Image Science Associates ABSTRACT The capture and retention of image detail are

More information

MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS

MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS INFOTEH-JAHORINA Vol. 10, Ref. E-VI-11, p. 892-896, March 2011. MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS Jelena Cvetković, Aleksej Makarov, Sasa Vujić, Vlatacom d.o.o. Beograd Abstract -

More information