Single Image Blind Deconvolution with Higher-Order Texture Statistics


Manuel Martinello and Paolo Favaro
Heriot-Watt University, School of EPS, Edinburgh EH14 4AS, UK

Abstract. We present a novel method for solving blind deconvolution, i.e., the task of recovering a sharp image given a blurry one. We focus on blurry images obtained from a coded aperture camera, where both the camera and the scene are static, and we allow blur to vary across the image domain. As with most methods for blind deconvolution, we solve the problem in two steps: first, we estimate the coded blur scale at each pixel; second, we deconvolve the blurry image given the estimated blur. Our approach is to use linear high-order priors for texture and second-order priors for the blur scale map, i.e., constraints involving two pixels at a time. We show that by incorporating the texture priors in a least-squares energy minimization we can transform the initial blind deconvolution task into a simpler optimization problem. One of the striking features of the simplified problem is that the parameters defining the functional can be learned offline directly from natural images via singular value decomposition. We also give a geometrical interpretation of image blurring and explain our method from this viewpoint. In doing so, we devise a novel technique to design optimally coded apertures. Finally, our coded blur identification amounts to computing convolutions, rather than deconvolutions, which are stable operations. We demonstrate in several experiments that this additional stability allows the method to deal with large blur. We also compare our method to existing algorithms in the literature and show that we achieve state-of-the-art performance with both synthetic and real data.

Keywords: coded aperture, single image, image deblurring, depth estimation.

1 Introduction

Recently there has been enormous progress in image deblurring from a single image.
Perhaps one of the most remarkable results is the demonstration that it is possible to extend the depth of field of a camera by modifying the camera's optical response [1-7]. Moreover, techniques based on applying a mask at the lens aperture have demonstrated the ability to recover a coarse depth of the

This research was partly supported by SELEX Galileo grant SELEX/HWU/2010/SOW3.

scene [4, 5, 8]. Depth has then been used for digital refocusing [9] and advanced image editing.

Fig. 1. Results on an outdoor scene [exposure time 1/200 s]. (a) Blurry coded image captured with mask (b) (see Fig. 4). (b) Sharp image reconstructed with our method.

In this paper we present a novel method for image deblurring and demonstrate it on blurred images obtained from a coded aperture camera. Our algorithm takes as input a single blurred image (see Fig. 1(a)) and automatically returns the corresponding sharp one (see Fig. 1(b)). Our main contribution is a computationally efficient method that achieves state-of-the-art performance in terms of depth and image reconstruction with coded aperture cameras. We demonstrate experimentally that our algorithm can deal with larger amounts of blur than previous coded aperture methods.

One of the leading approaches in the literature [5] recovers a sharp image by sequentially testing a deconvolution method for several given hypotheses for the blur scale. The blur scale that yields a sharp image consistent with both the model and the texture priors is then chosen. In contrast, we show that one can identify the blur scale by computing convolutions, rather than deconvolutions, of the blurry image with a finite set of filters. As a consequence, our method is numerically stable, especially when dealing with large blur scales.

In the next sections we present all the steps needed to define our algorithm for image deblurring. The task is split into two steps: first the blur scale is identified; second, the coded image is deblurred with the estimated blur scale. We present an algorithm for blur scale identification in section 3.1. Image deblurring is then solved iteratively in section 3.2. A discussion on mask selection is then presented in section 4.1.
Comparisons to existing methods are shown in section 5.

1.1 Prior Work

This work relates to several fields ranging from computer vision to image and signal processing, and from optics to astronomy and computer graphics. For simplicity, we group past work based on the technique employed.

Coded Imaging: Early work in coded imaging appears in the field of astronomy. One of the most interesting pattern designs is the Modified Uniformly Redundant Arrays (MURA) [10], for which a simple coding and decoding procedure was devised (see one such pattern in Fig. 4). In our tests the MURA pattern seems very well behaved, but too sensitive to noise (see Fig. 5). Coded patterns have also been used to design lensless systems, but these systems require either long exposures or are sensitive to noise [11]. More recently, coding of the exposure [12] or of the aperture [4] has been used to preserve high spatial frequencies in blurred images so that deblurring is well-posed. We test the mask proposed in [4] and find that it works well for image deblurring, but not for blur scale identification. A mask that we have tested and that has yielded good performance is the four-hole mask of Hiura and Matsuyama [13]; in [13], however, the authors used multiple images. A study on good apertures for deblurring multiple coded images via Wiener filtering has instead led to two novel designs [14, 15]. Although the masks were designed to be used together, we have tested each of them independently for comparison purposes. We found, as predicted by the authors, that the masks are quite robust to noise and quite well designed for image deblurring. Image deblurring and depth estimation with a coded aperture camera has also been demonstrated by Levin et al. [5]. One of their main contributions is the design of an optimal mask. We indeed find this mask quite effective on both synthetic and real data. However, as already noticed in [16], we have found that the coded aperture technique, if approached as in [5], fails when dealing with large blur amounts. The method we propose in this paper, instead, overcomes this limitation, especially when using the four-hole mask.
Finally, a design based on annular masks has also been proposed in [17] and has been exploited for depth estimation in [3]. We also tested this mask in our experiments, but, contrary to our expectations, we did not find its performance superior to the other masks.

3D Point Spread Functions: While there are several techniques to extract depth from images, we briefly mention some recent work by Greengard et al. [18] because their optical design includes and exploits diffraction effects. They investigated 3D point spread functions (PSF) whose transverse cross sections rotate as a result of diffraction, and showed that such PSFs yield an order of magnitude increase in sensitivity with respect to depth variations. The main drawback, however, is that the depth range and resolution are limited by the angular resolution of the reconstructed PSF.

Depth-Invariant Blur: An alternative approach to coded imaging is wavefront coding. The key idea is to use aspheric lenses to render the lens point spread function (PSF) depth-invariant. Then, shift-invariant deblurring with a fixed known blur can be applied to sharpen the image [19, 20]. However, while the results are quite promising, the PSF is not fully depth-invariant and artifacts are still present in the reconstructed image. Other techniques based on depth-invariant PSFs exploit the chromatic aberrations of lenses [7] or use diffusion [21]. However, in the first case, as the focal sweep is across the spectrum, the method is mostly designed for grayscale imaging. While the results shown in

these recent works are stunning, there are two inherent limitations: 1) depth is lost in the imaging process; 2) in general, as methods based on focal sweep are not exactly depth-invariant, the deblurring performance decays for objects that are too close to or too far away from the camera.

Multiple Viewpoints: The extension of the depth of field can also be achieved by using multiple images and/or multiple viewpoints. One technique is to obtain multiple viewpoints by capturing multiple coded images [8, 13, 22] or by capturing a single image with a plenoptic camera [9, 6, 23, 24]. These methods, however, exploit multiple images or require a more costly optical design (e.g., a calibrated microlens array).

Motion Deblurring and Blind Deconvolution: This work also relates to work in blind deconvolution, and in particular to motion deblurring. There has been quite steady progress in uniform motion deblurring [25-29] thanks to the modeling and exploitation of texture statistics. Although these methods deal with an unknown and general blur pattern, they assume that blur does not change across the image domain. More recently, the space-varying case has been studied [30-32], albeit with some restrictions on the type of motion or the scene depth structure.

Blurred Face Recognition: Work on the recognition of blurred faces [33] is also related to our method. Their approach extracts features from motion-blurred images of faces and then uses the subspace distance to identify the blur. In contrast, our method can be applied to space-varying blur, and our analysis provides a novel method to evaluate (and design) masks.

2 Single Image Blind Deconvolution

Blind deconvolution from a single image is a very challenging problem: we need to recover more unknowns than the available observations. This challenge will be illustrated in the next section, where we present the image formation model of a blurred image obtained from a coded aperture camera.
To make the problem feasible and well-behaved, one can introduce additional constraints on the solution. In particular, we constrain the higher-order statistics of sharp texture (sec. 2.2) and impose that the blur scale be piecewise smooth across the image pixels (sec. 2.3).

2.1 Image Model

In the simplest instance, a blurred image of a plane facing the camera can be described via the convolution of a sharp image with the blur kernel. However, the convolutional model breaks down with more general surfaces and, in particular, at occlusion boundaries. In this case, one can describe a blurred image with a linear model. For the sake of notational simplicity, we write images as column vectors, where all pixels are sorted in lexicographical order. Thus, a blurred image with N pixels is a column vector g ∈ R^N. Similarly, a sharp image with

M pixels is a column vector f ∈ R^M. Then, g satisfies

  g = H_d f,    (1)

where the N × M matrix H_d represents the coded blur, and d is a column vector with M entries that collects the blur scale corresponding to each pixel of f. The i-th column of H_d is an image, rearranged as a vector, of the coded blur with scale d_i generated by the i-th pixel of f. Notice that this model is indeed a generalization of the convolutional case: in the convolutional model, H_d reduces to a Toeplitz matrix.

Our task is to recover the unknown sharp image f given the blurred image g. To achieve this goal it is necessary to recover the blur scale d at each pixel. The theory of linear algebra tells us that if N = M, the equations in eq. (1) are linearly independent, and we are given both g and H_d, then we can recover the sharp image f. However, in our case we are not given the matrix H_d, and the blurred image g is affected by noise. This introduces two challenges: first, to obtain H_d we need to retrieve the blur scale d; second, because of noise in g and the ill-conditioning of the linear system in eq. (1), the estimation of f might be unstable. The first challenge implies that we do not have a unique solution. The second challenge implies that even if the solution were unique, its estimation would not be reliable. However, not all is lost: it is possible to add more equations to eq. (1) until a unique, reliable solution can be obtained. This technique is based on observing that, typically, one expects the unknown sharp image and blur scale map to have some regularity. For instance, neither sharp textures nor blur scale maps are likely to look like noise. In the next two sections we present and illustrate our sharp image and blur scale priors.
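The linear model of eq. (1) can be sketched for a 1-D signal as follows. This is an illustration only: the function name is ours, and a normalized box PSF stands in for a true coded-aperture PSF.

```python
import numpy as np

def blur_matrix(d):
    """Build the blur matrix H_d of eq. (1) for a 1-D signal of len(d) pixels.
    Column i holds the (box) blur of scale d[i] produced by the i-th pixel
    of the sharp signal f; a coded aperture would change the column profile."""
    M = len(d)
    H = np.zeros((M, M))
    for i in range(M):
        r = int(d[i])                      # blur radius at pixel i
        lo, hi = max(0, i - r), min(M, i + r + 1)
        H[lo:hi, i] = 1.0 / (hi - lo)      # normalized PSF as column i
    return H

# Space-varying blur: the left half is blurred less than the right half.
d = np.array([1] * 8 + [3] * 8)
f = np.random.rand(16)                     # sharp signal
g = blur_matrix(d) @ f                     # blurred observation, eq. (1)
```

With a constant d the matrix becomes Toeplitz, recovering the convolutional special case mentioned above.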
2.2 Sharp Image Prior

Images of the real world exhibit statistical regularities that have been studied intensively over the past 20 years and have been linked to the human visual system and its evolution [34]. For the purpose of image deblurring, the most important outcome of these studies is that natural images form a much smaller subset of all possible images. In general, the statistical properties of natural images are characterized by applying a given transform, typically related to a component of human vision. Among the most common statistics used in image processing are second-order statistics, i.e., relations between pairs of pixels. For instance, this category includes the distributions of image gradients [35, 36]. However, a more accurate account of the image structure can be captured with high-order statistics, i.e., relations between several pixels. In this work, we consider this general case, but restrict the relations to linear ones of the form

  Σ f ≈ 0    (2)

where Σ is a rectangular matrix. Eq. (2) implies that all sharp images live approximately on a subspace. Despite their crude simplicity, these linear constraints allow for some flexibility. For example, the case of second-order statistics

results in rows of Σ with only two nonzero values. Also, by designing Σ one can selectively apply the constraints only to some of the pixels. Another example is to choose each row of Σ as a Haar feature applied to some pixels. Notice that in our approach we do not make any of these choices; rather, we estimate Σ directly from natural images.

Natural image statistics, such as gradients, typically exhibit a peaked distribution. However, performing inference on such distributions results in the minimization of non-convex functionals for which we do not have provably optimal algorithms. Furthermore, we are interested in simplifying the optimization task as much as possible to gain in computational efficiency. This has led us to enforce the linear relation above by minimizing the convex cost

  ||Σ f||_2^2.    (3)

As we do not have an analytical expression for Σ that satisfies eq. (2), we need to learn it directly from data. We will see later that this step is necessary only when performing the deconvolution step given the estimated blur. Instead, when estimating the blur scale, our method allows us to use Σ implicitly, i.e., without ever recovering it.

2.3 Blur Scale Prior

The statistics of range images can be characterized with an approach similar to that for optical images [37]. The study in [37] verified the random collage model, i.e., that a scene is a collection of piecewise constant surfaces. This has been observed in the distributions of Haar filter responses on the logarithm of the range data, which show strong cusps in the isoprobability contours. Unfortunately, a prior following these distributions faithfully would result in non-convex energy minimization. A practical convex alternative that enforces the piecewise constant model is total variation [38]. Common choices are the isotropic and the anisotropic total variation; in our algorithm we have implemented the latter.
We minimize ||∇d||_1, i.e., the sum of the absolute values of the components of the gradient of d.

3 Blur Scale Identification and Image Deblurring

We can combine the image model introduced in sec. 2.1 with the priors in secs. 2.2 and 2.3 and formulate the following energy minimization problem:

  (d̂, f̂) = argmin_{d,f} ||g − H_d f||_2^2 + α ||Σ f||_2^2 + β ||∇d||_1,    (4)

where the parameters α, β > 0 determine the amount of regularization for texture and blur scale, respectively. Notice that the formulation above is common to many approaches, including, in particular, [5]. Our approach, however, in addition to using a more accurate blur matrix H_d, considers different priors and a different depth identification procedure.
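For concreteness, the cost in eq. (4) can be evaluated as follows. This is a minimal sketch with hypothetical helper names: Sigma stands for any prior matrix, and the anisotropic total variation is taken over the 2-D blur scale map.

```python
import numpy as np

def aniso_tv(d):
    # Anisotropic total variation ||grad d||_1: sum of absolute horizontal
    # and vertical differences of the blur scale map d.
    return np.abs(np.diff(d, axis=0)).sum() + np.abs(np.diff(d, axis=1)).sum()

def energy(g, H, f, d, Sigma, alpha, beta):
    # The cost of eq. (4): data fidelity + texture prior + blur-scale prior.
    data = np.sum((g - H @ f) ** 2)
    texture = alpha * np.sum((Sigma @ f) ** 2)
    return data + texture + beta * aniso_tv(d)
```

A piecewise constant blur scale map incurs a TV penalty only along the boundaries between regions, which is exactly the behavior the random collage model of sec. 2.3 calls for.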

Our next step is to notice that, given d, the proposed cost is simply a least-squares problem in the unknown sharp texture f. Hence, it is possible to compute f in closed form and plug it back into the cost functional. The result is a much simpler problem to solve. We summarize all the steps in the following theorem:

Theorem 1. The set of extrema of the minimization (4) coincides with the set of extrema of the minimization

  d̂ = argmin_d ||H_d^⊥ g||_2^2 + β ||∇d||_1    (5)
  f̂ = (α Σ^T Σ + H_d̂^T H_d̂)^{-1} H_d̂^T g

where H_d^⊥ ≐ I − H_d (α Σ^T Σ + H_d^T H_d)^{-1} H_d^T, and I is the identity matrix.

Proof. See Appendix.

Notice that the new formulation requires the definition of a square and symmetric matrix H_d^⊥. This matrix depends on the parameter α and the prior matrix Σ, both of which are unknown. However, for the purpose of estimating the unknown blur scale map d, it is possible to bypass the estimation of α and Σ by learning the matrix H_d^⊥ directly from data.

3.1 Learning Procedure and Blur Scale Identification

We break down the complexity of solving eq. (5) by using local blur uniformity, i.e., by assuming that blur is constant within a small region of pixels. Then, we further simplify the problem by considering only a finite set of L blur sizes d_1, ..., d_L. In practice, we find that both assumptions work well. The local blur uniformity holds reasonably well except at occluding boundaries, which form a small subset of the image domain; at occluding boundaries the solution tends to favor small blur estimates. We also found experimentally that the discretization is not a limiting factor in our method: the number of blur sizes L can be set to a value that matches the desired level of accuracy without reaching a prohibitive computational load. Now, by combining the assumptions we find that eq.
(5) at one pixel x can be approximated by

  d̂(x) = argmin_{d(x)} ||H_{d(x)}^⊥ g||_2^2 + β ||∇d(x)||_1    (6)
  d̂(x) = argmin_{d(x)} ||H_{d(x)}^⊥ g_x||_2^2    (7)

where g_x is a column vector of δ^2 pixels extracted from a δ × δ patch centered at the pixel x of g. Experimentally, we find that the size δ of the patch should not be smaller than the maximum scale of the coded blur in the captured image g. H_{d(x)}^⊥ is a δ^2 × δ^2 matrix that depends on the blur size d(x) ∈ {d_1, ..., d_L}.

So we assume that H_d^⊥(x, y) ≈ 0 for y such that ||y − x||_1 > δ/2. Notice that the term β ||∇d||_1 drops because of the local blur uniformity assumption.

The next step is to explicitly compute H_{d(x)}^⊥. Since the blur size d(x) is one of L values, we only need to compute the matrices H_{d_1}^⊥, ..., H_{d_L}^⊥. As each H_{d_i}^⊥ depends on α and the local Σ, we propose to learn each H_{d_i}^⊥ directly from data. Suppose that we are given a set of T column vectors g_{x_1}, ..., g_{x_T} extracted from blurry images of a plane parallel to the camera image plane; the column vectors then all share the same blur scale d_i. Hence, we can rewrite the cost functional in eq. (7) for all x as

  ||H_{d_i}^⊥ G_i||_2^2    (8)

where G_i ≐ [g_{x_1} ... g_{x_T}]. By definition of G_i, we require ||H_{d_i}^⊥ G_i||_2^2 = 0. Hence, we find that H_{d_i}^⊥ can be computed via the singular value decomposition of G_i = U_i S_i V_i^T. If U_i = [U_{d_i} Q_{d_i}], where Q_{d_i} corresponds to the singular values of S_i that are zero (or negligible), then H_{d_i}^⊥ = Q_{d_i} Q_{d_i}^T. The procedure is then repeated for each blur scale d_i with i = 1, ..., L. Next, we can use the estimated matrices H_{d_1}^⊥, ..., H_{d_L}^⊥ on a new image g and optimize with respect to d:

  d̂ = argmin_d ∑_x ||H_{d(x)}^⊥ g_x||_2^2 + β ||∇d(x)||_1.    (9)

The first term comprises unary terms, i.e., terms defined on single pixels; the second term comprises binary terms, i.e., terms defined on pairs of pixels. The minimization problem (9) can then be solved efficiently via graph cuts [39]. Notice that the procedure above can be applied to other surfaces as well: instead of a collection of parallel planes one can consider, for example, a collection of quadratic surfaces. Also, notice that there are no restrictions on the size of a patch; in particular, the same procedure can be applied to a patch the size of the input image. In our experiments for depth estimation, however, we consider only small patches and parallel planes as local surfaces.
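The learning step of eq. (8) and the patch-wise test of eq. (7) can be sketched as follows. This is a minimal illustration: function names are ours, and the graph-cut smoothing of eq. (9) is omitted.

```python
import numpy as np

def learn_Hperp(G, tol=1e-8):
    """Offline step, eq. (8): columns of G are training patches sharing one
    blur scale d_i. The SVD splits patch space into the signal subspace U_di
    and its orthogonal complement Q_di; return Hperp_di = Q_di Q_di^T."""
    U, S, _ = np.linalg.svd(G, full_matrices=True)
    rank = int(np.sum(S > tol * S[0]))   # non-negligible singular values
    Q = U[:, rank:]                      # directions this blur never excites
    return Q @ Q.T

def identify_blur(gx, Hperps):
    # Online step, eq. (7): choose the scale whose matrix best annihilates
    # the observed patch gx (smallest residual norm).
    return int(np.argmin([np.sum((H @ gx) ** 2) for H in Hperps]))
</```

In the full method the per-patch residuals above become the unary costs of eq. (9), with the graph cut enforcing the piecewise constant blur scale prior on top.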
3.2 Image Deblurring

In the previous section we devised a procedure to compute the blur scale d at each pixel. In this section we assume that d is given and devise a procedure to compute the image f. In principle, one could use the closed-form solution

  f̂ = (α Σ^T Σ + H_d̂^T H_d̂)^{-1} H_d̂^T g.    (10)

However, computing this equation entails solving a large matrix inversion, which is impractical even for moderate image dimensions. A simpler approach is to solve the least-squares problem (4) in f via an iterative method. Therefore, we consider solving the problem

  f̂ = argmin_f ||g − H_d̂ f||_2^2 + α ||Σ f||_2^2    (11)

by using a least-squares conjugate gradient descent algorithm in f [40]. The main component of the iteration in f is the gradient ∇E_f of the cost (11) with respect to f:

  ∇E_f = (α Σ^T Σ + H_d̂^T H_d̂) f − H_d̂^T g.    (12)

The descent algorithm iterates until ∇E_f ≈ 0. Because of the convexity of the cost functional with respect to f, the solution is also a global minimum.

To compute Σ we use a database of sharp images F = [f_1 ... f_T], where {f_i}_{i=1,...,T} are sharp images rearranged as column vectors, and compute the singular value decomposition F = U_F Σ_F V_F^T. Then, we partition U_F = [U_{F,1} U_{F,2}] such that U_{F,2} corresponds to the smallest singular values of Σ_F. The high-order prior is defined as Σ ≐ U_{F,2} U_{F,2}^T, so that we have Σ f_i ≈ 0. The regularization parameter α is instead tuned manually. The matrix H_d̂ is computed as described in section 2.1.

4 A Geometric Viewpoint on Blur Scale Identification

In the previous sections we have seen that the blur scale at each pixel can be obtained by minimizing eq. (9): we search among the matrices H_{d_1}^⊥, ..., H_{d_L}^⊥ for the one that yields the minimum l2 norm when applied to the vector g_x. This has a geometrical interpretation: each matrix H_{d_i}^⊥ defines a subspace, and ||H_{d_i}^⊥ g_x||_2^2 is the distance of the vector g_x from that subspace. Recall that H_{d_i}^⊥ = Q_{d_i} Q_{d_i}^T and that U_i = [U_{d_i} Q_{d_i}] is an orthonormal matrix. Then, we obtain

  ||H_{d_i}^⊥ g_x||_2^2 = ||Q_{d_i} Q_{d_i}^T g_x||_2^2 = ||Q_{d_i}^T g_x||_2^2 = ||g_x||_2^2 − ||U_{d_i}^T g_x||_2^2.

If we now divide by the scalar ||g_x||_2^2, we obtain exactly the square of the subspace distance [41]

  M(g, U_{d_i}) = sqrt( 1 − ∑_{j=1}^K ( U_{d_i,j}^T g / ||g|| )^2 )    (13)

where K is the rank of the subspace U_{d_i}, U_{d_i} = [U_{d_i,1} ... U_{d_i,K}], and the U_{d_i,j}, j = 1, ..., K, are orthonormal vectors.

The geometrical interpretation brings a fresh look to image blurring and deblurring. Consider the image model (1).
Let us take the singular value decomposition of the blur matrix H_d:

  H_d = U_d S_d V_d^T    (14)

where S_d is a diagonal matrix with positive entries, and both U_d and V_d are orthonormal matrices. Formally, the vector f undergoes a rotation (V_d^T), then a scaling (S_d), and then another rotation (U_d). This means that if f lives in a subspace, the initial subspace is mapped to another rotated subspace, possibly of smaller dimension (see Fig. 2, middle). Notice that as we change the blur scale, the rotations and scaling also change and may result in yet a different subspace (see Fig. 2, right).
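The decomposition in eq. (14) can be checked numerically. Here is a small sketch in which a plain Toeplitz (convolutional) blur matrix stands in for a coded one; a coded aperture would simply change the band profile of H.

```python
import numpy as np

# Numerical check of eq. (14): a blur matrix factors as H_d = U_d S_d V_d^T,
# i.e., a rotation, then a scaling, then another rotation.
n = 32
# Toeplitz blur matrix for a 5-tap box kernel (the convolutional case).
H = sum(np.eye(n, k=k) for k in range(-2, 3)) / 5.0

U, S, Vt = np.linalg.svd(H)
# U and V are orthonormal (rotations); S holds the non-negative scalings.
rotations_ok = np.allclose(U @ U.T, np.eye(n)) and np.allclose(Vt @ Vt.T, np.eye(n))
reconstruction_ok = np.allclose(U @ np.diag(S) @ Vt, H)
```

Singular values of S near zero are the directions of f that blurring destroys, which is exactly the texture loss discussed in section 4.1.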

Fig. 2. Coded image subspaces. (a) Image patches on a subspace. (b) Subspace containing images blurred with H_{d_1}; blurring has the effect of rotating and possibly reducing the dimensionality of the original subspace. (c) Subspace containing images blurred with H_{d_2}.

It is important to understand that rotations of the vector f can result in blurring. To clarify this, consider blurred and sharp images with only 3 pixels (we cannot visualize the case of more than 3 pixels), i.e., g_1 = [g_{1,x} g_{1,y} g_{1,z}]^T and f_1 = [f_{1,x} f_{1,y} f_{1,z}]^T. Then, we can plot the vectors g_1 and f_1 as 3D points (see Fig. 2). Let ||g_1|| = 1 and ||f_1|| = 1. Then, we can rotate f_1 about the origin and overlap it exactly onto g_1; in this case the rotation corresponds to blurring. The opposite is also true: we can rotate the vector g_1 onto the vector f_1 and thus perform deblurring. Furthermore, notice that in this simple example the most blurred images are vectors with identical entries; such blurred images lie along the diagonal direction [1 1 1]^T. In general, blurry images tend to have entries with similar values and hence tend to cluster around the diagonal direction.

Our ability to discriminate between different blur scales in a blurry image boils down to being able to determine the subspaces where the patches of such a blurry image live. If sharp images do not live on a subspace, but uniformly in the entire space, our only way to distinguish the blur size is that the blurring H_d scales some dimensions of f to zero and that the scaling varies with blur size. This case has links to the zero-sheet approach in the Fourier domain [42]. However, if the sharp images live on a subspace, the blurring H_d may preserve

all the directions, and blur scale identification is still possible by determining the rotation of the sharp image subspace. This is the principle that we exploit.

Notice that the evaluation of the subspace distance M involves the calculation of the inner product between a patch and a column of U_{d_i}. Hence, this calculation can be done exactly as the convolution of a column of U_{d_i}, rearranged as an image patch, with the whole image g. We can conclude that the algorithm requires computing a set of L × K convolutions with the coded image, which is a stable operation of polynomial computational complexity. As we have shown that minimizing eq. (13) is equivalent to minimizing ||H_{d_i}^⊥ g_x||_2^2 up to a scalar value, we summarize the blur scale identification procedure in Algorithm 1.

Algorithm 1: Blur scale identification from a single coded image via the subspace distance method.
  Input: A single coded image g and a collection of coded images of L planar scenes.
  Output: The blur scale map d of the scene.
  Preprocessing (offline):
    Pick an image patch size larger than twice the maximum blur scale;
    for i = 1, ..., L do
      Compute the singular value decomposition U_i S_i V_i^T of a collection of image patches coded with blur scale d_i;
      Calculate the subspace U_{d_i} as the columns of U_i corresponding to nonzero singular values of S_i;
    end
  Blur identification (online):
    Solve d̂ = argmin_{d(x) ∈ {d_1, ..., d_L}} ∑_x M^2(g_x, U_{d(x)}) + β ||∇d(x)||_1.

4.1 Coded Aperture Selection

In this section we discuss how to obtain an optimal pattern for the purpose of image deblurring. As pointed out in [19], we identify two main challenges: the first is that accurate deblurring requires accurate identification of the blur scale; the second is that accurate deblurring requires little texture loss due to blurring. A first step towards addressing these challenges is to define a metric for blur scale identification and a metric for texture loss.
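As noted above, evaluating the subspace distance for every patch reduces to L × K convolutions with the coded image. A minimal illustrative sketch of these unary costs follows (function names are ours; the graph-cut smoothing of eq. (9) is omitted, and sliding windows stand in for explicit convolutions, to which they are equivalent here):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def unary_costs(img, subspaces, delta):
    """Online step of Algorithm 1, unary terms only.
    subspaces: one (delta*delta, K) orthonormal basis U_d per blur scale.
    Returns one map of M^2(g_x, U_d), eq. (13) squared, per candidate scale.
    Each inner product U_d[:, j]^T g_x equals one sample of the correlation
    of img with that column rearranged as a delta x delta patch."""
    patches = sliding_window_view(img, (delta, delta))
    P = patches.reshape(patches.shape[0], patches.shape[1], delta * delta)
    norms = np.maximum(np.sum(P ** 2, axis=-1), 1e-12)   # ||g_x||^2 per pixel
    costs = []
    for U in subspaces:
        proj = np.einsum('hwp,pk->hwk', P, U)            # U_d^T g_x for all x
        costs.append(1.0 - np.sum(proj ** 2, axis=-1) / norms)
    return costs
```

Each cost map is small where the local patch lies close to that scale's subspace; feeding these maps, plus the pairwise TV terms, to a graph cut yields the estimate d̂ of eq. (9).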
Our metric for blur scale identification can be defined directly from section 4. Indeed, the ability to determine which subspace a coded image patch belongs to can be measured via the distance between the subspaces associated to each blur scale:

  M(U_{d_1}, U_{d_2}) = sqrt( K − ∑_{i,j} ( U_{d_1,i}^T U_{d_2,j} )^2 )    (15)

Clearly, the wider apart the subspaces are, the less prone to noise the subspace association is. We find that a good visual summary of the spacing

Fig. 3. Distance matrix computation. (a) Ideal distance matrix. (b) Circular aperture. The top-left corner of each matrix is the distance between subspaces corresponding to small blur scales and, vice versa, the bottom-right corner is the distance between subspaces corresponding to large blur scales. Notice that large subspace distances are bright and small subspace distances are dark. The maximum distance (√K) is achieved when two subspaces are orthogonal to each other.

between all the subspaces is a (symmetric) matrix containing the distances between any two subspaces. We compute such a matrix for a conventional camera and show the result in Fig. 3, together with the ideal distance matrix. In each distance matrix, subspaces associated to blur scales ranging from the smallest to the largest are arranged along the rows from left to right and along the columns from top to bottom. Along the diagonal the distance is necessarily 0, as we compare identical subspaces. Also, by definition the metric cannot exceed √K, where K is the minimum rank among the subspaces.

In Fig. 5 we report the distance matrices computed for each of the apertures we consider in this work (see Fig. 4). Notice that the subspace distance map for a conventional camera (Fig. 3(b)) is overall darker than the matrices for coded aperture cameras (Fig. 5). This shows the poor blur scale identifiability of the circular aperture and the improvement that can be achieved with a more elaborate pattern.

The rank K can be used to address the second challenge, i.e., the definition of a metric for texture loss. So far we have seen that blurring can be interpreted as a combination of rotations and scaling. Deblurring can then be interpreted as a combination of rotations and scaling in the opposite direction.
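The distance matrices of Figs. 3 and 5 can be computed directly from eq. (15). A minimal sketch (function names are ours):

```python
import numpy as np

def subspace_distance(U1, U2):
    """Eq. (15): M(U1, U2) = sqrt(K - sum_{i,j} (U1_i^T U2_j)^2),
    where K is the minimum rank among the two subspaces. The value is 0
    for identical subspaces and sqrt(K) for orthogonal ones."""
    K = min(U1.shape[1], U2.shape[1])
    return np.sqrt(max(K - np.sum((U1.T @ U2) ** 2), 0.0))

def distance_matrix(subspaces):
    # Symmetric L x L matrix of pairwise subspace distances (cf. Fig. 3).
    L = len(subspaces)
    M = np.zeros((L, L))
    for a in range(L):
        for b in range(L):
            M[a, b] = subspace_distance(subspaces[a], subspaces[b])
    return M
```

Evaluating this matrix for the subspaces learned with a given mask yields exactly the maps visualized in Fig. 5.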
However, when blurring scales some directions to 0, part of the texture content is lost. This suggests that a simple measure for texture loss is the dimension of the coded subspace: the higher the dimension, the more texture content we can restore. As the (coded image) subspace dimension is K, we can immediately conclude that the subspace distance matrix that most closely resembles the ideal distance matrix (see Fig. 3(a)) is the one that simultaneously achieves the best depth identification and the least texture loss. Finally, we propose to use the average L1 fitting of any distance matrix M to the ideal distance matrix scaled by √K, i.e., ||√K (1 1^T − I) − M||_1. The fitting yields the values in Table 1. We can

Fig. 4. Coded aperture patterns and PSFs. All the aperture patterns we consider in this work (top row) and their calibrated PSFs for two different blur scales (second and bottom rows). (a) and (b) aperture masks used in both [13] and [43]; (c) annular mask used in [17]; (d) pattern proposed by [5]; (e) pattern proposed by [4]; (f) and (g) aperture masks used in [15]; (h) MURA pattern used in [10].

Fig. 5. Subspace distances for the eight masks in Fig. 4. Notice that the subspace rank K determines the maximum distance achievable; therefore, coded apertures with overall darker subspace distance maps have poor blur scale identifiability (i.e., are sensitive to noise).

Masks       4(a)  4(b)  4(c)  4(d)  4(e)  4(f)  4(g)  4(h)
L1 fitting

Table 1. L1 fitting of each distance matrix to the ideal distance matrix scaled by √K.

also see visually in Fig. 5 that mask 4(b) and mask 4(d) are the coded apertures that we can expect to achieve the best results in texture deblurring. The quest for the optimal mask is, however, still an open problem. Even if we look for the optimal mask via brute-force search, each single aperture pattern requires the evaluation of eq. (15) and the computation of all the subspaces associated with each blur scale. In particular, the latter process takes about 15 minutes on a QuadCore 2.8GHz with Matlab 7, which makes the evaluation of a large number of masks unfeasible. Devising a fast procedure to determine the optimal mask will be the subject of future work.

5 Experiments

In this section we demonstrate the effectiveness of our approach on both synthetic and real data. We show that the proposed algorithm performs better than previous methods on different coded apertures and different datasets. We also show that the masks proposed in the literature do not always yield the best performance.

5.1 Performance Comparison

Before proceeding with tests on real images, we perform extensive simulations to compare the accuracy and robustness of our algorithm with those of 4 competing methods, including the current state-of-the-art approach. The methods are all based on the hypothesis plane deconvolution used by [5], as explained in the Introduction. The main difference among the competing methods is that the deconvolution step is performed using either the Lucy-Richardson method [44], regularized filtering (i.e., with image gradient smoothness), Wiener filtering [45], or Levin's procedure [5]. We use the 8 masks shown in Fig. 4. All the patterns have been proposed and used by other researchers [4, 5, 10, 13, 15, 17]. For each mask and a given blur scale map d, we simulate a coded image by using eq. (1), where f is an image of 4, pixels with either random texture or a set of patches from natural images (examples of these patches are shown in Fig. 6).
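The simulation step can be sketched as follows. This is illustrative: the exact image formation is eq. (1) in the paper, while here a per-pixel PSF-selection model stands in for it, and the function names are assumptions.

```python
import numpy as np

def conv2_same(f, k):
    """'Same'-size 2D convolution with reflected borders (helper).
    For the symmetric PSFs used here, convolution and correlation
    coincide."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f, ((ph, ph), (pw, pw)), mode='reflect')
    out = np.zeros(f.shape)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * fp[i:i + f.shape[0], j:j + f.shape[1]]
    return out

def simulate_coded_image(f, psfs, d, sigma=0.0, rng=None):
    """Simulate a coded image with spatially varying blur: each pixel
    takes its value from the sharp image f blurred with the PSF
    selected by the (integer) blur scale map d; Gaussian noise of
    standard deviation sigma is then added."""
    blurred = np.stack([conv2_same(f, k) for k in psfs])
    rows, cols = np.indices(f.shape)
    g = blurred[d, rows, cols]  # pick the blur level per pixel
    if sigma > 0:
        rng = np.random.default_rng() if rng is None else rng
        g = g + sigma * rng.standard_normal(g.shape)
    return g
```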
Then, for each algorithm we obtain a blur scale map estimate d̂ and compute its discrepancy with the ground truth. The ground-truth blur scale map d that we use is shown in pseudo-colors at the top-left of both Fig. 7 and Fig. 8; it represents a staircase composed of 39 steps at different distances (and thus different blur scales) from the camera. We assume that the focal plane is set between the camera and the first object of interest in the scene. With this setting, the bottom part of the blur scale map (small blur sizes) corresponds to points close to the camera, and the top part (large blur sizes) to points far from the camera. Each step of the staircase is a square of pixels; we have squeezed the illustration along the vertical axis to fit in the paper. The size of the blur ranges from 7 to 30 pixels. Notice that in measuring the errors we consider all pixels, including those at the blur scale discontinuities given by the difference of blur scale between neighboring steps. In Fig. 7 we show, for each mask in Fig. 4,

Fig. 6. Real texture. Some of the patches extracted from real images that have been used in our tests. The same patches are shown with no noise (top part, image noise level σ = 0) and with Gaussian noise added to them (bottom part).

the results of the proposed method (right) together with the results obtained by the current state-of-the-art algorithm (left) on random texture. The same procedure, but with texture from natural images, is reported in Fig. 8. For the three best performing masks (mask 4(a), mask 4(b), and mask 4(d)), we report the results with the same graphical layout in Fig. 9, in order to better appreciate the improvement of our method over previous ones, especially for large blur scales. Every plot shows, for each of the 39 steps we consider, the mean and 3 times the standard deviation of the estimated blur scale values (ordinate axis) against the true blur scale level (abscissa axis). The ideal estimate is the diagonal line where each estimated level corresponds to the correct true blur scale level. If there is no bias in the estimation of the blur scale map, the ideal estimate should lie within 3 times the standard deviation about the mean with probability close to 1. Our method performs consistently well with all the masks and at different blur scale levels. In particular, the best performances are observed for mask 4(b) (Fig. 9(b)) and mask 4(d) (Fig. 9(c)), while the performance of competing methods rapidly degenerates with increasing pattern scales. This demonstrates that our method has the potential to restore objects at a wider range of blur scales and with higher accuracy than previous algorithms. A quantitative comparison among all the methods and masks is given in Table 2 and Table 4 (for random texture) and in Table 3 and Table 5 (for real texture).
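The two error measures used in Tables 2-5, a mean absolute error on the blur scale map and an image error with an added gradient term to expose artifacts, can be sketched as follows (function names are illustrative):

```python
import numpy as np

def blur_map_error(d, d_hat):
    """Mean absolute error between the ground-truth and estimated blur
    scale maps, over all pixels (including blur scale discontinuities)."""
    return np.abs(np.asarray(d, float) - np.asarray(d_hat, float)).mean()

def deblurring_error(f, f_hat):
    """Image error ||f - f_hat||_2^2 + ||grad f - grad f_hat||_2^2.
    The gradient term penalizes reconstruction artifacts such as
    ringing, which have small intensity but large gradient energy."""
    e = np.asarray(f, float) - np.asarray(f_hat, float)
    gy, gx = np.gradient(e)  # gradient of the residual image
    return (e ** 2).sum() + (gx ** 2).sum() + (gy ** 2).sum()
```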
In each table, the left half reports the average error of the blur scale estimate (measured as $\|d - \hat{d}\|_1$, where d and d̂ are the ground-truth and the estimated blur scale maps, respectively); the right half reports the error on the reconstructed sharp image f̂, measured as $\|f - \hat{f}\|_2^2 + \|\nabla f - \nabla \hat{f}\|_2^2$, where f is the ground-truth image. The gradient term is added to improve sensitivity to artifacts in the reconstruction. As one can see from Tables 2-5, several levels of noise have been considered in the performance comparison: σ = 0 (Table 2 and

Far / Close GT (a) Mask 4(a) (b) Mask 4(b) (c) Mask 4(c) (d) Mask 4(d) (e) Mask 4(e) (f) Mask 4(f) (g) Mask 4(g) (h) Mask 4(h)

Fig. 7. Blur scale estimation - random texture. GT: ground-truth blur scale map. (a-h) Estimated blur scale maps for all the eight masks we consider in the paper. For each mask, the figure reports the blur scale map estimated with both Levin et al.'s method (left) and our method (right).

Table 3), σ = 0.001, σ = 0.002, and σ = 0.005 (Table 4 and Table 5). The noise level is, however, adjusted to accommodate the difference in overall incoming light between the masks, i.e., if mask i transmits a fraction l_i ≤ 1 of the incoming light, the noise level for that mask is given by:

$$\sigma_i = \frac{\sigma}{l_i}. \qquad (16)$$

Thus, masks such as 4(f), 4(g) and 4(h) are subject to lower noise levels than masks such as 4(a) and 4(b). Our method produces more consistent and accurate blur scale maps than previous methods, for both random texture and natural images, and across the 8 masks it has been tested with.

5.2 Results on Real Data

We now apply the proposed blur scale estimation algorithm to coded aperture images captured by inserting the selected mask into a Canon 50mm f/1.4 lens¹

¹ The value of l_i represents the fraction of the lens aperture that is open: when the lens aperture is fully open, l_i = 1; when the mask completely blocks the light, l_i = 0.

Far / Close GT (a) Mask 4(a) (b) Mask 4(b) (c) Mask 4(c) (d) Mask 4(d) (e) Mask 4(e) (f) Mask 4(f) (g) Mask 4(g) (h) Mask 4(h)

Fig. 8. Blur scale estimation - real texture. GT: ground-truth blur scale map. (a-h) Estimated blur scale maps for all the eight masks we consider in the paper. For each mask, the figure reports the blur scale map estimated with both Levin et al.'s method (left) and our method (right).

mounted on a Canon EOS-5D DSLR, as described in [5, 15]. Based on the analysis in Section 4.1, we choose mask 4(b) and mask 4(d). Each of the 4 holes in the first mask is 3.5mm wide, which corresponds to the same overall open area as a conventional (circular) aperture with a diameter of 7.9mm (f/6.3 on a 50mm lens). All indoor images have been captured by setting the shutter speed to 30ms (ISO ), while outdoors the exposure has been set to 2ms or lower (ISO 100). Firstly, we need to collect (or synthesize) a sequence of L coded images, where L is the number of blur scale levels we want to distinguish. There are two techniques to acquire these coded images: (1) if the aim is just to estimate the depth map (or blur scale map), one can capture real coded images of a planar surface with sharp natural texture (e.g., a newspaper) at different blur scale levels; (2) if the goal is to reconstruct both the depth map and the all-in-focus image, one has to capture the PSF of the camera at each depth level, by projecting a grid of bright dots on a plane and using a long exposure; coded images are then simulated by applying the measured PSFs to sharp natural images collected from the web. In the experiments presented in this paper, we use the latter approach, since we estimate both the blur scale map and the all-in-focus image. The PSFs have been captured on a plane at 40 different depths between 60cm and 140cm from the camera. The focal plane of the camera was set at 150cm.
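The offline learning and per-patch identification steps can be sketched as follows. This is illustrative: `learn_subspace` uses a plain SVD of vectorized coded patches (one column per patch), and `identify_blur_scale` picks the level with the smallest projection residual; the paper's energy in eq. (9) additionally includes a regularization term with weight β.

```python
import numpy as np

def learn_subspace(patches, K):
    """Learn a rank-K subspace for one blur scale level from a matrix
    of vectorized coded-image patches (one patch per column) via SVD."""
    U, _, _ = np.linalg.svd(patches, full_matrices=False)
    return U[:, :K]

def identify_blur_scale(y, subspaces):
    """Return the index of the blur level whose subspace best explains
    the patch y, i.e. the smallest residual ||y - U U^T y||_2.
    With no regularization (beta = 0) each patch is classified
    independently, so this can run in parallel across the image."""
    residuals = [np.linalg.norm(y - U @ (U.T @ y)) for U in subspaces]
    return int(np.argmin(residuals))
```

Note that identification only requires projections (i.e., convolutions with the learned bases), not deconvolutions, which is what makes the step stable for large blurs.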

Fig. 9. Comparison of the estimated blur scale levels obtained from the 3 best methods (Lucy-Richardson, Levin, and our method), using both random (top) and real (bottom) texture. Each graph reports the performance of the algorithms with (a) mask 4(a), (b) mask 4(b), and (c) mask 4(d). Both the mean and the standard deviation (we show three times the computed standard deviation) of the estimated blur scale are shown in an errorbar plot, with the algorithms' performances (solid lines) over the ideal characteristic curve (diagonal dashed line) for 39 blur sizes. Notice how the performance changes dramatically based on the nature of the texture (top row vs bottom row). Moreover, in the case of real images the standard deviation of the estimates obtained with our method is more uniform for mask 4(b) than for mask 4(d). In the case of mask 4(d), the performance is reasonably accurate only at small blur scales.

In the first experiments, we show the advantage of our approach over Levin et al.'s method on a scene with blur sizes similar to the ones used in the performance test. The same dataset has been captured by using mask 4(b) (see Fig. 11) and mask 4(d) (see Fig. 12). The size of the blur, especially in the background, is very large; this can be appreciated in Fig. 10(a), which shows the same scene captured with the same camera settings, but without the mask on the lens. For a fair comparison, we do not apply any regularization or user intervention to the estimated blur scale maps.
As already seen in Section 5.1 (especially in Fig. 9), Levin et al.'s method yields an accurate blur scale estimate with mask 4(d) when the size of the blur is small, but it fails with large amounts of blur. The proposed approach overcomes this limitation and yields a deblurred image that, in both cases (Fig. 11(e) and Fig. 12(e)), is closer to the ground truth (Fig. 10(b)). Notice also that our method gives an accurate reconstruction of the blur scale even without using regularization (β = 0 in eq. (9)). Some artefacts are still present in the reconstructed all-in-focus images. These are mainly due to the very large size of the

Table 2. Random texture. Performance (mean error) of 5 algorithms (Lucy-Richardson, regularized filtering, Wiener filtering, Levin et al. [5], and our method) in blur scale estimation and image deblurring, for each of the apertures (a)-(h) in Fig. 4, with no noise (σ = 0).

Table 3. Real texture. Performance (mean error) of 5 algorithms (Lucy-Richardson, regularized filtering, Wiener filtering, Levin et al. [5], and our method) in blur scale estimation and image deblurring, for each of the apertures (a)-(h) in Fig. 4, with no noise (σ = 0).

blur and to the raw blur-scale map: when adding regularization to the blur-scale map (β > 0), the deblurring algorithm yields better results, as one can see in the next examples. In Fig. 13 we have the same indoor scene, but now the items are slightly closer to the focal plane of the camera, so the maximum amount of blur is reduced. Although the background is still very blurred in the coded image (Fig. 13(a)), our accurate blur-scale estimation yields a deblurred image (Fig. 13(b)) in which the text of the magazine becomes readable. Since the reconstructed blur-scale map corresponds to the (relative) depth map of the scene, we can use it together with the all-in-focus image to generate a 3D image². This image, when watched with red-cyan glasses, allows one to perceive the depth information extracted with our approach. All the regularized blur-scale maps in this work are estimated from eq. (9) by setting β = 0.5; the raw maps, instead, are obtained without the regularization term (β = 0). We have tested our approach on different outdoor scenes: Fig. 15 and Fig. 14.
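The 3D images can be rendered from the two outputs alone. A minimal sketch for a grayscale image follows; the depth-proportional horizontal shift and the channel assignment are illustrative choices, not the paper's exact rendering procedure.

```python
import numpy as np

def make_anaglyph(img, depth, max_shift=8):
    """Generate a red-cyan anaglyph from a grayscale all-in-focus image
    and a normalized depth map (0 = far, 1 = near). A second view is
    synthesized by shifting each pixel horizontally in proportion to
    its depth; the original view goes to the red channel, the shifted
    view to green and blue (cyan)."""
    h, w = img.shape
    shifted = np.zeros_like(img)
    cols = np.arange(w)
    for r in range(h):
        # Source column for each pixel of the synthesized view,
        # clamped at the image borders.
        src = np.clip(cols - (depth[r] * max_shift).astype(int), 0, w - 1)
        shifted[r] = img[r, src]
    out = np.zeros((h, w, 3))
    out[..., 0] = img      # left view -> red
    out[..., 1] = shifted  # right view -> cyan (green + blue)
    out[..., 2] = shifted
    return out
```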
In these scenarios we apply the subspaces we have learned within 150cm from the camera to a very large range of depths. Several challenges are present in these scenes, such as occlusions, shadows, and lack of texture. Our method demonstrates robustness to all of them. Notice again that the raw blur-scale maps shown in Fig. 15(c) and Fig. 14(c) are already very close to the maps that include regularization (Fig. 15(d) and Fig. 14(d), respectively). For each dataset, a

² In this work, a 3D image corresponds to an image captured with a stereo camera where one lens has a red filter and the second lens has a cyan filter. When one watches this type of image with red-cyan glasses, each eye sees only one view: the shift between the two views gives the perception of depth.

Table 4. Random texture. Performance (mean error) of 5 algorithms (Lucy-Richardson, regularized filtering, Wiener filtering, Levin et al. [5], and our method) in blur scale estimation and image deblurring, for the apertures in Fig. 4, under different levels of noise (σ = 0.001, σ = 0.002, and σ = 0.005).

Table 5. Real texture. Performance (mean error) of 5 algorithms (Lucy-Richardson, regularized filtering, Wiener filtering, Levin et al. [5], and our method) in blur scale estimation and image deblurring, for the apertures in Fig. 4, under different levels of noise (σ = 0.001, σ = 0.002, and σ = 0.005).

(a) Conventional aperture (b) Ground-truth (pinhole camera)

Fig. 10. (a) Picture taken with the conventional camera, without placing the mask on the lens. (b) Image captured by simulating a pinhole camera (f/22.0), which can be used as ground truth for the image texture.

3D image (Fig. 14(e) and Fig. 15(e)) has been generated by using just the output of our method: the deblurred images (b) and the blur-scale maps (d). The ground-truth images have been taken by simulating a pinhole camera (f/22.0).

5.3 Computational Cost

We downsample the input images by a factor of 4 from an original resolution of 12.8 megapixels (4,368 × 2,912) and use sub-pixel accuracy, in order to keep the algorithm efficient. We have seen from the experiments on real data that the raw blur-scale map is already very close to the regularized map. This means that we can obtain a reasonable blur scale map very efficiently: when β = 0, the value of the blur scale at one pixel is independent of the other pixels and the calculations can be carried out in parallel. Since the algorithm takes about 5ms to process 40 blur scale levels at each pixel, it is suitable for real-time applications. We have run the algorithm on a QuadCore 2.8GHz with 16GB of memory. The code has been written mainly in Matlab 7. The deblurring procedure, instead, takes about 100s to process the whole image for 40 blur scale levels.

6 Conclusions

We have presented a novel method to recover the all-in-focus image from a single blurred image captured with a coded aperture camera. The method is split into two steps: a subspace-based blur scale identification approach and an image deblurring algorithm based on conjugate gradient descent. The method is simple, general, and computationally efficient. We have compared our method to existing algorithms in the literature and showed that we achieve state-of-the-art

(a) Input image (b) Raw blur-scale map (c) Deblurred image (d) Raw blur-scale map (e) Deblurred image

Fig. 11. Comparison on real data - mask 4(b). (a) Input image captured by using mask 4(b). (b-c) Blur-scale map and all-in-focus image reconstructed with Levin et al.'s method [5]; (d-e) results obtained with our method.

(a) Input image (b) Raw blur-scale map (c) Deblurred image (d) Raw blur-scale map (e) Deblurred image

Fig. 12. Comparison on real data - mask 4(d). (a) Input image captured by using mask 4(d). (b-c) Blur-scale map and all-in-focus image reconstructed with Levin et al.'s method [5]; (d-e) results obtained with our method.

(a) Input (b) All-in-focus image (c) Blur-scale map (d) 3D image

Fig. 13. Close-range indoor scene [exposure time: 1/30s]. (a) coded image captured with mask 4(b); (b) estimated all-in-focus image; (c) estimated blur-scale map; (d) 3D image (to be watched with red-cyan glasses).

performance in blur scale identification and image deblurring with both synthetic and real data, while retaining polynomial time complexity.

Appendix

Proof of Theorem 1. To prove the theorem, we rewrite the least-squares problem in f as

$$\|H_d f - g\|_2^2 + \alpha^2 \|\Sigma f\|_2^2 = \left\| \begin{bmatrix} H_d \\ \alpha \Sigma \end{bmatrix} f - \begin{bmatrix} g \\ 0 \end{bmatrix} \right\|_2^2 = \|\bar{H}_d f - \bar{g}\|_2^2 \qquad (17)$$
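The identity in eq. (17) simply stacks the data term and the weighted prior term into a single least-squares residual. It can be checked numerically with random stand-ins for H_d, Σ, f, and g:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((10, 6))  # stand-in for H_d
S = rng.standard_normal((6, 6))   # stand-in for Sigma
f = rng.standard_normal(6)
g = rng.standard_normal(10)
alpha = 0.7

# Left-hand side: data term plus weighted prior term.
lhs = np.linalg.norm(H @ f - g) ** 2 + alpha ** 2 * np.linalg.norm(S @ f) ** 2

# Right-hand side: one residual with the stacked matrix
# H_bar = [H; alpha * S] and stacked vector g_bar = [g; 0].
H_bar = np.vstack([H, alpha * S])
g_bar = np.concatenate([g, np.zeros(6)])
rhs = np.linalg.norm(H_bar @ f - g_bar) ** 2

assert np.isclose(lhs, rhs)
```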

(a) Input image (b) Deblurred image (c) Raw blur-size map (d) Estimated blur-size map (e) 3D image (f) Ground-truth image

Fig. 14. Long-range outdoor scene [exposure time: 1/200s]. (a) coded image captured with mask 4(b); (b) estimated all-in-focus image; (c) raw blur-scale map (without regularization); (d) regularized blur-scale map; (e) 3D image (to be watched with red-cyan glasses); (f) ground-truth image.

(a) Input image (b) Deblurred image (c) Raw blur-size map (d) Estimated blur-size map (e) 3D image (f) Ground-truth image

Fig. 15. Mid-range outdoor scene [exposure time: 1/200s]. (a) coded image captured with mask 4(b); (b) estimated all-in-focus image; (c) raw blur-scale map (without regularization); (d) regularized blur-scale map; (e) 3D image (to be watched with red-cyan glasses); (f) ground-truth image.


More information

Image Enhancement in spatial domain. Digital Image Processing GW Chapter 3 from Section (pag 110) Part 2: Filtering in spatial domain

Image Enhancement in spatial domain. Digital Image Processing GW Chapter 3 from Section (pag 110) Part 2: Filtering in spatial domain Image Enhancement in spatial domain Digital Image Processing GW Chapter 3 from Section 3.4.1 (pag 110) Part 2: Filtering in spatial domain Mask mode radiography Image subtraction in medical imaging 2 Range

More information

Blurred Image Restoration Using Canny Edge Detection and Blind Deconvolution Algorithm

Blurred Image Restoration Using Canny Edge Detection and Blind Deconvolution Algorithm Blurred Image Restoration Using Canny Edge Detection and Blind Deconvolution Algorithm 1 Rupali Patil, 2 Sangeeta Kulkarni 1 Rupali Patil, M.E., Sem III, EXTC, K. J. Somaiya COE, Vidyavihar, Mumbai 1 patilrs26@gmail.com

More information

Image analysis. CS/CME/BIOPHYS/BMI 279 Fall 2015 Ron Dror

Image analysis. CS/CME/BIOPHYS/BMI 279 Fall 2015 Ron Dror Image analysis CS/CME/BIOPHYS/BMI 279 Fall 2015 Ron Dror A two- dimensional image can be described as a function of two variables f(x,y). For a grayscale image, the value of f(x,y) specifies the brightness

More information

Localization (Position Estimation) Problem in WSN

Localization (Position Estimation) Problem in WSN Localization (Position Estimation) Problem in WSN [1] Convex Position Estimation in Wireless Sensor Networks by L. Doherty, K.S.J. Pister, and L.E. Ghaoui [2] Semidefinite Programming for Ad Hoc Wireless

More information

CoE4TN4 Image Processing. Chapter 3: Intensity Transformation and Spatial Filtering

CoE4TN4 Image Processing. Chapter 3: Intensity Transformation and Spatial Filtering CoE4TN4 Image Processing Chapter 3: Intensity Transformation and Spatial Filtering Image Enhancement Enhancement techniques: to process an image so that the result is more suitable than the original image

More information

Coded Aperture Imaging

Coded Aperture Imaging Coded Aperture Imaging Manuel Martinello School of Engineering and Physical Sciences Heriot-Watt University A thesis submitted for the degree of PhilosophiæDoctor (PhD) May 2012 1. Reviewer: Prof. Richard

More information

Princeton University COS429 Computer Vision Problem Set 1: Building a Camera

Princeton University COS429 Computer Vision Problem Set 1: Building a Camera Princeton University COS429 Computer Vision Problem Set 1: Building a Camera What to submit: You need to submit two files: one PDF file for the report that contains your name, Princeton NetID, all the

More information

Image Enhancement in Spatial Domain

Image Enhancement in Spatial Domain Image Enhancement in Spatial Domain 2 Image enhancement is a process, rather a preprocessing step, through which an original image is made suitable for a specific application. The application scenarios

More information

DIGITAL IMAGE PROCESSING UNIT III

DIGITAL IMAGE PROCESSING UNIT III DIGITAL IMAGE PROCESSING UNIT III 3.1 Image Enhancement in Frequency Domain: Frequency refers to the rate of repetition of some periodic events. In image processing, spatial frequency refers to the variation

More information

Image Processing for feature extraction

Image Processing for feature extraction Image Processing for feature extraction 1 Outline Rationale for image pre-processing Gray-scale transformations Geometric transformations Local preprocessing Reading: Sonka et al 5.1, 5.2, 5.3 2 Image

More information

Hybrid Halftoning A Novel Algorithm for Using Multiple Halftoning Techniques

Hybrid Halftoning A Novel Algorithm for Using Multiple Halftoning Techniques Hybrid Halftoning A ovel Algorithm for Using Multiple Halftoning Techniques Sasan Gooran, Mats Österberg and Björn Kruse Department of Electrical Engineering, Linköping University, Linköping, Sweden Abstract

More information

SUPER RESOLUTION INTRODUCTION

SUPER RESOLUTION INTRODUCTION SUPER RESOLUTION Jnanavardhini - Online MultiDisciplinary Research Journal Ms. Amalorpavam.G Assistant Professor, Department of Computer Sciences, Sambhram Academy of Management. Studies, Bangalore Abstract:-

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part 2: Image Enhancement Digital Image Processing Course Introduction in the Spatial Domain Lecture AASS Learning Systems Lab, Teknik Room T26 achim.lilienthal@tech.oru.se Course

More information

multiframe visual-inertial blur estimation and removal for unmodified smartphones

multiframe visual-inertial blur estimation and removal for unmodified smartphones multiframe visual-inertial blur estimation and removal for unmodified smartphones, Severin Münger, Carlo Beltrame, Luc Humair WSCG 2015, Plzen, Czech Republic images taken by non-professional photographers

More information

Blind Blur Estimation Using Low Rank Approximation of Cepstrum

Blind Blur Estimation Using Low Rank Approximation of Cepstrum Blind Blur Estimation Using Low Rank Approximation of Cepstrum Adeel A. Bhutta and Hassan Foroosh School of Electrical Engineering and Computer Science, University of Central Florida, 4 Central Florida

More information

Visible Light Communication-based Indoor Positioning with Mobile Devices

Visible Light Communication-based Indoor Positioning with Mobile Devices Visible Light Communication-based Indoor Positioning with Mobile Devices Author: Zsolczai Viktor Introduction With the spreading of high power LED lighting fixtures, there is a growing interest in communication

More information

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES 4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES Abstract: This paper attempts to undertake the study of deblurring techniques for Restored Motion Blurred Images by using: Wiener filter,

More information

Color Constancy Using Standard Deviation of Color Channels

Color Constancy Using Standard Deviation of Color Channels 2010 International Conference on Pattern Recognition Color Constancy Using Standard Deviation of Color Channels Anustup Choudhury and Gérard Medioni Department of Computer Science University of Southern

More information

Recent Advances in Image Deblurring. Seungyong Lee (Collaboration w/ Sunghyun Cho)

Recent Advances in Image Deblurring. Seungyong Lee (Collaboration w/ Sunghyun Cho) Recent Advances in Image Deblurring Seungyong Lee (Collaboration w/ Sunghyun Cho) Disclaimer Many images and figures in this course note have been copied from the papers and presentation materials of previous

More information

Image Enhancement. DD2423 Image Analysis and Computer Vision. Computational Vision and Active Perception School of Computer Science and Communication

Image Enhancement. DD2423 Image Analysis and Computer Vision. Computational Vision and Active Perception School of Computer Science and Communication Image Enhancement DD2423 Image Analysis and Computer Vision Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 15, 2013 Mårten Björkman (CVAP)

More information

Enhanced Method for Image Restoration using Spatial Domain

Enhanced Method for Image Restoration using Spatial Domain Enhanced Method for Image Restoration using Spatial Domain Gurpal Kaur Department of Electronics and Communication Engineering SVIET, Ramnagar,Banur, Punjab, India Ashish Department of Electronics and

More information

Fast Blur Removal for Wearable QR Code Scanners (supplemental material)

Fast Blur Removal for Wearable QR Code Scanners (supplemental material) Fast Blur Removal for Wearable QR Code Scanners (supplemental material) Gábor Sörös, Stephan Semmler, Luc Humair, Otmar Hilliges Department of Computer Science ETH Zurich {gabor.soros otmar.hilliges}@inf.ethz.ch,

More information

Lecture 3 - Regression

Lecture 3 - Regression Lecture 3 - Regression Instructor: Prof Ganesh Ramakrishnan July 25, 2016 1 / 30 The Simplest ML Problem: Least Square Regression Curve Fitting: Motivation Error measurement Minimizing Error Method of

More information

Image Processing by Bilateral Filtering Method

Image Processing by Bilateral Filtering Method ABHIYANTRIKI An International Journal of Engineering & Technology (A Peer Reviewed & Indexed Journal) Vol. 3, No. 4 (April, 2016) http://www.aijet.in/ eissn: 2394-627X Image Processing by Bilateral Image

More information

Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography

Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography Xi Luo Stanford University 450 Serra Mall, Stanford, CA 94305 xluo2@stanford.edu Abstract The project explores various application

More information

International Journal of Advancedd Research in Biology, Ecology, Science and Technology (IJARBEST)

International Journal of Advancedd Research in Biology, Ecology, Science and Technology (IJARBEST) Gaussian Blur Removal in Digital Images A.Elakkiya 1, S.V.Ramyaa 2 PG Scholars, M.E. VLSI Design, SSN College of Engineering, Rajiv Gandhi Salai, Kalavakkam 1,2 Abstract In many imaging systems, the observed

More information

LAB MANUAL SUBJECT: IMAGE PROCESSING BE (COMPUTER) SEM VII

LAB MANUAL SUBJECT: IMAGE PROCESSING BE (COMPUTER) SEM VII LAB MANUAL SUBJECT: IMAGE PROCESSING BE (COMPUTER) SEM VII IMAGE PROCESSING INDEX CLASS: B.E(COMPUTER) SR. NO SEMESTER:VII TITLE OF THE EXPERIMENT. 1 Point processing in spatial domain a. Negation of an

More information

Learning to Predict Indoor Illumination from a Single Image. Chih-Hui Ho

Learning to Predict Indoor Illumination from a Single Image. Chih-Hui Ho Learning to Predict Indoor Illumination from a Single Image Chih-Hui Ho 1 Outline Introduction Method Overview LDR Panorama Light Source Detection Panorama Recentering Warp Learning From LDR Panoramas

More information

Physics 3340 Spring Fourier Optics

Physics 3340 Spring Fourier Optics Physics 3340 Spring 011 Purpose Fourier Optics In this experiment we will show how the Fraunhofer diffraction pattern or spatial Fourier transform of an object can be observed within an optical system.

More information

Lecture 3: Linear Filters

Lecture 3: Linear Filters Signal Denoising Lecture 3: Linear Filters Math 490 Prof. Todd Wittman The Citadel Suppose we have a noisy 1D signal f(x). For example, it could represent a company's stock price over time. In order to

More information

SAR Imaging from Partial-Aperture Data with Frequency-Band Omissions

SAR Imaging from Partial-Aperture Data with Frequency-Band Omissions SAR Imaging from Partial-Aperture Data with Frequency-Band Omissions Müjdat Çetin a and Randolph L. Moses b a Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 77

More information

Non Linear Image Enhancement

Non Linear Image Enhancement Non Linear Image Enhancement SAIYAM TAKKAR Jaypee University of information technology, 2013 SIMANDEEP SINGH Jaypee University of information technology, 2013 Abstract An image enhancement algorithm based

More information

Lenses, exposure, and (de)focus

Lenses, exposure, and (de)focus Lenses, exposure, and (de)focus http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 15 Course announcements Homework 4 is out. - Due October 26

More information

37 Game Theory. Bebe b1 b2 b3. a Abe a a A Two-Person Zero-Sum Game

37 Game Theory. Bebe b1 b2 b3. a Abe a a A Two-Person Zero-Sum Game 37 Game Theory Game theory is one of the most interesting topics of discrete mathematics. The principal theorem of game theory is sublime and wonderful. We will merely assume this theorem and use it to

More information

Antennas and Propagation. Chapter 5c: Array Signal Processing and Parametric Estimation Techniques

Antennas and Propagation. Chapter 5c: Array Signal Processing and Parametric Estimation Techniques Antennas and Propagation : Array Signal Processing and Parametric Estimation Techniques Introduction Time-domain Signal Processing Fourier spectral analysis Identify important frequency-content of signal

More information

Focused Image Recovery from Two Defocused

Focused Image Recovery from Two Defocused Focused Image Recovery from Two Defocused Images Recorded With Different Camera Settings Murali Subbarao Tse-Chung Wei Gopal Surya Department of Electrical Engineering State University of New York Stony

More information

Image acquisition. Midterm Review. Digitization, line of image. Digitization, whole image. Geometric transformations. Interpolation 10/26/2016

Image acquisition. Midterm Review. Digitization, line of image. Digitization, whole image. Geometric transformations. Interpolation 10/26/2016 Image acquisition Midterm Review Image Processing CSE 166 Lecture 10 2 Digitization, line of image Digitization, whole image 3 4 Geometric transformations Interpolation CSE 166 Transpose these matrices

More information

Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images

Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images Keshav Thakur 1, Er Pooja Gupta 2,Dr.Kuldip Pahwa 3, 1,M.Tech Final Year Student, Deptt. of ECE, MMU Ambala,

More information

6.869 Advances in Computer Vision Spring 2010, A. Torralba

6.869 Advances in Computer Vision Spring 2010, A. Torralba 6.869 Advances in Computer Vision Spring 2010, A. Torralba Due date: Wednesday, Feb 17, 2010 Problem set 1 You need to submit a report with brief descriptions of what you did. The most important part is

More information

Improving the Generalized Likelihood Ratio Test for Unknown Linear Gaussian Channels

Improving the Generalized Likelihood Ratio Test for Unknown Linear Gaussian Channels IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 49, NO 4, APRIL 2003 919 Improving the Generalized Likelihood Ratio Test for Unknown Linear Gaussian Channels Elona Erez, Student Member, IEEE, and Meir Feder,

More information

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT

More information

ECC419 IMAGE PROCESSING

ECC419 IMAGE PROCESSING ECC419 IMAGE PROCESSING INTRODUCTION Image Processing Image processing is a subclass of signal processing concerned specifically with pictures. Digital Image Processing, process digital images by means

More information

Optical Performance of Nikon F-Mount Lenses. Landon Carter May 11, Measurement and Instrumentation

Optical Performance of Nikon F-Mount Lenses. Landon Carter May 11, Measurement and Instrumentation Optical Performance of Nikon F-Mount Lenses Landon Carter May 11, 2016 2.671 Measurement and Instrumentation Abstract In photographic systems, lenses are one of the most important pieces of the system

More information

Image Processing. Image Processing. What is an Image? Image Resolution. Overview. Sources of Error. Filtering Blur Detect edges

Image Processing. Image Processing. What is an Image? Image Resolution. Overview. Sources of Error. Filtering Blur Detect edges Thomas Funkhouser Princeton University COS 46, Spring 004 Quantization Random dither Ordered dither Floyd-Steinberg dither Pixel operations Add random noise Add luminance Add contrast Add saturation ing

More information

Nonuniform multi level crossing for signal reconstruction

Nonuniform multi level crossing for signal reconstruction 6 Nonuniform multi level crossing for signal reconstruction 6.1 Introduction In recent years, there has been considerable interest in level crossing algorithms for sampling continuous time signals. Driven

More information

Image analysis. CS/CME/BioE/Biophys/BMI 279 Oct. 31 and Nov. 2, 2017 Ron Dror

Image analysis. CS/CME/BioE/Biophys/BMI 279 Oct. 31 and Nov. 2, 2017 Ron Dror Image analysis CS/CME/BioE/Biophys/BMI 279 Oct. 31 and Nov. 2, 2017 Ron Dror 1 Outline Images in molecular and cellular biology Reducing image noise Mean and Gaussian filters Frequency domain interpretation

More information

Non-Uniform Motion Blur For Face Recognition

Non-Uniform Motion Blur For Face Recognition IOSR Journal of Engineering (IOSRJEN) ISSN (e): 2250-3021, ISSN (p): 2278-8719 Vol. 08, Issue 6 (June. 2018), V (IV) PP 46-52 www.iosrjen.org Non-Uniform Motion Blur For Face Recognition Durga Bhavani

More information

Recent advances in deblurring and image stabilization. Michal Šorel Academy of Sciences of the Czech Republic

Recent advances in deblurring and image stabilization. Michal Šorel Academy of Sciences of the Czech Republic Recent advances in deblurring and image stabilization Michal Šorel Academy of Sciences of the Czech Republic Camera shake stabilization Alternative to OIS (optical image stabilization) systems Should work

More information

Extended depth-of-field in Integral Imaging by depth-dependent deconvolution

Extended depth-of-field in Integral Imaging by depth-dependent deconvolution Extended depth-of-field in Integral Imaging by depth-dependent deconvolution H. Navarro* 1, G. Saavedra 1, M. Martinez-Corral 1, M. Sjöström 2, R. Olsson 2, 1 Dept. of Optics, Univ. of Valencia, E-46100,

More information

A Mathematical model for the determination of distance of an object in a 2D image

A Mathematical model for the determination of distance of an object in a 2D image A Mathematical model for the determination of distance of an object in a 2D image Deepu R 1, Murali S 2,Vikram Raju 3 Maharaja Institute of Technology Mysore, Karnataka, India rdeepusingh@mitmysore.in

More information

fast blur removal for wearable QR code scanners

fast blur removal for wearable QR code scanners fast blur removal for wearable QR code scanners Gábor Sörös, Stephan Semmler, Luc Humair, Otmar Hilliges ISWC 2015, Osaka, Japan traditional barcode scanning next generation barcode scanning ubiquitous

More information

Point Spread Function Engineering for Scene Recovery. Changyin Zhou

Point Spread Function Engineering for Scene Recovery. Changyin Zhou Point Spread Function Engineering for Scene Recovery Changyin Zhou Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences

More information

Defocus Map Estimation from a Single Image

Defocus Map Estimation from a Single Image Defocus Map Estimation from a Single Image Shaojie Zhuo Terence Sim School of Computing, National University of Singapore, Computing 1, 13 Computing Drive, Singapore 117417, SINGAPOUR Abstract In this

More information

Compressive Through-focus Imaging

Compressive Through-focus Imaging PIERS ONLINE, VOL. 6, NO. 8, 788 Compressive Through-focus Imaging Oren Mangoubi and Edwin A. Marengo Yale University, USA Northeastern University, USA Abstract Optical sensing and imaging applications

More information

Enhanced Shape Recovery with Shuttered Pulses of Light

Enhanced Shape Recovery with Shuttered Pulses of Light Enhanced Shape Recovery with Shuttered Pulses of Light James Davis Hector Gonzalez-Banos Honda Research Institute Mountain View, CA 944 USA Abstract Computer vision researchers have long sought video rate

More information

Image Forgery. Forgery Detection Using Wavelets

Image Forgery. Forgery Detection Using Wavelets Image Forgery Forgery Detection Using Wavelets Introduction Let's start with a little quiz... Let's start with a little quiz... Can you spot the forgery the below image? Let's start with a little quiz...

More information

Improved motion invariant imaging with time varying shutter functions

Improved motion invariant imaging with time varying shutter functions Improved motion invariant imaging with time varying shutter functions Steve Webster a and Andrew Dorrell b Canon Information Systems Research, Australia (CiSRA), Thomas Holt Drive, North Ryde, Australia

More information

Image Deblurring and Noise Reduction in Python TJHSST Senior Research Project Computer Systems Lab

Image Deblurring and Noise Reduction in Python TJHSST Senior Research Project Computer Systems Lab Image Deblurring and Noise Reduction in Python TJHSST Senior Research Project Computer Systems Lab 2009-2010 Vincent DeVito June 16, 2010 Abstract In the world of photography and machine vision, blurry

More information

Spline wavelet based blind image recovery

Spline wavelet based blind image recovery Spline wavelet based blind image recovery Ji, Hui ( 纪辉 ) National University of Singapore Workshop on Spline Approximation and its Applications on Carl de Boor's 80 th Birthday, NUS, 06-Nov-2017 Spline

More information

Chapter 17. Shape-Based Operations

Chapter 17. Shape-Based Operations Chapter 17 Shape-Based Operations An shape-based operation identifies or acts on groups of pixels that belong to the same object or image component. We have already seen how components may be identified

More information