On the Recovery of Depth from a Single Defocused Image


Shaojie Zhuo and Terence Sim
School of Computing, National University of Singapore, Singapore 117417

Abstract. In this paper we address the challenging problem of recovering the depth of a scene from a single image using the defocus cue. To achieve this, we first present a novel approach to estimate the amount of spatially varying defocus blur at edge locations. We re-blur the input image and show that the gradient magnitude ratio between the input and re-blurred images depends only on the amount of defocus blur. Thus, the blur amount can be obtained from the ratio. A layered depth map is then extracted by propagating the blur amount at edge locations to the entire image. Experimental results on synthetic and real images demonstrate the effectiveness of our method in providing a reliable estimate of the depth of a scene.

Keywords: Image processing, depth recovery, defocus blur, Gaussian gradient, Markov random field.

1 Introduction

Depth recovery plays an important role in computer vision and computer graphics, with applications such as robotics, 3D reconstruction and image refocusing. In principle, depth can be recovered either from monocular cues (shading, shape, texture, motion, etc.) or from binocular cues (stereo correspondences). Conventional methods for estimating the depth of a scene have relied on multiple images. Stereo vision [1,2] measures disparities between a pair of images of the same scene taken from two different viewpoints and uses the disparities to recover the depth. Structure from motion (SFM) [3,4] computes the correspondences between images to obtain the 2D motion field, which is then used to recover the 3D motion and the depth. Depth from focus (DFF) [5,6] captures a set of images under multiple focus settings and measures the sharpness of the image at each pixel location.
The sharpest pixel is selected to form an all-in-focus image, and the depth at each pixel depends on which image the pixel is selected from. Depth from defocus (DFD) [7,8] requires a pair of images of the same scene taken with different focus settings. It estimates the degree of defocus blur, and the depth of the scene can be recovered provided the camera settings are known. These methods either suffer from the occlusion problem or cannot be applied to dynamic scenes.

X. Jiang and N. Petkov (Eds.): CAIP 2009, LNCS 5702, pp. 889–897, 2009. c Springer-Verlag Berlin Heidelberg 2009

Fig. 1. The depth recovery result of the book image. (a) The input defocused image. (b) Recovered layered depth map. Larger intensity means larger blur amount and depth in all the depth maps presented in this paper.

Recently, approaches have been proposed to recover depth from a single image in very specific settings. Several methods [9,10] use active illumination to aid depth recovery by projecting structured patterns onto the scene. The depth is measured by the attenuation of the projected light or the deformation of the projected pattern. The coded aperture method [11] changes the shape of the defocus blur kernel by inserting a customized mask into the camera lens, which makes the blur kernel more sensitive to depth variation. The depth is determined after a deconvolution process using a set of calibrated blur kernels. Saxena et al. [12] collect a training set of monocular images and their corresponding ground-truth depth maps, and apply supervised learning to predict the value of the depth map as a function of the input image.

In this paper we focus on the more challenging problem of recovering the depth layers from a single defocused image captured by an uncalibrated conventional camera. The most closely related work is the inverse diffusion method [13], which models the defocus blur as a diffusion process, uses the inhomogeneous reverse heat equation to obtain an estimate of the blur at edge locations, and then proposes a graph-cut based method for inferring the depth in the scene. In contrast, we model the defocus blur as a 2D Gaussian blur. The input image is re-blurred using a known Gaussian function, and the gradient magnitude ratio between the input and re-blurred images is calculated. The blur amount at edge locations can then be derived from the ratio. We also construct an MRF to propagate the blur estimates from the edge locations to the entire image and finally obtain a layered depth map of the scene. Our work has three main contributions.
Firstly, we propose an efficient blur estimation method based on the gradient magnitude ratio, and we show that our method is robust to noise, inaccurate edge locations and interference from nearby edges. Secondly, without any modification to the camera or the use of additional illumination, our blur estimation method combined with MRF optimization can obtain the depth map of a scene using only a single defocused image captured by a conventional camera. As shown in Fig. 1, our method can extract a layered depth map of the scene with a fairly good degree of accuracy. Finally, we discuss

two kinds of ambiguities in recovering depth from a single image using the defocus cue, one of which is usually overlooked by previous methods.

2 Defocus Model

As the amount of defocus blur is estimated at edge locations, we must model the edge first. We adopt the ideal step edge model

f(x) = A u(x) + B,    (1)

where u(x) is the step function, and A and B are the amplitude and offset of the edge respectively. Note that the edge is located at x = 0.

When an object is placed at the focus distance d_f, all the rays from a point of the object converge to a single sensor point and the image appears sharp. Rays from a point of another object at distance d reach multiple sensor points and result in a blurred image. The blurred pattern depends on the shape of the aperture and is often called the circle of confusion (CoC) [14]. The diameter of the CoC characterizes the amount of defocus and can be written as

c = (|d − d_f| / d) · f² / (N (d_f − f)),    (2)

where f and N are the focal length and the f-stop number of the camera respectively. Fig. 2 shows a thin lens model and how the diameter of the circle of confusion changes with d and N, given fixed f and d_f. As we can see, the diameter of the CoC c is a non-linear, monotonically increasing function of the object distance d.

Fig. 2. (a) A thin lens model, showing the focal plane, lens, image sensor and the CoC diameter c. (b) The diameter of the CoC c as a function of the object distance d and f-stop number N (N = 2, 4, 8), given d_f = 500 mm, f = 80 mm.

The defocus blur can be modeled as the convolution of a sharp image with the point spread function (PSF). The PSF can be approximated by a Gaussian function g(x, σ), where the standard deviation σ = kc is proportional to the diameter of the CoC c. We use σ as a measure of the depth of the scene. A blurred edge i(x) can then be represented as

i(x) = f(x) ⊗ g(x, σ).    (3)
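Eq. (2) is easy to evaluate directly. The following sketch plots no figure but reproduces the behaviour shown in Fig. 2(b); the values d_f = 500 mm, f = 80 mm and the f-stop numbers are the illustrative settings read off the figure, not calibrated camera parameters:

```python
import numpy as np

def coc_diameter(d, d_f=500.0, f=80.0, N=2.0):
    """Diameter of the circle of confusion, Eq. (2). All distances in mm.

    d   : object distance
    d_f : focus distance
    f   : focal length
    N   : f-stop number
    """
    return (abs(d - d_f) / d) * f**2 / (N * (d_f - f))

# c is zero at the focus distance and grows monotonically with d beyond it;
# a larger f-stop number N (smaller aperture) shrinks the CoC.
print(coc_diameter(500.0))           # in focus
print(coc_diameter(1000.0, N=2.0))   # defocused, wide aperture
print(coc_diameter(1000.0, N=8.0))   # same distance, smaller aperture
```

As the assertions below the curve in Fig. 2(b) suggest, c increases with |d − d_f| and decreases with N.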

Fig. 3. Our blur estimation approach: ⊗ and ∇ are the convolution and gradient operators respectively. The black dashed line denotes the edge location.

3 Blur Estimation

Fig. 3 shows an overview of our local blur estimation method. A step edge is re-blurred using a Gaussian function with known standard deviation. Then the ratio between the gradient magnitudes of the step edge and its re-blurred version is calculated. The ratio is maximum at the edge location. Using the maximum value, we can compute the amount of defocus blur of the edge. For convenience, we describe our blur estimation algorithm for the 1D case first and then extend it to 2D images.

The gradient of the re-blurred edge is

∇i₁(x) = ∇(i(x) ⊗ g(x, σ₀))
       = ∇((A u(x) + B) ⊗ g(x, σ) ⊗ g(x, σ₀))
       = (A / √(2π(σ² + σ₀²))) exp(−x² / (2(σ² + σ₀²))),    (4)

where σ₀ is the standard deviation of the re-blur Gaussian function. We call it the re-blur scale. The gradient magnitude ratio between the original and re-blurred edges is

|∇i(x)| / |∇i₁(x)| = √((σ² + σ₀²) / σ²) exp(−(x² / (2σ²) − x² / (2(σ² + σ₀²)))).    (5)

It can be proved that the ratio is maximum at the edge location (x = 0). The maximum value is given by

R = |∇i(0)| / |∇i₁(0)| = √((σ² + σ₀²) / σ²).    (6)

Looking at (4) and (6), we notice that the edge gradient depends on both the edge amplitude A and the blur amount σ, while the maximum of the gradient magnitude ratio R eliminates the effect of the edge amplitude A and depends only on σ and σ₀. Thus, given the maximum value R, we can calculate the unknown blur amount σ as

σ = σ₀ / √(R² − 1).    (7)

For blur estimation in 2D images, we use a 2D isotropic Gaussian function to perform the re-blur. As any direction of a 2D isotropic Gaussian function is a 1D

Gaussian, the blur estimation is similar to the 1D case. In a 2D image, the gradient magnitude is computed as

|∇i(x, y)| = √(i_x² + i_y²),    (8)

where i_x and i_y are the gradients along the x and y directions respectively.

4 Layered Depth Map Extraction

After we obtain the depth estimates at edge locations, we need to propagate them from the edge locations to the regions that do not contain edges. We seek a regularized depth labeling σ̂ which is smooth and close to the estimates in Eq. (7). We also prefer the depth discontinuities to be aligned with the image edges. Thus, we formulate this as an energy minimization over a discrete Markov random field (MRF), whose energy is given by

E(σ̂) = Σ_i V_i(σ̂_i) + λ Σ_i Σ_{j∈N(i)} V_ij(σ̂_i, σ̂_j),    (9)

where each pixel in the image is a node of the MRF and λ balances the single node potential V_i(σ̂_i) and the pairwise potential V_ij(σ̂_i, σ̂_j), which are defined as

V_i(σ̂_i) = M(i)(σ_i − σ̂_i)²,    (10)
V_ij(σ̂_i, σ̂_j) = w_ij(σ̂_i − σ̂_j)²,    (11)

where M(·) is a binary mask that is non-zero only at edge locations, and the weight w_ij = exp{−(I(i) − I(j))²} encodes the difference between the neighboring colors I(i) and I(j). An 8-neighborhood system N(i) is adopted in our definition. We use FastPD [15] to minimize the MRF energy defined in Eq. (9). FastPD guarantees an approximately optimal solution and is much faster than previous MRF optimization methods such as conventional graph cut techniques.

5 Experiments

There are two parameters in our method: the re-blur scale σ₀ and λ. We set them to fixed values, which give good results in all our examples. We use the Canny edge detector [16] and tune its parameters to obtain the desired edge detection output. The depth maps are actually the estimated σ values at each pixel.

We first test the performance of our method on the synthetic bar image shown in Fig. 4(a). The blur amount of the edges increases linearly up to 5. We first add noise to the bar image.
Under noisy conditions, although the results for edges with larger blur amounts are more affected by the noise, our method can still achieve reliable estimation results (see Fig. 4(b)). We then create more bar images with different edge distances. Fig. 4(c) shows that interference from neighboring edges increases the estimation errors when the blur amount is large, but the

errors are kept at a relatively low level. Furthermore, we shift the detected edges to simulate inaccurate edge localization and test our method. The result is shown in Fig. 4(d). When the edge is sharp, the shift of the edge locations causes quite large estimation errors. However, in practice, sharp edges can usually be located very accurately, which greatly reduces the estimation error.

Fig. 4. Performance of our blur estimation method. (a) The synthetic image with blurred edges. (b) Estimation errors under Gaussian noise of different variances. (c) Estimation errors for different edge distances. (d) Estimation errors for different edge shifts. The x and y axes are the blur amount and the corresponding estimation error.

Fig. 5. The depth recovery results of the flower and building images. (a) The input defocused images. (b) The sparse blur maps. (c) The final layered depth maps.
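The synthetic test above can be reproduced in miniature. The following sketch (NumPy/SciPy; the grid size, blur value and re-blur scale are illustrative choices, not the paper's exact settings) blurs a 1D step edge with a known σ, re-blurs it with σ₀, and recovers σ from the gradient magnitude ratio via Eqs. (6)–(7):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_blur(signal, sigma0=2.0, eps=1e-12):
    """Recover the defocus blur sigma of a step edge from the gradient
    magnitude ratio between the signal and a re-blurred copy (Eq. 7)."""
    g1 = np.abs(np.gradient(signal))                             # |grad i|
    g2 = np.abs(np.gradient(gaussian_filter1d(signal, sigma0)))  # |grad i1|
    edge = np.argmax(g1)               # the ratio peaks at the edge location
    R = g1[edge] / (g2[edge] + eps)    # Eq. (6)
    return sigma0 / np.sqrt(R**2 - 1)  # Eq. (7)

# A step edge of amplitude 1, blurred with sigma = 3 pixels.
x = np.zeros(200)
x[100:] = 1.0
blurred = gaussian_filter1d(x, 3.0)
print(estimate_blur(blurred))  # close to 3
```

Because R cancels the edge amplitude, the same estimate is obtained for any A; the small residual error comes from the discrete gradient approximation and the half-pixel offset of the sampled edge.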

Fig. 6. Comparison of our method and the inverse diffusion method. (a) The input image. (b) The result of the inverse diffusion method. (c) Our result. The image is from [13].

Fig. 7. The depth recovery result of the photo frame image. (a) The input defocused image. (b) Recovered layered depth map.

As shown in Fig. 5, we test our method on some real images. In the flower image, the depth of the scene changes continuously from the bottom to the top of the image. The sparse blur map gives a reasonable measure of the blur amount at edge locations, and the depth map reflects the continuous change of the depth. In the building image, there are mainly three depth layers in the scene: the wall in the nearest layer, the buildings in the middle layer, and the sky in the farthest layer. Our method extracts these three layers quite accurately and produces the depth map shown in Fig. 5(c). Both results are obtained using a fixed set of depth labels covering the observed blur range. One more example is the book image shown in Fig. 1, whose result is obtained using 6 depth labels. As we can see from the recovered depth maps, our method is able to obtain a good estimate of the depth of a scene from a single image.

In Fig. 6, we compare our method with the inverse diffusion method [13]. Both methods generate reasonable layered depth maps. However, our method has higher accuracy in local estimation, and thus our depth map captures more details of the depth. As shown in the figure, the difference in depth between the left and right arms can be perceived in our result. In contrast, the inverse diffusion method does not recover this depth difference.
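For completeness, the energy that the propagation step of Sect. 4 minimizes is straightforward to evaluate for any candidate labeling. The sketch below (NumPy; a 4-neighborhood is used for brevity, whereas the paper uses an 8-neighborhood, and FastPD itself is not reimplemented) computes the data and smoothness terms of Eqs. (9)–(11):

```python
import numpy as np

def mrf_energy(sigma_hat, sigma_edge, mask, image, lam=1.0):
    """Energy of a depth labeling sigma_hat (Eq. 9), with data term (Eq. 10)
    and pairwise term (Eq. 11). 4-neighborhood for brevity; paper uses 8.

    sigma_hat  : candidate blur/depth labeling, H x W
    sigma_edge : blur estimates at edge locations, H x W
    mask       : binary edge mask M, H x W
    image      : grayscale intensities I in [0, 1], H x W
    """
    data = np.sum(mask * (sigma_edge - sigma_hat) ** 2)  # Eq. (10)
    smooth = 0.0
    for axis in (0, 1):  # right/down neighbor differences cover each pair once
        d_sigma = np.diff(sigma_hat, axis=axis)
        d_color = np.diff(image, axis=axis)
        w = np.exp(-d_color ** 2)                        # w_ij
        smooth += np.sum(w * d_sigma ** 2)               # Eq. (11)
    return data + lam * smooth
```

A labeling that agrees with the edge estimates and is piecewise constant away from image edges attains low energy, which is exactly the behaviour the discrete optimizer searches for over the label set.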

6 Ambiguities in Depth Recovery

There are two kinds of ambiguities in depth recovery from a single image using the defocus cue. The first one is the focal plane ambiguity. When an object appears blurred in the image, it can be on either side of the focal plane. To remove this ambiguity, most depth from defocus methods, including ours, assume all objects of interest are located on one side of the focal plane. When taking images, we simply put the focus point on the nearest/farthest point in the scene.

The second ambiguity is called the blur/sharp edge ambiguity. The defocus measure we obtain may be due to a sharp edge that is out of focus or a blurred edge that is in focus. This ambiguity is often overlooked by previous work and may cause some artifacts in our results. One example is shown in Fig. 7. The region indicated by the white rectangle is actually blurred texture of the photo in the frame, but our method treats it as sharp edges subject to defocus blur, which results in an erroneous estimate of the depth in that region.

7 Conclusion

In this paper, we show that the depth of a scene can be recovered from a single defocused image. A new method is presented to estimate the blur amount at edge locations based on the gradient magnitude ratio. A layered depth map is then extracted using MRF optimization. We show that our method is robust to noise, inaccurate edge locations and interference from neighboring edges, and can generate more accurate scene depth maps compared with existing methods. We also discuss the ambiguities arising in recovering depth from a single image using the defocus cue. In the future, we would like to apply our blur estimation method to images with motion blur to estimate the blur kernels.

Acknowledgement. The authors would like to thank the anonymous reviewers for their helpful suggestions. The work is supported by NUS Research Grant #R.

References

1. Barnard, S., Fischler, M.: Computational stereo. ACM Comput. Surv. 14(4), 553–572 (1982)
2. Dhond, U., Aggarwal, J.: Structure from stereo: A review. IEEE Trans. Syst. Man Cybern. 19(6), 1489–1510 (1989)
3. Dellaert, F., Seitz, S.M., Thorpe, C.E., Thrun, S.: Structure from motion without correspondence. In: Proc. CVPR (2000)
4. Tomasi, C., Kanade, T.: Shape and motion from image streams under orthography: A factorization method. Int. J. Comput. Vision 9(2), 137–154 (1992)
5. Asada, N., Fujiwara, H., Matsuyama, T.: Edge and depth from focus. Int. J. Comput. Vision 26(2), 153–163 (1998)
6. Nayar, S., Nakagawa, Y.: Shape from focus. IEEE Trans. Pattern Anal. Mach. Intell. 16(8), 824–831 (1994)

7. Favaro, P., Soatto, S.: A geometric approach to shape from defocus. IEEE Trans. Pattern Anal. Mach. Intell. 27(3), 406–417 (2005)
8. Pentland, A.P.: A new sense for depth of field. IEEE Trans. Pattern Anal. Mach. Intell. 9(4), 523–531 (1987)
9. Moreno-Noguer, F., Belhumeur, P.N., Nayar, S.K.: Active refocusing of images and videos. ACM Trans. Graphics 26(3), 67 (2007)
10. Nayar, S.K., Watanabe, M., Noguchi, M.: Real-time focus range sensor. IEEE Trans. Pattern Anal. Mach. Intell. 18(12), 1186–1198 (1996)
11. Levin, A., Fergus, R., Durand, F., Freeman, W.T.: Image and depth from a conventional camera with a coded aperture. ACM Trans. Graphics 26(3) (2007)
12. Saxena, A., Sun, M., Ng, A.: Make3d: Learning 3d scene structure from a single still image. IEEE Trans. Pattern Anal. Mach. Intell. (2008)
13. Namboodiri, V.P., Chaudhuri, S.: Recovery of relative depth from a single observation using an uncalibrated (real-aperture) camera. In: Proc. CVPR (2008)
14. Hecht, E.: Optics, 4th edn. Addison-Wesley, Reading (2002)
15. Komodakis, N., Tziritas, G., Paragios, N.: Performance vs computational efficiency for optimizing single and dynamic MRFs: Setting the state of the art with primal-dual strategies. Comput. Vis. Image Underst. 112(1), 14–29 (2008)
16. Canny, J.: A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986)


More information

Deblurring. Basics, Problem definition and variants

Deblurring. Basics, Problem definition and variants Deblurring Basics, Problem definition and variants Kinds of blur Hand-shake Defocus Credit: Kenneth Josephson Motion Credit: Kenneth Josephson Kinds of blur Spatially invariant vs. Spatially varying

More information

DEFOCUS BLUR PARAMETER ESTIMATION TECHNIQUE

DEFOCUS BLUR PARAMETER ESTIMATION TECHNIQUE International Journal of Electronics and Communication Engineering and Technology (IJECET) Volume 7, Issue 4, July-August 2016, pp. 85 90, Article ID: IJECET_07_04_010 Available online at http://www.iaeme.com/ijecet/issues.asp?jtype=ijecet&vtype=7&itype=4

More information

Coded photography , , Computational Photography Fall 2018, Lecture 14

Coded photography , , Computational Photography Fall 2018, Lecture 14 Coded photography http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 14 Overview of today s lecture The coded photography paradigm. Dealing with

More information

Constrained Unsharp Masking for Image Enhancement

Constrained Unsharp Masking for Image Enhancement Constrained Unsharp Masking for Image Enhancement Radu Ciprian Bilcu and Markku Vehvilainen Nokia Research Center, Visiokatu 1, 33720, Tampere, Finland radu.bilcu@nokia.com, markku.vehvilainen@nokia.com

More information

Improved motion invariant imaging with time varying shutter functions

Improved motion invariant imaging with time varying shutter functions Improved motion invariant imaging with time varying shutter functions Steve Webster a and Andrew Dorrell b Canon Information Systems Research, Australia (CiSRA), Thomas Holt Drive, North Ryde, Australia

More information

Hand segmentation using a chromatic 3D camera

Hand segmentation using a chromatic 3D camera Hand segmentation using a chromatic D camera P. Trouvé, F. Champagnat, M. Sanfourche, G. Le Besnerais To cite this version: P. Trouvé, F. Champagnat, M. Sanfourche, G. Le Besnerais. Hand segmentation using

More information

Focused Image Recovery from Two Defocused

Focused Image Recovery from Two Defocused Focused Image Recovery from Two Defocused Images Recorded With Different Camera Settings Murali Subbarao Tse-Chung Wei Gopal Surya Department of Electrical Engineering State University of New York Stony

More information

Depth Estimation Algorithm for Color Coded Aperture Camera

Depth Estimation Algorithm for Color Coded Aperture Camera Depth Estimation Algorithm for Color Coded Aperture Camera Ivan Panchenko, Vladimir Paramonov and Victor Bucha; Samsung R&D Institute Russia; Moscow, Russia Abstract In this paper we present an algorithm

More information

A Recognition of License Plate Images from Fast Moving Vehicles Using Blur Kernel Estimation

A Recognition of License Plate Images from Fast Moving Vehicles Using Blur Kernel Estimation A Recognition of License Plate Images from Fast Moving Vehicles Using Blur Kernel Estimation Kalaivani.R 1, Poovendran.R 2 P.G. Student, Dept. of ECE, Adhiyamaan College of Engineering, Hosur, Tamil Nadu,

More information

Correcting Over-Exposure in Photographs

Correcting Over-Exposure in Photographs Correcting Over-Exposure in Photographs Dong Guo, Yuan Cheng, Shaojie Zhuo and Terence Sim School of Computing, National University of Singapore, 117417 {guodong,cyuan,zhuoshao,tsim}@comp.nus.edu.sg Abstract

More information

6.A44 Computational Photography

6.A44 Computational Photography Add date: Friday 6.A44 Computational Photography Depth of Field Frédo Durand We allow for some tolerance What happens when we close the aperture by two stop? Aperture diameter is divided by two is doubled

More information

Computer Vision. Howie Choset Introduction to Robotics

Computer Vision. Howie Choset   Introduction to Robotics Computer Vision Howie Choset http://www.cs.cmu.edu.edu/~choset Introduction to Robotics http://generalrobotics.org What is vision? What is computer vision? Edge Detection Edge Detection Interest points

More information

A Review over Different Blur Detection Techniques in Image Processing

A Review over Different Blur Detection Techniques in Image Processing A Review over Different Blur Detection Techniques in Image Processing 1 Anupama Sharma, 2 Devarshi Shukla 1 E.C.E student, 2 H.O.D, Department of electronics communication engineering, LR College of engineering

More information

Fast Blur Removal for Wearable QR Code Scanners (supplemental material)

Fast Blur Removal for Wearable QR Code Scanners (supplemental material) Fast Blur Removal for Wearable QR Code Scanners (supplemental material) Gábor Sörös, Stephan Semmler, Luc Humair, Otmar Hilliges Department of Computer Science ETH Zurich {gabor.soros otmar.hilliges}@inf.ethz.ch,

More information

Point Spread Function Engineering for Scene Recovery. Changyin Zhou

Point Spread Function Engineering for Scene Recovery. Changyin Zhou Point Spread Function Engineering for Scene Recovery Changyin Zhou Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences

More information

Design of Temporally Dithered Codes for Increased Depth of Field in Structured Light Systems

Design of Temporally Dithered Codes for Increased Depth of Field in Structured Light Systems Design of Temporally Dithered Codes for Increased Depth of Field in Structured Light Systems Ricardo R. Garcia University of California, Berkeley Berkeley, CA rrgarcia@eecs.berkeley.edu Abstract In recent

More information

Blind Single-Image Super Resolution Reconstruction with Defocus Blur

Blind Single-Image Super Resolution Reconstruction with Defocus Blur Sensors & Transducers 2014 by IFSA Publishing, S. L. http://www.sensorsportal.com Blind Single-Image Super Resolution Reconstruction with Defocus Blur Fengqing Qin, Lihong Zhu, Lilan Cao, Wanan Yang Institute

More information

LENSLESS IMAGING BY COMPRESSIVE SENSING

LENSLESS IMAGING BY COMPRESSIVE SENSING LENSLESS IMAGING BY COMPRESSIVE SENSING Gang Huang, Hong Jiang, Kim Matthews and Paul Wilford Bell Labs, Alcatel-Lucent, Murray Hill, NJ 07974 ABSTRACT In this paper, we propose a lensless compressive

More information

The ultimate camera. Computational Photography. Creating the ultimate camera. The ultimate camera. What does it do?

The ultimate camera. Computational Photography. Creating the ultimate camera. The ultimate camera. What does it do? Computational Photography The ultimate camera What does it do? Image from Durand & Freeman s MIT Course on Computational Photography Today s reading Szeliski Chapter 9 The ultimate camera Infinite resolution

More information

Coded photography , , Computational Photography Fall 2017, Lecture 18

Coded photography , , Computational Photography Fall 2017, Lecture 18 Coded photography http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 18 Course announcements Homework 5 delayed for Tuesday. - You will need cameras

More information

What are Good Apertures for Defocus Deblurring?

What are Good Apertures for Defocus Deblurring? What are Good Apertures for Defocus Deblurring? Changyin Zhou, Shree Nayar Abstract In recent years, with camera pixels shrinking in size, images are more likely to include defocused regions. In order

More information

Extended depth of field for visual measurement systems with depth-invariant magnification

Extended depth of field for visual measurement systems with depth-invariant magnification Extended depth of field for visual measurement systems with depth-invariant magnification Yanyu Zhao a and Yufu Qu* a,b a School of Instrument Science and Opto-Electronic Engineering, Beijing University

More information

Image Processing for feature extraction

Image Processing for feature extraction Image Processing for feature extraction 1 Outline Rationale for image pre-processing Gray-scale transformations Geometric transformations Local preprocessing Reading: Sonka et al 5.1, 5.2, 5.3 2 Image

More information

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES 4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES Abstract: This paper attempts to undertake the study of deblurring techniques for Restored Motion Blurred Images by using: Wiener filter,

More information

SHAPE FROM FOCUS. Keywords defocus, focus operator, focus measure function, depth estimation, roughness and tecture, automatic shapefromfocus.

SHAPE FROM FOCUS. Keywords defocus, focus operator, focus measure function, depth estimation, roughness and tecture, automatic shapefromfocus. SHAPE FROM FOCUS k.kanthamma*, Dr S.A.K.Jilani** *(Department of electronics and communication engineering, srinivasa ramanujan institute of technology, Anantapur,Andrapradesh,INDIA ** (Department of electronics

More information

A Novel Image Deblurring Method to Improve Iris Recognition Accuracy

A Novel Image Deblurring Method to Improve Iris Recognition Accuracy A Novel Image Deblurring Method to Improve Iris Recognition Accuracy Jing Liu University of Science and Technology of China National Laboratory of Pattern Recognition, Institute of Automation, Chinese

More information

Modeling and Synthesis of Aperture Effects in Cameras

Modeling and Synthesis of Aperture Effects in Cameras Modeling and Synthesis of Aperture Effects in Cameras Douglas Lanman, Ramesh Raskar, and Gabriel Taubin Computational Aesthetics 2008 20 June, 2008 1 Outline Introduction and Related Work Modeling Vignetting

More information

Blur Estimation for Barcode Recognition in Out-of-Focus Images

Blur Estimation for Barcode Recognition in Out-of-Focus Images Blur Estimation for Barcode Recognition in Out-of-Focus Images Duy Khuong Nguyen, The Duy Bui, and Thanh Ha Le Human Machine Interaction Laboratory University Engineering and Technology Vietnam National

More information

Detail Recovery for Single-image Defocus Blur

Detail Recovery for Single-image Defocus Blur IPSJ Transactions on Computer Vision and Applications Vol. 1 1 10 (Mar. 2009) Regular Paper Detail Recovery for Single-image Defocus Blur 1 Yu-Wing Tai, 1 Huixuan Tang, 2 Michael S. Brown 1 and Stephen

More information

Supplementary Material of

Supplementary Material of Supplementary Material of Efficient and Robust Color Consistency for Community Photo Collections Jaesik Park Intel Labs Yu-Wing Tai SenseTime Sudipta N. Sinha Microsoft Research In So Kweon KAIST In the

More information

Method for out-of-focus camera calibration

Method for out-of-focus camera calibration 2346 Vol. 55, No. 9 / March 20 2016 / Applied Optics Research Article Method for out-of-focus camera calibration TYLER BELL, 1 JING XU, 2 AND SONG ZHANG 1, * 1 School of Mechanical Engineering, Purdue

More information

Refocusing Phase Contrast Microscopy Images

Refocusing Phase Contrast Microscopy Images Refocusing Phase Contrast Microscopy Images Liang Han and Zhaozheng Yin (B) Department of Computer Science, Missouri University of Science and Technology, Rolla, USA lh248@mst.edu, yinz@mst.edu Abstract.

More information

CS6670: Computer Vision Noah Snavely. Administrivia. Administrivia. Reading. Last time: Convolution. Last time: Cross correlation 9/8/2009

CS6670: Computer Vision Noah Snavely. Administrivia. Administrivia. Reading. Last time: Convolution. Last time: Cross correlation 9/8/2009 CS667: Computer Vision Noah Snavely Administrivia New room starting Thursday: HLS B Lecture 2: Edge detection and resampling From Sandlot Science Administrivia Assignment (feature detection and matching)

More information

To Denoise or Deblur: Parameter Optimization for Imaging Systems

To Denoise or Deblur: Parameter Optimization for Imaging Systems To Denoise or Deblur: Parameter Optimization for Imaging Systems Kaushik Mitra a, Oliver Cossairt b and Ashok Veeraraghavan a a Electrical and Computer Engineering, Rice University, Houston, TX 77005 b

More information

Realistic Image Synthesis

Realistic Image Synthesis Realistic Image Synthesis - HDR Capture & Tone Mapping - Philipp Slusallek Karol Myszkowski Gurprit Singh Karol Myszkowski LDR vs HDR Comparison Various Dynamic Ranges (1) 10-6 10-4 10-2 100 102 104 106

More information

Sharpness Metric Based on Line Local Binary Patterns and a Robust segmentation Algorithm for Defocus Blur

Sharpness Metric Based on Line Local Binary Patterns and a Robust segmentation Algorithm for Defocus Blur Sharpness Metric Based on Line Local Binary Patterns and a Robust segmentation Algorithm for Defocus Blur 1 Ravi Barigala, M.Tech,Email.Id: ravibarigala149@gmail.com 2 Dr.V.S.R. Kumari, M.E, Ph.D, Professor&HOD,

More information

Frequency Domain Enhancement

Frequency Domain Enhancement Tutorial Report Frequency Domain Enhancement Page 1 of 21 Frequency Domain Enhancement ESE 558 - DIGITAL IMAGE PROCESSING Tutorial Report Instructor: Murali Subbarao Written by: Tutorial Report Frequency

More information

Multi Focus Structured Light for Recovering Scene Shape and Global Illumination

Multi Focus Structured Light for Recovering Scene Shape and Global Illumination Multi Focus Structured Light for Recovering Scene Shape and Global Illumination Supreeth Achar and Srinivasa G. Narasimhan Robotics Institute, Carnegie Mellon University Abstract. Illumination defocus

More information

Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision Anat Levin, William Freeman, and Fredo Durand

Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision Anat Levin, William Freeman, and Fredo Durand Computer Science and Artificial Intelligence Laboratory Technical Report MIT-CSAIL-TR-2008-049 July 28, 2008 Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision

More information

DEPTH FUSED FROM INTENSITY RANGE AND BLUR ESTIMATION FOR LIGHT-FIELD CAMERAS. Yatong Xu, Xin Jin and Qionghai Dai

DEPTH FUSED FROM INTENSITY RANGE AND BLUR ESTIMATION FOR LIGHT-FIELD CAMERAS. Yatong Xu, Xin Jin and Qionghai Dai DEPTH FUSED FROM INTENSITY RANGE AND BLUR ESTIMATION FOR LIGHT-FIELD CAMERAS Yatong Xu, Xin Jin and Qionghai Dai Shenhen Key Lab of Broadband Network and Multimedia, Graduate School at Shenhen, Tsinghua

More information

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University CS534 Introduction to Computer Vision Linear Filters Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What are Filters Linear Filters Convolution operation Properties of Linear Filters

More information

Image Denoising using Dark Frames

Image Denoising using Dark Frames Image Denoising using Dark Frames Rahul Garg December 18, 2009 1 Introduction In digital images there are multiple sources of noise. Typically, the noise increases on increasing ths ISO but some noise

More information

1.Discuss the frequency domain techniques of image enhancement in detail.

1.Discuss the frequency domain techniques of image enhancement in detail. 1.Discuss the frequency domain techniques of image enhancement in detail. Enhancement In Frequency Domain: The frequency domain methods of image enhancement are based on convolution theorem. This is represented

More information

Single Camera Catadioptric Stereo System

Single Camera Catadioptric Stereo System Single Camera Catadioptric Stereo System Abstract In this paper, we present a framework for novel catadioptric stereo camera system that uses a single camera and a single lens with conic mirrors. Various

More information

Coded Aperture and Coded Exposure Photography

Coded Aperture and Coded Exposure Photography Coded Aperture and Coded Exposure Photography Martin Wilson University of Cape Town Cape Town, South Africa Email: Martin.Wilson@uct.ac.za Fred Nicolls University of Cape Town Cape Town, South Africa Email:

More information

Vision Review: Image Processing. Course web page:

Vision Review: Image Processing. Course web page: Vision Review: Image Processing Course web page: www.cis.udel.edu/~cer/arv September 7, Announcements Homework and paper presentation guidelines are up on web page Readings for next Tuesday: Chapters 6,.,

More information

To Do. Advanced Computer Graphics. Outline. Computational Imaging. How do we see the world? Pinhole camera

To Do. Advanced Computer Graphics. Outline. Computational Imaging. How do we see the world? Pinhole camera Advanced Computer Graphics CSE 163 [Spring 2017], Lecture 14 Ravi Ramamoorthi http://www.cs.ucsd.edu/~ravir To Do Assignment 2 due May 19 Any last minute issues or questions? Next two lectures: Imaging,

More information

OFFSET AND NOISE COMPENSATION

OFFSET AND NOISE COMPENSATION OFFSET AND NOISE COMPENSATION AO 10V 8.1 Offset and fixed pattern noise reduction Offset variation - shading AO 10V 8.2 Row Noise AO 10V 8.3 Offset compensation Global offset calibration Dark level is

More information

Single-shot three-dimensional imaging of dilute atomic clouds

Single-shot three-dimensional imaging of dilute atomic clouds Calhoun: The NPS Institutional Archive Faculty and Researcher Publications Funded by Naval Postgraduate School 2014 Single-shot three-dimensional imaging of dilute atomic clouds Sakmann, Kaspar http://hdl.handle.net/10945/52399

More information

Prof. Feng Liu. Winter /10/2019

Prof. Feng Liu. Winter /10/2019 Prof. Feng Liu Winter 29 http://www.cs.pdx.edu/~fliu/courses/cs4/ //29 Last Time Course overview Admin. Info Computer Vision Computer Vision at PSU Image representation Color 2 Today Filter 3 Today Filters

More information

Computational Camera & Photography: Coded Imaging

Computational Camera & Photography: Coded Imaging Computational Camera & Photography: Coded Imaging Camera Culture Ramesh Raskar MIT Media Lab http://cameraculture.media.mit.edu/ Image removed due to copyright restrictions. See Fig. 1, Eight major types

More information

Extended depth-of-field in Integral Imaging by depth-dependent deconvolution

Extended depth-of-field in Integral Imaging by depth-dependent deconvolution Extended depth-of-field in Integral Imaging by depth-dependent deconvolution H. Navarro* 1, G. Saavedra 1, M. Martinez-Corral 1, M. Sjöström 2, R. Olsson 2, 1 Dept. of Optics, Univ. of Valencia, E-46100,

More information

Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images

Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images Keshav Thakur 1, Er Pooja Gupta 2,Dr.Kuldip Pahwa 3, 1,M.Tech Final Year Student, Deptt. of ECE, MMU Ambala,

More information