Edge Width Estimation for Defocus Map from a Single Image

Andrey Nasonov, Alexandra Nasonova, and Andrey Krylov

Laboratory of Mathematical Methods of Image Processing, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Moscow, Russia
kryl@cs.msu.ru

Abstract. The paper presents a new edge width estimation method based on a Gaussian edge model and unsharp mask analysis. The proposed method is accurate and robust to noise. Its effectiveness is demonstrated by applying it to the problem of defocus map estimation from a single image. A sparse defocus map is constructed using an edge detection algorithm followed by the proposed edge width estimation algorithm. The full defocus map is then obtained by propagating the blur amount at edge locations to the entire image. Experimental results show the effectiveness of the proposed method in providing a reliable estimation of the defocus map.

Keywords: Edge width · Image blur · Defocus map · Edge model

1 Introduction

There are two general approaches to defocus estimation: methods that require multiple images [5,15] and methods that use only one image [1,4]. The former use a set of images captured with multiple camera focus settings. This approach has limited applicability due to the occlusion problem and the requirement that the scene be static. The latter split the problem into two steps: construction of a sparse defocus map via blur level estimation at edge locations, and computation of the full defocus map using a propagation method.

Elder and Zucker [4] find the locations and the blur amount of edges from the first- and second-order derivatives of the input image; they obtain only a sparse defocus map. Bae and Durand [1] extend this work to produce a full defocus map from the sparse map with a defocus interpolation method. In [16] the authors propose a blur estimation method based on the Gaussian gradient ratio and show that it is robust to noise, inaccurate edge location and interference from neighboring edges. In [14] image blur estimation is based on the observation that defocusing significantly affects the spectrum amplitude at object edge locations. Both of these methods use the matting Laplacian [8] for defocus map interpolation and represent the current state of the art; references to other methods can be found in these papers.

General-purpose blur estimation methods can also be used for sparse defocus map construction. The method in [6] is based on the assumption that the image blur is close to Gaussian. The image is divided into blocks, and the blur kernel is assumed to be uniform inside each block. The blurriness of a block is estimated from the maximum of the difference ratio between the original image and its two re-blurred versions. The block-based approach provides good blur estimates for highly textured areas, but it gives inadequate results for blocks that contain no edges, for example, flat areas.

There are simple and fast methods for blur estimation at edge locations [9-11], but in general they are not adequate for noisy images. Many methods use a Gaussian filter as an approximation of the blur. In [2,4] the authors propose multi-scale edge detection methods in which the detection scale of each edge roughly approximates its blur level. In [13] neural networks are used. In [7] the edge neighborhood is expanded in radially symmetric functions, with the method of principal components used as a classifier of the blur level.

In this work we present a novel approach to edge width estimation, first suggested in [12], which is accurate and robust to noise. We then demonstrate the effectiveness of the proposed method for the problem of obtaining the defocus map from a single image.

2 Edge Width

2.1 Gaussian Edge Model

We model an edge as the convolution of an ideal step edge with a Gaussian filter (see Fig. 1):

E_\sigma(x) = (H * G_\sigma)(x),

where * denotes convolution.

Fig. 1. Edge model.

In our model we use the following definitions of the Gaussian filter and the ideal step edge:

G_\sigma(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{x^2}{2\sigma^2}\right), \qquad H(x) = \begin{cases} 1, & x \ge 0, \\ 0, & x < 0. \end{cases}
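For intuition, the model edge is easy to generate numerically. The short Python sketch below is our illustration, not part of the paper: it discretizes H(x) on an arbitrary grid and blurs it with scipy.ndimage.gaussian_filter1d to obtain E_σ(x); the grid size and σ are arbitrary choices.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def model_edge(n=401, sigma=3.0):
    # Discretized Gaussian edge model E_sigma = H * G_sigma on an n-sample grid.
    step = (np.arange(n) >= n // 2).astype(float)   # ideal step edge H(x)
    return gaussian_filter1d(step, sigma)           # blur with G_sigma

# The transition from 0 to 1 widens as sigma grows; this transition width is the
# quantity estimated in the next subsection.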

It should be noted that H(kx) = H(x) for all k > 0, which leads to the following property of the model edge:

E_\sigma(x) = E_{\sigma'}\left(x\,\frac{\sigma'}{\sigma}\right) \quad \text{for all } \sigma > 0,\ \sigma' > 0.   (1)

2.2 Estimation of Edge Width

For the estimation of edge width we use the unsharp masking approach. Let U_{σ,α}[E_{σ_0}](x) be the result of unsharp masking applied to the edge E_{σ_0}(x):

U_{\sigma,\alpha}[E_{\sigma_0}](x) = (1+\alpha)\,E_{\sigma_0}(x) - \alpha\,(E_{\sigma_0} * G_\sigma)(x) = (1+\alpha)\,E_{\sigma_0}(x) - \alpha\,E_{\sqrt{\sigma_0^2+\sigma^2}}(x).   (2)

Using (1) and supposing σ = σ_0 = σ_1, (2) gives

U_{\sigma_1,\alpha}[E_{\sigma_1}](x) = (1+\alpha)\,E_{\sigma_1}(x) - \alpha\,E_{\sqrt{2}\sigma_1}(x) = (1+\alpha)\,E_{\sigma_2}\left(x\,\frac{\sigma_2}{\sigma_1}\right) - \alpha\,E_{\sqrt{2}\sigma_2}\left(x\,\frac{\sigma_2}{\sigma_1}\right) = U_{\sigma_2,\alpha}[E_{\sigma_2}]\left(x\,\frac{\sigma_2}{\sigma_1}\right).   (3)

Due to (3), for a fixed value of the parameter α the intensity values of the corresponding extrema of U_{σ,α}[E_σ](x) are the same for all σ > 0 (note that here the unsharp mask uses the same σ as the model edge):

U^*(\alpha) = \max_x U_{\sigma,\alpha}[E_\sigma](x), \qquad U_*(\alpha) = \min_x U_{\sigma,\alpha}[E_\sigma](x) = 1 - U^*(\alpha).

Thus, taking into account the monotonicity of U_{σ,α}[E_{σ_0}](x) as a function of σ, which follows from (3) and the properties of Gaussian functions:

\sigma < \sigma_0: \quad \max_x U_{\sigma,\alpha}[E_{\sigma_0}](x) < U^*(\alpha), \quad \min_x U_{\sigma,\alpha}[E_{\sigma_0}](x) > U_*(\alpha),
\sigma > \sigma_0: \quad \max_x U_{\sigma,\alpha}[E_{\sigma_0}](x) > U^*(\alpha), \quad \min_x U_{\sigma,\alpha}[E_{\sigma_0}](x) < U_*(\alpha).   (4)
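As a numerical check of this invariance (our sketch, not from the paper), the following code applies the unsharp mask of (2) with σ equal to the σ of the model edge itself and prints the maximum; for α = 4 it stays close to 1.24 for every σ, which is the constant U*(4) used by the algorithm of Sect. 2.3. The grid length and the particular σ values are arbitrary.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def unsharp(profile, sigma, alpha=4.0):
    # U_{sigma,alpha}[profile](x) = (1 + alpha)*profile(x) - alpha*(profile * G_sigma)(x), eq. (2)
    return (1 + alpha) * profile - alpha * gaussian_filter1d(profile, sigma)

for sigma in (5.0, 10.0, 20.0):
    edge = gaussian_filter1d((np.arange(2001) >= 1000).astype(float), sigma)  # model edge E_sigma
    print(sigma, round(unsharp(edge, sigma).max(), 3))   # close to 1.24 for every sigma: U*(4)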

2.3 The Edge Width Estimation Algorithm

The edge width estimation algorithm takes the following form:

1. Given values: α, U^*(α), a one-dimensional edge profile E_{σ_0}(x).
2. For σ = σ_min to σ_max with step σ_step:
   compute U_{σ,α}[E_{σ_0}](x);
   find the local maximum x_max of U_{σ,α}[E_{σ_0}](x);
   if U_{σ,α}[E_{σ_0}](x_max) ≥ U^*(α), set result = σ and stop the cycle.
3. Output: result.

We use α = 4, U^*(4) ≈ 1.24, σ_min = 0.5 (the smallest possible value of edge blur due to the digitization of the image) and σ_max = 10; the value of σ_step is fixed at 0.1, which is an acceptable accuracy for this task.

3 Defocus Blur Estimation

3.1 Sparse Defocus Map

The previous section deals with isolated edge profiles with values from 0 to 1. In practice, real image edge profiles are rarely isolated, and their amplitudes differ. To build the sparse blur map, we first obtain an edge map with an edge detector (we use the Canny edge detector [3]). For each edge pixel we construct an edge profile along the gradient direction at this pixel using an interpolation method (we use bilinear interpolation). The values of the edge profile are then scaled to the interval from 0 to 1, and the algorithm from Sect. 2.3 is applied. For non-isolated edge profiles we also isolate the central edge: we find the nearest local maximum to the right of the center and the nearest local minimum to the left, and duplicate those values outwards (see Fig. 2).

Fig. 2. Edge profile examples and the result of the proposed edge width estimation method. The blue thin line is the original profile, the red thick line is the isolated edge, the green dashed line is a model edge with the estimated edge width.

3.2 Full Defocus Map

Given the sparse defocus map at edge locations, the full defocus map is recovered by an edge-aware interpolation method [8]. We use the MATLAB software provided by Zhuo and Sim [16] and substitute their sparse defocus map with ours. A simplified end-to-end sketch of Sects. 2.3-3.2 is given below.
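The following compact Python sketch is our illustration and not the authors' MATLAB implementation: estimate_edge_width implements the σ sweep of Sect. 2.3, sparse_defocus_map builds a sparse map from Canny edge pixels as in Sect. 3.1 (the central-edge isolation step is omitted for brevity), and propagate_defocus replaces the matting-Laplacian interpolation of [8] with a much simpler intensity-weighted graph Laplacian. The function names, sampling radius, Canny scale, λ and σ_c are illustrative assumptions; NumPy, SciPy and scikit-image are assumed to be available.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.ndimage import gaussian_filter, gaussian_filter1d, map_coordinates
from skimage.feature import canny

def estimate_edge_width(profile, alpha=4.0, u_star=1.24,
                        sigma_min=0.5, sigma_max=10.0, sigma_step=0.1):
    # Sweep sigma until the unsharp-mask maximum reaches U*(alpha)  (Sect. 2.3).
    p = profile.astype(float)
    p = (p - p.min()) / (p.max() - p.min() + 1e-12)           # scale profile to [0, 1]
    for sigma in np.arange(sigma_min, sigma_max + sigma_step, sigma_step):
        u = (1 + alpha) * p - alpha * gaussian_filter1d(p, sigma)
        if u.max() >= u_star:                                  # first crossing, cf. (4)
            return sigma
    return sigma_max

def sparse_defocus_map(img, radius=15):
    # Edge width at Canny edge pixels, zero elsewhere  (Sect. 3.1, simplified).
    edges = canny(img, sigma=1.0)                              # assumed Canny scale
    gy, gx = np.gradient(gaussian_filter(img, 1.0))
    t = np.arange(-radius, radius + 1, dtype=float)
    sparse = np.zeros_like(img, dtype=float)
    for y, x in zip(*np.nonzero(edges)):
        norm = np.hypot(gx[y, x], gy[y, x])
        if norm < 1e-6:
            continue
        dy, dx = gy[y, x] / norm, gx[y, x] / norm              # unit gradient direction
        profile = map_coordinates(img, [y + t * dy, x + t * dx],
                                  order=1, mode='nearest')     # bilinear sampling
        sparse[y, x] = estimate_edge_width(profile)
    return sparse, edges

def propagate_defocus(img, sparse, mask, lam=0.05, sigma_c=0.1):
    # Edge-aware propagation: minimize d'Ld + lam*||d - sparse||^2 on the mask
    # (a simplified stand-in for the matting-Laplacian interpolation of [8]).
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    for dy, dx in ((0, 1), (1, 0)):                            # 4-neighbour intensity affinities
        a, b = idx[:h - dy, :w - dx].ravel(), idx[dy:, dx:].ravel()
        wgt = np.exp(-((img[:h - dy, :w - dx] - img[dy:, dx:]) ** 2).ravel()
                     / (2 * sigma_c ** 2))
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
    W = sp.coo_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W        # graph Laplacian
    D = sp.diags(mask.ravel().astype(float))                   # data term at edge pixels only
    d = spla.spsolve((L + lam * D).tocsr(), lam * (D @ sparse.ravel()))
    return d.reshape(h, w)

For a grayscale image img with values in [0, 1], calling sparse, edges = sparse_defocus_map(img) followed by full_map = propagate_defocus(img, sparse, edges) produces a dense defocus estimate; the direct sparse solve is adequate for small images, while larger images call for a more scalable solver or the original propagation code of [16].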

4 Results and Discussion

We demonstrate the effectiveness of the proposed method by comparing our full defocus maps with the results provided by [14] and [16]. The proposed edge width estimation method works well on images with a relatively low amount of noise, which is usually the case for defocused images. For noisy images, some preliminary blurring can be applied.

In Fig. 3 it can be seen that the proposed method makes the background more homogeneous than [16]. In Fig. 4 the proposed method correctly processes the grass area and produces a more gradual depth change. In Fig. 5 the proposed method and the methods [14,16] show results of almost the same quality.

Fig. 3. Full defocus map for the bird image: (a) input image, (b) edge map for the proposed algorithm, (c) edge width estimation by Zhuo and Sim [16], (d) proposed edge width estimation. Blue and purple areas are sharp regions; yellow and white areas are blurry regions.

Fig. 4. Full defocus map for the pumpkin image: (a) input image, (b) Zhuo and Sim [16], (c) proposed edge width.

Fig. 5. Full defocus map for the flower image: (a) input image, (b) Zhuo and Sim [16], (c) Tang et al. [14], (d) proposed edge width.

4.1 Possible Improvements

Fig. 6 presents a challenging case for existing defocus map estimation algorithms, because they construct the sparse defocus map using edge pixels only. As a possible improvement, we suggest adding ridges and textures to the sparse defocus map.

Ridges are linear image structures that also carry important information about image blur. Edge detection algorithms usually detect the borders of only some ridges, and ignoring ridges results in thin lines missing from the defocus map. For example, the method [14] does not include the stems in the foreground in Fig. 6.

Textured areas contain multiple edges that are hard to analyze with edge detection algorithms. Dedicated algorithms for blur estimation in textured areas may improve the accuracy of blur estimation there. Both the proposed method and the method [16] falsely include the bottom of the image in the foreground in Fig. 6 due to the lack of edges provided by the edge detection algorithm.

Fig. 6. Full defocus map for the second flower image: (a) input image, (b) Zhuo and Sim [16], (c) Tang et al. [14], (d) proposed method.

5 Conclusion

In this paper we have presented a new edge width estimation method and its application to the problem of obtaining a full defocus map from a single image.

The proposed edge width estimation is accurate, robust to noise and edge interference, and can be used to generate an accurate sparse defocus map. Possible ways to improve the defocus map have been discussed.

The work was supported by the Russian Science Foundation grant 14-11-00308.

References

1. Bae, S., Durand, F.: Defocus magnification. Computer Graphics Forum 26(3), 571-579 (2007)
2. Basu, M.: Gaussian-based edge-detection methods - a survey. IEEE Transactions on Systems, Man and Cybernetics, Part C 32, 252-260 (2002)
3. Canny, J.: A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 8, 679-698 (1986)
4. Elder, J.H., Zucker, S.W.: Local scale control for edge detection and blur estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(7), 699-716 (1998)
5. Favaro, P., Soatto, S.: A geometric approach to shape from defocus. IEEE Transactions on Pattern Analysis and Machine Intelligence 27(3), 406-417 (2005)
6. Hu, H., de Haan, G.: Low cost robust blur estimator. In: IEEE International Conference on Image Processing, pp. 617-620 (2006)
7. Hua, Z., Wei, Z., Yaowu, C.: A no-reference perceptual blur metric by using OLS-RBF network. In: Pacific-Asia Workshop on Computational Intelligence and Industrial Application (PACIIA 2008), vol. 1, pp. 1007-1011 (2008)
8. Levin, A., Lischinski, D., Weiss, Y.: A closed-form solution to natural image matting. IEEE Transactions on Pattern Analysis and Machine Intelligence 30(2), 228-242 (2008)
9. Marziliano, P., Dufaux, F., Winkler, S., Ebrahimi, T.: A no-reference perceptual blur metric. In: Proceedings of the International Conference on Image Processing, vol. 3, pp. 57-60 (2002)
10. Marziliano, P., Dufaux, F., Winkler, S., Ebrahimi, T.: Perceptual blur and ringing metrics: application to JPEG2000. Signal Processing: Image Communication 19(2), 163-172 (2004)
11. Narvekar, N.D., Karam, L.J.: A no-reference perceptual image sharpness metric based on a cumulative probability of blur detection. In: International Workshop on Quality of Multimedia Experience (QoMEX 2009) (2009)
12. Nasonova, A.A., Krylov, A.S.: Determination of image edge width by unsharp masking. Computational Mathematics and Modeling 25(1), 72-78 (2014)
13. Suzuki, K., Horiba, I., Sugie, N.: Neural edge enhancer for supervised edge enhancement from noisy images. IEEE Transactions on Pattern Analysis and Machine Intelligence 25, 1582-1596 (2003)
14. Tang, C., Hou, C., Song, Z.: Defocus map estimation from a single image via spectrum contrast. Optics Letters 38(10), 1706-1708 (2013)
15. Zhou, C., Cossairt, O., Nayar, S.: Depth from diffusion. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1110-1117 (2010)
16. Zhuo, S., Sim, T.: Defocus map estimation from a single image. Pattern Recognition 44(9), 1852-1858 (2011)
