Image preprocessing in spatial domain


Image preprocessing in spatial domain: convolution, convolution theorem, cross-correlation
Tomáš Svoboda, Czech Technical University, Faculty of Electrical Engineering, Center for Machine Perception, Prague, Czech Republic
svoboda@cmp.felk.cvut.cz, http://cmp.felk.cvut.cz/~svoboda

Noise in images
Noise comes from the deterioration of the analog signal; CCD/CMOS chips are not perfect. Typically, the smaller the active surface of a sensor element, the more noise.

How to suppress noise?
- keep the signal digital only, i.e. no A/D and D/A conversion: ok
- larger chips: expensive, and they require expensive lenses
- cooled cameras (astronomy): slow, expensive
- (local) image preprocessing

Example scene
Sample video from a static camera: http://cmp.felk.cvut.cz/cmp/courses/ezs/demos/noise in camera.avi

Statistical point of view: independent observations
Suppose we can acquire N images of the same scene. For each pixel we obtain N results x_i, i = 1 ... N. Assume each x_i has E{x_i} = µ and var{x_i} = σ².

Properties of the average value s_N = (1/N) Σ_{i=1}^{N} x_i:

Expectation: E{s_N} = (1/N) Σ_{i=1}^{N} E{x_i} = µ.

Variance: we know that var{x_i/N} = var{x_i}/N², thus

var{s_N} = var{x_1}/N² + var{x_2}/N² + ... + var{x_N}/N² = σ²/N,

which means that the standard deviation of s_N decreases as 1/√N.

Example
[Figure: a noisy image vs. the average of 6 observations.]
Example, equalized
[Figure: the same comparison after histogram equalization.]
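A minimal numpy sketch of this effect on a synthetic constant scene with Gaussian noise (the array sizes, noise level and N below are illustrative choices, not values from the lecture): averaging N noisy observations shrinks the noise standard deviation by roughly 1/√N.

    import numpy as np

    rng = np.random.default_rng(0)
    clean = np.full((256, 256), 0.5)          # hypothetical constant "scene"
    sigma = 0.1                               # noise standard deviation
    N = 16                                    # number of observations

    # N independent noisy observations of the same scene
    obs = clean + sigma * rng.standard_normal((N,) + clean.shape)

    avg = obs.mean(axis=0)                    # s_N, the per-pixel average

    print(np.std(obs[0] - clean))             # ~ sigma
    print(np.std(avg - clean))                # ~ sigma / sqrt(N)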

Standard deviations in pixels, for images
[Figure: per-pixel standard deviation in the red channel, without compression vs. lossy compressed (jpg).] Lossy compression is generally not a good choice for machine vision!

Problem: noise suppression from just one image; redundancy in images
Neighbouring pixels have mostly the same or similar value. Correcting a pixel value based on an analysis of its neighbourhood leads to image blurring: spatial filtering.

Spatial filtering, informally
Idea: the output is a function of a pixel value and those of its neighbours. Example for an 8-connected region:

    g(x, y) = Op [ f(x-1, y-1)  f(x, y-1)  f(x+1, y-1)
                   f(x-1, y)    f(x, y)    f(x+1, y)
                   f(x-1, y+1)  f(x, y+1)  f(x+1, y+1) ]

Possible operations Op: sum, average, weighted sum, min, max, median, ...
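Such neighbourhood operations can be tried directly with the rank filters in scipy.ndimage; a small sketch on a synthetic image (the array and the 3x3 size are illustrative assumptions):

    import numpy as np
    from scipy.ndimage import maximum_filter, median_filter, minimum_filter

    rng = np.random.default_rng(8)
    f = rng.random((8, 8))                    # small test image

    # Each output pixel is a function Op of the pixel and its 8-connected neighbours.
    g_min = minimum_filter(f, size=3)         # Op = min
    g_max = maximum_filter(f, size=3)         # Op = max
    g_med = median_filter(f, size=3)          # Op = median
    print(g_min[4, 4], f[3:6, 3:6].min())     # same value (away from the border)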

Spatial filtering by masks
A very common neighbourhood operation is a per-element multiplication by a set of weights, summed together. The set of weights is often called a mask or kernel.

    Local neighbourhood:                       mask:
    f(x-1,y-1)  f(x,y-1)  f(x+1,y-1)           w(-1,-1)  w(0,-1)  w(+1,-1)
    f(x-1,y)    f(x,y)    f(x+1,y)             w(-1,0)   w(0,0)   w(+1,0)
    f(x-1,y+1)  f(x,y+1)  f(x+1,y+1)           w(-1,+1)  w(0,+1)  w(+1,+1)

    g(x, y) = Σ_{k=-1}^{1} Σ_{l=-1}^{1} w(k, l) f(x + k, y + l)

2D convolution
Spatial filtering is often referred to as convolution; we say that we convolve the image with a kernel or mask. However, it is not quite the same: convolution uses a flipped kernel.

    Local neighbourhood:                       flipped mask:
    f(x-1,y-1)  f(x,y-1)  f(x+1,y-1)           w(+1,+1)  w(0,+1)  w(-1,+1)
    f(x-1,y)    f(x,y)    f(x+1,y)             w(+1,0)   w(0,0)   w(-1,0)
    f(x-1,y+1)  f(x,y+1)  f(x+1,y+1)           w(+1,-1)  w(0,-1)  w(-1,-1)

    g(x, y) = Σ_{k=-1}^{1} Σ_{l=-1}^{1} w(k, l) f(x - k, y - l)

2D convolution: why is it important?
The input and output signals need not be related through convolution, but they are if (and only if) the system is linear and shift invariant. 2D convolution describes the formation of images well, and many image distortions caused by imperfect acquisition may be modelled by 2D convolution, too. It is a powerful thinking tool.
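A small sketch of the distinction, assuming numpy and scipy.signal are available; the image and the deliberately asymmetric mask are made up for illustration. Plain mask filtering corresponds to correlate2d, convolution to convolve2d, and convolving with the flipped mask reproduces mask filtering:

    import numpy as np
    from scipy.signal import convolve2d, correlate2d

    rng = np.random.default_rng(1)
    f = rng.random((5, 5))                     # small test image
    w = np.array([[0., 1., 0.],
                  [1., 2., 3.],
                  [0., 4., 0.]])               # deliberately asymmetric, so flipping matters

    g_filt = correlate2d(f, w, mode='same')    # mask filtering: sum of w(k,l) f(x+k, y+l)
    g_conv = convolve2d(f, w, mode='same')     # convolution: uses the flipped kernel

    # Convolving with the flipped mask reproduces plain mask filtering.
    print(np.allclose(g_filt, convolve2d(f, w[::-1, ::-1], mode='same')))  # True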

Convolution integral
Definition of 2D convolution:

    g(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x - k, y - l) h(k, l) dk dl

Symbolic abbreviation: g(x, y) = f(x, y) * h(x, y).

Discrete 2D convolution

    g(x, y) = f(x, y) * h(x, y) = Σ_k Σ_l f(x - k, y - l) h(k, l)

What about the missing values f(x - k, y - l)? Zero-padding: add zeros where needed; the result is zero elsewhere. The concept is somewhat counter-intuitive; practise with a pencil and paper.
[Figure: a small worked example of zero-padded discrete convolution.]

Thinking about convolution (1D)

    g(x) = f(x) * h(x) = Σ_k f(k) h(x - k)

Shifting h:
- shift a copy of h to each position k
- multiply it by the value f(k) at that position
- add the shifted, multiplied copies for all k

Blurring f:
- break f into its discrete samples
- send each one individually through h to produce blurred points
- sum up the blurred points
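The "shifting h" view translates almost literally into code. A minimal 1D sketch (the signal and kernel values are illustrative), checked against numpy's own convolution:

    import numpy as np

    f = np.array([0., 1., 2., 3., 0.])    # signal samples
    h = np.array([1., 2., 1.]) / 4.       # example kernel

    # "Shifting h" view: add a shifted copy of h, scaled by f(k), for every k.
    g = np.zeros(len(f) + len(h) - 1)     # zero-padded ("full") output
    for k, fk in enumerate(f):
        g[k:k + len(h)] += fk * h

    print(g)
    print(np.convolve(f, h))              # numpy agrees with the manual sum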

Thinking about convolution II

    g(x) = f(x) * h(x) = Σ_k f(x - k) h(k)

Mask filtering view:
- flip the function h around zero
- shift it to the output position x
- for each position k, point-wise multiply the value f(x - k) with the shifted, flipped copy of h
- sum over all k and write that value at position x

Motion blur modelled by convolution

    g(x) = Σ_k f(x - k) h(k)

g(x) is the image we get; f(x) is, say, the (true) 1D function. The camera moves along the x axis during acquisition, so g does not depend only on f(x) but also on the #k previous values of f; #k measures the amount of the motion. If the motion is steady then h(k) = 1/(#k). h is the impulse response of the system (the camera); we will come back to that later.

Spatial filtering vs. convolution: flipping the kernel
Why is convolution written as g(x) = Σ_k f(x - k) h(k) and not as g(x) = Σ_k f(x + k) h(k), as in spatial filtering? Causality! In g(x) = Σ_k f(x + k) h(k) we would be asking for values of the input function f that are yet to come. Solution: flip the kernel, i.e. use h(-k).
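A hedged sketch of steady horizontal motion blur on a synthetic image, assuming a motion extent of K pixels; the impulse response is the constant 1/K row kernel described above (K and the image are illustrative):

    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(2)
    f = rng.random((64, 64))               # hypothetical sharp image

    K = 9                                  # extent of the (steady) motion in pixels
    h = np.ones((1, K)) / K                # impulse response: horizontal box of weight 1/K

    g = convolve2d(f, h, mode='same')      # motion-blurred image along the x axis
    print(g.shape)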

Convolution theorem
The Fourier transform of a convolution is the product of the Fourier transforms:

    F{f(x, y) * h(x, y)} = F(u, v) H(u, v)

The Fourier transform of a product is the convolution of the Fourier transforms:

    F{f(x, y) h(x, y)} = F(u, v) * H(u, v)

Convolution theorem: proof (1D)
We want to show F{f(x) * h(x)} = F(u) H(u). Recall

    F(u) = Σ_{x=0}^{M-1} f(x) exp(-i 2π u x / M)   and   g(x) = Σ_{k=0}^{M-1} f(k) h(x - k).

Then

    F{g(x)} = Σ_{x=0}^{M-1} Σ_{k=0}^{M-1} f(k) h(x - k) exp(-i 2π u x / M).

Introduce a new (dummy) variable w = x - k and remember that all the functions g, h, f are assumed to be periodic with period M:

    F{g(x)} = Σ_{k=0}^{M-1} f(k) Σ_{w=-k}^{M-1-k} h(w) exp(-i 2π u (w + k) / M)
            = Σ_{k=0}^{M-1} f(k) exp(-i 2π u k / M)  Σ_{w=0}^{M-1} h(w) exp(-i 2π u w / M),

which is indeed F(u) H(u).

Convolution theorem: what is it good for?
- A direct relationship between filtering in the spatial and in the frequency domain (see a few slides later).
- Image restoration, sometimes called deconvolution.
- Speed of computation: direct convolution is O(M²), the Fast Fourier Transform (FFT) is O(M log M).

Enough theory for now. Go for examples...
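The theorem can also be checked numerically. The sketch below, with made-up periodic signals, compares one pixel of the FFT-based convolution with the direct sum; note that multiplying DFTs corresponds to circular (periodic) convolution, matching the periodicity assumption used in the proof:

    import numpy as np

    rng = np.random.default_rng(3)
    M = 64
    f = rng.random((M, M))
    h = rng.random((M, M))

    # Circular (periodic) convolution computed via the convolution theorem.
    G = np.fft.fft2(f) * np.fft.fft2(h)
    g_fft = np.real(np.fft.ifft2(G))

    # Direct periodic convolution for one output pixel, as a check.
    x, y = 5, 7
    g_direct = sum(f[(x - k) % M, (y - l) % M] * h[k, l]
                   for k in range(M) for l in range(M))

    print(np.isclose(g_fft[x, y], g_direct))   # True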

Spatial filtering: what is it good for?
- smoothing
- sharpening
- noise removal
- edge detection
- pattern matching
- ...

Smoothing
The output value is computed as an average of the input value and its neighbourhood.
Advantage: less noise. Disadvantage: blurring.
Any kernel with all positive weights causes smoothing or blurring; such kernels are called low-pass filters (we know them already!).

Averaging:

    g(x, y) = ( Σ_k Σ_l w(k, l) f(x + k, y + l) ) / ( Σ_k Σ_l w(k, l) )

The mask can be of any size and any shape.

Smoothing kernels
[Figure: three example smoothing kernels with their normalisation factors.]
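A short sketch of normalised averaging, assuming numpy/scipy and a synthetic image; the box mask and the Gaussian-like mask below are illustrative examples of all-positive smoothing kernels whose weights sum to one:

    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(4)
    f = rng.random((32, 32))

    # Box (constant) kernel: all-positive weights, normalised so they sum to 1.
    box = np.ones((3, 3))
    box /= box.sum()

    # Small Gaussian-like kernel, also normalised.
    gauss = np.array([[1., 2., 1.],
                      [2., 4., 2.],
                      [1., 2., 1.]])
    gauss /= gauss.sum()

    smooth_box = convolve2d(f, box, mode='same')
    smooth_gauss = convolve2d(f, gauss, mode='same')
    print(f.std(), smooth_box.std(), smooth_gauss.std())   # smoothing reduces the variation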

Averaging with ones(n × n), increasing mask size
[Figure: an image filtered with all-ones averaging masks of sizes 3 × 3, 5 × 5, 7 × 7 and 9 × 9.]

Frequency analysis of the spatial convolution: simple averaging, constant mask
[Figure: original image, the constant mask, and the filtered image, with their spectra.]

Frequency analysis of the spatial convolution: Gaussian smoothing, Gaussian mask
[Figure: original image, the Gaussian mask, and the filtered image, with their spectra.]

Simple averaging vs. Gaussian smoothing
[Figure: the image blurred by simple averaging vs. by Gaussian smoothing.] Both images are blurred, but filtering with a constant mask still lets some high frequencies show through, as the sketch after the next two figures illustrates.

Frequency analysis of the spatial convolution: simple averaging, constant mask
[Figure: original image, the constant mask, and the filtered image, with their spectra.]

Frequency analysis of the spatial convolution: Gaussian smoothing, Gaussian mask
[Figure: original image, the Gaussian mask, and the filtered image, with their spectra.]
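The side-lobe behaviour of a constant mask can be seen from its frequency response. A minimal 1D sketch with illustrative 5-tap kernels (a box and a binomial, Gaussian-like kernel); the kernel lengths and FFT size are arbitrary choices:

    import numpy as np

    # 1D frequency responses of a constant (box) kernel vs. a Gaussian-like kernel.
    box = np.ones(5) / 5
    gauss = np.array([1., 4., 6., 4., 1.]) / 16

    H_box = np.abs(np.fft.rfft(box, n=256))
    H_gauss = np.abs(np.fft.rfft(gauss, n=256))

    # The box response has side lobes, so some high frequencies leak through;
    # the binomial (Gaussian-like) response decays towards zero.
    print(H_box[-1], H_gauss[-1])   # ~0.2 vs 0.0 at the highest frequency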

Simple averaging vs. Gaussian smoothing
[Figure: simple averaging vs. Gaussian smoothing.] Both images are blurred, but filtering with a constant mask still lets some high frequencies show through!

Non-linear smoothing
Goal: reduce the blurring of image edges during smoothing.
- Homogeneous neighbourhood: find a proper neighbourhood where the values have minimal variance.
- Robust statistics: use something better than the mean.

Rotation mask
A rotating 3 × 3 mask seeks a homogeneous part of the 5 × 5 neighbourhood. There are 9 positions in total: one in the middle plus 8 around it. The mask with the lowest variance is selected as the proper neighbourhood.
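A sketch of the rotation-mask idea, assuming the 3 × 3-in-5 × 5 variant described above; rotating_mask_filter is a hypothetical helper written for illustration, not code from the lecture:

    import numpy as np

    def rotating_mask_filter(f):
        # For every pixel, examine the nine 3x3 windows whose centres lie in the
        # pixel's 3x3 neighbourhood (so all windows fit inside the 5x5 neighbourhood);
        # replace the pixel by the mean of the window with the lowest variance.
        # Straightforward, unoptimised loops; borders are left untouched.
        g = f.astype(float).copy()
        rows, cols = f.shape
        for y in range(2, rows - 2):
            for x in range(2, cols - 2):
                best_var, best_mean = np.inf, g[y, x]
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        win = f[y + dy - 1:y + dy + 2, x + dx - 1:x + dx + 2]
                        v = win.var()
                        if v < best_var:
                            best_var, best_mean = v, win.mean()
                g[y, x] = best_mean
        return g

    rng = np.random.default_rng(5)
    noisy = rng.random((16, 16))
    print(rotating_mask_filter(noisy).shape)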

Nonlinear smoothing, robust statistics: order-statistic filters
- median: sort the values and select the middle one. A method of edge-preserving smoothing, particularly useful for removing salt-and-pepper (impulse) noise.
- trimmed mean: throw away the outliers and average the rest. More robust to non-Gaussian noise than standard averaging.

Median filtering
[Example: a neighbourhood containing outliers; the mean is pulled towards the outliers, while the median, the middle of the sorted values, stays close to the typical value.] The median is very robust: up to half of the values may be outliers.

Nonlinear smoothing: examples
[Figure: a noisy image filtered by 3 × 3 and 7 × 7 averaging and by 3 × 3 and 7 × 7 median filtering.] Median filtering damages corners and thin edges.
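A small median-filtering sketch on synthetic salt-and-pepper noise (the noise level and image are illustrative), comparing a 3 × 3 average with a 3 × 3 median using scipy.ndimage:

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    rng = np.random.default_rng(6)
    f = np.full((64, 64), 0.5)

    # Add salt-and-pepper (impulse) noise to about 5 % of the pixels.
    noisy = f.copy()
    mask = rng.random(f.shape) < 0.05
    noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())

    mean3 = uniform_filter(noisy, size=3)   # 3x3 averaging: impulses get smeared around
    med3 = median_filter(noisy, size=3)     # 3x3 median: isolated impulses are removed

    # Maximum deviation from the clean image for each filter.
    print(np.abs(mean3 - f).max(), np.abs(med3 - f).max())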

Cross-correlation

    g(x, y) = Σ_k Σ_l h(k, l) f(x + k, y + l) = h(x, y) ⋆ f(x, y)

Cross-correlation is not, unlike convolution, commutative:

    h(x, y) ⋆ f(x, y) ≠ f(x, y) ⋆ h(x, y)

For h(x, y) ⋆ f(x, y) we often say that h scans f. Cross-correlation is related to convolution through

    h(x, y) ⋆ f(x, y) = h(-x, -y) * f(x, y)

Cross-correlation is useful for pattern matching.

Cross-correlation: example
[Figure: a pattern h(x, y) scans an image f(x, y), producing the response g(x, y).] This is perhaps not exactly what we expected and what we want: the result depends on the amplitudes. Do we have some normalisation?

Normalised cross-correlation
Sometimes called the correlation coefficient:

    c(x, y) = Σ_{k,l} ( h(k, l) - h̄ ) ( f(x + k, y + l) - f̄(x, y) )
              / sqrt( Σ_{k,l} ( h(k, l) - h̄ )²  ·  Σ_{k,l} ( f(x + k, y + l) - f̄(x, y) )² )

where h̄ is the mean of h and f̄(x, y) is the mean of the (k, l)-neighbourhood around (x, y). The sums Σ_{k,l}(h(k, l) - h̄)² and Σ_{k,l}(f(x + k, y + l) - f̄(x, y))² are, up to a normalisation, the variances, so -1 ≤ c(x, y) ≤ 1.
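A minimal sketch of normalised cross-correlation for pattern matching; ncc_map is a hypothetical helper that follows the formula above with plain loops (slow but explicit), and the template is cut directly out of a synthetic image so the expected maximum location is known:

    import numpy as np

    def ncc_map(f, h):
        # Normalised cross-correlation of template h over image f ("valid" positions only).
        th, tw = h.shape
        hz = h - h.mean()
        hn = np.sqrt((hz ** 2).sum())
        out = np.zeros((f.shape[0] - th + 1, f.shape[1] - tw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                win = f[y:y + th, x:x + tw]
                wz = win - win.mean()
                denom = hn * np.sqrt((wz ** 2).sum())
                # Flat windows give a zero denominator (undefined / NaN); set to 0 here.
                out[y, x] = (hz * wz).sum() / denom if denom > 0 else 0.0
        return out

    rng = np.random.default_rng(7)
    f = rng.random((40, 40))
    h = f[10:18, 20:28].copy()               # take the template directly from the image
    c = ncc_map(f, h)
    print(np.unravel_index(np.argmax(c), c.shape))   # -> (10, 20), the template location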

Normalised cross-correlation
[Figure: the pattern h(x, y), the image f(x, y), and the response g(x, y), with values between -1 and 1.] The zero responses are in fact undefined (NaN). The maximum response is indeed where we expected it.

Normalised cross-correlation: real images
[Figure: the pattern h(x, y), a real image f(x, y), and the response g(x, y).]

Normalised cross-correlation: non-maxima suppression
[Figure: the red rectangle denotes the pattern; the crosses are the 5 highest values of the NCC after non-maxima suppression.]

Normalised cross-correlation: non-maxima suppression
[Figure: the red rectangle denotes the pattern; the crosses are the highest values of the NCC after non-maxima suppression.] We see the problem: the algorithm finds the cow at any position in the image, but it does not cope with changes of scale. We leave that problem for some advanced computer vision course.

Autocorrelation

    g(x, y) = f(x, y) ⋆ f(x, y)

[Figure: an image f(x, y) and its autocorrelation f(x, y) ⋆ f(x, y).]