ECC419 IMAGE PROCESSING


INTRODUCTION

Image Processing
Image processing is a subclass of signal processing concerned specifically with pictures. Digital image processing is the processing of digital images by means of a computer; it covers low-, mid- and high-level processes.

Low level: Low-level processes involve primitive operations, such as image preprocessing to reduce noise, contrast enhancement and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are typically images.

Mid level: Mid-level processes on images involve tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs are generally images, but its outputs are attributes extracted from those images (e.g. edges, contours, and the identity of individual objects).

High level: High-level processing involves making sense of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision.

The aim of image processing:
Improve image quality for human perception and/or computer interpretation.
Process image data for storage, transmission and representation for autonomous machine perception.

Other fields also deal with images:
Computer graphics: the creation of images.
Computer vision: the analysis of image content.

Digital Image
A digital image is a two-dimensional function f(x,y), where x and y are spatial coordinates. The amplitude of f is called the intensity or gray level at the point (x,y). Image size = max_x × max_y, e.g. 640x480, 512x512, 9x9.

Pixel (picture element): a single point in a graphic image.

Grayscale image: an image in which the value of each pixel is a single sample, that is, it carries only intensity information. The pixel intensity value f(x,y) ∈ [0,255] in an 8-bit grayscale image.

Image Acquisition: The first stage of any vision system is the image acquisition stage. After the image has been obtained, various processing methods can be applied to it to perform the many different image processing tasks. However, if the image has not been acquired satisfactorily, the intended tasks may not be achievable, even with the aid of some form of image enhancement.

Image properties depend on:
Image acquisition parameters: camera distance, viewpoint, motion; camera intrinsic parameters (e.g. lens aberration); number of cameras; illumination.
Visual properties of the 3D world being captured.

Sampling
Sampling is the spacing of discrete values in the domain of a signal. Sampling rate: how many samples are taken per unit of each dimension, e.g. samples per second, frames per second, etc.

Quantization
Quantization is the spacing of discrete values in the range of a signal. It is usually thought of as the number of bits per sample of the signal, e.g. 1 bit per pixel (b/w images), 16-bit audio, 24-bit color images, etc.

Resolution
Resolution (how much detail of the image you can see) depends on sampling and on the number of gray levels. The larger the sampling rate (n) and the grayscale (g), the better the digitized image approximates the original. The finer the quantization scale, the larger the size of the digitized image.

The Pixel Coordinate System: For pixel coordinates, the first component r (the row) increases downward, while the second component c (the column) increases to the right. Pixel coordinates are integer values and range between 1 and the length of the row or column.
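As an illustration of quantization, the following minimal sketch (assuming NumPy; the function name `quantize` is hypothetical) requantizes an 8-bit grayscale image to a smaller number of bits per sample:

```python
import numpy as np

def quantize(image, bits):
    """Requantize an 8-bit grayscale image to the given number of bits per pixel."""
    levels = 2 ** bits             # number of gray levels after quantization
    step = 256 // levels           # width of each quantization bin
    return (image // step) * step  # map each pixel to the bottom of its bin

img = np.array([[0, 100], [180, 255]], dtype=np.uint8)
print(quantize(img, 1))  # 1 bit per pixel leaves only 2 levels: 0 and 128
```

With fewer bits the image occupies less storage but shows visible banding, matching the trade-off described above.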

Digital Image Representation
A digital image can be considered as a matrix whose row and column indices identify a point in the image and whose corresponding element value identifies the gray level at that point. Example: a 9x9 8-bit grayscale image.
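The matrix view can be written out directly; this sketch (assuming NumPy) uses a small 3x3 matrix rather than the 9x9 example, but the idea is identical:

```python
import numpy as np

# A 3x3 8-bit grayscale image as a matrix: element (row, col)
# holds the gray level at that point, in the range [0, 255].
image = np.array([[ 12,  50, 200],
                  [ 90, 255,  30],
                  [  0, 128,  64]], dtype=np.uint8)

print(image.shape)  # (3, 3): 3 rows x 3 columns
print(image[0, 2])  # gray level at row 0, column 2 -> 200
```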

Neighbors of a Pixel: A pixel p at coordinate (x,y) has:
N4(p) = the 4 neighbors of p: (x+1,y), (x-1,y), (x,y+1), (x,y-1)
ND(p) = the 4 diagonal neighbors of p: (x+1,y+1), (x-1,y-1), (x-1,y+1), (x+1,y-1)
N8(p) = the 8 neighbors of p: the union of N4(p) and ND(p)
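The three neighborhoods can be sketched directly from their definitions (function names here are illustrative):

```python
def n4(x, y):
    """N4(p): the 4 horizontal/vertical neighbors of pixel p at (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """ND(p): the 4 diagonal neighbors of p."""
    return [(x + 1, y + 1), (x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1)]

def n8(x, y):
    """N8(p): the union of N4(p) and ND(p)."""
    return n4(x, y) + nd(x, y)

print(n4(2, 2))      # [(3, 2), (1, 2), (2, 3), (2, 1)]
print(len(n8(2, 2))) # 8
```

Note that for pixels on the image border, some of these coordinates fall outside the image and must be handled separately.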

Types of operations: The types of operations that can be applied to digital images to transform an input image a[m,n] into an output image b[m,n] (or another representation) can be classified into three categories:

Point: the output value at a specific coordinate depends only on the input value at that same coordinate.
Local: the output value at a specific coordinate depends on the input values in the neighborhood of that same coordinate.
Global: the output value at a specific coordinate depends on all the values in the input image.

IMAGE INTERPOLATION
Interpolation works by using known data to estimate values at unknown points. Common interpolation algorithms can be grouped into two categories: adaptive and non-adaptive. Adaptive methods change depending on what they are interpolating, whereas non-adaptive methods treat all pixels equally. Non-adaptive algorithms include nearest neighbor, bilinear, bicubic, etc. Depending on their complexity, these use anywhere from 0 to 256 (or more) adjacent pixels when interpolating. The more adjacent pixels they include, the more accurate they can become, but this comes at the expense of much longer processing time. These algorithms can be used both to distort and to resize a photo. Adaptive algorithms include many proprietary algorithms in licensed software such as Qimage, PhotoZoom Pro, etc. These algorithms are primarily designed to maximize artifact-free detail in enlarged photos, so some cannot be used to distort or rotate an image.

NEAREST NEIGHBOR INTERPOLATION
Nearest neighbor is the most basic and requires the least processing time of all the interpolation algorithms, because it considers only one pixel: the one closest to the interpolated point.

BILINEAR INTERPOLATION
Bilinear interpolation considers the closest 2x2 neighborhood of known pixel values surrounding the unknown pixel. It then takes a weighted average of these 4 pixels to arrive at its final interpolated value. This results in much smoother-looking images than nearest neighbor.

BICUBIC INTERPOLATION
Bicubic goes one step beyond bilinear by considering the closest 4x4 neighborhood of known pixels, for a total of 16 pixels. Since these are at various distances from the unknown pixel, closer pixels are given a higher weighting in the calculation. Bicubic produces noticeably sharper images than the previous two methods, and is perhaps the ideal combination of processing time and output quality. For this reason it is a standard in many image editing programs (including Adobe Photoshop), printer drivers and in-camera interpolation.
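The 2x2 weighted average behind bilinear interpolation can be sketched as follows (assuming NumPy; the function name `bilinear` is hypothetical):

```python
import numpy as np

def bilinear(image, x, y):
    """Bilinearly interpolate the gray level at fractional position (x, y),
    using the 2x2 neighborhood of known pixels that surrounds it."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, image.shape[0] - 1)
    y1 = min(y0 + 1, image.shape[1] - 1)
    dx, dy = x - x0, y - y0
    # Weighted average of the 4 surrounding pixels; closer pixels weigh more.
    return ((1 - dx) * (1 - dy) * image[x0, y0] +
            (1 - dx) * dy       * image[x0, y1] +
            dx       * (1 - dy) * image[x1, y0] +
            dx       * dy       * image[x1, y1])

img = np.array([[0, 100], [100, 200]], dtype=float)
print(bilinear(img, 0.5, 0.5))  # 100.0: equal weights on all four pixels
```

Nearest neighbor would instead simply return the single closest pixel, and bicubic would extend the same weighting idea to a 4x4 neighborhood with cubic weights.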

IMAGE ENHANCEMENT
Preview
The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application. Why for a specific application? Image enhancement techniques are application dependent: a method that is useful for enhancing X-ray images may not be suitable for images transmitted by a space probe.

Image enhancement techniques fall into two broad categories:

Spatial Domain Methods
These refer to the image plane itself; approaches in this category are based on direct manipulation of the pixels in an image.

Frequency Domain Methods
Frequency domain techniques are based on modifying the Fourier Transform of an image.

Spatial Domain Image Enhancement
Spatial domain processes will be denoted by the expression:

g(x,y) = T[f(x,y)]

where g(x,y) is the output image, T is an operator defined over some neighborhood of (x,y) and f(x,y) is the input image. If T operates on a neighborhood of size 1x1 (a single pixel), it becomes a gray level (also called intensity or mapping) transformation function and can be rewritten as:

s = T(r)

where s is the gray level of g(x,y) at (x,y) and r is the gray level of f(x,y) at (x,y).

Basic Gray Level Transformations in the Spatial Domain:
Image Negatives
Logarithmic Transformations
Power Law Transformations
Piecewise Linear Transformation Functions

Image Negatives: used to obtain the photographic negative of an image by applying the negative transformation function:

s = L - 1 - r

where s is the output pixel, L is the number of gray levels in the image (256 for an 8-bit image) and r is the input pixel.

Ex: Original 2x2 image
15 130
200 0

f(1,1) = 256 - 1 - 15 = 240
f(1,2) = 256 - 1 - 130 = 125
f(2,1) = 256 - 1 - 200 = 55
f(2,2) = 256 - 1 - 0 = 255

Output image
240 125
55 255

Example of Image Negatives

Logarithmic Transformations: used to expand the spectrum of dark pixels while compressing the spectrum of higher-value pixels in an image. General form of the logarithmic transformation:

s = c log(1 + r)

where s is the output pixel, c is a constant and r is the input pixel.
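Both point transformations above can be sketched in a few lines (assuming NumPy; function names are illustrative). The negative reproduces the 2x2 worked example:

```python
import numpy as np

L = 256  # number of gray levels in an 8-bit image

def negative(image):
    """Negative transformation: s = L - 1 - r."""
    return (L - 1) - image.astype(int)

def log_transform(image, c=1.0):
    """Logarithmic transformation: s = c * log(1 + r).
    Expands dark values while compressing high values."""
    return c * np.log1p(image.astype(float))

img = np.array([[15, 130], [200, 0]])
print(negative(img))  # [[240 125] [ 55 255]], matching the worked example
```

In practice the log-transformed values are rescaled back to [0, 255] before display.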

Example of Logarithmic Transformation (c=1)

Power Law Transformation: provides a more flexible transformation curve than the logarithmic transformation, according to the values of c and γ (gamma):

s = c r^γ

where s is the output pixel, c is a constant and r is the input pixel.
If γ < 1:
o Expands the spectrum of dark pixels.
o Compresses the spectrum of higher-value pixels.
If γ > 1:
o Compresses the spectrum of dark pixels.
o Expands the spectrum of higher-value pixels.
If γ = 1:
o Identity transformation.
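A common way to apply the power law in practice is to normalize intensities to [0, 1] first; the sketch below (assuming NumPy, with a hypothetical function name) follows that convention:

```python
import numpy as np

def power_law(image, c=1.0, gamma=1.0):
    """Power-law (gamma) transformation: s = c * r**gamma,
    computed on intensities normalized to [0, 1], then rescaled to [0, 255]."""
    r = image.astype(float) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)

img = np.arange(0, 256, 64, dtype=np.uint8)  # [0, 64, 128, 192]
print(power_law(img, gamma=0.5))  # gamma < 1 brightens the dark pixels
```

With gamma = 0.5 every nonzero level moves upward, expanding the dark range exactly as described above; gamma = 2 would do the opposite.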

Piecewise Linear Transformation Functions: consist of several functions, such as contrast stretching, gray level slicing and bit plane slicing, which are used for image enhancement. Contrast stretching is one of the simplest and most important piecewise linear transformation functions. During image acquisition, images may end up with low contrast because of poor illumination. The idea of contrast stretching is to increase the dynamic range of the gray levels in the image being processed; a typical formula is:

s = (r - c) ((b - a) / (d - c)) + a

where s is the output pixel, r is the input pixel, a and b are the lower and upper limits of the desired output range respectively, and c and d are the lowest and highest pixel values in the image respectively.
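The stretch formula maps the image's own range [c, d] linearly onto [a, b]; a minimal sketch (assuming NumPy, with an illustrative function name):

```python
import numpy as np

def contrast_stretch(image, a=0, b=255):
    """Linearly map the image's own range [c, d] onto [a, b]:
    s = (r - c) * (b - a) / (d - c) + a."""
    r = image.astype(float)
    c, d = r.min(), r.max()
    return ((r - c) * (b - a) / (d - c) + a).astype(np.uint8)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
print(contrast_stretch(img))  # [[  0  85] [170 255]]
```

A low-contrast input occupying [50, 200] is spread over the full [0, 255] range, which is exactly the increase in dynamic range described above.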

Histogram Processing in the Spatial Domain
This is an important approach to image enhancement and it is the basis for numerous techniques. The histogram of a digital image with gray levels r_k in the range [0, L-1] is the discrete function

h(r_k) = n_k

where r_k is the k-th gray level and n_k is the number of pixels in the image having gray level r_k.

Normalization of the Histogram: the probability of occurrence of gray level r_k is estimated by dividing its count by the total number of pixels n in the image:

p(r_k) = n_k / n

Determination of Contrast Level
Dark Image: can be defined as a collection of image pixels in the range [0, n] with no pixels in the range [n, L-1].

Bright Image: can be defined as a collection of image pixels in the range [n, L-1] with no pixel values in the range [0, n].
Low-contrast Image: has a more complex relationship between the upper and lower limits of its gray level values. An image can be classified as low contrast if its pixels are concentrated in the range [n-z, n+z].

High-contrast Image: can be defined as an approximately equal distribution of image pixels over the full range [0, L-1].

Histogram Equalization
Histogram equalization maps each input gray level r_k to an output level s_k through the transformation

s_k = T(r_k) = (L - 1) Σ_{j=0}^{k} p(r_j) = (L - 1) Σ_{j=0}^{k} n_j / n

where s_k is the resultant gray level, T is the transformation function for histogram equalization, r_k is the k-th gray level, p(r_j) is its probability of occurrence and n_j is the number of pixels that have gray level r_j.
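The running sum of probabilities is just the cumulative histogram, so the whole transformation is a few lines (a sketch assuming NumPy; the function name `equalize` is illustrative):

```python
import numpy as np

def equalize(image, L=256):
    """Histogram equalization: s_k = (L - 1) * sum_{j<=k} p(r_j),
    where p(r_j) = n_j / n is the normalized histogram."""
    hist = np.bincount(image.ravel(), minlength=L)  # n_k for each gray level
    p = hist / image.size                           # p(r_k)
    cdf = np.cumsum(p)                              # running sum of probabilities
    T = np.round((L - 1) * cdf).astype(np.uint8)    # the mapping s_k = T(r_k)
    return T[image]                                 # remap every pixel through T

img = np.array([[52, 52], [200, 200]], dtype=np.uint8)
print(equalize(img))  # the two occupied levels spread across [0, 255]
```

Because T is a cumulative sum it is monotonically non-decreasing, so equalization preserves the ordering of gray levels while spreading them toward a uniform distribution.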
