Image Processing Michael Kazhdan (600.457/657) HB Ch. 14.4 FvDFH Ch. 13.1

Outline Human Vision Image Representation Reducing Color Quantization Artifacts Basic Image Processing

Human Vision: Model of the Human Visual System. [Diagram: the sun illuminates objects in the world, which reflect light to the human eye.]

Electromagnetic Spectrum: Visible light frequencies range between Red = 4.3 x 10^14 Hz (700 nm) and Violet = 7.5 x 10^14 Hz (400 nm). Figure 15.1 from H&B.

Visible Light: The human eye can see light in the wavelength range 400 nm to 700 nm. [Figure 15.3 from H&B: energy vs. frequency for red (700 nm), violet (400 nm), and white light.]

Human Vision: The human retina contains two types of photoreceptors, cones and rods.
Cones:
- 6-7 million cones in the retina
- Responsible for photopic vision
- Color sensitive: 64% red, 32% green, 2% blue
- Concentrated in the fovea centralis
Rods:
- 120 million rods in the retina
- 1000x more light sensitive than cones
- Responsible for scotopic vision
- Short-wavelength sensitive
- Responsible for peripheral vision

Tristimulus Theory of Color Spectral-response functions of each of the three types of cones on the human retina. This motivates encoding color as a combination of red, green, and blue (RGB). Figure 13.18 from FvDFH

Visible Light: The human eye can see light in the wavelength range 400 nm to 700 nm, but this does not mean that we can see the difference between different spectral distributions. Metamers = two spectral distributions that look the same. [Figure 15.3 from H&B: energy vs. frequency for red (700 nm), violet (400 nm), and white light.]

Outline Human Vision Image Representation Reducing Color Quantization Artifacts Basic Image Processing

Image Representation What is an image?

Image Representation: An image is a 2D rectilinear array of pixels: a width x height array where each entry stores a single pixel. [Figure: a continuous image and its w x h digital counterpart.]

Image Representation: What is a pixel? [Figure: continuous image vs. digital image.]

Image Representation: A pixel is something that captures the notion of color.
- Luminance pixels: grey-scale images (aka intensity images), with values in [0, 1.0] or {0, ..., 255}
- Red, Green, Blue (RGB) pixels: color images, with each channel in [0, 1.0] or {0, ..., 255}
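As a concrete illustration of these two pixel types, here is a minimal sketch (assuming Python with NumPy, which the slides do not prescribe) of how such images are commonly stored:

import numpy as np

# Grey-scale (luminance) image: one 8-bit value per pixel, values 0..255
height, width = 480, 640
gray = np.zeros((height, width), dtype=np.uint8)

# RGB color image: three 8-bit channels per pixel
rgb = np.zeros((height, width, 3), dtype=np.uint8)

# The same pixels as floating-point values in [0, 1)
gray_f = gray.astype(np.float64) / 256.0
rgb_f = rgb.astype(np.float64) / 256.0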

Resolutions
- Spatial resolution: width x height pixels
- Intensity/color resolution: n bits per pixel
- Temporal resolution: n Hz (frames per second)

                  width x height    bit depth    Hz
  Handheld        2220 x 1080       24           60
  Monitor         3840 x 1080       24           144
  CCDs            6000 x 4000       36           50
  Laser Printer   6600 x 5100       3            -

Image Quantization Artifacts: With only a small number of bits associated with each color channel of a pixel, there is a limit to the intensity resolution of an image.
- A black-and-white image allocates a single bit to the luminance channel of a pixel, so the number of different colors that can be represented by a pixel is 2.
- A 24-bit bitmap image allocates 8 bits to each of the red, green, and blue channels of a pixel, so the number of different colors that can be represented by a pixel is 2^24 (about 16.8 million).

Outline Human Vision Image Representation Reducing Color Quantization Artifacts Halftoning and Dithering Basic Image Processing

Reducing Color Quantization Artifacts. Key Idea: For (still) images, the combination of image resolution and intensity/color resolution defines the total informational content. We can trade off between these to achieve different visual effects.

Reducing Color Quantization Artifacts. Disclaimer: In the next few slides, we assume that the original image has continuous pixel values, I(x, y) ∈ [0, 1). In practice, all the images you will work with will have integer values, I(x, y) ∈ {0, ..., 255}.

Quantization: When you have a small number of bits per pixel, you can coarsely represent an image by quantizing the color values: P(x, y) = Q_b(I(x, y)) = floor(I(x, y) * 2^b), with b the number of bits per pixel, I(x, y) ∈ [0, 1), and Q_b(x, y) ∈ {0, ..., 2^b - 1}. [Example image shown at b = 2 bits per pixel.]
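A minimal sketch of this quantizer, assuming Python/NumPy (neither the language nor the function name quantize comes from the slides):

import numpy as np

def quantize(I, b):
    """Quantize an image I with values in [0, 1) down to b bits per pixel,
    producing integer levels in {0, ..., 2^b - 1}."""
    levels = 2 ** b
    return np.clip(np.floor(I * levels), 0, levels - 1).astype(int)

# Example: 2 bits per pixel
I = np.random.rand(4, 4)      # stand-in for a continuous-valued image
P = quantize(I, 2)            # values in {0, 1, 2, 3}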

Quantization: Image shown with decreasing bits per pixel. Note the contouring! [Images: b = 8 bits, b = 4 bits, b = 2 bits, b = 1 bit.]

Reducing Effects of Quantization Trade spatial resolution for intensity resolution: Half-toning Dithering

Classical Half-Toning: Varying-size dots represent intensities; the area of the dots is inversely proportional to the intensity. [Figure: continuous image I(x, y) and its half-toned version P(x, y).]

Classical Half-Toning Newspaper Image From New York Times, 9/21/99

Digital Half-Toning: Use a cluster of pixels to draw an (average) intensity; this trades spatial resolution for intensity resolution.
- Note: the half-toning pattern matters; we want to avoid vertical and horizontal lines.
- Loss of information: 16 configurations map to only 5 intensities.
[Figure: 2x2 patterns for 0 ≤ I < 0.2, 0.2 ≤ I < 0.4, 0.4 ≤ I < 0.6, 0.6 ≤ I < 0.8, and 0.8 ≤ I ≤ 1.]

Digital Half-Toning: Use a cluster of pixels to draw an (average) intensity, trading spatial resolution for intensity resolution. [Images: Original (8 bits), Quantized (1 bit), Half-toned (1 bit).]
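A sketch of digital half-toning with 2x2 clusters, under the assumption (Python/NumPy; the particular dot patterns are illustrative choices, not taken from the slides) that each input pixel is replaced by a 2x2 binary pattern whose average matches its intensity:

import numpy as np

# Five 2x2 binary patterns with 0..4 pixels turned on (one choice of many;
# patterns forming straight vertical or horizontal lines should be avoided)
PATTERNS = [
    np.array([[0, 0], [0, 0]]),
    np.array([[0, 1], [0, 0]]),
    np.array([[0, 1], [1, 0]]),
    np.array([[1, 1], [1, 0]]),
    np.array([[1, 1], [1, 1]]),
]

def halftone_2x2(I):
    """Map each pixel of I (values in [0, 1]) to a 2x2 binary pattern.
    The output is twice as wide and twice as tall as the input."""
    h, w = I.shape
    out = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            level = min(int(I[y, x] * 5), 4)   # bin intensity into 5 levels
            out[2*y:2*y+2, 2*x:2*x+2] = PATTERNS[level]
    return out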

Dithering Distribute errors among pixels Exploit spatial integration in our eye Display greater range of perceptible intensities

Random Dither: Randomize the quantization errors so they appear as noise. If a pixel is black, then adding random noise to it is less likely to turn it into a white pixel than if the pixel were dark gray. P(x, y) = Q_b(I(x, y) + noise(x, y) / 2^b)

Random Dither: Randomize the quantization errors so they appear as noise. How much noise should we add? Enough that we affect the rounding, but not so much that we overshoot: noise(x, y) ∈ (-1.0, 1.0). P(x, y) = Q_b(I(x, y) + noise(x, y) / 2^b)
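A minimal sketch of random dithering as described above (Python/NumPy assumed; the function name is illustrative):

import numpy as np

def random_dither(I, b, rng=np.random.default_rng()):
    """Randomly dither an image I (values in [0, 1)) down to b bits per pixel.
    The noise is drawn uniformly from (-1, 1) in units of quantization levels,
    so it is divided by 2**b before being added to the input intensity."""
    levels = 2 ** b
    noise = rng.uniform(-1.0, 1.0, size=I.shape)
    return np.clip(np.floor((I + noise / levels) * levels), 0, levels - 1).astype(int)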

Random Dither Original (8 bits) Uniform (1 bit) Random (1 bit)

Ordered Dither: Similar to quantization: we round the input to a value in the range {0, ..., 2^b - 1}. Different from quantization: how we round depends on the pixel's spatial position.

Ordered Dither (Binary Displays): Pseudo-random quantization errors; an n x n matrix stores the pattern of thresholds, e.g. D_2 = [[1, 3], [4, 2]].

// Locate the index in the matrix:
i = x mod n
j = y mod n
// Get the fractional component
e = I(x, y)
// Round up or down against the threshold
if e > D_n(i, j) / (n^2 + 1) then P(x, y) = 1
else P(x, y) = 0

Ordered Dither (b-bit Displays): Pseudo-random quantization errors; an n x n matrix stores the pattern of thresholds, e.g. D_2 = [[1, 3], [4, 2]].

// Locate the index in the matrix:
i = x mod n
j = y mod n
// Get the fractional component
c = I(x, y) * (2^b - 1)
e = c - floor(c)
// Round up or down against the threshold
if e > D_n(i, j) / (n^2 + 1) then P(x, y) = ceil(c)
else P(x, y) = floor(c)
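A sketch of the b-bit ordered dither (Python/NumPy assumed; the row/column indexing convention for the threshold matrix is a choice made here, not specified by the slides):

import numpy as np

D2 = np.array([[1, 3],
               [4, 2]])   # 2x2 dither matrix from the slides

def ordered_dither(I, b, D=D2):
    """Ordered-dither an image I (values in [0, 1)) down to b bits per pixel."""
    n = D.shape[0]
    h, w = I.shape
    c = I * (2 ** b - 1)                 # scale into [0, 2^b - 1]
    e = c - np.floor(c)                  # fractional component
    # Tile the threshold matrix over the image and normalize by n^2 + 1
    thresholds = D[np.arange(h)[:, None] % n, np.arange(w)[None, :] % n] / (n * n + 1)
    return np.where(e > thresholds, np.ceil(c), np.floor(c)).astype(int)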

Ordered Dither Original (8 bits) Uniform (1 bit) Random (1 bit) Ordered (1 bit)

Error Diffusion Dither: Spread the quantization error over neighboring pixels; the error is dispersed to the pixels to the right and below. Floyd-Steinberg Method: weights α, β, γ, δ with α + β + γ + δ = 1.0. Figure 14.42 from H&B.

Error Diffusion Dither (Floyd-Steinberg Dither):

for ( i=0 ; i<height ; i++ )
    for ( j=0 ; j<width ; j++ )
        Dest[i][j] = quantize( Source[i][j] )
        error = Source[i][j] - Dest[i][j]
        Source[i][j+1]   += a * error
        Source[i+1][j-1] += b * error
        Source[i+1][j]   += g * error
        Source[i+1][j+1] += d * error

with a = 7/16, b = 3/16, g = 5/16, d = 1/16.
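A runnable sketch of the same loop, assuming Python/NumPy; boundary pixels are skipped for brevity, and the 1-bit rounding threshold of 0.5 stands in for the unspecified quantize():

import numpy as np

def floyd_steinberg(I):
    """Floyd-Steinberg dither of an image I (values in [0, 1]) to 1 bit per pixel."""
    src = I.astype(np.float64).copy()
    dst = np.zeros_like(src)
    h, w = src.shape
    a, b, g, d = 7/16, 3/16, 5/16, 1/16
    for i in range(h - 1):               # last row skipped so i+1 stays in bounds
        for j in range(1, w - 1):        # first/last columns skipped for j-1, j+1
            dst[i, j] = 1.0 if src[i, j] >= 0.5 else 0.0   # quantize to {0, 1}
            error = src[i, j] - dst[i, j]
            src[i, j + 1]     += a * error
            src[i + 1, j - 1] += b * error
            src[i + 1, j]     += g * error
            src[i + 1, j + 1] += d * error
    return dst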

Error Diffusion Dither Original (8 bits) Uniform (1 bit) Random (1 bit) Ordered (1 bit) Floyd-Steinberg (1 bit)

Outline Human Vision Image Representation Reducing Color Quantization Artifacts Basic Image Processing Single Pixel Operations

Computing Grayscale: The human retina perceives red, green, and blue as having different levels of brightness. To compute the luminance (perceived brightness) of a pixel, we take a weighted average of its RGB components: L(p) = 0.30 r(p) + 0.59 g(p) + 0.11 b(p). [Images: Original, Grayscale. Figure 13.18 from FvDFH.]
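A minimal sketch of this weighted average, assuming Python/NumPy and an RGB image stored with the color channels along the last axis:

import numpy as np

def luminance(rgb):
    """Per-pixel luminance of an RGB image, using the weights from the slide."""
    return 0.30 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]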

Adjusting Brightness: Scale the pixel components, I(p) ← I(p) · α. Must clamp to the valid range (e.g., 0 to 255). [Images: Original, Brighter.]

Adjusting Contrast: Compute the mean image luminance L̄ = Average(0.30 r(p) + 0.59 g(p) + 0.11 b(p)), then scale each pixel component's deviation from L̄: I(p) ← (I(p) - L̄) · α + L̄. Must clamp to the valid range (e.g., 0 to 255). [Images: Original, More Contrast.]

Adjusting Contrast: What happens if we set the image to have no contrast (α = 0)? Every pixel component becomes the mean luminance L̄, so the image turns into a uniform gray.

Adjusting Saturation: Compute the per-pixel luminance L(p) = 0.30 r(p) + 0.59 g(p) + 0.11 b(p), then scale each pixel component's deviation from L(p): I(p) ← (I(p) - L(p)) · α + L(p). Must clamp to the valid range (e.g., 0 to 255). [Images: Original, More Saturation.]

Adjusting Saturation: What happens if we set the image to have no saturation (α = 0)? Every component of a pixel becomes that pixel's luminance L(p), so the image becomes grayscale.
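The three adjustments above share the same form, I(p) ← (I(p) - reference) · α + reference, differing only in the reference value (0 for brightness, the mean luminance for contrast, the per-pixel luminance for saturation). A compact sketch, assuming Python/NumPy and float-valued RGB arrays in the range 0 to 255 (function names are illustrative):

import numpy as np

def luminance(rgb):
    """Per-pixel luminance, with the same weights as the grayscale slide."""
    return 0.30 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]

def adjust_brightness(rgb, alpha):
    """Brightness: simply scale the components, then clamp."""
    return np.clip(rgb * alpha, 0, 255)

def adjust_contrast(rgb, alpha):
    """Contrast: scale each component's deviation from the mean luminance."""
    L_bar = luminance(rgb).mean()
    return np.clip((rgb - L_bar) * alpha + L_bar, 0, 255)

def adjust_saturation(rgb, alpha):
    """Saturation: scale each component's deviation from the per-pixel luminance."""
    L = luminance(rgb)[..., np.newaxis]     # broadcast L(p) over the R, G, B channels
    return np.clip((rgb - L) * alpha + L, 0, 255)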