EE 392B Lecture Notes 11: Introduction to Color Imaging

Outline:
- Color filter options
- Color processing
- Color interpolation (demosaicing)
- White balancing
- Color correction

Preliminaries

- Up till now we have only been discussing grayscale image capture.
- If the incident photon flux density at a pixel is $f_0(\lambda)$ ph/cm³·s, for $400 \le \lambda \le 700$ nm, then the resulting photocurrent density is
  $$j_{ph} = q \int_{400}^{700} f_0(\lambda)\, QE(\lambda)\, d\lambda \quad \text{A/cm}^2,$$
  where $QE(\lambda)$ (e−/ph) is the photodetector quantum efficiency, which is a function of the technology parameters.
- Assuming $j_{ph}$ is constant over the pixel area and over time (and ignoring dark current and noise), we get a pixel output (voltage)
  $$v_o \propto \int f_0(\lambda)\, QE(\lambda)\, d\lambda.$$
- To capture color images, each pixel must output more information about the spectral distribution of the incident photon flux $f_0(\lambda)$.
- A key fact from color science is that we do not need to know the incident photon flux spectral distribution completely to faithfully reproduce color; in fact, only three values per pixel can be sufficient.
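To make the photocurrent formula concrete, here is a minimal numeric sketch of the discrete integral; the sampled flux and QE curves below are made-up stand-ins, not values from these notes.

```python
import numpy as np

# Illustrative discrete evaluation of j_ph = q * integral(f0 * QE dlambda).
# The flux level and QE shape are invented for demonstration only.
q = 1.602e-19                                      # electron charge (C)
lam = np.arange(400, 701, 10) * 1e-7               # wavelengths in cm (400-700 nm)
f0 = np.full(lam.size, 1e17)                       # flat spectral flux, ph/(cm^3 * s)
QE = 0.5 * np.exp(-(((lam - 550e-7) / 80e-7) ** 2))  # bell-shaped QE, e-/ph

dlam = lam[1] - lam[0]                             # wavelength step (cm)
j_ph = q * np.sum(f0 * QE) * dlam                  # photocurrent density, A/cm^2
print(f"j_ph = {j_ph:.3e} A/cm^2")
```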

- Reason: the human eye has three types of photodetectors (cones), L, M, and S, with different spectral sensitivities.

[Figure: normalized spectral sensitivities of the S, M, and L cones vs. wavelength, 350-750 nm]

- So under uniform illumination (photon flux density $f_0(\lambda)$), the color we see can be represented by a 3-dimensional vector
  $$C = \begin{bmatrix} L \\ M \\ S \end{bmatrix} = \begin{bmatrix} \int f_0(\lambda)\, l(\lambda)\, d\lambda \\ \int f_0(\lambda)\, m(\lambda)\, d\lambda \\ \int f_0(\lambda)\, s(\lambda)\, d\lambda \end{bmatrix}$$
- Or, using discrete $\lambda$ values,
  $$C = \begin{bmatrix} l^T \\ m^T \\ s^T \end{bmatrix} F_0,$$
  where $l$, $m$, $s$, and $F_0$ are the sampled spectral sensitivities and photon flux.
- Thus $C$ can be expressed as a linear combination of three basis vectors:
  $$C = L\,\bar{l} + M\,\bar{m} + S\,\bar{s}$$
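In code, the discrete form is just a 3×n matrix times an n-vector; the cone sensitivity curves below are rough illustrative bumps, not the real LMS data.

```python
import numpy as np

# Discrete form of C = [l^T; m^T; s^T] F0: the tristimulus vector is a
# 3 x n matrix applied to the sampled photon flux. Curves are illustrative.
lam = np.linspace(400, 700, 31)                  # sample wavelengths (nm)

def bump(center, width):                         # stand-in cone sensitivity
    return np.exp(-(((lam - center) / width) ** 2))

A = np.stack([bump(565, 60), bump(545, 55), bump(445, 40)])  # rows: l, m, s
F0 = np.ones_like(lam)                           # flat incident spectrum
C = A @ F0 * (lam[1] - lam[0])                   # C = [L, M, S]
print("LMS =", C)
```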

- Note: photon flux densities with different spectral distributions can produce the same perceived color; such distributions are called metamers.

[Figure: power spectral densities of two metameric spectra A and B over 400-700 nm]

- The color basis vectors are not unique; we can use different basis vectors to represent color, e.g., RGB (but we must be careful in selecting the spectral responses for the basis vectors). So, for example, we can write
  $$C = R\,\bar{r} + G\,\bar{g} + B\,\bar{b}$$

- $C$ can be transformed from one basis vector representation to another (or from one color space to another) using a 3×3 matrix (more on this later).
- To get the three values from a pixel, color filters with different spectral responses are used, e.g., R, G, and B filters.
- So if we denote the R filter response by $\phi_R(\lambda)$, the R output from a pixel with photon flux density $f_0(\lambda)$ is
  $$v_{oR} \propto \int f_0(\lambda)\, \eta(\lambda)\, \phi_R(\lambda)\, d\lambda,$$
  where $\eta(\lambda)$ is the photodetector spectral response; similarly for the other filters.
- The camera RGB spectral responses are the products of each filter's response and the photodetector spectral response, i.e., $\phi_R(\lambda)\eta(\lambda)$, etc.

Example: RGB spectral responses for a Kodak digital camera.

[Figure: B, G, and R spectral responses vs. wavelength over 400-700 nm]

Color Filter Options

- Use three image sensors and a beam splitter (prism):
  + Every photon finds its way to a sensor
  + High spatial resolution
  - High cost; nonoverlapping color filter spectra (not desirable)
- Use a time-switched color filter:
  + High spatial resolution; each color can have a different exposure time
  - Longer exposure time, so motion blur can be a problem
  - Optical loss due to the filters; high cost (rarely used)
- Use a color filter array (CFA), or mosaic, deposited on top of the pixel array, so that each pixel outputs only one color component, e.g., R, G, or B:
  + Lowest cost option
  - Lower spatial resolution; optical loss due to the filters
  - Processing (demosaicing) needed to reconstruct the missing color components at each pixel

Color Filter Arrays

[Figure: examples of color filter array patterns]

Color Processing

Object → Camera → Display → Eye

Color processing is needed to (i) reconstruct the missing pixel color components and (ii) produce color (on a display device) that is close to what the eye would perceive.

Typical color processing steps in a digital camera:

From ADC → Color Interpolation → White Balancing → Color Correction → Gamma Correction → Color Conversion → To DSP

- White balance: adjusts for the illuminant so that, for example, a white background appears white (the eye does this adaptively)
- Color correction: transforms the camera output to the color space of the display, or to a standard color space
- Gamma correction: corrects for display nonlinearity; also needed before image processing/compression
- Color conversion: needed before image processing/compression
- Color processing is performed mostly in the digital domain (but sometimes in analog, e.g., white balancing)
- It is computationally very demanding (about 70% of the processing in a digital camera is related to color)
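To show how the stages compose, here is a skeleton of the pipeline as a chain of functions; every name and stub here is ours, standing in for the algorithms detailed on the following slides.

```python
import numpy as np

# Skeleton of the camera color pipeline above; each stage is a stub
# standing in for the algorithms described in the following slides.
def demosaic(raw):              # CFA interpolation (see below)
    return raw

def white_balance(rgb):         # per-channel gains (illustrative values)
    return rgb * np.array([1.8, 1.0, 1.4])

def color_correct(rgb, D):      # 3x3 color correction matrix
    return rgb @ D.T

def gamma_correct(rgb):         # companding, e.g., Y ~ X^0.45
    return np.clip(rgb, 0.0, 1.0) ** 0.45

def color_convert(rgb, M):      # 3x3 RGB -> YCbCr/YUV matrix
    return rgb @ M.T

def pipeline(raw, D, M):
    rgb = white_balance(demosaic(raw))
    return color_convert(gamma_correct(color_correct(rgb, D)), M)
```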

Color Interpolation (Demosaicing)

- Used to reconstruct the missing pixel color components when a CFA is used
- The interpolation method must:
  - Reduce artifacts such as aliasing and color fringing (false colors)
  - Have reasonable computational complexity
- Interpolation algorithms:
  - Nearest neighbor replication: to reconstruct a missing color component of a pixel, simply set it equal to the value of the nearest pixel with that color. Simple and fast, but results in large artifacts, especially at edges.
  - Bilinear interpolation: perform bilinear interpolation in each color plane. Relatively simple, but still suffers from some edge artifacts (which may not be visible in a video sequence).
  - 2-D filtering: a generalization of bilinear interpolation in which the filter window size and coefficients for each color plane are designed to reduce artifacts. Artifacts will still exist around edges.
  - Adaptive algorithms: since most artifacts occur around edges, change (adapt) the interpolation method when edges are present. Yields better performance but requires more computation (for edge detection).

Interpolation Algorithm Examples

Consider the Bayer pattern:

  B1 G2 B3 G
  G4 R5 G6 R
  B7 G8 B9 G
  G  R  G  R

- Bilinear interpolation (a code sketch follows below):
  $$G_5 = \frac{G_2 + G_4 + G_6 + G_8}{4}, \quad B_5 = \frac{B_1 + B_3 + B_7 + B_9}{4}, \quad B_2 = \frac{B_1 + B_3}{2}$$
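As a concrete sketch, bilinear demosaicing can be implemented as a convolution of each masked color plane with a small kernel (this also illustrates the 2-D filtering view). The code is our illustration, assuming the Bayer layout above with B at the top-left corner and a float-valued raw image.

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    """Bilinear demosaic of a float Bayer raw image (B at (0, 0), as above)."""
    H, W = raw.shape
    r_mask = np.zeros((H, W)); r_mask[1::2, 1::2] = 1   # R on odd rows/cols
    b_mask = np.zeros((H, W)); b_mask[0::2, 0::2] = 1   # B on even rows/cols
    g_mask = 1 - r_mask - b_mask                        # G elsewhere

    # At a G pixel the kernel returns the pixel itself; at R/B pixels it
    # averages the 4 cross neighbors, reproducing G5 above.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    # At same-color pixels: identity; between two: average; at the opposite
    # color: average of the 4 diagonals, reproducing B2 and B5 above.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    out = np.zeros((H, W, 3))
    out[..., 0] = convolve2d(raw * r_mask, k_rb, mode='same')
    out[..., 1] = convolve2d(raw * g_mask, k_g, mode='same')
    out[..., 2] = convolve2d(raw * b_mask, k_rb, mode='same')
    return out
```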

- Adaptive algorithm:
  - Bilinear interpolation results in color fringing and zipper effects along edges. These are most significant for luminance (green), since the eye is more sensitive to spatial variation in luminance than in chrominance.
  - For each pixel with a missing green value, perform edge detection before interpolating, and use only the pixels along the edge.
  - For example, assume a vertical edge where the pixels to its left have larger values than the ones to its right:

      R G R
      G B G
      R G R

  - Instead of using all four greens to estimate the missing green at the blue pixel, which would result in color fringing, we use only the two greens along the edge.
  - What if the edge is diagonal? Use a larger region for interpolation... (a code sketch of the edge-directed idea follows below)
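Here is a minimal sketch of the edge-directed green interpolation; comparing horizontal and vertical green gradients is a common way to realize what the slide describes, and the exact test is our assumption.

```python
def green_at(raw, i, j):
    """Edge-directed estimate of the missing green at an R/B pixel (i, j)
    of a float Bayer raw image. Averages only the two greens along the
    likely edge direction; falls back to all four when neither dominates."""
    up, down = float(raw[i - 1, j]), float(raw[i + 1, j])
    left, right = float(raw[i, j - 1]), float(raw[i, j + 1])
    dh, dv = abs(left - right), abs(up - down)
    if dh > dv:        # large horizontal gradient -> vertical edge
        return (up + down) / 2.0
    if dv > dh:        # large vertical gradient -> horizontal edge
        return (left + right) / 2.0
    return (up + down + left + right) / 4.0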

White Balancing

- Different light sources (illuminants) have different power spectral densities (PSDs)
- The PSD of the light reflected from an object is a function of both the object's surface reflectance and the illuminant PSD; more specifically, the photon flux density at a pixel is proportional to the product of the object surface reflectance $S(\lambda)$ and the illuminant PSD $E(\lambda)$, i.e., $f_0(\lambda) \propto E(\lambda)S(\lambda)$
- So, for example, a raw camera image of a white piece of paper will look yellowish under incandescent lighting and greenish under fluorescent lighting, compared to daylight
- The eye, by comparison, sees the white paper as white almost independently of the illuminant; in a scene with a white background, it adjusts the colors in the scene so that the background looks white
- Captured images must also be processed so that a white background looks white; this is called white (color) balancing

Two Approaches to White Balancing

- Fixed white balance, i.e., with a known illuminant (a code sketch follows below):
  - Capture an image of a white piece of paper under each potential illuminant (the first illuminant being the standard one, under which the image looks white). For each illuminant $i$:
    - Compute the average value of each color channel: $(R_i, G_i, B_i)$
    - Compute the ratio of each color channel to the green (luminance) channel, i.e., $R_i/G_i$ and $B_i/G_i$
    - Normalize each ratio by the corresponding ratio of the first illuminant to get $\frac{R_i/G_i}{R_1/G_1}$ and $\frac{B_i/G_i}{B_1/G_1}$
  - To white balance a captured image with a known illuminant, divide the red and blue values by the appropriate normalized ratios
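A sketch of the calibration and correction steps; the variable names and the H x W x 3 RGB image format are our assumptions.

```python
import numpy as np

def calibrate(white_images):
    """white_images: list of H x W x 3 captures of a white sheet, one per
    illuminant, the first being the reference. Returns normalized ratios."""
    avgs = np.array([img.reshape(-1, 3).mean(axis=0) for img in white_images])
    r_ratio = (avgs[:, 0] / avgs[:, 1]) / (avgs[0, 0] / avgs[0, 1])
    b_ratio = (avgs[:, 2] / avgs[:, 1]) / (avgs[0, 2] / avgs[0, 1])
    return r_ratio, b_ratio

def fixed_white_balance(img, r_ratio_i, b_ratio_i):
    """Divide red and blue by the normalized ratios for known illuminant i."""
    out = img.astype(float).copy()
    out[..., 0] /= r_ratio_i
    out[..., 2] /= b_ratio_i
    return out
```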

- Automatic white balance is used if the illuminant is not known:
  - Most algorithms used in cameras are proprietary; most use some variation of the Gray World assumption
  - The Gray World assumption is that, averaged over all scenes, $R_{avg} = G_{avg} = B_{avg}$
  - Simple Gray World algorithm: equalize the averages of the three color channels by dividing each red value by $R_{avg}/G_{avg}$ and each blue value by $B_{avg}/G_{avg}$ (a code sketch follows below). This works well except for atypical scenes, e.g., a forest with mostly bright green leaves, whose image will look grayish after white balancing
  - Another approach is to use the color information to estimate (or decide on) the illuminant
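A minimal sketch of the simple Gray World algorithm, again assuming an H x W x 3 RGB image.

```python
import numpy as np

def gray_world(img):
    """Equalize channel averages: divide red by R_avg/G_avg, blue by B_avg/G_avg."""
    out = img.astype(float).copy()
    r_avg, g_avg, b_avg = out.reshape(-1, 3).mean(axis=0)
    out[..., 0] /= r_avg / g_avg
    out[..., 2] /= b_avg / g_avg
    return out
```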

Color Correction

- The color filter technology and the photodetector spectral response determine the camera color space
- To ensure that color from the camera looks the same on a display, the camera output must be transformed to the color space defined by the display's spectral responses
- Since many display types may be used to render a captured image, it is customary to transform the camera output to a standard color space, e.g., one corresponding to the LMS spectral responses, in the camera; correction for each display type is then performed outside the camera
- To transform the camera output to a standard color space we use a 3×3 matrix $D$; thus, if $C$ is the color from a pixel, the corrected color is $C_o = DC$

- So how do we find $D$? If $A_1$ is the camera spectral response matrix (3×n) and $A_2$ is the LMS spectral response matrix (3×n), then we can select $D$ such that $DA_1$ is as close to $A_2$ as possible, which can be done, for example, using least squares (a code sketch follows below)
- This seems to work well; the following compares the corrected RGB responses (of the Kodak camera) to the LMS spectral responses

[Figure: least-squares-corrected Kodak camera RGB responses compared to the LMS spectral responses]
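A minimal sketch of the least-squares fit for $D$; the spectral response matrices here are random stand-ins for measured data.

```python
import numpy as np

# Solve min_D ||D A1 - A2||_F. Transposing gives the standard least-squares
# form A1^T D^T ~ A2^T that np.linalg.lstsq accepts.
rng = np.random.default_rng(0)
n = 31                                   # number of wavelength samples
A1 = rng.random((3, n))                  # camera RGB responses (stand-in)
A2 = rng.random((3, n))                  # LMS responses (stand-in)

D = np.linalg.lstsq(A1.T, A2.T, rcond=None)[0].T   # 3 x 3 correction matrix

C = rng.random(3)                        # a camera color vector
C_o = D @ C                              # corrected color, C_o = D C
```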

Gamma Correction

- The intensity $Z$ of the light generated by a display device is not linear in its input $Y$, e.g., $Z \propto Y^{2.22}$
- We must prewarp the image sensor output $X$ so that the display output is linear in the illumination at the camera; this is done using a companding function, e.g., $Y \propto X^{0.45}$
- Also needed prior to image enhancement and compression: most image processing algorithms assume pixel values proportional to perceptual brightness, which is close to the prewarped value $Y$
- Typically implemented using three lookup tables, one for each color component (a code sketch follows below)
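A minimal sketch of the lookup-table implementation, assuming 8-bit pixel values; one table is shown, whereas a camera would typically use a separate table per color component.

```python
import numpy as np

# Precompute Y ~ X^0.45 for all 256 possible 8-bit inputs.
lut = ((np.arange(256) / 255.0) ** 0.45 * 255.0).astype(np.uint8)

def gamma_correct(img):
    """img: H x W x 3 uint8 linear RGB; indexing applies the LUT per pixel."""
    return lut[img]
```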

Color Space Conversion

- Transform RGB to YCbCr, or to YUV, using a 3×3 matrix:
  $$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$
- Most image enhancement and compression are performed on the luminance and chrominance values separately:
  - The eye is more sensitive to luminance than to chrominance
  - Color is preserved before and after processing
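As an illustration, here is the conversion with the ITU-R BT.601 (JPEG full-range) coefficients as the 3×3 matrix; the notes leave the $a_{ij}$ unspecified, so this particular matrix is our example choice.

```python
import numpy as np

# RGB -> YCbCr with BT.601 luma coefficients, one common choice for the
# a_ij above; input assumed to be float RGB in [0, 1].
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])

def rgb_to_ycbcr(rgb):
    ycc = rgb @ M.T
    ycc[..., 1:] += 0.5        # center Cb and Cr around 0.5
    return ycc
```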