Design of Practical Color Filter Array Interpolation Algorithms for Cameras, Part 2

James E. Adams, Jr.
Eastman Kodak Company
jeadams@kodak.com

Abstract

Single-chip digital cameras use a color filter array and a subsequent interpolation strategy to produce full-color images. While the design of the interpolation algorithm can be grounded in traditional sampling theory, the fact that the sampled data is distributed among three different color planes adds a level of complexity. Existing interpolation methods are usually derived from general numerical methods that do not make many assumptions about the nature of the data. Significant computational economies, without serious losses in image quality, can be achieved if it is recognized that the data is image data and an appropriate image model is assumed.

1. Introduction

In a previous paper [1], the author described the makeup of a digital camera image processing chain, with particular emphasis on the color filter array (CFA) interpolation process. This paper again explores the CFA interpolation process, this time from the standpoint of Fourier spectrum analysis and optimum algorithm design. Though the work presented can be generalized to most any CFA pattern, for simplicity the Kodak Bayer CFA pattern [2] will be assumed throughout this paper. Figure 1 is an illustration of this pattern. In Fig. 1, R stands for red, G stands for green, and B stands for blue.

Figure 1. Bayer CFA pattern

This paper will concentrate on the reconstruction of the luminance information in the image. In this case, the green pixel information will be treated as the luminance information. The subsequent reconstruction of the red and blue information will be performed using Cok's method [3], described in the author's previous paper [1]. A one-dimensional approach to CFA interpolation will be used. Used in conjunction with an adaptive strategy for selecting either a horizontal or a vertical one-dimensional pixel neighborhood for each pixel in question, this produces a CFA interpolation algorithm capable of very high-quality image reconstructions. In a previous paper [4], the author described how to use the pixels within the resulting one-dimensional slice to produce the best estimate of the missing green pixel value in question. This paper will discuss how to algorithmically choose an appropriate orientation for interpolation.

In order to design an optimum green pixel value predictor, the interpolation problem will be stated as a simple signal sampling and recovery problem, after Gaskill [5]. This will establish the characteristics of the perfect CFA interpolation predictor. Subsequently, the best approximation to this ideal predictor from [4] will be compared with the ideal predictor, and the results will be used to develop an optimum pixel neighborhood selection scheme.

It should be realized that the image processing operation modeled here, CFA sampling and interpolation, is not shift-invariant. Therefore, the following analysis, which assumes a linear, shift-invariant (LSI) system, cannot be expected to be rigorously correct. However, it does provide a framework from which some general and pragmatic results can be derived. The reader is reminded to keep this caveat in mind.

2. Sampling theory review

We will assume the one-dimensional pixel neighborhood in Fig. 2 throughout most of this paper.

... R₋₂ G₋₁ R₀ G₁ R₂ ...

Figure 2. One-dimensional pixel neighborhood

In Fig. 2, R would be either a red pixel (for red-green rows) or a blue pixel (for blue-green rows) and G would be a green pixel. If f(x) is the original green image information, then the sampled green data, f_s(x), is given in Eq. 1.
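As a concrete illustration of this sampling geometry, a minimal Python sketch follows. It builds a Bayer mosaic from a full RGB image and extracts the one-dimensional neighborhood of Fig. 2 for a pixel on a red-green row; the placement of the R, G, and B sites within each 2x2 tile and the function names are illustrative assumptions, not anything prescribed above.

    import numpy as np

    def bayer_sample(rgb):
        # rgb: (H, W, 3) array with even H and W. Returns the (H, W) CFA mosaic
        # of Fig. 1, keeping one color sample per pixel. The chosen 2x2 layout
        # (G at (even, even) and (odd, odd), R at (even, odd), B at (odd, even))
        # is only an assumed convention.
        h, w, _ = rgb.shape
        cfa = np.empty((h, w), dtype=rgb.dtype)
        cfa[0::2, 0::2] = rgb[0::2, 0::2, 1]   # green
        cfa[1::2, 1::2] = rgb[1::2, 1::2, 1]   # green
        cfa[0::2, 1::2] = rgb[0::2, 1::2, 0]   # red (red-green rows)
        cfa[1::2, 0::2] = rgb[1::2, 0::2, 2]   # blue (blue-green rows)
        return cfa

    def horizontal_neighborhood(cfa, row, col):
        # One-dimensional slice ... R-2 G-1 R0 G1 R2 ... of Fig. 2,
        # centered on a non-green site at (row, col).
        return cfa[row, col - 2:col + 3]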

The Fourier transform of Eq. 1 is given in Eq. 2. Equation 2 indicates the well-known fact that the spectrum of the sampled signal consists of the spectrum of the original signal, F(ξ), replicated along the frequency axis at regular intervals. Assuming the spectrum replicates do not overlap, then by eliminating the spectrum replicates, F(ξ), and therefore f(x), can be recovered. The ideal interpolation filter to perform such a spectrum replicate elimination is

    Intp(ξ) = rect(2ξ)

Applying Intp(ξ) to F_s(ξ) produces the desired result, to within an unimportant multiplicative constant. (See Eq. 5.) Taking the inverse Fourier transform of Eq. 5 produces the ideal interpolation process. (See Eq. 6.) In Eq. 6, Gaskill's definition of the sinc function, sinc(x) = sin(πx)/(πx), is assumed. Of course, Eq. 6 is the well-known Shannon-Whittaker sampling theorem [6]. Equally well known is that a direct implementation of Eq. 6 is impractical because the summation has a painfully slow rate of convergence. Evaluating Eq. 6 for odd integral values (i.e., green pixels) and for even integral values (i.e., red or blue pixels) produces the desired results (Eqs. 7 and 8): at the green pixel locations the existing green values are simply returned, while at the red and blue pixel locations an explicit interpolation formula for the missing green values is obtained.

3. Analysis of an optimum family of predictors

Since Eq. 6 does not converge quickly enough for practical use, an approximation must be employed. In a previous paper [4], the author derived an approximation with optimum spatial frequency performance. The form of this predictor, a generalized 5-point FIR filter [7-9], is given in Eq. 9. In Eq. 9, the first 5-point kernel is convolved with the existing green pixel values, and the second 5-point kernel is convolved with the existing red pixel values. A good numerical example of Eq. 9 is given in Eq. 10. The spatial frequency response of Eq. 10 is given in Eq. 11:

    Intp₅(ξ) = cos²(πξ) + (1/2) sin²(2πξ)   (11)

Figure 3 is a plot of the ideal response and Eq. 11. It can be seen that there is good agreement between the ideal predictor and the 5-point predictor (Eq. 11) up to roughly 0.25 cycles/sample. Beyond 0.25 cycles/sample the curves diverge significantly. We can define a simple figure of merit for how well the 5-point predictor performs as a function of spatial frequency by simply subtracting Eq. 11 from the ideal response and taking the absolute value. (See Eq. 12 and Fig. 4.)

    Clas(ξ) = |cos²(πξ) + (1/2) sin²(2πξ) − rect(2ξ)|   (12)
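A small numerical sketch of this analysis follows. The predict_green() weights (1/2 on each neighboring green plus a 1/4-scaled red Laplacian) are an assumed instance of the generalized 5-point form of Eq. 9 and are not necessarily the exact coefficients of Eq. 10; intp5() and merit() evaluate Eqs. 11 and 12 as written above.

    import numpy as np

    def predict_green(neigh):
        # neigh = [R-2, G-1, R0, G1, R2]; returns an estimate of the missing G0.
        # Weights are illustrative, not necessarily those of Eq. 10.
        r_m2, g_m1, r_0, g_p1, r_p2 = neigh
        return (g_m1 + g_p1) / 2.0 + (2.0 * r_0 - r_m2 - r_p2) / 4.0

    def intp5(xi):
        # Spatial frequency response of the 5-point predictor, Eq. 11.
        return np.cos(np.pi * xi) ** 2 + 0.5 * np.sin(2.0 * np.pi * xi) ** 2

    def merit(xi):
        # Figure of merit of Eq. 12: |Intp5(xi) - rect(2 xi)| for 0 <= xi <= 0.5.
        ideal = (np.abs(xi) < 0.25).astype(float)
        return np.abs(intp5(xi) - ideal)

    xi = np.linspace(0.0, 0.5, 11)
    print(np.round(merit(xi), 3))  # near zero below ~0.25 cycles/sample, peaks beyond it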

Figure 3. Comparison of ideal and optimum 5-point interpolating function spectra

Figure 4. Absolute difference between ideal and optimum 5-point interpolating function spectra

Clearly, we would like to operate solely in the spatial frequency region from 0 to just below 0.25 cycles/sample.

4. Creation of an optimum classifier

We can expand Fig. 2 into a two-dimensional representation. (See Fig. 5.)

Figure 5. Two-dimensional pixel neighborhood

Our strategy is going to be to evaluate the spatial frequency response of the data in both the horizontal and vertical directions and to pick the direction that has the least amount of energy above 0.25 cycles/sample, i.e., the least energy in the region where Clas(ξ) is greatest. In order to easily perform this spatial frequency content test, we will create a numeric classifier with a spatial frequency response similar to Eq. 12. Once we have this classifier, we simply evaluate it in both the horizontal and vertical directions and select the orientation that produces the smallest value. This should correspond to the direction that has the least amount of spatial frequency activity above 0.25 cycles/sample. The challenge is that we are restricted to the available data points called out in Fig. 5.

Referring back to our one-dimensional pixel neighborhood of Fig. 2, two terms of our classifier immediately suggest themselves. These are the well-known gradient and Laplacian operators. The gradient operator can be applied to the green data points and the Laplacian operator to the red data points. In terms of convolution kernels, we might express our initial classifier as in Eq. 13 [7-9]:

    clas = 2 |(1 0 −1)| + |(−1 0 2 0 −1)|   (13)

The first term is the green gradient operator and the second term is the red Laplacian operator. Note that we incorporate absolute value signs to prevent the two operators from canceling each other. We also scale the gradient operation by two to permit both terms to have equal full-scale values. The equivalent clas(x) and Clas(ξ) functions are

    clas(x) = [2δ(x + 1) − 2δ(x − 1)] + [2δ(x) − δ(x − 2) − δ(x + 2)]   (14)

and

    Clas(ξ) = 4 sin²(2πξ) + j4 sin(2πξ)   (15)

where j is the square root of −1. Note that we have temporarily removed the absolute value signs in going from Eq. 13 to Eq. 14.

Figure 6 is a plot of the absolute values of the components of Eq. 15 together with the ideal response of Eq. 12. While there is reasonable agreement above 0.25 cycles/sample, below that frequency the responses diverge. What is needed is a third term that has a low response below 0.25 cycles/sample and a high response above 0.25 cycles/sample. If pixel R₀ in Fig. 2 were actually another green pixel, G₀, we could add a second Laplacian of the central three green pixel values to perform the needed discrimination. The new set of convolution kernels and the corresponding clas(x) and Clas(ξ) expressions would then follow in the same manner [7-9]; the resulting classifier kernel is the one referred to below as Eq. 18.
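To make the orientation test concrete, a short Python sketch of the classifier logic follows. The gradient and Laplacian terms mirror Eq. 13; the mixed Laplacian of the three central green values and the offset-correlation estimate of G₀ are written in an assumed form that anticipates the discussion below, since the exact expressions of Eq. 18 and the G₀ estimate are not reproduced here.

    def estimate_g0(r_0, r_prev, g_prev):
        # Rough G0 at a red (or blue) site, assuming the red and green channels
        # are correlated to within an offset: G0 ~ R0 + (Gprev - Rprev), where
        # Gprev is a previously interpolated green at a nearby red site Rprev.
        # This is an assumed form of the image model discussed below.
        return r_0 + (g_prev - r_prev)

    def classifier(neigh, g0_est):
        # neigh = [R-2, G-1, R0, G1, R2]. Smaller values indicate less signal
        # energy above 0.25 cycles/sample along this orientation.
        r_m2, g_m1, r_0, g_p1, r_p2 = neigh
        grad  = 2 * abs(g_p1 - g_m1)            # green gradient, scaled by two (Eq. 13)
        lap   = abs(2 * r_0 - r_m2 - r_p2)      # red Laplacian (Eq. 13)
        mixed = abs(2 * g0_est - g_m1 - g_p1)   # assumed "mixed" Laplacian on the central greens
        return grad + lap + mixed

    def choose_direction(h_neigh, v_neigh, g0_h, g0_v):
        # Evaluate the classifier along both axes of Fig. 5 and interpolate
        # along the direction with the smaller value.
        if classifier(h_neigh, g0_h) <= classifier(v_neigh, g0_v):
            return "horizontal"
        return "vertical"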

Figure 6. Comparison of classifier spatial frequency responses. The gradient and Laplacian functions have been scaled in amplitude for ease of comparison with the ideal response.

Figure 7 is a repeat of Fig. 6 with the second Laplacian (labeled the mixed Laplacian for reasons explained below) added.

Figure 7. Comparison of classifier spatial frequency responses. The gradient, Laplacian, and mixed Laplacian functions have been scaled in amplitude for ease of comparison with the ideal response.

Our new term has the desired frequency response. Now we need to address the fact that we do not have G₀. For the purposes of our classifier we can get a rough estimate of G₀ from previously interpolated green pixels by assuming the red and green channels are perfectly correlated to within an offset term. (See Eq. 21.) We can assume this image model because the RGB color planes of an image tend to be correlated [10]. If we are processing the line in Fig. 2 from left to right, then by the time the calculation of G₀ is ready to begin, a green value has already been determined for the red pixel location to the left. Therefore, for the sake of using the convolution kernel in Eq. 18, we may estimate G₀ from these previously determined values. (See Eq. 19.) This estimate will probably not be nearly as accurate as the one produced by Eq. 10, but it will generally be sufficient for Eq. 18 to allow the correct interpolation orientation to be determined.

All that is left is to use the results of Eq. 18 to select the correct orientation for interpolation. If we return to Fig. 5, we apply Eq. 18 to the horizontal pixel neighborhood to determine the horizontal classification value, and we apply Eq. 18 to the vertical pixel neighborhood to determine the vertical classification value. Note that for the vertical classification calculation, we will want to estimate G₀ from the previously processed pixel values above it, assuming we are processing the image from top to bottom. Once we have classification values for both directions, we simply choose the direction that produces the smaller classification value as our preferred direction of interpolation.

5. Summary

The process of color filter array sampling and interpolation can be cast in the form of a signal sampling and recovery problem using standard Fourier spectrum analysis. A simple image model can be used to permit the use of information from color channels other than green to aid in the reconstruction of the green (luminance) record. Fourier spectra can be derived for various CFA classifier kernels, and relative image quality predictions can be made from these spectra.

6. Acknowledgments

The author would like to thank John Hamilton, Kevin Spaulding, Brian Keelan, and Karin Topfer, all of Eastman Kodak Company, for their valuable contributions to this material.

7. References

[1] J. E. Adams, Jr., "Interactions between color plane interpolation and other image processing functions in electronic photography," Proceedings of SPIE, C. Anagnostopoulos and M. Lesser, eds., vol. 2416, pp. 144-151, SPIE, Bellingham, WA, 1995.

[2] B. E. Bayer, "Color imaging array," U.S. Patent 3,971,065, 1976.

[3] D. R. Cok, "Signal processing method and apparatus for producing interpolated chrominance values in a sampled color image signal," U.S. Patent 4,642,678, 1987.

[4] J. E. Adams, Jr., "Design of practical color filter array interpolation algorithms for digital cameras," Proceedings of SPIE, D. Sinha, ed., vol. 3028, pp. 117-125, SPIE, Bellingham, WA, 1997.

[5] J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics, John Wiley & Sons, New York, p. 266, 1978.

[6] Ibid., p. 271.

[7] J. E. Adams, Jr. and J. F. Hamilton, Jr., "Adaptive color plane interpolation in single sensor color electronic camera," U.S. Patent 5,506,619, 1996.

[8] J. F. Hamilton, Jr. and J. E. Adams, Jr., "Adaptive color plane interpolation in single sensor color electronic camera," U.S. Patent 5,629,734, 1997.

[9] J. E. Adams, Jr. and J. F. Hamilton, Jr., "Adaptive color plane interpolation in single sensor color electronic camera," U.S. Patent 5,652,621, 1997.

[10] K. Topfer, J. E. Adams, and B. W. Keelan, "Modulation transfer functions and aliasing patterns of CFA interpolation algorithms," Proceedings of IS&T's PICS Conference, Portland, OR, p. 67, 1998.