Abstract. RAMANATH, RAJEEV. Interpolation Methods for the Bayer Color Array (under the guidance of Dr. Wesley E. Snyder)


Abstract

Digital still color cameras based on a single CCD have a mosaicked mask of color filters over the sensor; the Bayer array configuration for these filters is the most popular. The recorded data must therefore be interpolated to recover the full scene information. Many existing interpolation (demosaicking) algorithms use modifications of bilinear interpolation and introduce a variety of artifacts into the images. These algorithms are investigated here. A new method for restoring these color images using an optimization method known as Mean Field Annealing is introduced, using a variety of image prior models, and its performance relative to existing demosaicking methods is reported.


Biography

Rajeev Ramanath was born in New Delhi, the capital of India, in February. He spent his early years in Gorakhpur, and his family later relocated to Madras (now known as Chennai), where he had his high school education. In 1998, he earned a Bachelor of Engineering degree from the Birla Institute of Technology and Science, Pilani, India, majoring in Electrical Engineering. He has since moved to Raleigh, NC, to pursue higher education.

Dedication

to my parents, Ramanath and Meera...

Acknowledgments

Although merely mentioning names here will by no means satisfy my need to thank all those involved with the work in this document, I am bound to mention a few. I would like to thank my advisor, Dr. Wesley Snyder, for all the support and guidance provided, not to mention his innate ability to discuss problems on the basis of scientific intellect. Special gratitude goes to Dr. H. J. Trussell for his valuable insight. My gratitude also goes to Dr. Jack Holm (HP Labs) for all the help with the data gathered, and to Dr. Toshi Hori (Pulnix America Inc.) for providing us with data and a camera to perform experiments. I also wish to thank Dr. Y. F. Yoo (Texas Instruments), who was very helpful in providing calibration information about the cameras. Dr. James Adams (Eastman Kodak Company) deserves a lot of credit for his insight and advice during the course of this project, which he offered ever so willingly. Many thanks to Dr. Griff Bilbro, Dr. Paul Hemler and Dr. Richard Kuehn for their encouragement, direction and timely guidance. It would be unfair if I did not thank my parents, Ramanath and Meera, and my sister, Ramya, for having stood by me in every decision I have made; Sumathi, for being there for me and understanding my every idiosyncrasy; and my friends, who helped me review my work, encouraged me and helped me sort things out; thank you.

Contents

LIST OF TABLES
LIST OF FIGURES

Chapter 1  Introduction
    Color Filter Arrays
    Demosaicking
    Outline

Chapter 2  Color Fundamentals
    Introduction
    Retinal Receptors of the Human Eye
    Color Models
    Color Matching Functions
    RGB Color Model
    HSV Color Model
    CIE-XYZ Color Model
    Color Differences
    CIE-L*a*b* Color Model
    ISO RGB Color Model
    sRGB Color Model (for displays)
    Caveat (Interpolating in Color Space)

Chapter 3  Image Model
    Introduction
    Linear Model
    A-priori imaging constraints
    Blur
    Noise

Chapter 4  Common demosaicking methods in Bayer Arrays
    Introduction
    Ideal Interpolation
    Bilinear Interpolation
    Constant Hue-based Interpolation
    Neighborhood considerations
    Median-based Interpolation
    Gradient Based Interpolation
    Adaptive Color Plane Interpolation
    Comparison of Interpolation Methods
    Type I Test Images
    Type II Images
    Type III Images
    RGB Images
    Results

Chapter 5  MFA Restoration Methods
    Introduction
    Reconstruction processes
    Mean Field Annealing
    Noise Term
    Prior Term
    Choice of parameters

Chapter 6  Demosaicking using Mean Field Annealing
    Introduction
    Independent Restoration (MFA-RGB)
    Independently Piecewise Uniform
    Independently Piecewise Linear
    Vector MFA (VMFA)
    VMFA - Noise Term
    VMFA - Prior Term
    Piecewise-constant hue model (MFA-HSV)

Chapter 7  Experimental Approach
    Introduction
    CFA Sampling
    Synthetic Images

Chapter 8  Results
    Introduction
    Synthetic Images
    Real-world Images
    Measures

Chapter 9  Improvements
    Introduction
    Constant-Hue based Interpolation
    Median-Based Interpolation
    Vector median based interpolations

Chapter 10  Conclusion and Future Work

References

Appendix A  Gamma Correction Explained
Appendix B  RGB / HSV conversion

LIST OF TABLES

Table 4.1  ΔE*ab metric
Table 4.2  ΔE_RGB metric (×10⁻³)
Table 5.1  Parameter list for MFA
Table 7.1  ΔE*ab metric for subpixel Gaussian blurred images
Table 7.4  ΔE_RGB metric (×10⁻³) for subpixel Gaussian blurred images
Table 8.1  Error Metrics for piecewise-constant restored images
Table 8.2  Error Metrics for piecewise-linear restored images
Table 8.3  Number of floating point operations required for a 256x256 image

LIST OF FIGURES

Figure 1.1   The four possible arrangements of the Bayer Array Sensors
Figure 1.2   Yamanaka Color Filter Array. Chrominance samples are out of phase on each scan-line
Figure 1.3   Illustration of mosaicking artifacts (a) Original reference image (b) a small portion from that image (c) result of linear interpolation
Figure 2.1   Color components of light, based on wavelength in nanometers. Note: names do not define wavelength regions, but are for the main regions of the spectrum. The visible region of the spectrum is from about 400 to 700 nm.
Figure 2.2   The scotopic and photopic sensitivities of the human eye. Notice the region where the sensitivity is highest in each case.
Figure 2.3   Color sensitivity of the human eye
Figure 2.4   Principle of trichromatic color matching by additive mixing of lights. R, G, B are sources of red, green and blue light whose intensities can be adjusted. C is the target color the observer needs to match by changing the intensities of R, G, B.
Figure 2.5   The color matching functions of the CIE RGB standard, represented in terms of the stimulus provided by the three wavelengths mentioned in the text
Figure 2.6   The RGB Color Cube
Figure 2.7   The HSV Color hexcone
Figure 2.8   The CIE-XYZ color matching curves
Figure 2.9   The chromaticity diagram viewed in the CIE XYZ model. Notice that the hues are not evenly spaced about the perimeter of this horse-shoe plot.
Figure 2.10  The result of pioneering work done by MacAdam, resulting in ellipses that mark the just-noticeably-different colors. Ellipses are scaled up in size to show their orientation.
Figure 2.11  Subjectively equal color steps on the chromaticity diagram. Each line segment represents a color difference about three times greater than the just-noticeable distance for a 2° field.
Figure 2.12  ISO RGB Color Matching Functions
Figure 2.13  OECF for a typical DSC [52]
Figure 2.14  Illustration of results of interpolating in different color spaces (a) original image (b) RGB interpolation (c) HSV interpolation
Figure 3.1   Image formation model in a digital camera system
Figure 4.1   Sample Bayer Pattern
Figure 4.2   Illustration of fringe or zipper effect resulting from the linear interpolation process (a) Original image (only 2 colors) (b) subsampled Bayer image (c) result of linear interpolation. Notice the color fringe in locations 5 and 6.
Figure 4.3   (a) RGB image (b) Green channel (c) Green minus Red (d) Green minus Blue
Figure 4.4   Illustration of Freeman's interpolation method for a two-channel system (a) Original image (b) subsampled Bayer image (c) result of linear interpolation (d) Green minus Red (e) median-filtered result of the difference image (f) reconstructed image
Figure 4.5   Sample Bayer Neighborhood (Ai = Chrominance, Gi = Luminance)
Figure 4.6   Bayer Array Neighborhood
Figure 4.7   Type I Test Images (a) Test Image 1 has vertical bars with decreasing thicknesses (16 pixels down to 1 pixel) (b) Test Image 2 has bars of constant width (3 pixels)
Figure 4.8   (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on Test Image 1. Note: images are not the same size; each has been cropped to hide edge effects.
Figure 4.9   (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on Test Image 2. Note: images are not the same size; each has been cropped to hide edge effects.
Figure 4.10  Type II Test Images (a) horizontal bars with decreasing thicknesses (16 pixels down to 1 pixel) (b) constant width (3 pixels)
Figure 4.11  Type III Test Image (a) Full-size Starburst image (b) Upper right quadrant, used in tests
Figure 4.12  (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the Starburst image. Note: images are not the same size; each has been cropped to hide edge effects.
Figure 4.13  (a) Full-resolution macaw image (b) ROI about the green macaw's eye (c) ROI about the red macaw's eye
Figure 4.14  (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the macaw image, showing the ROI about the green macaw's eye
Figure 4.15  (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the macaw image, showing the ROI about the red macaw's eye
Figure 4.16  (a) Full-resolution girl image (b) ROI about the balloon ribbon (c) ROI about the girl's mouth
Figure 4.17  (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the girl image, showing the ROI about the balloon ribbon
Figure 4.18  (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the girl image, showing the ROI about the girl's mouth
Figure 5.1   Illustration of the blur estimation process (a) sample image showing CFA pattern (b) samples in the red channel along the horizontal axis
Figure 6.1   Flow-chart showing the proposed restoration process
Figure 7.1   Illustration of the missing-pixel artifact (a) Original CFA image (b) Result after linear interpolation
Figure 7.2   Type I Test Image (a) Test Image 2 used earlier (b) Test Image 2 after a 2x2 boxcar blur (c) Test Image 2 after a subpixel blur of variance unity, subsampled accordingly (bars of constant width, 3 pixels)
Figure 8.1   Piecewise-constant synthetic images (a) Test Image 1 (b) Test Image 2 (c) Test Image 3 (d) Test Image 4
Figure 8.2   MFA Restored Test Image 1 (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.3   MFA Restored Test Image 2 (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.4   MFA Restored Test Image 3 (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.5   Regions of interest in the starburst image (a) ROI 1 (b) ROI 2
Figure 8.6   MFA Restored ROI 1 of the starburst image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.7   MFA Restored ROI 2 of the starburst image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.8   Piecewise-linear synthetic images (a) Test Image 5 (wedge) (b) Test Image 6 (color wedges)
Figure 8.9   MFA Restored wedge image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.10  ROI in the color wedges image
Figure 8.11  MFA Restored ROI of the color-wedges image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.12  CFA resolution chart image
Figure 8.13  Regions of interest in the resolution chart image (a) ROI 1 (b) ROI 2
Figure 8.14  MFA Restored ROI 1 of the resolution chart image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.15  MFA Restored ROI 2 of the resolution chart image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.16  CFA matisse image
Figure 8.17  Regions of interest in the matisse image (a) ROI 1 (b) ROI 2
Figure 8.18  MFA Restored ROI 1 of the matisse image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.19  MFA Restored ROI 2 of the resolution chart image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.20  CFA books image
Figure 8.21  Region of interest in the books image
Figure 8.22  MFA Restored ROI 1 of the matisse image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
Figure 8.23  CFA car image
Figure 8.24  Region of interest in the car image
Figure 8.25  MFA Restored ROI of the car image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration with piecewise-constant prior model
Figure 8.26  MFA Restored ROI of the car image (a) result of de-mosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration with piecewise-linear prior model
Figure 8.27  Error rates in the restoration process vs. step count
Figure 9.1   Example spectral sensitivities of a digital color camera [52]
Figure 9.2   Improvement on Cok's algorithm (a) original RGB image (b) result of Cok's algorithm (c) result of using the new definition of hue
Figure 9.3   Modification of Freeman's algorithm (a) original RGB image (b) result of Freeman's algorithm (c) result of using the mean instead of the median
Figure A.1   Illustration of gamma correction for displays (a) Original input to display (b) as seen on a monitor with gamma of 2.5 (c) gamma-corrected input (d) as seen after gamma correction
Figure A.2   Image profiles and gamma correction curves for the images used earlier (a) Original input to display (b) as seen on a monitor with gamma of 2.5 (c) gamma-corrected input (d) as seen after gamma correction
Figure A.3   Illustration showing how gamma correction affects hue (a) actual color as seen on a display with a gamma of 2.5 (b) color simulated with a gamma correction

Chapter 1

Introduction

Digital color cameras sample the spectrum of visible light using three or more filters. Each location on the CCD (photo-site) captures one sample of the color spectrum, giving us a mosaic of samples. The process of recovering the original image is referred to as demosaicking. The choice of the sensitivities of the CCD elements is critical. To reconstruct a color image, we need to sample luminance and chrominance information. Since the Human Visual System (HVS) is less sensitive to degradations in the chrominance information than to those in the luminance information [29], the chrominance channels are often sub-sampled more heavily than the luminance channel. In other words, since the scene information must be sub-sampled, the chrominance channels may be sampled at a rate much lower than the luminance channel.

It shall be shown later that the spectral sensitivity of the HVS to luminance is similar to the spectral sensitivity of the green channel. Hence, green sensors replace the luminance channel, and red and blue sensors replace the chrominance channels. We can now construct a color filter array to perform the sampling of the color spectrum.

1.1 Color Filter Arrays

The Bayer Array [9] is one of the many possible realizations of color filter arrays. It has a mosaic of red, green and blue optical filters which can be described by a simple 2x2 arrangement, as shown in Figure 1.1.

    R G    B G    G R    G B
    G B    G R    B G    R G

Figure 1.1 The four possible arrangements of the Bayer Array Sensors (phase shifted)
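To make the sampling concrete, the sketch below (an illustrative helper, not part of any camera pipeline described in this work) builds a Bayer mosaic from a full RGB image using the first of the four arrangements above, with red on even rows/even columns:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an H x W x 3 RGB image onto a single-channel Bayer mosaic
    using the arrangement
        R G
        G B
    Each photosite keeps exactly one of the three color samples; the
    other two must later be recovered by interpolation."""
    h, w, _ = rgb.shape
    cfa = np.empty((h, w), dtype=rgb.dtype)
    cfa[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red on even rows, even columns
    cfa[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green on even rows, odd columns
    cfa[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green on odd rows, even columns
    cfa[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue on odd rows, odd columns
    return cfa
```

The other three arrangements of Figure 1.1 differ only in which channel lands on which parity of rows and columns, i.e. in the phase of the slicing above.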

The arrangements describe the upper left corner of the sensor array; the rest of the array is determined by repeating the pattern in both spatial dimensions. Other implementations of a color-sampling grid have been incorporated in commercial cameras, also using the principle that the luminance channel needs to be sampled more densely than the chrominance channels. For instance, the pattern shown in Figure 1.2, developed by Yamanaka [55], is widely used in digital cameras produced by Sony Corporation. It repeats in a similar fashion, with the chrominance samples shifting horizontally relative to the green data on alternate scan-lines.

    R G B G B G R G
    G R G B G B G R
    B G R G R G B G
    G B G R G R G B
    R G B G B G R G
    G R G B G B G R

Figure 1.2 Yamanaka Color Filter Array. Chrominance samples are out of phase on each scan-line

There are many more arrangements of these filters published, but not all have been implemented, for reasons of complexity and cost. These include arrays obtained by random allocations of photosites using blue-noise mask patterns [57], and those

developed in hexagonal-grid form by Fujifilm Inc., etc. (implemented recently with good success [30]). In this dissertation, however, we shall look at Bayer Color Arrays.

1.2 Demosaicking

The mosaic of colors needs to be undone, recovering three full color planes, in order to obtain a full color reproduction of the scene information. This process is referred to as demosaicking. There are a variety of methods available for this interpolation process, which shall be discussed. In this dissertation, we restore images obtained from the mosaicking process in order to reduce the artifacts observed. Included in Figure 1.3 are results obtained from demosaicking using linear interpolation (details discussed in Chapter 4), highlighting the need for a restoration step. The original image [1] is shown in Figure 1.3a, from which a region of interest is extracted in Figure 1.3b. Linear interpolation gives rise to artifacts in the image,

as shown in Figure 1.3c. These images are included here just to give the reader an idea of the problem at hand.

Figure 1.3 Illustration of mosaicking artifacts (a) Original reference image (b) a small portion from that image (c) result of linear interpolation
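As a preview of the linear method analyzed in Chapter 4, the following sketch fills in the two missing colors at each photosite by averaging the available same-color samples in the surrounding 3x3 window. This is a simplified, hypothetical implementation (borders are handled by wraparound for brevity), not the exact procedure of any published algorithm:

```python
import numpy as np

def bilinear_demosaic(cfa, pattern):
    """Naive bilinear demosaicking. `cfa` is the H x W mosaic; `pattern`
    is an H x W array of channel indices (0=R, 1=G, 2=B) recording which
    color each photosite measured. Missing values become the mean of the
    same-color samples in the 3x3 neighborhood; measured values are kept
    as-is. np.roll wraps at the image borders (a simplification)."""
    out = np.zeros(cfa.shape + (3,), dtype=float)
    for c in range(3):
        plane = np.where(pattern == c, cfa, 0.0).astype(float)
        mask = (pattern == c).astype(float)
        num = np.zeros_like(plane)   # sum of available samples
        den = np.zeros_like(plane)   # count of available samples
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                num += np.roll(np.roll(plane, dy, axis=0), dx, axis=1)
                den += np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
        avg = num / np.maximum(den, 1.0)
        out[..., c] = np.where(mask > 0, plane, avg)
    return out
```

On a scene of constant color this reconstruction is exact; the fringe and zipper artifacts shown in Figure 1.3 appear precisely where neighboring samples straddle an edge.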

1.3 Outline

In Chapter 2, a review of color fundamentals is presented, which forms a basis for the rest of the dissertation. Chapter 3 describes a generic camera imaging model. Chapter 4 introduces the available demosaicking methods and compares their performance on test images, highlighting their advantages and disadvantages. Chapter 5 reviews generic restoration techniques in image processing, with concentration on the Mean Field Annealing (MFA) process. In Chapter 6, the suggested restoration method is introduced, describing the fundamentals of vector-based MFA. The experimental approach and the caveats involved are described in Chapter 7. The results obtained are given in Chapter 8. Chapter 9 briefly describes possible improvements to existing demosaicking algorithms. Chapter 10 concludes the dissertation with its results and findings.

Chapter 2

Color Fundamentals

2.1 Introduction

The understanding of color lays its foundations on the experiments conducted by Sir Isaac Newton in 1666 [36], [42]. Until then, opinions on the nature of colors and their interactions were not well defined. Newton made a small hole in an otherwise dark room; through this hole, the direct rays of the sun could shine and form an image of the sun's disc on the opposite wall of the room, much like a pin-hole camera. Then, placing a glass prism close to the hole, he observed that the light spread out into what he called a spectrum, colored red, orange, yellow, green, blue, indigo, and violet along its length. The natural conclusion was that white light was not simply a homogeneous entity, but was composed of a mixture of the colors of the spectrum. In Figure 2.1 the different color bands are shown against a scale of wavelength of light. It must be noted that color names and wavelength boundaries are only to be

used as a rough guide; each color gradually changes into the next, and there are no definite boundaries. Also to be noted is that the appearance of a color depends upon viewing conditions (foreground color, background color, lighting, etc.) and varies from one viewer to another.

Figure 2.1 Color components of light, based on wavelength in nanometers. Note: names do not define wavelength regions, but are for the main regions of the spectrum. The visible region of the spectrum is from about 400 to 700 nm.

2.2 Retinal Receptors of the Human Eye

In general, the retina has two kinds of receptors, rods and cones. The primary function of the rods is to provide monochromatic vision under low illumination levels (usually about a few hundredths of a candela per square meter).

The rods have a photosensitive pigment called rhodopsin, which absorbs light most strongly in the blue-green region of the spectrum. As a result, the spectral sensitivity of the rods is as shown in Figure 2.2. This part of human vision is referred to as scotopic vision.

Figure 2.2 The scotopic and photopic sensitivities of the human eye. Notice the region where the sensitivity is highest in each case. Data obtained from a web site maintained by Dr. H. J. Trussell.

The function of the cones in the retina is to provide color vision at normal levels of illumination (usually several candela per square meter). This results in photopic vision. The cones are primarily of three types, named β, γ and ρ, based on their independent sensitivities to different wavelengths of light.

Figure 2.3 Color sensitivity of the human eye.

As can be seen by comparing Figure 2.3 and Figure 2.1, the β, γ and ρ cones correspond to sensitivities in approximately the blue-violet, green and yellow-orange regions of the spectrum, respectively. It is interesting to note the substantial spectral overlap of the γ and ρ cones.

2.3 Color Models

The essential principles of three-color measurement were first introduced and presented as axioms by Grassman as long ago as 1853; their acceptance has made trichromatic stimuli axiomatic [42]. Color-measuring devices have sensors whose spectral responses are spread over the spectrum. Each spectral response function has a certain dominant wavelength, the one at which the function peaks. The whole spectrum can be reconstructed from these spectral response functions by simple linear or non-linear combinations. Let us look at a few of the different color models used to try and best represent the color spectrum.

Color Matching Functions

Color vision is basically a function of three variables, the three different types of cones. Although the rods provide a fourth spectrally different receptor, there is overwhelming evidence [15] that at some later stage of the visual system the number of variables is reduced to three. Hence, it is expected that the evaluation of perceived color from spectral power data should require the use of three different spectral weighting functions. At levels of illumination that are high enough for color vision to operate properly, there is evidence that the output of the rods is rendered ineffective. At levels where both rods and cones are operating together, color vision must be based on four different spectral sensitivities; but at these levels color discrimination is not very good.

RGB Color Model

Two separate experimental arrangements were used: by Guild [28] at the National Physical Laboratory at Teddington, where a tungsten lamp and colored filters were

used, and by W. D. Wright [53], who used three light sources along with test color patches in a set-up as shown in Figure 2.4. Their results were normalized to three monochromatic light sources (red, 700 nm; green, 546.1 nm; blue, 435.8 nm).

Figure 2.4 Principle of trichromatic color matching by additive mixing of lights. R, G, B are sources of red, green and blue light whose intensities can be adjusted. C is the target color the observer needs to match by changing the intensities of R, G, B.

This choice of green and blue was made because the light source being used, a mercury discharge lamp, has two prominent lines at the green and blue frequencies; red was chosen to make it spectrally distinct and also to reduce errors in

wavelength calibration, as hue changes slowly with wavelength near red [32]. The subject would match a given color patch on the target by combining the three wavelengths, and it was observed that the visible color spectrum could be mapped using the weighting functions r(λ), g(λ), b(λ) shown in Figure 2.5. An interesting point to note is the negative values the red weighting function takes. This is not because there is negative light; it is only a convenient way to represent the experimental fact that one of the matching stimuli, in this case red, had to be added to the color being matched instead of to the mixture.

Figure 2.5 The color matching functions of the CIE RGB standard, represented in terms of the stimulus provided by the three wavelengths mentioned in the text.

These colors are called additive colors because they can be added to produce different colors; meaning, if one power unit of light of wavelength λ1 is matched by r1 of R + g1 of G + b1 of B, and one power unit of light of wavelength λ2 is matched by

r2 of R + g2 of G + b2 of B, then the additive mixture of the two lights λ1 and λ2 is matched by (r1 + r2) of R + (g1 + g2) of G + (b1 + b2) of B. This means that the color-matching functions of Figure 2.5 can be used as weighting functions to determine the amounts of R, G, B needed to match any color, if the amount of power per small-constant-width wavelength interval is known for that color throughout the spectrum. In the RGB color model described above, the amounts of R, G and B are referred to as tristimulus values. The RGB color model requires tristimulus values that can be negative. This is not very desirable, as a display that uses, say, electron beams striking phosphors would not be able to reproduce this negative input to match the complete color gamut. In general, a display system has stimuli which can vary from 0 to 255. The RGB color model that is used in color displays thus

uses tristimulus values in the range [0, 255]. This model is represented as the color cube shown in Figure 2.6.

Figure 2.6 The RGB Color Cube

HSV Color Model

Color, as perceived by the human eye, has three essential characteristics: hue, saturation and intensity. Hue represents the dominant color as perceived by an observer; when we call an object red, orange or yellow, we are specifying its hue. Saturation

refers to the relative purity, or the amount of white light mixed with a hue; the degree of saturation is inversely proportional to the amount of white light added. Intensity embodies the achromatic notion of brightness (luminance) and is one of the key factors in describing color sensation; it is a subjective descriptor of the amount of energy an observer perceives from a light source. This model was initially designed for artists to input and mix their painting colors in a computer drawing program [16], [43]. The HSV color model is represented as a hexcone, as shown in Figure 2.7. The hexcone is of unit height. Hue is measured as the angle from the red axis. When S is zero, the value of H is irrelevant, as the color is a shade of gray (achromatic). Any color with V=1 and S=1 is akin to an artist's pure color pigment used as the starting point for mixing colors; adding white pigment corresponds to decreasing S without changing V. The top of the HSV hexcone corresponds to the projection seen by looking along the principal diagonal of the RGB cube from white toward black. Subcubes of the RGB cube correspond to different slices along the V axis of the hexcone. This gives an intuitive correspondence between the RGB and HSV color models. RGB to HSV conversion algorithms are included in Appendix B.
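Python's standard colorsys module implements this same hexcone model (with H, S and V each scaled to the range [0, 1]), which makes the properties above easy to verify:

```python
import colorsys

# A fully saturated pure red sits at hue angle 0 with V = 1.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h * 360.0, s, v)   # 0.0 1.0 1.0

# A mid-gray: S = 0, so the hue value carries no information.
h, s, v = colorsys.rgb_to_hsv(0.5, 0.5, 0.5)
print(h, s, v)           # 0.0 0.0 0.5

# Round trip back to RGB: hue 240 degrees (2/3 of a turn) is pure blue.
print(colorsys.hsv_to_rgb(2.0 / 3.0, 1.0, 1.0))   # (0.0, 0.0, 1.0)
```

Multiplying the returned hue by 360 recovers the angular convention used in Figure 2.7.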

Figure 2.7 The HSV Color hexcone

CIE-XYZ Color Model

To mitigate the RGB color model's problem of negative stimuli, the CIE (Commission Internationale de l'Eclairage), the body responsible for international recommendations on photometry and colorimetry, devised a new color model using a transformation of the RGB model such that the tristimulus values are never negative.

This transformed color space is called the CIE-XYZ color space. The transformation is given by the following equations:

    X = 2.7690 R + 1.7518 G + 1.1300 B
    Y = 1.0000 R + 4.5907 G + 0.0601 B        (2.1)
    Z = 0.0000 R + 0.0565 G + 5.5943 B

The numbers in this set of equations were carefully chosen so that X, Y, Z would always be positive for all colors, as is seen in Figure 2.8. Another advantage of this color model is that the Y component represents the luminance of a color. The color matching curves we had for the RGB color model transform to the XYZ color model by Equation 2.1.

Figure 2.8 The CIE-XYZ color matching curves.

Notice, in Figure 2.8, that the y(λ) curve spans most of the spectrum. Compared with Figure 2.2, the spectral response of photopic vision is very close to that of y(λ). Three new (normalized) stimuli x, y, z replace the existing X, Y, Z values, where

    x = X / (X + Y + Z)        (2.2)

    y = Y / (X + Y + Z)        (2.3)

    z = Z / (X + Y + Z)        (2.4)

A plot of the x, y space, called the chromaticity diagram, is shown in Figure 2.9. Colors in the XYZ model are represented as ordered triplets (x, y, Y), where x, y record the chromaticity values while Y gives the luminance.
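Equations 2.1 through 2.4 are straightforward to compute. The sketch below assumes the standard CIE 1931 RGB-to-XYZ coefficients for Equation 2.1 and returns the (x, y, Y) triplet:

```python
import numpy as np

# Standard CIE 1931 RGB -> XYZ coefficients (Equation 2.1).
M = np.array([[2.7690, 1.7518, 1.1300],
              [1.0000, 4.5907, 0.0601],
              [0.0000, 0.0565, 5.5943]])

def rgb_to_xyY(r, g, b):
    """Transform CIE RGB tristimulus values to the (x, y, Y) triplet of
    Equations 2.1-2.4: chromaticity coordinates plus luminance."""
    X, Y, Z = M @ np.array([r, g, b], dtype=float)
    total = X + Y + Z
    return X / total, Y / total, Y
```

Each row of M sums to the same value, which is what places an equal-energy stimulus (R = G = B) at chromaticity x = y = 1/3.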

Figure 2.9 The chromaticity diagram viewed in the CIE XYZ model. Notice that the hues are not evenly spaced about the perimeter of this horse-shoe plot.

In Figure 2.9, the point E, where x = y = z = 1/3, is the equi-energy stimulus point.

Color Differences

In the RGB color model, one could choose three colors a, b, c in the RGB cube that are set apart such that ΔE_RGB(a, b) = ΔE_RGB(b, c), i.e. the colors have the same Euclidean distance between them, where

    ΔE_RGB = [(R1 − R2)² + (G1 − G2)² + (B1 − B2)²]^(1/2)        (2.5)

It is observed, however, that this measure is not uniform (constant) over the color space. For instance, colors with the same ΔE_RGB may be perceived as having no resemblance at all, or, on the other hand, as being very similar. Some of the first controlled experiments on similar colors were conducted by MacAdam [36]. Observers were asked to match two color patches in an experiment similar to the one described in Figure 2.4, and the error in matching yielded sensitivity ellipses in the chromaticity diagram, as shown in Figure 2.10. The ellipses represent differences in chromaticity that are just noticeable.
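Equation 2.5 is simply the Euclidean norm of the channel differences; a one-line helper suffices:

```python
import math

def delta_e_rgb(c1, c2):
    """Euclidean RGB distance of Equation 2.5 between two (R, G, B) triplets."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))
```

For example, delta_e_rgb((0, 0, 0), (3, 4, 0)) evaluates to 5.0; yet, as noted above, equal values of this metric do not guarantee equal perceived differences.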

Figure 2.10 The result of pioneering work done by MacAdam, resulting in ellipses that mark just noticeably different colors on the chromaticity diagram. The ellipses are scaled up in size to show their orientation.

As was the case with the RGB model, the CIE XYZ model also suffers from the same problem of perceptual imbalance. This is illustrated in Figure 2.11.

Figure 2.11 Subjectively equal color steps on the chromaticity diagram. Each line segment represents a color difference about three times greater than the just noticeable difference for a 2° field.

Notice, in Figure 2.11, the line segments are not all of equal length. In the region near the greens, the line segments are longer than those near the reds and purples, implying non-uniformity (non-constancy) in color differences. Equal length line

segments throughout the chromaticity diagram would imply uniformity in the color differences (hence, perceptual balance), which is desired in a color model.

CIE-L*a*b* Color Model

From the earlier sections, it is concluded that there is a need for a perceptually balanced color model that the human visual system can relate to. Two of the color models suggested by the CIE which are perceptually balanced and uniform are the CIE-L*u*v* and the CIE-L*a*b* color models. The L*u*v* model is based on the work by MacAdam on the Just Noticeable Differences [36] in color. These color models are transformations of the XYZ color model.

L* = 116 (Y/Yn)^(1/3) − 16 for Y/Yn > 0.008856
L* = 903.3 (Y/Yn) for Y/Yn ≤ 0.008856 (2.6)

a* = 500 [(X/Xn)^(1/3) − (Y/Yn)^(1/3)] (2.7)

b* = 200 [(Y/Yn)^(1/3) − (Z/Zn)^(1/3)], (2.8)

where Xn, Yn, Zn are the values of X, Y, Z for the appropriately chosen reference white; and where, if any of the ratios X/Xn, Y/Yn, Z/Zn is equal to or less than 0.008856, its cube root is replaced in the above formulas by 7.787F + 16/116, where F is X/Xn, Y/Yn or Z/Zn, as the case may be.

The color differences in the L*a*b* color model are given by

ΔE*ab = [(ΔL*)² + (Δa*)² + (Δb*)²]^(1/2) (2.9)
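Equations 2.6-2.9 can be combined into a small converter; a sketch assuming linear XYZ input and a caller-supplied reference white (the function names and the D65 white used in the example are ours):

```python
def xyz_to_lab(X, Y, Z, Xn, Yn, Zn):
    """CIE-L*a*b* from XYZ (Equations 2.6-2.8). Ratios at or below
    0.008856 use the 7.787*F + 16/116 substitute for the cube root."""
    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

def delta_e_ab(lab1, lab2):
    """Equation 2.9; about 2.3 is roughly one just noticeable difference."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

# The reference white itself maps to L* = 100, a* = b* = 0
# (here the CIE D65 white point is used for illustration).
white = xyz_to_lab(95.047, 100.0, 108.883, 95.047, 100.0, 108.883)
```

Note that the linear branch of Equation 2.6 is what the 7.787F + 16/116 substitution reduces to: 116(7.787t + 16/116) − 16 = 903.3t.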

In general, ΔE*ab differences of about 2.3 or greater are noticeable [37], and those over 10 are so different that comparison is not worthwhile. In the rest of this dissertation, ΔE*ab differences shall be used in conjunction with mean square error, primarily because it is the ΔE*ab errors that the human eye can detect - perceptual errors. The mean square error shall be used alongside it as it is the mathematical measure that most methods use as a cost function.

ISO RGB Color Model

The spectral responses of the color analysis channels of digital still cameras (DSCs) do not, in general, match those of a typical human observer [52], defined by the CIE standard colorimetric observer. Neither do the responses of different DSCs necessarily match each other. There is hence a need to characterize these DSCs with the help of color matching functions and illuminants and map them onto a reference color space.

The International Organization for Standardization (ISO), along with experts in the field of digital cameras, is working on a standard for characterizing digital cameras using color targets and spectral illumination. The process of transforming from the color space of the camera to a uniform color space has been addressed [42], [49]. The ISO RGB represents an estimate of the scene or original colorimetry. As such, it maintains the relative luminance ratio and the color gamut of the scene or original. Figure 2.12 shows the ISO RGB color matching functions.

Figure 2.12 ISO RGB Color Matching Functions (red, green and blue sensitivities vs. wavelength in nm)

These color matching functions have been derived using the spectral sensitivities of the sensors in digital color cameras, unlike the RGB color matching functions shown in Figure 2.5, which were developed for the human eye's spectral sensitivities.

DSCs use non-linear, usually logarithmic, amplifiers in the conversion from light intensity to voltage output. These values need to be converted into a linear space before they can be transformed into the XYZ (uniform) color space. The transformation from the logarithmic camera space to the linear space is done by the use of the Opto-Electronic Conversion Function (OECF). The OECF for a typical digital still color camera [32] is shown in Figure 2.13, where the horizontal axis denotes the data output of the camera (from the sensors in the camera, in the range [0, 255] and varying in a non-linear fashion) and the vertical axis corresponds to the linear domain of the captured data.

Figure 2.13 OECF for a typical DSC [52]

For a D65 illuminant, the transform is given by

[X Y Z]^T = M [R_Lin G_Lin B_Lin]^T (2.10)

where M is the 3×3 colorimetric transformation matrix and R_Lin, G_Lin, B_Lin are the R, G, B values after transforming by the OECF.

Once in a linear space, the data can be modified or transformed into the color gamut of the device being used.

sRGB Color Model (for displays)

To display colors in the desired colorimetric settings, there is a tendency to display images using perceptually balanced color models like the CIE-L*a*b* or the CIE-L*u*v* color models. This is, however, not practical due to the complexity of the transformations required. This standard, like the ISO RGB standard, is a working standard. It attempts to standardize the color gamut available to commercial PC- and web-based color imaging systems, aiding in precise reproduction of images on different displays. The three major factors of the sRGB space are the colorimetric RGB definition, the equivalent gamma value (the display's tonal transfer function, see Appendix A for details) of 2.2 and well-defined viewing conditions, along with a number of secondary details.

The transformation matrix for a gamma value of 2.2 (standard) and a D50 illuminant is given in Equation 2.11.

[R_sRGB G_sRGB B_sRGB]^T = M [X Y Z]^T (2.11)

where M is the 3×3 transformation matrix and R_sRGB, G_sRGB, B_sRGB are the R, G, B values in the sRGB color space. The reader is referred to [32] for further details.

2.4 Caveat (Interpolating in Color Space)

The result of interpolation depends on the color space in which the interpolation is being performed. If the conversion from one color model to another is linear (can be represented as a matrix multiplication, like RGB, YIQ), then the results of linear interpolation in both models will be the same [16]. However, in the case of non-linear transformations (HSV, CIE-L*a*b*, CIE-L*u*v*), the results are not the same as those produced in, say, the RGB space. This is illustrated in Figure 2.14, where the pixel in the center needs to be estimated from its neighbors,

red (255,0,0) and green (0,255,0). Equally weighted linear interpolation in the RGB space results in (128,128,0) - brown. On the other hand, estimating the center pixel from red (0°,1,1) and green (120°,1,1) in the HSV model results in (60°,1,1), which, in RGB space, is (255,255,0) - yellow. This needs to be borne in mind while performing operations in linear and non-linear spaces. If the objective is to maintain fixed hue (or saturation) between colors during interpolation, the HSV model may be preferable [16].

Figure 2.14 Illustration of results of interpolating in different color spaces (a) original image (b) RGB interpolation (c) HSV interpolation
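The red/green example above can be reproduced with the standard-library colorsys module (note that its hue runs over [0, 1) rather than degrees, so 120° is 1/3):

```python
import colorsys

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)

# RGB space: the component-wise midpoint is (0.5, 0.5, 0.0) -- brown.
mid_rgb = tuple((a + b) / 2 for a, b in zip(red, green))

# HSV space: hue 0 (red) and hue 1/3 (120 degrees, green) average to
# 1/6 (60 degrees), giving yellow, ~(1.0, 1.0, 0.0), back in RGB.
h1, s1, v1 = colorsys.rgb_to_hsv(*red)
h2, s2, v2 = colorsys.rgb_to_hsv(*green)
mid_hsv = colorsys.hsv_to_rgb((h1 + h2) / 2, (s1 + s2) / 2, (v1 + v2) / 2)
```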

Chapter 3

Image Model

3.1 Introduction

Image models provide a backbone on which algorithms can be designed and implemented. In this chapter we look at the image formation model used in digital still color cameras (DSCs).

3.2 Linear Model

We shall consider the image formation model to be linear in nature [50]. The image formation model in a camera system is based on a continuous-discrete model [41], where the input data (scene) is continuous while the capture device (CCD) is discrete. Figure 3.1 shows such a model.

Figure 3.1 Image formation model in a digital camera system (scene f → camera system h → CCD array g)

The input/output relationship for this system is given by

g_λi(x, y) = ∫_{λl}^{λu} ∫∫ f(ξ, η, λ) h(x − ξ, y − η, λ) L_λi(λ) dξ dη dλ + ε_λi(x, y) (3.1)

where g_λi, a band-limited image obtained from the CCD for the i-th sensor type, is discrete (with uniformly spaced samples); f, the scene, is continuous; and h, the point-spread function (PSF), is continuous. L_λi is the spectral sensitivity of the sensor, non-zero between λl and λu. (x, y) is a discrete space while (ξ, η) is a continuous space. ε_λi is signal-independent additive noise. Details

about the blur kernel and the noise are discussed in later sections. The form of Equation 3.1 assumes that the PSF is space invariant. Sampling this equation (without noise considerations) for a discrete approximation [41], we get

g(x, y) = Σ_{k=0}^{M−1} Σ_{l=0}^{N−1} f(k, l) h(x − k, y − l) (3.2)

Two-dimensional image formation models have long used the stacked notation to represent the mathematics involved, especially in image restoration problems [6]. This is also referred to as a lexicographic notation. Consider a gray-scale image F (only one color channel, say red) of size N×N pixels. In other words,

F = [f1 f2 ... fN] (3.3)

where the fi are column vectors of length N each. The resulting lexicographic representation of F is given by

f = [f1^T f2^T ... fN^T]^T (3.4)

where the superscript T corresponds to a transpose operation. f is now an N²×1 column vector. Generating the blur kernel H as an N²×N² block Toeplitz matrix [41], we can represent the blurred image formation model as a matrix multiplication:

g = Hf. (3.5)

Introducing signal-independent additive noise to the system, we can represent the formed image g as

g = Hf + ε (3.6)

where ε is the additive noise of the imaging system. This image g is then sampled with the CFA mask in the color channel under consideration. The advantage of using this notation is its ease of use in written form. Also, when the expected values of such terms and their products are transformed into the Fourier domain (especially in restoration processes), the result is easier to handle than might be expected [41].
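The equivalence of Equations 3.2 and 3.5 can be checked numerically on a tiny image: build H from a 3×3 PSF, multiply it by the lexicographic f, and compare with the direct double sum. A pure-Python sketch (zero padding at the borders is our choice, and the helper names are illustrative):

```python
def lex(F):
    """Column-stack an N x N image into an N^2-vector (Equation 3.4)."""
    N = len(F)
    return [F[r][c] for c in range(N) for r in range(N)]

def conv2d(f, h):
    """Direct discrete model of Equation 3.2 with a centered 3x3 PSF h."""
    N = len(f)
    g = [[0.0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            for k in range(max(0, r - 1), min(N, r + 2)):
                for l in range(max(0, c - 1), min(N, c + 2)):
                    g[r][c] += f[k][l] * h[r - k + 1][c - l + 1]
    return g

def blur_matrix(h, N):
    """Assemble the N^2 x N^2 block-Toeplitz H of Equation 3.5."""
    H = [[0.0] * (N * N) for _ in range(N * N)]
    for r in range(N):
        for c in range(N):
            for k in range(max(0, r - 1), min(N, r + 2)):
                for l in range(max(0, c - 1), min(N, c + 2)):
                    H[c * N + r][l * N + k] = h[r - k + 1][c - l + 1]
    return H

def matvec(H, f):
    return [sum(hij * fj for hij, fj in zip(row, f)) for row in H]

# A 3x3 averaging PSF applied both ways gives the same blurred image.
f = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
h = [[1.0 / 9.0] * 3 for _ in range(3)]
g_direct = lex(conv2d(f, h))
g_matrix = matvec(blur_matrix(h, 3), lex(f))
```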

3.3 a-priori imaging constraints

The assumption of non-negativity must be maintained throughout our models, since the optical energies inherent in imaging are always non-negative quantities. Both the measurement and the true image being non-negative imply that the PSF must also be non-negative:

f(ξ, η) ≥ 0 (3.7)
g(x, y) ≥ 0 (3.8)
h(ξ, η) ≥ 0 (3.9)

An important assumption that needs to be made is that we have a lossless system, meaning that the energy in the object is preserved in the image; i.e. the lens and other parts of the imaging system do not absorb or generate optical energy. In the continuous-discrete case [6],

∫∫ f(ξ, η) dξ dη = Σ_{x=0}^{N−1} Σ_{y=0}^{M−1} g(x, y) (3.10)

which implies

∫∫ h(ξ, η) dξ dη = 1 (3.11)

Blur

In this dissertation, we model PSFs introduced by the optics in the system (out-of-focus blurs). An out-of-focus blur for a circular aperture is represented by an airy disk [33]. The airy disk has a PSF which is given by

h(ξ, η) = 1/(πR²) for (ξ² + η²) ≤ R²
h(ξ, η) = 0, else (3.12)

However, owing to its ease of implementation and the fact that small out-of-focus blurs may be modeled by a Gaussian PSF, we will model the out-of-focus blur using a Gaussian PSF, given by

h(ξ, η) = (1/(2πσ²)) exp(−[(ξ − ξ̄)² + (η − η̄)²]/(2σ²)) (3.13)

where σ² is the blur variance and ξ̄, η̄ are the mean values describing the position of the blur. In our case, however, the blur will be centered about the origin, making the mean values zero.
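A discrete, zero-mean version of the Gaussian PSF of Equation 3.13 can be sketched directly, renormalized so its taps sum to one as the lossless-system constraint of Equation 3.11 requires (the kernel size is a free choice here):

```python
import math

def gaussian_psf(size, sigma):
    """size x size zero-mean Gaussian blur kernel (Equation 3.13),
    renormalized to unit sum (lossless-system constraint, Eq. 3.11)."""
    c = size // 2
    h = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2.0 * sigma ** 2))
          for j in range(size)] for i in range(size)]
    s = sum(sum(row) for row in h)
    return [[v / s for v in row] for row in h]

psf = gaussian_psf(5, 1.0)
```

Renormalizing over the finite window (rather than using the continuous 1/(2πσ²) factor) keeps the discrete kernel energy-preserving regardless of how much of the Gaussian tail the window truncates.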

Noise

The noise introduced in the system is modeled as additive zero-mean white Gaussian noise for each channel. This is an approximation to the noise process, which is in fact a Poisson process. For the purpose of this document, we approximate the noise process as a signal-independent non-negative Gaussian process. For a small signal variance, this assumption is widely accepted. The noise samples have a probability distribution given by

p(ς) = (1/(√(2π) σ)) exp(−ς²/(2σ²)) (3.14)

To maintain the non-negativity requirement of Equation 3.8, the pixel values in the image need to be clipped.

Chapter 4

Common demosaicking methods in Bayer Arrays

4.1 Introduction

Commercially available digital still color cameras (DSCs), as mentioned in Section 1.1, are based on a single CCD array and capture color information by using three or more types of color filters, each pixel capturing only one sample of the color spectrum. This mosaic needs to be populated with information from all the color planes in order to obtain a full resolution true color image. This process is referred to as demosaicking. To reconstruct the complete color image, interpolation must be performed on the image data. There are a variety of methods available, the simplest being linear interpolation, which, as shall be shown, does not maintain edge information well. More complicated methods [14], [17], [24], [22], [34], [35] perform this interpolation while attempting to maintain edge detail or limit hue transitions.

In this chapter, we shall assume that the images are not corrupted by noise. Also, we shall assume that the images do not have a PSF associated with them (or rather, that the PSF is an ideal impulse). In Chapter 7 and Chapter 8, we will consider the effects of out-of-focus blurs and noise.

4.2 Ideal Interpolation

Following sampling theory, sampling a continuous image f(x, y) yields infinite repetitions of its continuous spectrum in the Fourier domain. If, and only if, these repetitions do not overlap, the original image f(x, y) can be reconstructed perfectly from its discrete samples f(m, n). In 1-D, ideal interpolation is multiplication with a rect function in the frequency domain, realized in the spatial domain by convolution with the sinc function. The ideal interpolator kernel is band-limited and hence not space-limited; this interpolator is therefore not feasible to implement in practice.

4.3 Bilinear Interpolation

Consider the array of pixels as shown in Figure 4.1.

R11 G12 R13 G14 R15 G16 R17
G21 B22 G23 B24 G25 B26 G27
R31 G32 R33 G34 R35 G36 R37
G41 B42 G43 B44 G45 B46 G47
R51 G52 R53 G54 R55 G56 R57
G61 B62 G63 B64 G65 B66 G67
R71 G72 R73 G74 R75 G76 R77

Figure 4.1 Sample Bayer Pattern

At a blue center, we need to estimate the green and red components. Consider pixel 44, at which only B44 is measured; we need to determine G44. Given G34, G43, G45, G54, one estimate for G44 is given by

G44 = (G34 + G43 + G45 + G54) / 4 (4.1)

To determine R44, given R33, R35, R53, R55, the estimate for R44 is given by

R44 = (R33 + R35 + R53 + R55) / 4 (4.2)

and at a red center, we would estimate the blue and green accordingly. Performing this process at each photo-site (location on the CCD), we can obtain three color

planes for the scene, which gives us one possible demosaicked form of the scene. This type of interpolation is a low-pass filtering process. The band-limiting nature of this interpolator smooths edges, which show up in color images as fringes (referred to as the zipper effect [1], [2]). This is illustrated with two color channels (for simplicity) in Figure 4.2.

Figure 4.2 Illustration of fringe or zipper effect resulting from the linear interpolation process (a) original image (only 2 colors) (b) subsampled Bayer image (c) result of linear interpolation. Notice the color fringe in locations 5 and 6.
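Equations 4.1 and 4.2 at an interior blue site, as a minimal sketch (no border handling; `bayer` holds the raw single-channel mosaic, and the example values are ours):

```python
def bilinear_at_blue(bayer, i, j):
    """Estimate the missing green and red at a blue photo-site (i, j)
    of a Bayer mosaic (Equations 4.1 and 4.2): green from the four
    axial neighbors, red from the four diagonal neighbors."""
    g = (bayer[i - 1][j] + bayer[i][j - 1] +
         bayer[i][j + 1] + bayer[i + 1][j]) / 4.0
    r = (bayer[i - 1][j - 1] + bayer[i - 1][j + 1] +
         bayer[i + 1][j - 1] + bayer[i + 1][j + 1]) / 4.0
    return g, r

# A 3x3 patch centered on a blue site: R G R / G B G / R G R.
patch = [[1, 10, 2],
         [20, 99, 30],
         [3, 40, 4]]
g44, r44 = bilinear_at_blue(patch, 1, 1)
```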

4.4 Constant Hue-based Interpolation

This method, proposed by Cok [14], is one of the first methods used in commercial camera systems, and modifications of it are still in use. The key objection to the pixel artifacts that result from bilinear interpolation is abrupt and unnatural hue change. There is a need to maintain the hue of the color such that there are no sudden jumps in hue (except over edges, say). The chrominance channels correspond to the red and blue channels, while the luminance channel is the green channel. As used herein, the term hue refers to a quantity relating the value of a color sampled at lower resolution to the value of a color sampled at higher resolution. In other words, the hue is defined by the ratios R/G and B/G [14]. It is to be noted that the term hue defined above is valid for this method only; also, the hue needs to be redefined if the denominator G is zero. By interpolating the hue value and deriving the interpolated chrominance values (blue and red) from the interpolated hue values, hues are allowed to change only gradually, thereby reducing the appearance of the color fringes which would have been obtained by interpolating only the chrominance values.

Consider an image with constant hue. The values of the luminance (G) and one chrominance component (R, say) at one location (Rij, Gij) and another sample location (Rkl, Gkl) are related as

Rij / Rkl = Gij / Gkl and Bij / Bkl = Gij / Gkl (4.3)

in exposure space (be it logarithmic¹ or linear). If Rkl represents the unknown chrominance value, Rij and Gij represent measured values, and Gkl represents the interpolated luminance value, the missing chrominance value Rkl is given by

Rkl = Gkl (Rij / Gij) (4.4)

In an image that does not have uniform hue, as in a normal color image, smoothly changing hues are assured by interpolating the hue values between neighboring chrominance values.

1. Most cameras capture data in a logarithmic exposure space, which needs to be linearized before the ratios are used as such. If interpolating in the logarithmic exposure space, differences of logarithms need to be taken instead of ratios; i.e. log(Rij / Rkl) = log(Rij) − log(Rkl).

The green channel, however, is interpolated first using bilinear interpolation, to populate the green channel's checkerboard pattern. After this first pass, the hue is interpolated. In other words, referring to Figure 4.1,

R44 = (G44 / 4) (R33/G33 + R35/G35 + R53/G53 + R55/G55) (4.5)

and similarly for the blue channel

B33 = (G33 / 4) (B22/G22 + B24/G24 + B42/G42 + B44/G44) (4.6)

In Equation 4.5 and Equation 4.6, the G values in bold-face are estimated values, after the first pass of interpolation. The extension to the logarithmic exposure space is straightforward, as multiplications become additions and divisions become subtractions in the logarithmic space. There is a caveat, however, as interpolations performed in the logarithmic space and in linear space do not yield identical results [14]. Hence, in most implementations, the data is first linearized and then interpolated as mentioned above.
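Equation 4.5 as code; `R` and `G` are 2-D arrays in which the green plane has already been filled and the four diagonal neighbors carry measured red (a sketch assuming linearized, non-zero green values; the example numbers are ours):

```python
def cok_red_at_blue(R, G, i, j):
    """Constant-hue red estimate at a blue site (Equation 4.5):
    average the hue R/G over the four diagonal neighbors, then
    scale by the interpolated green at the site."""
    hue = sum(R[i + di][j + dj] / G[i + di][j + dj]
              for di in (-1, 1) for dj in (-1, 1)) / 4.0
    return G[i][j] * hue

# Constant hue R/G = 2 in the neighborhood -> the estimate is 2 * G44.
R = [[80, 0, 100], [0, 0, 0], [120, 0, 140]]
G = [[40, 0, 50], [0, 50, 0], [60, 0, 70]]
r44 = cok_red_at_blue(R, G, 1, 1)
```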

4.5 Neighborhood considerations

We could possibly get better estimates for the missing pixels by increasing the neighborhood of the pixel, but this increase is expensive in terms of memory, which is not desirable. There is hence a need to keep the interpolation filter kernel space-limited to a small size and also to extract as much information from the neighborhood as possible. The correlation between color signals is often used [1]. For RGB images, the cross-correlation between channels has been determined and found to vary between 0.25 and 0.99, with averages of 0.86 for red/green, 0.79 for red/blue and 0.92 for green/blue cross-correlations [46]. One well-known image model is to simply assume that red and blue are perfectly correlated with the green over a small neighborhood. This image model is stated as

Gij = Rij + k (4.7)

where i, j refers to the pixel location, R (known) and G (unknown) are the red and green pixel values, and k is the appropriate bias for the given pixel neighborhood. The same applies at a blue pixel location. Let us illustrate Equation 4.7 with an example, by considering the green channel of an image and the corresponding Green minus Red and Green minus Blue channels. In Figure 4.3, we can see that the majority

of the regions in the Green minus Red and Green minus Blue images are uniform, especially in regions where there is high spatial detail (near the eyes of the macaws).

Figure 4.3 (a) RGB image (b) Green channel (c) Green minus Red (d) Green minus Blue

4.6 Median-based Interpolation

This method, proposed by Freeman [17], is a two-pass process, the first pass being a linear interpolation and the second pass a median filter of the color differences. In the first pass, linear interpolation is used to populate each photo-site with all three colors, and in the second pass, the difference images, say Red minus Green and Blue minus Green, are median filtered. The median-filtered result is then used in conjunction with the original Bayer pattern samples to recover the missing samples. This method preserves edges very well, as illustrated in Figure 4.4. Only one row of the Bayer array is considered, since this process can be extrapolated to the case of the rows containing blue and green pixels. In Figure 4.4, (a) shows the original image before Bayer subsampling (b); (c) shows the result of linear interpolation. Notice the color fringes introduced between pixels 5 and 6; (d) shows the absolute-valued difference image between the two channels; (e) shows the result of median filtering the difference image with a kernel of size 5. Using the result obtained from (e) and the original Bayer image (b), (f) is generated to produce a reconstructed version of the original scene image. The reconstruction of the edge is exact.
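The two-pass procedure can be reproduced on a toy two-channel row; a sketch under the assumption that red and green differ by a constant offset across the edge, which is exactly the case in which the method recovers the edge perfectly (the signal values are ours):

```python
from statistics import median

# Two-channel 1-D signal with a step edge (cf. Figure 4.4):
# true green and red differ by a constant offset of 20.
G_true = [0, 0, 0, 0, 0, 80, 80, 80, 80, 80]
R_true = [g + 20 for g in G_true]

# Bayer-style subsampling: R measured at even sites, G at odd sites.
n = len(G_true)
R = [R_true[i] if i % 2 == 0 else None for i in range(n)]
G = [G_true[i] if i % 2 == 1 else None for i in range(n)]

# Pass 1: linear interpolation of the missing samples.
def fill(ch):
    out = list(ch)
    for i, v in enumerate(ch):
        if v is None:
            nb = [ch[j] for j in (i - 1, i + 1)
                  if 0 <= j < n and ch[j] is not None]
            out[i] = sum(nb) / len(nb)
    return out

R1, G1 = fill(R), fill(G)

# Pass 2: median-filter the difference image with a size-5 window.
D = [r - g for r, g in zip(R1, G1)]
Df = [median(D[max(0, i - 2):i + 3]) for i in range(n)]

# Recombine with the measured Bayer samples: the edge is recovered.
R2 = [R[i] if R[i] is not None else G[i] + Df[i] for i in range(n)]
G2 = [G[i] if G[i] is not None else R[i] - Df[i] for i in range(n)]
```

The linear pass misestimates both channels at the step (as in panel (c) of Figure 4.4), but the difference image is constant except for a short transient, which the median filter removes.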

Figure 4.4 Illustration of Freeman's interpolation method for a two-channel system (a) original image (b) subsampled Bayer image (c) result of linear interpolation (d) Green minus Red (e) median filtered result of the difference image (f) reconstructed image

This concept can be carried over to three color sensors, wherein differences are calculated between pairs of colors and the median filter is applied to these differences to generate the final image. In other words, referring to Figure 4.1,

R34 = r̂g + G34 (4.8)

R44 = r̂b + B44 (4.9)

where

r̂g = median(Ri − Gi), i ∈ ℵG34 (4.10)
r̂b = median(Ri − Bi), i ∈ ℵB44 (4.11)

Similarly,

G33 = ĝr + R33 (4.12)
G44 = ĝb + B44 (4.13)

where

ĝr = median(Gi − Ri), i ∈ ℵR33 (4.14)
ĝb = median(Gi − Bi), i ∈ ℵB44 (4.15)

and

B33 = b̂r + R33 (4.16)
B34 = b̂g + G34 (4.17)

where

b̂r = median(Bi − Ri), i ∈ ℵR33 (4.18)
b̂g = median(Bi − Gi), i ∈ ℵG34 (4.19)

In the above equations, ℵ refers to the neighborhood of the pixel under consideration. The size of this neighborhood is variable. We shall consider neighborhoods of a size such that all the algorithms can be compared on the same basis. The algorithms described in this chapter use at most 9 pixels for estimation; in a square neighborhood, this implies a 3×3 window. We shall hence use a 3×3 neighborhood for Freeman's algorithm.

4.7 Gradient Based Interpolation

This method, proposed by Laroche and Prescott [35], is in use in the Kodak DCS 200 Digital Camera System. It employs a three-step process, the first step being the interpolation of the luminance channel (green) and the second and third being bilinear interpolation of the chrominance channels (red and blue). This method takes advantage of the fact that the human eye is most sensitive to luminance changes. Depending upon the position of an edge in the green channel, the interpolation is performed accordingly.

Referring back to Figure 4.1, if we need to estimate G44, let

α = abs((B42 + B46)/2 − B44) (4.20)
β = abs((B24 + B64)/2 − B44) (4.21)

α, β are estimates of the horizontal and vertical second derivatives, respectively, in blue. It is intriguing to note that the classifiers used are second derivatives with the sign inverted and halved in magnitude. Using these second derivatives as classifiers, we come up with the following estimates for the missing green pixel value:

G44 = (G43 + G45)/2 if (α < β) (4.22)
G44 = (G34 + G54)/2 if (α > β) (4.23)
G44 = (G43 + G45 + G34 + G54)/4 if α = β (4.24)

Similarly, for estimating G33, let

α = abs((R31 + R35)/2 − R33) (4.25)
β = abs((R13 + R53)/2 − R33) (4.26)

α, β are estimates of the horizontal and vertical second derivatives, respectively, in red. Using these gradients as classifiers, we come up with the following estimates for the missing green pixel value:

G33 = (G32 + G34)/2 if (α < β) (4.27)
G33 = (G23 + G43)/2 if (α > β) (4.28)
G33 = (G32 + G34 + G23 + G43)/4 if α = β (4.29)

Once the luminance is determined, the chrominance values are interpolated from the color differences between the color (red and blue) and luminance (green) signals. This is given as follows:

R34 = (R33 − G33 + R35 − G35)/2 + G34 (4.30)
R43 = (R33 − G33 + R53 − G53)/2 + G43 (4.31)
R44 = (R33 − G33 + R35 − G35 + R53 − G53 + R55 − G55)/4 + G44 (4.32)

Note, however, that the green channel has been completely populated before this step. The bold-face entries correspond to estimated values. We get corresponding formulae for the blue pixel locations. Interpolating color differences and adding

the green component has the advantage of maintaining color information and also using intensity information at pixel locations. At this point, three complete RGB planes are available for the image.

4.8 Adaptive Color Plane Interpolation

This method is proposed by Hamilton and Adams [22]. It is a modification of the method proposed by Laroche and Prescott [35]. The concept is similar in that second derivatives are used as estimates for the data, but here the second derivative is used in conjunction with the gradient estimate. Consider the Bayer array neighborhood as shown below in Figure 4.5.

A1 G2 A3 G4 A5 G6 A7 G8 A9

Figure 4.5 Sample Bayer Neighborhood; Ai = chrominance, Gi = luminance

This process also has three passes. The first pass populates the luminance (green) channel and the second and third passes populate the chrominance (red and blue) channels. Gi is a green pixel and Ai is either a red pixel or a blue pixel (all Ai pixels will be the same color for the entire neighborhood). We now form the following classifiers:

α = abs(−A3 + 2A5 − A7) + abs(G4 − G6) (4.33)
β = abs(−A1 + 2A5 − A9) + abs(G2 − G8) (4.34)

We use the term classifiers as these are used to classify a pixel as belonging to a vertical or a horizontal edge. These are not estimates of a derivative, as they are combinations of the first derivative and the second derivative. These classifiers are composed of second derivative terms for the chromaticity data and gradients for the luminance data. As such, they sense the high spatial frequency information in the pixel neighborhood in the horizontal and vertical directions.

Consider that we need to estimate the green value at the center, i.e. to estimate G5. Depending upon the preferred orientation, the interpolation estimates are determined:

G5 = (G4 + G6)/2 + (−A3 + 2A5 − A7)/4 if (α < β) (4.35)
G5 = (G2 + G8)/2 + (−A1 + 2A5 − A9)/4 if (α > β) (4.36)
G5 = (G2 + G4 + G6 + G8)/4 + (−A1 − A3 + 4A5 − A7 − A9)/8 if α = β (4.37)

These predictors are composed of arithmetic averages for the green data and appropriately scaled second derivative terms for the chromaticity data. This comprises the first pass of the interpolation algorithm. The second pass involves populating the chromaticity channels. Consider the neighborhood as shown below in Figure 4.6.

A1 G2 A3 G4 C5 G6 A7 G8 A9

Figure 4.6 Bayer Array Neighborhood

Gi is a green pixel, Ai is either a red pixel or a blue pixel, and Ci is the opposite chromaticity pixel.

A2 = (A1 + A3)/2 + (−G1 + 2G2 − G3)/2 (4.38)
A4 = (A1 + A7)/2 + (−G1 + 2G4 − G7)/2 (4.39)

The two cases in Equation 4.38 and Equation 4.39 are those when the nearest neighbors to Ai are in the same row and column, respectively. To estimate C5, we employ the same method as we did for the luminance channel. We again form two classifiers, α, β, which sense variation along the two diagonal directions:

α = abs(−G3 + 2G5 − G7) + abs(A3 − A7) (4.40)

β = abs(−G1 + 2G5 − G9) + abs(A1 − A9) (4.41)

In Equation 4.40 and Equation 4.41, α, β sense the high frequency information in the pixel neighborhood in the positive and negative diagonal, respectively.

C5 = (A3 + A7)/2 + (−G3 + 2G5 − G7)/2 if (α < β) (4.42)
C5 = (A1 + A9)/2 + (−G1 + 2G5 − G9)/2 if (α > β) (4.43)
C5 = (A1 + A3 + A7 + A9)/4 + (−G1 − G3 + 4G5 − G7 − G9)/4 if α = β (4.44)

These predictors are composed of arithmetic averages for the chromaticity data and appropriately scaled second derivative terms for the green data. Depending upon the preferred orientation of the edge, the predictor is chosen. We now have the three color planes populated for the Bayer Array data.
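The first pass (Equations 4.33-4.37) can be sketched in code; here `A` and `G` are the chrominance and green planes of the mosaic, indexed so that the cross of Figure 4.5 maps onto row/column neighbors of (i, j). A sketch for interior pixels only, with example arrays of our own:

```python
def ha_green(A, G, i, j):
    """Hamilton-Adams green estimate at a chrominance site (i, j),
    per Equations 4.33-4.37: classify with gradient-plus-second-
    derivative terms, then predict along the smoother direction."""
    d2h = -A[i][j - 2] + 2 * A[i][j] - A[i][j + 2]   # horizontal 2nd derivative
    d2v = -A[i - 2][j] + 2 * A[i][j] - A[i + 2][j]   # vertical 2nd derivative
    alpha = abs(d2h) + abs(G[i][j - 1] - G[i][j + 1])
    beta = abs(d2v) + abs(G[i - 1][j] - G[i + 1][j])
    if alpha < beta:
        return (G[i][j - 1] + G[i][j + 1]) / 2.0 + d2h / 4.0
    if alpha > beta:
        return (G[i - 1][j] + G[i + 1][j]) / 2.0 + d2v / 4.0
    return ((G[i][j - 1] + G[i][j + 1] + G[i - 1][j] + G[i + 1][j]) / 4.0
            + (d2h + d2v) / 8.0)

# Flat region: every predictor reduces to the plain average.
A_flat = [[100] * 5 for _ in range(5)]
G_flat = [[50] * 5 for _ in range(5)]
g5 = ha_green(A_flat, G_flat, 2, 2)

# Vertical edge through the neighborhood: the classifier selects the
# vertical (along-edge) predictor and keeps the green at 50.
A_edge = [[100, 100, 100, 0, 0] for _ in range(5)]
G_edge = [[50, 50, 50, 10, 10] for _ in range(5)]
g5_edge = ha_green(A_edge, G_edge, 2, 2)
```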

4.9 Comparison of Interpolation Methods

The above-mentioned methods are a few of those employed in commercial cameras today. The test images, shown in Figure 4.7, Figure 4.10 and Figure 4.11, are simulations of the data contained in the Bayer array of the camera. In other words, these are what-if cases for the Bayer array. These particular images were chosen as test images to emphasize the various details that each algorithm works on. The images have been broken down into three categories depending upon the dominant edge orientation; the fourth set of images are RGB images.

Type I Test Images

These images have a predominantly horizontal orientation to their gradient directions, i.e. their edges are as shown in Figure 4.7.

Figure 4.7 Type I Test Images: (a) Test Image 1 has vertical bars with decreasing thicknesses (16 pixels down to 1 pixel) (b) Test Image 2 has bars of constant width (3 pixels)

The first test image was chosen to demonstrate the artifacts each process introduces for varying thicknesses of stripes (increasing spatial frequencies). The second test image was chosen to study similar performance, but at a constant spatial frequency. Both these images are binary in nature, having no gray scales; grayscale images shall be dealt with later.
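As a concrete reference for the Hamilton-Adams method compared below, the chrominance step of Equations 4.40-4.44 can be sketched in Python. This is a minimal sketch under my own naming conventions (the dictionaries `G` and `A` index the Figure 4.6 neighborhood); it is not the authors' implementation.

```python
def hamilton_adams_c5(G, A):
    """Estimate the opposite-chromaticity value C5 at the center of the
    3x3 neighborhood of Figure 4.6.

    G : dict of green samples at positions 1, 3, 5, 7, 9
    A : dict of same-chromaticity samples at positions 1, 3, 7, 9
    """
    # Diagonal classifiers of Equations 4.40 and 4.41.
    alpha = abs(-G[3] + 2 * G[5] - G[7]) + abs(A[3] - A[7])
    beta = abs(-G[1] + 2 * G[5] - G[9]) + abs(A[1] - A[9])
    # Predictors of Equations 4.42-4.44: averages of the chromaticity
    # samples corrected by scaled green second derivatives.
    if alpha < beta:
        return (A[3] + A[7]) / 2 + (-G[3] + 2 * G[5] - G[7]) / 2
    if alpha > beta:
        return (A[1] + A[9]) / 2 + (-G[1] + 2 * G[5] - G[9]) / 2
    return (A[1] + A[3] + A[7] + A[9]) / 4 \
         + (-G[1] - G[3] + 4 * G[5] - G[7] - G[9]) / 4
```

On a flat region the correction terms vanish and the predictor reduces to a plain average, which is the intended behavior.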

Figure 4.8 (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on Test Image 1. Note: images are not the same size; each image has been cropped to hide edge effects.

In Figure 4.8, notice the artifacts introduced by linear interpolation. The effect observed is referred to as the zipper effect [2]. This effect is reduced in Cok's

interpolation, but the reduction is not as successful as in the Hamilton-Adams or Laroche-Prescott implementations of the interpolation process. Notice that all these algorithms perform poorly at high spatial frequencies (the right-hand side of Test Image 1).

Figure 4.9 (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on Test Image 2. Note: images are not the same size; each image has been cropped to hide edge effects.

In Figure 4.9, notice that the artifacts introduced exhibit color imbalance. Cok's interpolation, however, is able to reconstruct the black perceptually better than the bilinear

interpolation. The Hamilton-Adams and Laroche-Prescott interpolations, however, seem to perform better at reconstructing the original image. This is because both use information from the other channels (the chrominance channels to interpolate luminance, and vice versa).

Type II Images

Type II images have their edges oriented in the vertical direction, i.e., edges are encountered travelling in the vertical direction over the image, as can be seen in Figure 4.10.

Figure 4.10 Type II Test Images: (a) Test Image 3 has horizontal bars with decreasing thicknesses (16 pixels down to 1 pixel); (b) Test Image 4 has bars of constant width (3 pixels).

The results of interpolation on these images are exactly the same as those seen for Type I images. This is because all these algorithms have identical properties in the horizontal and vertical directions.

Type III Images

Images in this category have edges in (almost) all directions, as in the starburst image shown in Figure 4.11.

Figure 4.11 Type III Test Image: (a) full-size starburst image; (b) upper right quadrant, used in tests.

The disk in the center has edges in all directions. Sampling this disk poses a problem: the image will be aliased and the edges jagged. This is why the image is said to have edges in almost all directions. The starburst pattern has the advantage of having edges in almost all directions while its spatial frequencies decrease away from the center. The upper right quadrant of the starburst image shall be used for display purposes (for reasons of clarity), as shown in Figure 4.11b, and also because, as mentioned earlier, the interpolation methods discussed are symmetric in the horizontal and vertical directions.

Figure 4.12 (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the starburst image. Note: images are not the same size; each image has been cropped to hide edge effects.

As can be seen from Figure 4.12, a similar trend is noticed among the algorithms as was seen in the earlier test images.

RGB Images

Figure 4.13 and Figure 4.16 show two RGB images that were subsampled in the form of a Bayer array and then interpolated to recover the three color planes. The regions of interest (ROI) in these images have been highlighted with white boxes and are also shown magnified, for comparison with the results.

Figure 4.13 (a) Full-resolution macaw image (b) ROI about the green macaw's eye (c) ROI about the red macaw's eye.

Figure 4.14 (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the macaw image, showing the ROI about the green macaw's eye.

Figure 4.15 (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the macaw image, showing the ROI about the red macaw's eye.

Figure 4.16 (a) Full-resolution girl image (b) ROI about the balloon ribbon (c) ROI about the girl's mouth.

Figure 4.17 (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the girl image, showing the ROI about the balloon ribbon.

Perceptual differences can be judged very well from the above images. The fringe artifacts are also evident in these figures. The interpolated images are of the same size and hence can be viewed simultaneously for differences.

Figure 4.18 (a) Linear (b) Cok (c) Freeman (d) Laroche-Prescott (e) Hamilton-Adams interpolations on the girl image, showing the ROI about the girl's mouth.

4.10 Results

The Laroche-Prescott and Hamilton-Adams interpolation processes have similar forms. Both use second derivatives to estimate the location of the edge and perform an interpolation which may be written as

v(m, n) = u(m, n) + λ g(m, n)    (4.45)

where λ > 0 and g(m, n) is a suitably defined gradient at (m, n). In Laroche-Prescott interpolation, g(m, n) is a first-order gradient; in Hamilton-Adams interpolation, g(m, n) is the second derivative. Equation 4.45 is of the form used for unsharp masking [], an enhancement process. Unsharp masking may be interpreted either as subtraction of the low-pass image from the (scaled) original image, or as addition of a high-pass image to the (scaled) original image. This is shown below:

O = L + H    (4.46)

F = A·O - L = (A - 1)O + O - L = (A - 1)O + H    (4.47)

where O is the original image (composed of low-pass components L and high-pass components H) and F is the image obtained after unsharp masking. It may hence be expected that these processes sharpen edges in the resulting images, as is observed in the results of the Laroche-Prescott and Hamilton-Adams interpolations.
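The unsharp-masking identity of Equations 4.46-4.47 can be verified numerically. The sketch below uses a 3x3 box blur as an illustrative low-pass filter (the choice of low-pass operator is my assumption; the identity holds for any decomposition O = L + H).

```python
import numpy as np

# Unsharp-masking identity: with O = L + H, F = A*O - L equals (A-1)*O + H.
rng = np.random.default_rng(0)
O = rng.random((16, 16))            # original image
# 3x3 box blur (circular boundary via np.roll) as the low-pass image L.
L = np.zeros_like(O)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        L += np.roll(np.roll(O, dy, axis=0), dx, axis=1)
L /= 9.0
H = O - L                           # high-pass residual
A = 1.5                             # boost factor, A > 1 sharpens
F1 = A * O - L
F2 = (A - 1) * O + H
assert np.allclose(F1, F2)          # the two forms of Eq. 4.47 agree
```

The equivalence is purely algebraic, so it is independent of the blur used to form L.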

As an error metric, the ΔE*_ab error [54], as defined in Equation 2.9, shall be used. We shall, however, bear in mind the bounds on this error for detectability mentioned in [37]: ΔE*_ab errors less than about 2.3 are not easily detected, while errors greater than about 10 are so large that relative comparison is insignificant.

Table 4.1 ΔE*_ab metric. Rows: Linear, Cok, Freeman, Laroche-Prescott, Hamilton-Adams. Columns: Test Image 1, Test Image 2, Starburst (LL), macaws (Red Eye), macaws (Green Eye), Girl (Ribbon), Girl (Mouth).

Table 4.2 E_RGB metric (×10⁻³). Rows: Linear, Cok, Freeman. Columns: Test Image 1, Test Image 2, Starburst (LL), macaws (Red Eye), macaws (Green Eye), Girl (Ribbon), Girl (Mouth).

Table 4.2 (continued) E_RGB metric (×10⁻³): Laroche-Prescott and Hamilton-Adams rows, same columns as above.

In Table 4.1 and Table 4.2, the boldface numbers represent the minimum values for the corresponding image, which gives us an idea of which algorithm performs best for a given image. From Table 4.1, on the basis of simple majority, Freeman's algorithm outperforms the other algorithms. On the other hand, in two cases it performs very poorly. Let us examine the conditions under which this occurs.

For Test Image 1, as can be seen from Figure 4.8, linear interpolation produces the zipper effect mentioned earlier. This is because of the property of the linear interpolation process of introducing ripples [2].

Cok's interpolation does reduce hue transitions over the edges, since it interpolates not the colors but the hue of the colors; this reduces abrupt hue jumps and hence produces fewer perceptual artifacts.

Freeman's algorithm, using the median as an estimator, is not as robust and consistent as the other algorithms. The median does, however, maintain the edge locations in gray scale.

The second-derivative estimator consists of information from the chrominance channels. An edge lying on the chrominance photo-sites would hence contribute to the estimator values. This data, along with luminance gradient information, biases the interpolation of the luminance channel on the data in the chrominance channel, and vice versa for the chrominance channels. We hence get better estimates than we would by considering first-order gradients alone (as is done in the other algorithms).

Laroche-Prescott's algorithm interpolates color differences (chrominance minus luminance), which is in essence interpolating the chrominance channel with a bias toward the second derivative of the luminance channel (similar to the Hamilton-Adams

process). Laroche-Prescott uses the second derivatives only for the chrominance channels, as the luminance channel is linearly interpolated. This is essentially the only difference between the Hamilton-Adams and Laroche-Prescott implementations.

In Test Image 2, interestingly, though we find the same trend for Linear and Cok, we find that Laroche-Prescott and Hamilton-Adams are able to reproduce the image exactly. This is attributed to the structure (and size) of their estimators and predictors and the width of the bars themselves.

In the starburst image, the lower left corner is so badly distorted by the sampling and interpolation process that all the ΔE*_ab differences are so high that the image can be treated as having been lost (in that ROI only). This is attributed to the fact that all these algorithms are optimized for horizontal and vertical edge orientations (as the HVS is most sensitive to these orientations).

The macaw image brings forth the errors that come about due to color edges. In the balloon girl image, the prominent regions of distortion are the teeth and the balloon ribbons. In the teeth area, the Hamilton-Adams algorithm perceptually performs

better than the other algorithms. In the balloon ribbon region, since we do not have much luminance information, Laroche-Prescott's interpolation, in spite of using first-order gradients in the luminance interpolation (unlike Hamilton-Adams), performs about as well as, indeed better than, Hamilton-Adams interpolation.

Chapter 5

MFA Restoration Methods

5.1 Introduction

It was observed in Chapter 4 that the process of interpolation during demosaicking produces color artifacts for a variety of reasons. It would be desirable to overcome these artifacts, in an attempt to present the images in the best possible fashion to the observer. We make use of prior information about these images in the restoration process, as a post-demosaicking step.

5.2 Reconstruction processes

There are a variety of restoration techniques available in the literature for multidimensional signals. In [27], Hunt and Kubler formulate a minimum mean-squared-error restoration for multi-channel images, using inter-channel cross-correlation information. Their method is based on the assumption that the cross-correlation (inter-channel and intra-channel) matrix is diagonal, allowing the formulation of a KL

transform on the channels, making them orthogonal to each other, and performing the restoration independently on these channels. This permits the use of conventional restoration filters on the individual channels. In [18], Galatsanos introduces multichannel image restoration using the Wiener filter in multiple dimensions, permitting the use of inter-channel correlations.

Linear restoration methods, like Wiener filters, maximum-entropy restoration, etc., optimize an objective function that incorporates some global feature of the image, like the power spectrum, mean-squared error or entropy. These techniques suffer from one important limitation: they do not incorporate local features of the image into the linear model used [21]. One method of incorporating these local interactions is to model them as Markov Random Fields (MRFs) [21]. Geman and Geman [19] noted the equivalence of Gibbs distributions to MRFs.

5.3 Mean Field Annealing

In the work published by Bilbro and Snyder [11], the concept of mean field annealing as an approximation to stochastic simulated annealing was established. The work published by Perona and Malik [38] used a nonlinear diffusion process to

achieve the effect of encouraging intra-region smoothing in preference to inter-region smoothing. Since then, many researchers have reported improvements on this model [4], [5], [12], [13], [25]. In [44], Snyder et al. show the equivalence of these methods.

The method of MFA is a deterministic method that provides an approximation to stochastic relaxation. Relaxation, in general, is a multi-step process with the essential properties of being iterative (the output after each step is of the same form as the input) and of converging to a bounded result. MFA is a maximum a-posteriori relaxation method, which uses prior information about the image.

In other words, for an image g obtained from an original image f, using the lexicographic notation of Chapter 3, we wish to maximize the a-posteriori probability p(f|g) of the unknown image f given the measured image g, of size N×N, both represented in lexicographic form as vectors f and g. Using Bayes' rule, we have

p(f|g) = p(g|f) p(f) / p(g)    (5.1)

To simplify the process of maximizing p(f|g), we take logarithms, reducing the problem to

argmax_f p(f|g) = argmax_f ln p(f|g) = argmax_f ( ln p(g|f) + ln p(f) - ln p(g) )    (5.2)

In Equation 5.2, ln p(g) does not figure in the maximization process, as it is independent of f, and hence need not be considered.

Assume that the noise is signal-independent, additive, white and Gaussian. The probability density function (PDF) of a multidimensional Gaussian random variable ε with zero mean is given by

p(ε) = 1 / ( (2π)^(N²/2) |[Φ_ε]|^(1/2) ) · exp( -(1/2) ε^T [Φ_ε]^(-1) ε )    (5.3)

where N² is the length of the vector ε and [Φ_ε] is its covariance matrix,

[Φ_ε] = E{ ε ε^T }    (5.4)

where the expected value is over the ensemble of all possible samples from the PDF given in Equation 5.3. If we assume that the noise is white,

[Φ_ε] = σ² [I]    (5.5)

where [I] is an identity matrix of size N² × N². Since the noise is assumed to be signal-independent and additive,

g = Hf + ε    (5.6)

ε = g - Hf    (5.7)

We then have

p(g|f) = p(ε)    (5.8)

i.e.

p(g|f) = p(g - Hf)    (5.9)

Using Equation 5.3 and Equation 5.5,

p(g|f) = 1 / ( (2π)^(N²/2) σ^(N²) ) · exp( -(g - Hf)^T (g - Hf) / (2σ²) )    (5.10)

We would like to maximize the quantity in Equation 5.2. We now have

ln p(g|f) = K - (g - Hf)^T (g - Hf) / (2σ²)    (5.11)

ln p(f|g) = K - (g - Hf)^T (g - Hf) / (2σ²) + ln p(f)    (5.12)

where K = -ln( (2π)^(N²/2) σ^(N²) ), which shall be ignored as it is independent of f.

The second term is referred to as the noise Hamiltonian and the third term is referred to as the prior Hamiltonian. To solve the MAP problem, we minimize

H_I(f) = H_n(f) + H_p(f)    (5.13)

where H_n(f) = (g - Hf)^T (g - Hf) / (2σ²) and H_p(f) = -ln p(f). Notice the change in sign: we have thus transformed a maximization problem into one of minimization. The prior term determines the structure of the image f and acts as a penalty term in the relaxation process.

We minimize H_I(f) using an annealing process given by

f^(k+1) = f^k - α ∂H_I(f^k)/∂f    (5.14)

where the superscript k denotes the k-th step in the annealing process and α is referred to as the step size. The choice of parameters is explained in later sections of this chapter.

In [19], Geman and Geman prove that simulated annealing converges to the global minimum for a logarithmic annealing schedule like t_k = 1/ln k. However, such a

schedule is too slow for practical implementation. Instead, most implementations use a geometric annealing schedule given by

t_(k+1) = γ t_k    (5.15)

To determine the direction of movement in the annealing process, we need to evaluate ∂H_I(f)/∂f:

∂H_I(f)/∂f = ∂H_N(f)/∂f + ∂H_P(f)/∂f    (5.16)

Noise Term

The derivative of the noise term, H_N(f), is given by

∂H_N(f)/∂f = ∂/∂f [ (g - Hf)^T (g - Hf) / (2σ²) ]    (5.17)

which can be written as

∂H_N(f)/∂f = (1/2σ²) ∂/∂f ( g^T g - f^T H^T g - g^T Hf + f^T H^T Hf )    (5.18)

∂H_N(f)/∂f = (1/2σ²) ( 2H^T Hf - 2H^T g )    (5.19)

∂H_N(f)/∂f = (1/σ²) H^T ( Hf - g )    (5.20)

Reverting to convolution representations, Equation 5.20 can be written as

∂H_N(f)/∂f = (1/σ²) ( f ⊗ h - g ) ⊗ h_rev    (5.21)

where h is the convolution kernel corresponding to H and h_rev is that kernel reversed. This gives us the first term in Equation 5.16.

Prior Term

Geman and Geman [19] formalized the Hamiltonian for a discrete-lattice MRF. Using the equivalence of Gibbs distributions and MRFs (proved using the Hammersley-Clifford expansion in [10]), any MRF can be represented by a Gibbs distribution. For an MRF x,

p(x) = (1/Z) exp( -U(x)/T )    (5.22)

where T and Z are constants and U is called the energy function. It is of the form

U(x) = Σ_{c∈C} V_c(x)    (5.23)

where C denotes the cliques (neighborhoods) of the MRF, V_c is called the potential [19], and Z is called the partition function [19], [25], given by

Z = Σ_x exp( -U(x)/T )    (5.24)

In Equation 5.22, p is the probability measure on the set of all possible configurations of x (an MRF). We will refer to T as the temperature. For one pixel,

p(x) = (1/Z) exp( -(1/T) Σ_{c∈C} V_c(x) )    (5.25)

The idea of an MRF is applied to each pixel in an image f, giving us

p(f) = exp( -β Σ_i Σ_{c∈C} V_c(x) )    (5.26)

where the sum is taken over the cliques of pixel i and β is a normalizing constant. Taking logarithms reduces this probability to

ln p(f) = -β Σ_i Σ_{c∈C} V_c(x)    (5.27)

Piecewise Constant

Previous work [11], [12], [25] has demonstrated that for images that are piecewise constant, a suitable choice of prior term is

H_P(f) = -Σ_i (β/t) exp( -(∇f)² / (2t²) )    (5.28)

where ∇f is the gradient magnitude, given by

∇f = sqrt( f_x² + f_y² )    (5.29)

When written as a convolution process, Equation 5.29 becomes

∇f = sqrt( (q_x ⊗ f)² + (q_y ⊗ f)² )    (5.30)

where q_x and q_y are derivative kernels, e.g. the Sobel pair

q_x = [ -1 0 1 ; -2 0 2 ; -1 0 1 ],   q_y = q_x^T    (5.31)

Piecewise Linear

The prior term requires the quantity within the exponent to be close to zero in the regions of interest. Hence, for piecewise-linear models, we need the argument of the exponent to be zero over linear regions, which is satisfied by the second derivative. The Laplacian, being an estimate of the second derivative, could be used as the kernel q. However, the Laplacian may also be zero at saddle points. Instead, in Equation 5.28, we use

(∇²f)² = (∂²f/∂x²)² + (∂²f/∂y²)² + 2(∂²f/∂x∂y)²    (5.32)

When written as a convolution process, Equation 5.32 becomes

(∇²f)² = (f ⊗ q_xx)² + (f ⊗ q_yy)² + 2(f ⊗ q_xy)²    (5.33)

This is the quadratic variation, implemented with the following three convolution kernels [12], [40], which approximate the pure and mixed partial derivatives of f, e.g.

q_xx = [ 0 0 0 ; 1 -2 1 ; 0 0 0 ],   q_yy = q_xx^T,   q_xy = (1/4) [ 1 0 -1 ; 0 0 0 ; -1 0 1 ]    (5.34)

Rewriting Equation 5.28 with ∇f = Qf, where Q is the matrix representation of the operators in Equation 5.34:

H_P(f) = -Σ_i (β/t) exp( -‖Qf‖₂² / (2t²) )    (5.35)

where ‖·‖₂ represents the 2-norm of the vector. Note that we are using just one convolution kernel here for symbolic representation, when in fact we should have two (piecewise constant) or three (piecewise linear), as the case may be.

We know that for a vector x, ‖x‖₂² = x^T x. Hence

H_P(f) = -Σ (β/t) exp( -(Qf)^T (Qf) / (2t²) )    (5.36)

Differentiating Equation 5.36 with respect to f, we get

∂H_P(f)/∂f = (β/t) · (1/(2t²)) ∂(f^T Q^T Q f)/∂f · exp( -f^T Q^T Q f / (2t²) )    (5.37)

i.e. ∂H_P(f)/∂f = (β/t) · (2 Q^T Q f / (2t²)) exp( -f^T Q^T Q f / (2t²) )    (5.38)

i.e. ∂H_P(f)/∂f = (β/t³) Q^T Q f · exp( -f^T Q^T Q f / (2t²) )    (5.39)

Reverting to convolution representations, Equation 5.39 can be written as

∂H_P(f)/∂f = (β/t³) [ (f ⊗ q) exp( -(f ⊗ q)² / (2t²) ) ] ⊗ q_rev    (5.40)

Using Equation 5.21 and Equation 5.40 along with Equation 5.16, we can perform the gradient descent, provided the parameters for the gradient descent are chosen appropriately.

Note: piecewise-constant models work well for images with step edges, but for images with roof edges they are inappropriate, and we need to adopt piecewise-linear models.
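Putting the annealing step (Equation 5.14), the geometric schedule (Equation 5.15), the noise-term gradient (Equation 5.21) and the piecewise-constant prior gradient (Equation 5.40) together, the relaxation can be sketched as below. This is a minimal sketch: the Sobel-style kernels, the edge-padded convolution, and all default parameter values are my illustrative assumptions, not the thesis's exact choices.

```python
import numpy as np

def conv2(img, k):
    """'Same'-size 2-D convolution (kernel flipped), edge padding."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[kh - 1 - i, kw - 1 - j] * p[i:i + img.shape[0],
                                                 j:j + img.shape[1]]
    return out

# Assumed derivative kernels standing in for q_x, q_y of Eq. 5.31.
qx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
qy = qx.T

def mfa_pw_constant(g, h, sigma, beta, t0, gamma=0.95, alpha=0.1):
    """MFA relaxation (Eqs. 5.14-5.16) with the piecewise-constant prior
    gradient of Eq. 5.40 and the noise gradient of Eq. 5.21."""
    t_final = t0 / 100.0            # temperature down two orders of magnitude
    h_rev = h[::-1, ::-1]
    f, t = g.astype(float), t0
    while t > t_final:
        grad_noise = conv2(conv2(f, h) - g, h_rev) / sigma ** 2
        gx, gy = conv2(f, qx), conv2(f, qy)
        w = (beta / t ** 3) * np.exp(-(gx ** 2 + gy ** 2) / (2 * t ** 2))
        grad_prior = conv2(w * gx, qx[::-1, ::-1]) + conv2(w * gy, qy[::-1, ::-1])
        f = f - alpha * (grad_noise + grad_prior)   # Eq. 5.14
        t *= gamma                                  # Eq. 5.15
    return f
```

With an identity blur and β = 0 (no prior), the fixed point is the measured image itself, which is a useful sanity check on the gradients.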

Choice of parameters

There are heuristic estimates for the parameters that work with good precision, documented in [12], [25], [51]. Depending upon the choice of prior model, the estimates of the parameters vary. The temperature decrement for the annealing process is a number close to 1.0, such as 0.99 or 0.95. The annealing schedule is hence arbitrarily chosen to be

t_(k+1) = 0.95 t_k    (5.41)

where k is the iteration number. The parameter choices suggested by [12], [25] are given in Table 5.1.

Table 5.1 Parameter list for MFA

Parameter | Piecewise Constant model | Piecewise Linear model
T_init    | 4σ² (at least 4·O(σ²))   | 2.0
T_final   | 0.02σ, down two orders of magnitude | down two orders of magnitude
β         | O(σ)                     | O(σ)

The step size α and the decrement γ are chosen as in [12], [25].

The constant β is chosen so that the magnitudes of the noise-term and prior-term gradients are balanced.

This constraint of a prior can be applied in a variety of fashions. We have seen prior models for piecewise-constant and piecewise-linear images. Extensions to RGB images shall be given in the next chapter, where at each pixel location we will have not a scalar value but a vector with three entries, one for each color channel.

PSF estimation

Samples from each channel are considered independently. We will hence have data with alternating samples and zeros (data not captured) in each channel, along the x and y directions. This is illustrated in Figure 5.1, where a sample CFA image of a horizontal edge after convolution with a PSF is shown. Figure 5.1b shows the red channel of the CFA image. Figure 5.1c shows the profile of the red channel along the horizontal axis in the center of the image. The profile plot (Edge Spread Function, ESF) of the blurred edge in Figure 5.1b is considered to be that of an edge that

would result from fitting a curve to the samples. The fitted curve would look like the red curve in Figure 5.1d. In other words, the ESF is a plot of the edge that would have resulted from the samples at hand. Classical techniques [6], [47] are then used to estimate the blur from the ESF: differentiate the ESF to obtain the Line Spread Function (LSF); assuming a radially symmetric blur, one ESF is adequate to estimate the PSF. The same process is repeated for each channel to obtain the three different PSFs required in the restoration process. It is to be noted, however, that there are other techniques for blur estimation [6] (spectral signature analysis, cepstral methods, etc.).
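The ESF-to-LSF step described above can be sketched numerically. The logistic edge below is a stand-in for the fitted red-channel profile (an illustrative assumption, not the thesis data); the differentiation and normalization are the classical steps.

```python
import numpy as np

def lsf_from_esf(esf):
    """Differentiate an edge-spread function to obtain the line-spread
    function, then normalise the LSF to unit area."""
    lsf = np.gradient(np.asarray(esf, dtype=float))
    s = lsf.sum()
    return lsf / s if s != 0 else lsf

# Synthetic blurred step edge standing in for the fitted ESF.
x = np.linspace(-4.0, 4.0, 81)
esf = 1.0 / (1.0 + np.exp(-2.0 * x))
lsf = lsf_from_esf(esf)
```

The LSF peaks at the edge location; under the radially symmetric assumption stated in the text, this one profile suffices to characterize the PSF of the channel.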

Figure 5.1 Illustration of the blur estimation process: (a) sample image showing the CFA pattern; (b) samples in the red channel along the horizontal axis; (c) profile along the horizontal axis, showing the samples and the fitted curve.

Noise estimation

Noise estimates are computed on a per-channel basis, ignoring pixels where samples were not gathered. Classical techniques [6], [47] using a uniform area of the image are employed, where the variance in the signal will be due mostly to the noise. The noise, as mentioned earlier, is modeled as having a clipped Gaussian distribution; hence the only parameter that needs to be estimated is the variance.
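This per-channel estimate can be sketched as follows. The RGGB sample layout and the flat test patch are my assumptions for illustration; the essential points from the text are that only sampled pixels enter the estimate and that the region is nominally uniform.

```python
import numpy as np

def estimate_noise_sigma(channel, mask, region):
    """Estimate the noise standard deviation of one CFA channel from a
    nominally uniform region, using only the pixels where that channel
    was actually sampled (mask == True)."""
    patch = channel[region]
    sampled = patch[mask[region]]
    return float(np.std(sampled, ddof=1))

# Flat area plus Gaussian noise; red sampled at even rows/columns
# (an assumed Bayer layout).
rng = np.random.default_rng(1)
img = 100.0 + rng.normal(0.0, 2.0, size=(64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[0::2, 0::2] = True
sigma_hat = estimate_noise_sigma(img, mask, (slice(0, 64), slice(0, 64)))
```

The recovered sigma should be close to the true noise level (2.0 here), since the uniform patch contributes essentially no signal variance.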

Chapter 6

Demosaicking using Mean Field Annealing

6.1 Introduction

The primary idea of demosaicking is to reconstruct the missing samples while introducing a minimal number of artifacts. The inter-channel dependence, along with the intra-channel dependence, needs to be taken into consideration while performing this interpolation. Some of the algorithms described in Chapter 4 make use of this correlation. In this chapter, we look at methods of restoring these mosaicked images using MFA as the restoration process.

6.2 Independent Restoration (MFA-RGB)

MFA may be used such that the three (or, in the general case, as many as desired) channels in an image are treated independently of each other. This does not take inter-channel correlation into account; instead, it models each channel as a Markov random field. Each channel is restored using the image Hamiltonian

H_I(f) = H_n(f) + H_p(f)    (6.1)

The image Hamiltonian is minimized using gradient descent as described in Chapter 5, with piecewise-constant and piecewise-linear prior models.

Independently Piecewise Uniform

Each channel is treated as a gray-scale image and restored using the piecewise-constant prior described in Chapter 5. The initial estimate of f for the MFA process is taken to be the result of one of the demosaicking methods described in Chapter 4. The resulting image converges to a solution which depends upon the choice of the initial estimate. Convergence rates are discussed in later chapters.

Independently Piecewise Linear

Each channel is restored independently using a piecewise-linear prior. Results are discussed in Chapter 8.

6.3 Vector MFA (VMFA)

Multivariate images (images which contain more than one sample at each pixel location), especially RGB images, have strong correlations between channels [46], which can be made use of in the restoration process; doing so is called Vector MFA [23]. Consider each pixel in the multivariate image as a vector. In our case of an RGB image, each pixel will be represented as a 3-vector:

f_i = [ r_i  g_i  b_i ]^T    (6.2)

VMFA - Noise Term

The noise term H_N(f) for additive white Gaussian noise is then given by

H_N(f) = Σ_l ( g_l^r - H_r f_l^r )² / (2σ_r²) + Σ_m ( g_m^g - H_g f_m^g )² / (2σ_g²) + Σ_n ( g_n^b - H_b f_n^b )² / (2σ_b²)    (6.3)

where the subscripts l, m and n index the locations on the images f and g that have measurements in red, green and blue respectively, σ_{r,g,b} are the noise variances and H_{r,g,b} are the blurs in the respective channels. The formulation of the noise term is an extension of the univariate (gray-scale) case derived in Chapter 5.

VMFA - Prior Term

We are interested in a formulation of the prior term which will determine the structure of the image. If we require the restored image to have locally homogeneous magnitudes and directions, then the image should be component-wise uniform. In this case, the properties of pixel i are its intensity (magnitude) and angle (direction).

Piecewise-constant intensity

To maintain piecewise-constant intensity, we need to incorporate into the prior term a scalar measure of the difference between vectors (over the neighborhood of a pixel). The 2-norm of the difference vector is one such measure:

H_P(f) = -Σ_i Σ_{j∈ℵ_i} β/(πt)^(3/2) exp( -[ (r_i - r_j)² + (g_i - g_j)² + (b_i - b_j)² ] / (2t²) )    (6.4)

However, CIE L*a*b* difference metrics could also be used. This form of the prior [23] moves one color vector toward another, minimizing the 2-norm of the difference vector in the neighborhood of the pixel.

Piecewise-linear intensity

For preserving second-order discontinuities (piecewise linear), the second derivative of the difference vector may be used. As in the case of gray-scale images, the Laplacian (a good estimate of the second derivative) is not used; instead, the quadratic variation is used. The prior term is written as

H_P(f) = -Σ_i Σ_{k,l∈ℵ_i} β/(πt)^(3/2) exp( -[ (r_k - 2r_i + r_l)² + (g_k - 2g_i + g_l)² + (b_k - 2b_i + b_l)² ] / (2t²) )    (6.5)

The results of the vector piecewise-constant and vector piecewise-linear intensity priors are given in later chapters.
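The vector piecewise-constant prior of Equation 6.4 can be sketched as follows. The 4-connected neighborhood and the circular boundary handling (via `np.roll`) are simplifying assumptions for illustration.

```python
import numpy as np

def vmfa_prior_pw_constant(f, beta, t):
    """Vector piecewise-constant prior (Eq. 6.4) for an RGB image f of
    shape (H, W, 3): sum over each pixel's 4-neighbours of a Gaussian of
    the 2-norm of the colour-difference vector."""
    total = 0.0
    coef = beta / (np.pi * t) ** 1.5
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        d = f - np.roll(f, shift, axis=axis)      # colour difference vectors
        d2 = (d ** 2).sum(axis=2)                 # squared 2-norm per pair
        total += np.exp(-d2 / (2 * t ** 2)).sum()
    return -coef * total
```

On a constant image every difference vector is zero, so the prior attains its minimum, which is the behavior the restoration relies on.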

6.4 Piecewise-constant hue model (MFA-HSV)

At first, let us consider only the hue. It was mentioned in Section 4.4 that the human eye is sensitive to changes in hue, and that hue needs to be maintained constant over uniform regions of the image. In other words, the hue of the image needs to be piecewise-constant. This idea is incorporated into the prior model by using the hue-saturation-value (HSV) color model described earlier. The problem is now decomposed into three independent restoration processes:

- piecewise-constant hue
- no change to saturation
- piecewise-linear / constant value¹

Depending upon prior knowledge of the scene, various combinations of prior models can be used. In particular, the exact choice of scene information and prior model is yet to be investigated. The reconstruction process can be summarized in the flow-chart of Figure 6.1.

¹ The terms value and intensity will be used interchangeably; either term corresponds to the value in the HSV hexcone.

Figure 6.1 Flow-chart of the proposed restoration process: starting with the image g, split it into hue, saturation and value; restore the hue and the value; recombine to form the estimated image f.

Note, however, that the starting image g is the result of a demosaicking process like those described in Chapter 4. The conversion from RGB to HSV and vice versa is described in Appendix A.
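The split/restore/recombine pipeline of Figure 6.1 can be sketched with the standard-library `colorsys` conversions. The restoration operators are left as caller-supplied placeholders standing in for the MFA steps; this is a structural sketch, not the thesis's implementation.

```python
import colorsys

def restore_hsv(rgb_pixels, restore_hue, restore_value):
    """Figure 6.1 pipeline on a flat list of (r, g, b) tuples in [0, 1]:
    split into H, S, V; restore hue and value with the supplied
    operators; pass saturation through unchanged; recombine to RGB."""
    hsv = [colorsys.rgb_to_hsv(*p) for p in rgb_pixels]
    h = restore_hue([p[0] for p in hsv])       # piecewise-constant hue step
    s = [p[1] for p in hsv]                    # saturation is not changed
    v = restore_value([p[2] for p in hsv])     # piecewise-linear/constant value
    return [colorsys.hsv_to_rgb(hi, si, vi) for hi, si, vi in zip(h, s, v)]
```

With identity operators the pipeline reduces to the RGB-to-HSV round trip, which recovers the input up to floating-point error.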

Chapter 7
Experimental Approach

7.1 Introduction

During the course of this work, communications with Dr. Adams of Eastman Kodak Inc. [3] brought forth the problems arising from aliasing and clipping. The effects of sample aliasing are dramatic enough to warrant redoing most of the work of Chapter 4 using images with sharp edges. This chapter summarizes the methodology used in the experiments, which attempt to recreate the data from original CFAs. The restoration process may be employed after the demosaicking step, or even as a method to perform the demosaicking itself.

7.2 CFA Sampling

Use of a CFA introduces a new type of aliasing effect in the resulting images. For example, a very small (and therefore high-spatial-frequency) object such as a thin

line may be sampled by only a single color channel. In such a case, we see single-pixel artifacts that are dominated by a single color. Figure 7.1 illustrates such an artifact.

Figure 7.1 Illustration of the missing-pixel artifact (a) Original CFA image (b) Result after linear interpolation

The combination of the optical system in a camera and the sampling rate needs to be chosen to reduce aliasing. Given a desired sampling rate, aliasing can be eliminated or reduced by using an anti-aliasing filter in the optical assembly. To produce simulated CFA images, we take a full-color image and sample it using a CFA pattern. In the sampling process, we need to avoid aliasing artifacts. One

way to achieve this is to blur a high-resolution image and subsample it before applying the CFA mask. This must be done such that the Bayer (CFA) subsampling itself does not introduce artifacts due to missing data. The original high-resolution image needs to be at least twice the desired resolution of the CFA image.

Synthetic Images

As an example of synthetic images, let us look at a test image used in Chapter 4. Figure 7.2a shows the image used earlier, as part of a larger Bayer array mosaic, with sharp edges. Subpixel blurs (different in each channel) are introduced by using image data at a much higher resolution than required and applying a Gaussian PSF of zero mean and variance on the order of one. This blurred image is then subsampled to produce the resulting image. The result of a subpixel blur is shown in Figure 7.2b.
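The blur-then-subsample-then-mask procedure above can be sketched as follows; the RGGB Bayer layout, the separable Gaussian implementation, and the subsampling factor of 2 are assumptions for illustration.

```python
import numpy as np

def simulate_cfa(highres, factor=2, sigma=1.0):
    """Blur a high-resolution RGB image (values in [0, 1]) with a zero-mean
    Gaussian PSF, subsample by `factor`, then keep one color per pixel
    according to an RGGB Bayer mask (layout assumed for illustration)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()  # normalized 1-D Gaussian, applied separably in x and y
    blurred = np.empty(highres.shape, dtype=float)
    for c in range(3):
        ch = highres[..., c].astype(float)
        ch = np.apply_along_axis(np.convolve, 1, ch, g, mode='same')
        ch = np.apply_along_axis(np.convolve, 0, ch, g, mode='same')
        blurred[..., c] = ch
    low = blurred[::factor, ::factor]  # subsample to the target resolution
    h, w, _ = low.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cfa = np.where((yy % 2 == 0) & (xx % 2 == 0), low[..., 0], 0.0)  # R sites
    cfa = np.where((yy % 2) != (xx % 2), low[..., 1], cfa)           # G sites
    cfa = np.where((yy % 2 == 1) & (xx % 2 == 1), low[..., 2], cfa)  # B sites
    return cfa
```

Applying the blur before the subsampling is what keeps the Bayer mask from aliasing fine detail into single-pixel color artifacts.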

Figure 7.2 Type I Test Image (a) Test Image 2 used earlier (b) Test Image 2 after a subpixel blur of unit variance, subsampled accordingly (bars of constant width, 3 pixels)

The images used in Chapter 4 were blurred in the same fashion as the above image, subsampled, and then demosaicked. The results of this process are summarized in Tables 7.1 and 7.2. Comparing the error metrics in Tables 7.1 and 7.2 with those in Tables 4.1 and 4.2, we observe that the algorithms perform much better than they did on the images without such PSFs. The error metrics in L*a*b* space have been greatly reduced, illustrating the need for anti-aliasing in simulated data. This also emphasizes a notion of acceptable errors

which the demosaicking algorithms introduce. Observe that the ΔE*ab errors for the starburst image, which were highly prominent without the blur, are now in a not-easily-detectable range.

Table 7.1 ΔE*ab metric for subpixel Gaussian-blurred images

Method              Test Image 1   Test Image 2   Starburst (LL)
Linear
Cok
Freeman
Laroche-Prescott
Hamilton-Adams

Table 7.2 ΔE_RGB metric (×10⁻³) for subpixel Gaussian-blurred images

Method              Test Image 1   Test Image 2   Starburst (LL)
Linear
Cok
Freeman
Laroche-Prescott
Hamilton-Adams

To simulate CFA data in a more appropriate fashion, we need to add noise to the data that reflects the noise sensitivity of the CCD array. For synthetic images, zero-mean white Gaussian noise is added (such that SNR = 32 dB) to each channel

independently, making the noise uncorrelated between the various channels of the image.
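The per-channel noise step can be sketched as below, deriving the noise variance from the 32 dB target; the power-based SNR definition is an assumption, as the thesis does not spell out its convention here.

```python
import numpy as np

def add_channel_noise(img, snr_db=32.0, rng=None):
    """Add zero-mean white Gaussian noise to each channel independently so
    that the per-channel SNR (signal power / noise power) is `snr_db`.
    Independent draws keep the noise uncorrelated across channels."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.astype(float).copy()
    for c in range(img.shape[2]):
        signal_power = np.mean(out[..., c] ** 2)
        # SNR_dB = 10 log10(signal_power / noise_power)
        noise_power = signal_power / 10.0 ** (snr_db / 10.0)
        out[..., c] += rng.normal(0.0, np.sqrt(noise_power), out[..., c].shape)
    return out
```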

Chapter 8
Experimental Results

8.1 Introduction

The approach outlined in Chapter 7 is applied to synthetic and real-world CFA images. The synthetic images chosen have the properties modeled by the priors described earlier; however, the limitations of the prior models need to be understood and accounted for. It should be noted that the demosaicking methods presented in Chapter 4 have neither a blur-compensation term nor a noise-reduction term, so a direct performance comparison is not entirely fair. The method suggested in Chapter 6 is an advancement over the existing demosaicking methods. The initial estimate for the restoration process presented in this chapter is the result of the Hamilton-Adams demosaicking process; however, other demosaicking methods can also be used.

8.2 Synthetic Images

The two prior models we have developed are for piecewise-constant and piecewise-linear image formation models. We will hence consider these two types of synthetic images.

Piecewise-constant images

Figure 8.1 shows the synthetic images used to represent piecewise-constant images.

Figure 8.1 Piecewise-constant synthetic images (a) Test Image 1 (b) Test Image 2 (c) Test Image 3 (d) Test Image 4

For synthetic images, since the blur and noise in the image are known, we can use them in the restoration process. The results of applying MFA with a piecewise-constant prior model to the synthetic images are shown in Figures 8.2 through 8.4.

Figure 8.2 MFA Restored Test Image 1 (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

Notice that the restoration process has reproduced the step edges with high fidelity. At relatively high spatial frequencies (on the right-hand side of the images), the restoration of the edges is remarkably good. Similar results

are obtained for Test Image 2. The restoration parameters are kept the same for all the experiments, which demonstrates the robustness of the reconstruction process over a class of images. The MFA-RGB method, however, has image-boundary artifacts that result from the convolution at each stage of the annealing process. These can be removed by appropriately cropping the images.

Figure 8.3 MFA Restored Test Image 2 (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

Test Image 3 is an extension of Test Image 2, but its edges are between colors and black (zero intensity), rather than between white and black as in Test Image 2. Different colors have been chosen to illustrate the robustness of the color restoration process.

Figure 8.4 MFA Restored Test Image 3 (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

As in the earlier cases, the restoration process reproduces the original image (without blur and noise) with remarkable fidelity. The metric measures of the error are

shown in Table 8.1. Two parts of the starburst image that are of particular interest (regions of interest, ROIs) are shown in Figure 8.5.

Figure 8.5 Regions of interest in the starburst image (a) ROI 1 (b) ROI 2

Different properties of the image are highlighted in the two regions of interest. It has been found that regions of high spatial frequency are destroyed by the MFA process; this occurs when the convolution kernel is larger than the feature itself. In Figure 8.7, we notice that the image has been denoised and the edges are step edges, unlike those obtained from the demosaicking methods presented in Chapter 4.

Figure 8.6 MFA Restored ROI 1 of the starburst image (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

Figure 8.7 MFA Restored ROI 2 of the starburst image (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

Table 8.1 Error metrics for piecewise-constant restored images

Image                     Method used          ΔE*ab (×10⁻³)   ΔE_RGB
Test Image 1              Hamilton-Adams
Test Image 1              MFA-RGB (constant)
Test Image 1              VMFA (constant)
Test Image 1              MFA-HSV (constant)
Test Image 2              Hamilton-Adams
Test Image 2              MFA-RGB (constant)
Test Image 2              VMFA (constant)
Test Image 2              MFA-HSV (constant)
Test Image 3              Hamilton-Adams
Test Image 3              MFA-RGB (constant)
Test Image 3              VMFA (constant)
Test Image 3              MFA-HSV (constant)
Starburst Image (ROI 1)   Hamilton-Adams
Starburst Image (ROI 1)   MFA-RGB (constant)
Starburst Image (ROI 1)   VMFA (constant)
Starburst Image (ROI 1)   MFA-HSV (constant)
Starburst Image (ROI 2)   Hamilton-Adams
Starburst Image (ROI 2)   MFA-RGB (constant)
Starburst Image (ROI 2)   VMFA (constant)
Starburst Image (ROI 2)   MFA-HSV (constant)

Observe that MFA-HSV has the best performance for Test Images 1, 2, and 3. However, for the starburst images it is not so. This behavior is due to the choice of

parameters in the restoration process. To make the MFA process and the choice of parameters generic, the rule set described in Table 5.1 is used. Tuning the parameters for each image would clearly yield optimal images after the annealing process, but since this is not a practical solution, the parameters are not modified per image, giving the performance shown in Table 8.2.

Piecewise-linear images

Figure 8.8 shows the test images used as piecewise-linear images.

Figure 8.8 Piecewise-linear synthetic images (a) Test Image 5 (wedge) (b) Test Image 6 (color wedges)

The wedge image is a linear gradient from black to white, with additive white Gaussian noise (SNR = 32 dB). The color wedges image is a collection of three differently aligned color gradients producing color edges.

Figure 8.9 MFA Restored wedge image (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration
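A wedge of this kind is easy to synthesize; the sketch below is one plausible construction (dimensions and seed are arbitrary), reusing the 32 dB SNR figure from the text.

```python
import numpy as np

def make_wedge(h=64, w=64, snr_db=32.0, seed=0):
    """Build a Test-Image-5-style wedge: a linear black-to-white ramp
    replicated across the three channels, plus additive white Gaussian
    noise at the requested SNR (construction details assumed)."""
    ramp = np.linspace(0.0, 1.0, w)
    img = np.broadcast_to(ramp[None, :, None], (h, w, 3)).copy()
    # Scale the noise so that signal power / noise power matches snr_db.
    noise_power = np.mean(img ** 2) / 10.0 ** (snr_db / 10.0)
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, np.sqrt(noise_power), img.shape)
```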

Figure 8.10 ROI in the color wedges image

The color wedges image has piecewise-linearly varying hues; the region of interest in this image is the color edges. The demosaicking algorithms do not perform well at color edges. This is seen in Figure 8.11a, where the edge has color artifacts that appear as saddle points. This occurs because the Laplacian is used to estimate the second derivative. The restoration processes, which use the quadratic variation, reconstruct the edge with a reduction in the saddle-point artifact. Table 8.2 lists metric results from the restoration process. It is seen from the figures that the demosaicking process by itself reconstructs the linear regions well but performs poorly at color edges, which the restoration process is able to overcome.
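The saddle-point explanation can be made concrete: at a saddle, the second derivatives cancel inside the Laplacian but not inside the quadratic variation. The finite-difference sketch below (discretization choices are mine) computes the two penalties side by side.

```python
import numpy as np

def second_derivative_penalties(f):
    """Contrast the Laplacian penalty (f_xx + f_yy)^2 with the quadratic
    variation f_xx^2 + 2 f_xy^2 + f_yy^2 used in the piecewise-linear prior.
    At a saddle point (f_xx = -f_yy) the Laplacian vanishes while the
    quadratic variation does not, which is why the quadratic variation
    suppresses the saddle-point artifacts the Laplacian leaves behind."""
    fxx = f[:, 2:] - 2 * f[:, 1:-1] + f[:, :-2]   # second difference in x
    fyy = f[2:, :] - 2 * f[1:-1, :] + f[:-2, :]   # second difference in y
    # central estimate of the mixed derivative f_xy
    fxy = 0.25 * (f[2:, 2:] - f[2:, :-2] - f[:-2, 2:] + f[:-2, :-2])
    fxx = fxx[1:-1, :]  # trim so all three arrays cover the same interior
    fyy = fyy[:, 1:-1]
    laplacian_pen = (fxx + fyy) ** 2
    quad_var = fxx ** 2 + 2 * fxy ** 2 + fyy ** 2
    return laplacian_pen, quad_var
```

On the saddle surface f(x, y) = x² − y², the Laplacian penalty is identically zero while the quadratic variation stays strictly positive.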

Figure 8.11 MFA Restored ROI of the color-wedges image (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

Table 8.2 Error metrics for piecewise-linear restored images

Image                Method used        ΔE*ab (×10⁻⁴)   ΔE_RGB
Test Image 5         Hamilton-Adams
Test Image 5         MFA-RGB (linear)
Test Image 5         VMFA (linear)
Test Image 5         MFA-HSV (linear)
Test Image 6 (ROI)   Hamilton-Adams
Test Image 6 (ROI)   MFA-RGB (linear)
Test Image 6 (ROI)   VMFA (linear)
Test Image 6 (ROI)   MFA-HSV (linear)

8.3 Real-world Images

Four images have been chosen from a collection of images to illustrate the properties of the MFA restoration processes. The resolution chart (Figure 8.12) and the matisse image (Figure 8.16) were obtained from Texas Instruments Inc. [56]. The books image (Figure 8.20) and the car image (Figure 8.23) were obtained from Pulnix America Inc. [26]. The numerical metrics used earlier are not included, as no ground-truth image is available for the CFA images.

Figure 8.12 CFA resolution chart image

The ROIs for the resolution chart image are shown in Figure 8.13. These images have been superimposed with the CFA pattern to give an idea of the arrangement of the color samples.

Figure 8.13 Regions of interest in the resolution chart image (a) ROI 1 (b) ROI 2

It was remarked earlier, for synthetic images, that MFA does not maintain high spatial frequencies. The same property is observed in the real-world images in Figures 8.14 and 8.15, and is seen especially in the checker-board pattern.

Figure 8.14 MFA Restored ROI 1 of the resolution chart image (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

Figure 8.15 MFA Restored ROI 2 of the resolution chart image (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

The second image used in this experiment is shown in Figure 8.16. The brightness adjustments performed on the resolution chart image have also been performed on this image. The image is piecewise-linear in a few regions and mostly piecewise-constant.

Figure 8.16 CFA matisse image

However, it should be understood that this image does not fully test the robustness of the MFA prior models against cases other than those modeled in earlier chapters. Figure 8.17 shows the ROIs in the matisse image.

Figure 8.17 Regions of interest in the matisse image (a) ROI 1 (b) ROI 2

ROI 1 demonstrates the ability of the MFA methods to reconstruct color edges in the demosaicking process. Similarly, ROI 2 demonstrates text-reproduction capabilities.

Figure 8.18 MFA Restored ROI 1 of the matisse image (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

Figure 8.19 MFA Restored ROI 2 of the matisse image (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

The third CFA image, books, a picture of a scene with a book-stand and the Macbeth color chart, is shown in Figure 8.20.

Figure 8.20 CFA books image

This image contains a color chart and regions of high detail (text on the books). Such a region is chosen as an ROI, shown in Figure 8.21.

Figure 8.21 Region of interest in the books image

Although the color transformation matrix is not known, so the colors cannot be transformed from the camera's color space to a visual color space, the artifacts introduced by the demosaicking process can still be observed. Figures 8.22b-d show the piecewise-constant restoration of this ROI. The color chips and the text on the book covers are the regions where the restoration is most clearly observed.

Figure 8.22 MFA Restored ROI of the books image (a) result of demosaicking (b) MFA-RGB (c) VMFA (d) MFA-HSV restoration

The fourth CFA image used is a picture of a car in daylight. This image has sharp edges in regions of specular reflection off the car, which produce artifacts after the demosaicking process. The image also has regions where the body of the

car has cylindrical shapes, which have not been modeled by the MFA process described here. However, the piecewise-constant and piecewise-linear models have been applied to the ROI to observe their performance.

Figure 8.23 CFA car image


More information

Light. intensity wavelength. Light is electromagnetic waves Laser is light that contains only a narrow spectrum of frequencies

Light. intensity wavelength. Light is electromagnetic waves Laser is light that contains only a narrow spectrum of frequencies Image formation World, image, eye Light Light is electromagnetic waves Laser is light that contains only a narrow spectrum of frequencies intensity wavelength Visible light is light with wavelength from

More information

Color Image Processing

Color Image Processing Color Image Processing Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Color Used heavily in human vision. Visible spectrum for humans is 400 nm (blue) to 700

More information

Introduction to Computer Vision CSE 152 Lecture 18

Introduction to Computer Vision CSE 152 Lecture 18 CSE 152 Lecture 18 Announcements Homework 5 is due Sat, Jun 9, 11:59 PM Reading: Chapter 3: Color Electromagnetic Spectrum The appearance of colors Color appearance is strongly affected by (at least):

More information

Color Computer Vision Spring 2018, Lecture 15

Color Computer Vision Spring 2018, Lecture 15 Color http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 15 Course announcements Homework 4 has been posted. - Due Friday March 23 rd (one-week homework!) - Any questions about the

More information

Chapter 6: Color Image Processing. Office room : 841

Chapter 6: Color Image Processing.   Office room : 841 Chapter 6: Color Image Processing Lecturer: Jianbing Shen Email : shenjianbing@bit.edu.cn Office room : 841 http://cs.bit.edu.cn/shenjianbing cn/shenjianbing It is only after years of preparation that

More information

MULTIMEDIA SYSTEMS

MULTIMEDIA SYSTEMS 1 Department of Computer Engineering, g, Faculty of Engineering King Mongkut s Institute of Technology Ladkrabang 01076531 MULTIMEDIA SYSTEMS Pakorn Watanachaturaporn, Ph.D. pakorn@live.kmitl.ac.th, pwatanac@gmail.com

More information

Figure 1: Energy Distributions for light

Figure 1: Energy Distributions for light Lecture 4: Colour The physical description of colour Colour vision is a very complicated biological and psychological phenomenon. It can be described in many different ways, including by physics, by subjective

More information

University of British Columbia CPSC 414 Computer Graphics

University of British Columbia CPSC 414 Computer Graphics University of British Columbia CPSC 414 Computer Graphics Color 2 Week 10, Fri 7 Nov 2003 Tamara Munzner 1 Readings Chapter 1.4: color plus supplemental reading: A Survey of Color for Computer Graphics,

More information

12/02/2017. From light to colour spaces. Electromagnetic spectrum. Colour. Correlated colour temperature. Black body radiation.

12/02/2017. From light to colour spaces. Electromagnetic spectrum. Colour. Correlated colour temperature. Black body radiation. From light to colour spaces Light and colour Advanced Graphics Rafal Mantiuk Computer Laboratory, University of Cambridge 1 2 Electromagnetic spectrum Visible light Electromagnetic waves of wavelength

More information

Images. CS 4620 Lecture Kavita Bala w/ prior instructor Steve Marschner. Cornell CS4620 Fall 2015 Lecture 38

Images. CS 4620 Lecture Kavita Bala w/ prior instructor Steve Marschner. Cornell CS4620 Fall 2015 Lecture 38 Images CS 4620 Lecture 38 w/ prior instructor Steve Marschner 1 Announcements A7 extended by 24 hours w/ prior instructor Steve Marschner 2 Color displays Operating principle: humans are trichromatic match

More information

Multimedia Systems Color Space Mahdi Amiri March 2012 Sharif University of Technology

Multimedia Systems Color Space Mahdi Amiri March 2012 Sharif University of Technology Course Presentation Multimedia Systems Color Space Mahdi Amiri March 2012 Sharif University of Technology Physics of Color Light Light or visible light is the portion of electromagnetic radiation that

More information

EECS490: Digital Image Processing. Lecture #12

EECS490: Digital Image Processing. Lecture #12 Lecture #12 Image Correlation (example) Color basics (Chapter 6) The Chromaticity Diagram Color Images RGB Color Cube Color spaces Pseudocolor Multispectral Imaging White Light A prism splits white light

More information

Comparing Sound and Light. Light and Color. More complicated light. Seeing colors. Rods and cones

Comparing Sound and Light. Light and Color. More complicated light. Seeing colors. Rods and cones Light and Color Eye perceives EM radiation of different wavelengths as different colors. Sensitive only to the range 4nm - 7 nm This is a narrow piece of the entire electromagnetic spectrum. Comparing

More information

Computer Graphics Si Lu Fall /27/2016

Computer Graphics Si Lu Fall /27/2016 Computer Graphics Si Lu Fall 2017 09/27/2016 Announcement Class mailing list https://groups.google.com/d/forum/cs447-fall-2016 2 Demo Time The Making of Hallelujah with Lytro Immerge https://vimeo.com/213266879

More information

12 Color Models and Color Applications. Chapter 12. Color Models and Color Applications. Department of Computer Science and Engineering 12-1

12 Color Models and Color Applications. Chapter 12. Color Models and Color Applications. Department of Computer Science and Engineering 12-1 Chapter 12 Color Models and Color Applications 12-1 12.1 Overview Color plays a significant role in achieving realistic computer graphic renderings. This chapter describes the quantitative aspects of color,

More information

Color images C1 C2 C3

Color images C1 C2 C3 Color imaging Color images C1 C2 C3 Each colored pixel corresponds to a vector of three values {C1,C2,C3} The characteristics of the components depend on the chosen colorspace (RGB, YUV, CIELab,..) Digital

More information

Image Processing for Mechatronics Engineering For senior undergraduate students Academic Year 2017/2018, Winter Semester

Image Processing for Mechatronics Engineering For senior undergraduate students Academic Year 2017/2018, Winter Semester Image Processing for Mechatronics Engineering For senior undergraduate students Academic Year 2017/2018, Winter Semester Lecture 8: Color Image Processing 04.11.2017 Dr. Mohammed Abdel-Megeed Salem Media

More information

Color Digital Imaging: Cameras, Scanners and Monitors

Color Digital Imaging: Cameras, Scanners and Monitors Color Digital Imaging: Cameras, Scanners and Monitors H. J. Trussell Dept. of Electrical and Computer Engineering North Carolina State University Raleigh, NC 27695-79 hjt@ncsu.edu Color Imaging Devices

More information

Color and Color Model. Chap. 12 Intro. to Computer Graphics, Spring 2009, Y. G. Shin

Color and Color Model. Chap. 12 Intro. to Computer Graphics, Spring 2009, Y. G. Shin Color and Color Model Chap. 12 Intro. to Computer Graphics, Spring 2009, Y. G. Shin Color Interpretation of color is a psychophysiology problem We could not fully understand the mechanism Physical characteristics

More information

Color and Perception. CS535 Fall Daniel G. Aliaga Department of Computer Science Purdue University

Color and Perception. CS535 Fall Daniel G. Aliaga Department of Computer Science Purdue University Color and Perception CS535 Fall 2014 Daniel G. Aliaga Department of Computer Science Purdue University Elements of Color Perception 2 Elements of Color Physics: Illumination Electromagnetic spectra; approx.

More information

Lecture Notes 11 Introduction to Color Imaging

Lecture Notes 11 Introduction to Color Imaging Lecture Notes 11 Introduction to Color Imaging Color filter options Color processing Color interpolation (demozaicing) White balancing Color correction EE 392B: Color Imaging 11-1 Preliminaries Up till

More information

Lecture 8. Color Image Processing

Lecture 8. Color Image Processing Lecture 8. Color Image Processing EL512 Image Processing Dr. Zhu Liu zliu@research.att.com Note: Part of the materials in the slides are from Gonzalez s Digital Image Processing and Onur s lecture slides

More information

Colors in images. Color spaces, perception, mixing, printing, manipulating...

Colors in images. Color spaces, perception, mixing, printing, manipulating... Colors in images Color spaces, perception, mixing, printing, manipulating... Tomáš Svoboda Czech Technical University, Faculty of Electrical Engineering Center for Machine Perception, Prague, Czech Republic

More information

the eye Light is electromagnetic radiation. The different wavelengths of the (to humans) visible part of the spectra make up the colors.

the eye Light is electromagnetic radiation. The different wavelengths of the (to humans) visible part of the spectra make up the colors. Computer Assisted Image Analysis TF 3p and MN1 5p Color Image Processing Lecture 14 GW 6 (suggested problem 6.25) How does the human eye perceive color? How can color be described using mathematics? Different

More information

Color and Color Models

Color and Color Models Einführung in Visual Computing 186.822 Color and Color Models Werner Purgathofer Color problem specification light and perception colorimetry device color systems color ordering systems color symbolism

More information

Mahdi Amiri. March Sharif University of Technology

Mahdi Amiri. March Sharif University of Technology Course Presentation Multimedia Systems Color Space Mahdi Amiri March 2014 Sharif University of Technology The wavelength λ of a sinusoidal waveform traveling at constant speed ν is given by Physics of

More information

Computer Graphics. Si Lu. Fall er_graphics.htm 10/02/2015

Computer Graphics. Si Lu. Fall er_graphics.htm 10/02/2015 Computer Graphics Si Lu Fall 2017 http://www.cs.pdx.edu/~lusi/cs447/cs447_547_comput er_graphics.htm 10/02/2015 1 Announcements Free Textbook: Linear Algebra By Jim Hefferon http://joshua.smcvt.edu/linalg.html/

More information

The Principles of Chromatics

The Principles of Chromatics The Principles of Chromatics 03/20/07 2 Light Electromagnetic radiation, that produces a sight perception when being hit directly in the eye The wavelength of visible light is 400-700 nm 1 03/20/07 3 Visible

More information

Color and perception Christian Miller CS Fall 2011

Color and perception Christian Miller CS Fall 2011 Color and perception Christian Miller CS 354 - Fall 2011 A slight detour We ve spent the whole class talking about how to put images on the screen What happens when we look at those images? Are there any

More information

Color Cameras: Three kinds of pixels

Color Cameras: Three kinds of pixels Color Cameras: Three kinds of pixels 3 Chip Camera Introduction to Computer Vision CSE 252a Lecture 9 Lens Dichroic prism Optically split incoming light onto three sensors, each responding to different

More information

Image acquisition. In both cases, the digital sensing element is one of the following: Line array Area array. Single sensor

Image acquisition. In both cases, the digital sensing element is one of the following: Line array Area array. Single sensor Image acquisition Digital images are acquired by direct digital acquisition (digital still/video cameras), or scanning material acquired as analog signals (slides, photographs, etc.). In both cases, the

More information

Test 1: Example #2. Paul Avery PHY 3400 Feb. 15, Note: * indicates the correct answer.

Test 1: Example #2. Paul Avery PHY 3400 Feb. 15, Note: * indicates the correct answer. Test 1: Example #2 Paul Avery PHY 3400 Feb. 15, 1999 Note: * indicates the correct answer. 1. A red shirt illuminated with yellow light will appear (a) orange (b) green (c) blue (d) yellow * (e) red 2.

More information

Capturing Light in man and machine

Capturing Light in man and machine Capturing Light in man and machine CS194: Image Manipulation & Computational Photography Alexei Efros, UC Berkeley, Fall 2014 Etymology PHOTOGRAPHY light drawing / writing Image Formation Digital Camera

More information

Lecture 3: Grey and Color Image Processing

Lecture 3: Grey and Color Image Processing I22: Digital Image processing Lecture 3: Grey and Color Image Processing Prof. YingLi Tian Sept. 13, 217 Department of Electrical Engineering The City College of New York The City University of New York

More information

Color. Some slides are adopted from William T. Freeman

Color. Some slides are adopted from William T. Freeman Color Some slides are adopted from William T. Freeman 1 1 Why Study Color Color is important to many visual tasks To find fruits in foliage To find people s skin (whether a person looks healthy) To group

More information

Multimedia Systems and Technologies

Multimedia Systems and Technologies Multimedia Systems and Technologies Faculty of Engineering Master s s degree in Computer Engineering Marco Porta Computer Vision & Multimedia Lab Dipartimento di Ingegneria Industriale e dell Informazione

More information

Digital Image Processing

Digital Image Processing Digital Image Processing 6. Color Image Processing Computer Engineering, Sejong University Category of Color Processing Algorithm Full-color processing Using Full color sensor, it can obtain the image

More information

Image Demosaicing. Chapter Introduction. Ruiwen Zhen and Robert L. Stevenson

Image Demosaicing. Chapter Introduction. Ruiwen Zhen and Robert L. Stevenson Chapter 2 Image Demosaicing Ruiwen Zhen and Robert L. Stevenson 2.1 Introduction Digital cameras are extremely popular and have replaced traditional film-based cameras in most applications. To produce

More information

Digital Image Processing Chapter 6: Color Image Processing ( )

Digital Image Processing Chapter 6: Color Image Processing ( ) Digital Image Processing Chapter 6: Color Image Processing (6.1 6.3) 6. Preview The process followed by the human brain in perceiving and interpreting color is a physiopsychological henomenon that is not

More information

Capturing Light in man and machine

Capturing Light in man and machine Capturing Light in man and machine CS194: Image Manipulation & Computational Photography Alexei Efros, UC Berkeley, Fall 2015 Etymology PHOTOGRAPHY light drawing / writing Image Formation Digital Camera

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Lecture # 3 Digital Image Fundamentals ALI JAVED Lecturer SOFTWARE ENGINEERING DEPARTMENT U.E.T TAXILA Email:: ali.javed@uettaxila.edu.pk Office Room #:: 7 Presentation Outline

More information

CPSC 4040/6040 Computer Graphics Images. Joshua Levine

CPSC 4040/6040 Computer Graphics Images. Joshua Levine CPSC 4040/6040 Computer Graphics Images Joshua Levine levinej@clemson.edu Lecture 04 Displays and Optics Sept. 1, 2015 Slide Credits: Kenny A. Hunt Don House Torsten Möller Hanspeter Pfister Agenda Open

More information

Wireless Communication

Wireless Communication Wireless Communication Systems @CS.NCTU Lecture 4: Color Instructor: Kate Ching-Ju Lin ( 林靖茹 ) Chap. 4 of Fundamentals of Multimedia Some reference from http://media.ee.ntu.edu.tw/courses/dvt/15f/ 1 Outline

More information

Prof. Feng Liu. Winter /09/2017

Prof. Feng Liu. Winter /09/2017 Prof. Feng Liu Winter 2017 http://www.cs.pdx.edu/~fliu/courses/cs410/ 01/09/2017 Today Course overview Computer vision Admin. Info Visual Computing at PSU Image representation Color 2 Big Picture: Visual

More information

Color Reproduction. Chapter 6

Color Reproduction. Chapter 6 Chapter 6 Color Reproduction Take a digital camera and click a picture of a scene. This is the color reproduction of the original scene. The success of a color reproduction lies in how close the reproduced

More information

Cvision 2. António J. R. Neves João Paulo Silva Cunha. Bernardo Cunha. IEETA / Universidade de Aveiro

Cvision 2. António J. R. Neves João Paulo Silva Cunha. Bernardo Cunha. IEETA / Universidade de Aveiro Cvision 2 Digital Imaging António J. R. Neves (an@ua.pt) & João Paulo Silva Cunha & Bernardo Cunha IEETA / Universidade de Aveiro Outline Image sensors Camera calibration Sampling and quantization Data

More information

OPTO 5320 VISION SCIENCE I

OPTO 5320 VISION SCIENCE I OPTO 5320 VISION SCIENCE I Monocular Sensory Processes of Vision: Color Vision Ronald S. Harwerth, OD, PhD Office: Room 2160 Office hours: By appointment Telephone: 713-743-1940 email: rharwerth@uh.edu

More information

Color vision and representation

Color vision and representation Color vision and representation S M L 0.0 0.44 0.52 Mark Rzchowski Physics Department 1 Eye perceives different wavelengths as different colors. Sensitive only to 400nm - 700 nm range Narrow piece of the

More information

IFT3355: Infographie Couleur. Victor Ostromoukhov, Pierre Poulin Dép. I.R.O. Université de Montréal

IFT3355: Infographie Couleur. Victor Ostromoukhov, Pierre Poulin Dép. I.R.O. Université de Montréal IFT3355: Infographie Couleur Victor Ostromoukhov, Pierre Poulin Dép. I.R.O. Université de Montréal Color Appearance Visual Range Electromagnetic waves (in nanometres) γ rays X rays ultraviolet violet

More information

Werner Purgathofer

Werner Purgathofer Einführung in Visual Computing 186.822 Color and Color Models Werner Purgathofer Color problem specification light and perceptionp colorimetry device color systems color ordering systems color symbolism

More information

Reading. Foley, Computer graphics, Chapter 13. Optional. Color. Brian Wandell. Foundations of Vision. Sinauer Associates, Sunderland, MA 1995.

Reading. Foley, Computer graphics, Chapter 13. Optional. Color. Brian Wandell. Foundations of Vision. Sinauer Associates, Sunderland, MA 1995. Reading Foley, Computer graphics, Chapter 13. Color Optional Brian Wandell. Foundations of Vision. Sinauer Associates, Sunderland, MA 1995. Gerald S. Wasserman. Color Vision: An Historical ntroduction.

More information

What is Color Gamut? Public Information Display. How do we see color and why it matters for your PID options?

What is Color Gamut? Public Information Display. How do we see color and why it matters for your PID options? What is Color Gamut? How do we see color and why it matters for your PID options? One of the buzzwords at CES 2017 was broader color gamut. In this whitepaper, our experts unwrap this term to help you

More information

Problems. How do cameras measure light and color? How do humans perceive light and color?

Problems. How do cameras measure light and color? How do humans perceive light and color? Light and Color Problems How do cameras measure light and color? Radiometry How do humans perceive light and color? Photometry How do computers represent light and color? How do monitors display light

More information