MEASUREMENT AND ANALYSIS OF DEFECT DEVELOPMENT IN DIGITAL IMAGERS


MEASUREMENT AND ANALYSIS OF DEFECT DEVELOPMENT IN DIGITAL IMAGERS

by Jenny Leung
Bachelor of Computer Engineering, University of Victoria, 2006

THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in the Faculty of Engineering

© Jenny Leung 2011
SIMON FRASER UNIVERSITY
Spring 2011

All rights reserved. However, in accordance with the Copyright Act of Canada, this work may be reproduced, without authorization, under the conditions for Fair Dealing. Therefore, limited reproduction of this work for the purposes of private study, research, criticism, review and news reporting is likely to be in accordance with the law, particularly if cited appropriately.

APPROVAL

Name: Jenny Leung
Degree: Master of Applied Science
Title of Thesis: Measurement and Analysis of Defect Development in Digital Imagers

Examining Committee:
Chair: Dr. Albert Leung, PEng, Professor, Engineering Science
Dr. Glenn H. Chapman, PEng, Senior Supervisor, Professor, Engineering Science
Dr. Marinko V. Sarunic, PEng, Supervisor, Assistant Professor, Engineering Science
Dr. Israel Koren, External Examiner, University of Massachusetts at Amherst, Dept. of Computer and Electrical Engineering

Date Defended/Approved: April 21st, 2011

Declaration of Partial Copyright Licence

The author, whose copyright is declared on the title page of this work, has granted to Simon Fraser University the right to lend this thesis, project or extended essay to users of the Simon Fraser University Library, and to make partial or single copies only for such users or in response to a request from the library of any other university, or other educational institution, on its own behalf or for one of its users. The author has further granted permission to Simon Fraser University to keep or make a digital copy for use in its circulating collection (currently available to the public at the Institutional Repository link of the SFU Library website) and, without changing the content, to translate the thesis/project or extended essays, if technically possible, to any medium or format for the purpose of preservation of the digital work. The author has further agreed that permission for multiple copying of this work for scholarly purposes may be granted by either the author or the Dean of Graduate Studies. It is understood that copying or publication of this work for financial gain shall not be allowed without the author's written permission. Permission for public performance, or limited permission for private scholarly use, of any multimedia materials forming part of this work, may have been granted by the author. This information may be found on the separately catalogued multimedia material and in the signed Partial Copyright Licence. While licensing SFU to permit the above uses, the author retains copyright in the thesis, project or extended essays, including the right to change the work for subsequent purposes, including editing and publishing the work in whole or in part, and licensing other parties, as the author may desire.
The original Partial Copyright Licence attesting to these terms, and signed by this author, may be found in the original bound copy of this work, retained in the Simon Fraser University Archive. Simon Fraser University Library Burnaby, BC, Canada Last revision: Spring 09

ABSTRACT

This thesis experimentally investigated the development of defects in commercial cameras ranging from high-end DSLRs and moderate point-and-shoot models to cellphone cameras. All tested cameras operating in the terrestrial environment developed hot pixels. In this study, calibration procedures are used to measure defect parameters and collect spatial data. Software tools are built to trace the temporal growth of defects from historical camera images. The imaging processes of demosaicing and JPEG compression are explored for their effects on defects. Statistical methods are developed to analyze the spatial and temporal distributions and identify the defect causal source. The impact of camera design parameters (ISO, sensor and pixel size) on imager defects is investigated. An empirical formula is created from the data to project the defect growth rate as a function of the sensor design parameters. Also, multi-finger photogate pixels are measured over the visible spectrum and the enhancement in sensitivity of these designs is explored.

Keywords: image sensors; hot pixels; defective pixels; demosaicing; fault-tolerance; photogate

ACKNOWLEDGEMENTS

I would like to thank my thesis committee members Glenn, Israel and Marinko for their participation in seeing me through my thesis. I want to extend my gratitude to my supervisor Dr. Glenn Chapman for giving me the opportunity to work on this research project. Your inspiration, patience, and guidance throughout this project are much appreciated. I would also like to thank Dr. Israel and Zahava Koren for sharing your thoughtful insights and advice throughout my study. A special thanks to all my colleagues for your participation and assistance along the way. This graduate experience would not be the same without you. Lastly, I want to thank my parents and friends for your endless support. Your presence has made this journey more enjoyable.

TABLE OF CONTENTS

Approval
Abstract
Acknowledgements
Table of Contents
List of Figures
List of Tables
Glossary

1: Introduction
    History of Image Sensor
    Modern Digital Cameras
    Reliability Issues
    In-field defect analysis
    Defect Growth Algorithm
    Impact of defects on future sensor design
    Multi-Finger Active Pixel Sensor
    Summary

2: Theory and Background on Solid-State Image Sensors
    Theory of Photodetectors
    Photodiodes
    Photogates
    Pixel performance metric
    Charge Coupled Device
    Charge Transfer
    Basic CCD Structures
    CMOS Sensor
    Photodiode Active Pixel Sensor
    Photogate Active Pixel Sensor
    CMOS Sensor Arrays
    CMOS vs. CCD
    Digital cameras
    Sensor and Pixel size
    Color filter array sensors
    Camera operation
    ISO amplification

    2.7 Defects in image sensors
    Material degradation
    In-field defect mechanisms
    Summary

3: Types of defect in digital cameras
    Defect Identification on Digital Cameras
    Stuck defects
    Hot Pixels
    Defect Identification Techniques
    Bright Defects Identification Techniques for DSLRs
    Bright Defects Identification Technique for cellphone cameras
    Defects in demosaic and compressed images
    Demosaicing Algorithm
    Demosaicing algorithms comparison
    Analyzing defects in color images
    Defect on a uniform color background
    Defects on varying color backgrounds
    Summary

4: Characterization of in-field defects
    Basic DSLR defect data
    ISO Amplification
    ISO and hot pixel parameters
    ISO and hot pixel numbers
    Spatial Distribution of faults
    Inter-defect distance distribution
    Inter-defect distance chi-square test
    Nearest neighbour analysis
    Nearest neighbour Monte-Carlo simulation
    Spatial distribution results
    Basic defect data from small sensors
    Defect data from cellphone cameras
    Defect data from Point-and-Shoot cameras
    Temporal Growth
    Defect growth rate on large area sensors
    Defect growth rate on small area sensors
    Calibration temporal growth limitations
    Chapter Summary

5: Temporal Growth of In-field Defects with Trace Algorithm
    Bayes defect trace algorithm
    Interpolation scheme
    Windowing and Correction scheme
    Simulation results
    Experimental results
    Summary

6: The Impact of Pixel and Sensor design on defective pixels
    Impact of sensor design trend on defects on imagers
    Defect count on APS vs. CCD
    Impact of ISO trend on defects
    Defect growth rate vs. sensor area
    Defect growth rate vs. pixel size
    Chapter Summary

7: Multi-Finger Active Pixel Sensor
    Multi-Fingered Photogate APS
    Experimental setup and sensitivity measure
    LED control circuit and calibration
    Photogate sensor performance measures
    Experimental results
    Comparison response for different photogate structures
    Comparison response at various wavelengths
    Chapter Summary

8: Conclusion
    Measure of in-field defects
    Spatial and temporal growth analysis
    Defect trace algorithm
    Fitting of defect growth with sensor design trends
    Experimental measure of Multi-Finger Photogate
    Future Work

References

Appendix A: Specification of tested DSLRs

LIST OF FIGURES

Figure 1-1. CMOS camera-on-chip vs. CCD
Figure 1-2. Production of film vs. digital cameras (data from CITA [6])
Figure 2-1. Absorption of photons in semiconductor
Figure 2-2. Absorption coefficient of silicon crystal at various wavelengths [14]
Figure 2-3. Simple p-n junction
Figure 2-4. Photodiode (a) unbiased, (b) reverse biased
Figure 2-5. Standard Photogate
Figure 2-6. CCD composed with (a) 2, and (b) 3 MOS capacitors
Figure 2-7. Three-Phase clock cycle CCD
Figure 2-8. Buried channel CCD (BCCD)
Figure 2-9. Common CCD structures (a) Frame Transfer, (b) Interline Transfer, (c) Full Frame
Figure Active pixel sensor with Photodiode photodetector
Figure Active pixel sensor with Photogate photodetector
Figure Photogate operation cycle, from signal integration to readout
Figure Active pixel array
Figure Point-and-Shoot digital still camera
Figure DSLR digital still camera
Figure Various sensor sizes
Figure Color filter array sensor
Figure Basic image process operation
Figure 3-1. Pixel response to optical exposure
Figure 3-2. Fully and Partially stuck defects
Figure 3-3. Normalized pixel dark response vs. exposure time of (a) good pixel, (b) partially-stuck, (c) standard hot pixel, (d) partially-stuck hot pixel
Figure 3-4. DSLR noise level at various ISOs [data: [35]]
Figure 3-5. Mesh plot of a defect in a demosaic compressed color image
Figure 3-6. Bilinear interpolation of (a) green, (b) red and blue pixels

Figure 3-7. Kimmel gradient mask
Figure 3-8. Sample images used in experiment
Figure 3-9. Moiré pattern (b) Bilinear, (c) Median, (d) Kimmel
Figure Experiment procedure
Figure Bilinear demosaic image for red defect with I_Offset =
Figure Error mesh plot of red defect at I_Offset = 0.8 with bilinear demosaicing
Figure Median demosaic image for red defect with I_Offset =
Figure Error mesh plot of red defect at I_Offset = 0.8 with median demosaicing
Figure Kimmel demosaic image for red defect with I_Offset =
Figure Error mesh plot of red defect at I_Offset = 0.8 with Kimmel demosaicing
Figure MSE vs. I_Offset of a red defect on non-uniform background (bilinear demosaic)
Figure MSE vs. I_Offset of a red defect on non-uniform background (median demosaic)
Figure MSE vs. I_Offset of a red defect on non-uniform background (Kimmel demosaic)
Figure 4-1. Dark response of a hot pixel at various ISO levels
Figure 4-2. Plot of (a) Dark current, (b) Offset vs. ISO
Figure 4-3. Magnitude distribution of (a) dark current intensity rate, (b) dark offset at various ISO levels
Figure 4-4. Magnitude distribution of (a) dark current, (b) dark offset at various ISO levels from camera B
Figure 4-5. Combined defect offset distribution at (a) 1/30s, (b) 1/2s
Figure 4-6. Spatial pattern (a) clustered, (b) random
Figure 4-7. Defect map of hot pixels identified from camera A at ISO
Figure 4-8. Inter-defect distance measurement
Figure 4-9. Inter-defect distance distribution of (a) APS, (b) CCD sensors at ISO
Figure Defect inter-distance distribution at various ISO levels
Figure Comparison of the theoretical and empirical distribution of nearest neighbor distances in camera M
Figure Empirical distribution of G(d) vs. Ḡ(d) with upper and lower bounds
Figure Defect count vs. sensor age for camera A from dark-frame calibration (at ISO 400)
Figure Average defect count vs. sensor age by sensor type at ISO
Figure 5-1. Concept of defect trace algorithm

Figure 5-2. Ring interpolation
Figure 5-3. Image-wide interpolation errors (a) PDF, (b) CDF
Figure 5-4. A 5x5 pixel interpolation mask weighting factor: (a) regular averaging, (b) ring averaging
Figure 5-5. Image-wide interpolation error derived from regular and ring averaging
Figure 5-6. Sliding window approach to defect identification
Figure 5-7. Post-correction procedure
Figure 5-8. Plot of Prob(Good|y) vs. image in the windowing test
Figure 5-9. Defect growth rate at ISO 400 with calibration and Bayes search identification
Figure Defect growth rate at ISO 800 with calibration and Bayes search identification
Figure Defect growth rate at ISO 1600 with calibration and Bayes search identification
Figure 6-1. Mega Pixel design trends in digital cameras 2001 to
Figure 6-2. Impact of dark current on large and small pixels
Figure 6-3. Defect rate per sensor area vs. pixel size (ISO 400)
Figure 6-4. Semi-log plot of defect rate per sensor area vs. pixel size
Figure 6-5. Logarithmic plot of defect rate per sensor area of all tested imagers
Figure 6-6. Logarithmic plot of defect rate per sensor area versus pixel size of all tested APS imagers
Figure 6-7. Logarithmic plot of defect rate per sensor area versus pixel size of all tested CCD imagers
Figure 7-1. Single silicon absorption coefficient vs. photon energy (data from Ref. [14])
Figure 7-2. Standard photogate photodetector
Figure 7-3. Multi-finger photogate photodetector
Figure 7-4. Standard and multi-finger photogate APS design and expected potential well [13]
Figure 7-5. Experimental setup
Figure 7-6. Relative intensity vs. wavelength
Figure 7-7. Voltage-current converter
Figure 7-8. Input voltage vs. illumination intensity
Figure 7-9. Pixel output vs. input light intensity
Figure Comparison of sensitivity curves of standard and multi-finger photogate pixels

Figure Sensitivity ratio relative to standard photogate vs. photogate area (red light)

LIST OF TABLES

Table 2-1. Average sensor size used in various digital cameras
Table 2-2. Comparison of die cost on a 300mm wafer
Table 3-1. Characteristics of defect types
Table 3-2. Average MSE and PSNR of demosaic images
Table 3-3. Estimated defect size with bilinear demosaicing
Table 3-4. Peak defect cluster value from bilinear demosaicing
Table 3-5. Estimated defect size with median demosaicing
Table 3-6. Peak defect cluster value from median demosaicing
Table 3-7. Estimated defect size with Kimmel demosaicing
Table 3-8. Peak defect cluster value from Kimmel demosaicing
Table 3-9. Comparison of defect in varying color region with bilinear demosaicing
Table Comparison of defect in varying color region with median demosaicing
Table Comparison of defect in varying color region with Kimmel demosaicing
Table 4-1. Summary of defects identified in DSLRs at ISO
Table 4-2. Cumulative total of hot pixels identified at various ISO levels
Table 4-3. Magnitude of dark current and offset measured for defect in Figure
Table 4-4. Statistics summary of spatial defect distributions from APS and CCD sensors
Table 4-5. Statistics summary of spatial defect distributions at various ISO settings
Table 4-6. Theoretical vs. actual inter-defect distance distribution (in percentage)
Table 4-7. Comparison of Ĝ(d) and G(d) from each test camera
Table 4-8. Accumulated defect count from 10 cellphone cameras (ISO 400)
Table 4-9. Accumulated defect count from Point-and-Shoot at various ISO levels
Table Measured defect rate from calibration result for all tested mid-size DSLRs

Table Measured defect rate from calibration result for all tested full-frame DSLRs
Table Measured defect rates from cellphone cameras at ISO
Table Measured defect rates for Point-and-Shoot at various ISO levels
Table 5-1. Compared interpolation error from various interpolation schemes
Table 5-2. Performance of Bayes detection at fixed dark current (Intp: 3x3)
Table 5-3. Performance of Bayes detection at fixed dark current (Intp: 5x5 ring)
Table 5-4. Performance of Bayes detection at fixed dark current (Intp: 7x7 ring)
Table 5-5. Performance of Bayes detection at fixed exposure (Intp: 3x3)
Table 5-6. Performance of Bayes detection at fixed exposure (Intp: 5x5 ring)
Table 5-7. Performance of Bayes detection at fixed exposure (Intp: 7x7 ring)
Table 5-8. Performance of Bayes detection using various interpolation schemes
Table 5-9. Specification of test cameras
Table Manual calibration and Bayes detection growth rate comparison at ISO
Table Manual calibration and Bayes detection growth rate comparison at ISO
Table Manual calibration and Bayes detection growth rate comparison at ISO
Table 6-1. Average sensor and pixel sizes from tested cameras
Table 6-2. Average defect rate for various sizes of sensors
Table 6-3. Comparison of APS DSLR defect rates at various ISOs scaled with sensor area
Table 6-4. Average defect rate per sensor area for all camera types at various ISOs
Table 6-5. Comparison of defect rate per sensor area between CCDs in PS and DSLRs
Table 6-6. Linear regression fit statistics on defects/year/mm² vs. pixel size
Table 6-7. Linear regression fit statistics on defect rate/mm² vs. pixel size
Table 6-8. Estimated defect rate/mm² at various pixel sizes with the fitted power function
Table 7-1. Multi-finger photogate APS poly-finger spacing [13]
Table 7-2. LED colors and dominant wavelengths
Table 7-3. Sensitivity results from standard and multi-fingered photogates
Table 7-4. Sensitivity ratio for multi-finger photogates relative to standard photogate
Table 7-5. Sensitivity change for multi-finger photogates relative to standard photogate

Table 7-6. Sensitivity of open area in multi-fingered photogates
Table 7-7. Collection efficiency of open area in multi-fingered photogates
Table 7-8. Relative sensitivity between Red, Yellow, Green and Blue illumination
Table 7-9. Ideal responsivity ratio approximation (η = constant)

GLOSSARY

4NN: 4 Nearest Neighbours
APS: Active Pixel Sensor
A/D: Analog-to-Digital
BCCD: Buried Channel CCD
CCD: Charge-Coupled Device
CDS: Correlated Double Sampling
CFA: Color Filter Array
CMOS: Complementary Metal Oxide Semiconductor
DSC: Digital Still Camera
DSLR: Digital Single Lens Reflex
FFCCD: Full Frame CCD
FTCCD: Frame Transfer CCD
ITCCD: Interline Transfer CCD
LCD: Liquid Crystal Display
MSE: Mean Square Error
PS: Point-and-Shoot
PSNR: Peak Signal to Noise Ratio
QE: Quantum Efficiency
SNR: Signal to Noise Ratio

1: INTRODUCTION

Photography began as early as 500 BC with the creation of the pin-hole camera concept (camera obscura) [1]. However, the true invention of a camera that records images did not occur until the 19th century. In early photography, reflected light from an object or scene was projected onto a light-sensitive material, known as film, for a period of time, creating a reaction that makes a copy of the scene. Film cameras dominated the camera market for over 100 years. However, the film process involves many steps, from capturing the image, to chemically developing the film, and finally printing the photograph. In the 1960s, imaging technology was integrated with modern semiconductor elements as a light-sensing device, also known as the semiconductor image sensor. The digital image sensor has the advantage of integration with other electronic systems such as an LCD display, electronic storage, a microprocessor, etc. These new features, found only in digital camera systems, started a new era of imaging. With their increasing popularity and benefits for a wide range of applications, digital imagers became the mainstream imaging device in the 21st century. While Digital Still Cameras (DSC) have been rapidly replacing traditional film systems, digital image sensors still face challenges in surpassing the image quality of film systems. Researchers are continuously developing advanced imaging algorithms and color sensing elements to improve the image quality from the digital sensor.

Enhancement functions such as face recognition, blink detection, etc., were developed to ease handling of the devices by all photographers. However, one of the main remaining challenges is the reliability of the sensor. A common problem suffered by many electronic devices is the development of faults or failures due to material-related degradation or radiation damage. Typical image sensors (~23-864 mm²) are much larger than common electronic chips (~5x5 mm). Thus the likelihood of defects being found in image sensors is greater than in regular devices. Defects on sensors are permanent damage that alters the characteristics of normal pixel operation. Such damage will impact the quality of images captured by the sensor and limit the lifetime of the device. With the sensor subject to degradation, and operational lifetimes growing, the problem of defects in imaging sensors needs to be addressed. The four main focuses of this thesis are: the exploration of the source and characteristics of pixel defects; the impact of imager defects on regular photos; how defect growth is affected by sensor design trends; and an exploration of photogate designs to improve the sensitivity of the photodetector over the visible spectrum. This study will involve the development of calibration techniques and a defect trace algorithm to identify defects and the growth rate of these faults in a set of commercial imagers. Traditional yield analysis will be adapted to identify the defect source mechanism in commercial digital cameras. Three classes of commercial cameras will be involved in this study: Digital Single Lens Reflex systems (DSLRs), Point-and-Shoot (PS) and cellphone cameras. Each of these cameras will provide data to measure the impact of defects on

various sensors. A detailed analysis of defects collected from the different classes of cameras, and a study of an image algorithm (i.e. demosaicing), will be presented to pinpoint the impact of defects on sensor design and image quality.

1.1 History of Image Sensor

The two types of sensor technologies are the Charge-Coupled Device (CCD) and CMOS, both invented in the 1960s. However, early work on CMOS sensors showed they suffered from fixed pattern noise. Due to the underdevelopment of the CMOS process line, this technology was not a favourable choice. In comparison, when Bell Labs announced the invention of the CCD by Willard Boyle and George Smith in 1969 [2], this technology was embraced by the imaging industry for its freedom from fixed pattern noise and small pixel size. The CCD became the main focus of research for over 20 years as the technology continued to improve. By the early 1980s, with the development of advanced fabrication and lithography, CMOS technology improved drastically and became the dominant process line for most logic devices and processors. This resulted in the CMOS sensor being revived as an imaging device. In the early 1990s the CMOS Active Pixel Sensor (APS) was first introduced by Fossum, Mendis and Kemeny at JPL [3]. With the significant effort made in exploring the CMOS APS, the quality and noise level improved drastically to the point where it was comparable to CCDs. The CMOS APS became a rival to the CCD as it brought low-power and reduced-cost imaging systems. More importantly, being a CMOS-based technology, it allows integration with other sub-systems (i.e. timing control, Analog-to-Digital (A/D),

system controller, Digital Signal Processor (DSP)), creating a highly integrated camera system [4], as shown in Figure 1-1. Since the analog process line of the CCD is optimized for imaging performance, implementing additional functions requires redevelopment, thus prolonging the design time and increasing production cost. The CMOS APS permits digital integration, but the increase in noise level will degrade the image quality and increase the design complexity. Thus, with this trade-off in imaging performance, the CMOS APS only recently has been implemented in camera-on-chip designs for areas where cost is important, such as cellphones.

Figure 1-1. CMOS camera-on-chip vs. CCD.

1.2 Modern Digital Cameras

The concept of the digital camera was introduced in 1973 by Texas Instruments Incorporated [5]. The first application of the digital image sensor was in video cameras; however it did not achieve great success due to the high pricing of the product. The birth of the first digital camera was marked by the Fuji DS-1P in 1988, which used a 400K-pixel CCD sensor and saved images to SRAM

memory cards. However, this camera was not available on the commercial market. The first commercially available digital camera appeared in 1991: the Kodak DCS-1. It consisted of a 1.3MP CCD sensor fitted into a Nikon SLR camera body, where images were stored on an external 200MB hard disk. The DCS-1 was targeted at newspaper photography and successfully reduced the time from taking the image to transmitting it for publishing. More portable commercial digital cameras began to appear in 1994, with the Apple QuickTake 100, later followed by the Casio QV-10, which was the first digital camera with a built-in LCD display. By 1997, digital camera resolution had increased to multiple Mega Pixels (MP), and in 2002 mobile phones were being equipped with digital image sensors. From the statistics reported by the Canadian Imaging Trade Association (CITA) [6], shown in Figure 1-2, in 1999 only 0.05 million digital cameras were sold, as compared to 33 million film cameras. However, with the continuous advancements in digital imagers, by 2002 the sales of film cameras had declined to 23 million units whereas digital cameras had increased to 24 million units. The sales of digital cameras continue to increase; the most recent report showed 119 million units were sold in 2008, with almost no film cameras.

Figure 1-2. Production of film vs. digital cameras (data from CITA [6]).

Digital cameras have been dominating the photography industry due to attractive features and functions which film cameras cannot offer. A typical digital camera found on the commercial market consists of several components, including an image sensor, A/D converter, microprocessor, LCD display and removable storage device. In a traditional film camera, the film functions both as the light-sensing element and storage; however, in a DSC the role of film is replaced by the use of an image sensor and a removable storage device. As shown in Figure 1-1, the analog light signal at the image sensor is transformed into a digital signal via an A/D converter and can be stored on a removable storage device and displayed on an LCD. Removable storage such as a microdrive, compact flash, or SD card is a form of flash memory and can be reused. However, in film cameras, each roll of film needs to be replaced after a single use. Hence, the cost of owning a digital camera is far less than that of film cameras. In addition, images stored in digital form can be accessed by other

electronic devices. In film cameras, images are only available after developing and printing, but in digital cameras the LCD display provides immediate image playback, which is an important advantage for checking whether the desired picture has been captured. More importantly, in compact camera models (i.e. Point-and-Shoot), the LCD also functions as an image viewfinder. The microprocessor in digital cameras provides enhanced features that are not feasible in film cameras. For example, the sensitivity of a film, also known as ISO (an International Standards Organization defined value), can only be adjusted by changing the roll of film. However, in digital cameras this is achieved by changing the amplification at the sensor output. Thus the ISO can be altered easily between each capture. The development of the image sensor has impacted a wide range of applications. With the integration of digital image sensors into many electronic devices such as cellphone cameras, it has made a remarkable change to our daily life. The embedded camera in cellphones provides an alternative device for capturing images or videos. In addition, video conference calls are not limited to our PC workstation but can be made on the go with mobile phones. In the medical community, digital imagers have begun to replace traditional film-based radiography [7]. Images stored in digital form reduce the chance of misplacement and permit sharing and transmitting of pictures over a computer network. In addition, digital images can be processed by software functions to enhance relevant information for diagnosis or to correct improper camera settings. Unlike traditional film production, digital recording is free from

damage caused by faulty camera and projector mechanics, and from storage degradation such as dirt trapped in the film. In addition, cinematographers can monitor the filming process and correct any displacement on the set immediately. Surveillance and security systems, such as vehicle tracking [8] and traffic measures [9], have also gained benefits from the invention of the digital image sensor. The digital data acquired from remote sensors can be transmitted through a wireless network [10], which reduces the storage required at each sensor site and allows remote monitoring. Digital cameras in many ways surpass film cameras with their user-friendly features, transferability, and low cost, but problems such as the reliability of the sensors are still a major concern. In film cameras, a defective film will be replaced with the next roll, but in digital cameras the replacement of a defective sensor can be expensive and is often not feasible for highly integrated camera systems.

1.3 Reliability Issues

Excluding mechanical failures, the lifetime of a digital camera is limited by the reliability of the sensor. Unlike film cameras, where the cost of replacing the film is low, in digital systems image sensors are themselves expensive and are interconnected with other camera subsystems. Thus the cost of replacing the sensor is high and often not feasible. When a sensor develops faulty pixels, all subsequent images captured will be affected. In many applications, such as embedded image sensors in the space shuttle, or remote sensing where access to the sensor is limited, the reliability of the imagers is very

important. In the commercial digital camera market, the replacement of low-end commercial cameras such as Point-and-Shoot (PS) is common due to their modest cost and the constant appearance of new camera features. However, with higher-end cameras such as DSLRs, the cost of these cameras is significant (~$1000 or more). Thus the replacement rate of these cameras is less frequent, typically several years. Nonetheless, as the performance of digital cameras matures, with resolution and image processing functions becoming nearly the same for many cameras, the next main concern for consumers will be the lifetime of the digital camera.

1.3.1 In-field defect analysis

A common problem suffered by all microelectronic devices is the development of defects over the lifetime of the device. A defective pixel will simply fail to sense light properly. Faults on image sensors develop during manufacturing or during operation in the field. These defects are permanent damage to the sensor and will affect all captured images. Manufacturing-time defects are corrected via factory mapping, where factory testing identifies faults and hides the defects, but this is not available for in-field defects. Most of the current literature on faults in digital image sensors focuses on the change in optical characteristics in high-radiation environments such as outer space [11]. However, the reports of defects observed in regular digital cameras, discussed in photography forums, have rarely been addressed. Whether the cause of these in-field defects is degradation from the fabrication process or in-field factors such as radiation, these damages are not limited to

space applications but are also important in the terrestrial environment. Very little of the literature discusses the defect development rate during in-field operation, which is a central point of this thesis. In chapter 3, defect identification techniques for DSLR, Point-and-Shoot, and cellphone cameras will be discussed, where we aim to extract information such as the type and quantity of defects, a spatial map of defect locations, and measurements of the defect parameters. In addition, we will study the prevalence of defects as a function of the camera settings, which will provide insight into the effects of defects on regular photography. In chapter 4, we will present detailed defect data collected from various commercial cameras. Then, extensive statistical analysis derived from standard manufacturing yield analysis will be applied in an attempt to characterize the defect source mechanism.

Defect Growth Algorithm

In any fault-tolerance study, the failure rate measures the frequency with which new faults develop, and hence the reliability of the device. Defects caused by different source mechanisms will exhibit different failure rates. Thus, by tracking the temporal growth of defects, we can better judge the causal mechanism behind the defects in commercial digital cameras. A defect on a sensor will appear in all subsequent images; thus the development of an in-field fault can be found by tracing through the image dataset to find when the defect first appeared. The date on which the image was captured (recorded in the image meta-data) serves as an approximation of the defect development date. The procedure of tracing defects can be done by visual

inspection; however, with some image datasets as large as 10 GB, this procedure is cumbersome. In chapter 5, we propose a recursive algorithm which utilizes statistics gathered from each image to automatically determine the presence or absence of defects in a specific image. The algorithm has been implemented, and the accuracy of the detection has been verified through simulations and tested with existing image datasets from real cameras.

1.4 Impact of defects on future sensor design

Four main trends are found in new sensor designs. First, the CMOS APS is gaining popularity as a large-area device, as most commercial DSLRs are now equipped with APS sensors. Second, improvements in sensor technology have reduced the noise on imagers, which permits an expansion of the ISO range. Third, sensor sizes are diverging: the high demand for cellphone cameras is driving companies toward smaller sensors, while the need for better image quality in high-end DSLRs has led to more production of large-area (i.e. full-frame) sensors. Lastly, increases in imager resolution are achieved by placing more pixels on the sensor: while the sensor size remains nearly the same, the pixel dimensions are reduced to attain a higher pixel count. In chapter 6, we will explore the possible impact of these sensor trends on defects using defect data collected from different types of cameras: cellphone, PS and DSLR. An extensive analysis of the defect rate based on various design parameters will provide an estimate of sensor quality for future imager designs.
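The defect-tracing idea described above can be sketched in a few lines. Since an in-field defect is permanent, it appears in every image from its onset date onward, so scanning the chronologically sorted dataset for the first image that shows the defect approximates the development date. The code below is an illustrative sketch with invented names; the algorithm actually proposed in chapter 5 is recursive and statistical, not a simple linear scan:

```python
# Approximate the onset date of a permanent sensor defect by finding the
# first image (in capture-date order) in which the defect is visible.
# Illustrative sketch only; 'has_defect' stands in for a real detector.

def first_defect_date(images, has_defect):
    """images: (capture_date, pixel_value) pairs sorted by capture date.
    Returns the date of the earliest image showing the defect, or None."""
    for date, value in images:
        if has_defect(value):
            return date
    return None

# Toy dataset: a pixel that became stuck at 255 sometime before 2009-06.
dataset = [("2009-01", 12), ("2009-06", 255), ("2009-12", 255)]
print(first_defect_date(dataset, lambda v: v == 255))  # 2009-06
```

Because the defect persists once it appears, the dated images form a monotone sequence (defect absent, then present), so a binary search over the sorted dataset would locate the onset with far fewer image inspections than a linear scan.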

1.5 Multi-Finger Active Pixel Sensor

The two main types of photodetectors used in APS imagers are the photodiode and the photogate. The light-sensing area of such pixels consists of either a photodiode, comprised of a PN junction, or a photogate, which is simply a version of a MOS capacitor. Although the structures of the two photodetectors are different, the basic operation is the same: when the photodetector encounters light, the optical signal is translated into an electronic signal through the separation and collection of electron-hole pairs. One main drawback of the photogate is its non-uniform sensitivity over the visible spectrum. To address the issue of absorption in the photogate, a multi-finger photogate structure has been proposed [12], [13]. Chapter 7 will extend the experimental study from [13] by measuring the sensitivity of both the standard and multi-finger photogates over the visible spectrum. The sensitivity measured from the multi-finger photogate provides a means to estimate the amount of absorption, which has not been studied before. More importantly, this extensive analysis can provide insights into the performance of the multi-finger photogate and into other possible drawbacks of the standard photogate design.

1.6 Summary

Faults generated on image sensors have been studied in high-radiation environments; however, little work has addressed the reports of defects developed in the terrestrial environment. The accumulation of defects will continue to degrade image quality, and the change in the optical characteristics of the sensor will limit its usability. More importantly, as imaging technology matures,

the lifetime of the cameras will become an important concern to consumers. The first half of the thesis (chapters 2 and 3) will focus on addressing the issues of defects found in commercial digital cameras operating in the field. In chapter 4, a detailed study of the defect spatial distribution, the development rate, and the impact of camera settings on faults will offer insight into the defect source mechanism. In chapter 5, the failure rate of individual sensors will be examined in depth by analyzing historical image datasets with our proposed defect trace algorithm. In chapter 6, the collected defect data will be categorized by sensor type, sensor area and pixel area. The comparison of defect data by sensor design will provide estimates of the impact of defects in future sensors; more importantly, it will serve as a measure of the limitations of current sensor design trends. Chapter 7 of this thesis will extend the work done by Michelle L. Haye, analyzing the spectral response of the standard and multi-finger photogates implemented previously.

2: THEORY AND BACKGROUND ON SOLID-STATE IMAGE SENSORS

Before going into further discussion of defects in digital imagers, we first review some basic operations of image sensors. In this chapter we first cover the theory of light detection in semiconductors and the basic structure of the two main photodetectors: photodiodes and photogates. Then we discuss the principal architectures of CCD and CMOS APS pixels and some metrics used to evaluate the performance of these sensors. The optical signal is converted into a voltage/current by the solid-state image sensor, and internal image-processing functions such as noise reduction, color interpolation, and white balancing are applied to enhance the image quality. To better understand the impact of defects on the pixel output, we will provide background on the design and operation of commercial digital cameras. Finally, in the last part of this chapter we will provide background on the mechanisms behind the two defect sources: material degradation and external random processes.

2.1 Theory of Photodetectors

Photoconversion is the process by which the energy of incident light is converted into an electric signal. A basic property of a semiconductor is that there are discrete energy bands which electrons may occupy. The highest energy band occupied by electrons at absolute zero temperature is called the valence band. At the same temperature, the conduction band, which electrons must

occupy for current to flow, is empty. The energy separation between the upper edge of the valence band and the bottom of the conduction band is known as the bandgap energy E_g, shown in Figure 2-1. Photons travelling through the semiconductor with energy exceeding E_g will excite an electron from the valence band into the conduction band, leaving a mobile hole behind. Thus, as shown in Figure 2-1, each absorbed photon creates an electron-hole pair. On the other hand, to any photon with insufficient energy the semiconductor appears transparent.

Figure 2-1. Absorption of photons in a semiconductor.

The energy of a photon depends on its wavelength, calculated as

E_photon = h·c / λ, (2-1)

where h is Planck's constant, c is the speed of light and λ is the photon wavelength. Each semiconductor material has a cut-off wavelength, denoted λ_c, calculated from Equation (2-1) for a photon energy equal to the bandgap energy (i.e. E_photon = E_g). The cut-off wavelength simply shows that any photon with a wavelength longer than λ_c will not be absorbed.
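Equation (2-1) is easy to check numerically. The short sketch below (constant and function names are ours, not from the thesis) evaluates the photon energy in eV and the cut-off wavelength; with the silicon bandgap of 1.1 eV it gives λ_c ≈ 1127 nm, consistent with the ~1124 nm figure quoted below:

```python
# Photon energy (Eq. 2-1) and cut-off wavelength. The constant below is
# h*c expressed in eV·nm; the silicon bandgap of 1.1 eV is from the text.

HC_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV·nm

def photon_energy_ev(wavelength_nm):
    """E_photon = h*c / lambda, in eV, for a wavelength in nm."""
    return HC_EV_NM / wavelength_nm

def cutoff_wavelength_nm(bandgap_ev):
    """lambda_c = h*c / E_g: longer wavelengths are not absorbed."""
    return HC_EV_NM / bandgap_ev

print(photon_energy_ev(550))      # green light: ~2.25 eV
print(cutoff_wavelength_nm(1.1))  # silicon: ~1127 nm
```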

Silicon has a bandgap energy of 1.1 eV; hence it is able to detect photons in the visible spectrum (~400-700 nm) and the near Infra-Red (IR), but most photons deeper in the IR range (>1124 nm) will not be absorbed. As light penetrates the semiconductor, optical power is lost through the interaction between photons and electrons. The intensity of photons passing through the semiconductor decays exponentially,

I(x) = I_o exp(-αx), (2-2)

where x is the distance below the surface and α is the absorption coefficient (in cm^-1). As shown in Figure 2-2, the absorption coefficient α is wavelength dependent.

Figure 2-2. Absorption coefficient (in cm^-1) of silicon crystal versus photon energy (in eV).[14]

Photons with high energies have a larger absorption coefficient and are absorbed at shallower depths than photons with lower energies. A low absorption coefficient implies that photons penetrate deeply into the semiconductor before being fully absorbed. For example, in a silicon crystal, the α of blue light (2.61 eV) is ~5E+4 cm^-1 while that of red light (1.91 eV) is ~5E+3 cm^-1. The depth

1/α is the distance at which the photon intensity drops by a factor of 1/e. Given equal initial intensities of red and blue photons in a silicon crystal, the red photons would need to travel 10x farther than the blue photons to be reduced by the same 1/e factor. In addition, for silicon-based optical devices, photons with wavelengths much shorter than visible (i.e. in the UV range) are absorbed by the oxide layers and penetrate little into the substrate. Thus all the carriers are generated near the surface, where behaviour is dominated by surface traps. The detectable range of silicon-based semiconductors extends from ~1 µm down to wavelengths short enough that surface and cover-glass optical absorption becomes dominant (typically 350 nm).

Photogenerated carriers can provide a measure of the light intensity. However, without an electric field, these electron-hole pairs recombine after a short time. The main merit of a photodetector is to collect photocarriers, and the efficiency of the detector is determined by its ability to prevent the free carriers from recombining. The means of creating that separation is shown by the two main types of photodetectors used in CMOS sensors: photodiodes and photogates. In the following sections we will discuss each of these in detail.

Photodiodes

A photodiode utilizes a P-N junction to collect photocarriers. A P-N junction is composed of p- and n-type semiconductor layers in contact with each other. As shown in Figure 2-3(b), joining the two different semiconductor types forms a junction at the interface known as the depletion region. Under zero bias, the depleted region is formed by the diffusion of mobile holes in the p-region and electrons in the n-region, thus leaving the positively charged donors in the n-region

and the negatively charged acceptors in the p-region. The separation of charges at the junction interface forms an internal electric field which prevents further recombination of mobile carriers. The internal electric field has a built-in potential V_bi. When an external voltage is applied, the internal potential changes and causes the mobile charges to move, resulting in a net current flow as shown in Figure 2-3(a). When a forward bias is applied, Figure 2-3(c), the internal potential decreases, so more mobile charges are able to diffuse across the junction, resulting in a net forward current flow. When a reverse bias is applied, as shown in Figure 2-3(d), both the holes in the p-region and the electrons in the n-region are pulled away from the junction, so the width of the depleted region expands. The maximum reverse bias at which a p-n junction can operate is marked by the breakdown voltage, V_br.

Figure 2-3. Simple p-n junction: (a) I-V curve, (b) zero bias, (c) forward bias, (d) reverse bias.

A typical silicon-based photodiode, as shown in Figure 2-4(a), consists of an N-type material in the substrate, a layer of P-type material above the N region forming the active surface, and a thin layer of insulating material above the P-type region. As noted, the absorption of photons is wavelength dependent: photons with short wavelengths are absorbed near the surface in the p-region, while long wavelengths tend to penetrate deep into the n-region before being fully absorbed. Hence, during signal integration, shown in Figure 2-4(b), an external reverse bias is applied to extend the depletion region so that the absorption of photons at various depths is accommodated.

Figure 2-4. Photodiode (a) unbiased, (b) reverse biased.

When the photodiode is exposed to a light source, as shown in Figure 2-4(b), photons with sufficient energy stimulate electron-hole pairs throughout the material. The electron-hole pairs created in the depletion region give rise to the drift current, I_Drift, and the carriers created outside the junction move via the diffusion current, I_Diffuse. The drift and diffusion of carriers generate a net photocurrent,

I_ph = I_Diffuse + I_Drift. (2-3)

Measurement of this photocurrent depends on the number of electron-hole pairs generated and on the time it takes for the carriers to drift across the junction. The response time is limited by the width of the junction, as all carriers need to travel through this layer. Hence, the photocarriers generated within the junction have the fastest response time (i.e. I_Drift). Photocarriers generated outside the depletion region need to diffuse into the junction, which results in a slow response time. To optimize the response time, the p-layer must be kept shallow and the reverse bias voltage should extend the junction so that the absorption lengths of the desired wavelengths fall within the depletion region. Although the width of the depletion region can be extended with a reverse bias, the creation of a dark current is a major drawback of this operation. Dark current is simply a thermally generated leakage current under the applied reverse bias voltage; the current is usually small, from pA to µA. However, dark current is a function of the junction width and of temperature; thus, if a wide depletion layer operates at a high temperature, the result can be a significant amount of dark current.

Photogates

Another type of photodetector commonly found in CMOS sensors is the photogate. Photogates use MOS capacitor technology to capture incident illumination within a potential well. As shown in Figure 2-5, the basic structure of the photogate consists of a MOS capacitor with a thin layer of poly-silicon, acting as a gate, that sits on top of a transparent insulator layer. The photogate converts the optical signal into stored charge.

Figure 2-5. Standard photogate.

As shown in Figure 2-5, with a p-type substrate, when a positive gate voltage is applied, holes are pushed away from the positive gate, forming a potential well (depletion region) of ionized acceptors. The depth of the depleted region depends on the gate voltage (V_G), which affects the capacity of the photogate. During the signal integration time, the photons must penetrate through the silicon gate into the substrate, where electron-hole pairs are formed in the potential well. The electric field in the depleted region pushes electrons to the surface, while the holes move into and are absorbed by the substrate. The amount of charge collected depends on the integration time; however, thermally generated carriers limit the integration cycle. The optically generated carriers are stored as charge in the photogate; thus a read-out circuit is needed to generate a voltage or current signal from the stored charge. Notice that all incident photons must pass through the gate layer; thus, optical absorption in the gate layer is a major limitation on the efficiency of this photodetector. Because the optical signal is measured as accumulated charge, weaker signals can be sensed; thus the photogate has a higher sensitivity than the photodiode. A simple photodiode measures the optical signal as an instantaneous current/voltage, which makes it dependent on a strong

light signal. However, the photodiode's fast response time makes it a suitable choice for high-speed applications.

Pixel performance metrics

There are several standard metrics used to evaluate the performance of photodetectors and imaging pixels. In this section we define those metrics that will be used in this thesis. The performance of a photodetector is measured by its Quantum Efficiency (QE) and its responsivity. The absorption of incident photons depends on the penetration depth and on the reflectivity of the semiconductor surface. More importantly, the generated electron-hole pairs can be lost through recombination and trapping. Thus the efficiency of the conversion process is expressed as

QE = η = (# generated and collected electron-hole pairs) / (# incident photons). (2-4)

The numerator is the number of absorbed photons that generate electron-hole pairs which are then collected, and the denominator is the number of incident photons; this ratio is always less than unity. Since photocollection is wavelength dependent, we can express QE as a function of wavelength,

η = (I_ph / e) / (P_o / hν), (2-5)

where I_ph is the photogenerated current and P_o is the incident light power. For photodetectors which measure their output as a current, we can express the efficiency in terms of the output current. This is also known as the responsivity,

R = I_ph(A) / P_o(W) = η·eλ/(hc). (2-6)

A good photodetector should have a QE of ~90-95% over the visible spectrum. Each pixel consists of a photodetector connected to output circuitry. The actual photosensitive area is usually a fraction of the pixel area; the fill factor measures the fraction of photosensitive area relative to the full pixel area. Hence, pixels with a small fill factor have less surface exposed to light and will collect fewer photocarriers. Recently, the microlens was introduced to resolve this problem. A microlens is a transparent lens positioned above each pixel which helps to direct light from all of the pixel area onto the photosensitive region. The sensitivity of a pixel measures the rate at which the pixel responds to the incident light power. The dynamic range measures the range between the maximum output level and the minimum noise signal; thus a high noise level will significantly reduce the dynamic range of the pixel.

2.2 Charge Coupled Device

The Charge-Coupled Device (CCD) was invented in 1969 at Bell Labs. At first, the CCD was created as a digital memory to compete with Magnetic Bubble Memory (MBM)[15] as a mass-storage device. However, as the cost of hard disks fell and with the development of flash memory, neither the CCD nor the MBM became the next generation of mass-storage digital memory. The principal operation of the CCD is like that of a shift register: it is composed of a linear array of MOS capacitors that can store charge, and controlling the gate voltages of the MOS capacitors induces the charge packets to move along the array.
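Returning to the photodetector metrics above, Equations (2-5) and (2-6) can be checked numerically. In the sketch below (our own constant and function names, not from the thesis), the factor eλ/(hc) reduces to λ_nm/1239.84 when the wavelength is given in nanometres, yielding the responsivity directly in A/W:

```python
# Responsivity from quantum efficiency (Eq. 2-6): R = eta * e * lambda / (h*c).
# With the wavelength in nm, e*lambda/(h*c) reduces to lambda_nm / 1239.84.

HC_EV_NM = 1239.84  # h*c in eV·nm

def responsivity_a_per_w(qe, wavelength_nm):
    """Photocurrent per watt of incident light, in A/W."""
    return qe * wavelength_nm / HC_EV_NM

# A detector with QE = 0.9 at 550 nm delivers roughly 0.4 A per watt:
print(responsivity_a_per_w(0.9, 550))  # ~0.40 A/W
```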

The optical response of the CCD, even under low-light conditions, resulted in its taking off as a major imaging device in many large-scale light-sensing applications. In the following two sections, we discuss the basic operation of charge transfer and several commonly used transfer methodologies employed in industry.

Charge Transfer

One of the key operations of the CCD is the integration of photo-electrons and the transfer of the collected charge packets. By keeping the capacitors closely spaced, the interaction between the depleted regions allows charge to shift into the adjacent well. Typical 2- and 3-clock-phase CCDs are shown in Figure 2-6. Note that in a 2-clock-phase design each pixel consists of 2 MOS capacitors, and in a 3-clock-phase CCD of 3.

Figure 2-6. CCD composed of (a) 2, and (b) 3 MOS capacitors.

At each clock phase, the gate voltages are adjusted to shift the charge packet into the adjacent MOS capacitor; thus, a sensor whose pixels are composed of 3 MOS capacitors must operate on a 3-clock-phase cycle.

Figure 2-7. Three-phase clock cycle CCD.

The operation of the 3-clock-phase CCD is as follows: at each stage, one gate region acts as the storage well and the two adjacent gates act as barriers. As shown in Figure 2-7, during the signal integration phase, VG(1) is pulsed high to create a potential well in the substrate and collect the photogenerated carriers. Then, in the first cycle of the transfer phase, VG(2) is pulsed high while VG(1) decreases slowly; the charge stored in the first well thus flows toward the adjacent well because it has the lower potential. In the second clock cycle, VG(3) is pulsed high while VG(2) and VG(1) are held low, creating a barrier to the adjacent MOS capacitors; the charge packet under gate 2 is thereby shifted into the well under gate 3. In the last clock cycle, VG(1) is pulsed high while VG(2) and VG(3) are held low, and the charge packet is shifted into the next pixel. By repeating the 3 clock cycles, the collected charge packets move sequentially across each row to the output node for readout. This operation is often called a bit bucket.
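The three-phase sequence above can be mimicked with a toy array model: three wells per pixel, and one full three-step clock cycle moves every packet one pixel toward the output. This is a behavioural sketch of the bucket-shifting idea only, not a device simulation, and all names are ours:

```python
# Toy model of three-phase CCD clocking. Each list element is one well
# (one gate); a packet shifted out of the last well goes to the output node.

def clock_one_phase(wells):
    """Shift every charge packet one well toward the output (right end)."""
    out = wells[-1]
    return [0] + wells[:-1], out

def clock_one_pixel(wells):
    """Three phase steps = one pixel of travel (3 wells per pixel)."""
    collected = 0
    for _ in range(3):
        wells, out = clock_one_phase(wells)
        collected += out
    return wells, collected

# Two pixels (6 wells); 100 electrons integrated under gate 1 of pixel 0.
wells = [100, 0, 0, 0, 0, 0]
wells, _ = clock_one_pixel(wells)
print(wells)  # [0, 0, 0, 100, 0, 0] -- the packet now sits under pixel 1
```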

The output of each pixel on a CCD sensor is highly dependent on the transfer efficiency. During the transfer of the charge packets, several factors such as dark current, transfer speed, and interface traps affect the overall transfer efficiency. Dark current is caused by thermally generated charge which builds up in the potential well and corrupts the signal packet. This charge build-up is due to the high voltage applied at the gate; thus, by operating at a high clock frequency, the dark current can be reduced. However, the clock frequency is governed by the charge transfer speed: if the clock frequency is too high, charge will be lost during the transfer. The main limit on the transfer speed is the interface traps. At each transfer, the charges fill up the empty traps at the surface. Then, at the next transfer, some traps release their charge instantaneously while others are slower. The slowly released charges might not get transferred, resulting in signal loss. This phenomenon is known as interface trap loss. The problem with surface traps can be overcome with the Buried-Channel CCD (BCCD), which consists of an n-type layer above the p-type substrate, as shown in Figure 2-8. When a positive gate voltage is applied, the n-type layer is fully depleted, and the charges are collected at the potential minimum. Because the charge packet is kept away from the Si/SiO2 interface, the overall transfer efficiency is increased by minimizing signal loss due to trapping of charges. With the highly customized process lines used to manufacture CCDs, this sensor is reported to operate with a 99% transfer efficiency[16].
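The importance of per-transfer efficiency compounds over a column: a packet that crosses N wells retains only CTE^N of its original charge. A quick numerical check (the numbers below are illustrative, not measurements from [16]):

```python
# Charge retained after n transfers at a given charge-transfer efficiency (CTE).

def retained_fraction(cte, n_transfers):
    """Fraction of the original packet surviving n_transfers shifts."""
    return cte ** n_transfers

print(retained_fraction(0.99, 100))      # ~0.37: 99% CTE loses most of a
                                         # packet over a 100-well column
print(retained_fraction(0.99999, 1000))  # ~0.99: why long columns need
                                         # CTE extremely close to 1
```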

Figure 2-8. Buried-channel CCD (BCCD).

Basic CCD Structures

CCD sensors come in different structures to accommodate the requirements of various applications. In this section, we present the three common types of CCD architecture: Frame Transfer (FTCCD), Interline Transfer (ITCCD), and Full-Frame (FFCCD). The signal charge packets stored in the pixels are shifted to the output node located at the bottom of each column; thus, the frame speed at which a CCD can operate is limited by the transfer speed.

Figure 2-9. Common CCD structures: (a) Frame Transfer (FTCCD), (b) Interline Transfer (ITCCD), (c) Full Frame (FFCCD).

When frame speed is the key requirement, for example in video cameras, the Frame Transfer CCD is the preferred choice. As shown in Figure 2-9(a), the FTCCD consists of two CCD arrays of the same size combined with a horizontal shift register at the output. The top CCD is used to collect signal charge, and the second is shielded from light to act as an analog memory. During signal integration, charge is collected by the top CCD; the integrated charge is then quickly transferred in parallel onto the bottom CCD. The stored signal charge in the bottom CCD is then transferred into the horizontal shift register one row at a time and read out by the output circuitry. While the signal is being transferred for readout, the top CCD can start the next image integration cycle; thus, the device operation speed is optimized. However, this structure suffers from a smear problem which arises from the simultaneous integration and transfer to storage. Also, the need for two CCD areas increases the production cost.

Alternatively, the Interline Transfer CCD is among the most popular architectures used in commercial digital still cameras. As shown in Figure 2-9(b), the ITCCD is composed of photodiodes arranged in interlaced columns positioned between masked vertical-transfer CCD pixels. The photodiode is used to collect photogenerated charge, while the adjacent CCD acts as an analog frame memory. Stored charge in the CCD pixels is transferred into the horizontal shift register one row at a time and read by the output circuitry. Because the photodiode is not in use during the transfer cycle, the next integration cycle can begin during the transfer. With proper timing, this CCD can operate at high speed and with minimal smear problems. The last and most important CCD design is the Full-Frame CCD. As shown in Figure 2-9(c), the FFCCD has a 100% fill factor because the entire pixel array is photosensitive; thus this is the highest-quality CCD design available on the market. The integrated charge collected by the CCD pixels is transferred in parallel onto the horizontal shift register. In this architecture, there is no dedicated storage; hence the CCD array functions both as the charge collector and as an analog memory. The pixel array is shielded from light with a mechanical shutter during the transfer cycle. With an external mechanical shutter in the camera to control the exposure, integration and charge transfer do not occur simultaneously; hence the smearing problem is eliminated. However, the frame rate at which this structure can operate is limited by the read-out cycle. Thus this structure is mostly used for high-quality imaging and not rapid shooting.

2.3 CMOS Sensor

Like the CCDs, early CMOS sensors were known as passive pixel sensors because the amplification of the instantaneous photogenerated current was performed at the output of each row. In the 1990s, when the CMOS sensor began to revive, it adopted the Active Pixel Sensor (APS) design, in which each pixel integrates the photocarriers locally and has built-in amplification. In a CMOS sensor, both photodiodes and photogates can be used as photodetectors. The operations of the two photodetectors are very similar and will be discussed in the following two sections.

Photodiode Active Pixel Sensor

A simple photodiode active pixel is shown in Figure 2-10. The photodiode acts to collect incident light in the form of integrated charge that creates a voltage/current when connected to the read-out circuit. A typical pixel read-out circuit consists of 3 transistors, ReSet (RST), Source Follower (SF) and Row Select (RS), as labelled in Figure 2-10, which control the pixel operation as described next. The read-out circuit can be set to operate in voltage or current mode.

Figure 2-10. Active pixel sensor with photodiode photodetector: (a) photodiode APS circuit, (b) photodiode APS control signals.

As shown in Figure 2-10(b), there are three stages to the pixel's operation: reset, signal integration, and readout. At the start of each integration cycle, the RST transistor is pulsed high to precharge the capacitance C_x at node V_x during the time T_reset (Figure 2-10(b)). Hence, the reset voltage measured at node V_x is simply V_DD - V_Th. The capacitance at node V_x is composed of the photodiode capacitance and the parasitic capacitances of the RST and SF transistors,

C_x = C_d + C_RST + C_SF. (2-7)

The capacitance of the photodiode, C_d, is typically 10 times larger than that of the transistors; thus C_x is dominated by the photodiode. During the signal integration cycle, RST is turned off and the charge Q_Photo generated from the incident light partially discharges C_x over the integration cycle, for an exposure of duration T_int (Figure 2-10(b)). The voltage measured at node V_x is calculated by

V_x = Q_Photo / C_x. (2-8)
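Equations (2-7) and (2-8) can be combined into a back-of-the-envelope model of the integration stage. The component values below (supply, threshold, node capacitance, photocurrent, exposure time) are illustrative assumptions of ours, not measured values from the thesis:

```python
# Sketch of the photodiode APS integration stage (Eqs. 2-7 and 2-8):
# the node is precharged to Vdd - Vth, then partially discharged by the
# integrated photocharge Q_photo = I_photo * T_int.

def node_voltage(vdd, vth, photocurrent_a, t_int_s, c_x_farad):
    """V_x after integration: reset level minus Q_photo / C_x."""
    q_photo = photocurrent_a * t_int_s  # integrated photocharge, in coulombs
    return (vdd - vth) - q_photo / c_x_farad

# A 10 fF node with a 0.5 pA photocurrent over a 30 ms exposure:
v = node_voltage(vdd=3.3, vth=0.7, photocurrent_a=0.5e-12,
                 t_int_s=30e-3, c_x_farad=10e-15)
print(v)  # 2.6 V reset level discharged by 1.5 V, leaving ~1.1 V
```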

As a first approximation, the capacitance C_d is proportional to the diode area; thus shrinking the pixel reduces both the collected light and C_x at about the same rate. Hence, V_x is approximately independent of pixel size for pixels in the 10-2 µm range. In the readout cycle, the RS transistor is connected to the row address bus and is turned on when the row is selected for read-out. During read-out, the buffered voltage at the SF transistor is placed onto the column bus and stored in the Sample-and-Hold (S/H) circuit located at the bottom of each column. The output from the SF transistor is calculated by

V_out = A·V_x·C_x / (C_x + C_s/h) + V_noise, (2-9)

where A is the voltage gain, usually <1, and C_s/h is the capacitance of the S/H circuit. The photodiode APS has the advantages that its control/readout cycle is very simple and that power is only consumed during the reset and readout stages.

Photogate Active Pixel Sensor

In a photogate APS, the photogate is used to collect the photogenerated carriers. As shown in Figure 2-11, the basic structure of the photogate pixel employs a 4-transistor readout circuit. Unlike the photodiode pixel, the photogate requires two additional control lines: one to control the photogate voltage V_PG and one, T_x, to control the transfer of charge from the potential well to the floating diffusion.

Figure 2-11. Active pixel sensor with photogate photodetector.

There are four stages to the photogate APS operation: signal integration, reset, transfer, and readout. Each of the 4 transistors is responsible for controlling the operation at a different stage, as shown in Figure 2-12.

Figure 2-12. Photogate operation cycle, from signal integration to readout: initial condition, (a) signal integration, (b) reset, (c) transfer, (d) readout.

The pixel operation begins with signal integration: a photogate voltage V_PG is applied to create a potential well where photogenerated carriers can be collected, Figure 2-12(a). Then the pixel RST transistor is turned on to remove

the previous charge stored in the floating diffusion, Figure 2-12(b). In the transfer cycle, Figure 2-12(c), the T_X gate is turned on and V_PG is turned off to shift the collected charge into the floating diffusion and onto the gate of the SF transistor. Finally, during the readout cycle, Figure 2-12(d), the RS transistor is turned on and the corresponding signal voltage is read out by the SF into the S/H circuit. The voltage collected by each pixel can be calculated with

V_signal = Q / (C_FD + C_PG). (2-10)

The advantage of the photogate APS is that the capacitance of the photogate is small, so in principle it should be more sensitive; however, the absorption in the gate reduces this advantage. The photogate APS also consumes power during the integration phase, unlike the photodiode APS, and has a more complicated control cycle.

CMOS Sensor Arrays

The basic structure of a CMOS sensor is shown in Figure 2-13. It is composed of an array of active pixels; hence this sensor is known as a CMOS APS. Because additional transistors are used to perform amplification at each pixel site, the fill factor of CMOS APS pixels (~25-30%) is smaller than that of CCD pixels (~70-90%).

Figure 2-13. Active pixel array.

Each photosite on the array is connected to the row select circuit, which is used to select a row for readout. Unlike CCDs, APSs can be randomly addressed. The row select control allows partial readout of the array, a function known as windowing. Charge packets in CCDs are read in a sequential manner; thus windowing is not feasible. The output of the pixel is connected to the S/H circuit located at the bottom of each column.

Due to variations in the manufacturing process and in the reset value of each pixel, CMOS sensors generally suffer from fixed pattern noise. That is, each pixel has a different reset (unexposed) voltage and a different threshold in the SF transistor, creating a variation in the image characteristics. To reduce this artifact, a Correlated Double Sampling (CDS) circuit is adopted at the bottom of each column in the array. The CDS consists of a mirrored pair of S/H capacitors: one capacitor holds the reset value and the other holds the signal output. The function of the CDS is to subtract the reset value from the output signal; hence, the variation in reset value and threshold voltage is suppressed.
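As a rough illustration of the CDS idea, the sketch below (an idealized Python/NumPy model, not the actual analog circuit) subtracts a stored reset frame from the signal frame, cancelling the per-pixel reset and threshold offsets that produce fixed pattern noise:

```python
import numpy as np

def correlated_double_sampling(reset_frame, signal_frame):
    # CDS: subtract each pixel's own reset sample from its signal
    # sample, so per-pixel reset/threshold offsets (fixed pattern
    # noise) cancel out of the result.
    return signal_frame - reset_frame

# Two pixels with different reset offsets viewing the same light (0.4):
reset  = np.array([0.10, 0.15])   # per-pixel reset (unexposed) values
signal = np.array([0.50, 0.55])   # reset offset + identical exposure
print(correlated_double_sampling(reset, signal))  # both pixels read ~0.4
```

The two pixels disagree before CDS (0.50 vs. 0.55) but agree after it, which is exactly the suppression of reset-value variation described above.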

2.4 CMOS vs. CCD

As the two sensor technologies mature, there is no clear indication that one is more favourable than the other. In fact, the choice of sensor is usually determined by application requirements such as frame speed, image quality, power consumption, and production cost[17]. The CMOS APS design has an advantage in low power consumption because the photodiode APS does not consume power during the integration phase while CCDs do. The CMOS APS also offers possible integration with other CMOS subsystems. Hence, this sensor is widely adopted in embedded applications such as mobile phones and security cameras. The frame rate of CCDs is limited by the pixel transfer speed; thus higher-resolution CCDs trade off against a lower frame rate. The CMOS APS, on the other hand, provides random access, which makes it easier to achieve higher frame rates. Hence, in high-speed applications such as video cameras, the CMOS APS is the preferred choice. APSs benefit from advances in CMOS fabrication for regular circuits, so their production cost has declined. Since for large sensors the production cost of CMOS is lower than that of CCD, in recent years many of the large-area CCD sensors used in commercial DSLRs have been replaced by CMOS APSs. However, the CCD, being a more mature technology, has maintained its market in small sensors (i.e. Point-and-Shoot) where a balance of imaging performance and production cost is needed. One of the main drawbacks of the CMOS APS is that each pixel has a built-in amplifier; thus the fill factor is small, which limits the ability to shrink the pixel size. More importantly, amplifier variation becomes

more significant when operating in low-light conditions. On the other hand, the single amplifier at the output node adopted by CCD sensors provides a higher fill factor and a uniform pixel response. The CCD's greater signal response under low-light conditions makes it suitable for many scientific applications (e.g. the Hubble Space Telescope). In addition, the large fill factor makes it easier for CCDs to achieve smaller pixel sizes; thus most PS cameras on the market use this sensor. Recently, the addition of microlenses on each pixel has allowed both CCDs and APSs to collect the same light for a given pixel size, reducing the fill factor problem for CMOS pixels.

2.5 Digital cameras

The two main types of digital cameras available on the market are Point-and-Shoot (PS) and Digital Single Lens Reflex (DSLR). A typical PS uses a small sensor (e.g. 6.1 x 4.6 mm); however, the pixel count of these cameras is nearly the same as in DSLRs. Hence, the pixels on these sensors are relatively small (1.5-2 µm). The pixel size differentiates the quality of the two types of cameras. As the pixel size decreases, the light sensitivity decreases as well; hence, PS cameras tend to suffer reduced image quality under low-light conditions. Since this class of cameras targets portability, the tradeoff is in imaging performance. In addition, as shown in Figure 2-14, the optics are integrated into the camera and are usually much smaller, giving poorer optical resolution as well.

Figure 2-14. Point-and-Shoot digital still camera: (a) typical camera, (b) cross-section view.

More advanced photographers, who are concerned with better control of the camera parameters (exposure time, ISO, etc.) and require high imaging quality, prefer DSLRs. The DSLR inherits the traditional SLR architecture, as shown in Figure 2-15. The lens system is interchangeable, and a single reflective mirror mechanism is used to project the image onto a viewfinder to see through the lens. To expose an image onto the sensor, the mirror swings upward to a 90° angle, creating a path for the light to reach the sensor. In the early models of DSLRs, the LCD display was mainly used for quick image playback while taking pictures. However, in recent developments, most DSLR models are also equipped with a live view option which allows the photographer to use the LCD display as the viewfinder. Advances in sensor production have allowed lower-cost DSLRs to achieve better quality images than PS cameras, with more features at a similar cost. Hence, DSLRs are growing in popularity.

Figure 2-15. DSLR digital still camera: (a) typical camera, (b) cross-section view.

2.5.1 Sensor and Pixel size

The sensor area of Point-and-Shoot cameras ranges from 28 to 51 mm², as shown in Figure 2-16. When compared to traditional 35 mm film, the small PS sensor has only 3-5% of the sensing area. The sensor area of mid-range DSLRs ranges from 350 to 545 mm², which is ~50% of the sensing area of 35 mm film. The sensor size trends are being driven by two opposite applications: high-quality DSLRs and cellphone cameras. In an effort to match the image quality of film cameras, the high-end DSLRs are moving toward the full-frame sensor (36 x 24 mm). The sensing area of a full-frame sensor is equivalent to 35 mm film. By comparison, the increasing popularity of portable, small cellphone cameras demands the use of small sensors. The sensor employed by cellphone cameras has by far the smallest sensing area, 7.2 mm², a small fraction of the DSLR imagers (see Figure 2-16).

Figure 2-16. Various sensor sizes.

Table 2-1. Average sensor size used in various digital cameras.

  Camera Type        Sensor size (mm)   Pixels (MP)   Pixel size (µm)
  Full-frame DSLR    36.0 x 24.0        —             7.09
  DSLR               21.9 x —           —             5.18
  PS                 6.1 x 4.6          —             1.17
  Cellphone          3.0 x —            —             2.20

The main impacts of the sensor size are the angle of view and the production cost. Given the same optical system, a small sensor will have a much smaller angle of view than a large sensor; hence subjects are cropped out of the image. In terms of production cost, all sensors are cut from the same size of silicon wafer. Thus, production per wafer is maximized when the sensor remains small. A typical DSLR sensor area is ~300 mm², which is 3x the area of a PC processor chip (~100 mm²). Shown in Table 2-2 is a comparison of the dies (various sensors and processor chips) that can be manufactured on a 300 mm wafer. The dies per wafer is calculated with

Dies/wafer = π (d/2)² / Die_area − π d / sqrt(2 · Die_area), (2-11)

which estimates the number of chips that can be cut from a wafer of diameter d.

As shown in Table 2-2, assuming there are no defects on the wafer, only ~59 full-frame sensors can be manufactured on a 300 mm wafer. On the same wafer, many more PS and cellphone camera sensors can be made. Moreover, as the sensor area increases, the probability of defects on a given die also increases. These faults on the wafer correspond to manufacture-time defects on the die. The number of good dies yielded from a wafer is called the die yield and is calculated with

Die_yield = Wafer_yield · (1 + (defects/area · Die_area) / α)^(−α). (2-12)

The parameter α measures the manufacturing complexity, which is related to the number of masking levels; for a CMOS process, α = 4. As shown in columns 3 and 4 of Table 2-2, a small sensor has a typical die yield >90%; thus only <10% of the dies are discarded due to defects. As the sensor size increases, the die yield decreases to less than 50% for the average DSLR sensors (APS-C/H) and 13% for the full-frame sensors. Hence, for these large-area sensors over 50% of the dies are discarded due to defects. The low die yield shows that of the 59 full-frame sensors cut from the 300 mm wafer, only 8 are usable. As the production per wafer decreases, the cost of each die increases. The cost per die is calculated as

Cost_of_die = Cost_of_wafer / (Dies/wafer · Die_yield). (2-13)

Assuming a 300 mm wafer costs $1000, the price of each full-frame sensor is ~$124, which is about 300 times more than the small sensors (~$0.41).
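Equations (2-11) to (2-13) can be chained to reproduce the full-frame numbers quoted above. The Python sketch below assumes a defect density of ~0.003 defects/mm² and a wafer yield of 1 — values chosen here to match the ~13% yield in Table 2-2, since the thesis does not state the density it used:

```python
import math

def dies_per_wafer(wafer_d_mm, die_area_mm2):
    # Eq. (2-11): gross dies on a circular wafer, minus edge loss.
    return (math.pi * (wafer_d_mm / 2) ** 2 / die_area_mm2
            - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2))

def die_yield(die_area_mm2, defects_per_mm2, alpha=4.0, wafer_yield=1.0):
    # Eq. (2-12): yield model with alpha = 4 for a CMOS process.
    return wafer_yield * (1 + defects_per_mm2 * die_area_mm2 / alpha) ** -alpha

def cost_per_die(wafer_cost, wafer_d_mm, die_area_mm2, defects_per_mm2):
    # Eq. (2-13): wafer cost spread over the good dies only.
    return wafer_cost / (dies_per_wafer(wafer_d_mm, die_area_mm2)
                         * die_yield(die_area_mm2, defects_per_mm2))

# Full-frame sensor (36 x 24 mm) on a $1000, 300 mm wafer:
area = 36.0 * 24.0
print(round(dies_per_wafer(300, area)))              # 59 gross dies
print(round(100 * die_yield(area, 0.003), 1))        # 13.6 (% yield)
print(round(cost_per_die(1000, 300, area, 0.003)))   # 125 ($ per good die)
```

The three printed values line up with the ~59 dies, ~13% yield, and ~$124 per sensor stated in the text.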

Table 2-2. Comparison of die cost on a 300 mm wafer.
  Columns: chip size (mm²), die/wafer, die yield (%), good die/wafer, cost of die ($).
  Rows: full-frame DSLR, APS-H DSLR, APS-C DSLR, average PS, cellphone cameras, Intel Core i-series, Intel Core.
  (numeric entries not preserved in this transcription)

Although large sensors tend to have better sensitivity due to the larger photosensitive area, this performance is highly dependent on the pixel size. As mentioned before, the high pixel count on small sensors is achieved by shrinking the pixel size. The tradeoff with small pixel size is imaging performance[18]. Shrinking the pixel reduces the photosensitive area, which means the photo-collection capacity is reduced. This results in a lower dynamic range, as the pixel will saturate at a much lower value. In terms of the Signal-to-Noise Ratio (SNR), noise signals are minimized with the new sensor technologies; however, the accumulation of dark current increases with the exposure duration. With the decline of the well capacity in small pixels, the SNR will decrease, and this impact is more significant in long-exposure images.

2.5.2 Color filter array sensors

As fabricated, pixels are monochromatic; they do not generate color information. To capture color images, the sensor must classify the wavelength of the incoming light, a mechanism called color separation. The most common approach used to create color images is the Color Filter Array (CFA), as shown in Figure 2-17(a). The sensors that use the

CFA design are also known as CFA sensors. This color separation method utilizes an array of transparent filters positioned above the image sensor such that only the desired wavelength range reaches a given photosite. Because each pixel on the CFA sensor only collects information from a given wavelength range, light of other wavelengths is discarded. The most common color filter pattern used in commercial imagers is the Bayer mosaic pattern, which consists of red, green and blue filters, as shown in Figure 2-17(b). The Bayer pattern includes two green samples because the human visual system is most sensitive to this wavelength range; however, the arrangement of the colors in the array differs among manufacturers.

Figure 2-17. Color filter array sensor: (a) microlens and color filter array, (b) Bayer mosaic pattern.

The Bayer pattern is for RGB color system images; thus each pixel needs the measured intensities of the red, green and blue wavelengths to produce the true color. However, in the CFA sensor, each photosite only records one of the three colors. Thus the output from the CFA sensor requires a software interpolation algorithm, known as demosaicing, to estimate the two missing color values at each photosite from the surrounding pixels. This will be covered in

more detail in chapter 3. The main problem for images captured using CFA sensors is the creation of a Moiré pattern, a distortion caused by interpolation error of the missing colors. Another disadvantage is the reduction in overall sensitivity of the sensor. The output of the CFA sensor retains 50% of the intensity at the green wavelengths, and 25% at the red and blue wavelengths. Hence, large amounts of information are discarded, which reduces the sensitivity of the sensor. To address this problem, Kodak has recently presented an alternative color filter pattern which replaces some pixels with no-filter areas, also known as panchromatic pixels[19]. The advantage of having no filter is that no photons are lost, providing an increased luminance signal to the output image. This creates a higher sensitivity and allows the use of faster shutter speeds under low-light conditions.

CFA sensors need color interpolation to recover the missing color channels at each pixel site. Current demosaicing algorithms neglect the presence of defects on sensors; thus faults are treated as normal pixels. Using faulty values in demosaicing results in larger interpolation error and widens the defective area. The demosaicing algorithms are proprietary and vary among manufacturers. Because this process is irreversible, in-field defect correction is needed to create a more robust digital imager. A detailed examination of the impact of demosaicing on defects will be discussed in chapter 3.

2.6 Camera operation

The basic image processing pipeline in a digital camera system is shown in Figure 2-18. Each image captured by the sensor will produce a raw image.
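To make the interpolation idea concrete, here is a minimal bilinear demosaicing sketch for the green channel only — a deliberately simple stand-in, since commercial algorithms are proprietary and far more sophisticated. It assumes periodic (wrap-around) image edges via np.roll, and it also shows how a single defective green photosite spreads into its interpolated neighbours:

```python
import numpy as np

def bilinear_green(raw, green_mask):
    # Estimate green everywhere on a Bayer raw frame: at non-green
    # photosites, take the average of the four green neighbours.
    g = np.where(green_mask, raw, 0.0)
    w = green_mask.astype(float)
    nsum = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                      np.roll(a, 1, 1) + np.roll(a, -1, 1))
    est = nsum(g) / np.maximum(nsum(w), 1.0)
    return np.where(green_mask, raw, est)

# Uniform grey scene (0.5) with one stuck-high green photosite:
mask = (np.add.outer(np.arange(4), np.arange(4)) % 2) == 1  # green sites
raw = np.full((4, 4), 0.5)
raw[0, 1] = 1.0                      # the defect
out = bilinear_green(raw, mask)
print(out[0, 0])   # 0.625 -- the defect has spread into a neighbour
```

The non-green neighbour at (0, 0) should read 0.5 but is pulled up to 0.625 by the defect, illustrating how interpolation widens a single faulty photosite into a cluster in the final image.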

The raw format image is simply the direct output from the sensor before demosaicing. As shown in Figure 2-18, the raw image output from a CFA sensor consists of a measure of one of the three color channels (i.e. R, G or B) at each photosite. The raw format is available only in DSLRs and some high-end PSs. To generate a color image, demosaicing is applied first. Then white balance, sharpening, noise reduction, etc. are executed to improve the image quality. Usually the digital chain is executed on high-bit-depth values to maintain high precision in the algorithms. At the end of the process, the image undergoes an 8-bit conversion and JPEG compression to reduce the file size. JPEG compression produces a smaller file size than raw, but it bakes all of that processing into the image so it cannot be changed later. Some professional photographers shoot in raw format as they have the option to apply a customized processing pipeline with photo editing software. The entire processing pipeline has no knowledge of possible defective pixels on the sensor. Hence, defective pixels will cause errors in all imaging functions.

Figure 2-18. Basic image processing operation.

2.6.1 ISO amplification

The ISO rating in film cameras denotes the sensitivity of the film, or film speed. In this sense, a film with a low ISO speed rating has less sensitivity and requires longer exposure times. Consider the case of a camera with a fixed F-number and aperture: if an image is captured at ISO 100, the same image can be produced at ISO 200 with half the exposure time. Different from film systems, the sensitivity of a digital camera system depends on the properties and settings of the sensor, the noise level, and the image processing functions. The sensitivity of an image sensor measures the ratio between input illumination and output signal level. However, due to the various processing functions and the additional noise from the mixed-signal system, the overall sensitivity observed in the output image will differ. The ISO setting is specified by the manufacturer such that the image produced is comparable to pictures created by film cameras of the same ISO. In digital camera systems, multiple ISO speeds are achieved by changing the amplification of the signal output from the sensor. The gain can be applied to the analog output from the sensor or by bit-shifting the output from the A/D converter. Gain is applied to all pixels despite the possible presence of defects in the sensor. With such amplification applied to the defects, faulty pixels become more visible. In chapter 3 we will discuss in detail the impact of ISO amplification on defects in image sensors.
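As a toy model of this behaviour — a hypothetical linear-gain sketch, not any manufacturer's actual ISO implementation — the gain ISO/100 can be applied to a normalized dark frame containing one hot-pixel offset:

```python
import numpy as np

def apply_iso_gain(pixels, iso, base_iso=100.0, full_scale=1.0):
    # Model ISO as a linear gain on the (normalized) sensor output,
    # clipped at saturation. Defect offsets are amplified too.
    return np.clip(pixels * (iso / base_iso), 0.0, full_scale)

dark = np.zeros(4)
dark[2] = 0.05                    # faint hot-pixel offset at ISO 100
print(apply_iso_gain(dark, 800))  # the offset grows to 0.4 at ISO 800
```

A barely visible 0.05 offset at base ISO becomes a clearly visible 0.4 at ISO 800, and at a high enough gain the defective pixel simply clips to full white.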

2.7 Defects in image sensors

Defects are known to develop over the lifetime of microelectronic devices. There are two main types of errors: soft errors and hard errors. Soft errors are single-event upsets causing an instantaneous change of the pixel sensor state. This fault is related to bit errors; thus the state can be recovered after a reset or by overwriting the error. Hard errors, on the other hand, are permanent damage; hence, the change of state is unrecoverable. The defects of concern in digital imagers are permanent damage and will change the state of the pixel for all future images. These defects are irreversible; thus accumulation of defects requires replacement with a new sensor, whereas in film cameras a defective film can easily be replaced with a new roll.

Faulty pixels that occur prior to shipment of the camera are known as manufacture-time defects, and faults that develop after leaving the factory are known as post-fabrication or in-field defects. In this thesis we focus on defects developed while the imagers are operating in the field. The two main defect mechanisms that induce faulty pixels in digital imagers are material degradation and external stress. In the following sections we discuss the two mechanisms in detail. It is important to note that defects caused by manufacturing processes exist. However, prior to the shipment of these cameras, the manufacturer performs a calibration to map out the defects on each sensor. The defect map is stored in the camera, and imaging functions such as interpolation or dark frame subtraction are used to hide the presence of these defects.

2.7.1 Material degradation

Material degradation is associated with the reliability of semiconductor devices. It is mainly due to the alteration of the intrinsic properties of the structural layers of the sensor, such as the bulk silicon or the gate oxide[20],[21]. The decay of the sensor materials can lead to issues ranging from the minor case of erroneous output to the more serious failure of a malfunctioning device. Gate oxide thinning is one of the common issues found when reducing device dimensions to make smaller pixels. Trimming down the thickness of the oxide layer usually leads to a faster wear-out rate; hence, the device becomes more sensitive to damage. Any catastrophic event such as a sudden spike in voltage/current or a static discharge can cause a permanent breakdown in the dielectric material. Alternatively, the breakdown can simply be the decay of the insulator material, also known as time-dependent dielectric breakdown. The degradation of the dielectric material usually occurs at locally weak oxide points due to defects. The defects found in oxide films are related to locally poor processing conditions, such as impurities introduced while growing the oxide. Breakdown of the gate oxide forms a conduction path from the gate to the substrate, so the current flowing between the drain and the source cannot be controlled. In circuits, the excessive leakage increases the standby power dissipation and decreases circuit speed. For APS pixels, gate leakage causes the reset charge on the gate to decrease, resulting in a time-related change in the pixel value.

Hot carriers are another problem commonly found in sub-100 nm semiconductor devices. In PMOS devices, hot carriers refer to high-energy holes, and in NMOS, to high-energy electrons. These carriers have sufficiently large energies that, when accelerated by the large electric fields of small devices, they can be injected and trapped in the gate or insulating oxide interface, permanently affecting the space-charge layer. The build-up of trapped charges in the transistors causes a decrease of the drain current or a shift in the transistor threshold voltage.

Electromigration is another failure mechanism, caused by the diffusion of metal atoms in the wires due to the force from the electron flow of the current. Migration of the metal atoms causes a build-up of atoms at the positive end of a wire while leaving vacancies at the negative end. This literally wears away or narrows the line, especially at thin areas such as steps over other layers. Under this condition, the current passing through the interconnection will be subject to an increase in electrical resistance or even a broken conductor. The local heating may cause dielectric cracking, which can result in shorting between adjacent metal lines. Another problem is a change in material composition due to changes in the chemical or crystal structure arising from temperature or other environmental conditions.

Many of these failure mechanisms are triggered by defects or contamination from the fabrication process. The defects introduced in the material will create breakdowns while the devices are operating in the field. Since the defects are localized in small areas, failures from material degradation

will usually result in local defect clusters. In addition, the occurrence of such defects increases exponentially with time[22]. Ensuring clean fabrication conditions and keeping good design rules will help prevent many of these future breakdowns.

In-field defect mechanisms

The development of defects after fabrication (i.e. in the field) is also related to the environment in which the device is used. These types of defects are directly related to the reliability and robustness of the system. The two common types of external stress that cause damage in microelectronic devices are electrical and radiation stress. Each of these mechanisms is discussed in the following sections.

Electrical Stress

For regular (non-imaging) devices, a common post-fabrication failure is due to electrical stress. The two common electrical stress sources are categorized by the changes in the applied supply voltage: Electrical Over-Stress (EOS) and Electrostatic Discharge (ESD). EOS is associated with an over-voltage or over-current stress that lasts for a relatively long duration (>1 µs). This type of stress generally occurs during normal circuit operation. It may arise due to voltage applied in reverse-bias operation, a relay operation, or a power supply variation. However, external factors such as lightning surges will also induce stress in the circuitry. The second type of electrical stress, ESD, as implied by the name, is triggered by a static discharge in the working environment. The

static build-up, when applied to the device, will discharge through the lowest-resistance path. Hence, the circuit experiences a high current pulse for a short duration (1 ns - 1 µs). The high transient voltage causes permanent damage to thin dielectrics (gate oxide), which results in an increase of leakage current. These defects are triggered by specific events or working conditions; thus they are usually associated with hot-spot development. The spatial distribution of defects developed by this mechanism is usually random and does not have a constant failure rate. Electrical stress is less common for cameras as they usually operate on battery supplies.

Radiation Stress

Another defect source which affects all microelectronic devices is radiation from sources such as cosmic rays or radioactive materials in the environment. This mechanism is more severe when the device is employed in space applications, where radiation levels can be extreme. Several studies[23]-[25] have shown the damage in image sensors working in a harsh radiation environment. The accumulation of defects significantly degrades the image quality and limits the lifetime of the device. Radiation damage is not limited to spaceborne applications: terrestrial radiation (i.e. cosmic rays) has often been reported as the main cause of damage found in transistors, processors, RAM, etc.[26],[27]. Studies from Theuwissen[28],[29] have detailed observations of cosmic ray damage on imaging sensors operating in the terrestrial environment.

Cosmic rays are composed of high-energy particles, mostly protons, and are categorized into primary and secondary rays. The primary rays come from the sun and solar flares striking the Earth's atmosphere; another source is high-energy particles from outside the solar system. Most of the primary rays are deflected by the Earth's magnetic field, or hit atoms in the air and decay or are absorbed by the atmosphere. Hence, less than 1% of the particles from the primary rays reach the Earth's surface. The lower-energy remnants that reach the Earth's surface are called secondary cosmic rays; the cosmic rays measured at sea level are these secondary rays. They consist of neutrons, protons, pions, muons, electrons and photons[30]. The magnetic field which shields the Earth from many charged particles weakens toward the poles. Hence, the density of the cosmic rays varies with altitude and latitude. A study from IBM[30] has shown that the cosmic ray flux increases exponentially with elevation; the particle flux is 10x higher for an increase of 10 km in height. The peak density of particles in the terrestrial environment occurs at an altitude of 15 km above sea level, which is near the flying altitude of airplanes. The radiation level on trans-Atlantic or trans-Pacific flights is 100x higher than at ground level. The cosmic ray damage often reported in processors, RAM, etc. consists of soft errors. These Single Event Upset (SEU) faults can be detected and corrected with fault-tolerant algorithms. Permanent defects are generally much less common in these digital devices, but do occur. The analog nature of image sensors makes them more sensitive and vulnerable to cosmic ray damage.

Hence, energetic particles such as neutrons, electrons and protons will cause permanent damage to optoelectronic devices. The failure of a pixel is usually due to permanent ionization damage or displacement damage. Ionization damage is related to the generation of electron-hole pairs in the insulator material. Creation of these carriers in the dielectric causes an accumulation of trapped charges at the oxide interface. The effect on pixels with such damage is a shift in the threshold voltage and an increase of the dark current. Hence, the noise level of the pixel will increase and the dynamic range will be reduced. Displacement damage is the result of collisions of energetic particles with the silicon substrate crystal, which displace atoms. The displacement will either leave vacancies in the lattice, or the original position of the atom will be taken by the bombarding ion. A displaced atom is simply a defect in the silicon lattice and will disturb the intrinsic properties of the bulk silicon, creating effects such as new localized energy levels in the forbidden energy bandgap. The additional energy levels will affect the mobility of carriers and promote thermal generation of electron-hole pairs. The most prominent effect is the increase in the dark current level.

Both ionization damage and displacement damage affect CCD and APS sensors. One of the main requirements in CCD operation is near-perfect transfer efficiency. Even a small generation of surface trap charges will significantly affect the transfer efficiency of the CCD sensor. Although the inversion operation can reduce the surface charges from this ionization damage,

traps in the bulk silicon due to displacement damage are harder to resolve. The CMOS APS supports x-y addressing; hence charge transfer loss in this architecture is not as significant. However, the excessive dark current level is the dominant effect in both sensors, because these charges are integrated during the exposure cycle. As we will see, this creates hot pixel effects in these sensors.

2.8 Summary

The photodetector is the basic building block of digital image sensors. Optically generated electron-hole pairs are collected with a reverse-biased PN junction (photodiode) or in a depleted well (photogate). In a CCD sensor, a series of closely spaced MOS capacitors shifts the collected charge packets sequentially for readout. The CMOS sensor uses photodiodes or photogates whose signal is buffered by an amplifier transistor to create an active pixel sensor. The additional transistors at each pixel reduce the fill factor of the APS pixel. However, the x-y addressing architecture used in the CMOS APS makes it easy to support windowed output and faster frame speeds. The two main types of cameras that employ digital image sensors are PSs and DSLRs. The pixel and sensor sizes of these cameras differentiate their quality in terms of dynamic range and noise. Recently, the popularity of the cellphone camera market has created new demand for small-sensor and small-pixel designs.

Defects in microelectronic devices are the main factor limiting the reliability and robustness of a device. Faults on image sensors are permanent damage and will degrade the image quality of the sensors. The two main sources of faults in microelectronic devices are material degradation and in-field defect mechanisms. Material-related defects are triggered by design limitations and contamination during the fabrication process; these defects weaken the device, resulting in early material breakdown. The post-fabrication defect mechanisms include electrical stress (i.e. applied voltage, static discharge) and radiation. Radiation stress is not limited to the spaceborne environment: terrestrial radiation, cosmic rays, consists of high-energy particles that can cause ionization or displacement damage in the oxide layer and bulk silicon. The resulting accumulation of interface traps and excess leakage current affects both the CCD and CMOS APS sensors. Each defect-causing mechanism exhibits different characteristics; in chapter 4, detailed analysis of the spatial distribution and growth rate of defects will help pinpoint the defect source in digital image sensors.

3: TYPES OF DEFECT IN DIGITAL CAMERAS

Observations of imager defects have been reported on many camera forums. However, few studies have been done to understand the mechanism behind the development of these in-field defects in commercial cameras. In addition, the impact and characteristics of these faults under normal camera operation have not been addressed. Most research studies on imager defects relate to sensors employed in space applications[23]-[25]. Although radiation has been claimed to be the main source of sensor defects in the spaceborne environment, imagers operating in the terrestrial environment have also experienced defects, and the source has not been previously identified. In this study we focus on characterizing and modelling the defects observed in commercial imagers (i.e. DSLRs, PSs and cellphones). In this chapter, we present the types of faults found in commercial sensors and the characteristics of each of these defects. Then we discuss some customized laboratory techniques that are used to identify defects in commercial cameras. The majority of commercial cameras adopt the CFA design, which requires demosaicing to generate color images. In the second part of this chapter we present a study of several demosaicing algorithms and analyze the impact of this imaging function on faulty pixels.

3.1 Defect Identification on Digital Cameras

A typical pixel response from the sensor of interest is shown in Figure 3-1. Under the illumination of a light source, the output of the pixel increases linearly with respect to the duration of the exposure, also known as the shutter speed. The maximum output of the pixel is limited by the saturation level of the photocarrier collection. In an 8-bit color pixel system, 0 represents a dark pixel and 255 is a fully saturated white pixel. For simplicity, in the remainder of this thesis we will use a normalized scale where the pixel output ranges from 0 to 1, with 0 being a dark pixel and 1 being a white pixel. The typical operation of a pixel can be modelled by

I_pixel(R_photo, T_exp) = m (R_photo · T_exp), for T_exp ≤ T_sat, (3-1)
I_pixel = I_sat, for T_exp > T_sat, (3-2)

where R_photo is the incident illumination rate, T_exp is the exposure time, T_sat is the point where the output reaches saturation (I_sat), and m is the numerical gain controlled by the ISO setting.

Figure 3-1. Pixel response to optical exposure.

Faulty pixels on the image sensors fail to sense light properly. Several types of defects have been reported in a previous study[31] and on photographer forums. Faults
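Equations (3-1) and (3-2) amount to a linear ramp clipped at saturation; a direct Python transcription on the normalized 0-1 scale, with variable names following the text, is:

```python
def pixel_response(r_photo, t_exp, m=1.0, i_sat=1.0):
    # Eq. (3-1)/(3-2): output grows linearly with exposure time
    # until the photosite saturates at i_sat.
    return min(m * r_photo * t_exp, i_sat)

print(pixel_response(0.25, 2.0))   # 0.5 -- still in the linear region
print(pixel_response(0.25, 8.0))   # 1.0 -- saturated
```

Doubling the ISO gain m halves the exposure time needed to reach a given output, which is the digital analogue of the film-speed behaviour described in section 2.6.1.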

are categorized into two types. The first type fails to respond to light completely; these are called the fully-stuck defects. The second type is still responsive to light but fails to give a proper measurement. The characteristics of some commonly known defects are summarized in Table 3-1.

Table 3-1. Characteristics of defect types.

Responsive to light | Defect type | Output function | Description
No | Stuck high | f(x) = 1 | Appears as a bright pixel at all times
No | Stuck low | f(x) = 0 | Appears as a dark pixel at all times
Yes | Partially-stuck | f(x) = x + b | Offset 0 < b < 1
Yes | Hot pixel (standard) | f(x) = x + R_dark·T_exp | Illumination-independent offset that increases linearly with exposure time, i.e. I_dark
Yes | Hot pixel (partially-stuck) | f(x) = x + R_dark·T_exp + b | Has two illumination-independent offsets: (1) one that increases with exposure time, R_dark; (2) one present at all times, b

Figure 3-2. Fully and partially stuck defects.

Stuck defects

The most commonly known faults are the stuck defects (see Figure 3-2). Pixels classified as fully-stuck faults are no longer sensitive to incident illumination; these pixels will either be fully saturated (stuck-high) or fully dark

(stuck-low). Shown in Figure 3-2 are the three types of stuck defects. A fully-stuck low defect appears as a dark pixel in all images, while a fully-stuck high defect always appears as a white pixel. Another, less commonly discussed, type of stuck defect is the partially-stuck pixel. This type of fault is still sensitive to light but operates with a fixed offset. As shown in Table 3-1, the output function of a partially-stuck pixel is modelled with the offset b (for 0 < b < 1). This offset is added onto the measured illumination; hence the pixel appears brighter than normal, as shown in Figure 3-2. More importantly, the offset reduces the dynamic range of the pixel; in other words, such a pixel reaches saturation at a much faster rate. In [32], a detailed study was conducted to identify fully-stuck and partially-stuck defects in a collection of commercial cameras; however, no trace of such faults was found. Because stuck defects can be differentiated easily from good pixels, these faults are often identified at fabrication time. For most commercial digital cameras (i.e. DSLR, PS), the sensors are calibrated prior to shipment; thus these manufacture-time defects are removed. However, due to the tight production costs of low-end cameras such as those in cellphones, mapping of these defects might not be done.

Hot Pixels

The hot pixel is another type of defect observed in digital imagers. Different from stuck defects, hot pixels have been seen to develop while cameras are operating in the field. As shown in Table 3-1, this type of faulty pixel continues to sense light; however, it has an additional illumination-independent component

known as dark current, R_Dark, which increases linearly with exposure time (see Figure 3-3). Hence, like the partially-stuck defects, the output of hot pixels will be higher than that of good pixels for the same illumination intensity. There are two classes of hot pixel: the standard and the partially-stuck hot pixel. The standard hot pixel is most visible under long exposures, as the dark current component increases with the integration time.

Figure 3-3. Normalized pixel dark response vs. exposure time of (a) good pixel, (b) partially-stuck, (c) standard hot pixel, (d) partially-stuck hot pixel.

Shown in Figure 3-3 is a comparison of the dark output (i.e. the response under no illumination) of a good pixel, a partially-stuck defect, a standard hot pixel and a partially-stuck hot pixel. As demonstrated by Figure 3-3(a), without light, a good pixel should be black over the entire exposure range. For a partially-stuck defect, Figure 3-3(b), the offset is exposure independent; hence the offset is constant over the entire exposure range. Figure 3-3(c) shows the output of a standard hot pixel. Since R_Dark increases linearly with time, the output increases with the integration time even with no illumination. Figure 3-3(d) models the output of a

partially-stuck hot pixel. Like the standard hot pixel, its R_Dark component increases linearly with the exposure time, but, like a partially-stuck defect, it also has a constant offset b. Hence, the partially-stuck hot pixel has the shortest saturation time. Keep in mind that the offsets modelled in Figure 3-3 are added onto the illumination charges collected by the pixel; hence, the measurements in these plots also demonstrate the reduction of the pixel dynamic range. A typical hot pixel response can be modelled with

I_Pixel(R_Photo, R_Dark, T_exp, b) = m·(R_Photo·T_exp + R_Dark·T_exp + b), (3-3)

where R_Photo is the incident illumination rate, R_Dark is the dark current rate and b is the offset. Thus, the combined offset due to the defect parameters, I_Offset, can be modelled by setting R_Photo to zero:

I_Offset(R_Dark, T_exp, b) = m·(R_Dark·T_exp + b). (3-4)

Defect Identification Techniques

There are two main calibration techniques used to map defects on sensors: dark-field and flat-field (light-field) calibration. Each calibration is responsible for finding different types of defects. For example, the dark-field calibration is an image captured by the sensor in the absence of light; thus, this calibration is mainly used to test for bright defects. Similarly, the flat-field is an image captured with a uniform light source such that all pixels are at or near saturation, and this technique is used to test for dark defects. However, the creation of a uniform light source requires a very customized and difficult

setup which is not feasible for home testing. In [32], no stuck-low defects were reported; hence in this study we will only focus on finding bright defects (i.e. stuck-high, partially-stuck and hot pixels). Most defect studies focus on analyzing the magnitude of the dark current and its fluctuation over the varying temperature of the sensor [33][34]. Different from these studies, our calibrations aim to extract information such as the quantity of defects, the magnitude of the defect parameters, and their spatial locations on each commercial imager. The information collected from these calibrations will serve as the data for characterizing the defect source and the growth rate of defects over the sensor's lifetime. In this study, we will be analyzing three types of cameras: commercial DSLRs, PSs and cellphones. The user controls available on these cameras vary. For example, a DSLR offers explicit control of exposure settings, a wide ISO range, and output in the raw image format. By comparison, PSs and cellphones have limited manual controls and only offer jpeg output. Each of these controls affects our calibration procedure; thus the techniques used are tailored to the settings available on these cameras. In the following two sections, we describe the basic procedures used to calibrate these commercial cameras.

Bright Defect Identification Techniques for DSLRs

Commercial DSLRs are commonly used by more advanced/professional photographers or those who are concerned with imaging performance. This type of camera provides more settings so that photographers can change

parameters to achieve the best image quality they desire. In particular, the raw format available on these cameras provides the best scenario for digital image editing. Raw images contain the direct output from the sensor; hence defects have not been permanently altered by the imaging pipeline. Also, explicit exposure, aperture and ISO adjustments allow calibration to be carried out in a well-controlled situation. To identify bright defective pixels such as hot pixels, stuck-high, and partially-stuck defects, dark frame calibration is the ideal procedure. Dark frame calibration is performed in a dark illumination situation: the camera is placed in the dark such that the sensor is not exposed to any light source; hence, any bright defects can be identified easily from the dark output. Our basic calibration procedure is as follows:

- Adjust the output format to raw.
- Disable any noise reduction settings, flash, and picture rotation.
- Keep the ISO constant (e.g. 400).
- Capture images at increasing exposures from 1/100 s to 2 s.

The camera gain used during the calibration is usually ISO 400, where the noise level is negligible for the DSLRs. In our calibration, not only do we want to verify the existence of these faults, but also to estimate the magnitude of the defect parameters (i.e. R_Dark, b). To test for hot pixels, multiple calibration images are taken at increasing exposure levels. To identify the bright defects we apply a threshold test to find all pixels with an output above the noise level. The noise signal increases with the ISO amplification; hence the threshold value needs to be adjusted. Shown in

Figure 3-4 is a plot of the noise level versus ISO for two DSLR cameras. The variation in the noise can be approximated using an exponential regression fit,

y = A·exp(B·x), where x = log2(ISO / 50), (3-5)

where y is the measured noise and x is the number of doublings of ISO from 50 (e.g. ISO 400 = 50·2^3, so x = 3). To compensate for the variation of the noise level in different sensors, we use the averages of the A and B values (i.e. A = 0.8 and B = 0.2) estimated from several DSLRs.

Figure 3-4. DSLR noise level (standard deviation of luminosity) at various ISO settings, with exponential fits for two cameras (data from camera analysis at dpreview.com [35]).

The pixel output from each calibration image can be used to generate a dark response plot, as shown in Figure 3-3. Both defect parameters (R_Dark and b) can then be approximated with a linear fit.

Bright Defect Identification Technique for cellphone cameras

Different from the DSLRs, both PS and cellphone cameras have limited manual controls. In particular, the inability to capture images in the raw format restricts our calibration to the use of the jpeg-compressed color images

generated by the imaging pipeline. Hence, defects will be distorted by the irreversible imaging functions, which increases the difficulty of identifying the pixel location. Although some advanced PS cameras have a higher exposure limit and allow explicit exposure control, these features are generally not available on cellphone cameras. To overcome the challenges imposed by these types of cameras, the calibration procedure used for DSLRs needs to be modified. Data from these cameras is important as they have the smallest pixel and sensor areas in our measurement set. When no explicit exposure control is available, exposure compensation is used to maximize the allowed integration time, which is often less than 1 s. This is an internal camera setting that controls the tradeoffs the camera makes between aperture and exposure time, up to the camera's longest permitted exposure. Otherwise, images are taken at variable exposure times. As the raw image format is not available on these cameras, all calibration images are taken in the jpeg format. Again, images are captured under dark room conditions where the tested cellphone or PS is fully shielded from any light source. To identify the bright defects, a threshold test is used. However, due to smearing of defects by the imaging functions, a single pixel fault will appear as a defect cluster in the color images, as demonstrated in Figure 3-5, which plots intensity versus pixel x, y position in the final demosaiced and jpeg-compressed image from one of our tested cellphone cameras. In this case we assume that the fault is an isolated defect. This assumption is justified because, as shown in Chapter 4, none of the DSLR raw files show two defects that are nearest neighbours

or clustered. The mesh plot in Figure 3-5 shows that for a single defect at a red pixel site, we observe a defect cluster in the red color plane but little effect in the blue and green planes. A simple threshold test would detect a single defect multiple times due to the clustering of these faults. To eliminate such false detections, each cluster is mapped with our software tool, and the location of the pixel is then estimated by the peak of the defect cluster. Also, to eliminate false detections arising from noise, multiple (typically ~6) images are captured, and a pixel is declared defective only if it appears in at least 3 of the calibration images. The calibration of these images is performed at a fixed exposure level; hence the magnitude of the defect parameters cannot be estimated, and we cannot distinguish whether a defect is a hot pixel or a partially-stuck defect. Rather, we can only conclude that the identified faults are bright defects, and report their locations and numbers.

Figure 3-5. Mesh plot of a defect in a demosaiced compressed color image.
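The detection logic described above — threshold at the ISO-dependent noise level of Eq. (3-5), confirm a defect only if it recurs in at least 3 of the ~6 captures, and locate it at the peak of its cluster — can be sketched as follows. This is a minimal illustration; the function names are ours, not those of the thesis software tool.

```python
import math
from collections import Counter

def noise_threshold(iso, a=0.8, b=0.2):
    """Noise level y = A*exp(B*x) with x = log2(ISO/50)  (Eq. 3-5)."""
    return a * math.exp(b * math.log2(iso / 50))

def confirmed_defects(per_image_candidates, min_hits=3):
    """Keep only the (x, y) positions flagged in at least min_hits images."""
    counts = Counter(p for cands in per_image_candidates for p in cands)
    return {p for p, n in counts.items() if n >= min_hits}

def cluster_peak(plane, cluster):
    """Estimate the true defect location as the brightest pixel of a cluster."""
    return max(cluster, key=lambda xy: plane[xy[1]][xy[0]])
```

For example, a candidate seen in four of six dark frames survives the vote, while one seen only twice is discarded as noise.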

3.3 Defects in demosaiced and compressed images

Single-pixel defects can only be found in the raw images before demosaicing; in color images these single sites are distorted by the imaging pipeline and result in a defect cluster. The creation of defect clusters makes the fault more visible than a single pixel defect; hence, imaging functions enhance the visibility of defects. The creation of color images from CFA sensors was shown in Chapter 2. Because defects are treated as normal pixels by all imaging algorithms, the false measurement from a faulty pixel imposes errors on the processing functions. All applied imaging functions affect the appearance of the defects. However, the interpolation used by demosaicing (i.e. color interpolation) has the most significant impact on the final output, as the missing colors of the neighbouring pixels are approximated using the faulty pixel. In this experiment, we will explore the impact of faulty pixels on color images processed by various demosaicing algorithms. In addition, we will analyze the possible impact of jpeg compression as well.

Demosaicing Algorithms

Demosaicing is an irreversible imaging process used to generate a color image from a CFA sensor. Each camera manufacturer has its own proprietary algorithm. Demosaicing is the first function applied in the processing pipeline; hence, the accuracy of the

interpolated values affects the subsequent functions. Demosaicing algorithms can be categorized into three types: simple interpolation, statistical, and adaptive. In this experiment we implement one algorithm from each of the three categories and observe the impact of the presence of faulty pixels. In addition, we also compare the appearance of defects at different jpeg compression levels. A problem suffered by demosaiced images is the moiré pattern. This type of artifact is an interference that appears at edges and on periodic patterns, as we will show in a later section. Note that when we refer to edges in this discussion, we mean the boundaries between objects within the scene, not the picture border.

Bilinear demosaicing

Bilinear interpolation is the simplest linear demosaicing method. The estimation of a missing color is based on the neighbouring pixels from the same color channel; thus the calculation of each color plane is an independent process. Although this method is fast, it suffers from poor image quality and the moiré effect. For the Bayer CFA (i.e. RGBG) mask shown in Figure 2-17(a), and isolating each color plane, Figure 3-6 shows the four nearest neighbouring pixels that are used to interpolate the missing pixel (e.g. the clear cell). The calculations used to compute the missing colors over the entire sensor area are:

Green: G5 = (G2 + G4 + G6 + G8) / 4 (3-6)

Red, Blue: R5 = (R1 + R7) / 2; R5 = (R1 + R3 + R7 + R9) / 4 (3-7)

where the two-neighbour form of (3-7) applies at sites with only two same-colour neighbours, and the four-neighbour form at sites whose same-colour neighbours lie on the diagonals.
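As a minimal sketch of Eqs. (3-6) and (3-7) (our own helper names, operating on a 3x3 neighbourhood indexed n[row][col] with the missing pixel at the centre):

```python
def interp_green(n):
    """Eq. (3-6): green at a non-green site is the mean of the four
    edge-adjacent greens in the 3x3 neighbourhood n[row][col]."""
    return (n[0][1] + n[1][0] + n[1][2] + n[2][1]) / 4

def interp_rb_two(n, vertical=True):
    """Eq. (3-7), first form: red/blue with only two same-colour
    neighbours, either above/below or left/right of the centre."""
    return (n[0][1] + n[2][1]) / 2 if vertical else (n[1][0] + n[1][2]) / 2

def interp_rb_four(n):
    """Eq. (3-7), second form: red/blue at a site whose four same-colour
    neighbours sit on the diagonals."""
    return (n[0][0] + n[0][2] + n[2][0] + n[2][2]) / 4
```

Note that a bright defect anywhere in the 3x3 window directly biases each of these averages, which is why a single faulty raw pixel smears into a cluster after interpolation.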

In this Bayer mask, there are twice as many green pixels as red or blue; thus the estimation of a green pixel always uses 4 neighbouring pixels. On the other hand, for a red or blue pixel, the estimation involves either 2 or 4 neighbours depending on the location of the centre pixel. Thus the estimation of green is usually more accurate than that of red and blue.

(a) green pixel (b) red and blue pixels
Figure 3-6. Bilinear interpolation of (a) green, (b) red and blue pixels.

Median demosaicing

The second type of demosaicing relies on the correlation between color planes. An example of this type of algorithm is the median demosaicing proposed by Freeman [36]. The algorithm is executed in four steps. First, the missing colors at each photosite are recovered using bilinear interpolation. Then, the difference between each pair of color planes is computed using Equations (3-8)-(3-10):

D_rg(x, y) = f_r(x, y) - f_g(x, y) (3-8)

D_gb(x, y) = f_g(x, y) - f_b(x, y) (3-9)

D_rb(x, y) = f_r(x, y) - f_b(x, y) (3-10)
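The difference-plane and median-filter steps of this algorithm can be sketched as follows (a minimal illustration under our own naming; a full implementation would also include the bilinear pre-pass and the final correction step):

```python
import statistics

def color_difference(plane_a, plane_b):
    """Difference plane D_ab(x, y) = f_a(x, y) - f_b(x, y)  (Eqs. 3-8 to 3-10)."""
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(plane_a, plane_b)]

def median_filter_3x3(plane, x, y):
    """Median of the 3x3 window centred at (x, y); this smoothing step
    suppresses large local discrepancies between colour planes."""
    h, w = len(plane), len(plane[0])
    window = [plane[j][i]
              for j in range(max(0, y - 1), min(h, y + 2))
              for i in range(max(0, x - 1), min(w, x + 2))]
    return statistics.median(window)
```

A single outlier in the window (such as a defect-induced spike in one difference plane) is rejected by the median, which is exactly the discrepancy-suppression property the algorithm relies on.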

The function f denotes the pixel value at location (x, y) in the indicated color plane. Next, a median filter is applied to the computed differences D_rg, D_gb and D_rb. The purpose of the median filter is to suppress any large discrepancies between color planes based on information from the surrounding pixels. The last stage is the correction step: the results of the median filter are used to correct the interpolated color at each photosite. This method is especially useful for suppressing artifacts at object edges in the picture. Due to the large color variation at edges and the lack of information in the red and blue channels, the comparison with other color planes can suppress interpolation errors and artifacts in the final image.

Kimmel demosaicing

Adaptive demosaicing is a more advanced interpolation process which uses mathematical modelling to obtain information from the local area near the pixel for the best approximation. A simple example of such an algorithm is the gradient-based technique proposed by Laroche and Prescott [37]. With this method, the interpolation is performed in the direction of a local image edge such that the error from abrupt changes in color at a boundary is minimized. Another algorithm, and the one we will be using, was proposed by Kimmel [38]. It integrates several methods: linear, weighted-gradient, and color-ratio interpolation. This algorithm is executed in three steps. First, the missing green component at each photosite is interpolated using a weighted bilinear interpolation,

G5 = (E2·G2 + E4·G4 + E6·G6 + E8·G8) / (E2 + E4 + E6 + E8). (3-11)

The weight factor adjusts the interpolation to the direction of the local edge and is calculated with

E_i = 1 / sqrt(1 + D(P5)^2 + D(P_i)^2). (3-12)

The gradient function D is calculated with Equations (3-13)-(3-16); the vertical, horizontal and ±45° diagonal directions are shown in Figure 3-7, the Kimmel gradient mask.

D_x(P5) = (P2 - P8) / 2 (3-13)

D_y(P5) = (P4 - P6) / 2 (3-14)

D_xd(P5) = max( |P1 - P5| / sqrt(2), |P9 - P5| / sqrt(2) ) (3-15)

D_yd(P5) = max( |P3 - P5| / sqrt(2), |P7 - P5| / sqrt(2) ) (3-16)

Figure 3-7. Kimmel gradient mask.

In the second stage, the red and blue components are interpolated using a ratio interpolation. With this method, the ratio between color planes is assumed to remain constant within the image scene. Because the typical Bayer pattern

records more green information, the ratio interpolations of red and blue, given in Equations (3-17) and (3-18) respectively, are based on the ratio with respect to the green components:

R5 = G5 · [Σ_i E_i·(R_i/G_i)] / [Σ_i E_i], over the neighbouring red photosites i (3-17)

B5 = G5 · [Σ_i E_i·(B_i/G_i)] / [Σ_i E_i], over the neighbouring blue photosites i (3-18)

The last step is the correction stage. The purpose of this step is to ensure that the color ratios in the object remain constant. To satisfy this property, the green components recovered from the first stage are recalculated using the ratios with respect to the red and blue components obtained in the second step, Equation (3-19):

G5 = (G_r + G_b) / 2, where G_r = R5 · [Σ_i E_i·(G_i/R_i)] / [Σ_i E_i] and G_b = B5 · [Σ_i E_i·(G_i/B_i)] / [Σ_i E_i]. (3-19)

After correcting the green components, the ratios with the red and blue components will have changed; thus these two channels will be recalculated using Equations (3-20) and (3-21).
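The gradient and edge-weight computations of Eqs. (3-12)-(3-16) can be sketched as follows (a minimal illustration under our own naming; p maps the mask indices 1..9 of Figure 3-7 to pixel values, with P5 at the centre; the sqrt(2) denominators in the diagonal gradients are an assumption based on the diagonal sample spacing):

```python
import math

def d_x(p):   # Eq. (3-13): gradient across the P2/P8 pair
    return (p[2] - p[8]) / 2

def d_y(p):   # Eq. (3-14): gradient across the P4/P6 pair
    return (p[4] - p[6]) / 2

def d_xd(p):  # Eq. (3-15): diagonal gradient via P1/P9
    return max(abs(p[1] - p[5]), abs(p[9] - p[5])) / math.sqrt(2)

def d_yd(p):  # Eq. (3-16): diagonal gradient via P3/P7
    return max(abs(p[3] - p[5]), abs(p[7] - p[5])) / math.sqrt(2)

def weight(d_centre, d_neighbour):
    """Eq. (3-12): E_i = 1 / sqrt(1 + D(P5)^2 + D(Pi)^2).  A strong edge
    (large gradient) yields a small weight, steering the interpolation
    along, rather than across, the edge."""
    return 1 / math.sqrt(1 + d_centre ** 2 + d_neighbour ** 2)
```

In a flat region all gradients vanish and every E_i is 1, so Eq. (3-11) reduces to plain bilinear interpolation; near an edge the weights shrink in the across-edge direction.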

R5 = G5 · [Σ_{i=1..9, i≠5} E_i·(R_i/G_i)] / [Σ_{i=1..9, i≠5} E_i] (3-20)

B5 = G5 · [Σ_{i=1..9, i≠5} E_i·(B_i/G_i)] / [Σ_{i=1..9, i≠5} E_i] (3-21)

To obtain the best result, the correction stage is repeated at least three times. Different from the bilinear and median algorithms, the Kimmel algorithm adapts to the scene of the given image through the weighted bilinear interpolation and color-ratio interpolation. These enhancements are crucial in reducing artifacts, as we will show in the experimental results. Clearly, the downside of the Kimmel method is its high computational requirement, which reduces the rate at which images can be captured.

Demosaicing algorithm comparison

In the first experiment, we test the performance of each demosaicing algorithm by applying it to a set of camera color images. The execution of the experiment is as follows: first, each color image is converted into raw form using the Bayer mask shown in Figure 2-17(a). Then, each demosaicing algorithm is used to recover the color image. The performance of each algorithm is measured by comparing the demosaiced image with the original color image. The metrics used in the evaluation are the Mean-Square Error (MSE), Equation (3-22), and the Peak Signal-to-Noise Ratio (PSNR), Equation (3-23).

MSE = (1/(m·n)) · Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} [ I_w(i, j) - K_w(i, j) ]^2 (3-22)

where I_w and K_w denote the original and demosaiced images of size m×n.

PSNR = 20·log10( Max_I / sqrt(MSE) ) (3-23)

The MSE measures how the pixel values of the demosaiced output differ from the original image; a high MSE implies large interpolation errors. The second metric, PSNR, assesses the quality of the resulting image by evaluating the ratio between the maximum pixel value and the average error. The interpolation errors are simply noise signals in the output image; hence, a high PSNR indicates that the magnitude of the error is small relative to the peak signal value. In this experiment, 10 regular photographs captured by the same camera are used, as shown in Figure 3-8. Each demosaicing algorithm is applied to the raw form of these images to generate a full color version. By comparing the demosaiced output with the original image, the average MSE and PSNR are calculated and summarized in Table 3-2. These provide baseline values against which to compare image quality when defects are injected into the images.

Figure 3-8. Sample images used in the experiment.
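The two evaluation metrics of Eqs. (3-22) and (3-23) can be sketched directly (our own helper names, operating on equal-size greyscale planes given as lists of rows):

```python
import math

def mse(original, reconstructed):
    """Eq. (3-22): mean-square error between two equal-size planes."""
    n_pixels = sum(len(row) for row in original)
    total = sum((a - b) ** 2
                for row_a, row_b in zip(original, reconstructed)
                for a, b in zip(row_a, row_b))
    return total / n_pixels

def psnr(original, reconstructed, max_i=255):
    """Eq. (3-23): PSNR = 20*log10(Max_I / sqrt(MSE)), in dB."""
    return 20 * math.log10(max_i / math.sqrt(mse(original, reconstructed)))
```

For a color image, the total figures in Table 3-2 correspond to applying these metrics over all three planes together; per-plane figures apply them to one plane at a time.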

Table 3-2. Average MSE and PSNR of demosaiced images.
Methods: Bilinear, Median, Kimmel; columns: MSE and PSNR for the Red, Green and Blue planes and in total.

First we examine the performance of the three algorithms by color plane. As shown in the first three column groups of Table 3-2, despite the different approaches used by each demosaicing function, the evaluations of the green pixels have the lowest MSE. Hence, the PSNR computed from the green plane is well above 30 dB. CFA sensors that use a Bayer pattern record only 25% red and 25% blue pixels; thus the interpolation of the missing red and blue pixels suffers in accuracy, as reflected by the higher MSE. To compare the three algorithms, we examine the overall MSE and PSNR across all three color planes, as summarized in the last column group of Table 3-2. Among the three algorithms, bilinear has the lowest PSNR, 29.95 dB, due to its large interpolation error. The median demosaicing uses a median filter to suppress large interpolation errors; the improvement is reflected in an increased PSNR of 35.09 dB. The Kimmel algorithm incorporates edge information and the ratios between color channels into the interpolation; thus the overall interpolation errors are further reduced, and the PSNR increases to 35.54 dB. The accuracy of these interpolations is highly affected by the scene of the image. Estimation of pixel values in regions with abrupt changes, like an object edge, suffers large interpolation errors. The significant interpolation errors near rapid changes create a type of artifact called the moiré pattern. To reduce this type of artifact, gradient interpolation is often used. Interpolating

along the direction of the edge can help reduce the estimation error. In Figure 3-9, examples of the moiré pattern generated by the three demosaicing algorithms are shown. Both the bilinear and median algorithms make no use of shape or texture within the image; thus, in Figure 3-9(b) and (c), the black lines show a false color pattern along them and the solid areas show a moiré pattern. With the Kimmel algorithm, Figure 3-9(d), both the color ratio and edge detection are used; thus the same line patterns in the image appear more refined.

(a) Original Image (b) Bilinear (c) Median (d) Kimmel
Figure 3-9. Moiré pattern: (b) Bilinear, (c) Median, (d) Kimmel.

Analyzing defects in color images

As seen from the results of the previous experiment, each algorithm inherits some errors in the interpolation of the missing colors. These errors can lead to observable artifacts affecting the overall image quality. A faulty pixel measures an incorrect light level; thus the error from such a pixel imposes additional errors on the interpolation of the neighboring pixels. In the following two experiments, we inject bright defects into each image and observe the impact of the demosaicing algorithms on the defective pixel and its neighbours.
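The first step of these experiments, converting a full color image back into raw form by sampling it through the Bayer mask, can be sketched as follows. This is a minimal illustration under our own naming; an RGGB tile layout is assumed here as one common variant of the Bayer pattern.

```python
def mosaic_rggb(rgb):
    """Sample a full-colour image down to a single Bayer raw plane.
    rgb[y][x] is an (r, g, b) tuple.  Even rows sample R G R G ...,
    odd rows sample G B G B ... (the assumed RGGB tiling)."""
    raw = []
    for y, row in enumerate(rgb):
        raw_row = []
        for x, (r, g, b) in enumerate(row):
            if y % 2 == 0:
                raw_row.append(r if x % 2 == 0 else g)
            else:
                raw_row.append(g if x % 2 == 0 else b)
        raw.append(raw_row)
    return raw
```

Demosaicing this raw plane and comparing against the original image is what yields the MSE/PSNR baselines of Table 3-2, and injecting a defect into the raw plane before demosaicing yields the defective images analyzed below.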

In a regular camera image processing sequence, multiple imaging functions are applied to create a color image. To isolate the analysis to the impact of the demosaicing process only, we start with a color image, as shown in Figure 3-10. As in the previous experiment, the color image is converted into raw form. Then a single-pixel bright defect is injected into the image. Next, the demosaicing function is applied to obtain the color image. Finally, we also apply jpeg compression to observe any additional effects of image compression on the spread of the defective pixel. The experiment is divided into two parts. In the first part, a single defect is injected on a uniform background, and in the second part, the defect is injected on a color-varying background.

Figure 3-10. Experiment procedure.

Defect on a uniform color background

In this set of tests, 11 images with a uniform gray-scale background are used. The gray scale starts from a black image (i.e. R, G, B = 0) and the intensity increases with a step size of 5 to a maximum value of (R, G, B = 50), where 255 is the saturation value. A constant background eliminates any interpolation errors from the scene due to edges and color variations. Hence, we can better

observe the impact of the defect on the neighbouring pixels. A simulated defect is injected into the raw image, where the defect offset is added onto the pixel value, as shown in Figure 3-10. For each test, the defect is inserted at one of the three color pixels of the Bayer pattern. The magnitude of the defect parameters is represented in the form of I_offset, which takes on a constant value. To measure the impact of the defect on image quality, we compare the difference between a defective image and the same image without the defect. As shown in Figure 3-10, both the defective and non-defective images are processed by the same demosaicing function; hence, the difference between the pixel outputs is the impact of the defective point. In addition, we also apply the built-in jpeg compression function in Photoshop to create compressed images. The compression quality is measured on a scale of 1 to 10, with 10 being the highest quality and 1 the lowest. In the following experiments we use three compression levels: 9 for the high, 6 for the medium and 3 for the low quality compressed image. These results replicate the conditions of the dark-field test on PSs and cellphones.

Bilinear demosaic results

The first set of sample results, for a red defect processed by the bilinear demosaic, is shown in Figure 3-11. A visual comparison of the four images shows that the non-compressed TIFF image in Figure 3-11(a) has the brightest defect cluster but also the most confined spot.
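The injection and difference steps of Figure 3-10 can be sketched as follows (a minimal illustration on the normalized 0..1 scale; the helper names are ours):

```python
def inject_defect(raw, x, y, i_offset, i_sat=1.0):
    """Add a constant bright-defect offset I_offset to one raw photosite,
    clipping at the saturation level."""
    out = [row[:] for row in raw]          # leave the input image untouched
    out[y][x] = min(out[y][x] + i_offset, i_sat)
    return out

def difference_image(defective, clean):
    """Pixel-wise difference between the demosaiced defective and
    non-defective images; non-zero entries mark the defect's spread."""
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(defective, clean)]
```

Running both the clean and the injected raw plane through the same demosaicing function and differencing the outputs yields the error maps from which the cluster sizes (Table 3-3) and peak errors (Table 3-4) are measured.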

(a) TIFF (b) JPEG 9 (c) JPEG 6 (d) JPEG 3
Figure 3-11. Bilinear demosaic image for red defect with I_Offset = 0.8.

By taking the difference between the defective and non-defective images, the error values indicate the spread of the defect. The sizes of the defect clusters are summarized in Table 3-3. Note that the highlighted columns are the ones matching the defective pixel's color.

Table 3-3. Estimated defect size with bilinear demosaicing.

I_offset | TIFF (R, G, B) | JPEG 9 (R, G, B) | JPEG 6 (R, G, B) | JPEG 3 (R, G, B)

Red defect:
0.2 | 3x3, 0x0, 0x0 | 7x7, 7x7, 7x7 | 8x8, 8x8, 8x8 | 0x0, 0x0, 0x0
0.4 | 3x3, 0x0, 0x0 | 16x16, 16x16, 16x16 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
0.6 | 3x3, 0x0, 0x0 | 16x16, 16x16, 16x16 | 15x15, 8x8, 8x8 | 8x8, 8x8, 8x8
0.8 | 3x3, 0x0, 0x0 | 16x16, 16x16, 16x16 | 15x15, 8x8, 8x8 | 12x12, 9x9, 8x8
1.0 | 3x3, 0x0, 0x0 | 16x16, 16x16, 16x16 | 16x16, 15x15, 8x8 | 13x13, 12x12, 12x12

Green defect:
0.2 | 0x0, 3x3, 0x0 | 7x7, 6x6, 7x7 | 8x8, 8x8, 8x8 | 0x0, 0x0, 0x0
0.4 | 0x0, 3x3, 0x0 | 7x7, 7x7, 7x7 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
0.6 | 0x0, 3x3, 0x0 | 8x8, 7x7, 8x8 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
0.8 | 0x0, 3x3, 0x0 | 8x8, 8x8, 8x8 | 8x8, 9x9, 10x10 | 8x8, 8x8, 8x8
1.0 | 0x0, 3x3, 0x0 | 8x8, 8x8, 11x11 | 8x8, 9x9, 10x10 | 12x12, 12x12, 12x12

Blue defect:
0.2 | 0x0, 0x0, 3x3 | 2x2, 2x2, 7x7 | 0x0, 0x0, 0x0 | 0x0, 0x0, 0x0
0.4 | 0x0, 0x0, 3x3 | 3x3, 5x5, 16x16 | 4x4, 4x4, 4x4 | 0x0, 0x0, 0x0
0.6 | 0x0, 0x0, 3x3 | 7x7, 6x6, 16x16 | 7x7, 7x7, 17x17 | 9x9, 9x9, 9x9
0.8 | 0x0, 0x0, 3x3 | 8x8, 9x9, 18x18 | 8x8, 13x13, 17x17 | 11x11, 11x11, 11x11
1.0 | 0x0, 0x0, 3x3 | 10x10, 11x11, 18x18 | 8x8, 13x13, 17x17 | 11x11, 11x11, 15x15

Because the interpolation used by the bilinear demosaic is performed on each color plane independently, a red defect affects only neighbouring pixels from the same color plane in the uncompressed

images. Since the bilinear interpolation uses the nearest 3x3 neighbouring pixels, the spread of the defect is also confined within a 3x3 region. However, these two points are only true for the uncompressed image (i.e. Figure 3-11(a), TIFF). In the case of the compressed images (i.e. JPEG 9, 6 and 3), a single defect spreads into a wider area and affects all three color planes, as shown in Table 3-3 and seen in Figure 3-11(b)-(d). A sample error mesh plot of a red defect is shown in Figure 3-12. Again, a visual comparison shows that the compressed images have a wider spread of the faulty values. However, the peak error of the fault is reduced through compression. The peak errors of the defect in the resulting images are summarized in Table 3-4.

Table 3-4. Peak defect cluster value from bilinear demosaicing.
(Rows: red, green and blue defects; columns: R, G and B planes for TIFF, JPEG 9, JPEG 6 and JPEG 3.)

It is clear from Table 3-4 that the uncompressed image has the highest peak error; thus the defective pixel appears the brightest. As the lossiness of the compression increases, the peak error is reduced by ~78%. Although

the defect appears less visible in the compressed image, the spread of the defect covers a much wider area.

(a) Tiff (b) Jpeg 9 (c) Jpeg 6 (d) Jpeg 3
Figure 3-12. Error mesh plot of red defect at I_Offset = 0.8 with bilinear demosaicing.

Median demosaic results

Different from the bilinear algorithm, the median demosaic relates pixels from all three color planes in the interpolation. Shown in Figure 3-13 are the resulting images of a red defect processed by the median demosaicing. Different from the bilinear demosaic images (Figure 3-11), the red defect appears as a white pixel surrounded by red neighbouring pixels. Observe that this defect now spreads both in area and into the other (G and B) color planes. Again, by measuring the spread of the error values, we can measure the defect cluster size, as summarized in Table 3-5.

(a) TIFF (b) JPEG 9 (c) JPEG 6 (d) JPEG 3
Figure 3-13. Median demosaic image for red defect with I_Offset = 0.8.

Table 3-5. Estimated defect size with median demosaicing.

I_offset | TIFF (R, G, B) | JPEG 9 (R, G, B) | JPEG 6 (R, G, B) | JPEG 3 (R, G, B)

Red defect:
0.2 | 3x3, 1x1, 1x1 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8 | 0x0, 0x0, 0x0
0.4 | 3x3, 1x1, 1x1 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
0.6 | 3x3, 1x1, 1x1 | 16x16, 8x8, 8x8 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
0.8 | 3x3, 1x1, 1x1 | 16x16, 8x8, 8x8 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
1.0 | 3x3, 1x1, 1x1 | 16x16, 8x8, 8x8 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8

Green defect:
0.2 | 1x1, 1x1, 1x1 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8 | 0x0, 0x0, 0x0
0.4 | 1x1, 1x1, 1x1 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
0.6 | 1x1, 1x1, 1x1 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
0.8 | 1x1, 1x1, 1x1 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
1.0 | 1x1, 1x1, 1x1 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8 | 12x12, 12x12, 12x12

Blue defect:
0.2 | 1x1, 1x1, 3x3 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8 | 0x0, 0x0, 0x0
0.4 | 1x1, 1x1, 3x3 | 8x8, 8x8, 12x12 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
0.6 | 1x1, 1x1, 3x3 | 8x8, 8x8, 16x16 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
0.8 | 1x1, 1x1, 3x3 | 8x8, 9x9, 18x18 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8
1.0 | 1x1, 1x1, 3x3 | 8x8, 10x10, 18x18 | 8x8, 8x8, 8x8 | 8x8, 8x8, 8x8

The use of the median filter to suppress large interpolation errors does not correct the defects, as seen in the resulting images. In fact, it is this correction step that spreads the defect onto all three color planes; hence the defect appears as a white spot when the pixel is at or near saturation. Notice that for the uncompressed image (i.e. TIFF, e.g. Figure 3-13(a)), the spread of red and blue defects is confined within a 3x3 region, the same as for the bilinear demosaic. For green defects, however, this spreading is not observed: because the raw images retain 50% green pixels, the median filter correction is able to limit the spread of defects in the green plane. Like the bilinear demosaic, the high quality compression shows the largest defect spread, up to 18x18 (in the blue case) when the pixel is fully saturated. The peak error values measured from the resulting images are summarized in Table 3-6. Sample error mesh plots of the uncompressed and compressed images are shown in Figure 3-14.

Table 3-6. Peak defect cluster value from median demosaicing.
       TIFF        JPEG 90      JPEG 60      JPEG 30
       R  G  B     R  G  B      R  G  B      R  G  B
RED
GREEN
BLUE

Different from the bilinear demosaic results, we did not observe as large a decrease in the peak error values with the median demosaic images. For example, the peak error of a red defect in the TIFF and JPEG 9 images shows only a 30% drop, as compared to the 55% drop observed with the bilinear demosaic. This observation is demonstrated in Figure 3-14(a) and (b). Because the defect has been spread into all the color planes prior to the compression, the suppression of the peak value is less significant as the difference of the pixel values between color planes is minimal.

(a) TIFF (b) JPEG 9 (c) JPEG 6 (d) JPEG 3
Figure 3-14. Error mesh plot of red defect at I Offset = 0.8 with median demosaicing.

Kimmel demosaic results

Previously we have shown that the adaptive approach used in the Kimmel demosaic can suppress the moiré artifacts. With the presence of a defect pixel, the resulting images are shown in Figure 3-15. Different from the median demosaic images, in this case the correlation of color planes in the interpolation will not cause the defective pixel to appear as a white spot. To examine each color plane in detail, the measured defect spread is summarized in Table 3-7.

(a) TIFF (b) JPEG 9 (c) JPEG 6 (d) JPEG 3
Figure 3-15. Kimmel demosaic image for red defect with I Offset = 0.8.

Table 3-7. Estimated defect size with Kimmel demosaicing.

RED
         TIFF               JPEG 9                JPEG 6               JPEG 3
I Offset R     G     B      R      G      B       R      G      B     R     G     B
0.2      5x5   3x3   3x3    6x6    6x6    6x6     7x7    7x7    7x7   0x0   0x0   0x0
0.4      7x7   4x4   4x4    16x16  8x8    8x8     8x8    8x8    8x8   6x6   6x6   6x6
0.6      7x7   4x4   4x4    16x16  8x8    8x8     13x13  8x8    8x8   8x8   8x8   8x8
0.8      7x7   5x5   5x5    16x16  13x13  15x15   15x15  8x8    8x8   8x8   8x8   8x8
1.0      7x7   5x5   5x5    16x16  14x14  15x15   15x15  11x11  8x8   8x8   8x8   8x8
GREEN
0.2      1x1   1x1   1x1    8x8    8x8    8x8     8x8    8x8    8x8   0x0   0x0   0x0
0.4      1x1   1x1   1x1    8x8    8x8    8x8     8x8    8x8    8x8   8x8   8x8   8x8
0.6      1x1   1x1   1x1    8x8    8x8    8x8     8x8    8x8    8x8   8x8   8x8   8x8
0.8      1x1   1x1   1x1    8x8    8x8    8x8     8x8    8x8    8x8   8x8   8x8   8x8
1.0      1x1   1x1   1x1    8x8    8x8    8x8     8x8    8x8    8x8   8x8   8x8   8x8
BLUE
0.2      3x3   3x3   5x5    5x5    5x5    8x8     4x4    4x4    4x4   0x0   0x0   0x0
0.4      4x4   4x4   7x7    6x6    6x6    12x12   6x6    6x6    6x6   4x4   4x4   4x4
0.6      4x4   4x4   7x7    7x7    8x8    16x16   6x6    6x6    17x17 6x6   6x6   6x6
0.8      5x5   5x5   7x7    9x9    9x9    16x16   7x7    9x9    17x17 7x7   7x7   7x7
1.0      5x5   5x5   7x7    10x10  9x9    16x16   7x7    9x9    17x17 8x8   8x8   8x8

As shown from the results in Table 3-7, the red and blue defects spread into ~5x5 and 7x7 regions with the Kimmel demosaicing. This is larger than the 3x3 region measured from the bilinear and median demosaics. The color ratio interpolation used in the Kimmel function also enhances the spreading of error values on the defect-free color planes. However, different from the bilinear and median images, a single green fault at all offset ranges will remain a single defective pixel after the Kimmel demosaic. Although the JPEG compression increases the defect spreading, the size of the green defect cluster is confined within the 8x8 region at all compression levels. The peak error of the defective pixel measured after demosaicing is summarized in Table 3-8. The sample error mesh plots of a red defect are shown in Figure 3-16.

Table 3-8. Peak defect cluster value from Kimmel demosaicing.
       TIFF        JPEG 9       JPEG 6       JPEG 3
       R  G  B     R  G  B      R  G  B      R  G  B
RED
GREEN
BLUE

(a) TIFF (b) JPEG 9 (c) JPEG 6 (d) JPEG 3
Figure 3-16. Error mesh plot of red defect at I Offset = 0.8 with Kimmel demosaicing.

First, a visual comparison of the mesh plots in Figure 3-16 of (a) TIFF and (b) JPEG 9 showed that the peak error is reduced significantly by the compression. This is verified from the measurements recorded in Table 3-8. On

average, the peak error is reduced by 50% in JPEG 9 images, which is more than the 40% measured in the median demosaic images. With the lowest compression quality (i.e. JPEG 3), the peak error is reduced by ~80%. Hence, the appearance of the defect shown in Figure 3-15(d) is gray instead of red.

A common trend observed from the compressed images in all three sets of demosaic images is that the high quality compression gives the largest defect spread. The lossiness of the low quality compression reduces the peak errors and the size of the defect cluster. However, the size of the defect spread in the compressed images is still larger than in the uncompressed images. It is clear that the demosaicing will spread a single defective pixel into its neighboring pixels. The size of the defect cluster will range from a 3x3 to a 7x7 region in an uncompressed image. The peak error is nearly the defect offset value in the uncompressed images; thus the defect cluster is very visible. Adding compression is able to reduce the peak error; however, the spread of the defect will increase into the 18x18 region for high compression. Although the defect clusters are smaller and less visible in the lossy compressed images, the use of low quality compression is not common as the pixel values are discarded and altered through this process.

Defects on varying color backgrounds

In this second part of the experiment we will be using a cropped section from the image data set shown in Figure 3-8 to provide a color-varying background. The same procedure shown in Figure 3-10 will be used, but the defects will be injected into these cropped color images. Again a simulated

defect will be inserted into one of the three color planes and I Offset will be increased progressively. The color image provides variation in the background; hence, we can estimate the impact of a defect in regular photos. The measurement of impact from the defective pixel is based on the MSE calculated from the comparison between the defective and non-defective images as shown in Figure

Bilinear demosaic results

Previously, we have shown that the bilinear demosaic will spread the defect into the nearest 3x3 region in an uncompressed image, and this extends to 16x16 in compressed images. In this experiment, we use the MSE to measure the impact of the spreading around the defective area. The MSE calculated from the bilinear demosaic images is summarized in Table 3-9.

Table 3-9. Comparison of defect in varying color region with bilinear demosaicing.
       TIFF        JPEG 9       JPEG 6       JPEG 3
       R  G  B     R  G  B      R  G  B      R  G  B
RED
GREEN
BLUE
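The injection-and-compare procedure just described can be sketched as follows: add I Offset to a single CFA sample, demosaic both versions with whatever algorithm is under test, and compute a per-plane MSE. The clipping-at-saturation behaviour is an assumption consistent with the normalized pixel scale used in this thesis:

```python
import numpy as np

def inject_defect(raw, y, x, i_offset):
    """Simulate a bright defect by adding I_offset to one CFA sample,
    clipped at saturation (1.0 on the normalized scale).  The color plane
    affected depends on the Bayer position of (y, x)."""
    out = np.asarray(raw, float).copy()
    out[y, x] = min(1.0, out[y, x] + i_offset)
    return out

def mse_per_plane(ref_rgb, test_rgb):
    """MSE between the defect-free and defective demosaic results,
    one value per color plane (R, G, B)."""
    diff = np.asarray(test_rgb, float) - np.asarray(ref_rgb, float)
    return (diff ** 2).mean(axis=(0, 1))
```

For example, `raw_d = inject_defect(raw, 120, 64, 0.6)` followed by `mse_per_plane(demosaic(raw), demosaic(raw_d))` (for a hypothetical `demosaic` function) yields the three per-plane values of one table cell.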

Like the results observed from the uniform background, the errors caused by the defect remain only on the defective color plane in the uncompressed images. As expected, the errors caused by a green defect are lower than those of red and blue defects, due to the extra green neighbour pixels available in the calculation. There are two trends observed in the compressed images. First, due to the suppression of the peak error shown in the uniform background results, the MSE calculated from the defective color plane decreases with compression. On the other hand, the spread of the errors into the two non-defective color channels increases the MSE on these planes. This trend is demonstrated by the plot of the MSE versus I Offset of a red defective pixel at different compression levels in Figure 3-17.

Figure 3-17. MSE vs. I Offset of a red defect on non-uniform background (bilinear demosaic); plots for the red, green, and blue planes at TIFF, JPEG 9, JPEG 6, and JPEG 3.

Comparing the three plots, as the quality of the compression declines, the difference between the MSEs evaluated from the three color planes reduces as well. In fact, as shown in the red and blue planes of the JPEG 3 curve, the plots are nearly the same. This suggests that the compression algorithm has the tendency to suppress color variations across the three color planes, hence lowering

the impact from the defective pixel. However, as noted, the use of low quality compression (e.g. JPEG 3) is not very common.

Median demosaic results

Likewise, the MSEs calculated for the median demosaic images are summarized in Table 3-10.

Table 3-10. Comparison of defect in varying color region with median demosaicing.
       TIFF        JPEG 9       JPEG 6       JPEG 3
       R  G  B     R  G  B      R  G  B      R  G  B
RED
GREEN
BLUE

The non-zero MSEs calculated from the defect-free color channels reflect the spread of defects in the uncompressed images. Although the MSEs calculated on these color planes are relatively low, these errors will increase through compression as the defective region expands. Shown in Figure 3-18 is the plot of MSE versus I Offset of a red defect.

Figure 3-18. MSE vs. I Offset of a red defect on non-uniform background (median demosaic); plots for the red, green, and blue planes at TIFF, JPEG 9, JPEG 6, and JPEG 3.

Again the smoothing effect from the compression can be observed from the JPEG 3 curves on the red and blue planes. Although the defect resides on the red color plane, the MSE calculated from the red plane is nearly the same as that on the blue plane. As observed in the MSE plots of the red color plane, for a low impact defect, I Offset = 0.2, the error spread through compression dominates. Thus the MSE is increased through the spreading by the low quality compression. However, for a high impact defect (i.e. I Offset >= 0.6), the I Offset becomes the

dominating error factor. Hence, the low quality compression reduces the appearance of the defect cluster by suppressing the peak error from the defective pixel.

Kimmel demosaic results

The last set of results is from Kimmel demosaicing and is summarized in Table 3-11.

Table 3-11. Comparison of defect in varying color region with Kimmel demosaicing.
       TIFF        JPEG 9       JPEG 6       JPEG 3
       R  G  B     R  G  B      R  G  B      R  G  B
RED
GREEN
BLUE

Although the Kimmel demosaic function will also spread the defects onto the fault-free color planes, in most cases the MSEs reported in Table 3-11 are lower than those in Table 3-9 (bilinear) and Table 3-10 (median). Shown in Figure 3-19 is the plot of MSE versus I Offset of a red defect.

Figure 3-19. MSE vs. I Offset of a red defect on non-uniform background (Kimmel demosaic); plots for the red, green, and blue planes at TIFF, JPEG 9, JPEG 6, and JPEG 3.

Looking at the defect-free color planes (i.e. green and blue), the MSEs calculated from the uncompressed images are lower than those observed from the median demosaic images. Hence, this suggests the adaptive approach will not impose significant errors on the neighbouring pixels. However, as the uniform background result shows, the small errors are spread into the 7x7 region with this demosaic function. Again the compression is showing a reduction of the peak error, with lower MSE values when compared to the

uncompressed images. However, the appearance of low impact defects is enhanced by the JPEG compression through the wide spreading of the small errors. It is important to note that Kimmel-type algorithms are very common in high-end cameras.

3.4 Summary

The defects found in digital cameras are categorized into two types. First are the fully-stuck defects, which are no longer responsive to light. These defects are most often observed at manufacture time, and factory mapping can resolve such problems. The second type of fault is still responsive to light but fails to give a proper measure of the light level (e.g. partially-stuck defects, standard and partially-stuck hot pixels). The offset from these faults is either a constant or an exposure dependent value, and both are added onto the illumination signal in the pixel. Hence, these faults will always appear brighter than normal pixels. The bright defects can be identified easily with a dark frame calibration test. The raw format and explicit exposure control available on DSLRs provide an ideal setting for calibration measurement. On the other hand, the lack of explicit exposure control on PS and cellphone cameras requires the calibration be evaluated in a compressed color image form. Defects found in color images are altered by the camera internal imaging functions. Hence only the count of defects can be extracted from such calibration.

Three demosaicing algorithms were studied: bilinear, median, and Kimmel. Testing of the three demosaic functions shows the adaptive approach used by the Kimmel algorithm provided the best image quality. The gradient interpolation and color ratio correction in Kimmel help reduce the moiré artifacts caused by interpolation error on or near the edges of objects. The testing of a single defective pixel on a uniform background has shown a 3x3 spread with the bilinear and median demosaics and a 7x7 spread with the Kimmel demosaic for uncompressed images. The lossy JPEG compressed images have shown a reduction of the peak error from the faulty pixel. However, the compression spreads the defect onto all three color planes, impacting pixels within the 18x18 region. In the next chapter the calibration technique described will be applied to a set of DSLRs, PS and cellphone cameras. Details such as the magnitude of defect parameters, spatial location and increase of defect counts will serve in the yield analysis.

4: CHARACTERIZATION OF IN-FIELD DEFECTS

Most research studies on imager fault analysis have focused on measuring the magnitude of the dark current and its variation with the temperature shift in sensors [33][34]. These studies have neglected important information such as the spatial distributions and development rates of the post-fabrication defects over the lifetime of the sensor. By comparison, in our study of in-field defects we aim to answer two main questions: first, what is the causal source of the defects developing in commercial cameras? Second, what is the growth rate of these faults? Like the standard fabrication-time defect yield analysis, the distribution of defects and the failure rate are crucial in the characterization of the defect source mechanism. In this chapter, we will be using the defect detection techniques presented in chapter 3 to obtain the defect distribution and characteristics. We will be monitoring a set of commercial digital imagers (DSLR, PS and cellphones) over the course of their lifetime. Testing the cameras periodically will provide information such as the quantity, the temporal growth of faults, and the magnitude of the defect parameters. This information can help us analyze the spatial distribution of faulty pixels and the defect rate of the faults on each individual sensor. We will also investigate how changing camera parameters, such as the gain (ISO), affects the number of visible defects.

4.1 Basic DSLR defect data

In our continuing research, we have been analyzing a set of 21 commercial DSLRs as listed in Appendix A. The cameras in this study range from less than a year to <6 years old from the manufacture date, and all have sensors of at least 23x15 mm in size. From our most recent dark-frame calibration analysis, the breakdown of the defects found for each camera is summarized in Table 4-1.

Table 4-1. Summary of defects identified in DSLRs at ISO 400.
Camera  Sensor Type  Age (year)  Stuck-High  Partially-stuck  Standard Hot Pixel  Partially-stuck Hot Pixel  Total
A  APS
B  APS
C  APS
D  APS
E  APS
F  APS
G  APS
H  APS
I  APS
J  CCD
K  CCD
L  CCD
M  CCD
O  CCD
N  CCD
P  CCD
Q  APS
R  CCD
S  CCD
T  APS
U  CCD
Cumulative Total:

One of the first clear points from this table relates to the stuck-high defects mentioned in chapter 3. Although photographers have reported seeing stuck defects in their images, from our calibrations of the cumulative total of 229 identified faults from 21 cameras, there was no evidence of any stuck high or low

faults. The same holds for pure partially-stuck defects, where none were identified from our calibrations. In fact, the dominant defect type found in both APS and CCD sensors is the hot pixel. Another important finding is that while standard hot pixels are assumed to have no impact at very short exposures, a significant number of those we find do appear at all exposure times due to an additional offset. The partially-stuck hot pixels were very common: 106 out of the 229 (46%) identified hot pixels exist with an offset. This is a new finding, because the offset in hot pixels appears not to be addressed in the literature. As discussed in section 3.1, this offset will affect the pixel output at all exposure levels, further reducing the pixel dynamic range. In fact, based on the data observed in our table, most, if not all, of the stuck-high defects reported by camera users could simply be partially-stuck hot pixels with a high offset. In particular, this point will be made clear in the next section, which shows the impact of camera gain (ISO) settings on the defect count. Another important observation from these results is that the number of defects found in older cameras consistently increases with the age of the sensor. The growth rate of defects will be discussed in detail in the temporal growth section. Furthermore, we confirm our initial finding that the defect parameters do not change significantly after formation [32]. The accumulation of defects suggests the quality of the sensors will degrade over time.

4.2 ISO Amplification

One of the most important adjustable functions on a digital camera is the ISO gain. In traditional film cameras, ISO is the sensitivity measure of the film,

and in the case of digital cameras it is the gain or sensitivity setting of the sensor, scaled to match that of the film equivalent. Importantly, the ISO is simply a numerical gain setting of the amplification applied to the sensor output. Because the ISO of a digital imager can be adjusted from image to image, it has become a significant new control ability for digital photography which was not available in film cameras. Despite this advantage, the amplification of the pixel output creates an unanticipated problem: it will enhance the appearance of defects as well. The usable ISO range for a camera is limited by the noise level on the sensor. Before 2004, most of the commercial DSLRs had a usable ISO range of ISO We use ISO 400 for our standard dark-frame calibration, as at this setting the noise level was very low in all DSLRs, whereas at higher ISOs the noise signal began to increase in the older cameras. We would expect that as the gain increases the noise will be amplified, and this increase applies in the same way to the defect parameters as well. Camera manufacturers use software algorithms to reduce the noise levels at higher ISO, but these do not suppress the increase in hot pixel intensities. Table 4-2 summarizes the results from the calibrations performed at different ISO levels and the cumulative total of hot pixels identified at each ISO level from a sub-set of 13 cameras. As not all cameras from Table 4-1 were accessible for re-testing, the calibration at different ISOs could only be performed on a sub-set of cameras.

Table 4-2. Cumulative total of hot pixels identified at various ISO levels.
Camera  Age  ISO
A
B
C
D
E
F
G
H
I
J
K
L
M
Cumulative Total:

The accumulated defects from the 13 cameras of age 1-6 years showed a clear trend where the number of faults found increased with the ISO level. At ISO 400, a total of 137 defects were identified in this set of cameras. By comparison, at the lower setting of ISO 100 only 27% of these defects were observable. The number of defects identified increased significantly when these cameras were calibrated at still higher ISOs. At ISO 800 the number of faults increased by a factor of 1.75 to 240 defects, and at ISO 1600, by a factor of 2.7 with 367 defects. Hence, the number of defects we would expect from the 21 cameras in Table 4-1 would actually be >600 defects when calibrated at the higher ISO levels. This trend suggests that at the low ISOs many defects were not identified because they were not distinguishable from the noise signal.

ISO and hot pixel parameters

Calibration at still higher ISOs, like up to in the newer DSLR (like camera B), shows that the defect parameters are amplified significantly while the noise level remains just moderate. Thus the distinction between the

background noise and faults becomes more noticeable. In Figure 4-1, the plot shows the comparison of dark response versus exposure time of an identified hot pixel at various ISO levels.

Figure 4-1. Dark response of a hot pixel at various ISO levels (normalized pixel output vs. exposure duration in seconds, with curves from ISO 100 to ISO 25600).

The magnitudes of the dark currents and offsets measured for the defect are summarized in Table 4-3. Note that the dark currents and offsets here are measured on a normalized pixel scale where 1 represents saturation. Thus one over the scaled dark current is the exposure time until saturation if the offset is zero. While all hot pixels showed the same behaviour, the pixel selected for Figure 4-1 was able to demonstrate this behaviour of the dark current and offset changes over the ISO 100 to 25600 range.

Table 4-3. Magnitude of dark current and offset measured for the defect in Figure 4-1.
ISO    Dark current (1/s)    Offset

It is clear for the defect shown here that at ISOs below our standard 400, the defect magnitudes of R Dark (values <0.1/s) and b (values <0.01) are relatively low. However, as the gain goes up with the ISO levels, both R Dark and b increase dramatically. The plot at ISO shows that the dynamic range of the defective pixel has a 70% reduction due to the offset b. This offset value will be reflected at all exposure times. Since this offset is added to the collected light, this means that the pixel will be saturated under all but the darkest areas of a photograph. Making this worse, R Dark, the slope, becomes steeper as the ISO gain increases. Thus the dark current R Dark rises rapidly with exposure time. At ISO the dark current rate is 4.89/s, which means the pixel will saturate in the dark at a 1/4 s exposure. At ISO 25600, this defective pixel is fully saturated at all exposures, which will cause this fault to appear as a stuck-high defect, so the slope is unmeasurable. Note the combination of the amplified offset b and the rapid increase of R Dark with exposure will cause any pixel with illumination >0.5 to appear as a stuck-high defect in almost any exposure. From Table 4-1, 46.2% of the identified faults are partially-stuck hot pixels. This suggests that the development of stuck-high pixels in the field may actually be hot pixels with high offsets amplified by the ISO gain.
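The dark response model used throughout this analysis, a constant offset b plus dark current R Dark accumulating over the exposure, on a normalized scale that clips at saturation, can be written directly; the functional form is a sketch taken from the description above:

```python
def dark_response(t, r_dark, b):
    """Normalized dark output of a hot pixel at exposure t (seconds):
    offset plus accumulated dark current, clipped at saturation (1.0)."""
    return min(1.0, b + r_dark * t)

def saturation_time(r_dark, b):
    """Exposure time at which the pixel saturates in the dark."""
    if r_dark <= 0.0:
        return float("inf") if b < 1.0 else 0.0
    return max(0.0, (1.0 - b) / r_dark)
```

With R Dark = 4.89/s and zero offset, `saturation_time` gives 1/4.89 ≈ 0.2 s, consistent with the ~1/4 s saturation exposure quoted above.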

The numerical gain applied to the sensor differs between manufacturers due to a variation in sensitivity. From our measurements of R Dark and b at various ISO settings, we have plotted these measurements versus ISO for two individual faults in Figure 4-2.

Figure 4-2. Plot of (a) dark current (1/s) and (b) offset vs. ISO, for two individual defects (Defect 1 and Defect 2).

Both plots in Figure 4-2, (a) dark current and (b) offset magnitude, display a linear increase with ISO level. Given the measurements of R Dark and b at specific ISO levels, we can approximate the dark current and offset at other ISO levels from the following derivation:

    R_Dark,ISOx = A x ISO_x,    R_Dark,calibrated = A x ISO_calibrated

    R_Dark,ISOx = (ISO_x / ISO_calibrated) x R_Dark,calibrated,    m = ISO_x / ISO_calibrated    (4-1)

The linear trend from the two plots suggests that the gain m from Equation (3-3) is simply the ratio between ISO_x and the ISO level at which R Dark and b are calibrated (i.e. ISO_calibrated), as shown in Equation (4-1). This

means, as expected, the hot pixel parameters increase with ISO at the same rate that the collected photoelectrons do. This ratio indicates the R Dark and b measured at ISO 400 are increased by a factor of 4 at ISO 1600 and 8 at ISO 3200. As expected, the impact from such a scaling factor will cause the moderate defects identified at ISO 400 to appear as fully stuck faults at the higher ISO levels. Hence, the 137 defects identified at ISO 400 will most likely reach saturation at ISO 1600, if not at higher ISOs.

ISO and hot pixel numbers

The cumulative defect totals from Table 4-2 show that at ISO 100 only 10% of the total defects from ISO 1600 were detected, and at ISO 400 this increased to 37%. This suggests the majority of the in-field faults are low impact damage and the visibility of these defects is due to amplification from the ISO gain. To see how the magnitude of the defect parameters varies over different ISO levels, the plot in Figure 4-3 shows the distribution of R Dark and b collected from all cameras listed in Table 4-2.

Figure 4-3. Magnitude distribution of (a) dark current intensity rate (1/s) and (b) dark offset, at ISO levels 100 through 1600 (defect count in %).
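The scaling relation of Equation (4-1) translates directly into code: parameters calibrated at one ISO are scaled by the gain ratio m = ISO_x / ISO_calibrated. This is a sketch of the relation, not thesis code:

```python
def scale_to_iso(r_dark_cal, b_cal, iso_cal, iso_x):
    """Scale a hot pixel's dark current and offset from the calibration ISO
    to another ISO using the linear gain m = iso_x / iso_cal of Equation (4-1)."""
    m = iso_x / iso_cal
    return m * r_dark_cal, m * b_cal
```

Parameters measured at ISO 400 come out scaled by 4 at ISO 1600 and by 8 at ISO 3200, as stated above.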

Since most of the tested cameras from Table 4-2 have a usable ISO range of , the distributions of the two plots of Figure 4-3 are scaled based on the cumulative defect total identified at the highest ISO level (i.e. ISO 1600). A common trend observed from the two plots is that at all ISO levels the majority of the defects were identified with R Dark <= 0.2/s and b < 0.01. This observation shows that many of the faults are created with a low impact (i.e. a low dark current). At ISO 100 and 200, where the gain factor remains small, only 20-40% of the faults from ISO 1600 were identified. As shown from the two distributions, the magnitudes of these faults are small. In fact, only 10-20% of these defects will pass our threshold test (i.e. I offset >= 0.1) at these ISO ranges, as reported in Table 4-2. As the ISO level increases, both R Dark and b are amplified, and the distributions from ISO 400, 800 and 1600 show more defects measured with R Dark > 0.2/s and higher b. The broadening of the distribution is caused by the amplification of the moderate defects identified at ISO 100 and 200. In fact, at ISO 1600 the plot shows over 10% of the faults have R Dark >= 1/s. These high dark current faults will saturate almost immediately with a fast shutter speed at modest light levels, thus appearing as fully-stuck high defects. As the sensor technology improves, the noise level observed on sensors is reduced through both pixel design and software noise suppression algorithms. Hence, the usable ISO range in the newer cameras is continuously expanding. In 2010, most DSLR cameras released into the market have a usable ISO range up to ISO 6400 or higher. However, from our collection of cameras only one of the newest cameras, which uses a 24x36 mm sensor (camera B), has calibration

data for the whole ISO range up to ISO 25600. To observe the trend in the increase of defect count at these new high ISO settings, the distribution of the defect parameters for this single camera is shown in Figure 4-4.

Figure 4-4. Magnitude distribution of (a) dark current (1/s) and (b) dark offset, at ISO levels 100 through 25600 from camera B (defect count in %).

Again the two distributions shown are scaled based on the number of defects identified at the highest ISO level (i.e. ISO 25600). When compared to the distributions from the collective cameras at the lower ISOs shown in Figure 4-3, a similar trend is found in camera B where most defects are created with low damage (i.e. R Dark <= 0.01/s or b <= 0.08). With the expanded ISO range

, the moderate defects measured at the low ISOs continue to scale up, thus broadening the distribution of the defect parameters. In fact, at ISO 25600 we observed two clear peaks in both plots. The first peak shows ~50% defect count at R Dark > 0.2/s or b >= 0.03 and the second peak shows ~20% defect count at R Dark > 1 or b > 0.1. The first peak demonstrates the scaling of low impact defects and the second peak is from the moderate defects which appear at ISO It is unknown if additional hot pixels will continue to appear above the noise threshold as the ISO increases beyond our available settings. However, the trend observed in the expanded ISO range shows that as the gain factor increases with the higher ISOs, it will enhance the significance and numbers of low impact defects. In addition, the saturation of moderate defects will cause great distortion to the image quality in high ISO images. As the defect parameters get amplified by the ISO gain, faults will become more visible even in short exposure time images. Recall the calculation of the combined offset from Equation (3-4), where I offset provides an estimate of the brightness of each hot pixel at a specific exposure and ISO setting. This offset can be interpreted as the dynamic range reduction that the faulty pixel will encounter. Thus a large offset is a major interference in the pixel operation. Given the defect parameters measured at various ISO levels from Figure 4-3, we calculated the I offset at 1/30 s, a typical short exposure setting used for low/modest light conditions, and at 1/2 s, a long exposure used in very dark conditions. These measures of I offset can be used to evaluate the impact of defects in a regular camera setting. The distribution of I offset is plotted in Figure 4-5.

Figure 4-5. Combined defect offset distribution at (a) 1/30 s and (b) 1/2 s exposure times, for ISO 100 through 1600 (defect count in % vs. combined defect offset).

At a commonly used low exposure time, e.g. 1/30 s, the distribution of I offset in Figure 4-5(a) shows that the majority of the defect offset values are smaller than 0.1 at all ISO levels. At ISO 400, over 50% of the identified faults will have a combined offset <0.1. However, at high ISOs (i.e. 800 and 1600), the distributions of I offset are broader and over 2% of the defective pixels will have I offset >= 0.2. In other words, these defective pixels will have a 20% reduction in dynamic range. In fact, at ISO 1600 ~2% of the defective pixels will be fully saturated even at such low exposure levels. It is well known from the camera user forums that what appear to be stuck high pixels have been observed. However, from our own measurements we did not find any true stuck high defects. As shown from Equation (3-4), the photo-current is added on top of I offset; thus pixels with I offset >= 0.2 will be at or near saturation in most images. The distribution here shows that these reported stuck high pixels are most likely caused by fully saturated hot pixels at common ISO levels. In everyday photography, long exposures (i.e. >1/15 s) are rarely used because camera motion will distort the image unless a fixed/tripod mounting is

employed. Typical long exposure photography is used under low light conditions with a high ISO setting, which also means large dark areas in the picture (e.g. night scenes). Thus the brightness of the defective pixels will create a significant impact on the image quality. In Figure 4-5(b), the plot shows the distribution of I offset calculated at a 1/2 s exposure time. Different from the results seen at 1/30 s, the plot in Figure 4-5(b) shows a broader distribution with more pixels having a high measure of I offset even at the low ISO levels. At ISO 800 and 1600 over 20% of the defects are measured with I offset >= 0.2. Hence, combined with the incident illumination, these faults will most likely be at saturation. Another photography area is sport or action images, where high ISO is used to compensate for the very short exposure time. In those conditions the combination of amplified offsets in the defective pixels and high light levels again brings saturated pixels that distort the images. It is clear that the camera noise level has improved at high ISO levels, but the gain increases the hotness of the faulty pixels, giving a clear distinction between hot pixels and the background noise. Calibration at the expanded ISO range shows 2-3 times more hot pixels over the moderate ISO 400 level. Although the distributions of R Dark and I offset show that the majority of these faults are created with low damage, the ISO gain will cause these low impact defects to become more prominent and the moderate defects to reach full saturation when combined with the exposed image.
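The I offset screening above can be reproduced with a small sketch. It assumes the combined offset of Equation (3-4) is the constant offset plus the dark signal accumulated over the exposure, clipped at saturation; the sample defect parameters below are hypothetical:

```python
def combined_offset(r_dark, b, t_exp):
    """Combined defect offset I_offset at exposure t_exp (assumed form of
    Equation (3-4)): constant offset plus accumulated dark signal, clipped
    at saturation (1.0)."""
    return min(1.0, b + r_dark * t_exp)

def fraction_at_or_above(defects, t_exp, level=0.2):
    """Fraction of a defect population whose I_offset reaches `level`."""
    hits = sum(1 for r_dark, b in defects
               if combined_offset(r_dark, b, t_exp) >= level)
    return hits / len(defects)

# hypothetical (R_Dark, b) pairs for three defects
sample = [(0.05, 0.01), (0.5, 0.02), (2.0, 0.1)]
```

For these hypothetical parameters, none reach I_offset = 0.2 at 1/30 s, while two of the three do at 1/2 s, mirroring how the longer exposure exposes more of the population.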

4.3 Spatial Distribution of Faults

As in traditional yield analysis of manufactured chips, the mapping of defects can provide insight into the defect causal mechanism, as each potential source will produce a different spatial and temporal pattern of defects. For example, as noted in chapter 3, if the main defect source is degradation of the sensor material, we should observe large defect clusters in the sensor [39]. If, on the other hand, the defects are caused by a radiation source (i.e., cosmic ray radiation), it will likely result in permanent damage to randomly located pixels. An example of a clustered pattern is shown in Figure 4-6(a) and a random pattern in (b).

Figure 4-6. Spatial pattern (a) clustered, (b) random.

The information on the spatial location of the faulty pixels can be collected from our dark-frame calibration procedure. Figure 4-7 shows an example of the spatial map of faulty pixels calibrated at ISO 400 for camera A.

Figure 4-7. Defect map of hot pixels identified from camera A at ISO 400 (sensor area 22.7 x 15.1 mm).

From visual inspection of the defect maps of all the tested cameras (e.g., Figure 4-7), we have observed no local clusters of defects, and faults appear to be uniformly distributed over the entire sensing area. Indeed, in all the cameras tested, not a single case of adjacent defects has been found. To confirm this observation, more rigorous statistical analysis is applied. In the following sections we apply different methods to analyze the spatial defect patterns observed from each tested sensor. We want to find whether these faults are clustered (e.g., Figure 4-6(a)) or randomly distributed (e.g., Figure 4-6(b)) on these sensors.

4.3.1 Inter-defect distance distribution

The first method is to analyze the spatial distribution of defects on each individual sensor. The Euclidean distances between faults (see Figure 4-8) are calculated for each sensor from Table 4-1. The distances collected from each

sensor are categorized by sensor type and then plotted as one single distribution, as shown in Figure 4-9.

Figure 4-8. Inter-defect distance measurement.

Figure 4-9. Inter-defect distance distribution of (a) APS, (b) CCD sensors at ISO 400.

Despite the differences between the two sensor technologies, the plots in Figure 4-9(a) (APS) and Figure 4-9(b) (CCD) both show the distribution expected from a random occurrence of defects, with a peak near the median inter-pixel distance and no additional peaks at either long or short distances. If any of the tested sensors exhibited local clustering of defects, we would expect to observe multiple peaks at both short and long distances. As shown in Figure 4-6(a), the measure of short distances arises from the close defects in the cluster and the

long distances arise from the separation distances between the clusters. A detailed measure of the two distributions is summarized in Table 4-4.

The distribution plots from APS and CCD both appear as broad distributions with a single peak at ~10 mm and a standard deviation of 5.2 mm. The 10 mm distance is nearly half the maximum distance on a 24x15 mm sensor. The broad distribution suggests that defects are randomly scattered over the sensor area. In addition, the similar finding in both sensor types indicates that the defect source is not related to the manufacturing process or the design of the pixel; rather, it is a common random external source such as radiation.

Table 4-4. Statistics summary of spatial defect distributions from APS and CCD sensors.

Sensor Type   # of defects   Mean (mm)   Standard deviation (mm)   Min (pixel)
APS
CCD

Up to now, the analysis shown is based on the defects found at ISO 400 (Table 4-1). As shown in section 4.2, the brightness of defects is enhanced by the ISO gain factor; thus calibration at higher ISOs will reveal more of the low impact defects. However, from our collection of cameras, only a subset of imagers was available for testing at higher ISOs (see Table 4-2). Based on the calibrations at multiple ISOs collected from the 13 cameras (Table 4-2), we repeated the same distance analysis for the defects found on these sensors at the tested ISO levels. The distribution of distances collected from each sensor is plotted in Figure 4-10 and the measurements of the plots are summarized in Table 4-5.
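The inter-defect distance computation underlying Figures 4-9 and 4-10 can be sketched as follows. The sensor dimensions and defect count below are illustrative values for the sketch, not measurements from the tested cameras:

```python
import numpy as np

def inter_defect_distances(coords_mm):
    """Return all pairwise Euclidean distances (mm) between defect locations.

    coords_mm: (n, 2) array of defect (x, y) positions in mm.
    Output: 1-D array of n*(n-1)/2 distances (each pair counted once).
    """
    pts = np.asarray(coords_mm, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]      # (n, n, 2) displacement vectors
    dist = np.sqrt((diffs ** 2).sum(axis=-1))      # (n, n) distance matrix
    iu = np.triu_indices(len(pts), k=1)            # upper triangle: no self-pairs
    return dist[iu]

# Illustrative example: 50 defects scattered uniformly on a 22.7 x 15.1 mm sensor
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0], [22.7, 15.1], size=(50, 2))
d = inter_defect_distances(pts)
hist, edges = np.histogram(d, bins=20, range=(0.0, 28.0))  # 20 bins, 0 to 28 mm
```

Histogramming these distances per sensor type yields distributions of the kind shown in Figure 4-9; a clustered pattern would instead show extra mass at short and long distances.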

Figure 4-10. Defect inter-distance distribution at (a) ISO 400, (b) ISO 800, (c) ISO 1600.

Table 4-5. Statistics summary of spatial defect distributions at various ISO settings.

ISO level   # of defects   Mean (mm)   Standard deviation (mm)   Min (pixel)

Although more defects were found at ISO 800 and 1600, the distributions remain nearly the same: a broad distribution with one single peak and an average distance of 10.36 mm. Thus the calibrations at higher ISOs continue to confirm that there are no local defect clusters in any of the tested sensors. These distributions strongly suggest that these faults are not related to material degradation, where clusters of defects would be expected. In fact, the similarly broad uniform distributions in all three plots suggest these faults are caused by a random source such as cosmic ray radiation.

4.3.2 Inter-defect distance chi-square test

To strengthen the conclusion from our visual inspection of the inter-defect distributions, a statistical chi-square goodness-of-fit test, as proposed by our collaborators Israel and Zahava Koren of UMASS Amherst [40], is

performed on the three distributions shown in Figure 4-10. First a Monte-Carlo simulation is performed, simulating 100 sensors (of the tested sensor dimensions) with defects uniformly distributed over the sensor area. A random number generator creates the x and y coordinates of the defective pixels scattered over each simulated sensor. Then, for each simulated sensor, the inter-defect distances are calculated to derive the expected distribution. The Monte-Carlo results are listed in Table 4-6 as the expected value for each of the 20 distance bins from 0 to >28 mm. The three experimental distributions shown in Figure 4-10 are then compared against the theoretical distribution by computing the chi-square value,

χ² = Σ_i (O_i − E_i)² / E_i,  (4-2)

where O_i and E_i are the observed and expected frequencies respectively. The numerical values from the observed distributions are also summarized in Table 4-6.

Table 4-6. Theoretical vs. actual inter-defect distance distribution (in percentage).

Distance   Expected   ISO 400   ISO 800   ISO 1600
...
Total χ²

The resulting chi-square values for the observed distance distributions at ISO 400, 800 and 1600 are all small (3.40 at ISO 400, with values of similar magnitude at the higher ISOs).

The chi-square distribution table gives a critical value of 30.14 (for 19 degrees of freedom and a significance level of 0.05). Since this critical value is much greater than the chi-square values of all three distributions, all experimental distributions observed in Figure 4-10 are consistent with a random defect distribution. Thus the defect source is most likely a random mechanism such as radiation rather than a clustered source like material degradation.

4.3.3 Nearest neighbour analysis

In the first two methods we analyzed the spatial distribution of defects based on all the inter-distances between faults on the sensor. In this third method we instead test the distribution of faults using a nearest neighbour analysis. This method for identifying clusters of events is well established in the literature [41][42] and is used to identify clustered distributions in fields such as geography. Unlike the previous analysis, this methodology is based on the distance measured to the closest defect point. In a clustered pattern, the distances between close defects will be much smaller than in a randomly scattered pattern. Hence, this measure concentrates on the close inter-event distances. First we need to compute the nearest neighbour distance for each defective pixel on the sensor. The shortest distance measured from the i-th defective pixel is denoted by d_i. Under Complete Spatial Randomness (CSR) conditions, where

each faulty pixel is an independent event, the theoretical distribution function is

G(d) = 1 − exp(−λπd²), for d ≥ 0, λ = n/A.  (4-3)

The nearest distance, d, depends on n, the number of defects on the sensor, and A, the sensor area. Next we measure the actual distribution in the data set. The Empirical Distribution Function (EDF), Ĝ(d), of the nearest neighbour distances is calculated as follows:

Ĝ(d) = #[d_i < d] / n,  (4-4)

where #[d_i < d] counts the number of defects with nearest distance d_i < d. Given the set of d_i measured from the defects on each sensor in Table 4-2, we can compute the empirical distribution Ĝ(d) using Equation (4-4) and compare it to the theoretical distribution G(d). Figure 4-11(a) shows the plot of Ĝ(d) and G(d) versus d for camera M at ISO 1600. The G(d) curve in Figure 4-11(a) is calculated from the sensor size of camera M with 85 defects at ISO 1600 (from Table 4-2). From visual inspection of Figure 4-11(a), the EDF calculated for camera M closely resembles the theoretical distribution G(d). The shape of the EDF gives insight into the spatial distribution of defects on the sensor. If the defects are clustered on the sensor, then Ĝ(d) will rise rapidly due to the short distances measured within the clusters. On the other hand, if defects are randomly scattered, Ĝ(d) will increase slowly at

short distances until the critical point (i.e., the mean nearest distance, d̄), beyond which Ĝ(d) rises rapidly. To compare the experimental Ĝ(d) and the theoretical G(d), Figure 4-11(b) shows a plot of Ĝ(d) versus G(d). If the EDF were exactly the same as the theoretical distribution, all points would lie on the 45° line. Otherwise, the deviation from that line indicates how closely Ĝ(d) resembles a CSR distribution (i.e., G(d)). For this particular camera M, visual inspection of Figure 4-11(b) suggests that the distribution of Ĝ(d) closely resembles the CSR distribution. Thus, defects are most likely randomly located on the sensor.

Figure 4-11. Comparison of the theoretical and empirical distribution of nearest neighbour distances in camera M: (a) Ĝ(d) and G(d) vs. d, (b) Ĝ(d) vs. G(d).

The computation of the theoretical distribution G(d) depends on the number of defects, n, and the area of the sensor, A. As the parameters n and A are different for each tested imager, the calculated G(d) varies as well. Thus

the comparison between the theoretical and empirical distributions must be done individually for the 13 cameras shown in Table 4-2. The comparison of Ĝ(d) with G(d) for these sensors is based on two parameters: R_n, the nearest neighbour index, and z, the standard normal deviate. R_n is the ratio between the mean nearest distance and the expected distance from the theoretical CSR distribution,

R_n = d̄_min / E(d_min).  (4-5)

The expected value from the theoretical distribution G(d), with the boundary correction factor shown by Donnelly [43], is evaluated as follows:

E(d_min) = 0.5 √(A/n) + (0.0514 + 0.041/√n) l(A)/n,  (4-6)

where l(A) is the perimeter of the sensor with area A. As mentioned earlier, d̄_min is the critical point, which indicates the steepness of the distribution. Thus R_n is a measure of whether the observed pattern is clustered, random or regular. The nearest neighbour index has a value between 0 and 2.15, where 0 indicates a fully clustered pattern and 2.15 a regular pattern. If R_n lies close to 1, then the observed pattern is most likely random. The tables of confidence levels for R_n are given in [42]. The second assessment parameter, the standard normal deviate, is a test of the statistical significance of the comparison of G(d) and Ĝ(d). The standard normal deviate is calculated as follows:

z = (d̄_min − E(d_min)) / √(Var(d_min)),  (4-7)

where the variance of d_min from the theoretical distribution is calculated by:

Var(d_min) = 0.070 A/n² + 0.037 l(A) √(A/n⁵).  (4-8)

At the 95% confidence level, if the z-score is between -1.96 and 1.96 then we cannot reject the null hypothesis. In other words, the observed pattern is most likely a random distribution. If the z-score lies outside this range, the null hypothesis is rejected, and the observed pattern is either clustered or dispersed. Using the above equations, both R_n and z are computed for each tested sensor from Table 4-2 at the three calibrated ISO levels, and the results are summarized in Table 4-7.

Table 4-7. Comparison of Ĝ(d) and G(d) for each tested camera.

Camera   ISO 400 (#, R_n, z)   ISO 800 (#, R_n, z)   ISO 1600 (#, R_n, z)
A
B
C
D
E
F
G
H
I
J
K
L
M
average:

The average R_n calculated at ISO 400 is 0.9 and at ISO 800 is 1.04, with a comparable average at ISO 1600. Most of these R_n values lie close to 1; thus the defect patterns observed on

these sensors at all ISO levels are most likely random distributions. The number of defects identified varies between the tested sensors and increases at the higher ISO levels. Hence, the significance of each computed R_n depends on the number of faults on the sensor. An alternative test is to compare the calculated R_n with the nearest neighbour index critical values. For the given number of defects n on each sensor, R_n must lie within the critical range of a two-tailed test in order not to reject the null hypothesis. Verified with the two-tailed test at the 95% confidence level, all R_n values fall within the critical range for the given number of sample points; hence we cannot reject the null hypothesis. Thus at the 95% confidence level the defect patterns observed on these sensors are most likely random patterns. Additional verification from the standard normal deviate, z, shows all calculated values fall within the -1.96 to 1.96 range. Hence, we can again conclude at the 95% confidence level that the null hypothesis is accepted; each sensor exhibits a random pattern of defects.

4.3.4 Nearest neighbour Monte-Carlo simulation

In the last part of the analysis, we compare the defect map of each camera with a set of simulated sensors using a Monte-Carlo method. Each simulated sensor has a set of defects distributed randomly over the sensor area. Again the x and y locations of the defects are generated with a random number generator. Given a finite set S of s random defect patterns, upper and lower bounds must exist for the EDF comparison of Figure 4-11(b). If the faults on our tested sensors are randomly distributed, then the defect map of each sensor is simply an element of the set S and should lie within those bounds. Assume we have a finite

set S of 100 random spatial patterns. To see if the defect map lies within the finite set S, we need to find the upper and lower bounds of the set S. To find these bounds, we generated 99 simulated sensors with n defects randomly distributed over the area. The sensor size and number of defects of the 99 simulated imagers are based on the actual imagers. For each simulated sensor the computed EDF is denoted by Ĝ_i(d), where i = 2, 3, ..., s. Note that Ĝ_1(d) is the EDF calculated from our actual imager. The average EDF, Ḡ(d), of the 99 sensors is calculated as

Ḡ(d) = Σ_{i=2..s} Ĝ_i(d) / (s − 1).  (4-9)

The upper and lower bounds are defined by Equations (4-10) and (4-11):

U(d) = max_{i=2,...,s} Ĝ_i(d),  (4-10)

L(d) = min_{i=2,...,s} Ĝ_i(d).  (4-11)

A sample plot of Ĝ_1(d) from camera M versus Ḡ(d) from the 99 simulated sensors, with the upper and lower bounds, is shown in Figure 4-12. Unlike the plot in Figure 4-11(b), this plot compares the observed pattern against the distributions derived from a set of simulated sensors with defects distributed as CSR events. The plot in Figure 4-12 shows that our observed pattern lies close to the simulated distribution. In fact, Ĝ_1(d) falls within the upper and lower bounds from the simulation. Hence, this result suggests that the observed Ĝ_1(d) is simply one case of the randomly distributed defect patterns. Repeating this analysis for each tested sensor, visual inspection of the results shows all observed distributions fall within the simulated upper and lower bounds. Thus

again this confirms that all defect patterns from our set of tested sensors are simply cases of the random defect pattern.

Figure 4-12. Empirical distribution Ĝ(d) vs. Ḡ(d) with upper and lower bounds.

4.3.5 Spatial distribution results

In the spatial distribution analysis, all four methods (inter-defect distance distribution, chi-square test, nearest neighbour analysis and Monte-Carlo modelling) show with good statistical confidence that the defects observed on our tested imagers are not clustered. Rather, these faults resemble a spatially random pattern. As noted in chapter 3, the lack of defect clustering indicates these faults are not related to material degradation. In fact, the random distribution points to a random mechanism such as cosmic ray radiation. The finding from this analysis is consistent with the experimental observations in Theuwissen's studies [27],[28], which showed higher defect rates in higher radiation environments.
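The nearest neighbour test of section 4.3.3 can be sketched end-to-end as follows. The sensor dimensions and defect positions are illustrative, and the Donnelly edge-correction coefficients are the standard published values, assumed here to match Equations (4-6) and (4-8):

```python
import numpy as np

def donnelly_expected(n, area, perimeter):
    """E(d_min) under CSR with Donnelly's edge correction (Eq. 4-6; standard
    published coefficients assumed): 0.5*sqrt(A/n) + (0.0514 + 0.041/sqrt(n))*l(A)/n."""
    return 0.5 * np.sqrt(area / n) + (0.0514 + 0.041 / np.sqrt(n)) * perimeter / n

def donnelly_variance(n, area, perimeter):
    """Var(d_min) (Eq. 4-8; standard coefficients assumed):
    0.070*A/n^2 + 0.037*l(A)*sqrt(A/n^5)."""
    return 0.070 * area / n ** 2 + 0.037 * perimeter * np.sqrt(area / n ** 5)

def nn_index_and_z(pts, area, perimeter):
    """Nearest neighbour index R_n (Eq. 4-5) and standard normal deviate z (Eq. 4-7)."""
    pts = np.asarray(pts, dtype=float)
    dist = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)           # exclude the zero self-distance
    d_min = dist.min(axis=1).mean()          # mean nearest-neighbour distance
    n = len(pts)
    e = donnelly_expected(n, area, perimeter)
    z = (d_min - e) / np.sqrt(donnelly_variance(n, area, perimeter))
    return d_min / e, z

# Illustrative: 100 CSR defects on a 22.7 x 15.1 mm sensor should give R_n near 1
rng = np.random.default_rng(3)
pts = rng.uniform([0.0, 0.0], [22.7, 15.1], size=(100, 2))
rn, z = nn_index_and_z(pts, 22.7 * 15.1, 2 * (22.7 + 15.1))
```

For a clustered pattern R_n drops toward 0 and z goes strongly negative; for a dispersed (regular) pattern R_n approaches 2.15 and z goes strongly positive.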

4.4 Basic defect data from small sensors

Before 2002, the small area sensor market was dominated by Point-and-Shoot (PS) cameras. However, in the past 5 years a rapidly growing new class, cellphone cameras, has increased the prevalence of small area sensors more than ever. Both cellphone and PS cameras target portability over image quality; thus the sensors employed by these cameras are small, ~2-8% of the area of typical DSLR sensors. The functions available on these cameras are relatively simple as well. Missing features such as manual exposure control and raw image mode make it challenging to calibrate these cameras. Instead of the standard dark frame calibration procedure used for DSLRs, we use the customized procedure discussed in section 3.2.2, which extracts the defects from JPEG images. The customized dark-frame calibration can identify bright pixels in the dark images. However, because defects are distorted by the imaging process, the exact location can only be approximated to within a couple of pixels by identifying the peak in each defect cluster.

4.4.1 Defect data from cellphone cameras

In this study we have worked with a collection of 10 cellphones of the same model (Nokia N82), all manufactured at about the same time. Each of these cellphones has a built-in APS sensor of size 3.0 x 2.4 mm and a pixel size of 2.2 x 2.2 µm. Using the calibration procedure from section 3.2.2, and repeating it about once every year, the results are summarized in Table 4-8.

Table 4-8. Accumulated defect counts from 10 cellphone cameras (ISO 400).

Cellphone   Cumulative defect count
Phone A
Phone B
Phone C
Phone D
Phone E
Phone F
Phone G
Phone H
Phone I
Phone J
Total:
Average per phone:

Due to the limited exposure and ISO control, only a short range of exposures is available and we can only calibrate at ISO 400. We cannot plot the dark response versus exposure time, so it is not possible to measure the defect parameters (i.e., dark current and offset). We can only conclude that these are bright defects, not the exact defect type. However, as no stuck-high or partially-stuck defects were found in DSLRs, these observed faults are most likely hot pixels. From the first calibrations, when these cellphones were <1 year old, we identified 117 faults; thus on average there are ~12 defects on each sensor. Compared to 1-year-old DSLR sensors, which develop only 3-4 faults/year (at ISO 400) on 12x the sensor area, the number of faults on these small sensors is significantly higher. To keep manufacturing costs low in these embedded cameras, where the cellphone typically costs less than a full PS camera, mapping out fabrication-time defects prior to shipment is not feasible. Hence, the faults identified on these imagers include manufacture-time defects plus those developed while operating in the field. Despite the lack of defect mapping by the manufacturers, by the second and

third tests, we recorded cumulative totals of 177 and 213 defects respectively.

4.4.2 Defect data from Point-and-Shoot cameras

In addition to cellphone cameras, we have also identified defects in a set of PS cameras. Each of these cameras uses a CCD sensor with an area in the range 20-40 mm² and pixel sizes of a few µm. The age span of these PS cameras is 1-7 years. PS cameras do provide explicit control of the ISO settings; thus we are able to calibrate these cameras at various ISO levels. However, the lack of shutter time control and the JPEG-only output require the same defect identification method as for the cellphone cameras. Table 4-9 lists the defect counts identified from the set of PS cameras. In these PS cameras the manufacture-time defects are mapped out.

Table 4-9. Accumulated defect counts from Point-and-Shoot cameras at various ISO levels.

Camera   Sensor Type   Age (years)   ISO 100   ISO 200   ISO 400
PS-A     CCD
PS-B     CCD
PS-C     CCD
PS-D     CCD
Cumulative total:

Inspecting the defect count from each PS camera in Table 4-9, the results again show an increase in defect count when calibrating at higher ISO levels. Unfortunately, for the PS cameras we do not yet have multiple-year data. Nevertheless, the defect numbers are much higher than for DSLRs at the same ISO. In the next section we examine the temporal growth of defects based on the sensor age and defect count for each camera type.
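The JPEG-based defect extraction used for both cellphone and PS cameras (bright pixels grouped into clusters, with the peak taken as the approximate location) can be sketched as follows. This is an illustrative sketch of the approach described in section 4.4, not the thesis tool itself; the threshold is an assumed parameter:

```python
import numpy as np

def find_defect_peaks(dark_frame, threshold):
    """Locate bright defects in a dark frame to within a couple of pixels.

    Pixels above `threshold` are grouped into 8-connected clusters (the JPEG
    imaging pipeline smears each defect over its neighbours), and the brightest
    pixel of each cluster is taken as the approximate defect location.
    """
    img = np.asarray(dark_frame, dtype=float)
    bright = img > threshold
    visited = np.zeros_like(bright, dtype=bool)
    peaks = []
    rows, cols = img.shape
    for r0, c0 in zip(*np.nonzero(bright)):
        if visited[r0, c0]:
            continue
        # flood-fill the 8-connected cluster containing (r0, c0)
        stack, cluster = [(r0, c0)], []
        visited[r0, c0] = True
        while stack:
            r, c = stack.pop()
            cluster.append((r, c))
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and bright[rr, cc] and not visited[rr, cc]:
                        visited[rr, cc] = True
                        stack.append((rr, cc))
        peaks.append(max(cluster, key=lambda rc: img[rc]))  # brightest pixel = peak
    return peaks

# Tiny synthetic dark frame: one smeared defect at (2,2)-(2,3), one at (6,6)
frame = np.zeros((8, 8))
frame[2, 2], frame[2, 3], frame[6, 6] = 50.0, 30.0, 40.0
peaks = find_defect_peaks(frame, threshold=10.0)
```

Because JPEG compression spreads a defect over several pixels, only the peak location is reported, which is why these positions are accurate only to within a couple of pixels.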

4.5 Temporal Growth

Temporal growth is another aspect often examined in defect yield analysis. The growth of defects with time is an important factor in determining how the camera images will deteriorate as the system ages. The rate at which faults develop on the sensor also gives another indication of the characteristics of the defect source. In this study, two methods were used to measure the defect development rate of each tested imager. The first method, shown in this section, utilizes the results from periodic calibrations. The second method, which uses historical images to identify the first appearance of defects, will be shown in chapter 5. Each dark-frame calibration gives the number of defects on the sensor at a specific age. Thus by calibrating each sensor periodically, we can collect the number of defects developed over the lifetime of the sensor. However, as many of the cameras are only occasionally available (we borrow them from several owners), the times between tests are rather irregular. By plotting the defect count versus age, as shown in Figure 4-13 for camera A, we can observe the trend at which defects increase with time. The size of the sensors used by the three types of cameras is different. Hence we divide this analysis into two parts. First we examine the defect rates from DSLRs; then, in the second part, we look into the small sensors used by cellphone and PS cameras.

Figure 4-13. Defect count vs. sensor age (months) for camera A from dark-frame calibration (at ISO 400).

4.5.1 Defect growth rate on large area sensors

Visual inspection of the plot in Figure 4-13 suggests that the defects on the sensor increase linearly with time. Hence a linear regression fit is used to measure the defect growth rate. In the investigation of the defect rates on the large area sensors, we generated plots like Figure 4-13 for each camera in Table 4-1. The measured defect rates for these cameras are summarized in Table 4-10 for mid-size DSLRs and Table 4-11 for full-frame DSLRs. For the subset of cameras in Table 4-2, we have measured the rates at the different ISO levels.
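The linear-regression rate estimate can be sketched as follows. The calibration data points below are illustrative values, not measurements from camera A:

```python
import numpy as np

def defect_rate(age_months, defect_counts):
    """Least-squares linear fit of defect count vs. sensor age (months);
    returns (growth rate in defects/year, intercept in defects)."""
    slope, intercept = np.polyfit(np.asarray(age_months, dtype=float),
                                  np.asarray(defect_counts, dtype=float), 1)
    return slope * 12.0, intercept

# Illustrative calibration history for one camera (age in months, cumulative defects)
ages = [6, 14, 25, 33, 41]
counts = [1, 2, 4, 5, 6]
rate_per_year, _ = defect_rate(ages, counts)
```

The slope of the fit is the defect growth rate; the intercept is left free because, for sensors without factory defect mapping (e.g., the cellphones of section 4.5.2), the count at age 0 need not be zero.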

Table 4-10. Measured defect rates from calibration results for all tested mid-size DSLRs.

Camera   Sensor Type   Defect Rate (defects/year)
A        APS
C        APS
D        APS
E        APS
F        APS
H        APS
I        APS
J        CCD
K        CCD
L        CCD
M        CCD
O        CCD
N        CCD
P        CCD
Q        APS
R        CCD
S        CCD
T        APS
U        CCD
Average rate (APS):
Average rate (CCD):

Table 4-11. Measured defect rates from calibration results for all tested full-frame DSLRs.

Camera   Sensor Type   Defect Rate (defects/year)
B        APS
G        APS
Average rate (APS):

For the cameras that were calibrated at various ISOs, the defect rates increase when measured at the higher ISO levels. Taking the average of the defect rates measured from mid-size DSLR sensors, the results are summarized at the end of Table 4-10 for each sensor type (i.e., APS, CCD). As the results show, the average rate of the mid-size APS sensors at ISO 100 is 1.34 defects/year, increasing by a factor of 2.6 to 3.49 defects/year at the highest tested ISO. For the full-frame sensors, shown in Table 4-11, the average defect rate at ISO 100 is 2.18 defects/year, increasing by a factor of 11 at the highest tested ISO. Similarly, for the mid-size CCD sensors, we found 1.81 defects/year

at ISO 100, and this increases roughly 15-fold at ISO 3200. Since more low impact defects are detected at the higher ISOs, the measurements most likely reflect the true defect rate, including hot pixels that were too weak to be observed above the noise at the lower gain levels. From Table 4-10, the average rates calculated for mid-size APS and CCD sensors show that the mid-size CCDs have a higher defect rate than the APS sensors. Figure 4-14 shows the average defect count versus sensor age for the cameras in Table 4-1.

Figure 4-14. Average defect count vs. sensor age by sensor type at ISO 400.

From visual inspection, the chart in Figure 4-14 demonstrates that on average the CCD sensors have a higher defect count than the APS sensors of the same age in every year measured. In fact, as reported in Table 4-10, at ISO 400 the average growth rate of the CCDs (5.75 defects/year) is 3 times higher than that of the APS sensors (1.82 defects/year). Indeed, the defect rates of the mid-size CCDs are nearly the same as those of the full-frame APS sensors. This finding suggests, first, that the defect rate might scale with the sensor area, which we will explore in detail in chapter 6, and second, that the CCD sensors might be more sensitive to defects than the APS sensors.

Both the APS and CCD sensors show a continuous linear increase of defects with time. By comparison, as noted in chapter 3, a material degradation mechanism creates an exponential growth of defects with time. This again indicates the in-field defects are not caused by material stability issues related to the manufacturing processes. The similar trend shared by these sensors suggests the causal mechanism is independent of the sensor design. In fact, it is most likely that both sensor types are exposed to and affected by the same causal mechanism. The linear growth rates also suggest that faults are not caused by a single traumatic event but by a continuous impact of some source on the sensors. However, the higher defect rate found in CCDs does indicate that this type of sensor might be more sensitive to the cause of the defects. One factor that might affect the defect rate is the fill factor of the pixel, which is the fraction of the pixel area that is photosensitive. For a typical APS pixel, the fill factor is ~25%, while for the CCD pixel the fill factor ranges from 70-90%. The larger photosensitive area of the CCD pixel has more surface exposed to the defect source. Thus the probability of defect damage to the photosensitive area is higher in a CCD pixel, and this might result in the higher defect count in the CCD sensors. From an in-depth investigation, several cameras which have been on transatlantic/transpacific flights have shown more defects than other cameras of the same age and model. It is known that the cosmic ray radiation level is about 100 times higher at the altitudes of transatlantic/transpacific flights. Since it has been hypothesized that cosmic rays are the causal source of hot pixels, this would lead to a higher defect

count. To better understand the effect of cosmic ray radiation as the defect source, we must obtain a better measurement of the date at which each defect developed. This experiment and its results will be presented in chapter 5 by analyzing the historical images captured by the sensors.

4.5.2 Defect growth rate on small area sensors

From the multiple calibrations taken with the cellphone cameras, as shown in Table 4-8, we can plot the defect count versus sensor age for each cellphone. As the defects identified from these calibrations include manufacture-time defects, we cannot assume zero defects at time 0. The defect rates measured with the linear regression fits for each tested cellphone are summarized in Table 4-12.

Table 4-12. Measured defect rates from cellphone cameras at ISO 400.

Cellphone   Defect Rate (defects/year)
Phone A     3.95
Phone B     1.97
Phone C     4.93
Phone D     3.95
Phone E     4.93
Phone F     1.97
Phone G     1.97
Phone H     5.92
Phone I     2.96
Phone J     2.96
Average rate: 3.55

From our measurements, on average these sensors have developed ~3.55 defects/year. The rates observed from the cellphone cameras are 1.9x higher than the 1.82 defects/year of the mid-size DSLR APS sensors at ISO 400 (Table 4-10). However, the areas of the cellphone sensors are more than 12x smaller than those of DSLRs. Thus the defect rate per unit sensor area is actually much

higher on these small area, small pixel sensors. Chapter 6 will investigate the impact of sensor and pixel size on the defect rate in detail. Using the defect count and sensor age reported in Table 4-9, we can measure the temporal growth of defects for the PS cameras. Since the manufacturers perform defect mapping prior to shipment for these PS cameras, we can assume there are no defects at time 0. The defect rates measured at various ISO levels are listed in Table 4-13. A point to note is that only one calibration was taken with each of these PS cameras; hence the measurements of these defect rates have more uncertainty than for the other cameras (i.e., DSLRs). The software calibration tool for the PS cameras was only developed at the end of this thesis, so this work will be extended in future studies.

Table 4-13. Measured defect rates for Point-and-Shoot cameras at various ISO levels.

Camera   Sensor Type   ISO 100   ISO 200   ISO 400
PS-A     CCD
PS-B     CCD
PS-C     CCD
PS-D     CCD
Average rate:

From these first measurements, the average defect rate of the 4 tested PS cameras at ISO 400 is 6.88 defects/year, which is higher than the 3.55 defects/year reported for the cellphone cameras. The CCD sensors used in the PS cameras are typically 3x larger than the APS sensors in cellphone cameras. Hence, the high defect rates of these CCD sensors are consistent with our previous observation that defect counts in CCDs are higher than in APS sensors. The 6.88 defects/year from the PS cameras is similar to

the 5.73 defects/year of the mid-size CCDs (Table 4-10). However, the difference in sensor area again shows that the small area, small pixel sensors experience a higher defect rate per mm².

4.5.3 Calibration temporal growth limitations

There are two main drawbacks to this method of temporal growth measurement. First, the accuracy of the defect rate approximation depends on the number of sample points used in the linear fit. In other words, if only one calibration was taken with an imager, the rate measured from the fit will be biased by that single point. The second problem is the frequency at which calibrations were taken. If the calibrations are one year or more apart, they cannot provide a close estimate of the first appearance of the defects. This will cause an underestimate of the defect rate. Due to limited access to some of the cameras, only a few imagers benefited from continuous calibrations a few months apart, while most cameras were calibrated once a year or less often. To overcome this problem, in chapter 5 we will present a statistical accumulation approach to extract defect dates from the historical images captured by the cameras. Such a method can increase the accuracy of defect rate measurements, as images are captured much more frequently than calibrations are performed. The preliminary results show the defect growth rates for the two types of sensors (i.e., APS and CCD), with damage accumulating as the sensors age. The increase of defects on a sensor limits its useful lifetime. Although photographers will often purchase new cameras every 5 years

or less, for embedded systems, such as sensors in vehicles and security cameras, this accumulation could affect the reliability of the devices over time.

4.6 Chapter Summary

In this chapter we have shown the defect counts collected from DSLR, PS and cellphone cameras. All of these cameras showed increases in defects with the age of the sensor. In addition, calibrations at higher ISOs revealed more low impact defects, suggesting that most of the faults developed in the field are created with low damage. However, the detailed analysis of the defect parameters has shown that the brightness of a defect increases at a much higher rate than the noise signal. Hence, more low impact defects are seen at the high ISOs, while the moderate defects found at low ISO will reach saturation.

In the first spatial analysis we looked at the inter-distances between all defects. The histograms of the inter-distances showed a broad distribution with a single peak at ~10 mm. A chi-square test was then used to compare the observed distribution to the expected distribution derived from a set of 100 simulated sensors with defects scattered randomly across the area. The chi-square values suggested that the observed distributions at ISO 400, 800 and 1600 all resemble a random distribution at the 95% confidence level. In the third method, nearest neighbour analysis was used. The distribution of the shortest distances between defects on each sensor was computed and compared to the CSR event-based distribution. Both the nearest neighbour index R_n and the standard normal deviate confirmed at 95% confidence that these defects are randomly spaced on the sensors. The last method modelled a set of sensors with

randomly scattered defects using Monte-Carlo methods. The nearest neighbour distributions from the simulated sensors formed an upper and lower bound, and the measured distributions of all sensors fall within that boundary. Thus these defect patterns are consistent with a random pattern.

The temporal growth of defects from all imagers indicated a linear increase in defects. The continuous accumulation suggests the defect causal mechanism is not related to the sensor design or manufacturing process but is shared by both the APSs and CCDs. The characteristics found in the spatial and temporal analysis indicate that the defects on these sensors are most likely caused by cosmic ray radiation. More importantly, the higher defect rates found in the CCDs indicate that this type of sensor might be more sensitive to radiation. Lastly, the preliminary results on the small sensors indicate a higher defect rate per mm².

This study has looked at defect patterns at various ISO levels. With more defects found at the higher ISOs and no clustering patterns found, this strengthens the statistical relevance of our analysis and confirms that the defects on these sensors are most likely caused by a random external source rather than material degradation.
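The nearest neighbour test summarized above can be sketched as follows (a minimal illustration, not the thesis code; the defect coordinates are made up). The Clark-Evans index R_n divides the mean observed nearest neighbour distance by the CSR expectation 0.5 / sqrt(n / A):

```python
import math

def nearest_neighbour_index(points, area):
    """Clark-Evans nearest neighbour index R_n.

    R_n ~ 1 for a random (CSR) pattern, < 1 for clustering,
    > 1 for a dispersed (regular) pattern.
    """
    n = len(points)
    # mean distance from each point to its nearest neighbour
    d_obs = sum(
        min(math.dist(p, q) for q in points if q is not p) for p in points
    ) / n
    d_exp = 0.5 / math.sqrt(n / area)  # CSR expectation
    return d_obs / d_exp

# hypothetical defect locations (mm) on a 24 x 16 mm sensor
defects = [(2.0, 3.0), (10.0, 12.0), (20.0, 5.0), (7.0, 14.0), (15.0, 8.0)]
print(round(nearest_neighbour_index(defects, area=24 * 16), 2))
```

With so few points this is only illustrative; the thesis additionally compares against Monte-Carlo bounds from simulated sensors.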

5: TEMPORAL GROWTH OF IN-FIELD DEFECTS WITH DEFECT TRACE ALGORITHM

The general defect growth rates can be measured using a series of calibrations taken over time. Each calibration provides the number of defects on the sensor at the time of the test. Thus, with calibration images collected over the lifetime of the sensor, we can observe the trend of the defect rate for each individual sensor. However, growth rates measured from calibration results suffer from one main problem: the time between calibrations. Since some cameras are not accessible for frequent calibrations, the period between tests ranges from several months to over a year, which limits the accuracy of the measured growth rates. Ideally we would like to know the defect development date to within a few days.

Instead of measuring the growth rates from calibration images, an alternative is to identify the defect date using its first appearance in the regular images taken by the camera. Each image captured by the camera is a record of the current state of the sensor, as shown in Figure 5-1. By analyzing the presence or absence of defects over the entire historical image dataset, we can better identify the defect development date of each faulty pixel. Since photos are taken on a regular basis, usually at intervals of minutes to less than 3 months, this can improve the measurement of the defect growth rate over that of simple calibrations. To find the first appearance of a defect, we could visually inspect each image. However,

this process is slow and cumbersome. With some image datasets having over 10,000 pictures, this process is not feasible. In this research, we have developed a mathematical algorithm that uses the images themselves to evaluate the appearance of the defects, and accumulates statistics over a sequence of images to estimate the first defect development date. In the first part of this chapter, we present the basics of the algorithm. Then we demonstrate the accuracy of the algorithm with a set of simulations. Finally, the algorithm is applied to seven image datasets from cameras that have been operating in the field. The defect development dates established by these searches are then used to measure the defect rates, and these results are compared to the calibration-established rates shown in Chapter 4.

Figure 5-1. Concept of defect trace algorithm.

5.1 Bayes defect trace algorithm

Previous research by our lab has shown that Bayes algorithms can identify defects from a sequence of pictures [32]. In this work, we extend the method to find the development dates of the hot pixel defects. In this algorithm, the imager is described by an array of W x H pixels and the output of each pixel is denoted by y_i,j. This algorithm focuses on analyzing 8-bit RGB color images

which means each pixel is composed of 3 color channels (Red, Green, and Blue), each of which has an intensity between 0 (dark) and 255 (saturation). Most commercial digital imagers use CFA sensors; thus the images are assumed to have undergone demosaicing and image compression, as none of these images were raw files. Earlier we introduced a mathematical function to characterize the operation of a pixel; this equation can be simplified to

    y = m · (x + T_exp · R_Dark + b) = m · x + Δ, where m = ISO_x / ISO_calibrated.    (5-1)

The parameter x is the incident light intensity that strikes the pixel, the defect term m · (T_exp · R_Dark + b) is denoted by Δ, and m is the amplification adjusted by the ISO setting. The defect parameters R_Dark and b are estimated from the calibration test; however, their values depend on the ISO setting at which the calibration was taken.

The algorithm analyzes the sequence of images from each camera individually. For each camera, information such as the spatial locations of the defects and the magnitudes of R_Dark and b is needed, and is collected through dark frame calibrations. The first step of the algorithm is the estimation of the expected value of each pixel, denoted by z, by interpolation with the neighboring pixels. This assumes that the presence of the defect will create a known deviation (i.e. Δ) from the expected value obtained by interpolation. Hence, the output of a good pixel is z, and that of a defective pixel is z + Δ. The interpolation scheme adopted by this algorithm is a ring mask, as shown in Figure 5-2. This scheme takes the average of only the pixels on the

perimeter of the mask, omitting everything else. As discussed in section 3.3, demosaicing will cause a single defective pixel to appear as a cluster of defects in color images. Thus, by omitting the immediate neighbors around the center defective pixel, which would be affected by the presence of the defect, we can gain a more accurate estimate of the expected good pixel value.

Figure 5-2. Ring interpolation.

In any image interpolation, the values produced might differ from the actual pixel signal. Thus, after calculating the image-wide interpolated values, we compare the difference between the expected and the actual pixel value (e_i,j = y_i,j - z_i,j) and obtain the image-wide interpolation errors. From these collected image-wide error values, we can compute the interpolation error Probability Density Function (PDF), p_E(e_i,j), and Cumulative Density Function (CDF), P_E(e_i,j). The image-wide interpolation error PDF, shown in Figure 5-3(a), plots the occurrence of each interpolation error value over the range -255 to 255 (i.e. 8-bit pixel values). The frequency of each error value is used as a statistical measure to evaluate the likelihood of the error value being an interpolation error or being due to the presence of a defect. The interpolation error CDF, shown in Figure 5-3(b), plots the cumulative count of errors less than e, for e from -255 to 255.

Figure 5-3. Image-wide interpolation errors: (a) PDF, (b) CDF.

The second step is to evaluate the presence of defects in each image. For each identified hot pixel (from calibration), we move recursively forward in time over the sequence of images and use the Bayesian function:

    Prob(Good | y_k) = [Prob(y_k | Good) · Prob(Good | y_k-1)] / [Prob(y_k | Good) · Prob(Good | y_k-1) + Prob(y_k | Hot) · Prob(Hot | y_k-1)]    (5-2)

The probability Prob(Good | y_k) evaluates the likelihood that the pixel, with an output y_k, is good in the k-th image. Likewise the probability

    Prob(Hot | y_k) = 1 - Prob(Good | y_k)    (5-3)

evaluates the likelihood that the pixel is hot in the k-th image. Prob(Good | y_k) will be close to 1 at the beginning, while the pixel is still good, and will eventually fall to 0 as we move forward in time and the pixel becomes defective. Thus, at the first image where Prob(Good | y_k) falls below our predetermined threshold, we can identify the defect development date from that image.
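The recursion of Equations (5-2) and (5-3) can be sketched as follows (an illustrative implementation, not the thesis code; the likelihood values fed in are assumed to come from Equations (5-4) and (5-5), and a neutral 0.5 prior is used for the short demo):

```python
def update_prob_good(prob_good_prev, lik_good, lik_hot):
    """One step of the Bayes recursion in Eq. (5-2).

    prob_good_prev : Prob(Good | y_{k-1}) from the previous image
    lik_good       : Prob(y_k | Good), from the error PDF (Eq. 5-4)
    lik_hot        : Prob(y_k | Hot), from the shifted error PDF (Eq. 5-5)
    Returns Prob(Good | y_k); Prob(Hot | y_k) is its complement (Eq. 5-3).
    """
    num = lik_good * prob_good_prev
    den = num + lik_hot * (1.0 - prob_good_prev)
    return num / den if den > 0.0 else prob_good_prev

# good-looking evidence for 2 images, then hot-looking evidence for 3
prob = 0.5                      # neutral prior for this short demo
history = []
for lik_good, lik_hot in [(0.3, 0.01)] * 2 + [(0.01, 0.3)] * 3:
    prob = update_prob_good(prob, lik_good, lik_hot)
    history.append(round(prob, 3))
print(history)   # [0.968, 0.999, 0.968, 0.5, 0.032]
```

Note how the probability takes a couple of images to collapse after the hot evidence starts; this lag motivates the windowing scheme described later.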

The two conditional terms in Equation (5-2), Prob(y_k | Good) and Prob(y_k | Hot), compute the likelihood that the pixel with output y_k is in a good or a hot state. They are calculated using the interpolation error PDF, p_E(e_k), as indicated by Equations (5-4) and (5-5) respectively:

    Prob(y_k | Good) = p_E(y_k - z_k)    (5-4)

    Prob(y_k | Hot) = p_E(y_k - (z_k + m · (T_exp · R_Dark + b))) = p_E(y_k - (z_k + Δ))    (5-5)

Given the expected value z_k of a good pixel (from interpolation), if the actual value y_k (from the k-th image) is from a good pixel, then the error e_k = y_k - z_k will be approximately zero, as in Equation (5-4). Likewise, if the actual value y_k is from a hot pixel, then the expected value is corrected by the deviation factor (z_k + Δ); thus the error e_k = y_k - (z_k + Δ) will be approximately zero, as in Equation (5-5). Because the PDF is derived from the image-wide interpolation errors, the evaluation of Equations (5-4) and (5-5) depends on the accuracy of the interpolation scheme.

Equation (5-5) assumes that the defect parameters R_Dark and b are constant values. In reality this is not true, as R_Dark and b will vary with temperature changes in the sensor [32]. Thus the term e_k = y_k - (z_k + Δ), which assumes a fixed defect parameter, will be an inaccurate estimate. Instead of a constant defect parameter, we modify the model so that the fluctuation of these values is considered. To compensate for the variation in the defect parameters, we use a conservative underestimate of the dark offset, denoted Δ_min. This lower bound on the combined dark offset is defined by

    Δ_min = m · (R_Dark-min · T_exp + b_min),    (5-6)

where both R_Dark-min and b_min are conservative lower bounds on the ranges which R_Dark and b may take on during camera operation. With this lower bound, we can correct the estimate of Prob(y_k | Hot) by summing over the range from Δ_min to Δ_max. The derivation of the new Prob(y | Hot) is as follows:

    Prob(y | Hot) = Σ_{Δ = Δ_min .. Δ_max} p_E(y - (z + Δ)) · Prob(Δ)    (5-7)

The probability function Prob(Δ) is the PDF of Δ and is treated as a uniform distribution between Δ_min and Δ_max. In an 8-bit imaging system, the maximum value of e and Δ is 255; thus we take Δ_max = 255, so Prob(Δ) is simply 1/(255 - Δ_min) and Equation (5-7) becomes a discrete summation from Δ_min to 255:

    Prob(y | Hot) = [1 / (255 - Δ_min)] · Σ_{Δ = Δ_min .. 255} p_E(y - (z + Δ)).    (5-8)

The evaluation of Equation (5-8) is performed for each identified defect on the sensor and repeated for every image in the dataset. With n defects on the sensor and k images in the dataset, the calculation must be repeated n × k times; this overhead is a major drawback for large image datasets. Equation (5-8) can, however, be simplified with the change of variables x = y - (z + Δ). The inner summation of Equation (5-8) then reduces to the interpolation error CDF, as shown in Equation (5-9).
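As a numerical sanity check of this simplification (a sketch with an arbitrary, randomly generated error PDF; array index i stores error value e = i - 255, and the exact discrete lower bound is kept):

```python
import numpy as np

# an arbitrary normalized interpolation-error PDF over e = -255 .. 255
rng = np.random.default_rng(0)
pdf = rng.random(511)
pdf /= pdf.sum()         # p_E(e), stored at index e + 255
cdf = np.cumsum(pdf)     # P_E(e)

def prob_hot_sum(y, z, delta_min):
    """Eq. (5-8): direct summation of p_E(y - (z + delta)) over delta."""
    total = sum(pdf[int(np.clip(y - (z + d), -255, 255)) + 255]
                for d in range(delta_min, 256))
    return total / (255 - delta_min)

def prob_hot_cdf(y, z, delta_min):
    """The same quantity as a CDF difference, as in Eq. (5-9).

    Assumes |y - z - delta| stays within the 8-bit error range.
    """
    hi = y - z - delta_min
    lo = y - z - 255
    lower = cdf[lo - 1 + 255] if lo > -255 else 0.0
    return (cdf[hi + 255] - lower) / (255 - delta_min)
```

For in-range arguments the two forms agree, but the CDF version replaces a loop of more than 200 PDF lookups with a single subtraction, which is what makes large image datasets tractable.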

    Prob(y | Hot) = [1 / (255 - Δ_min)] · Σ_{x = x_lower .. x_upper} p_E(x), where x_lower = y - z - 255 and x_upper = y - z - Δ_min

    Prob(y | Hot) = [1 / (255 - Δ_min)] · [P_E(y - z - Δ_min) - P_E(y - z - 255)]    (5-9)

Now the calculation of Prob(y | Hot) requires just a simple subtraction between two values read from a CDF vector.

5.1.1 Interpolation scheme

The core of the algorithm is the comparison of the interpolated value with the observed pixel value to determine the presence of hot pixels in each image. As shown in the derivation of the Bayes detection algorithm, the PDF p_E(e) and CDF P_E(e) are derived from the image-wide interpolation errors of all pixels. Therefore the choice of interpolation scheme has a significant effect on the accuracy of the algorithm. Interpolation from a close region around pixel x provides the closest estimate of the value of x; for example, averaging over the 3x3 nearest neighbors usually gives the most accurate estimate of x. However, in our interpolation we want to estimate the expected good output of x. The demosaicing analysis in section 3.3 showed that a single defective pixel will spread into, and distort, the neighbouring pixels around the defect. Hence, estimating the good output from the 3x3 region of a defective pixel is misleading. To achieve a better estimate of the good output of each pixel, we modify the typical interpolation mask to one with ring averaging, as shown in Figure 5-2. An example of regular 5x5 averaging is shown in Figure 5-4(a); its estimate is simply the average of all pixels in the mask area. Shown in Figure 5-4(b) is an example of the 5x5

ring averaging mask. The pixel x is interpolated using the average of only the pixels on the perimeter of the mask. Hence we get a better approximation of the good output of a defective pixel by eliminating the immediate neighbour pixels, which are most affected by the demosaicing spread of the defect.

Figure 5-4. A 5x5 pixel interpolation mask weighting factor: (a) regular averaging, (b) ring averaging. Note: the 0 and x pixels are not counted in the averaging.

The image-wide interpolation error distributions derived from a collection of 10 images using the two interpolation schemes are shown in Figure 5-5. Note that since a 3x3 mask consists only of the immediate neighbours of the center pixel, a 3x3 ring mask is identical to the 3x3 regular mask. A summary of the distribution plots is reported in Table 5-1.

Table 5-1. Comparison of interpolation errors from various interpolation schemes.

           3x3            5x5            7x7
           Mean    Std    Mean    Std    Mean    Std
Regular
Ring       NA      NA
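A minimal sketch of this ring averaging (illustrative only; border handling and any channel-aware details of the real implementation are omitted):

```python
import numpy as np

def ring_average(img, i, j, radius=2):
    """Estimate pixel (i, j) from the perimeter of a (2r+1) x (2r+1)
    window centred on it, as in Figure 5-4(b). The interior pixels,
    which demosaicing may have contaminated with the defect, are
    skipped. Assumes the window lies fully inside the image.
    """
    win = img[i - radius:i + radius + 1, j - radius:j + radius + 1].astype(float)
    mask = np.zeros(win.shape, dtype=bool)
    mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True  # perimeter only
    return win[mask].mean()

# on a smooth ramp the ring average recovers the centre value exactly
img = np.arange(25, dtype=float).reshape(5, 5)
print(ring_average(img, 2, 2))   # 12.0, equal to img[2, 2]
```

With radius=1 the perimeter is the whole 3x3 neighbourhood, which is why the 3x3 ring and 3x3 regular masks coincide.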

Figure 5-5. Image-wide interpolation errors derived from regular and ring averaging: (a) 3x3, (b) 5x5 regular, (c) 7x7 regular, (d) 5x5 ring, (e) 7x7 ring.

Visual inspection of the 3x3 error distribution in Figure 5-5(a) shows a mean error of 0.55, and the count of small errors is much higher than for the other mask sizes. As the interpolation mask grows, the mean error increases to 0.67 with the 5x5 regular averaging and 0.76 with the 7x7 regular averaging. The same trend is found in the ring interpolation. Although the ring averaging omits the nearest neighbour pixels from the interpolation, the average error of 0.7 with the 5x5 ring is only slightly higher than the average error of 0.67 with the 5x5 regular mask. Thus we do not lose significant accuracy by omitting the immediate neighbours.
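The image-wide error statistics behind these distributions can be accumulated as follows (a sketch; `actual` and `interpolated` stand for the observed frame and its interpolated estimate):

```python
import numpy as np

def error_pdf_cdf(actual, interpolated):
    """Image-wide interpolation-error PDF p_E(e) and CDF P_E(e) for
    8-bit data; index i of the returned arrays holds error e = i - 255."""
    e = (actual.astype(int) - interpolated.astype(int)).ravel()
    pdf = np.bincount(e + 255, minlength=511) / e.size  # normalized histogram
    return pdf, np.cumsum(pdf)

# toy 2x2 frames: two exact matches, errors of +2 and -2
actual = np.array([[10, 12], [8, 10]])
interp = np.full((2, 2), 10)
pdf, cdf = error_pdf_cdf(actual, interp)
print(pdf[255], cdf[-1])   # p_E(0) = 0.5 and the CDF ends at 1.0
```

For a good interpolator the mass concentrates near e = 0, which is exactly what lets Equations (5-4) and (5-5) separate good pixels from hot ones.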

5.1.2 Windowing and Correction scheme

Consider a time-ordered sequence of pictures from a camera (i.e. an image dataset). We now use the Bayes accumulation of statistics to identify when a defect develops within the sequence of images. The detection of defects is based on the change in the accumulated probability Prob(Good | y_k). However, the visibility of the bad pixels is affected by conditions such as the scene being captured, the exposure and the ISO speed used. The resulting statistics from a large set of images encounter problems such as saturation of the accumulated Bayesian value. For example, if a pixel turns bad after a long operation time, the accumulation from early images in the dataset will cause the probability to saturate at the good state, making it hard to detect the small deviation of a low impact defect. To better identify the instantaneous change of a pixel caused by a developing defect, it is better to confine the calculation to a subset of the image sequence using a window, within which the changes from weaker defects can be detected. For a sliding window of length n (i.e. n pictures) moving through the picture sequence, the accumulation is defined by the n most recently loaded images, as shown in Figure 5-6.

Figure 5-6. Sliding window approach to defect identification.
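A sketch of the sliding-window accumulation (illustrative; here a single FIFO of per-image likelihood pairs stands in for the two per-defect queues of the real implementation, and each window restarts from a flat prior):

```python
from collections import deque

def windowed_good_prob(likelihoods, window=3):
    """Sliding-window Bayes accumulation (Figure 5-6).

    likelihoods: time-ordered (Prob(y|Good), Prob(y|Hot)) pairs, one per
    image. For each image the posterior Prob(Good|y_k) is recomputed
    from only the `window` most recent pairs, starting from a flat
    prior, so a long run of good images cannot saturate the statistic.
    """
    fifo = deque(maxlen=window)   # oldest pair is dropped automatically
    for pair in likelihoods:
        fifo.append(pair)
        prob = 0.5                # flat prior at the window start
        for lik_good, lik_hot in fifo:
            num = lik_good * prob
            den = num + lik_hot * (1.0 - prob)
            prob = num / den if den > 0.0 else prob
        yield prob

# good-looking evidence for 3 images, then hot-looking evidence for 3
seq = [(0.3, 0.01)] * 3 + [(0.01, 0.3)] * 3
print([round(p, 3) for p in windowed_good_prob(seq, window=3)])
# falls below the 0.5 threshold within two images of the change
```

Because stale evidence leaves the FIFO, the posterior tracks the pixel's recent behaviour rather than its whole history, which is what makes low impact defects detectable late in a long dataset.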

The implementation of the sliding window is translated into two First-in-First-out (FIFO) queues for each defective pixel. The two FIFO queues store the p_E(e) and P_E(e) values calculated for each defect from each specific image, as shown in Figure 5-6.

A problem with the algorithm up to this point is the assumption that the interpolation error in an image is minimal, so that any large measured error is due to the presence of the defect. This is not always true, as the details of the image scene affect the accuracy of the interpolated values. For example, a local image region with an edge or fine detail tends to have large color variations, and in such cases a large estimation error is unavoidable. In addition, for images captured at high ISO settings, the noise level becomes an issue even in a uniform color region, which also degrades the performance of the interpolation scheme. A simple solution would be to filter such images out of the dataset. However, fine details are common in localized regions of a picture, so tossing out whole images would potentially flush away other useful information. Instead of discarding images, we designed a post-correction procedure which can correct false identifications caused by interpolation error.

The post-correction procedure is shown in Figure 5-7. First, each defect is identified by examining the plot of Prob(Good | y_k) over the entire image dataset, as shown in Figure 5-7(b). The first image where Prob(Good | y_k) falls below the threshold value is declared the first defect development date. Given the point where Prob(Good | y_k) < threshold, the post-correction procedure examines the local region in the k-th image. The

evaluation is based on the color mean and variance around the defective pixel. If the color variance and mean exceed predefined thresholds, the region suffers large interpolation error or contains pixels at or near saturation; hence an identification from this region is unreliable and is considered invalid. If the identified point fails the post-correction test, then the next detection point in the sequence is tested in the same way. Since hot pixels turn on at a particular time and do not change thereafter, the creation point can thus be identified.

Figure 5-7. Post-correction procedure.

5.2 Simulation results

To test the algorithm, we first create a set of images into which simulated defects are injected. The Bayes detection algorithm is then used to find the first image in which each defect was injected. Several factors can affect the performance of the algorithm: the window length, the interpolation ring size, the image exposure time, and the magnitude of the defect parameters. Each of these factors will be examined in the simulations for its impact on the performance of the detection algorithm.
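Following the pixel model of Equation (5-1), the defect injection used in these simulations can be sketched as follows (an illustration only; the scaling of the dark current to 8-bit counts and the clipping at saturation are assumptions of this sketch):

```python
import numpy as np

def inject_hot_pixel(raw, i, j, r_dark, t_exp, b=0.0):
    """Add a simulated hot pixel at (i, j) to a raw (pre-demosaic) frame.

    The dark offset T_exp * R_Dark + b (taken here as a fraction of full
    scale, per the model of Eq. 5-1) is scaled to 8-bit counts, added on
    top of the scene value and clipped at saturation. Demosaicing the
    result then spreads the defect into a coloured cluster.
    """
    out = raw.astype(float).copy()
    out[i, j] = min(255.0, out[i, j] + 255.0 * (t_exp * r_dark + b))
    return out
```

For example, a dark current of 0.2/s at a 0.5 s exposure adds only about 26 counts, close to the noise floor, while a long exposure drives the same defect toward saturation; this is the exposure dependence the simulations probe.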

The experiment is performed on a 1 MP simulated sensor of size 1234 x 823 pixels. A fixed number of simulated standard hot pixels are scattered randomly over the sensor area. To simulate an aging imager, additional defects are added progressively at a fixed rate over the set of 50 images: we start with a defect-free sensor and, after every k-th image, an additional defect is created. This process allows us to keep track of the first image at which each defect was injected. For each RGB color photo used in the simulation, we first convert the image into raw form, where a defect can be injected. The dark current of the simulated hot pixel is added on top of the pixel value from the image to create the defective pixel. Then a bilinear demosaicing function is used to return the image to full color form.

Both the magnitude of the dark current and the exposure setting used to capture the photo impact the visibility of faults in an image. Hence, the evaluation of the Bayes detection function is divided into 3 parts. First we focus on the dark current of the defect, next on the exposure time of the image, and finally we simulate a random process which models a real image dataset. For each set of simulations we test the limits at which defects are still detectable, and in each experiment we explore different ring mask sizes and window lengths.

The detection of each defect is classified as a hit or a miss. A hit indicates the algorithm is able to identify the image in which the defect first appears. The error, Δk, is the image count deviation between the known first defective

image and the detected image. Hence, a hit occurs when Δk = 0. A miss occurs when the algorithm fails to detect the defect in any image in the sequence.

In the first simulation we evaluate the performance of the algorithm for defects spanning a range of dark current magnitudes (in /s). On each simulation run, 10 faulty pixels with the same dark current are randomly scattered over the sensor area. Each defect is injected into the images assuming a range of shutter durations (in s). The Bayes detection algorithm calculates Prob(Good | y_k) for each defect over the sequence of images. The first appearance of a defect is identified with a threshold test, Prob(Good | y_k) < 0.5. The simulation is repeated for the three interpolation schemes (3x3, 5x5 and 7x7 ring) at window lengths of 3, 5, and 7 images. Based on these settings, the results for each interpolation mask are summarized in Table 5-2, Table 5-3, and Table 5-4.

Table 5-2. Performance of Bayes detection at fixed dark current (Intp: 3x3).

                Window = 3           Window = 5           Window = 7
Dark Current    %Hit  %Miss  Δk     %Hit  %Miss  Δk      %Hit  %Miss  Δk

Table 5-3. Performance of Bayes detection at fixed dark current (Intp: 5x5 ring).

                Window = 3           Window = 5           Window = 7
Dark Current    %Hit  %Miss  Δk     %Hit  %Miss  Δk      %Hit  %Miss  Δk

Table 5-4. Performance of Bayes detection at fixed dark current (Intp: 7x7 ring).

                Window = 3           Window = 5           Window = 7
Dark Current    %Hit  %Miss  Δk     %Hit  %Miss  Δk      %Hit  %Miss  Δk

Two common trends were observed among the results from the three interpolation schemes. First, the highest number of undetected defects (i.e. %miss) occurred in the detection of low impact defects: a 12% miss rate is found in the detection of defects with dark current = 0.2 using a 7x7 ring with window = 7. As the magnitude of the dark current increases, the defects become more visible, and thus the hit rate (%hit) also increases. Secondly, the window length has a great impact on the accuracy of the detection. The window length of 3 images achieved the highest hit rate for all interpolation schemes. In contrast, the window length of 7 images suffered in accuracy, especially in the detection of low impact defects, where 10-12% of defects were not detected. The long window accumulates information from more images; thus the small changes from low impact defects are more difficult to detect over the long sequence of accumulation.

Shown in Figure 5-8 is the plot of Prob(Good | y_k) for a simulated defect with dark current = 0.2. The accumulated probability is calculated over a sequence of images using window lengths of 3, 5 and 7 images. As demonstrated in Figure 5-8(b) for window length 5 and Figure 5-8(c) for window length 7, both plots show a smooth Bayes accumulation; any small fluctuations, such as interpolation errors or the signatures of low impact defects, do not get emphasized. On the other hand, with a window length of 3, the plot in Figure 5-8(a) shows more fluctuation in the Bayes accumulation. With fewer images

used in the accumulation, small changes are emphasized, and thus low impact defects are more detectable in this setting. As reflected in the results, window 7 suppresses small changes, so its detection error is large and its miss rate is high. Although the window length of 5 has a lower miss rate than the window of 7 images, its detection error Δk is in most cases higher than with window 3.

Figure 5-8. Plot of Prob(Good | y) vs. image in the windowing test: (a) window length 3, (b) window length 5, (c) window length 7.

Despite the impact of window length, in most cases the 5x5 ring averaging achieved the best hit rate among the 3 interpolation schemes. With the short window setting (3 images), the 5x5 ring achieved a hit rate of 73% for defects with dark current = 0.8/s, with no misses. Although the 7x7 ring scheme with the same window length has a 75% hit rate, its average image error (i.e. Δk) and its miss rate are also higher, which suggests this ring size suffers large interpolation errors, as shown in Table 5-1.

In the second part of the simulation, we test the performance of the detection at different image exposure durations. For this simulation the exposure time (i.e. shutter speed) of the tested images is kept constant while the

simulated hot pixels have dark currents spanning the tested range (in /s). The tested exposure duration settings were 0.06, 0.125, 0.25 and 0.5 s. Again 10 simulated defects are injected into a 1 MP sensor and the simulation is repeated on 10 simulated sensors. The average hit and miss rates from the detections are summarized in Table 5-5, Table 5-6, and Table 5-7 for the three interpolation ring sizes.

Table 5-5. Performance of Bayes detection at fixed exposure (Intp: 3x3).

                 Window = 3           Window = 5           Window = 7
Exposure time    %Hit  %Miss  Δk     %Hit  %Miss  Δk      %Hit  %Miss  Δk

Table 5-6. Performance of Bayes detection at fixed exposure (Intp: 5x5 ring).

                 Window = 3           Window = 5           Window = 7
Exposure time    %Hit  %Miss  Δk     %Hit  %Miss  Δk      %Hit  %Miss  Δk

Table 5-7. Performance of Bayes detection at fixed exposure (Intp: 7x7 ring).

                 Window = 3           Window = 5           Window = 7
Exposure time    %Hit  %Miss  Δk     %Hit  %Miss  Δk      %Hit  %Miss  Δk

Observing the miss rates from all three interpolation schemes, the highest miss rate is ~28%, for detection from short exposure (0.06 s) images. This agrees with the visibility of hot pixels depending on the exposure level. At short

exposure times, the faults appear close to the noise level; thus it is hard to distinguish between the noise signal and the defect. The use of a long window left ~28 out of 100 defects undetected by the Bayes detection algorithm. This result suggests that if an image dataset consists mostly of short exposure images, the accuracy of the defect dates will suffer large errors. The impact of window length on the performance of the detection was similar to that observed in the previous simulation. The long window settings suffered in accuracy, especially when detecting faults in short exposure images, where the hit rate is below 10%. The use of a short window length improves the detection drastically, to a 40% hit rate. The brightness of a hot pixel is a function of the exposure duration; hence hot pixels appear close to the noise signal when captured at short exposure durations. Accumulation with a long window tends to neglect the small changes from weak hot pixels, which results in a high miss rate. Since small changes are emphasized in a short window, this suggests that confining the accumulation to a subset of images is crucial for detecting low impact defects and defects in short exposure images.

The choice of the interpolation mask also affects the performance of the detection. Again, in most cases, the 5x5 ring gives the highest hit rate. Both the 7x7 and 3x3 rings suffered from problems with interpolation errors or spreading of the defects, which is reflected in their lower hit rates.

In the last part of the simulation, we model what is expected of a real image dataset, where both the dark current and the exposure time vary. The dark currents of the simulated defects span the tested range (in /s)

and the exposure time of each image varies over the tested range (in s). Again we simulate 10 sensors, each with 10 simulated standard hot pixels randomly scattered over the sensing area. The average detection results from the 10 simulated sensors are summarized in Table 5-8.

Table 5-8. Performance of Bayes detection using various interpolation schemes.

                        Window = 3           Window = 5           Window = 7
Interpolation Scheme    %Hit  %Miss  Δk     %Hit  %Miss  Δk      %Hit  %Miss  Δk
3x3
5x5
7x7

Consistent with the previous simulation results, the 5x5 ring averaging has the best average hit rate compared to the 3x3 and 7x7 rings. The impact of demosaicing on the nearest neighbours is reflected in the large error of the 3x3 ring averaging. Although the 7x7 averaging omits the immediate neighbour pixels, the large ring size fails to give accurate pixel estimates because pixels at that distance do not reflect the actual pixel value well; as seen in Table 5-1, this ring mask suffers the largest interpolation errors. The second factor that affects the performance of the detector is the length of the sliding window. From the trend observed in the previous simulations, the short window has the advantage in identifying low impact defects. The results in Table 5-8 again show that the window of 3 images has the highest hit rate, 78%. As discussed in section 4.2, in-field defects are mostly created with low damage. Hence, our simulations suggest that the optimal setting of the detector is the 5x5 ring averaging scheme with a window length of 3.

It is important to note that our simulations are based on standard hot pixels. The additional offset in a partially-stuck hot pixel will enhance the brightness of the fault and reduce the impact of exposure duration on detection; hence, the detection accuracy for such defects should be higher. The main problem suffered by this algorithm is that its performance depends strongly on the parameters of the images available. In other words, for periods when images are captured frequently, there are more samples to test for defects, and the date error in detection remains small. If images are seldom collected, the detection date error will be large, as we saw with the calibrations. In addition, the appearance of the defects is highly affected by the camera settings mentioned before: the exposure time and the ISO setting. When images are captured on a sunny day (short exposure and low ISO), defects are not likely to be visible, which can delay our detection of defects. By comparison, long duration and high ISO pictures enhance the defects and provide better conditions for detecting them.

5.3 Experimental results

With the defect trace algorithm, we can extract the defect development date by analyzing for the first appearance of the defect in the regular photos captured by the imager. Photos are captured on a regular basis, providing more frequent samples of the sensor state than the yearly calibrations. However, due to privacy issues, we only have access to image datasets from 7 of the 21 cameras shown in Table 4-1. The specifications of the cameras

available are listed in Table 5-9. This subset of cameras consisted of 5 APS and 2 CCD imagers of size ~23 x 26 mm and an average pixel size of 6.5µm.

Table 5-9. Specification of test cameras. Camera Sensor Type Sensor Size Pixel Size Age A APS 22.7 x x B APS 36.0 x x C APS 22.7 x x D APS 22.2 x x N CCD 23.6 x x P CCD 23.7 x x Q APS 22.5 x x

Previously, simulation results demonstrated that the 5x5 ring averaging was able to compensate for the spreading of defects due to demosaicing. In addition, the window length of 3 provides the optimal setting for detection from short exposure images and low impact defects. Thus this combination will be the setup for the Bayes detection algorithm in the following experiment. Based on the defects identified from the calibrations at ISO 400, we applied the Bayes detection algorithm to find the first image in which each defect clearly appears; the defect date is then read from the metadata of that image. Across the 7 tested cameras there are well over 30,000 images, so it was not feasible to visually inspect each image for the presence of defects. Instead we will be comparing with the growth rate measured from the calibration method as shown in section. Indeed it is common with digital cameras to take tens to hundreds of pictures within a day. Hence, there will be a stream of images capturing the state of the sensor within a relatively short time frame.
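The ring averaging described above can be sketched as follows. This is a minimal sketch: the function name and the simplified boundary handling are ours, not from the thesis.

```python
import numpy as np

def ring_average(img, r, c, ring=5):
    """Estimate pixel (r, c) from the perimeter of a ring x ring
    neighbourhood. The nearest neighbours are skipped because
    demosaicing spreads a defect into them (ring=5 keeps only pixels
    two steps away); pixels outside the image are simply dropped."""
    k = ring // 2
    rows, cols = img.shape
    vals = []
    for dr in range(-k, k + 1):
        for dc in range(-k, k + 1):
            if max(abs(dr), abs(dc)) != k:
                continue  # interior pixels (incl. nearest neighbours) excluded
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                vals.append(img[rr, cc])
    return float(np.mean(vals))
```

The interpolation error, `img[r, c] - ring_average(img, r, c)`, is then the per-pixel statistic fed to the detector.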

The defect dates identified from the Bayes detection algorithm are plotted against the sensor age as shown in Figure 5-9. Similar to the plots generated from the calibration method, the growth of defects from the Bayes detection also follows a linear trend. Hence a linear regression fit function is used to measure the defect growth rate. The fitted defect rates for each imager using the two methods are summarized in Table 5-10.

Table 5-10. Manual calibration and Bayes detection growth rate comparison at ISO 400. Camera Defect growth rate (defects/year) Manual Calibration Bayes Detection Diff (%) A 3.57 ± ± B 7.35 ± ± C 1.20 ± ± D 3.68 ± ± N ± ± P 3.81 ± ± Q 0.77 ± ±
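The regression step can be sketched as follows; `np.polyfit` is our stand-in for whatever fitting routine was actually used.

```python
import numpy as np

def defect_growth_rate(ages_months, cumulative_defects):
    """Fit cumulative defect count vs. sensor age (months) with a
    straight line and return the slope converted to defects/year."""
    slope_per_month, _intercept = np.polyfit(ages_months, cumulative_defects, 1)
    return slope_per_month * 12.0
```

For example, a sensor gaining 0.3 defects per month fits to a rate of 3.6 defects/year.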

(Panels (a)-(g): total number of defects vs. camera age in months for cameras A, B, C, D, N, P, and Q.)

Figure 5-9. Defect growth rate at ISO 400 with calibration and Bayes search identification.
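The search that produces the defect dates behind these plots can be sketched as below; the per-frame detector interface and the (date, frame) pairing are our assumptions, standing in for the Bayes detector and the image's metadata capture date.

```python
from datetime import date

def first_defect_date(images, is_defective):
    """Return the capture date of the earliest image in which the
    detector flags the defect, or None if it is never flagged.
    `images` is an iterable of (capture_date, frame) pairs; frames
    are whatever the detector consumes (hypothetical interface)."""
    for capture_date, frame in sorted(images, key=lambda pair: pair[0]):
        if is_defective(frame):
            return capture_date
    return None
```

The returned date is the estimate plotted against sensor age; its error shrinks as images are captured more frequently.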

The defect rates measured by the two methods are in close agreement. The largest difference is ~30%, found in cameras P and Q. For camera P (Figure 5-9(f)) and camera Q (Figure 5-9(g)), the measure of defect growth rate from the calibration is based on the result of one single test. Moreover, this single calibration was taken when these cameras were 2-4 years old, so any defect that developed at an early stage of the sensor's life will suffer a large estimation error (i.e., up to 4 years). The Bayes detection for camera P in Figure 5-9(f) demonstrated that by analyzing the historical images, this method is able to recover information that was missing from the calibration. Hence the defect rate measured by the Bayes detection will be more accurate. The comparison shows that, in all but one case (camera D), where the difference between the measured rates is still within the regression error, the calibration gives an underestimate of the defect rate. This would be expected because of the large time separation between calibrations. The disadvantage of Bayes detection is that it requires access to images taken by the camera, which is not always available. Both cameras P and Q suffer from this problem, as we only have access to a subset of their images. Thus, as seen in camera P, some defects were not found due to missing samples from the image set. Having access to the images captured by the camera is important, as each image records the state of the sensor. This is also shown in Figure 5-9(b) for camera B, a 1 year old sensor. The rate estimated with the manual method is also based on a single calibration, but the time gap between the purchase date and the first calibration is less than

1 year. The difference between the two estimated rates is therefore less than 2%. Also, the large set of images captured in this short period potentially increases the accuracy of the Bayes detection. The number of defects found on a sensor depends on the ISO setting used to perform the calibration. In chapter 4, we showed that calibration at high ISOs will reveal some low impact defects which were not detectable above the noise in the low ISO pictures. As more defects are found at the higher ISOs, the defect rates will change. In the following experiment we apply the Bayes detection to the same set of cameras given the defects found at ISO 800 and 1600. Since only 4 of the 7 cameras had been calibrated at these ISO levels, this test is only performed on a subset of imagers (i.e., cameras A, B, C and D). The estimated rates for these cameras are summarized in Table 5-11 for ISO 800, and Table 5-12 for ISO 1600.

Table 5-11. Manual calibration and Bayes detection growth rate comparison at ISO 800. Camera Defect growth rate (defects/year) Manual Calibration Bayes Detection Diff (%) A 3.35 ± ± B ± ± C 2.10 ± ± D 3.86 ± ±

(Panels (a)-(d): total number of defects vs. camera age in months for cameras A-D.)

Figure 5-10. Defect growth rate at ISO 800 with calibration and Bayes search identification.

Table 5-12. Manual calibration and Bayes detection growth rate comparison at ISO 1600. Camera Defect growth rate (defects/year) Manual Calibration Bayes Detection Diff (%) A 3.83 ± ± B ± ± C 3.80 ± ± D 7.71 ± ±

(Panels (a)-(d): total number of defects vs. camera age in months for cameras A-D.)

Figure 5-11. Defect growth rate at ISO 1600 with calibration and Bayes search identification.

The number of defects identified at ISO 800 and 1600 is significantly higher than that at ISO 400 (Table 5-10), and thus the measured defect rates increase. The most significant increase is in camera B, where the rate at ISO 1600 is roughly double its ISO 400 value of 7.47 defects/year. Due to the limited number of calibrations taken at these ISO levels, the difference

between the rates estimated by the two methods is greater than at ISO 400. Again, the large time gap between calibrations increases the error in the defect dates estimated from the calibrations. As shown for camera A in Figure 5-11(a), although several calibrations were used to approximate the defect rate, there was a ~4 year span in which no calibrations were taken at ISO 1600. This gap is due to the fact that the calibrations for this research did not begin until the camera was 4 years old. Hence, the defects that developed within those 4 years will be poorly estimated by the calibration. With the Bayes detection, images taken continuously can fill in the information missing from the calibration, retroactively covering the time before the calibration tests began. Indeed this shows how the defect development history can be recovered using the Bayes algorithm. Thus the rate measured using the Bayes detection should more closely resemble the real temporal growth of defects. Despite the fact that different techniques were used to approximate the defect rates, both methods suggest that faults develop continuously and that their numbers increase linearly with time. With more defects found at higher ISOs, the same trend was observed. A linear growth indicates that in-field defects are not likely caused by a single traumatic event or by material degradation; rather, the defect mechanism is a continuous source. Material-related defects would usually develop in local clusters, both in space and in time. Both APS and CCD sensors show an increase of defect count over time, which suggests the defect source is not related to the sensor architecture. Rather, these imagers are continuously exposed to the same random defect source.

5.4 Summary

Images captured by a camera can be utilized to trace back the development of defects while the imager operated in the field. In this chapter, a Bayesian recursive function was presented to automatically trace the first appearance of defects in the historical image dataset. The image-wide interpolation errors provide the statistics that measure the presence of defects. As part of this, an interpolation scheme is used to provide an estimate of a good pixel's output. To avoid inaccurate interpolation estimates caused by defect spreading in color images, a ring mask is used. In the ring interpolation scheme, the nearest neighbour pixels are discarded, as these pixels are most affected by demosaicing. The Bayesian function accumulates statistics over a sequence of images to measure the likelihood of a pixel being in a hot state. Long accumulations will cause the statistics to saturate; hence, a sliding window is used to confine the accumulations to the n most recent images. In addition, false detections due to large interpolation errors can be corrected with the post-correction procedure. From a set of simulations performed with the Bayes detection algorithm, the visibility of hot pixels limits the accuracy of the detection. Testing with various settings demonstrated that a window length of 3 images is the optimal setting to detect low visibility defects. In addition, the 5x5 ring averaging has been shown to be the best tradeoff between good pixel estimation and avoiding the defect pixel spreading problem.
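The windowed accumulation described above can be sketched as follows. This is a sketch of the sliding-window idea only; the per-image likelihood ratios and the uniform prior are illustrative assumptions, not the thesis's actual statistics.

```python
import math
from collections import deque

def hot_pixel_posterior(likelihood_ratios, window=3, prior=0.5):
    """Recursively accumulate per-image evidence for a pixel being
    hot, restricted to the `window` most recent images so the
    statistic cannot saturate. Each element of `likelihood_ratios`
    is P(observation | hot) / P(observation | good) for one image."""
    recent = deque(maxlen=window)   # automatically drops the oldest evidence
    posteriors = []
    for lr in likelihood_ratios:
        recent.append(math.log(lr))
        odds = (prior / (1.0 - prior)) * math.exp(sum(recent))
        posteriors.append(odds / (1.0 + odds))
    return posteriors
```

Because only the last `window` log-likelihoods contribute, a run of ambiguous images cannot pin the posterior at 0 or 1, which is the saturation problem the sliding window avoids.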

The defect rates measured using the Bayes detection are in close agreement with those from the calibration test method. The comparison between the two methods showed that the calibration method usually underestimates the defect rates due to the large time spans between calibrations. With the Bayes detection, the frequent images taken by the camera can fill in the information missing between calibrations. Hence the rates measured by this detection algorithm will be a more accurate measure of the true defect rates.

6: THE IMPACT OF PIXEL AND SENSOR DESIGN ON DEFECTIVE PIXELS

There are four main trends developing in the design of new digital imagers. First is the choice of APS or CCD type sensors. The early imager market was dominated by CCD sensors, while APSs were mostly used for low performance imaging devices. In recent years, APSs have been replacing CCDs as large area sensors in many DSLR cameras. The second trend is the expanded ISO range. Before 2008 most DSLRs had a limited usable ISO range; in newer camera models the typical top ISO has increased, and some high-end DSLRs have a usable range up to ISO 25,600. The increase of ISO permits natural light photography and reduces the use of long exposures under low light conditions. The third trend is the change in sensor size. The divergent demand for both large and small area sensors has increased drastically, driven by DSLRs (i.e., big sensors) and cellphone cameras (i.e., small sensors). As reported by CIPS[44], although the production of PS cameras is much higher than that of DSLRs, the relative growth of DSLRs per year is actually higher. The recent drive for better image quality has resulted in more full frame sensors being introduced into commercial high-end cameras. Since the image sensor was first introduced as an embedded device in cellphones, the demand for small sensors has increased drastically. A market report published in 2007 by

Tessera[45] showed that in 2006 over 600 million mobile phones had a built-in camera, and this trend is expected to continue. Since many cellphones embed more than one camera, the Tessera report showed that mobile phone devices account for 53% of the sensor market. The last trend is the increase in the pixel count on the sensor. From recent data collected by CIPS[44], the breakdown of megapixel counts on sensors for each year is shown in Figure 6-1.

(Bar chart: number of cameras manufactured, in millions, grouped into less than 2MP, 2-3MP, 3-6MP, 6-8MP, and over 8MP.)

Figure 6-1. Megapixel design trends in digital cameras, 2001 to 2010.

The number of pixels found in commercial imagers has increased from 250,000 pixels (2001) to over 10 MP (2010), and some high-end camera systems have up to 21 MP in DSLRs and 50 MP in medium format (Hasselblad). The increase in pixel count allows captured images to have a higher resolution, and thus the dimensions of printed images can be made larger. When

the sensor size remains the same, the increase of pixel count implies a shrinkage of pixel dimensions. In the previous sections we observed two trends from the analysis of defect rates in various imagers. First, we saw that CCD sensors appear to have a higher defect rate compared to APSs. This trend suggests that CCD sensors might be more sensitive to the defect source due to the different designs of the two sensor types. Secondly, we saw that for the same type of sensor, the larger area sensors have a higher defect rate. At the same time, newer cameras which use small sensors with reduced pixel sizes also appear to have a higher defect rate per sensor area. These two observations point to the possible impact of the four new imager design trends on defect rates. In the following sections, we explore each design trend in detail and analyze the impact it has on the defect development rate.

6.1 Impact of sensor design trends on imager defects

New commercial digital cameras are improving in different aspects such as the choice of sensor (i.e., CCD or APS), ISO range, sensor size and pixel count. From our defect analysis, we have correlated some of these changes with the impact of defects on the sensors. In this study, we examined the defects from three classes of imagers: cellphones, PSs and DSLRs. Small sensors (~20mm²) are used in the embedded cellphone and PS cameras, mid-size sensors (~300mm²) are used in entry and mid-range DSLRs, and full frame sensors (864mm²) are found in professional DSLRs.

The typical specifications for each of the three types of sensors are listed in Table 6-1.

Table 6-1. Average sensor and pixel sizes from tested cameras. Camera Type / Sensor Type / Sensor Size (mm) / Sensor Area (mm²) / Pixel size (µm) / Pixel area (µm²): Cellphone, APS, 5.40 x; Point-and-shoot, CCD, 6.08 x; Mid DSLR, APS; Mid DSLR, CCD; Full frame DSLR, APS.

Table 6-2 summarizes the average defect rates for the three classes of cameras at various ISOs. The average defect rates of DSLRs are collected from Table 4-10 and Table 4-11 and are categorized by sensor type and size. The temporal defect growth rates of the cellphones are collected from Table 4-12, and those of the PS cameras are from Table.

Table 6-2. Average defect rate for various sizes of sensors. Defect rate (defects/year) by ISO level (from 100 up) for Cellphone (APS), Point-and-shoot (CCD), Mid DSLR (APS), Mid DSLR (CCD), and Full frame DSLR (APS); entries not measured at a given ISO are NA.

Defect count on APS vs. CCD

During the early development of digital imagers, CCDs were the main sensors employed in this application. In the 1990s, with improvements in CMOS technology, APS sensors gained more attention and were

recognized as one of the mainstream imaging devices. The CCD sensors, being the more mature technology, require dedicated process lines. This sensor is usually the preferred choice for medium quality imaging (i.e., DSLRs, PS, and scientific imaging). However, in recent years, with the tremendous improvements in APS sensors, many commercial DSLRs are moving toward APS sensors. Also, since APS sensors are CMOS-compatible, they are favoured for many embedded applications such as cellphone devices. In terms of defects found on these sensors, our study observed a significant difference in defect count between the two sensor types. In particular, we noticed that most of the tested CCD imagers tend to have a higher defect count than APS sensors of the same age. As shown in Table 6-1, the average sensor area of the mid-size APS, 338.36 mm², is close to that of the mid-size CCD sensors. As observed from Table 6-2, the defect rates measured at ISO 400 from our collection of DSLRs are 5.75 defects/year for the CCD sensors and 1.82 defects/year for the APS sensors. Although the sensing areas of the two imagers are nearly the same, the defect rate of the CCDs is ~3x higher than that of the APS imagers at all ISO levels. By comparison, in the full frame APS sensors (864mm²), the sensing area is 2.3x larger than that of the mid-size CCD sensors. However, at ISO 400, the defect rate of the CCD sensors is still 1.2x higher than that of the full frame sensors (4.68 defects/year). The high defect rate of the CCDs suggests this sensor might be more sensitive to the defect source. From the average pixel areas shown in Table 6-1, the area of the CCD pixels is 2x larger than that of the APS pixels. In

addition, the fill factor of CCD pixels (~70-90%) is ~2-3x larger than that of the APS pixels (~25-30%). This is in agreement with the high defect rate observed from CCD sensors. Hence, the higher defect rate indicates that the larger photosensitive area of the CCD pixel increases its exposure to the defect source. Since cosmic ray occurrence scales with area, this trend is in agreement with cosmic rays being the source mechanism. The defect rate of CCDs will create a greater impact as we increase the sensor size. If we scale the defect rates with the sensor area, then at ISO 3200 the defect rate of the mid-size CCD sensor will increase to 64.1 defects/year on a full frame sensor (i.e., a scaling factor of 2.3x). In fact, this approximation is for sensors operating in the terrestrial environment, where the radiation level is minimal. Many large area CCD imagers are employed in space applications where the radiation level is 300x higher. Thus the expected defect rates in the space environment are much higher, and the usable lifetime of these sensors is severely limited by the high defect rate.

Impact of ISO trend on defects

The second trend observed in newly released cameras is the expanded ISO range. As sensor technologies improve, the noise level is reduced and thus the usable ISO range expands. This trend is especially noticeable in the mid and high-end DSLRs. Our study has shown that the hot pixel intensity scales approximately with the ISO level. Thus doubling the ISO will double the

intensity of each fault. One of the main trends shared by all tested cameras is the increase of defect count when calibration is performed at higher ISO settings. The improvement in noise level enables a clearer distinction between the background noise and defects at the extended ISO levels. Hence, calibration at these higher ISO ranges can reveal more low impact defects. In fact, the amplification of the dark offset is most significant. At ISO 400, 46.3% of the defects are classified as partially-stuck hot pixels, but at ISO 1600 the number increases to 71.2%. This is important as partially-stuck hot pixels, unlike the standard ones, affect the sensor at very short exposure durations. The results from our analysis have pinpointed that many of these extra hot pixels are created with low damage. The increase of ISO gain will cause these low impact defects to become more prominent. In addition, the number of saturated defects at these high ISO levels creates a major impact on the image quality. In the analysis of defect rates summarized in Table 6-2, we observed an increase in the defect rates as more low impact defects were found at the higher ISO levels. Hence the rates measured at the expanded ISO range provide a closer approximation to the true defect development rate across all strengths of defects.

Defect growth rate vs. sensor area

The third trend in the new cameras is the change in sensor size. In recent years, full-frame sensors have been used in DSLRs to match the size of traditional 35mm film. This large area sensor provides more vibrant image quality and better operation in low-light conditions. However, at the same

time, the number of defects found on the sensors will also increase. On the other hand, a rising class of imagers, embedded cellphone cameras, has dominated the market for small sensors. If all sensors are exposed to the same radiation level, we would expect the sensor with the largest sensing area to develop the largest number of defects, provided the pixel sizes are constant. The defect rates reported in Table 6-2 show that at ISO 400 the rate for the cellphone cameras, which have the smallest APS sensor (23.11mm²), is 3.55 defects/year. This defect rate is much higher than the 1.82 defects/year of the mid-size APS (338.36mm²) and comparable to the 4.68 defects/year of the full-frame APS (864mm²). The sensor area of the cellphone camera is 6.8% of the mid-size APS sensor and only 2.8% of the full frame sensor. If we scale the defect rate of the cellphone camera (3.55 defects/year) by the sensing area, the expected rates on a mid-size DSLR and on a full-frame sensor are much higher than the observed DSLR rates reported in Table 6-2. It is important to note the pixel size difference between these three sensors: the small sensors have a pixel size of 2.2 x 2.2µm whereas the mid and large area sensors have pixel sizes of ~6.2 x 6.2µm. This suggests a possible impact from the reduction of pixel size. Now, between the two APS DSLRs, the sensing area of the mid-size DSLRs is only 39% of the full frame sensor. If the defect rate scales proportionally with the sensor area, we would expect the defect rate on a full frame sensor to be 2.55x that of the mid-size DSLRs. Taking the defect rates

from Table 6-2 for the mid-size DSLRs (APS) measured at various ISOs, we can calculate the expected defect rates of the full frame sensor by scaling these measurements by the 2.55x factor, as shown in Table 6-3. The calculated rates are compared to the observed full frame defect rates collected from Table 6-2 and show close agreement.

Table 6-3. Comparison of APS DSLR defect rates at various ISOs scaled with sensor area. Defect rate (defects/year): ISO / Mid-size DSLR / Expected full-frame / Observed full-frame; full frame differences: 44.29%, 4.22%, 0.86%, 5.21%, 1.69%.

The results shown in Table 6-3 indicate that the expected rates calculated with the scaling factor closely resemble the observed rates measured from our tested full frame imagers. The area-scaled rates average only an 11.25% difference from the actual full frame rates. Unlike the cellphone cameras, the mid-size and full frame sensors have approximately the same pixel size. Thus this result shows that the defect rate scales with the sensor area when the pixel size remains the same. Hence we should use a metric of defect rate per sensor area (i.e., defects/year/mm²) when comparing sensors.

Defect growth rate vs. pixel size

The last design trend found among most new cameras is the increase of pixel count on the sensor. While the sensor sizes of the PS and mid-size DSLRs do not change much, the increase of pixel count on the imager will reduce the pixel dimensions. This shrinkage of pixel size will reduce

the sensing area of each pixel; hence, the dynamic range and the signal-to-noise ratio will be reduced as well. In the previous section, we observed that the defect rates scale with the sensing area when the pixel size on the imager is nearly the same. Hence, we can normalize the defect rates from Table 6-2 by the sensing area, as summarized in Table 6-4 for various ISOs and camera types.

Table 6-4. Average defect rate per sensor area for all camera types at various ISOs. Defect rate per sensor area (defects/year/mm²) by ISO level for Cellphone (APS), Point-and-shoot (CCD), Mid DSLR (APS), Mid DSLR (CCD), and Full frame DSLR (entries on the order of 10⁻²).

Again from Table 6-4, the defect rate per mm² of the cellphone cameras is ~28x higher than that of the mid-size and full frame DSLR APS sensors. The defect rate of the small PS CCD sensors is 16x higher (at ISO 400) than that of the mid-size DSLR CCD sensors. Hence this suggests a possible impact from the increase of pixel count or the shrinkage of pixel size. A study on defect size by Dudas[32] showed that the estimated defect point size is very small: with the 367 isolated defects observed at ISO 1600, the estimated defect size is <0.04µm, which is well within the 2.2µm pixel size. Hence defects on the small sensors should be a point source, and the dark current magnitude should remain the same, independent of the pixel size.
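The area scaling behind Table 6-3 and the normalization behind Table 6-4 can be sketched with two one-line helpers; the function names are ours, and the figures in the usage check are the sensor areas and rates quoted in the text.

```python
def scale_rate_by_area(rate, area_from_mm2, area_to_mm2):
    """Scale a defect rate (defects/year) from one sensor area to
    another; valid only when the pixel sizes are comparable."""
    return rate * (area_to_mm2 / area_from_mm2)

def rate_per_area(rate, area_mm2):
    """Normalize to defects/year/mm^2 for cross-sensor comparison."""
    return rate / area_mm2
```

For example, scaling the ISO 400 mid-size APS rate of 1.82 defects/year by 864/338.36 ≈ 2.55x predicts ~4.6 defects/year on a full frame sensor, close to the observed 4.68.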

However, the small APS pixels have less sensing area compared to the large pixels. Since the capacitance of the photodetector scales approximately with the sensing area, and the collected photocurrent does too, the output of the pixel remains roughly constant as pixels shrink. However, the dark current magnitude does not change for a given defect. Hence, when the pixel size shrinks, the sensitivity of the pixel to each dark current electron increases. This means that even weak hot pixel damage can cause a significant effect in small pixels. Assume that all pixels have the same efficiency and that the capacitance of the pixel scales proportionally with the pixel area. Recall the output of the photodetector from Equation (2-8). As demonstrated in Figure 6-2, if the pixel area is reduced by half, then the sensitivity of the small pixel to each electron will double.

Figure 6-2. Impact of dark current on large and small pixels.

This scaling factor acts like the ISO amplification factor m from Equation (3-3). Hence, shrinking the pixel dimensions will increase the scaling factor m, and the defect parameters will scale like I_offset from Equation (3-4). Thus hot pixels that are considered low impact defects in the large pixels will become more prominent in the small pixels when measured at the same ISO level.
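The capacitance argument can be made concrete with a small sketch. The ~1 fF/µm² capacitance density is an illustrative assumption of ours; only the ratios between pixel sizes matter.

```python
Q_E = 1.602e-19  # electron charge, coulombs

def volts_per_electron(pixel_area_um2, cap_density_f_per_um2=1.0e-15):
    """Output step caused by one dark-current electron, dV = q / C,
    with the photodiode capacitance C taken proportional to the
    pixel area (C = density * area)."""
    return Q_E / (cap_density_f_per_um2 * pixel_area_um2)
```

Halving the pixel area doubles the per-electron sensitivity, so a fixed dark current from a point defect looks correspondingly "hotter" on a smaller pixel.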

Using this assumption, the average pixel area of the mid-size DSLR APS (26.42µm²) is 5.45x that of the small APS pixels (4.84µm²) in the cellphone cameras. Hence the defect rate observed at ISO 400 from the small APS sensor should be compared to the defect rate of the large pixels measured at ISO 2180 (close to the ISO 1600 entry in our table). However, as shown in Table 6-4, the defect rate of the small APS pixels at ISO 400 (15.4x10⁻² defects/year/mm²) is 12x higher than that of the large APS pixels at ISO 1600 (1.32x10⁻² defects/year/mm²). With the same kind of comparison, the pixel area of the CCD sensors used in DSLRs (47.53µm²) is 9.64x that of the pixels on the PS sensors (4.93µm²). Hence the defect rates measured at ISO 100 from the PS sensors should resemble the rates measured at ISO 800 from the DSLRs. This comparison is summarized in Table 6-5.

Table 6-5. Comparison of defect rate per sensor area between CCD in PS and DSLRs (defects/year/mm²): PS (CCD) ISO levels paired against DSLR (CCD) ISO 800, ISO 1600, and ISO 3200.

The comparison of defect rates in Table 6-5 shows the development of faults in the PS is still 2-3x higher than in the DSLRs at the equivalent higher ISO levels. Both the small APS and CCD pixels show a higher defect rate. Hence, this finding indicates that the impact of defects on small area pixels is more significant than a simple scaling with the pixel area.
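The equivalent-ISO comparison above can be sketched as follows, with the pixel areas as quoted in the text.

```python
def equivalent_iso(iso_small, area_large_um2, area_small_um2):
    """ISO at which a large pixel matches the per-electron
    sensitivity that a small pixel already has at `iso_small`,
    assuming sensitivity scales as 1 / pixel area."""
    return iso_small * (area_large_um2 / area_small_um2)
```

So a 4.84 µm² cellphone pixel at ISO 400 behaves like a 26.42 µm² DSLR pixel at roughly ISO 2180, and a 4.93 µm² PS CCD pixel at ISO 100 like a 47.53 µm² DSLR CCD pixel near ISO 800-1000.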

Using the defect rates at ISO 400 collected from Tables 4-10, 4-11 and 4-12 for the DSLR, cellphone and PS imagers respectively, we scale these measurements by the sensor areas and then plot them against the pixel size, as shown in Figure 6-3.

(Plot: defect rate (/year/mm²) vs. pixel size (µm).)

Figure 6-3. Defect rate per sensor area vs. pixel size (ISO 400).

Visual inspection of the plot in Figure 6-3 shows that the defect rates increase rapidly as the pixel size is reduced. However, the defect rates do not scale linearly with the pixel size. In fact, the plot suggests a possible exponential increase of the defect rates as the pixel size goes down. In Figure 6-4 we show a semi-log plot of the defect rate per sensor area versus the pixel size.

(Semi-log plot: defect rate (/year/mm²) vs. pixel size (µm).)

Figure 6-4. Semi-log plot of defect rate per sensor area vs. pixel size.

This semi-log plot suggests a possible log-linear regression fit. However, the modest accuracy (R² = 0.68) suggests this was not the correct equation. Instead, a log-log plot is used, which is shown in Figure 6-5.

(Panels: (a) linear regression fit of defect rate (/year/mm²) vs. pixel size (µm); (b) residuals from the regression.)

Figure 6-5. Logarithmic plot of defect rate per sensor area of all tested imagers.

The log-log plot of the defect rate versus pixel size shows a much stronger indication of a linear trend. Hence the linear regression fit function used in Figure 6-5(a) is

log(y) = log(A) + B log(x).    (6-1)

On a linear scale, this regression fit function is simply a power function,

y = A x^B.    (6-2)

Table 6-6. Linear regression fit statistics on defects/year/mm² vs. pixel size (fitted A, B, and R²).

The regression statistics from the log-log plot are summarized in Table 6-6. The R², which measures the goodness-of-fit, is close to unity; since an R² near unity indicates that the regression fit function is a close approximation to the observed values, the power function is a good fit function. The residuals of the fit, plotted in Figure 6-5(b), show that the deviations are nearly uniformly distributed about the fit. This strongly indicates that the power law is a good fit to the data. The power function indicates that the defect rate does not scale linearly with the pixel size; instead it increases as a power law as the pixel size decreases. The fitted exponent suggests that the defect rate scales somewhat faster than the pixel area. As shown in Table 6-4, the defect rate per sensor area still indicates that the mid-size CCDs developed 3x more defects than the mid-size APS sensors. Hence, in the following plots, we separate the analysis by sensor type. The log-log plot of defect rate per sensor area versus pixel size for all tested APS sensors is shown in Figure 6-6, and for CCD sensors in Figure 6-7. Again a

linear regression fit is used, and the statistics from the fit function are summarized in Table 6-7.

(Panels: (a) linear regression fit of defect rate (/year/mm²) vs. pixel size (µm); (b) residuals from the regression.)

Figure 6-6. Logarithmic plot of defect rate per sensor area versus pixel size of all tested APS imagers.

(Panels: (a) linear regression fit of defect rate (/year/mm²) vs. pixel size (µm); (b) residuals from the regression.)

Figure 6-7. Logarithmic plot of defect rate per sensor area versus pixel size of all tested CCD imagers.
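The log-log regression behind these fits can be sketched as below; `np.polyfit` is a stand-in, since the thesis does not name its fitting tool.

```python
import numpy as np

def power_law_fit(x, y):
    """Fit y = A * x**B by linear regression on log-log axes,
    i.e. log(y) = log(A) + B*log(x), and report the R^2 of that
    straight line, as in Eqs. (6-1) and (6-2)."""
    lx, ly = np.log(np.asarray(x, float)), np.log(np.asarray(y, float))
    B, log_a = np.polyfit(lx, ly, 1)
    resid = ly - (log_a + B * lx)
    r2 = 1.0 - resid.dot(resid) / ((ly - ly.mean()) ** 2).sum()
    return float(np.exp(log_a)), float(B), float(r2)
```

On data that truly follows a power law the residuals scatter evenly about the line and R² approaches unity, which is the goodness-of-fit check applied to Tables 6-6 and 6-7.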

Table 6-7. Linear regression fit statistics on defect rate/mm² vs. pixel size (fitted A, B, and R² for the APS and CCD sensors).

All the points on the residual plots in Figure 6-6(b) and Figure 6-7(b) are randomly distributed on either side of the curve, again supporting the power law equation. Thus all the data points in Figure 6-6(a) and Figure 6-7(a) lie close to the regression fit function. The R² values recorded in Table 6-7 for the APS and CCD are both ~0.8, which is modestly better than the fit shown for the combined imagers (Table 6-6). Since both the APS and CCD sensors show the same good regression fit with the power function, this strongly indicates that the defect rate increases as a power law as the pixel size shrinks. The power factor B estimated for the CCDs is ~-2.05, which shows that the defect rate scales approximately with the pixel area. However, the power factor B for the APSs is ~-3. Hence scaling down the pixel size of the APSs causes each pixel to become more sensitive to the defect source than scaling with the pixel area alone would predict. Using the regression factors, we can calculate the defect rate/mm² of APS and CCD sensors at various pixel sizes, as summarized in Table 6-8.

Table 6-8. Estimated defect rate/mm^2 (x10^-3) at various pixel sizes (2-7 µm) for the APS and CCD sensor types, with the CCD/APS ratio, computed from the fitted power functions.

The estimated defect rates/mm^2 at 6-7 µm pixel sizes show that the CCDs will develop 3-4x more defects than the APS sensors. The fill factor of the CCD pixels is approximately 2-3x larger than that of the APS pixels, which is similar to the 3-4x difference in the defect rate. Hence, this indicates that the size of the photosensitive area is the likely cause of the higher defect rate in the CCD pixels. However, at the small pixel end (2 µm), the APS sensors are estimated to have nearly the same defect rate/mm^2 as the CCD sensors. Although the large photosensitive area increases the radiation exposure of the large pixels, shrinking the pixel size increases the sensitivity to each collected electron. This effect produces a drastic increase in the defect rate of the APS pixels but not the CCD pixels. Hence the impact of defects on the small APS pixels becomes much more significant than on the small CCD pixels, suggesting that for even smaller pixels the APS may have a higher defect rate than the CCDs. This trend is important because manufacturers are trying to increase the pixel count with smaller pixels on small sensors (cellphone and point-and-shoot), and the drive for higher megapixel counts on mid-size and full-frame DSLR cameras will cause manufacturers to look at smaller pixels on large-area sensors as well. The impact of defects on these small pixels will be a

significant drawback for image quality. This power-law relationship and the sensor-area scaling suggest important tradeoffs for sensor designers.

6.2 Chapter Summary

The design trends of new imagers are driven by demand in the commercial camera markets. The choice of CCD or APS sensor, the ISO range, the sensor size and the pixel count are designed to improve imaging performance while keeping production costs low. However, these design trends have neglected one important aspect: how they affect the defects on the sensors. In this analysis we have shown that many of these design trends have an impact on the growth rate of defects on the sensor. In particular, the expanded ISO range continues to reveal more low-impact defects and causes moderate defects to saturate. The analysis of the defect rates on various sensor sizes indicates that the rates scale proportionally with the sensor area if the pixel size is constant; hence the full-frame sensors will develop the most defects. Finally, observations from the small pixels show that defect rates scale as a power law with the pixel size. Results from the regression fit indicate that the defect rate of the CCD sensors scales with a power factor near 2, which is approximately the pixel area, while the defect rate of the APS sensors scales at a higher rate, with a power factor near 3. This analysis suggests that scaling the pixel size down to ~2 µm in APS sensors will cause these pixels to become much more sensitive to the dark current; by comparison, at larger pixel sizes (~6 µm) the CCDs will show more defects.
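The crossover described in this chapter can be illustrated numerically. The sketch below uses only quantities stated above: the fitted exponents (~-2.05 for CCD, ~-3 for APS) and the ~3-4x CCD/APS defect-rate ratio observed near 6-7 µm pixels, taken here as an assumed normalization point at 6.5 µm; the absolute prefactors A are not needed because they cancel in the ratio.

```python
# Defect-rate ratio CCD/APS as the pixel size shrinks. Since each rate
# follows rate ~ A * p**B, the ratio scales as p**(B_CCD - B_APS),
# anchored to the observed ratio at a 6.5 um pixel (assumed mid-point
# of the 3-4x range quoted above).
B_CCD = -2.05
B_APS = -3.0
RATIO_AT_6_5_UM = 3.5  # assumed normalization from Table 6-8's 3-4x range

def ccd_aps_ratio(pixel_um, ref_um=6.5, ref_ratio=RATIO_AT_6_5_UM):
    return ref_ratio * (pixel_um / ref_um) ** (B_CCD - B_APS)

for p in (7.0, 6.0, 4.0, 2.0):
    print(f"{p:.1f} um pixel: CCD/APS defect-rate ratio ~ {ccd_aps_ratio(p):.2f}")
```

At 2 µm the ratio falls to roughly 1.1, reproducing the chapter's observation that small APS pixels reach nearly the same defect rate per area as small CCD pixels.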

7: MULTI-FINGER ACTIVE PIXEL SENSOR

Prior to 1980, the CCD was the dominant sensor technology used by most imaging devices. However, with the improvement of CMOS processing, the APS became one of the mainstream sensing technologies. The APS pixel opens new options with some attractive advantages: its compatibility with standard CMOS processes makes it less expensive to fabricate and embeddable into other devices, and its single operating voltage and low power consumption have led to a wide range of applications. The two main photodetectors used by APS sensors are the photodiode and the photogate. While the photodiode-based APS is the more commonly used technology, in this study we focus on exploring how to improve the photogate APS.

As discussed in section 2.1.2, the photogate detector is simply a MOS capacitor with a poly-silicon gate deposited on the top surface. When incident light strikes the poly-gate, photons penetrate through the poly-silicon layer and can be collected in the silicon substrate. In the following sections we explore possible alternative designs that enhance the sensitivity of the photogate by using a multi-finger gate over the detection area. A multi-finger design is composed of poly-silicon stripes spaced evenly over the surface. The multi-finger photogate has been proposed by Chapman and his graduate students [13] and by other researchers [12]. In the study by La Haye [13], both the standard and multi-finger photogate APS pixels of size 5.4x5.4 µm were

fabricated with 0.18 µm CMOS technology. Preliminary results from that study suggested that the multi-finger photogate structures had a higher sensitivity response compared to the standard photogate when the pixel was exposed to red light. In this thesis, we focus on actual sensitivity measurements from the different multi-finger structures at various photon energies. In addition, we explore the concept of using the fringing field to enhance photon collection in the substrate.

7.1 Multi-Fingered Photogate APS

The sensitivity of the photogate APS depends on the number of electron-hole pairs generated by the incident photons during the integration time. In a standard photogate APS, the absorption by the poly-silicon gate increases over the visible spectrum; thus the sensitivity near the blue end of the spectrum is significantly weaker than in the red. Due to absorption in the silicon material, photons penetrate to different depths, and the light intensity is characterized by

I = I_o exp(-αx), (7-1)

where I_o is the initial light intensity at the surface, α is the absorption coefficient and x is the penetration depth. The absorption coefficient α varies with wavelength, as shown in Figure 7-1 for the visible spectrum. In general, the absorption increases as the photon energy increases; thus the intensity of blue light will be significantly weaker than that of red light when measured at the same penetration depth.
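Equation (7-1) can be evaluated directly to see how strongly this wavelength dependence matters over a typical pixel depth. The absorption coefficients below are assumed, order-of-magnitude values for crystalline silicon (red absorbed far more weakly than blue), used here only for illustration; the measured curve is the one in Figure 7-1.

```python
import math

def transmitted_fraction(alpha_per_cm, depth_um):
    """I/I0 = exp(-alpha * x) from Eq. (7-1), with depth converted um -> cm."""
    return math.exp(-alpha_per_cm * depth_um * 1e-4)

# Illustrative (assumed) absorption coefficients for crystalline silicon:
ALPHA_RED = 3e3   # cm^-1, roughly the red end of the visible spectrum
ALPHA_BLUE = 3e4  # cm^-1, roughly the blue end

depth = 1.0  # um, on the order of a shallow collection depth
red = transmitted_fraction(ALPHA_RED, depth)   # ~0.74 of red light remains
blue = transmitted_fraction(ALPHA_BLUE, depth) # ~0.05 of blue light remains
```

Even with these rough coefficients, most of the red light survives a 1 µm traversal while the blue is almost entirely absorbed, which is why absorption in the poly-gate hits the blue response hardest.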

Figure 7-1. Single-crystal silicon absorption coefficient (1/cm) vs. photon energy (eV). (Data from Ref. [14])

Hence, the main drawback of this photodetector is the limitation imposed by the optical properties of the poly-silicon, which cause a loss of collection at short wavelengths due to absorption by the poly-gate. The absorption characteristics of the poly-silicon depend on the full wafer fabrication process and are also affected by the oxide and silicon layers below the poly. Thus, depositing the poly-gate in isolation, such as on a glass substrate, will not reproduce the true optical characteristics of the films. In a standard photogate APS, as shown in Figure 7-2, the poly-silicon gate is deposited over the entire detection area, so the absorption of photons is unavoidable. A fully depleted region is created and will extend over the entire pixel.

Figure 7-2. Standard photogate photodetector.

If open areas are introduced in the poly-silicon gate, with each opening filled with a transparent insulator material as shown in Figure 7-3, then the loss of photons due to absorption can be reduced. In this multi-finger design, we estimate that, in addition to the depletion region forming under each poly finger, the gates will create a fringing field that reaches over the open areas, thus providing a continuous potential well under the entire detection area.

Figure 7-3. Multi-finger photogate photodetector.

In [13], La Haye implemented both the standard photogate and three different layouts of the multi-finger photogate APS, as shown in Figure 7-4. Each multi-finger photogate consists of a poly-ring divided by one or more poly-fingers: the photogate shown in Figure 7-4(b) contains 1 poly-finger, (c) 3 poly-fingers, and (d) 5 poly-fingers. The spacing between the inserted poly-fingers in each layout

is summarized in Table 7-1. In Figure 7-4 we also provide a first estimate of the potential well formed under each poly-gate design. As the spacing between the poly-fingers decreases, the fringing field grows stronger; thus we expect the depth of the well below the open areas to become more uniform and photon collection to be enhanced.

Figure 7-4. Standard and multi-finger photogate APS designs and expected potential wells: (a) standard, (b) 1-finger, (c) 3-finger, (d) 5-finger [13].

Table 7-1. Multi-finger photogate APS poly-finger spacing (µm) and % open area for the 1-, 3- and 5-finger structures [13].

The photogate APS pixels designed by La Haye [13] were implemented in 0.18 µm CMOS technology through the Canadian Microelectronics Corp., and each inserted poly-finger is 0.72 µm wide. The width of the poly-finger is set not by the minimum geometry of the technology but by the design rule by which the poly can be masked so that metal-silicide is not deposited on the photogate area.
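The % open area column of Table 7-1 follows from simple geometry. As a rough one-dimensional sketch (assuming a detection window equal to the 5.4 µm pixel dimension crossed by n fingers of the 0.72 µm width quoted above, and ignoring the surrounding poly-ring, so the numbers are illustrative rather than the table's layout-exact values):

```python
# 1-D approximation of the open-area fraction of a multi-finger photogate:
# n poly-fingers of width w inside a detection window of width W, with the
# remainder left open (transparent insulator). Ignoring the poly-ring is a
# simplification, so these fractions are illustrative only.
FINGER_WIDTH_UM = 0.72  # poly-finger width quoted above
PIXEL_UM = 5.4          # pixel dimension quoted above

def open_area_fraction(window_um, n_fingers, finger_um=FINGER_WIDTH_UM):
    covered = n_fingers * finger_um
    return 1.0 - covered / window_um

for n in (1, 3, 5):
    pct = 100.0 * open_area_fraction(PIXEL_UM, n)
    print(f"{n}-finger layout: ~{pct:.0f}% open")
```

As expected, adding fingers trades open (transparent) area for gate coverage, which is the tension between reduced poly absorption and a strong, uniform fringing-field well.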


Field-Effect Transistor (FET) is one of the two major transistors; FET derives its name from its working mechanism;

Field-Effect Transistor (FET) is one of the two major transistors; FET derives its name from its working mechanism; Chapter 3 Field-Effect Transistors (FETs) 3.1 Introduction Field-Effect Transistor (FET) is one of the two major transistors; FET derives its name from its working mechanism; The concept has been known

More information

CHARGE-COUPLED DEVICE (CCD)

CHARGE-COUPLED DEVICE (CCD) CHARGE-COUPLED DEVICE (CCD) Definition A charge-coupled device (CCD) is an analog shift register, enabling analog signals, usually light, manipulation - for example, conversion into a digital value that

More information

PHYSICAL ELECTRONICS(ECE3540) APPLICATIONS OF PHYSICAL ELECTRONICS PART I

PHYSICAL ELECTRONICS(ECE3540) APPLICATIONS OF PHYSICAL ELECTRONICS PART I PHYSICAL ELECTRONICS(ECE3540) APPLICATIONS OF PHYSICAL ELECTRONICS PART I Tennessee Technological University Monday, October 28, 2013 1 Introduction In the following slides, we will discuss the summary

More information

The Evolution of Digital Cameras A Patent History

The Evolution of Digital Cameras A Patent History The Evolution of Digital Cameras A Patent History ipwatchdog.com /2014/10/28/the-evolution-of-digital-cameras-a-patent-history/id=51846/ #The 1 Patent Bar Review Course LIVE or HOME STUDY ~ CLICK HERE

More information

Simulation of High Resistivity (CMOS) Pixels

Simulation of High Resistivity (CMOS) Pixels Simulation of High Resistivity (CMOS) Pixels Stefan Lauxtermann, Kadri Vural Sensor Creations Inc. AIDA-2020 CMOS Simulation Workshop May 13 th 2016 OUTLINE 1. Definition of High Resistivity Pixel Also

More information

An Engineer s Perspective on of the Retina. Steve Collins Department of Engineering Science University of Oxford

An Engineer s Perspective on of the Retina. Steve Collins Department of Engineering Science University of Oxford An Engineer s Perspective on of the Retina Steve Collins Department of Engineering Science University of Oxford Aims of the Talk To highlight that research can be: multi-disciplinary stimulated by user

More information

A Short History of Using Cameras for Weld Monitoring

A Short History of Using Cameras for Weld Monitoring A Short History of Using Cameras for Weld Monitoring 2 Background Ever since the development of automated welding, operators have needed to be able to monitor the process to ensure that all parameters

More information

Fully depleted, thick, monolithic CMOS pixels with high quantum efficiency

Fully depleted, thick, monolithic CMOS pixels with high quantum efficiency Fully depleted, thick, monolithic CMOS pixels with high quantum efficiency Andrew Clarke a*, Konstantin Stefanov a, Nicholas Johnston a and Andrew Holland a a Centre for Electronic Imaging, The Open University,

More information

Improved sensitivity high-definition interline CCD using the KODAK TRUESENSE Color Filter Pattern

Improved sensitivity high-definition interline CCD using the KODAK TRUESENSE Color Filter Pattern Improved sensitivity high-definition interline CCD using the KODAK TRUESENSE Color Filter Pattern James DiBella*, Marco Andreghetti, Amy Enge, William Chen, Timothy Stanka, Robert Kaser (Eastman Kodak

More information

CHAPTER 9 CURRENT VOLTAGE CHARACTERISTICS

CHAPTER 9 CURRENT VOLTAGE CHARACTERISTICS CHAPTER 9 CURRENT VOLTAGE CHARACTERISTICS 9.1 INTRODUCTION The phthalocyanines are a class of organic materials which are generally thermally stable and may be deposited as thin films by vacuum evaporation

More information

Lecture 29: Image Sensors. Computer Graphics and Imaging UC Berkeley CS184/284A

Lecture 29: Image Sensors. Computer Graphics and Imaging UC Berkeley CS184/284A Lecture 29: Image Sensors Computer Graphics and Imaging UC Berkeley Photon Capture The Photoelectric Effect Incident photons Ejected electrons Albert Einstein (wikipedia) Einstein s Nobel Prize in 1921

More information

InP-based Waveguide Photodetector with Integrated Photon Multiplication

InP-based Waveguide Photodetector with Integrated Photon Multiplication InP-based Waveguide Photodetector with Integrated Photon Multiplication D.Pasquariello,J.Piprek,D.Lasaosa,andJ.E.Bowers Electrical and Computer Engineering Department University of California, Santa Barbara,

More information

Three Ways to Detect Light. We now establish terminology for photon detectors:

Three Ways to Detect Light. We now establish terminology for photon detectors: Three Ways to Detect Light In photon detectors, the light interacts with the detector material to produce free charge carriers photon-by-photon. The resulting miniscule electrical currents are amplified

More information

System and method for subtracting dark noise from an image using an estimated dark noise scale factor

System and method for subtracting dark noise from an image using an estimated dark noise scale factor Page 1 of 10 ( 5 of 32 ) United States Patent Application 20060256215 Kind Code A1 Zhang; Xuemei ; et al. November 16, 2006 System and method for subtracting dark noise from an image using an estimated

More information

Reducing Proximity Effects in Optical Lithography

Reducing Proximity Effects in Optical Lithography INTERFACE '96 This paper was published in the proceedings of the Olin Microlithography Seminar, Interface '96, pp. 325-336. It is made available as an electronic reprint with permission of Olin Microelectronic

More information

Development of Solid-State Detector for X-ray Computed Tomography

Development of Solid-State Detector for X-ray Computed Tomography Proceedings of the Korea Nuclear Society Autumn Meeting Seoul, Korea, October 2001 Development of Solid-State Detector for X-ray Computed Tomography S.W Kwak 1), H.K Kim 1), Y. S Kim 1), S.C Jeon 1), G.

More information

10/14/2009. Semiconductor basics pn junction Solar cell operation Design of silicon solar cell

10/14/2009. Semiconductor basics pn junction Solar cell operation Design of silicon solar cell PHOTOVOLTAICS Fundamentals PV FUNDAMENTALS Semiconductor basics pn junction Solar cell operation Design of silicon solar cell SEMICONDUCTOR BASICS Allowed energy bands Valence and conduction band Fermi

More information

Lecture #29. Moore s Law

Lecture #29. Moore s Law Lecture #29 ANNOUNCEMENTS HW#15 will be for extra credit Quiz #6 (Thursday 5/8) will include MOSFET C-V No late Projects will be accepted after Thursday 5/8 The last Coffee Hour will be held this Thursday

More information

Chap14. Photodiode Detectors

Chap14. Photodiode Detectors Chap14. Photodiode Detectors Mohammad Ali Mansouri-Birjandi mansouri@ece.usb.ac.ir mamansouri@yahoo.com Faculty of Electrical and Computer Engineering University of Sistan and Baluchestan (USB) Design

More information

Camera Image Processing Pipeline

Camera Image Processing Pipeline Lecture 13: Camera Image Processing Pipeline Visual Computing Systems Today (actually all week) Operations that take photons hitting a sensor to a high-quality image Processing systems used to efficiently

More information

Digital Cameras. Consumer and Prosumer

Digital Cameras. Consumer and Prosumer Digital Cameras Overview While silver-halide film has been the dominant photographic process for the past 150 years, the use and role of technology is fast-becoming a standard for the making of photographs.

More information

STA1600LN x Element Image Area CCD Image Sensor

STA1600LN x Element Image Area CCD Image Sensor ST600LN 10560 x 10560 Element Image Area CCD Image Sensor FEATURES 10560 x 10560 Photosite Full Frame CCD Array 9 m x 9 m Pixel 95.04mm x 95.04mm Image Area 100% Fill Factor Readout Noise 2e- at 50kHz

More information

Lecture 9 External Modulators and Detectors

Lecture 9 External Modulators and Detectors Optical Fibres and Telecommunications Lecture 9 External Modulators and Detectors Introduction Where are we? A look at some real laser diodes. External modulators Mach-Zender Electro-absorption modulators

More information

1 Semiconductor-Photon Interaction

1 Semiconductor-Photon Interaction 1 SEMICONDUCTOR-PHOTON INTERACTION 1 1 Semiconductor-Photon Interaction Absorption: photo-detectors, solar cells, radiation sensors. Radiative transitions: light emitting diodes, displays. Stimulated emission:

More information

In this lecture we will begin a new topic namely the Metal-Oxide-Semiconductor Field Effect Transistor.

In this lecture we will begin a new topic namely the Metal-Oxide-Semiconductor Field Effect Transistor. Solid State Devices Dr. S. Karmalkar Department of Electronics and Communication Engineering Indian Institute of Technology, Madras Lecture - 38 MOS Field Effect Transistor In this lecture we will begin

More information

Chapter 3 Wide Dynamic Range & Temperature Compensated Gain CMOS Image Sensor in Automotive Application. 3.1 System Architecture

Chapter 3 Wide Dynamic Range & Temperature Compensated Gain CMOS Image Sensor in Automotive Application. 3.1 System Architecture Chapter 3 Wide Dynamic Range & Temperature Compensated Gain CMOS Image Sensor in Automotive Application Like the introduction said, we can recognize the problem would be suffered on image sensor in automotive

More information

Optical Amplifiers. Continued. Photonic Network By Dr. M H Zaidi

Optical Amplifiers. Continued. Photonic Network By Dr. M H Zaidi Optical Amplifiers Continued EDFA Multi Stage Designs 1st Active Stage Co-pumped 2nd Active Stage Counter-pumped Input Signal Er 3+ Doped Fiber Er 3+ Doped Fiber Output Signal Optical Isolator Optical

More information