Topics on CMOS Image Sensors


Linköping Studies in Science and Technology
Thesis No.
Topics on CMOS Image Sensors
Leif Lindgren
LiU-TEK-LIC-2005:37
Department of Electrical Engineering
Linköpings universitet, SE Linköping, Sweden
Linköping 2005

Topics on CMOS Image Sensors
ISBN X
ISSN
Printed in Sweden by UniTryck, Linköping, July 2005

Abstract

Today there exist several applications where a real visible scene needs to be sampled to electrical signals, e.g., video cameras, digital still cameras, and machine vision systems. Since the 1970s charge-coupled device (CCD) sensors have primarily been used for this task, but during the last decade CMOS image sensors have become more and more popular. The demand for image sensors has lately grown very rapidly due to the increased market for, e.g., digital still cameras and the integration of image sensors in mobile phones.

The first of the three included papers presents a programmable multiresolution machine vision sensor with on-chip image processing capabilities. The sensor comprises an innovative multiresolution sensing area, 1536 A/D converters, and a SIMD array of 1536 bit-serial processors with corresponding memory. The SIMD processor array can deliver more than 100 GOPS sustained, and the on-chip pixel-analyzing rate can be as high as 4 Gpixels/s. The sensor is intended for high-speed multisense imaging where, e.g., color, greyscale, internal material light scatter, and 3-D profiles are captured simultaneously. Experimental results showing very good image characteristics and good digital-to-analog noise isolation are presented.

The second paper presents a mathematical analysis of how temporal noise is transformed by quantization. A new method is shown for measuring temporal noise with a low-resolution ADC and then accurately referring it back to the input of the ADC. The method is, for instance, applicable to CMOS image sensors, where photon shot noise is commonly used for determining conversion gain and quantum efficiency. Experimental tests have been carried out using the above-mentioned sensor, which has an on-chip ADC featuring programmable gain and offset. The measurements verify the analysis and the method.

The third paper presents a new column-parallel ADC architecture, named simultaneous multislope, suitable for array implementations in, e.g., CMOS image sensors. The simplest implementation of the suggested architecture is almost twice as fast as a conventional slope ADC, while it requires only a small amount of extra circuitry. Measurements have been performed using the above-mentioned sensor, which implements parts of the proposed ADC. The measurements show good linearity and verify the concept of the new architecture.

Preface

Most of the research for this thesis was conducted from 1999 through 2003, while I was working for IVP Integrated Vision Products AB (now SICK IVP AB). The company develops and provides high-end machine vision systems consisting of hardware, in the form of smart cameras, and software packages. The main research project during this time was the development of the smart vision sensor M12, containing an image sensor, parallel A/D conversion, and a SIMD processor with corresponding memory on a single die. Seen from a system level this sensor is a further development of earlier sensors from IVP. However, from a technology point of view it is a completely new design where almost all circuit solutions are new compared to the earlier sensors. This required a major research and development effort which was started in 1999 and initially conducted by Lic. Eng. Johan Melander and myself. In the middle of 2000 the actual design process was started, and Robert Johansson, and later on Björn Möller, joined the development of the new sensor. Towards the end of the project Calle Björnlert, Jürgen Hauser, and Magnus Engström joined the team to speed up the development of the digital control logic. Furthermore, Dr. J. Jacob Wikner was a great resource in the development of the on-chip DAC. The sensor was taped out in January 2002 and worked perfectly at first silicon. It is currently found in several SICK IVP products and will, most likely, be used in several future products.

This thesis includes three papers, where the first paper describes the M12 sensor. The second paper is a result of the temporal noise measurements performed on the M12 sensor. The measurements showed that the temporal noise, for low illumination levels, was much lower than the resolution of the on-chip A/D conversion. This made it impossible to accurately refer the measured noise back to the input of the ADCs using a traditional method. This led to the development of the new method for elimination of quantization effects in measured temporal noise, which makes it possible to accurately determine the ADC input-referred temporal noise even if it is much lower than the resolution of the ADC.

This new method is described in the second paper. The third paper presents a new ADC architecture that can be seen as a further development of the single-slope ADC architecture used in the M12 sensor.

I would like to thank the aforementioned persons for their contributions to the design of the M12 sensor. Johan Melander, Robert Johansson, and Björn Möller are further acknowledged for many fruitful discussions and for all their comments regarding this thesis. All employees at IVP are acknowledged for making it a fun and interesting company to work for. Furthermore, I would like to thank Professor Christer Svensson for his contribution as the examiner of this thesis. I would like to thank Tina, Emmy, and Amanda for making my life fun and interesting. I also thank my sister Karin, and my mom and dad for always being there.

Contents

Chapter 1. Introduction
    1.1 History

Chapter 2. CMOS Image Sensor Circuitry
    2.1 Photodetectors
    2.2 Pixel and Readout Circuits
        Passive Pixels
        Active Pixels
        Logarithmic Pixels
        Global Shutter
    2.3 ADC

Paper I. A Multiresolution 100-GOPS 4-Gpixels/s Programmable Smart Vision Sensor for Multisense Imaging  17
    1 Introduction
    2 Multisense Imaging

    3 Architecture
        3.1 System Level
        3.2 Multiresolution Sensor Area and Analogue Readout
        3.3 A/D Conversion
        3.4 Processor and Registers
    4 Circuit Implementation
        4.1 Mixed-Signal Aspects
        4.2 Large Distance Signalling
        4.3 GLU, COUNT, and GOR
        4.4 Pixels and Analogue Readout
        4.5 A/D Conversion
    5 Experimental Results
    6 Comparison With Other Sensors
    7 Conclusion

Paper II. Elimination of Quantization Effects in Measured Temporal Noise  47
    1 Introduction
    Quantization Transformation
    Uniformly Distributed Fractional Part
    Achieving Uniform Distribution
    Experimental Results
    Conclusion
    Acknowledgment

Paper III. A New Simultaneous Multislope ADC Architecture for Array Implementations  61
    1 Introduction
    Architecture Using Current Steering DACs
    Fast Pseudo Conversion
    ADC Characterization
    Experimental Test Setup and Calculations
    Results
    Conclusion and Discussion


Chapter 1

Introduction

Today there exist several applications where a real visible scene needs to be sampled to electrical signals, e.g., video cameras, digital still cameras, and machine vision systems. Since the 1970s charge-coupled device (CCD) sensors have primarily been used for this task, but during the last decade CMOS image sensors have become more and more popular. The demand for image sensors has lately grown very rapidly due to the increased market for, e.g., digital still cameras and the integration of image sensors in mobile phones.

A CMOS image sensor is a chip that converts incoming light to electrical signals, and it is made in a complementary metal oxide semiconductor (CMOS) process. Fig. 1.1 shows a cross section of a transistor in a CMOS chip.

Fig. 1.1: Cross section of an n-type transistor in a CMOS process.

If a CMOS chip is packaged in a way that permits light to hit the chip, the light will penetrate the transparent silicon dioxide and hit the parts of the silicon substrate that are not covered by the metal layers. If the incoming photon has greater energy than the band gap energy of silicon, which is 1.12 eV and corresponds to a wavelength less than 1.1 µm, it can excite one of the valence electrons in a silicon atom and, thereby, move it into the conduction band. This is according to the photoelectric effect, for which Albert Einstein received the 1921 Nobel Prize in Physics. It works the same way as in a normal film used in a conventional camera; e.g., for a black and white film a photon typically hits a bromide atom and an electron is set free. This electron is then picked up by a positively charged silver atom and, when the film is processed, the silver atoms are grouped together. In a CMOS image sensor there are three different ways of separating and collecting the photogenerated electron-hole pairs: by using an array of photodiodes, photogates, or phototransistors.
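The 1.1 µm limit quoted above follows directly from the silicon band gap; as a quick check (using standard physical constants, not values from the thesis):

    \lambda_{max} = \frac{hc}{E_g} \approx \frac{1240\ \mathrm{nm \cdot eV}}{1.12\ \mathrm{eV}} \approx 1.1\ \mu\mathrm{m}

so only photons in the visible and near-infrared range can generate electron-hole pairs in silicon.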

CMOS image sensors have several advantages when compared to CCDs. These advantages include low power consumption, low operating voltage, integration of circuitry on-chip, random access to image data, high speed, and potentially lower costs [1]. Since the CMOS process makes it possible to include circuits other than the image sensor array on the chip, it is possible to build complete imaging systems on one single chip. Analog-to-digital conversion can be integrated by having, e.g., one fast analog-to-digital converter (ADC), a column-parallel ADC [2], or an ADC in each pixel [3]. Color information can be captured by having a color mosaic filter mounted on top of the chip, and the color interpolation can be done on-chip. Furthermore, memory, control circuitry, and, e.g., image compression can be implemented on-chip. These things make it possible to build a so-called camera-on-a-chip. Fig. 1.2 shows a typical CMOS image sensor with control circuitry, a row decoder, column readout amplifiers, an ADC, and a 2-D array of pixels, each containing a photodetector.

Fig. 1.2: A CMOS image sensor with on-chip ADC and control circuitry.

1.1 History

In the 1960s several research groups worked on image sensors using NMOS, PMOS, or bipolar technologies [4]. The first report on CCD sensors came in 1970. Since CCDs offered much lower fixed pattern noise (FPN) and smaller pixels than MOS sensors, this new technology was adopted by the industry, and little research was done on MOS image sensors during the 1970s and 1980s.

However, some research on MOS image sensors was still conducted. During the early 1980s Hitachi and Matsushita developed MOS sensors, but later they abandoned this research. For more information about the early history of MOS image sensors see [4].

In the late 1980s and early 1990s CMOS image sensors with on-chip ADC and image processing capabilities, e.g., PASIC [2] and MAPP2200 [5], were developed at Linköpings universitet and at the company IVP Integrated Vision Products AB in Sweden. Also in the late 1980s, research on CMOS image sensors was done at the University of Edinburgh in Scotland, and the company VLSI Vision Limited (VVL) was founded. In the first half of the 1990s the Jet Propulsion Laboratory (JPL) in the USA developed CMOS sensors using active pixels, and Photobit Corporation was founded. These efforts led to major advances in CMOS image sensor technology. During the second half of the 1990s research on CMOS image sensors was performed at many universities, and many CMOS image sensor companies were founded. These startups were fabless and used silicon foundries with standard CMOS processes for manufacturing their sensors. However, the growing market segment of image sensors has made many of the big IC manufacturing companies set up their own CMOS image sensor development. This has been done either through their own research efforts, through the acquisition of image sensor companies, or through a combination of both. Examples of larger manufacturers acquiring image sensor companies are STMicroelectronics, which bought VVL, and Micron Technology, which bought Photobit. Other examples are Cypress Semiconductor Corporation, which bought FillFactory (a Belgian company spun off from sensor research at IMEC), and Agilent Technologies, which bought Pixel Devices International.

An advantage for these companies is that they have their own manufacturing, making it possible to tune the CMOS processes for image sensing. This can, e.g., include tailoring junction depths and doping levels, but also adding processing steps for, e.g., buried photodiodes and color filter and micro lens deposition. Today silicon foundries such as TSMC, UMC, and Tower Semiconductor also offer CMOS processes tuned for image sensing.

References

[1] H-S. Wong, Technology and Device Scaling Considerations for CMOS Imagers, IEEE Trans. Electron Devices, vol. 43, no. 12, Dec. 1996.
[2] K. Chen, M. Afghahi, P-E. Danielsson, and C. Svensson, PASIC: A Processor-A/D converter-Sensor Integrated Circuit, Proc. IEEE Int. Symp. Circuits and Systems (ISCAS 90), vol. 3, May 1990.
[3] B. Fowler, A. El Gamal, and D. Yang, Techniques for Pixel Level Analog to Digital Conversion, Aerosense.
[4] E. R. Fossum, CMOS Image Sensors: Electronic Camera-On-A-Chip, IEEE Trans. Electron Devices, vol. 44, no. 10, Oct. 1997.
[5] R. Forchheimer, P. Ingelhag, and C. Jansson, MAPP2200: a second generation smart optical sensor, Proc. SPIE, Image Processing and Interchange: Implementation and Systems, vol. 1659, pp. 2-11, 1992.

Chapter 2

CMOS Image Sensor Circuitry

Fig. 2.1 shows a typical CMOS image sensor with control circuitry, row select logic, a 2-D array of pixels, readout amplifiers, column-parallel ADC, and column select logic. Most CMOS image sensors use photodiodes in integrating mode. This means the pixels integrate light during a certain amount of time, referred to as the integration time or exposure time. For sensors with a global shutter, all pixels are typically first reset, then they integrate light, and then the shutter mechanism moves the integrated charge to a storage node in each pixel. After this the pixels are scanned row by row. This is done by connecting all pixels in a row to the vertical column buses, see Fig. 2.1. At the bottom of these column buses are readout amplifiers which sense the signals from the pixels. In Fig. 2.1 the output from each readout amplifier connects to an ADC, i.e., the ADC is here column parallel. The digital outputs from the ADCs are then scanned by the column select logic and sent off chip. The column select logic, and also the row select logic, can, e.g., be a shift register or a counter feeding a decoder. Having counters and decoders enables windowing, i.e., reading only a part of the entire array, and also different schemes of subsampling.

Fig. 2.1: A CMOS image sensor with on-chip ADC and control circuitry.

Most sensors do, however, not feature a global shutter. Instead an electronic rolling shutter is used since it simplifies the pixels. With this type of shutter the integration times are not simultaneous for all pixels. Instead, the integration starts of consecutive rows are offset in time by a time equal to the row read time.
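The rolling-shutter timing can be illustrated with a minimal sketch (illustrative Python only; the names t_row and t_int are not taken from the thesis):

    # Minimal rolling-shutter timing model (illustrative only).
    # Rows are read one after another, t_row seconds apart, and each row
    # starts integrating t_int seconds before its own readout.
    def rolling_shutter_windows(num_rows, t_row, t_int):
        windows = []
        for row in range(num_rows):
            exposure_start = row * t_row           # reset pointer moves one row per t_row
            readout_time = exposure_start + t_int  # each row is read t_int after its reset
            windows.append((exposure_start, readout_time))
        return windows

    # Example: 512 rows, 10 us row read time, 1 ms integration time.
    for row, (start, stop) in enumerate(rolling_shutter_windows(512, 10e-6, 1e-3)[:3]):
        print(f"row {row}: integrates from {start*1e6:.0f} us to {stop*1e6:.0f} us")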

2.1 Photodetectors

The photodetectors collect the electrons, or holes, that are set free by the incoming photons. In a CMOS process there exist three basic types of photodetectors: photodiodes, photogates, and phototransistors.

A photodiode is a reverse-biased pn-junction. The electric field over the junction separates the electron-hole pairs, and a photocurrent proportional to the light intensity flows through the junction. This photocurrent can be integrated over the built-in junction capacitance, and the potential change over the junction will then be proportional to the collected light (assuming the capacitance is voltage independent, which, however, is not entirely true). In an n-well CMOS process there are three different pn-junctions that can be used: n-diffusion/p-substrate, n-well/p-substrate, and p-diffusion/n-well, see Fig. 2.2(a-c). Earlier the n-diffusion/p-substrate diode was commonly used, but the n-well/p-substrate diode is now also rather common.

A parasitic vertical bipolar transistor can be created in an n-well CMOS process according to Fig. 2.2(d). The reverse-biased pn-junction works the same way as for a photodiode and creates a photocurrent. This photocurrent is then amplified by the bipolar transistor gain, which is typically much greater than one. In [1] it is concluded that photodiodes are better suited for image sensors than phototransistors since they have lower temporal noise, lower dark current, and lower FPN. The phototransistors have potentially higher quantum efficiency (QE) due to the amplification, but at low light levels this QE decreases.
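In integrating mode, the first-order relation between the collected light and the voltage change over the diode can be written as (a simplified expression assuming a constant junction capacitance, which, as noted above, is only approximately true):

    \Delta V = \frac{1}{C_j} \int_0^{t_{int}} I_{ph}(t)\, dt \approx \frac{I_{ph}\, t_{int}}{C_j}

where I_ph is the photocurrent, t_int the integration time, and C_j the junction capacitance.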

Fig. 2.2: Different types of photodetectors in a CMOS process: (a) an n-well/p-substrate diode, (b) a p-diff/n-well diode, (c) an n-diff/p-substrate diode, (d) a vertical phototransistor, (e) a photogate, (f) a buried photodiode.

The photogate uses a structure borrowed from CCDs. It has a poly gate over the detector area, and by setting the voltage of this gate so that a channel is introduced under it, the electrons are collected in the neighboring potential well, see Fig. 2.2(e). The poly layer over the photodetector in the photogate structure blocks some of the light, especially if the process uses silicide/salicide. This blocking gives the photogate lower QE, especially in the blue part of the spectrum, when compared to photodiodes. Furthermore, the extra transistors and control signals reduce the fill factor (FF). A processing difficulty is to realize a complete charge transfer. However, when this is done, true correlated double sampling (CDS) can be implemented and the temporal reset noise is, thereby, canceled. This is the great advantage of the photogate structure.

Fig. 2.2(f) shows a buried photodiode (also referred to as a pinned photodiode). This type of photodiode needs special processing steps to work at low voltage [2]. The buried diode works a bit like the photogate structure but without the need for the poly layer above the sensing area. As for the photogate, a complete charge transfer allows for true CDS, which cancels the reset noise in the pixel.

2.2 Pixel and Readout Circuits

The most common way to use a photodiode in a pixel is in integrating mode, which means that the photocurrent is integrated over the built-in diode capacitance for a certain amount of time. Another way is to use the photodiode in direct mode in, e.g., a logarithmic pixel. Below follow examples of the most common pixel types. There exist numerous other ways to handle the signals from the photodetectors, see [3] for some examples.

Passive Pixels

The simplest integrating pixel is the passive pixel, shown in Fig. 2.3. Sensors based on passive pixels are simply called passive pixel sensors (PPS). Passive pixels were used in, e.g., sensors from Linköpings universitet, IVP (e.g., MAPP2200), the University of Edinburgh, and VVL during the late 1980s and early 1990s. The passive pixel uses an NMOS pass transistor to connect the photodiode to the column bus. A resettable charge amplifier located at the bottom of the column is then used to convert the charge over the diode capacitance to a voltage, see Fig. 2.3.

When reading out the pixel, the amplifier is first reset by turning on switches s1 and s2. Then the readout phase takes place: s1 and s2 are turned off while s3 and the transistor in the pixel are turned on. This results in a voltage change at the output of the amplifier that is proportional to the number of photons collected by the diode, assuming the capacitor c1 is linear.

Fig. 2.3: Passive pixel and column readout amplifier.

The major advantage of a passive pixel is its simplicity. With only one horizontal select wire, the vertical column bus wire, and one transistor per pixel, the pixels can be made very small and still achieve a high FF. Another advantage is analog row binning, which means it is possible to add the signals from pixels in the same column. This trades vertical resolution against the signal-to-noise ratio (SNR), and it is also used in some machine vision applications such as laser scatter measurements.

The major drawback of the passive pixel is high temporal noise. Temporal noise at the column bus node during the reset phase is amplified by the ratio of the parasitic capacitance at the bus node to the integration capacitance, c1, in the readout amplifier. This ratio increases linearly with the number of rows used and can, e.g., be around 100 for a sensor having 512 rows. Another drawback is column leakage, which makes the readout amplifier sensitive to photons hitting pixels in the entire column during the readout phase. This column leakage is partly caused by photons being picked up at the column bus node diffusion of the pixel transistors. It is also caused by the capacitive coupling between the column bus node and the photodiodes. The column leakage is minimized by having a short readout time which, unfortunately, is power consuming. Another drawback is the complexity of the readout amplifiers, where mismatches can cause FPN between columns.

The passive pixel using only one transistor does not have any anti-blooming capability. Blooming occurs when a photodiode gets completely discharged and causes surrounding pixels to get more discharged than they should. In reality anti-blooming is often desired, and it is accomplished by adding an extra transistor with wires for the gate voltage and the source voltage. This extra transistor hinders the voltage over the photodiode from going below a certain limit and, thereby, prevents blooming. This anti-blooming yields a pixel with twice as many wires and transistors as the passive pixel in Fig. 2.3, and the major advantage of a small pixel with high FF is then reduced. All these drawbacks make the passive pixels rather unattractive, and they are nowadays not used for 2-D sensors.

Active Pixels

Fig. 2.4 shows an integrating three-transistor active pixel with a typical column readout circuit. Sensors based on active pixels are called active pixel sensors (APS).

Fig. 2.4: Active pixel and readout circuit.

In an active pixel the charge-to-voltage conversion is performed within the pixel. This voltage is buffered by the transistor m2, which together with a bias current source (m3), located at the bottom of the column, forms a source follower circuit. This means a voltage is driven onto the column bus. To reduce offsets between these source follower circuits a double sampling approach is used. After the signal has been read from the pixel, the pixel is reset and the reset voltage is read. The difference between the reset value and the signal value then gives an offset-free voltage that corresponds to the number of detected photons. This is often referred to as correlated double sampling (CDS), but sometimes referred to as only double sampling (DS) since the operation does not cancel the reset noise from transistor m1.

In an active pixel, anti-blooming is performed by the reset transistor, m1 in Fig. 2.4. By letting the low level of the gate voltage of this reset transistor be much higher than the ground voltage (a level around the sum of the threshold voltages of m1 and m3 is suitable), the voltage over the photodiode is hindered from going all the way down to ground. The high level of the gate voltage of m1 can also be tuned. If it is set to Vdd, the reset transistor will be in the subthreshold region at the end of reset; this is called soft reset. The voltage over the photodiode then only reaches to within about one threshold voltage of Vdd. Furthermore, the reset level reached depends slightly on the voltage over the diode at the start of the reset phase.

This is a phenomenon called image lag. To overcome these issues the high level of the gate voltage of m1 can be pumped up higher. If it is set higher than Vdd plus the threshold voltage of m1, the voltage over the photodiode will be reset all the way to Vdd. This is referred to as hard reset. A drawback with this approach is that the temporal reset noise from m1 becomes around twice as high as with the soft reset approach. It is possible to circumvent this increase in temporal noise and still reduce the image lag. A technique referred to as flushed reset has a separate wire routed to the pixel for the drain voltage of the reset transistor [4], [5]. This wire is dropped to a low voltage at the beginning of the reset phase and, thereby, causes a hard reset and removes the image lag. Then the voltage is increased back to Vdd and causes a soft reset. Other techniques for low reset noise are hard-to-soft reset, described in [4], and active reset, described in [6].

Compared to PPS, APS give lower temporal noise and shorter read time. Active pixels also feature the possibility to do multiple sampling, which is a technique for reaching high dynamic range by using several different integration times. This is possible since the readout of an active pixel is non-destructive, i.e., the pixel is not automatically reset when being read, as is the case with the passive pixel. This non-destructive readout is, however, not entirely compatible with the double sampling described above.

Since the built-in photodiode capacitance is slightly non-linear and the gain of the source follower varies slightly with the input voltage, the transfer function from detected photon to output voltage is not entirely linear. Integral non-linearity figures for a sensor using the three-transistor active pixel are often around 0.5 to 1%.

Variations in the pixel conversion capacitance and source follower gain cause conversion gain variations among the pixels in the sensor array. This causes gain FPN, which is undesired. For more information about PPS and APS see, e.g., [7].

Logarithmic Pixels

Fig. 2.5 shows a direct-mode logarithmic pixel.

Fig. 2.5: Active logarithmic pixel.

The voltage over the diode is set by the characteristics of the load transistor, m1, and the photocurrent. This means light integration is not needed. Since the photocurrent is typically very small, the load transistor will end up in the subthreshold region and the response will, therefore, be logarithmic. The logarithmic response gives the circuit a very high dynamic range. However, drawbacks are high FPN and slow response time for low light levels.

Global Shutter

The rolling shutter used in the passive pixel and the three-transistor active pixel causes distortion artifacts in fast-moving scenes. To overcome this, a global shutter, also referred to as a synchronous shutter, can be implemented. The global shutter makes it possible to expose all pixels simultaneously, which is not the case with the rolling shutter. Fig. 2.6 shows a 4-transistor integrating active pixel with a global shutter, and Fig. 2.7 shows a 5-transistor integrating active pixel with a global shutter.

Fig. 2.6: 4-transistor shuttered pixel.

Fig. 2.7: 5-transistor shuttered pixel.

A benefit of the 5-transistor solution is that it allows for simultaneous exposure and pixel read, which is important for high-speed imaging. The storage capacitor c1 in Fig. 2.6 and Fig. 2.7 could be implemented as a MOS capacitor; however, often the parasitic capacitances at the storage node are sufficient. With a global shutter it is very important that the shuttered node, node n1 in the figures, is not discharged by the incoming photons while the shutter is closed (i.e., when snap is low). A first measure to combat this is to use the metal layers to shield the source/drain diffusions connected to this node. If the process uses a twin well, it may be possible to have the transistors placed in a p-well, while no well is used in the region of the photodiode, leaving it in the lightly doped substrate [8]. The difference in doping concentrations then limits the diffusion of the electrons freed by the incoming photons.
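The multiple-sampling technique mentioned in the active pixel discussion above can be sketched as follows: the pixel is sampled after several integration times and the longest non-saturated sample is rescaled to a common exposure. This is only a conceptual illustration with made-up names (hdr_value, full_scale), not the processing performed by any particular sensor.

    # Conceptual multiple-sampling (high dynamic range) reconstruction.
    # samples[k] is the pixel value after integration time t_ints[k];
    # the longest non-saturated sample gives the best SNR.
    def hdr_value(samples, t_ints, full_scale=255):
        t_ref = max(t_ints)
        for value, t_int in sorted(zip(samples, t_ints), key=lambda s: -s[1]):
            if value < full_scale:               # not saturated
                return value * t_ref / t_int     # rescale to the reference exposure
        return full_scale * t_ref / min(t_ints)  # all samples saturated

    # Example: a bright pixel saturates at 8 ms and 2 ms but not at 0.5 ms.
    print(hdr_value([255, 255, 180], [8e-3, 2e-3, 0.5e-3]))  # -> 2880.0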

2.3 ADC

In general there exist three architectures for on-chip ADCs for CMOS image sensors: one single ADC for all pixels, one ADC per column, and one ADC per pixel. Some derivatives of these also exist, such as a few ADCs on-chip instead of one, a column-parallel ADC where a group of columns shares one ADC, or one ADC per group of, e.g., four neighboring pixels. One single ADC is often used in low-speed sensors. The pixel-level ADC, on the other hand, can handle very high speeds; however, it drastically decreases the FF. The column-parallel architecture can be used for both high-speed applications and low-power applications.

For the single ADC many different types of ADC architectures can be used. For pixel-level ADCs a single-slope approach is most often used, see, e.g., [9]. The single-slope approach is also often used for column-parallel ADCs [10]; however, successive approximation (SA) ADCs using switched-capacitor charge sharing [8] and cyclic ADCs [11] are also used as column-parallel ADCs. Compared to the cyclic and SA architectures, the single-slope ADC requires much less area and it is easier to reach good accuracy using the single slope. A drawback with the single slope is the speed, since one conversion requires 2^n clock cycles. It is, however, possible to trade accuracy or signal swing for higher speed, and the clock frequency can also be much higher than for the cyclic and SA ADCs. For more information about column-parallel ADCs see Paper III.
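The single-slope operation can be summarised with a small behavioural sketch: all columns share one counter-driven ramp, and each column latches the counter value when the ramp first passes its sampled voltage, so an n-bit conversion takes 2^n counter steps. This is a simplified conceptual model (ideal ramp and comparators, illustrative names only), not a description of any particular implementation.

    # Behavioural model of a column-parallel single-slope ADC (idealised).
    # One common counter/DAC ramp serves all columns; each column's comparator
    # latches the counter value when the ramp first exceeds its sampled voltage.
    def single_slope_convert(column_voltages, v_ref, n_bits=8):
        steps = 2 ** n_bits
        lsb = v_ref / steps
        codes = [steps - 1] * len(column_voltages)   # default: full-scale code
        latched = [False] * len(column_voltages)
        for count in range(steps):                   # 2**n_bits counter steps per conversion
            ramp = (count + 1) * lsb                 # common DAC ramp voltage
            for col, v in enumerate(column_voltages):
                if not latched[col] and ramp >= v:   # comparator trips, code is stored
                    codes[col] = count
                    latched[col] = True
        return codes

    print(single_slope_convert([0.10, 0.50, 0.99], v_ref=1.0))  # [25, 127, 253]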

References

[1] B. Fowler, CMOS area image sensor with pixel level A/D conversion, PhD thesis, Stanford University.
[2] I. Inoue, H. Tanaka, H. Yamashita, T. Yamaguchi, H. Ishiwata, and H. Ihara, Low-Leakage-Current and Low-Operating-Voltage Buried Photodiode for a CMOS Imager, IEEE Trans. Electron Devices, vol. 50, no. 1, Jan. 2003.
[3] A. Moini, Vision chips or seeing silicon, Technical Report, Centre for High Performance Integrated Technologies and Systems, The University of Adelaide, Australia, Mar.
[4] B. Pain et al., Analysis and enhancement of low-light-level performance of photodiode-type CMOS active pixel imagers operated with sub-threshold reset, in 1999 IEEE Workshop on Charge-Coupled Devices and Advanced Image Sensors, Nagano, Japan, June 1999.
[5] H. Tian, B. Fowler, and A. El Gamal, Analysis of Temporal Noise in CMOS Photodiode Active Pixel Sensor, IEEE J. Solid-State Circuits, vol. 36, no. 1, Jan. 2001.
[6] B. Fowler, M. D. Godfrey, J. Balicki, and J. Canfield, Low Noise Readout using Active Reset for CMOS APS, Proc. SPIE, vol. 3965, San Jose, CA, 2000.
[7] A. El Gamal, Lecture notes in the course EE392 at Stanford University.
[8] A. I. Krymski and N. Tu, A 9-V/Lux-s 5000-Frames/s CMOS Sensor, IEEE Trans. Electron Devices, vol. 50, no. 1, Jan. 2003.
[9] S. Kleinfelder, S. Lim, X. Liu, and A. El Gamal, A 10kframes/s 0.18 µm CMOS Digital Pixel Sensor with Pixel-Level Memory, ISSCC Dig. Tech. Papers, 2001.
[10] K. Chen, M. Afghahi, P-E. Danielsson, and C. Svensson, PASIC: A Processor-A/D converter-Sensor Integrated Circuit, Proc. IEEE Int. Symp. Circuits and Systems (ISCAS 90), vol. 3, May 1990.
[11] K. Chen and C. Svensson, A Parallel A/D Converter Array Structure with Common Reference Processing Unit, IEEE Trans. Circuits Syst., vol. 36, no. 8, Aug. 1989.


Paper I

A Multiresolution 100-GOPS 4-Gpixels/s Programmable Smart Vision Sensor for Multisense Imaging

L. Lindgren, J. Melander, R. Johansson, and B. Möller

IEEE J. Solid-State Circuits, vol. 40, no. 6, June 2005.

A Multiresolution 100-GOPS 4-Gpixels/s Programmable Smart Vision Sensor for Multisense Imaging

Leif Lindgren, Johan Melander, Member, IEEE, Robert Johansson, and Björn Möller

Abstract

This paper presents a multiresolution general-purpose high-speed machine vision sensor with on-chip image processing capabilities. The sensor comprises an innovative multiresolution sensing area, 1536 A/D converters, and a SIMD array of 1536 bit-serial processors with corresponding memory. The sensing area consists of an area part with 1536 × 512 pixels, and a line-scan part with a set of rows with 3072 pixels each. The SIMD processor array can deliver more than 100 GOPS sustained and the on-chip pixel-analysing rate can be as high as 4 Gpixels/s. The sensor is ideal for high-speed multisense imaging where, e.g., colour, greyscale, internal material light scatter, and 3-D profiles are captured simultaneously. When running only 3-D laser triangulation, a data rate of more than 20,000 profiles/s can be achieved when delivering 1536 range values per profile with 8 bits of range resolution. Experimental results showing very good image characteristics and good digital-to-analogue noise isolation are presented.

Index Terms: APS, CMOS image sensors, laser triangulation, machine vision, MAPP, multiresolution, multisense, smart vision sensors, 3-D.

L. Lindgren was with IVP Integrated Vision Products AB, and is now with Synective Labs AB, Linköping, Sweden (e-mail: leifl@isy.liu.se).
J. Melander is with SICK IVP AB, Linköping, Sweden (e-mail: jme@sickivp.se).
R. Johansson was with IVP Integrated Vision Products AB, and is now with Micron Imaging, 0349 Oslo, Norway (e-mail: rjohansson@micron.com).
B. Möller was with IVP Integrated Vision Products AB, and is now with Metrima AB, Linköping, Sweden (e-mail: fullcustom.designer@home.se).

1 Introduction

Today two main alternatives exist for general image sensing: charge-coupled devices (CCD) and CMOS image sensors. CCDs still offer the best image performance in high-end systems such as space and medical imaging. However, in contrast to CCDs, CMOS image sensors offer the possibility of system integration on one chip [1-4], giving advantages such as high speed, compact system, low cost, and low power.

Machine vision spans a wide range of applications, where one of the biggest and fastest growing segments is real-time control in factory automation. Economic aspects today are pushing this segment towards 100% in-line inspection without slowing down production. A clear trend in machine vision is the move towards augmenting the normal 2-D greyscale inspection with 3-D measurements. Many critical inspection tasks require control of the third dimension, and the trend is perhaps clearest in electronics, e.g., solder paste volume, and wood inspection, e.g., wane detection (wane is missing wood due to the curved log exterior, and knowledge of wane position and defects is essential for cut selection). A general-purpose machine vision camera system in this segment can be characterised as a high-speed 2-D/3-D measurement system. As a consequence of the high speed, and thus short integration times, high sensitivity is also required. Furthermore, a compact system is important in order to get close enough to the objects or just to be able to mount the system on moving parts. Image quality requirements are in most cases mid-range. With the potential of system integration, mid-range image quality, and sensitivity comparable to CCDs, a CMOS image sensor seems to be the ideal candidate for machine vision.

The sensor presented is a further development and extension of the previous sensors LAPP [1], PASIC [2], MAPP2200 [3], and MAPP2500 [5]. It is a general-purpose high-speed smart machine vision sensor, fabricated in a standard 0.35 µm triple-metal double-poly CMOS process. The sensor integrates on one chip an innovative multiresolution sensing area, containing one area part and one high-resolution line-scan part (HiRes), column-parallel A/D conversion, and a column-parallel single-instruction multiple-data (SIMD) processor. The processor implements image-oriented instructions and delivers more than 100 giga operations per second (GOPS) sustained. The processor architecture can be used for a variety of image processing tasks such as filtering, template matching, edge detection, run-length coding, and line-scan shading correction. Although being general-purpose, the main application field for this sensor is high-speed high-resolution multisense imaging where, e.g., colour, greyscale, internal material light scatter, and 3-D profiles are captured simultaneously.

This is enabled by the ability to dedicate different rows of the sensor to different measurement tasks, the high-speed pixel readout, and the on-chip SIMD processor. The multiresolution concept and the high level of integration enable low-cost high-performance machine vision systems.

Section 2 exemplifies a multisense system. Section 3 describes the architecture of the sensor. The actual circuit implementation is described in Section 4 and experimental results are provided in Section 5. Section 6 compares the presented sensor with other advanced sensors. Finally, Section 7 concludes the paper.

2 Multisense Imaging

A typical multisense system utilising the presented sensor is depicted in Fig. 1. A horizontally moving object is inspected, and 3-D shape, internal material light scatter, colour, and greyscale are extracted in a single pass using the same camera system.

Fig. 1: A typical multisense system.

One part of the area sensor is dedicated to 3-D shape measurements. The 3-D extraction is achieved using the well-known sheet-of-light technique, where profile data are acquired by triangulation [6].

A laser line is projected on the target, and the offset position of the reflected light on the sensor carries the 3-D data for one profile. The complete 3-D shape of the object is built up from consecutive profiles of the moving target. On-chip hardware implements all data processing necessary for high-quality profiles to be sent out from the sensor. Running 3-D profiles only, a data rate of more than 20,000 profiles/s can be achieved with this sensor when delivering 1536 range values per profile with 8 bits of range resolution. This translates to the extremely high pixel-analysing rate of 4 Gpixels/s. When going down to 7 bits of range resolution the data rate increases to more than 40,000 profiles/s, producing 61 M range values per second.

A second part of the area sensor is devoted to internal material light scatter measurements [7]. When light strikes a surface it will be scattered within the material and give a bright region around the point of incidence. The amount of scattering depends on the optical density of the material; low-density materials will scatter more than high-density materials. This effect is very useful in inspection of vegetables and in wood inspection where, e.g., a knot will scatter much less light than the surrounding clear wood due to its higher density and different fibre orientation. A laser line illumination source is mounted perpendicular to the object, and from this light a reflected and a scattered component are extracted. As for the case of 3-D profiles, hardware support is implemented for effective laser scatter line composition.

A third part of the area sensor is coated with colour filters, typically red, green, and blue filters, and provides colour information. A white light source is focused on this area and 8-bit data per colour channel are sent out from the chip.

The HiRes rows constitute a fourth sensor part. A white light source, preferably the same as for the colour section, is focused on the HiRes rows and 8-bit high-resolution greyscale line-scan data are sent out from the sensor. The sensor can, e.g., deliver 8-bit range data, 8-bit scatter data, 8-bit colour data per channel, and 8-bit greyscale data 7900 times per second. This translates to a data rate of 777 Mbit/s.
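To make the sheet-of-light principle concrete, the sketch below shows the per-column operation in its simplest form: find the sub-pixel row position of the laser line in each sensor column, which then maps to height through the triangulation geometry. This is only a conceptual illustration with made-up names, not the algorithm implemented on the chip.

    # Conceptual per-column laser-line localisation for sheet-of-light profiling.
    # 'image' is a list of rows (each a list of pixel intensities). The returned
    # sub-pixel row offsets (one per column) map to height via the camera/laser
    # geometry of the actual setup.
    def laser_line_offsets(image, threshold=30):
        num_rows, num_cols = len(image), len(image[0])
        offsets = []
        for col in range(num_cols):
            column = [image[row][col] for row in range(num_rows)]
            weight = sum(v for v in column if v > threshold)
            if weight == 0:
                offsets.append(None)  # no laser line detected in this column
                continue
            # Centre of gravity of the above-threshold pixels gives a sub-pixel position.
            cog = sum(row * v for row, v in enumerate(column) if v > threshold) / weight
            offsets.append(cog)
        return offsets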

3 Architecture

3.1 System Level

Fig. 2 depicts the high-level chip block diagram. The core of the chip consists, with a few exceptions, of a linear array of 1536 columns butted together in the x-direction. Each column contains a part of the multiresolution sensor area, analogue readout, A/D conversion/thresholding, processor, and registers. The instruction decoding and digital control, analogue switch control (AS), and pixel array row addressing are placed on the left side of the core. The dataport logic (DL), A/D conversion logic (AL), and the DAC used for A/D conversion are placed on the right side.

Fig. 2: Chip block diagram.

The chip has a simple interface where instructions are sent on a 24-bit bus, B, synchronised with the main chip clock, E, running at 33 MHz. The chip interprets and executes instructions within a single clock cycle, in most cases. Instructions sent to the core array constitute a typical example of a single-instruction multiple-data (SIMD) processor architecture. Thus, a single instruction is distributed to all columns and each column executes the instruction using its respective column data. The instruction bus, B, can be turned around by changing the read-write signal, RW, and status information can then be read instead. A fraction of the registers in the core array are augmented with a dataport capability. This gives the possibility to stream out data on a 32-bit wide bus, DP, controlled by a separate clock, S, at 1.1 Gbit/s.

The parallelism is further increased by using a third separate clock, A, that controls the progress of the A/D conversion. The parallelism implemented in this design by the use of three separate clock domains, E, S, and A, allows for highly parallel high-speed algorithm implementations.

3.2 Multiresolution Sensor Area and Analogue Readout

The analogue part of one column is shown in the left part of Fig. 3.

Fig. 3: The column architecture.

The multiresolution sensor area is made up of two different parts. The area part has 1536 columns and 512 rows with 9.5 µm square pixels, and the HiRes part has 3072 columns and a set of rows with a high and narrow pixel. The pixel array has 1536 vertical column lines for reading out the pixel voltages. To connect the HiRes rows to the vertical lines, each HiRes row has two row addresses, making it possible to first read out the even columns and then the odd columns, see Fig. 3. Each of the vertical lines in the pixel array connects to a correlated double sampling (CDS) and digitally programmable gain amplifier (DPGA) circuit. This circuit performs CDS, with a programmable gain with four settings, and can also perform analogue row binning. The sensor area (including HiRes) is fully addressable via the row address decoder and the column selectability of the dataport. This enables multiple windowing and region-of-interest readout. Furthermore, the row address decoder allows for different integration times for different rows.

3.3 A/D Conversion

The A/D conversion is column parallel and of single-slope converter type [2]. This is implemented with a comparator and an 8-bit memory in each column, and a common counter that feeds a DAC and the 8-bit in-column memory. The counter counts from 0 to 255, making the DAC produce a voltage ramp. When the ramp exceeds the voltage from the DPGA/CDS circuit, the comparator switches and the counter value is loaded into the column memory (ADREG in Fig. 3). The ADC resolution is programmable from 3 to 8 bits, with a fast pseudo conversion (FPC) option. When using FPC the counter step size is doubled after reaching the value 64, thereby decreasing the greyscale resolution of highly illuminated parts of the image.

The A/D conversion is clocked at 33 MHz, resulting in 7.7 µs per 8-bit conversion and 4.8 µs per 8-bit FPC conversion.

An advantage with this ADC topology is the ability to perform fast thresholding, a crucial operation in many image processing algorithms. Besides making the DAC produce a ramp, it is possible to load a digital value into the DAC and thereby produce a voltage for the threshold operation. The DAC features a 256-step programmable gain and a 256-step programmable offset, making it possible to do an automatic calibration of the A/D conversion. If the calibration is performed under a well-defined illumination, it is possible to make different sensors have the same photo response.

The programmable gain and offset can also be used together with the on-chip digital processor to implement an innovative dithering scheme, making it possible to do A/D conversions with more than 8 bits of resolution. A 9-bit conversion is realised by adding the results from two 8-bit conversions, where one of the conversions was made with the ADC offset changed by 1/2 LSB. In a similar manner 9.5-bit and 10-bit conversions can be obtained. Since the ADC offset step size is independent of the ADC gain setting, the gain setting has to be chosen so that the step size equals 1/2 LSB, or 1/3 LSB in the 9.5-bit case and 1/4 LSB in the 10-bit case, to get a low DNL. The dithering scheme is compatible with the FPC option. For instance, a 9-bit conversion takes 16 µs, a 9.5-bit conversion takes 24 µs, a 10-bit conversion takes 32 µs, and a 10-bit pseudo conversion takes 20 µs.
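The arithmetic behind the dithering scheme can be illustrated with a small sketch: summing k ideal 8-bit conversions whose offsets are shifted in steps of 1/k LSB yields a code with a resolution of LSB/k. This is only a numerical illustration of the principle under ideal-quantizer assumptions, not a model of the actual converter.

    # Numerical illustration of the offset-dithering scheme: k conversions with the
    # ADC offset shifted in steps of 1/k LSB sum to a code with LSB/k resolution,
    # i.e. log2(k) extra bits on top of the individual 8-bit conversions.
    def quantize8(v, lsb, offset=0.0):
        code = int((v - offset) // lsb)      # ideal 8-bit quantizer with shifted offset
        return max(0, min(255, code))

    def dithered_code(v, lsb, k):
        return sum(quantize8(v, lsb, offset=i * lsb / k) for i in range(k))

    lsb = 1.0 / 256
    v = 0.3002
    print(quantize8(v, lsb))         # 76  (plain 8-bit code)
    print(dithered_code(v, lsb, 2))  # 152 (9-bit code, LSB/2 resolution)
    print(dithered_code(v, lsb, 4))  # 304 (10-bit code, LSB/4 resolution)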

3.4 Processor and Registers

The processor and registers constitute a high-performance image processing unit through the use of massive parallelism and the implementation of image-oriented instructions. The processor in each column is bit-serial, allowing for extremely high-speed binary image processing. The bit-serial approach is also flexible, as it allows variable word lengths and data formats to be used for greyscale image processing. For example, for each row, a 3 × 3 Gauss filtering takes 8b cycles and a 3 × 3 Sobel filtering takes 11b cycles, where b is the number of bits used. Run-length coding takes 10 clock cycles per object per row, which, e.g., makes it possible to capture binary images and output run-length coding at more than 600 full frames per second, when allowing ten objects per row. Furthermore, a multiplication takes 3b^2 cycles and can for instance be used for shading correction in a line-scan application.

A column in the digital part of the sensor is illustrated in the right part of Fig. 3 and consists of the following parts. PD is a 1-bit register that holds the thresholded value of the column, and ADREG is an 8-bit register that holds the A/D converted column value. Below that is the processor part, consisting of a global logical unit (GLU), a neighbourhood logical unit (NLU), and a point logical unit (PLU) with status registers and 16 accumulators. Further below are the global feature extraction units COUNT and GOR. Finally, at the bottom are the general registers, consisting of 96 bits per column (RREG) and 16 bits per column (SREG), where SREG is also part of the high-speed dataport. Each column carries a 1-bit ALU line that is the intra-column communication channel between the different blocks in the digital part.

The simplest type of instructions are the PLU-instructions, which perform Boolean operations on the column data. The increased number of bits in the accumulator, 16 bits compared to 1 bit in earlier sensors, together with new instructions for addition and subtraction, increases the speed of greyscale image processing. Typical for image processing is the fact that the result for a pixel depends on its neighbours, as in median filtering or template matching.

The NLU was designed for this purpose, allowing a single instruction for a three-column median filtering or template match. Note that the NLU and PLU are tightly integrated, meaning that a PLU-instruction can only be performed by passing data through the NLU. The result of an NLU/PLU-instruction is stored in one of the 16 accumulators. Since all NLU/PLU-instructions can be performed in a single clock cycle, the arithmetic performance is very high, exceeding 100 GOPS.

The GLU was added to circumvent the problems associated with global instructions and SIMD architectures. The GLU provides a set of instructions where each column result depends on all columns' input data, see [3]. The global feature extraction units, COUNT and GOR, operate on accumulator 0. The COUNT feature outputs a digital value equal to the total number of ones in accumulator 0 in all columns. The GOR feature is a single-bit result operation performing a global OR on accumulator 0 in all columns. The results from GOR and COUNT are available on the B bus together with other status information. In addition, the COUNT value can be read from the dataport and the GOR result is available on an external pad.

4 Circuit Implementation

A chip photograph is shown in Fig. 4.

Fig. 4: Chip photo.

The chip measures 16.8 mm × 11.2 mm and comprises 5.8 M transistors. It is implemented in a 0.35 µm 3.3 V/5 V triple-metal double-poly standard CMOS process. Both the analogue and the digital parts of the sensor are powered with 3.3 V. The large chip dimensions, mixed-signal nature, and the use of relatively high-speed synchronous control present many design challenges. Special attention has, for instance, been focused on mixed-signal noise isolation, large distance signalling, synchronisation, and acceleration techniques for parallel feature extraction logic.

4.1 Mixed-Signal Aspects

In this design the use of EPI-wafers, consisting of a thin, lightly doped p-type epitaxial layer on top of a heavily doped bulk, is required from an image sensing point of view.

EPI-wafers offer a high-quality substrate for the photodiodes, which yields lower and more uniform dark currents. Furthermore, EPI-wafers relax the need for substrate contacts in each pixel, thus increasing the fill factor (FF). EPI-wafers also offer the potential for good digital-analogue noise isolation if a low-ohmic die backside connection to ground can be made, combined with the use of separate power domains [8], a technique used in this design. Although having only a small impact on noise isolation for EPI-wafers, traditional techniques such as the use of guard rings, separating digital and analogue parts by distance, etc., have also been used. On-chip decoupling has been used extensively for the power supplies. There is 20 nF of gate decoupling capacitance in the analogue domain and 17 nF in the digital domain.

4.2 Large Distance Signalling

A multitude of horizontal control signals are used in this design, each measuring about 15 mm and connecting to an input in each column. The clocking approach in this design uses two non-overlapping clocks and their inverses, hence it is critical to synchronise and preserve the non-overlap inside a column and between columns.

The minimal non-overlap that can be used is determined by the RC-time-constant spread of all control signals. Initially, a variation in control signal load capacitance between 12 pF and 100 pF prevented a fast design, i.e., a small non-overlap. This was relaxed by the innovation of making every four columns share a local inverter and instead distributing the inverse of the control signal. This decreased the variation in capacitive load to 5-12 pF, and significantly reduced the rise and fall times of the clock signal seen in each column. A standard-sized control signal driver was then used and the RC-time constant was equalised among horizontal control signals by sizing the wire width. In this design this resulted in an RC-time constant of 1 ns and wire widths of 2-5 µm. For analogue control signals (pixel array, readout, and ADC) local inverters are not used. Non-overlap is instead guaranteed by delaying ON-signals half a clock cycle compared to OFF-signals.

4.3 GLU, COUNT, and GOR

Intricate and innovative acceleration structures have been used for the GLU, GOR, and COUNT. Simulations at 85 °C using worst-speed transistor models and distributed RC-extraction netlists show that COUNT needs 35 ns to evaluate for the worst input condition. This means that one clock cycle is required after an instruction that alters accumulator 0 before a valid COUNT value can be read from the sensor. Similar simulations of the GLU show that the result is not ready within 15 ns in the worst case. Due to instruction timing issues this also means that it will take one extra clock cycle before the GLU result can be read. Often this does not mean that an algorithm is slowed down, since one instruction, not requiring the GLU/COUNT result and not affecting accumulator 0, can be issued before using the result. Simulations of GOR showed 6.1 ns for the worst condition, which is when only accumulator 0 in the rightmost column is set.

4.4 Pixels and Analogue Readout

The pixels are standard three-transistor active pixels. The fill factor, being the pixel area not covered by metal or n+ diffusion, is calculated to be 60% for the array pixels and 80% for the HiRes pixels. The voltage levels at the gate of the reset transistor are controlled from the outside of the chip, making both soft reset and hard reset possible [9].

Hard reset has been used due to the demand for high speed and no image lag.

The DPGA/CDS circuit is shown in Fig. 5 along with the comparator, one column line, and one pixel.

Fig. 5: DPGA/CDS circuit along with the comparator, column line, and a pixel.

The circuit contains two independent readout paths, one using an SC-amplifier and the other using a single capacitor. The SC-amplifier is an inverting voltage amplifier with three gain settings, -1, -3, and -4. The gain is set by the g0 and g1 switches. The operational transconductance amplifier (OTA) is of folded-cascode type and its offset is removed by offset compensation at the comparator (sr5).

When using the single-capacitor readout (SCR), g0, g1, and si2 are turned off and ng is turned on. The right side of the capacitor C3 is charged either by vrampp, which is the positive output from the DAC, or by the external reference signal vdac0, when the signal is read from the pixel. In the next phase the pixel is reset and the right side of the C3 capacitor is floating, making it follow the voltage change on the column line. This results in a voltage change on the negative input of the comparator that corresponds to the difference between the signal value and the reset value from the pixel, resulting in CDS being performed (this is sometimes referred to as only double sampling (DS) or double data sampling (DDS), since the temporal noise of the reset transistor is not cancelled). With the SCR it is also possible to do true CDS, meaning the reset voltage is first read, then the pixels integrate light, and then the signal voltage is read. This cancels the reset noise in the pixel and can, for instance, be used in an application requiring extremely low temporal noise for greyscale line-scanning. No limitations on the integration time exist for this readout mode, except for limitations set by dark current in the pixels and the fact that this readout mode slows down operation since no other rows can be accessed during this integration.

By operating g0 and g1 in a special way it is possible to read out a pixel value with gain -1 and use that value for thresholding or A/D conversion. The gain -1 value can then be amplified by a factor of four with the pixel voltage still at reset level. The gain -4 value can then be used for thresholding or A/D conversion. This opens the possibility of high dynamic range algorithms.

4.5 A/D Conversion

To get a monotonic and glitch-free ramp with very low differential non-linearity (DNL) and integral non-linearity (INL), the 8-bit DAC is of thermometer-decoded current-steering type [10].

4.5 A/D Conversion

To get a monotonic and glitch-free ramp with very low differential non-linearity (DNL) and integral non-linearity (INL), the 8-bit DAC is of thermometer-decoded current-steering type [10]. The DAC produces two outputs, where one ramps down, vrampn, whereas the other ramps up, vrampp. According to Fig. 5, either of the two ramps can be connected to the positive input of the comparator. The reason for this is that vrampp is used together with the SCR path, whereas vrampn is used with the SC-amplifier path. The DAC has a programmable offset for each of the two outputs, and the offsets are individually set with 8-bit resolution. Within the DAC there is also an 8-bit current-steering DAC that produces a voltage controlling the amount of current supplied by the current sources producing the two ramps. This makes it possible to change the swing of the ramps in 256 steps, from 0.55 V to 1.6 V. The gain setting does not affect the offset current sources. It is important that the signal from the DAC is stable over temperature; therefore, a circuit based on an on-chip bandgap voltage reference has been used to achieve this.

To get a low column-to-column fixed-pattern noise (FPN) it is important to have comparators with very low input offset. It is desirable that the comparators draw a constant current, so that they do not produce switching noise on the power supply, and that they have a high PSRR. Furthermore, the comparators should be insensitive to charge injection and clock feed-through (CI/CFT) from switches turning off, so that there will be no low-frequency FPN in the form of a predictable column gradient due to the fact that CI/CFT depend on the fall time of the gate signal [11]. To meet these requirements a four-stage comparator is used. The first two stages are fully differential, the third stage is an innovative constant-current differential-to-single-ended converter, and the fourth stage is a conventional inverter that makes the output rail-to-rail. Offset compensation is realised by output offset storage (OOS) for the first stage and input offset storage (IOS) for the second stage [12]. The first three stages are connected to an analogue power supply and they draw almost constant current even when switching. The inverter is driven by a digital power supply in order not to produce noise on the analogue power domain.
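As a numerical aside on the ramp described above, and assuming that the programmed swing is spread evenly over the 256 codes of an 8-bit conversion, the quantisation step is

\Delta = \frac{V_\mathrm{swing}}{2^{8}}, \qquad
\frac{0.55\,\mathrm{V}}{256} \approx 2.1\,\mathrm{mV}, \qquad
\frac{1.0\,\mathrm{V}}{256} \approx 3.9\,\mathrm{mV}, \qquad
\frac{1.6\,\mathrm{V}}{256} \approx 6.3\,\mathrm{mV},

so the programmable swing trades input range against step size; the 1 V setting used for most of the measurements below corresponds to a step of roughly 3.9 mV.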

5 Experimental Results

For evaluation purposes the sensor was mounted chip-on-board on an evaluation board. An integrating sphere and a stabilised DC light provide uniform illumination. Many of the test results from the A/D conversion and the array pixels are listed in Table 1. For most measurements the DAC swing was set to 1 V and 8-bit A/D conversions were performed.

Table 1: Characteristics for the DAC, the ADC, and the array pixels

Parameter                                           Value
Global DAC DNL (1 V)                                RMS = 0.04 LSB, max = 0.12 LSB
Global DAC INL (1 V)                                RMS = 0.04 LSB, max = 0.13 LSB
ADC DNL (1 V)                                       RMS = 0.06 LSB, max = 0.50 LSB
ADC INL (1 V)                                       RMS = 0.08 LSB, max = 0.45 LSB
Photo response INL                                  RMS = 0.09%, max = 0.16%
FF (array pixels)                                   60%
QE                                                  45% at 610 nm
Pixel capacitance                                   7.5 fF
Dynamic range                                       62 dB (65 dB with reset on)
Dark signal                                         0.35 LSB/ms at 60 °C
Conversion gain (DPGA gain G=1, G=4)                14.5 µV/e-, 62.0 µV/e-
Temporal noise (RMS, in dark; G=1, G=4)             900 µV, 3.99 mV
Temporal noise (RMS, in dark, pixel reset on;
  G=1, G=4)                                         700 µV, 2.76 mV
FPN (RMS, dark, entire array; G=1, G=4)             0.15 LSB, 0.22 LSB
Gain FPN (global, entire array)                     1% RMS
Gain FPN (local pixels)                             0.58% RMS
Image lag                                           Below measurement limit
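As a cross-check of the Table 1 entries, and assuming the 1 V swing with 8-bit conversion used for most measurements (so that one LSB is roughly 3.9 mV), the dark noise figures at DPGA gain G = 1 can be expressed in electrons through the 14.5 µV/e- conversion gain:

\sigma_\mathrm{temp} \approx \frac{900\,\mu\mathrm{V}}{14.5\,\mu\mathrm{V/e^-}} \approx 62\ \mathrm{e^-}, \qquad
\sigma_\mathrm{FPN} \approx \frac{0.15 \times 3.9\,\mathrm{mV}}{14.5\,\mu\mathrm{V/e^-}} \approx 40\ \mathrm{e^-}, \qquad
\sqrt{62^{2} + 40^{2}} \approx 74\ \mathrm{e^-},

which appears consistent with the combined 74 e- noise figure quoted in the comparison with other sensors in Section 6.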

It was possible to characterise the global DAC and the column ADC separately, owing to the possibility of multiplexing the DAC output out to an external ADC and the possibility of feeding an analogue value from an external DAC to the input of the internal ADC. The RMS INL of the DAC and the ADC was calculated as the RMS value of the INL vector obtained when fitting a curve using least-squares approximation, while the max INL was calculated as the maximum value of the INL vector obtained when fitting a curve by minimising the maximum deviation. Fig. 6 shows the measured DNL vector and the minimise-max INL vector for the DAC; extracted values are found in Table 1. The results are very good, and the 0.13 LSB max INL gives a relative accuracy of 10.9 effective number of bits (ENOB) for the DAC.

The ADC INL and DNL were measured individually for all 1536 columns, and the results for all columns were very similar. Fig. 7 shows the measured DNL vector and the minimise-max INL vector for the ADC in column 0 for a standard 8-bit conversion. The much shorter conference paper [13] showed higher values for the ADC INL than the ones presented in Table 1. This is because the INL was then calculated using data from the entire DAC sweep instead of the found DNL points. This makes the INL RMS of an ideal ADC equal to the quantisation noise and the INL max equal to 1/2 LSB (it is worth noting that the INL RMS of 0.31 LSB presented in [13] is very close to the theoretical quantisation noise of 1/√12 ≈ 0.29 LSB).

The ADC DNL measurements showed that the very first few steps of the A/D conversion were smaller than the rest of the steps, see Fig. 7. It is believed that this is due to the delay of the DAC and the comparators becoming settled only after a few DAC steps. The smaller steps cause both the max DNL and max INL of the ADC to be close to 0.5 LSB, which is considerably higher than for the DAC. In reality the smaller steps actually give higher resolution for the first grey levels. However, if this is not desirable it is possible to use the other of the two ramps for the A/D conversion and then invert all the bits in the digital output from the conversion. This will make the smaller steps show up at the other end of the range, i.e., around grey level 255. Another way is to use the programmable offset to move the ramp so that the signal will always be higher than the first few grey levels. If this latter method is used, the max DNL and max INL of the used ADC range become close to the non-linearity values for the DAC. Both these methods have been tested and work as expected.

The photo-response integral non-linearity for the entire signal chain, i.e., from photons to digital value, was measured under uniform illumination using 128 equally spaced integration times. At each integration time 20 images were captured and the average pixel value calculated.
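A sketch of the two fitting procedures described above: a least-squares straight-line fit for the RMS INL and a minimax (Chebyshev) straight-line fit for the max INL, applied to hypothetical measured code-transition levels. Names and data are illustrative, and SciPy's linear-programming routine is used to solve the minimax fit.

import numpy as np
from scipy.optimize import linprog

def dnl_inl(transitions):
    """DNL plus RMS and max INL from measured code-transition levels (hypothetical data).

    transitions: input level (volts) at which the output code switches from k to k+1.
    """
    t = np.asarray(transitions, dtype=float)
    codes = np.arange(t.size)
    lsb = (t[-1] - t[0]) / (t.size - 1)          # average step defines the LSB
    dnl = np.diff(t) / lsb - 1.0

    # Least-squares line fit -> INL vector, summarised by its RMS value.
    a_ls, b_ls = np.polyfit(codes, t, 1)
    inl_ls = (t - (a_ls * codes + b_ls)) / lsb

    # Minimax line fit as a linear programme: minimise e with |t_k - (a*k + b)| <= e.
    ones = np.ones_like(t)
    A_ub = np.vstack([np.column_stack([codes, ones, -ones]),
                      np.column_stack([-codes, -ones, -ones])])
    b_ub = np.concatenate([t, -t])
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    a_mm, b_mm = res.x[0], res.x[1]
    inl_mm = (t - (a_mm * codes + b_mm)) / lsb

    return dnl, np.sqrt(np.mean(inl_ls ** 2)), np.max(np.abs(inl_mm))

The least-squares residuals give the RMS INL figure, while the minimax fit gives the smallest possible maximum deviation and hence the max INL figure.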

Fig. 6: Measured DNL and INL for the internal DAC.

Fig. 7: Measured DNL and INL for the internal ADC.

As for the DAC and ADC, the INL was calculated by fitting a curve both using least-squares approximation and by minimising the maximum deviation. The results were extremely good, with an INL of 0.09% RMS and 0.16% max.

The FPN in dark and the gain FPN were measured using an ADC swing of 1 V. FPN was measured as the RMS error, i.e., the standard deviation, among the pixels in temporally averaged images captured at room temperature, i.e., it also contains dark-current FPN. For the dark FPN the integration time was 400 µs. The results are given in Table 1. Fig. 8 shows the global and local (9 × 9 neighbourhood) total FPN at different grey levels.

Fig. 8: Measured global and local total FPN vs. grey level.
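A minimal sketch of the FPN measure described above, assuming a stack of frames captured under constant illumination. The 9 × 9 window matches the neighbourhood used for Fig. 8, but the data, the names, and the choice to summarise the local values by their RMS are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def fpn(frames, window=9):
    """Global and local fixed-pattern noise from a stack of frames (n_frames, rows, cols)."""
    # Temporal averaging suppresses temporal noise and leaves the fixed pattern.
    mean_img = np.asarray(frames, dtype=float).mean(axis=0)

    # Global FPN: standard deviation over all pixels of the averaged image.
    global_fpn = mean_img.std()

    # Local FPN: pixel-to-pixel spread within each window x window neighbourhood,
    # summarised here as the RMS over all neighbourhoods.
    local_mean = uniform_filter(mean_img, size=window, mode='reflect')
    local_sq = uniform_filter(mean_img ** 2, size=window, mode='reflect')
    local_var = np.clip(local_sq - local_mean ** 2, 0.0, None)
    local_fpn = np.sqrt(local_var.mean())

    return global_fpn, local_fpn

Dividing the results by the conversion gain, or relating them to the mean signal for the gain FPN, expresses the same quantities in electrons or per cent as in Table 1.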

There are two possible ways to read out the reset value from a pixel: either while the reset transistor in the pixel is still turned on, or after turning the reset transistor off. Having the reset transistor on results in lower temporal noise, since the noise from the reset transistor then becomes low-pass filtered via the pixel source-follower readout, but it also results in higher FPN due to charge injection and clock feed-through from the reset transistor.

For accurate measurements of the ADC-input-referred temporal noise the method presented in [14] was used. Fig. 9 shows the measured ADC-input-referred temporal noise as a function of the mean ADC input signal with reset on.

Fig. 9: Measured temporal noise referred to the ADC input vs. ADC input mean (reset on).

Calculating the theoretical RMS reset noise in the pixel [9] and adding the simulated temporal noise from the pixel source follower and the comparator results in 810 µV for reset off. This is slightly lower than the 900 µV measured. Another observation from the measurements is that heavy digital activity, including fast I/O, affects the temporal noise only marginally; a small increase can be measured, depending on the readout mode. No effect on the FPN was measured. This verifies that the noise isolation in the sensor is very good.

The conversion gain, from collected electrons to the ADC input, was calculated by applying Poisson statistics to the measured temporal noise in Fig. 9 [15]. From the conversion gain the pixel capacitance was calculated to 7.5 fF.
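The Poisson argument behind this estimate is the standard photon-transfer relation of [15]. Writing the conversion gain as g (volts at the ADC input per collected electron) and the mean number of collected electrons as \bar N, the shot-noise variance referred to the ADC input is

\sigma_V^{2} \;=\; g^{2}\,\bar N \;=\; g\,\bar V, \qquad \bar V = g\,\bar N,

so g is the slope of the measured noise variance versus the mean signal. The photodiode capacitance then follows from the conversion gain referred back to the diode node, C_\mathrm{pd} = q / g_\mathrm{pd}; with C_\mathrm{pd} = 7.5 fF this corresponds to q/C_\mathrm{pd} \approx 21 µV/e- at the diode, the difference from the 14.5 µV/e- ADC-referred value presumably reflecting the sub-unity gain of the source follower and readout chain.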

The absolute spectral response was measured for the pixel array using a focused light source with variable wavelength, controllable in 5 nm steps, see Fig. 10. The peak QE·FF is 27% and occurs at 610 nm; using an FF of 60%, a peak QE of 45% is obtained.

Fig. 10: Measured quantum efficiency vs. light wavelength.

The power consumed for a typical high-speed application is 680 mW. This is distributed as 24% in the digital part (including I/O), 20% in the global DAC, and the remaining 56% in the analogue part. It should be noted that the analogue part has also been designed to work at half of the nominal bias current, set by external resistors, which effectively halves the analogue power, thus trading lower speed for lower power.

To measure the dark current as a function of temperature, an on-chip temperature sensor has been used for junction-temperature measurements. The dark current results in an average pixel leakage of 0.35 LSB/ms at 60 °C, and it doubles for every 10 °C increase in temperature.

To validate the increased horizontal resolution of the HiRes rows compared to the area pixels, the modulation transfer function (MTF) was measured [16]. The result of the MTF measurement was 0.4 at the Nyquist frequency for the HiRes rows and 0.45 at the Nyquist frequency for the area pixels. Since the Nyquist frequency is twice as high for the HiRes rows as for the area pixels, it can be concluded that the horizontal resolution of the HiRes rows is almost twice as high.

The HiRes pixels behave similarly to the array pixels. The dark current was, however, 60% higher for the HiRes pixels than for the normal array pixels. This is attributed to the larger perimeter and area of the HiRes photodiodes [17]. The temporal noise was slightly lower due to the larger diode capacitance in the HiRes pixels.
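The doubling per 10 °C quoted above corresponds to the usual exponential temperature scaling of dark current; as a worked extrapolation from the 0.35 LSB/ms figure at 60 °C,

I_\mathrm{dark}(T) \;\approx\; I_\mathrm{dark}(60\,^{\circ}\mathrm{C}) \cdot 2^{(T-60)/10}, \qquad
I_\mathrm{dark}(40\,^{\circ}\mathrm{C}) \approx 0.09\ \mathrm{LSB/ms}, \qquad
I_\mathrm{dark}(80\,^{\circ}\mathrm{C}) \approx 1.4\ \mathrm{LSB/ms},

so even at the high end of this range the leakage during the 400 µs dark-FPN integration stays below one LSB.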

Fig. 11: Scatter component (top) and direct component (bottom) of a scan of a piece of wood.

Besides the extensive image characterisation, all digital operations have been tested and work as expected. Fig. 11 shows the scatter component and the direct component of a scan of a piece of wood. The knots become dark in the scatter image because the laser light does not move along the natural fibres in the wood when a knot is scanned. An example of a 3-D scan of a BGA is shown in Fig. 12. The image shown has been rotated, median filtered, zoomed in (it shows only a small part of the scan, while each 3-D profile is originally 1536 range values wide), and merged with the greyscale image to better show details. The axes denote pixels, and one pixel here corresponds to 10 µm.

Fig. 12: Zoomed-in BGA 3-D scan.

6 Comparison With Other Sensors

Compared to the earlier MAPP2500 sensor, see [5], the area part of the presented sensor has three times higher horizontal resolution, and the HiRes rows have six times higher resolution. The pixel readout speed is much higher, 4 Gpixels/s compared to 288 Mpixels/s, and an A/D conversion is twice as fast. Furthermore, the noise is much lower, owing to the use of passive pixels in MAPP2500. The digital part in the presented sensor is clocked more than four times faster than the digital part in MAPP2500, which ran with an 8 MHz clock. With three times as many columns, and with the increased functionality of the processor core meaning that many operations require fewer clock cycles, the processing capability is 12 to 24 times higher in the presented sensor.
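Spelling out the factors behind this estimate (taking the clock ratio as roughly four and the reduction in clock cycles per operation as up to about a factor of two, as implied by the quoted range):

3\ (\text{columns}) \times 4\ (\text{clock}) \approx 12, \qquad 12 \times 2\ (\text{fewer cycles per operation}) \approx 24,

which is where the stated range of 12 to 24 times comes from.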

Due to the unique architecture, a comparison with other sensors is difficult, since the only sensors known to the authors that offer similar flexibility are the previous-generation sensors, i.e., MAPP2200 and MAPP2500. However, Table 2 compares the presented sensor with a high-speed sensor without processing, a low-resolution sensor with bit-serial processors, a 3-D and greyscale sensor, and a 3-D and binary 2-D sensor.

Compared to state-of-the-art high-speed CMOS image sensors without processing, but with an ADC on-chip, the presented sensor has matching image performance. In [18] a high-speed 4.1 Mpixel sensor was presented. It features a 7 µm three-transistor active pixel and has a higher QE·FF than the sensor presented in this paper, which may be due to the use of a specialised CMOS sensor process with improved collection. When using soft reset this sensor gives slightly lower noise in darkness, 65 e- temporal noise and FPN combined, than the presented sensor, 74 e-, but when using hard reset the noise was considerably higher for the 4.1 Mpixel sensor, 103 e-. The 4.1 Mpixel sensor uses a 10-bit ADC and can run at 240 frames per second. This gives a row time of 2.41 µs, which is more than six times longer than the shortest row time possible with the presented sensor, 0.38 µs for thresholded images. It is, however, about as fast as when a 7-bit FPC conversion is used in the presented sensor, which takes 2.73 µs, and about twice as fast as an 8-bit FPC conversion, which takes 5.16 µs. The 4.1 Mpixel sensor has a conversion gain of 39 µV/e-, which is comparable to using the gain -3 setting on the presented sensor. With this conversion gain the use of an 8-bit conversion is motivated, since the quantisation noise is then lower than the noise in darkness of both sensors and only adds marginally to the total noise.
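To make the quantisation argument concrete, assume purely for illustration a 1 V input range for the 8-bit conversion together with the 39 µV/e- conversion gain mentioned above; the quantisation noise is then

\sigma_q = \frac{1\,\mathrm{V}/2^{8}}{\sqrt{12}} \approx 1.1\,\mathrm{mV} \approx 29\ \mathrm{e^-}\ \text{at}\ 39\,\mu\mathrm{V/e^-}, \qquad
\sqrt{65^{2} + 29^{2}} \approx 71\ \mathrm{e^-},

i.e., adding roughly 29 e- in quadrature to a dark-noise floor of 65-74 e- changes the total by less than about ten per cent.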

In [19] a low-resolution sensor with bit-serial processors was presented. Compared to the presented sensor it lacks circuitry for the row-global operations and feature extraction provided by the GLU, COUNT, and GOR. The sensor is targeted at video-rate applications; e.g., colour-processing tasks at 30 frames/s are mentioned, and each ADC is shared among four neighbouring columns, slowing down high-speed operations. However, it seems to be possible to use the sensor for 3-D laser triangulation at moderate speed. If the row addressing is flexible, it is then also possible to do multisensing where, e.g., 3-D, greyscale, and colour are captured.

In [20] a sensor for 3-D and 2-D greyscale is presented. This 3-D-VGA sensor uses a three-transistor pixel that is similar to the standard active pixel, with the exception of one extra vertical wire. External 8-bit ADCs were used for the 2-D greyscale data, and they limited the greyscale pixel rate to 4 Mpixels/s. The 3-D laser triangulation is performed using a fast time-domain approximative (TDA) readout approach. The use of 3-bit TDA ADCs enables the 3-D-VGA sensor to use external gravity-centre calculation to reach <0.1 sub-pixel accuracy.

Table 2: Comparison with other sensors

                           M12                          4.1M APS
Process                    0.35 µm                      0.35 µm
Resolution                 / 3072
Pixel pitch                9.5 µm / 4.75 µm             7 µm
Fill factor                60%                          43% (QE·FF)
CDS                        Yes                          Yes
ADC                        1-10 bits                    10-bit
FPN + temporal noise       74 e-                        65 e- / 103 e-
3-D profile width          1536                         No
3-D range rate             60 Mrangels/s, 7-bit         No
3-D range resolution       1/2-1/16 sub-pixel           No
3-D + grey + colour        Yes + additional features    No
Programmable processing    1536 bit-serial, 33 MHz      No

                           SIMD                         3-D-VGA
Process                    0.6 µm                       0.6 µm
Resolution
Pixel pitch                18 µm                        12 µm
Fill factor                ?                            30%
CDS                        Yes                          No
ADC                        Cyclic per 4 columns         3-bit TDA for 3-D
FPN + temporal noise       ?                            High
3-D profile width                                       480
3-D range rate             Moderate                     20 Mrangels/s
3-D range resolution       ?                            <0.1 sub-pixel
3-D + grey + colour        Probably                     No
Programmable processing    bit-serial, 20 MHz           No

                           3-D-RP
Process                    0.18 µm
Resolution
Pixel pitch                11.25 µm
Fill factor                23%
CDS                        No
ADC                        1-bit
FPN + temporal noise       High
3-D profile width          365
3-D range rate             137 Mrangels/s
3-D range resolution       0.2 sub-pixel
3-D + grey + colour        No
Programmable processing    No

In [21] a sensor for 3-D and binary 2-D is presented. It uses a row-parallel architecture and realises the very high 3-D range rate of 137 M range values per second. However, this comes at the expense of 24 transistors and a lot of metal routing in each pixel, which prohibits small pixels with a high FF. A range accuracy of 0.2 sub-pixel is obtained by the use of multisampling.

Both these 3-D sensors are intended for 3-D laser triangulation with a scanning mirror and a fixed scene and fixed camera setup. It is possible to use these sensors in a 3-D setup with a moving object and a fixed camera and laser, as in Fig. 1. However, since these sensors output one range value per row, contrary to one range value per column as in the presented sensor, they cannot simultaneously capture greyscale, or binary, line-scan data in this type of setup and are therefore not suitable for this kind of multisense imaging. The 3-D range rate of the 3-D-VGA is 20 M range values per second, which is three times lower than the maximum range rate of the presented sensor. The potential 3-D range rate of the 3-D-RP sensor is 137 M range values per second, which is more than twice as fast as the presented sensor. The 3-D profiles from the 3-D-VGA sensor contain 480 range values, and the 3-D-RP profiles 365, compared to 1536 in the presented sensor. The 3-D-VGA architecture does not permit CDS for the 3-D range finding and is sensitive to transistor and wire-capacitance mismatches. The 3-D-RP sensor also suffers from offsets from the pixels and has a low FF. This forces the use of a strong laser, which is not suitable for many machine vision applications due to high cost and eye-safety regulations. In [21] a 300 mW laser was not sufficient for running the sensor at the highest possible speed.

To conclude the comparison, the presented sensor is the only one known to the authors that is capable of high-resolution, high-speed multisense imaging.

7 Conclusion

An innovative multiresolution general-purpose high-speed machine vision sensor with on-chip image-processing capabilities has been presented.

The computational power of the SIMD processor array, 100 GOPS, together with the high-speed image sensor part, makes the continuous internal pixel-analysing rate as high as 4 Gpixels/s for simple machine vision applications. The sensor can simultaneously capture colour, greyscale, internal material light scatter, and 3-D profiles at very high speed. For instance, the sensor can deliver range data, scatter data, colour data for each channel, and greyscale data 7900 times per second. When running only 3-D laser triangulation, a data rate of more than 20,000 profiles/s can be achieved when delivering 1536 range values per profile with 8 bits of range resolution.

The work presented shows that it is possible to integrate high-performance programmable digital image-processing circuits with a high-speed CMOS image sensor and still achieve low noise. Measurements show a photo-response non-linearity of 0.09% RMS, a global offset FPN of 0.15 LSB RMS, a temporal noise of 900 µV RMS, and a dynamic range of 62 dB.

Acknowledgment

The authors wish to acknowledge Calle Björnlert, Dr. J. Jacob Wikner, Jürgen Hauser, and Magnus Engström for implementing special parts of the design. Ola Petersson is also acknowledged for his work on the sensor evaluation board.

References

[1] R. Forchheimer and A. Ödmark, A Single Chip Linear Array Picture Processor, Proc. of SPIE, Vol. 397, Jan.

[2] K. Chen, M. Afghahi, P-E. Danielsson, and C. Svensson, PASIC: A Processor-A/D converter-Sensor Integrated Circuit, Proc. IEEE Int. Symposium on Circuits and Systems (ISCAS 90), Vol. 3, May.

[3] R. Forchheimer, P. Ingelhag, and C. Jansson, MAPP2200, a second generation smart optical sensor, Proc. of SPIE, Vol. 1659, Feb.

[4] E. R. Fossum, CMOS Image Sensors: Electronic Camera-On-A-Chip, IEEE Trans. Electron Devices, Vol. 44, No. 10, Oct.

[5] M. Gökstorp and R. Forchheimer, Smart Vision Sensors, Proc. IEEE Int. Conference on Image Processing (ICIP 98), Vol. 1, Oct.

[6] M. Johannesson, Fast, programmable, sheet-of-light range finding using MAPP2200, Proc. of SPIE, Vol. 2273, July.

[7] E. Åstrand, Automatic Inspection of Sawn Wood, Ph.D. thesis No. 424, Linköping University, Sweden.

[8] X. Aragonès, J. L. González, and A. Rubio, Analysis and Solutions for Switching Noise Coupling in Mixed-Signal ICs, Kluwer Academic Publishers.

[9] H. Tian, B. Fowler, and A. El Gamal, Analysis of Temporal Noise in CMOS Photodiode Active Pixel Sensor, IEEE J. Solid-State Circuits, Vol. 36, No. 1, Jan.

[10] J. J. Wikner, Studies on CMOS Digital-to-Analog Converters, Ph.D. thesis No. 667, Linköping University, Sweden.

[11] J. H. Shieh, M. Patil, and B. J. Sheu, Measurement and Analysis of Charge Injection in MOS Analog Switches, IEEE J. Solid-State Circuits, Vol. 22, No. 2, April.

[12] B. Razavi and B. A. Wooley, Design Techniques for High-Speed, High-Resolution Comparators, IEEE J. Solid-State Circuits, Vol. 27, No. 12, Dec.

[13] R. Johansson, L. Lindgren, J. Melander, and B. Möller, A Multi-Resolution 100 GOPS 4 Gpixels/s Programmable CMOS Image Sensor for Machine Vision, in 2003 IEEE Workshop on CCDs and Advanced Image Sensors, Elmau, Germany, May 2003.

[14] L. Lindgren, Elimination of Quantization Effects in Measured Temporal Noise, Proc. IEEE Int. Symposium on Circuits and Systems (ISCAS 04), Vol. 4, May 2004.

[15] B. Fowler, A. El Gamal, D. Yang, and H. Tian, A Method for Estimating Quantum Efficiency for CMOS Image Sensors, Proc. of SPIE, Vol. 3301, April.

[16] C-S. S. Lin, B. P. Mathur, and M-C. F. Chang, Analytic Charge Collection and MTF Model for Photodiode-Based CMOS Imagers, IEEE Trans. Electron Devices, Vol. 49, No. 5, May.

[17] I. Shcherback, A. Belenky, and O. Yadid-Pecht, Active Area Shape Influence on the Dark Current of CMOS Imagers, Proc. of SPIE, Vol. 4669, April.

[18] A. Krymski, N. Bock, N. Tu, D. Van Blerkom, and E. Fossum, A High-Speed, 240-Frames/s, 4.1-Mpixel CMOS Sensor, IEEE Trans. Electron Devices, Vol. 50, No. 1, Jan.

[19] H. Yamashita and C. Sodini, A CMOS Imager with Bit-Serial Column-Parallel PE Array, ISSCC Dig. Tech. Papers, Vol. 44, Feb.

[20] Y. Oike, M. Ikeda, and K. Asada, Design and Implementation of Real-Time 3-D Image Sensor With 640 x 480 Pixel Resolution, IEEE J. Solid-State Circuits, Vol. 39, No. 4, April 2004.

[21] Y. Oike, M. Ikeda, and K. Asada, A High-Speed 3-D Range-Finding Image Sensor Using Row-Parallel Search Architecture and Multisampling Technique, IEEE J. Solid-State Circuits, Vol. 40, No. 2, Feb.

Leif Lindgren was born in Uppsala, Sweden. He received the M.Sc. degree in computer science and engineering from Linköpings universitet, Sweden. He is currently pursuing the Ph.D. degree at the Electronic Devices division, Department of Electrical Engineering, Linköpings universitet, Sweden. During 1998, he was a visiting researcher at Stanford University, Stanford, CA. In 1999, he joined IVP Integrated Vision Products AB, Sweden, where he developed smart CMOS image sensors. In 2004, he joined Synective Labs AB, Sweden, where he works with high-performance computing based on FPGA super clusters and has worked on the world's fastest commercial IC photomask writer for the 65 nm design node. His research interest is smart CMOS image sensors. Mr. Lindgren received the IEEE ISCAS Sensory Systems Track Best Paper Award.

Johan Melander (M'98) was born in Örebro, Sweden. He received the M.Sc. degree in computer science and engineering from Linköpings universitet, Sweden, and the Licentiate of Engineering degree from the Electronics Systems division, Department of Electrical Engineering, Linköpings universitet, Sweden. Since 1997, he has been developing smart CMOS image sensors and CMOS camera platforms at SICK IVP AB, Sweden.

Robert Johansson was born in Nyköping, Sweden. He received the M.Sc. degree in applied physics and electrical engineering from Linköpings universitet, Sweden. In 2000, he joined IVP Integrated Vision Products AB, Sweden, where he developed smart CMOS image sensors. In 2003, he joined Micron Technology Inc. at their imaging design center in Oslo, Norway.

Björn Möller was born in Eskilstuna, Sweden. He received the M.Sc. degree in applied physics and electrical engineering from Linköpings universitet, Sweden. In 2000, he joined IVP Integrated Vision Products AB, Sweden, where he developed smart CMOS image sensors. In 2005, he joined Metrima AB, Sweden.


More information

IT IS widely expected that CMOS image sensors would

IT IS widely expected that CMOS image sensors would IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, VOL. 14, NO. 1, JANUARY 2006 15 A DPS Array With Programmable Resolution and Reconfigurable Conversion Time Amine Bermak, Senior Member,

More information

A 1Mjot 1040fps 0.22e-rms Stacked BSI Quanta Image Sensor with Cluster-Parallel Readout

A 1Mjot 1040fps 0.22e-rms Stacked BSI Quanta Image Sensor with Cluster-Parallel Readout A 1Mjot 1040fps 0.22e-rms Stacked BSI Quanta Image Sensor with Cluster-Parallel Readout IISW 2017 Hiroshima, Japan Saleh Masoodian, Jiaju Ma, Dakota Starkey, Yuichiro Yamashita, Eric R. Fossum May 2017

More information

NON-LINEAR DARK CURRENT FIXED PATTERN NOISE COMPENSATION FOR VARIABLE FRAME RATE MOVING PICTURE CAMERAS

NON-LINEAR DARK CURRENT FIXED PATTERN NOISE COMPENSATION FOR VARIABLE FRAME RATE MOVING PICTURE CAMERAS 17th European Signal Processing Conference (EUSIPCO 29 Glasgow, Scotland, August 24-28, 29 NON-LINEAR DARK CURRENT FIXED PATTERN NOISE COMPENSATION FOR VARIABLE FRAME RATE MOVING PICTURE CAMERAS Michael

More information