Putting It All Together: Computer Architecture and the Digital Camera


This book covers many topics in circuit analysis and design, so it is only natural to wonder how they all fit together and how they are used in practice. Although we have attempted to highlight the answers to these questions through the various Technology Briefs and Application Notes, we felt it would be helpful to pull these concepts together and show how the various disciplines of electrical engineering combine to make the systems that power your world. To this end, we have chosen the digital camera as a platform with which to illustrate where the book's concepts become relevant in standard practice.

What Do Cameras, Phones, and Computers Have in Common?

The increasing pace of technological miniaturization (Technology Briefs 1 and 2 on pages 10 and 20, respectively) is enabling ever more functions to be packaged into single products. Digital cameras, mobile phones, and personal digital assistants (PDAs) are examples of how extremely small sensing, computation, and communication elements enable the creation of indispensable technology. If you think about it a bit, you may begin to see that all of these devices are actually very similar. Even to non-engineers, the line between a cell phone, a computer, a PDA, and a digital camera is starting to blur. With extreme miniaturization, even the distinction between environment and computer blurs (Technology Brief 19 on page 442).

In broad terms, all engineered systems perform the functions shown in Fig. P-1. The system converts energy to and from its environment into electrical signals; this is known as transduction. As we saw in Technology Brief 12 on page 269, when environmental signals are converted into electrical signals, the process is known as sensing; when the conversion is from the system's electrical signals into the environment, it is usually termed actuation.
The internal manipulation of the electrical signals (or, rather, of the information contained in the signals) by the system is known as computation. Some of this information must be stored for later use; this is known generally as storage, or sometimes as memory. All of the energy required to perform these transduction, computation, and storage functions must come from a source of energy. Some of the signals sensed and generated by the system are meant explicitly for communicating with other systems; this is known as communication.

Figure P-1: A schematic of the functions and energy flow in a system from an electrical-engineering perspective. This very general model applies equally well to a digital camera, a laptop computer, or a human.

Figure P-2: A conceptual diagram of a digital camera. Light enters the camera through a series of lenses and strikes the CMOS or CCD imaging sensor. This sensor converts the light signals to electrical signals and sends the information to the ASIC, which modifies the image data as necessary and then stores it or communicates it to the user. An LED display shows the user the image striking the sensor and allows data entry through a touchscreen.

Let us consider two example systems: a mobile phone and you. A mobile phone has a set of sophisticated integrated circuits (ICs, as described in Technology Brief 7 on page 135) that sense an electromagnetic signal from the environment (radio waves, Technology Brief 8 on page 158) through an antenna, then perform many computations on that signal. The resulting electrical signal is used to actuate a device that produces pressure waves in the air (i.e., sound). Some of this information, like voicemail or images, can be stored in memory (Technology Briefs 9 and 10 on pages 163 and 190, respectively). All of these functions are possible only if a battery is connected to provide energy (Technology Brief 11 on page 199). Modern mobile phones transduce many other signals as well: inertial sensors detect motion and position (Technology Brief 12 on page 269), radio waves provide very accurate position through the GPS system, and onboard optical sensors allow the user to take pictures with the phone.

You are not so different (but vastly more sophisticated, of course). You transduce signals from the environment through your sensory organs, which generate signals for your neural system. Your neural system processes these signals and performs computational operations. These computations result in transductions back into the environment through your limbs. None of this is possible without the energy stored in your body as fat and carbohydrates.
You often use the same transducers that you use for sensing to communicate with other systems (i.e., your friends). This analogy between man and machine is significant. As we discuss the digital camera example in the next few pages, it is well worth asking to what extent the same concepts we apply to analyzing electrical systems can be applied to biological ones (Technology Brief 22 on page 536). With this in mind, let us take a look at the digital camera.

The Digital Camera

Figure P-2 shows a conceptual mechanical view of where things are placed in a hypothetical camera; the specifics vary from manufacturer to manufacturer. Figure P-3 shows a simplified block diagram of the various components of a hypothetical digital camera; for each component, the relevant Technology Brief or chapter section is highlighted in red.

Figure P-3: A functional block diagram of the various components of the digital camera. Red arrows indicate analog electrical signals, while blue arrows indicate digital electrical signals. Broken arrows indicate transduction occurring between a component and the outside environment. Relevant sections of the book are highlighted in red text.

A digital camera (like a mobile phone or a PDA) is essentially a computer with specialized input/output capabilities; a detailed version of Fig. P-3, including connection specifications, processor details, and the like, could be termed the computer architecture.

Light enters the camera through an optical system that consists of one or more motorized lenses and mirrors. There are many subtleties to how these multiple-lens systems are built that affect both the quality of the image and the user interface. The optical system projects the light onto an imaging sensor. These sensors are custom-built integrated circuits with light-sensitive pixels (numbering in the millions of pixels, or megapixels, in modern cameras). The imaging chips convert the light signal into electrical signals. These electrical signals are then run through an analog-to-digital converter (ADC) and sent to the computational element, an application-specific integrated circuit (ASIC). The ASIC can perform various operations on the digital image data. The data can be altered to perform color correction or other operations. Data can be stored locally (usually on a hard drive or flash), communicated over a wire to a computer, communicated wirelessly, and/or displayed on the camera's LED screen for the user to see.

Transduction: Optics and Motion Compensation (Technology Brief 12, Section 6-7)

Various configurations of lenses, mirrors, and apertures guide the light from the field of view onto the imaging sensor. Most of that detail is beyond the scope of this overview. Of note are cameras with motion compensation.
During the motion-compensation operation, an on-board microfabricated inertial sensor (an accelerometer; see Technology Brief 12 on page 269) senses the changes in acceleration of the camera. The vibration information is then used to compute control signals that are sent to a microfabricated actuator (Technology Brief 12 on page 269), which moves the optical lens system in real time to counteract the artifacts caused by vibration or high-frequency motion of the camera (i.e., shaking or vehicle motion).

Transduction: Image Sensor

The image sensor is a crucial component of the digital camera. This device is a chip, usually manufactured in silicon by a process similar to the one used to make ICs (Technology Brief 7 on page 135). When an image is focused onto the chip, millions of individual pixels record the light levels at their locations and encode those levels into electrical signals. Currently, there are two competing sensor technologies: charge-coupled devices (CCD) and complementary metal-oxide-semiconductor (CMOS) detectors.

In CCD chips, each light-sensitive pixel is a capacitor whose two conducting elements are built from silicon on the chip using IC fabrication (Technology Brief 7 on page 135). Its operation is governed by the same physical property that makes solar cells possible: the photoelectric effect. Because silicon is a semiconductor, photons striking the silicon surface generate electrons. As photons hit each silicon capacitor, the generated electrons build up a charge in the capacitor proportional to the intensity of the light at that location. To take a picture, the CCD array is exposed to an image for a set period of time (the exposure time). Once this is done, the charge in each pixel is dumped, in sequence, through an amplifier. This amplifier generates a voltage output proportional to the amount of charge moving out of each capacitor; it is, in effect, a current-to-voltage amplifier (Example 4-2).
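The charge-accumulation and readout process just described can be sketched numerically. The model below is a simplified illustration, not a real device characterization: the sensitivity and gain values are made-up placeholders, and charge is taken to grow linearly with intensity and exposure time, as in the text.

```python
# Simplified CCD readout model (illustrative values only).
# Charge accumulates in each pixel capacitor in proportion to light
# intensity and exposure time; readout then dumps each charge packet,
# in sequence, through a current-to-voltage amplifier.

def ccd_readout(intensities, exposure_s, sensitivity=1e-9, gain=1e6):
    """Return one voltage pulse per pixel (hypothetical parameters).

    intensities -- light intensity at each pixel (arbitrary units)
    exposure_s  -- exposure time in seconds
    sensitivity -- coulombs accumulated per (intensity unit x second)
    gain        -- amplifier gain, volts out per coulomb dumped
    """
    pulses = []
    for intensity in intensities:
        charge = intensity * exposure_s * sensitivity  # Q proportional to light * time
        pulses.append(charge * gain)                   # V = gain * Q
    return pulses

# A 4-pixel line exposed for 1/100 s: brighter pixels give taller pulses.
print(ccd_readout([0.0, 1.0, 2.0, 4.0], exposure_s=0.01))
```

Doubling the intensity (or the exposure time) doubles the pulse height, which is why overexposure saturates a real CCD pixel once its capacitor fills.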
The output of the amplifier is a sequence of voltage pulses; each pulse represents the amount of charge on one of the CCD pixels. This voltage signal must be sent through an analog-to-digital converter before it can be processed by the ASIC.

CMOS sensors (Section 4-11) are also made from silicon using IC fabrication techniques. However, the standard CMOS sensor has CMOS transistors built into each pixel. Recall from Section 5-7 that MOSFET transistors have associated parasitic capacitances; when light strikes the silicon, these capacitances charge up as in a CCD. Unlike in a CCD, the transistors sample the charge locally at each pixel and generate a digital 1 or 0 depending on the amount of charge. This eliminates the need for the external ADC used with CCDs. The transistors also enable row and column addressing of the pixels (as in the dynamic memory of Technology Brief 9 on page 163), which speeds up the device (because there is no need to wait for a sequence of voltage pulses as in a CCD). This technology became commercially viable only as transistor sizes decreased below 0.25 μm (Technology Briefs 1 and 2 on pages 10 and 20, respectively). CMOS sensors are increasingly displacing CCDs in the commercial market, although they still suffer from noise and dynamic-range limitations compared with CCDs.

To obtain color images, different pixels are coated with different filter materials that pass only certain colors. For example, the common Bayer filter method coats each pixel with either a red, green, or blue filter. These red, green, and blue pixels are then tiled together, and the ASIC reconstructs the full-color image from the different color pixels. More sophisticated methods include using three separate sensor arrays (one for each color), using more color-filter types, or even performing three exposures in succession onto the same sensor chip with a different color filter each time.
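The Bayer sampling just described can be made concrete with a short sketch. The code below assumes a hypothetical 2x2 RGGB tiling (the most common Bayer arrangement, though the text does not specify one) and shows how each sensor pixel records only the channel its filter passes; the ASIC must later interpolate the two missing channels at every pixel.

```python
# Bayer color filter array sketch (hypothetical 2x2 RGGB tiling).
# Each pixel on the sensor sees the scene through exactly one filter,
# so only one of the three color channels survives at each location.

def bayer_color(row, col):
    """Which filter covers pixel (row, col) in an RGGB mosaic."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def mosaic(rgb):
    """Keep only the filtered channel at each pixel, as on the sensor."""
    channel = {'R': 0, 'G': 1, 'B': 2}
    return [[rgb[r][c][channel[bayer_color(r, c)]]
             for c in range(len(rgb[0]))] for r in range(len(rgb))]

# A 2x2 patch of a uniform scene with RGB value (10, 20, 30):
# each sensor pixel keeps a single color sample.
scene = [[(10, 20, 30)] * 2 for _ in range(2)]
print(mosaic(scene))  # -> [[10, 20], [20, 30]]
```

Note that half of the pixels are green; the Bayer pattern deliberately oversamples green because human vision is most sensitive to it, which makes the ASIC's interpolation of the missing channels less visible.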
Transduction: LED Screen (Technology Briefs 5 and 6)

Most commercial digital cameras now display the image seen through the sensor on a miniature LED display (Technology Briefs 5 and 6 on pages 96 and 106, respectively). The display allows the user to choose camera settings, play back movies or images, modify the images, or control communications. The screen is driven by an ASIC.

Computation: ASIC (Technology Brief 7, Sections 4-11, 5-7, and 11-10)

The application-specific integrated circuit (ASIC) is the brains of the camera. The ASIC chip usually contains a microprocessor, on-chip memory, ADC and DAC circuits, and communication circuits. Like any IC, the ASIC chip is made with silicon fabrication processes (Technology Brief 7 on page 135). The ASIC contains thousands or millions of MOSFET transistors (Sections 4-11 and 5-7), and increasingly, mixed-signal circuits (Section 11-10) are used for some of the ADC and DAC functions (such as the ADC required for the CCD, described earlier). These circuits are designed and analyzed in SPICE simulators such as Multisim 10.

Once the image has been converted to digital format (by the ADC), the ASIC performs computations on it. These include interpolating colors from the different color pixels, smoothing, color correction of the data, and compression (for example, compressing the image into the common JPEG format). The ASIC also generates the signals for communicating image data to the LED screen, over an antenna (Technology Brief 8 on page 158), or over a transmission line to a computer (Sections 5-7 and 7-11).

Storage: Primary Memory and Secondary Memory (Technology Briefs 6, 7, and 21)

As you might expect, the microprocessor that runs the camera needs local memory to perform its computational operations. Moreover, image data must be stored somewhere for later use. The first requirement is met by primary memory (Technology Brief 7 on page 135); most cameras have SRAM built into the microprocessor ASIC to act as a cache, plus stand-alone SRAM to store larger data sets. Once images are ready to be stored, they are copied into a secondary storage device by the ASIC. Before the advent of high-density non-volatile memory like Flash cards (Technology Brief 7 on page 135), many cameras used hard drives or optical (CD/DVD) drives (Technology Brief 21 on page 509). Flash cards have made huge inroads into the camera market precisely because they have no moving parts, are portable and small, and are of sufficient density to store many high-quality pictures.
High-performance cameras and cameras that record moving video continue to use optical drives and/or hard drives, although this is likely to change as storage technologies evolve (Technology Briefs 6 and 7 on pages 106 and 135, respectively).

Communication: Wired and Wireless Interfaces (Technology Brief 8, Sections 4-7, 5-7, 7-11, and 9-8)

Most digital cameras can communicate with external computers using a serial port. A serial port is the name given to the collection of wires (usually two or three) through which two computers can exchange digital information. Several standard serial-port types exist, each with its own protocol for exchanging information; the Universal Serial Bus (USB) and the FireWire standard are quite common. To communicate, the camera's ASIC generates digital pulses that pass through a buffer (Section 4-7) with a high voltage-to-current gain. The buffer is connected to the serial wires and sends the signal along 10 to 20 cm of wire to the computer; this wire behaves like a transmission line (Sections 5-7 and 7-11). The signal received at the computer's port is processed by the computer's serial driver IC, which can then respond. Because standard serial ports can exchange information at 60 MB/s, the camera can download image information rather quickly this way; for a more in-depth discussion of data-transmission limits as they relate to circuit characteristics, see Technology Brief 17 on page 384.

Cameras increasingly communicate wirelessly, either because they form part of a mobile phone or because they have their own antennas. Wireless standards, other than those used for mobile phones, vary somewhat. For our purposes, it is sufficient to know that the camera's ASIC outputs the information digitally to a radio circuit, which converts the digital bits into ac signals that are launched into free space by the camera's antenna (Technology Brief 8 on page 158, Section 9-8).
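The 60 MB/s serial-port figure quoted above supports a quick back-of-the-envelope calculation. The sketch below estimates ideal transfer times for a few file sizes; the sizes themselves are illustrative assumptions (a small JPEG, a raw still, a short video clip), and real transfers are slower because of protocol overhead.

```python
# Back-of-the-envelope transfer times at the 60 MB/s serial rate
# quoted above. File sizes are illustrative assumptions; protocol
# overhead, ignored here, makes real transfers somewhat slower.

def transfer_time_s(size_mb, rate_mb_per_s=60.0):
    """Ideal (overhead-free) transfer time in seconds."""
    return size_mb / rate_mb_per_s

for size in (3.0, 12.0, 300.0):  # small JPEG, raw still, short video clip
    print(f"{size:6.1f} MB -> {transfer_time_s(size):.2f} s")
```

Even a 300 MB clip moves in about five seconds at this rate, which is why wired download remained practical as image files grew.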
Energy: Batteries (Technology Brief 11)

Energy sources have not kept pace with miniaturization (Technology Brief 11 on page 199). Most of the weight in mobile phones, PDAs, and digital cameras is associated with the batteries. Despite the development of extremely low-power circuits, energy-density and power-density requirements (Technology Brief 11 on page 199) continue to place fundamental limitations on miniaturization. Energy technologies in the form of fuel cells, advanced supercapacitors, miniature combustion engines, and even energy scavenging from the environment are being pursued aggressively by researchers all over the world.