Image acquisition

Digital images are acquired either by direct digital acquisition (digital still/video cameras) or by scanning material acquired as analog signals (slides, photographs, etc.). In both cases, the digital sensing element is one of the following:
- Single sensor
- Line array
- Area array

Indirect imaging techniques, e.g., MRI (Fourier), CT (backprojection):
- physical quantities other than intensities are measured
- computation leads to a 2-D map displayed as intensity
Single sensor acquisition
Linear array acquisition
Array sensor acquisition

- Irradiance incident at each photo-site is integrated over time
- The resulting array of intensities is moved out of the sensor array and into a buffer
- Quantized intensities are stored as a grayscale image

Two types of quantization:
- spatial: limited number of pixels
- gray-level: limited number of bits to represent intensity at a pixel
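The two quantization steps above can be sketched in a few lines of NumPy; the 8x8 grid size, the random irradiance values, and the 8-bit depth are made-up illustration choices, not properties of any particular sensor:

```python
import numpy as np

# Hypothetical sensor output: an 8x8 grid of integrated irradiances in [0, 1).
# The grid size is the spatial quantization; the values are made up.
rng = np.random.default_rng(0)
irradiance = rng.random((8, 8))

# Gray-level quantization: represent each intensity with 8 bits (256 levels).
bits = 8
levels = 2 ** bits
gray = np.floor(irradiance * levels).astype(np.uint8)
```

A real pipeline quantizes in the ADC during read-out; the arithmetic is the same idea.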
Spatial resolution
Grayscale resolution
Sensors - CCD & CMOS

CCD (charge-coupled device)
- Quantum efficiency (QE) of 70% (film has 2% QE)
- Mature technology; in development since 1969
- Uses photodiodes in conjunction with capacitors to store charge
- Charge is converted to voltage at a limited number of nodes
- Varied architectures used for read-out
- Most of the pixel area is light sensitive: good fill factor

CMOS (complementary metal-oxide-semiconductor)
- QE of 19-26%
- Whole systems can be integrated on the same device: camera-on-chip
- Standard semiconductor device manufacturing process
- Each pixel has read-out electronics, amplifiers, noise correction, and ADC
- Consumes far less power than CCDs
- Needs more room for electronics; fill factor generally not as good as CCDs
CCD architectures

CCDs function in two stages: exposure and read-out.
- Photons are collected and charge is accumulated during exposure
- Area arrays use vertical and horizontal shift registers for read-out
- In some architectures, charge is transferred to an inactive/opaque region before read-out

Linear array
- Pixel intensities are read sequentially

Full frame transfer
- The entire pixel area is active
- Time between exposures is significant
- Needs a mechanical shutter
CCD architectures

Frame transfer
- Needs 2x the optically active area, and thus is larger and costlier
- Half of the array (for storage) is masked
- Shutter delay is smaller than in full frame transfer

Interline transfer
- Charge is shifted to an adjacent opaque area
- Subsequently shifted row-wise to a horizontal shift register
- Complex design (requires micro-mirrors or microlenses for good optical efficiency)
Image formation

- Both CCD and CMOS sensors are monochromatic
- Color images are acquired using color filters overlaid on the sensor
- The intensity measured at a pixel is

  c_i = ∫ f_i(λ) g(λ) x(λ) l(λ) dλ + η_i,  i = 1, ..., k

  where i indexes the distinct color channels sampled at each location,
  f_i(λ) is the spectral transmittance of the color filter,
  g(λ) is the sensitivity of the sensor,
  x(λ) is the spectral reflectance of the imaged surface,
  l(λ) is the spectral power density of the illuminant, and
  η_i is measurement noise
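The measurement integral can be approximated as a Riemann sum over a sampled wavelength axis. In the sketch below, all four spectra (Gaussian filter transmittances, flat sensor sensitivity, flat reflectance, flat illuminant) are made-up placeholders, not real device data, and noise is omitted:

```python
import numpy as np

# Wavelength axis in nm, sampled every 10 nm over the visible range.
lam = np.arange(400, 701, 10, dtype=float)
dlam = 10.0

def gauss(center, width):
    """Made-up Gaussian transmittance curve centered at `center` nm."""
    return np.exp(-((lam - center) ** 2) / (2 * width ** 2))

# Illustrative spectra: f_i for three color filters, flat g, x, l.
f = {"r": gauss(600, 30), "g": gauss(540, 30), "b": gauss(460, 30)}
g_sens = np.ones_like(lam)            # sensor sensitivity g(λ)
x_refl = np.full_like(lam, 0.5)       # surface reflectance x(λ)
l_illum = np.ones_like(lam)           # illuminant power density l(λ)

# c_i ≈ Σ f_i(λ) g(λ) x(λ) l(λ) Δλ  (noise term omitted)
c = {ch: float(np.sum(f[ch] * g_sens * x_refl * l_illum) * dlam) for ch in f}
```

Each c value is one channel's response for a single pixel; a real sensor performs this integration physically during exposure.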
Spectral response of common illuminants Source: http://www.ni.com/white-paper/6901/en/
Multiple sensors

- To acquire a 2-D color image, multiple CCDs are used to acquire separate color bands
- A dichroic prism is used to split the incoming irradiance into narrow-band beams
- Red, green, and blue beams are directed to separate optical sensors
- Issues: cost, weight, registration

Beam splitter in action
Single sensor acquisition

- To avoid the cost and complexity associated with multiple-sensor acquisition, most color digital cameras use a single sensor
- Each pixel is overlaid with a color filter such that only one color channel is acquired at a particular pixel location
- The Bayer array is the most common color filter array
- Green is sampled at twice the density of red and blue, since the human visual system (HVS) is more sensitive in the green region of the spectrum
- The quincunx sampling arrangement ensures that aliasing in the green channel is least along the horizontal and vertical directions
- The full-color image is recovered in a post-processing stage known as demosaicking
Direct color imaging

- The Foveon X3 sensor captures colors at different depths at the same spatial location
- The increased sampling density leads to much better spatial resolution
- The spectral sensitivity functions at the different layers have substantial overlap
- Color separation is a major issue for such sensors
Digital camera pipeline

Lens assembly
- IR blocking (hot mirror)
- Anti-aliasing: blurs to increase spatial correlation among color channels, which helps with demosaicking

Focus control
- Active auto-focus systems use IR emitters to estimate distance
- A passive method dynamically adjusts the focus setting to maximize high-frequency energy

Exposure control
- Good contrast across the image by manipulating aperture size and exposure time
- Prevents over- and under-exposed images
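The passive focus criterion can be sketched as a Laplacian-based measure of high-frequency energy; the checkerboard test image and the 2x2 blur used to simulate defocus are made up for illustration:

```python
import numpy as np

def focus_measure(img):
    """Variance of a discrete Laplacian response: a proxy for
    high-frequency energy. A passive autofocus loop would adjust the
    lens to maximize this value."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(np.var(lap))

# Sharp checkerboard vs. a defocused (2x2 local mean) version of it.
sharp = np.indices((32, 32)).sum(axis=0) % 2 * 1.0
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(np.roll(sharp, 1, 0), 1, 1)) / 4.0
```

The sharp image scores strictly higher than the blurred one, which is the gradient a focus-search loop climbs.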
Digital camera pipeline

- Correction for lens distortion: barrel (fish-eye), pincushion (telephoto), vignetting (reduced brightness at the edges)
- Gamma correction to compensate for the nonlinearity of the sensor response (the opto-electronic conversion function)
- Compensation for dark current: capture an appropriate dark image and subtract it from the acquired image
- Lens flare (scattered light) compensation (mostly proprietary)
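Dark-frame subtraction and gamma correction are simple pointwise operations. A minimal sketch, assuming a power-law sensor response with a made-up gamma of 2.2 and randomly generated raw/dark frames:

```python
import numpy as np

# Hypothetical raw capture and a matching dark frame (same exposure
# time, shutter closed); both arrays are made up for illustration.
rng = np.random.default_rng(1)
raw = rng.uniform(0.1, 1.0, size=(4, 4))
dark = rng.uniform(0.0, 0.05, size=(4, 4))

# Dark-current compensation: subtract the dark frame, clamp to [0, 1].
corrected = np.clip(raw - dark, 0.0, 1.0)

# Gamma correction: invert an assumed power-law response (gamma = 2.2).
gamma = 2.2
out = corrected ** (1.0 / gamma)
```

Real cameras fold these steps into calibrated lookup tables rather than computing powers per pixel.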
Digital camera pipeline

- The HVS is remarkably adaptive; e.g., paper appears white under incandescent light or sunlight
- An imaging system simply integrates the spectral content of the irradiance; without color compensation, images appear unnatural and dissimilar to the viewed scenes
- White balancing algorithms are based on one of two philosophies:
  - Gray-world assumption: R' = k_r R, B' = k_b B, with k_r = G_mean / R_mean and k_b = G_mean / B_mean
  - Perfect reflector method: the brightest pixel corresponds to white, so R' = R / R_max, G' = G / G_max, B' = B / B_max
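The gray-world variant is a two-line scaling once the channel means are known. A sketch, with a made-up 4x4 image that has a deliberate reddish cast:

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale R and B so their means match G's.
    `img` is an (H, W, 3) float RGB array."""
    r_mean, g_mean, b_mean = img.reshape(-1, 3).mean(axis=0)
    out = img.copy()
    out[..., 0] *= g_mean / r_mean   # k_r = G_mean / R_mean
    out[..., 2] *= g_mean / b_mean   # k_b = G_mean / B_mean
    return out

# A flat scene with a color cast: R stronger, B weaker than G.
img = np.dstack([np.full((4, 4), 0.8),
                 np.full((4, 4), 0.4),
                 np.full((4, 4), 0.2)])
balanced = gray_world(img)
```

After balancing, all three channel means coincide, which is exactly the gray-world assumption.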
Digital camera pipeline

Bayer demosaicking
- Reconstructs the sparsely sampled signal to form a 3-color image
- Multitude of methods based on heuristics, properties of the HVS, and mathematical formulations
- Since the Bayer array is the most common, most algorithms are tailored specifically for it
- Effective algorithms use inter-channel correlation
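The simplest of the many methods is per-channel bilinear interpolation, sketched below for an assumed RGGB layout. This ignores inter-channel correlation, which is precisely what the more effective algorithms exploit:

```python
import numpy as np

def bilinear_demosaic(bayer):
    """Bilinear demosaicking of an RGGB Bayer mosaic: each missing
    sample is the mean of the available same-channel samples in its
    3x3 neighborhood (edge-replicated at the borders)."""
    h, w = bayer.shape
    out = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True      # R sites
    masks[0::2, 1::2, 1] = True      # G sites on R rows
    masks[1::2, 0::2, 1] = True      # G sites on B rows
    masks[1::2, 1::2, 2] = True      # B sites
    pad = lambda a: np.pad(a, 1, mode="edge")
    for c in range(3):
        samples = pad(np.where(masks[..., c], bayer, 0.0))
        weights = pad(masks[..., c].astype(float))
        num = sum(samples[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        den = sum(weights[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        out[..., c] = num / np.maximum(den, 1e-9)
    return out

# Sanity check: a flat gray scene must come back flat in every channel.
bayer = np.full((6, 6), 0.5)
rgb = bilinear_demosaic(bayer)
```

Bilinear interpolation produces the well-known zipper and false-color artifacts near edges, which motivates the correlation-based methods mentioned above.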
Digital camera pipeline

- The captured image is in the digital camera's color space; colors are not impulses at specific wavelengths
- The sensitivity functions of the camera's color sensors dictate the camera color space
- The camera-RGB image is transformed to one of many standard color spaces; most commonly, the transformation is camera-RGB → CIE XYZ
- The CIE XYZ space, defined by the CIE (Commission Internationale de l'Éclairage, the International Commission on Illumination), corresponds to the human visual subspace
- Many enhancement algorithms use non-RGB color spaces
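The camera-RGB → CIE XYZ step is a per-pixel 3x3 linear transform. The matrix below is purely illustrative; a real camera ships a calibrated matrix (often one per illuminant), not these numbers:

```python
import numpy as np

# Illustrative (made-up) camera-RGB -> CIE XYZ matrix.
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])

rgb = np.array([0.5, 0.4, 0.3])   # one camera-RGB pixel
xyz = M @ rgb                     # the same multiply is applied per pixel
```

For a whole (H, W, 3) image the transform is a single `img @ M.T`.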
Digital camera pipeline

- Removal of color artifacts due to demosaicking: algorithms based on the constant-hue assumption
- Sharpening: performed on the luminance component only
- Denoising: median filters, bilateral filtering, and thresholding
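Of the denoising options listed, the median filter is the simplest to sketch; the flat test image with a single impulse ("salt") pixel is made up for illustration:

```python
import numpy as np

def median3(img):
    """3x3 median filter with edge replication: a basic denoising step
    that removes impulse noise while preserving step edges."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    return np.median(stack, axis=0)

# Flat image with one impulse pixel: the median removes it entirely.
noisy = np.full((5, 5), 0.2)
noisy[2, 2] = 1.0
clean = median3(noisy)
```

In a camera pipeline this kind of filtering, like sharpening, is typically applied to the luminance component so chroma edges are not disturbed.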
Digital camera pipeline

Display
- Images are converted to a format appropriate for the display medium (sRGB for monitors, CMY/CMYK for printers)

Compression
- Most cameras offer flexible compression options; JPEG is standard in current models, with some JPEG2000

Storage
- Low-end cameras offer only JPEG images as output
- Some high-end point-and-shoot cameras and most SLRs allow retrieval of RAW images, which are unprocessed
- RAW images can be processed later on a PC without time and computational constraints