A Kalman-Filtering Approach to High Dynamic Range Imaging for Measurement Applications
IEEE Transactions on Image Processing, Vol. 21, No. 2, 2012
Eric Dedrick and Daniel Lau
Presented by Ran Shu, School of Electrical Engineering and Computer Science, Kyungpook National Univ.
Abstract
- High dynamic range imaging (HDRI) has received little attention in measurement applications
  » Needed when the dynamic range of a scene exceeds what the image sensor can capture in a single exposure
- Constructing radiance maps for measurement applications
  » Proposing a novel HDRI method based on pixel-by-pixel Kalman filtering
- Evaluating performance using a proposed objective metric
  » Presented experiments show a 9.4-dB improvement in signal-to-noise ratio and a 29% improvement in radiometric accuracy over a classic method
2/37
Introduction
- When the dynamic range of a scene exceeds what the camera can capture in a single exposure, over- and underexposed areas result
- Multiexposure techniques fuse several exposures into a single composite image with higher dynamic range
3/37
Previous multiexposure methods
- Using exposure time ratios and pixel value mappings between exposures
  » Obtaining a parametric camera response function; exposure fusion performed using a weighted average
- Using the reciprocity relation, exposure times, and a smoothness constraint
  » Constructing a nonparametric camera response function; exposure fusion performed using a weighted average
- Recovering the camera response function in the presence of camera white balancing
- Estimating radiance uncertainties
  » Based on statistics of pooled pixels
  » Performing exposure fusion by iteratively updating radiance estimates, using a weighting term based on the estimated noise variance
4/37
HDRI for infrared cameras and temperature measurement
- Using blackbodies of controlled temperatures for calibration
  » Finding the response function of a camera with an InSb image sensor using emissivities
  » Performing exposure fusion by computing the radiance of each pixel from the exposure with the longest exposure time at which the pixel is not saturated
- HDRI for thermographic applications
  » Recovering the inverse camera response function mapping pixel values to radiance
5/37
Proposed method
- Postulating an HDRI method targeting measurement applications, rooted in solid-state image sensor models
  » Weights used in exposure fusion are based on the noise present in the acquired exposures
  » Estimated uncertainty in the radiance estimates provides useful information to the application
- Presenting a new method of HDRI
  » The camera response function can vary independently across the image sensor, making the term pixel response function more appropriate
- Calibration procedure
  » Improving performance across the sensor array
  » Corrects pixelwise nonuniformity caused by the sensor array, measurement noise power, scene illumination, and optical vignetting
6/37
Calibration limitations
- Must remain unchanged between calibration and measurement
  » Illumination conditions
  » Camera parameters (focus and aperture size)
- Can change
  » Depth-dependent changes in illumination when the object distance differs too much
- Performance of the approach depends on controlled camera settings and environmental conditions
7/37
Achieving HDR
- Applying Kalman filtering independently at each pixel location across multiple exposures
- Demonstrating the usefulness of the proposed method
  » Introducing objective metrics to evaluate accuracy and performance
  » Comparing to classic HDRI techniques
8/37
Camera calibration
- Performing calibration using a spectrally flat white balance card
  » Correcting any fixed-pattern spatial nonuniformities due to illumination, optical vignetting, and sensor noise
- Not sensitive to the type of illuminant
  » The reflected radiant intensity of the white balance card and of the target reflector both scale identically with variations in illumination intensity
  » Reflectance measurements are therefore not sensitive to illuminant intensity
9/37
Selecting a camera with a linear response function
- Without noise, the response of each pixel is

    z = A T r + B    (1)

where z is the output of a particular pixel, T is the exposure time, r is the scene radiance at this pixel location, and A and B are parameters to be determined for each pixel through calibration
10/37
Spatial nonuniformities
- Due to lighting, vignetting fall-off, and sensor fixed-pattern noise

Fig. 1. Exposure of a uniform target taken using a 10-bit camera. Nonuniformities are evident.
11/37
Acquiring exposures of the calibration card
- Varying the exposure times allows A and B to be determined
  » A: pixel gain coefficients
  » B: pixel offset coefficients

Fig. 2. (a) Gain and (b) offset coefficients of the pixel response functions. (c) Variance of a sequence of 49 exposures taken with identical camera parameters.
12/37
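The per-pixel fit described above can be sketched as a simple linear regression; since the calibration card defines r = 1, Eq. (1) reduces to z = A·T + B. The exposure times, gain, offset, and noise level below are illustrative stand-ins, not the paper's data.

```python
import numpy as np

# Sketch of the per-pixel calibration fit: with the white balance card
# defining r = 1, the response z = A*T*r + B is linear in exposure time T.
# All numeric values here are hypothetical, chosen only for illustration.
rng = np.random.default_rng(0)

T = np.array([0.5, 1.0, 2.5, 5.0, 8.0])        # exposure times (ms)
true_A, true_B = 40.0, 12.0                     # one pixel's gain and offset
z = true_A * T + true_B + rng.normal(0.0, 0.1, T.size)  # noisy responses

# Least-squares fit of z = A*T + B; in practice this regression is
# repeated independently at every pixel location of the sensor array.
design = np.column_stack([T, np.ones_like(T)])
(A_hat, B_hat), *_ = np.linalg.lstsq(design, z, rcond=None)
```

Stacking the exposures into a 3-D array and vectorizing the regression over the pixel axes recovers the gain and offset maps of Fig. 2 in one pass.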
Two noise sources
- Fixed-pattern noise
  » Dark current and photo response nonuniformity
  » Corrected by the pixel gain and offset terms of the pixel response functions
- Zero-mean noise
  » Shot noise and read noise
  » Not uniform across the sensor; a few isolated pixels with large variances are suppressed
- Measurement noise power modeled for a particular pixel as

    R = C T r + D    (2)

where R is the measurement noise power and C and D are parameters determined for each pixel through linear regression; the calibration card defines r = 1
13/37
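The noise-power fit of Eq. (2) can be sketched the same way: the sample variance of repeated exposures at each exposure time stands in for R, which is then regressed against T (with r = 1 on the card). The coefficients and sample counts below are hypothetical.

```python
import numpy as np

# Sketch of fitting the measurement-noise-power model R = C*T*r + D
# at one pixel, with r = 1 on the calibration card. Illustrative values.
rng = np.random.default_rng(1)

T = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # exposure times
true_C, true_D = 0.5, 0.2

# Sample variance of many repeated exposures at each exposure time is
# the measured noise power R for this pixel.
R = np.array([np.var(rng.normal(0.0, np.sqrt(true_C * t + true_D), 5000))
              for t in T])

# Linear regression R = C*T + D, done per pixel in the real calibration.
design = np.column_stack([T, np.ones_like(T)])
(C_hat, D_hat), *_ = np.linalg.lstsq(design, R, rcond=None)
```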
Coefficients of the measurement noise power model

Fig. 3. (a) Gain and (b) offset coefficients of the measurement noise power model. (c) Estimated process noise power Q.
14/37
Pixel response model
- Including process noise and measurement noise
- Estimating the process noise power from the residual error

    z = A T r + B + n    (3)
    σ_z² = A² T² Q + R    (4)

where Q is the process noise power at a particular pixel location
- Outliers in the calibration data can corrupt the estimate of the process noise power
  » Occasionally occurs in some isolated pixels
15/37
Shot noise process
- Described by a Poisson distribution
- Approaches a Gaussian as the expected number of occurrences increases

Fig. 4. (Open circles) Probability density function of a Poisson distribution overlaid with the probability density function of a Gaussian distribution. Both distributions have a mean and variance of 10.
16/37
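The approximation behind Fig. 4, which justifies the Gaussian noise model, can be checked numerically; a minimal sketch for mean 10, matching the figure:

```python
import math

# Compare a Poisson(10) pmf against a Gaussian with equal mean and
# variance: the approximation that lets shot noise be treated as Gaussian.
lam = 10.0

def poisson_pmf(k, lam):
    # P(K = k) for a Poisson distribution with rate lam.
    return math.exp(-lam) * lam ** k / math.factorial(k)

def gauss_pdf(x, mu, var):
    # Gaussian density with mean mu and variance var.
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# For a large expected photon count the two curves nearly coincide.
max_gap = max(abs(poisson_pmf(k, lam) - gauss_pdf(k, lam, lam))
              for k in range(31))
```

For larger means (brighter pixels) the gap shrinks further, which is why the Gaussian model is reasonable away from the very darkest counts.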
Selecting the Prosilica GC640 camera
- Micron MT9V203 CMOS image sensor
- Operated in fully manual mode
  » Eliminating the need to compensate for automatic features
  » Reducing quantization noise and not introducing compression artifacts
17/37
Radiance estimation
- HDRI based on Kalman filtering
  » Selected camera with a linear response function and a chosen operating region
  » Applying a Gaussian noise model
- State x of a linear system, expressed in state-space form:

    x_k = Φ_{k-1} x_{k-1} + Γ_{k-1} u_{k-1} + w_{k-1}    (5)

where Φ governs the time evolution of the system, Γ is a weight applied to the control u, and w is additive white Gaussian process noise
- Measurement of the state is given by the measurement model

    z_k = H_k x_k + v_k    (6)

where z is an observation vector, H relates the state to the observation, and v is additive white Gaussian measurement noise
18/37
Expected value operator
- E[v_k w_k^T] = 0,  E[w_k w_k^T] = Q_k,  E[v_k v_k^T] = R_k
- Optimal estimates of the system state are generated recursively with the Kalman filter:

    x̂_k^- = Φ_{k-1} x̂_{k-1} + Γ_{k-1} u_{k-1}    (7)
    P_k^- = Φ_{k-1} P_{k-1} Φ_{k-1}^T + Q_{k-1}    (8)
    K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}    (9)
    x̂_k = x̂_k^- + K_k (z_k − H_k x̂_k^-)    (10)
    P_k = (I − K_k H_k) P_k^- (I − K_k H_k)^T + K_k R_k K_k^T    (11)

where the ^- and plain designations refer to a priori and a posteriori estimates, P_k = E[(x_k − x̂_k)(x_k − x̂_k)^T] is an estimate of the covariance, I is the identity matrix, and K is the Kalman gain
19/37
Reciprocity relation
- The response of a pixel is a function of the product of the scene radiance at that location and the exposure time
- The general process and measurement models reduce to simpler scalar forms under the assumption of a static scene:

    r_k = r_{k-1}    (12)
    z_k = A T_k r_k + B + n_k    (13)

where A and B are the gain and offset parameters of a particular pixel determined from calibration
20/37
Simplifications in the Kalman filter
- Used to estimate the radiance at each pixel location
  » The procedure is performed independently for each pixel, each with its own filter

    r̂_k^- = r̂_{k-1}    (14)
    P_k^- = P_{k-1} + Q_{k-1}    (15)
    K_k = A T_k P_k^- (A² T_k² P_k^- + R_k)^{-1}    (16)
    r̂_k = r̂_k^- + K_k (z_k − A T_k r̂_k^- − B)    (17)
    P_k = (1 − K_k A T_k)² P_k^- + K_k² R_k    (18)
21/37
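The scalar recursion of Eqs. (14)-(18) is short enough to sketch directly. The calibration parameters, exposure times, and true radiance below are hypothetical values for a single pixel, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the per-pixel scalar Kalman filter, Eqs. (14)-(18):
# each exposure z_k = A*T_k*r + B + n_k refines the radiance estimate.
# All parameter values are illustrative.
A, B = 40.0, 12.0          # pixel response parameters from calibration
C, D = 0.5, 0.2            # noise-power model R = C*T*r + D, Eq. (2)
Q = 1e-6                   # process noise power
r_true = 0.8               # true relative radiance at this pixel

rng = np.random.default_rng(2)
T_seq = np.array([0.5, 1.0, 2.5, 5.0, 8.0, 15.5])   # exposure times (ms)

r_hat, P = 0.5, 1.0        # initial estimate and covariance
for T in T_seq:
    R = C * T * r_true + D                          # measurement noise power
    z = A * T * r_true + B + rng.normal(0.0, np.sqrt(R))  # simulated exposure
    # Predict (Eqs. 14-15): static scene, so the state carries over.
    r_pred, P_pred = r_hat, P + Q
    # Update (Eqs. 16-18).
    K = A * T * P_pred / (A**2 * T**2 * P_pred + R)
    r_hat = r_pred + K * (z - A * T * r_pred - B)
    P = (1.0 - K * A * T) ** 2 * P_pred + K**2 * R
```

Running one such filter per pixel over the exposure stack yields both the radiance map (the r̂ values) and the uncertainty map (the P values) shown later in Figs. 8 and 9.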
Performance analysis
- Comparing the performance of the proposed method with previous methods
- Based on exposure sequences of a Gretag-Macbeth color chart illuminated by a 60-W incandescent source

Fig. 5. Exposures used as inputs to the HDRI algorithms. Exposure times were 0.5, 1.0, 2.5, 5.0, 6.5, 8.0, 15.5, 23.0, 35.5, and 65.5 ms.
22/37
Characterizing algorithm performance
- Using an objective metric with SNR as a measure of precision
- A radiance ratio test used as a measure of accuracy

Fig. 6. Camera response functions computed by the Debevec, Mitsunaga, and Robertson methods.
23/37
Number of usable samples and HDR results

Fig. 7. Number of usable samples in the sequence at each pixel location.
Fig. 8. Radiance estimates generated using (a) Debevec, (b) Akyuz, (c) Robertson, (d) Mitsunaga, (e) Richards, and (f) Kalman filtering.
24/37
Uncertainty estimates generated by the Kalman approach

Fig. 9. Estimates of the uncertainty in relative radiance generated by the Kalman-filtering approach.
25/37
Subtle differences between HDR images
- Must be considered for measurement techniques
- Applying objective metrics, beginning with SNR as a measure of uniformity:

    SNR = 20 log₁₀(μ_r / σ_r)    (19)

where μ_r, the mean radiance of the six fully visible patches in the first row of the input images, is taken as the signal amplitude, and σ_r, the standard deviation of the radiance, is taken as the noise amplitude
26/37
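The precision metric of Eq. (19) is straightforward to sketch; the patch values below are synthetic (a uniform patch with 1% noise), used only to show the computation.

```python
import numpy as np

# Sketch of the precision metric of Eq. (19): SNR = 20*log10(mean/std),
# computed over a nominally uniform patch of the radiance map.
def patch_snr(radiance_patch):
    r = np.asarray(radiance_patch, dtype=float)
    return 20.0 * np.log10(r.mean() / r.std())

# Synthetic uniform patch: mean radiance 1.0 with 1% relative noise,
# which should land near 20*log10(1/0.01) = 40 dB.
rng = np.random.default_rng(3)
patch = 1.0 + rng.normal(0.0, 0.01, 10_000)
snr_db = patch_snr(patch)
```

In the paper's protocol the metric is averaged over the six fully visible gray patches of the chart, so one `patch_snr` call per patch would be made.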
SNRs of the original exposure sequence

Table 1. SNR of the exposures shown in Fig. 5.
27/37
SNRs of various HDRI techniques

Table 2. SNR of the HDR images generated from the sequence in Fig. 5.
28/37
Accuracy of radiance estimates
- Relative radiance is easier to obtain than absolute radiance
- Devised a radiance ratio test
  » Comparing measured luminance values to the CIELAB coordinates of the Gretag-Macbeth color chart
- Converting luminance values to relative luminance
  » Each patch is divided by the lightest reference checker, yielding a ratio independent of the white-point normalization factor
  » Values are averaged over each patch
29/37
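The normalization step above can be sketched in a few lines; the patch luminances and the arbitrary scale factor below are hypothetical, chosen only to show that dividing by the lightest patch cancels any global scale.

```python
import numpy as np

# Sketch of the radiance ratio test: per-patch mean luminances are divided
# by the lightest reference patch, so the unknown white-point scale cancels.
# Patch values and the 3.7 scale factor are illustrative.
reference = np.array([95.0, 80.0, 66.0, 50.0]) / 95.0   # reference ratios
measured = np.array([95.0, 80.0, 66.0, 50.0]) * 3.7     # arbitrary scale
relative = measured / measured.max()                    # scale-free ratios
error_pct = 100.0 * np.abs(relative - reference) / reference
```

With real data, `measured` would hold the per-patch averages of the estimated radiance map, and `error_pct` would quantify the radiometric accuracy reported in Tables 3 and 6.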
Relative reflectances of the HDR images generated from the sequence in Fig. 5

Table 3. Relative reflectances of the HDR images generated from the sequence in Fig. 5.
30/37
Examining Figs. 8 and 9
- The uncertainty in relative radiance is closely related to the relative radiance itself
  » Substituting (16) into (18) and iterating k times:

    P_k = P_0 ∏_{i=1}^{k} R_i / (∏_{i=1}^{k} R_i + P_0 A² Σ_{i=1}^{k} T_i² ∏_{j≠i} R_j)    (20)

  » Dividing numerator and denominator by P_0 and letting P_0 → ∞:

    P_k = ∏_{i=1}^{k} R_i / (A² Σ_{i=1}^{k} T_i² ∏_{j≠i} R_j)    (21)

  » Which further simplifies to

    P_k = [A² Σ_{i=1}^{k} T_i² / R_i]^{-1}    (22)
31/37
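The closed form of Eq. (22) can be checked numerically against the covariance recursion of Eqs. (15), (16), and (18) with no process noise and a diffuse prior. The gain, exposure times, and noise powers below are illustrative.

```python
import numpy as np

# Numerical check of Eq. (22): with Q = 0 and an uninformative prior,
# iterating the covariance recursion converges to
# P_k = 1 / (A^2 * sum_i T_i^2 / R_i). Values are illustrative.
A = 40.0
T = np.array([1.0, 2.0, 4.0, 8.0])     # exposure times
R = np.array([0.7, 1.2, 2.2, 4.2])     # per-exposure noise powers

P = 1e9                                # P_0 -> infinity (diffuse prior)
for T_k, R_k in zip(T, R):
    K = A * T_k * P / (A**2 * T_k**2 * P + R_k)      # Eq. (16)
    P = (1.0 - K * A * T_k) ** 2 * P + K**2 * R_k    # Eq. (18)

P_closed = 1.0 / (A**2 * np.sum(T**2 / R))           # Eq. (22)
```

Equivalently, each update adds A²T_k²/R_k to the information 1/P, which is why the closed form is a sum of per-exposure information terms.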
» Experimentally verified that D is typically less than 1% of C T r, so R_i ≈ C T_i r and

    P_k ≈ C r / (A² Σ_{i=1}^{k} T_i)    (23)

» Sampling the HDR scene while doubling the exposure time between each exposure, T_i = 2^{i-1} T_1:

    P_k ≈ C r / (A² (2^k − 1) T_1)    (24)

- A second sequence of exposures with much lower dynamic range, with repeated exposure times

Fig. 10. Second set of exposures used as inputs to the different HDRI techniques. Exposure times are 2, 2, 2, 2.5, 2.5, and 2.5 ms.
32/37
Table 4. SNR of the exposures shown in Fig. 10.
Table 5. SNR of the HDR images generated from the input sequence in Fig. 10.
33/37
Table 6. Relative reflectances of the HDR images generated from the sequence in Fig. 10.
34/37
Relative radiance estimates of the Kalman-filtering approach

Fig. 11. Relative radiance estimates of the Kalman-filtering approach.
Fig. 12. Relative radiance estimates of the Kalman-filtering approach with no process noise.
35/37
Frames from an HDR video sequence

Fig. 13. (a) A moving matchbox and (b) a candle, with the exposure times chosen for the (c) matchbox and (d) candle video sequences.
36/37
Conclusions
- HDRI based on Kalman filtering
- Proposed an objective quality metric to assess precision and accuracy
- Useful for measurement applications
37/37