Optimal Single Image Capture for Motion Deblurring


Amit Agrawal
Mitsubishi Electric Research Labs (MERL), 201 Broadway, Cambridge, MA, USA
agrawal@merl.com

Ramesh Raskar
MIT Media Lab, 20 Ames St., Cambridge, MA, USA

Abstract

Deblurring images of moving objects captured from a traditional camera is an ill-posed problem due to the loss of high spatial frequencies in the captured images. Recent techniques have attempted to engineer the motion point spread function (PSF) by either making it invertible [16] using coded exposure, or invariant to motion [13] by moving the camera in a specific fashion. We address the problem of the optimal single image capture strategy for the best deblurring performance. We formulate optimal capture as maximizing the signal to noise ratio (SNR) of the deconvolved image given a scene light level. As the exposure time increases, the sensor integrates more light, thereby increasing the SNR of the captured signal. However, for moving objects, a larger exposure time also results in more blur and hence more deconvolution noise. We compare three single image capture strategies: (a) traditional camera, (b) coded exposure camera, and (c) motion invariant photography, and determine the best exposure time for capture by analyzing the rate of increase of deconvolution noise with exposure time. We analyze which strategy is optimal for known/unknown motion direction and speed and investigate how the performance degrades in other cases. We present real experimental results by simulating the above capture strategies using a high speed video camera.

1. Introduction

Consider the problem of capturing a sharp image of a moving object. If the exposure time can be made sufficiently small, a sharp image can be obtained. However, a small exposure time integrates less light, thereby increasing the noise in the captured image.
As the exposure time increases, the SNR of the captured signal improves, but moving objects also result in increased motion blur. Motion deblurring attempts to recover a sharp image by deconvolution, and the deconvolution noise increases with exposure. In this paper, we ask the following question: What is the best exposure time and capture strategy for capturing a single image of a moving object? We formulate the problem of optimal capture as follows: maximize the SNR of the deconvolved image of the moving object, given a certain scene light level, while not degrading the image corresponding to the static parts of the scene.¹ To obtain the best deblurring performance, one needs to analyze the rate of increase of capture SNR versus deconvolution noise with the exposure time. For imaging sensors, the capture SNR increases proportional to the square root of the exposure time (sub-linear) due to the signal-dependent photon noise. It is well known that deblurring of images obtained from a traditional camera is highly ill-posed, due to the loss of high spatial frequencies in the captured image. We first show a simple but rather non-intuitive result: the deconvolution noise for 1-D motion blur using a (static) traditional camera increases faster than the capture SNR with the exposure time. Thus, increasing the exposure time always decreases the SNR of the deconvolved moving object. We then analyze recent advances in engineering the motion PSF that dramatically improve the deconvolution performance. Two prominent methods are (a) making the PSF invertible using a coded exposure camera [16], and (b) making the PSF invariant by moving the camera with non-zero acceleration [13]. A coded exposure camera [16] modulates the integration pattern of light by opening and closing the shutter within the exposure time using a carefully chosen pseudorandom code. The code is chosen so as to minimize the deconvolution noise assuming a specific amount of motion blur in the image.
However, coded exposure also loses light. In [16], the chosen code was 50% on/off, thereby losing half the light compared to a traditional camera with the same exposure time. While [16] analyzed the improvement in deconvolution performance, it ignored the loss of light in the image capture. We incorporate the loss of light in our analysis, and show that with signal-dependent noise it is not necessary to have a 50% on/off code; one has the flexibility of choosing other codes. Note that the PSF is made invertible for any object motion direction, while the motion magnitude is required for the optimal choice of code.

¹ Otherwise a trivial capture strategy would be to move the camera with the same speed as the object if the motion direction is known.

Figure 1. Overview of single image capture techniques for motion deblurring:

                             Traditional   MIP [13]               Coded Exposure [16]
  Camera motion required?    No            Yes                    No
  Loss of light              No            No                     Yes
  Invariant PSF              No            Yes                    No
  PSF invertibility          Very bad      Good (object motion    Good (any object
                                           along camera motion    motion direction)
                                           direction, magnitude
                                           in a range)
  Noise on static scene      No            Deconvolution noise    No (for the same
  parts                                                           light level)

Coded exposure is optimal for deblurring for any motion direction if the motion magnitude is known, but the motion PSF must be estimated for deblurring. MIP is optimal if the motion direction is known and the magnitude lies within a range (which may be unknown), with the additional advantage that the motion PSF need not be estimated (it is invariant). However, the performance of coded exposure degrades gradually as the motion magnitude differs from the desired one, while MIP performance degrades sharply as the motion direction differs from the camera motion direction and as the motion magnitude goes beyond the assumed range.

Motion invariant photography (MIP) [13] moves the camera with a constant acceleration while capturing the image. The key idea is to make the motion PSF invariant to object speed within a certain range. Thus, objects moving with different speeds within that range result in the same motion PSF. Note that MIP needs to know the direction of the object motion, since the camera must be moved accordingly, but knowledge of the motion magnitude is not required. Another disadvantage is that the static parts of the scene are also blurred during capture, leading to deconvolution noise on those scene parts.
We compare the three techniques in terms of the SNR of the deconvolved image and obtain optimal parameters given a scene light level and object velocity (or range of velocities). Given capture parameters for a scenario, we investigate how the performance degrades for different motion magnitudes and directions. An overview is shown in Figure 1.

Contributions

We formulate the problem of optimal single image capture of a moving object as maximizing the SNR of the deconvolved image of the moving object. We show that for traditional image capture using a static camera, the SNR of the deblurred moving object decreases with increasing exposure time. We investigate which capture strategy to choose, the choice of exposure time and associated parameters, and analyze performance under different operating conditions such as known/unknown motion magnitude and direction.

Related work

Motion deblurring has been an active area of research over the last few decades. Blind deconvolution [9, 5] attempts to estimate the PSF from the given image itself. Since deblurring is typically ill-posed, regularization algorithms such as Richardson-Lucy [14, 18] are used to reduce noise. Recent papers [22, 10, 11, 20, 2] have shown promising results for PSF estimation and deblurring.

Manipulating PSF: By coding the exposure, [16] made the PSF invertible and easy to solve. Wavefront coding [3] modifies the defocus blur to become depth-independent using a cubic phase plate with the lens, while Nagahara et al. [15] move the sensor in the lateral direction during image capture to achieve the same. MIP [13] makes the motion PSF invariant for a range of speeds by moving the camera. Coded apertures have been used in astronomy with MURA codes for low deconvolution noise [4], with broadband codes for digital refocusing [21], and for depth estimation [8, 12].

Improved capture strategies: In [6], optimal exposures were obtained to combine images for high dynamic range imaging.
Hadamard multiplexing was used in [19] to increase the capture SNR in the presence of multiple light sources. The effect of photon noise and saturation was further included in [17] to obtain better codes.

2. Optimal single image capture

Consider an object moving with velocity v m/sec. For simplicity, we assume that the object is moving horizontally in a single plane parallel to the image plane and that the object motion results in an image blur of v_i pixels/ms. Let i denote the captured blurred image at an exposure time of 1 ms, and let ī_o and ī_b be the average image intensities of the moving object and the static background in the captured image respectively. Define a baseline exposure time t_0 = 1/v_i for which the blur is 1 pixel in the captured image. Let m be the size of the object in the image along the motion direction if it were static, and let SNR_0 be the minimum acceptable SNR for the object.

Image noise model: We use the affine noise model [17, 1, 7], where the noise η is described as the sum of a signal-independent term and a signal-dependent term. The signal-independent term is due to dark current, amplifier noise and the A/D quantizer. Let the gray level variance of this term be σ²_gray. Signal-dependent noise is related to the photon flux and the uncertainty of the electron-photon conversion process. The variance of the photon-generated electrons increases linearly with the measured signal, and hence with
the exposure time t. Thus, the photon noise variance can be written as Ct, where C is a camera-dependent constant, so σ²_η = σ²_gray + Ct. Given this noise model, the SNR of the captured image is

SNR_capture = ī_o t / √(σ²_gray + Ct).    (1)

For long exposures (Ct ≫ σ²_gray), the photon noise dominates and SNR_capture ≈ ī_o √(t/C) increases as the square root of the exposure time. When Ct ≪ σ²_gray, SNR_capture increases linearly with t.

Figure 2. Comparison of capture strategies. (Left) A 1D object (blue) of length m blurs by k in x-t space, with integration lines corresponding to the traditional camera (solid brown), coded exposure (dotted) and MIP (yellow), and the resulting PSFs. Objects moving with speed v have energy along a single line in the frequency-domain f_x-f_t space. For the traditional and coded exposure (static) cameras, the captured image corresponds to the f_t = 0 slice after modulation by a sinc (red) and a broadband (blue) filter respectively. Thus, for coded exposure, any velocity v results in non-zero energy on the f_t = 0 slice for all spatial frequencies. MIP optimally captures energy within the wedge given by [−v_r, v_r] [13] but performs poorly for v outside this range. (Right) The motion PSF for MIP becomes similar to a box function as the speed increases beyond the desired range (v_r = 3).

Deconvolution noise: At exposure time t, the amount of blur is k = t v_i. The captured image i(x,y) is modeled as the convolution of the sharp image of the object s(x,y) with the motion PSF h(x), plus noise:

i(x,y) = s(x,y) ⊗ h(x) + η(x,y),    (2)

where ⊗ denotes convolution. For 1D motion, the discrete equation for each motion line is i = As + n, where A, of size (m+k−1) × m, denotes the 1D circulant motion smear matrix, and s, i and n denote the vectors of sharp-object, blurred-object and noise intensities along each motion line. The estimated deblurred image is then

ŝ = (AᵀA)⁻¹ Aᵀ i = s + (AᵀA)⁻¹ Aᵀ n.
(3)

The covariance matrix of the noise in the estimate ŝ − s is equal to

Σ = (AᵀA)⁻¹ Aᵀ σ²_η A (AᵀA)⁻ᵀ = σ²_η (AᵀA)⁻¹.    (4)

The root mean square error (RMSE) thus increases by a factor f = √(trace((AᵀA)⁻¹)/m). The SNR of the deconvolved object at exposure time t is given by

SNR_d = ī_o t / (f √(σ²_gray + Ct)),    (5)

where f denotes the deconvolution noise factor (DNF). (We use 20 log₁₀(·) for decibels.)

Traditional camera

For a traditional capture, the motion PSF is a box function whose width equals the blur size k:

h(x) = 1/k if 0 < x < k, and 0 otherwise.    (6)

Figure 3 (left) shows plots of √t/f, which is proportional to SNR_d at high signal-dependent noise (Ct ≫ σ²_gray). Plots are shown for different object velocities assuming m = 300. Note that the SNR decreases as the exposure time is increased. Thus, for traditional capture, increasing the exposure time decreases the SNR of the deconvolved object. For a specific camera, the minimum exposure time that satisfies SNR_d > SNR_0 would be optimal, if this condition can be satisfied.

Trivial capture: If the SNR at the baseline exposure t_0 is greater than SNR_0, then the optimal exposure time is t_0. For example, if there is enough light in the scene (bright daylight), a short exposure will capture a sharp image of a moving object with good SNR.

3. Reducing deconvolution noise

We now consider two approaches for reducing the deconvolution noise, (a) the coded exposure camera and (b) MIP, and analyze the optimal capture strategy for known/unknown motion magnitudes and directions.

Coded exposure camera

In coded exposure, the PSF h is modified by modulating the integration pattern of the light without camera motion. Instead of keeping the shutter open for the entire exposure time, a coded exposure camera flutters the shutter open and closed using a carefully chosen binary code. Let n be the code length and s be the number of ones in the code. Light is integrated when the code is 1 and is blocked when it is 0.
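Both the box PSF above and a fluttered code can be plugged into the DNF f = √(trace((AᵀA)⁻¹)/m). The following NumPy fragment is a minimal sketch (the helper name and the example code bits are ours, not from the paper): it builds the 1D smear matrix for a binary exposure code and shows that the box DNF grows faster than √k, which is why √t/f, and hence SNR_d, falls as the exposure and blur grow, while a fluttered PSF of the same length stays much better conditioned.

```python
import numpy as np

def dnf(code, m=60):
    """DNF f = sqrt(trace((A^T A)^-1) / m) for the 1-D smear matrix A
    induced by a binary exposure code (an all-ones code is the box PSF)."""
    k = len(code)
    psf = np.asarray(code, dtype=float)
    psf /= psf.sum()                      # normalize integrated light to 1
    A = np.zeros((m + k - 1, m))
    for col in range(m):                  # each column: shifted copy of the PSF
        A[col:col + k, col] = psf
    return np.sqrt(np.trace(np.linalg.inv(A.T @ A)) / m)

# Box PSF: f grows faster than sqrt(k), so sqrt(t)/f decreases with exposure.
for k in (1, 2, 4, 8, 16):
    print(k, dnf([1] * k))

# An illustrative fluttered code of the same blur size has a far lower DNF;
# its light loss (s/n) enters the SNR separately, as discussed below.
flutter = [1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1]
print(dnf(flutter), dnf([1] * len(flutter)))
```

Normalizing the PSF to unit sum keeps the comparison purely about conditioning; the light-loss factor s/n is accounted for in the SNR expression for coded exposure.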
This preserves high spatial frequencies in the captured blurred image at the expense of losing light. Note that s = 1 is equivalent to the short exposure image and s = n is
equivalent to the traditional camera. Thus, coded exposure provides a tradeoff between the amount of light and the amount of deconvolution noise.

Figure 3. Key idea: At large signal-dependent noise (Ct ≫ σ²_gray), SNR_d decreases as the exposure time is increased for the traditional camera, but not for coded exposure and MIP. The plots show the decrease in SNR for different object speeds. For these plots, parameters depending on the exposure time and object speed were used for both coded exposure and MIP.

The SNR of the deconvolved image for the coded exposure camera is given by

SNR_d^CE = ī_o t (s/n) / (f_CE √(σ²_gray + C t s/n)),    (7)

since both the signal and the signal-dependent noise are reduced by a factor of s/n. In [16], the light loss was kept equal to 50% (s = n/2). Note that [16] ignores the loss of light in its analysis of deconvolution noise and thus only minimizes f_CE when finding the best code, whereas one should maximize SNR_d^CE. We first evaluate the relationship between n and s for optimal code design incorporating the loss of light.

Code selection incorporating light loss: First, we analyze the choice of n for a fixed amount of light (same s). The same amount of light ensures that the capture noise is similar, so one can directly compare DNFs for different n. Figure 4 (left) shows plots of DNF versus n for several values of s. For each plot, n is in the range [s, 3s]. Note that DNF decreases sharply as n is increased, as expected. However, the knee in the curves shows that increasing n beyond a certain point leads to similar performance. Since the knee occurs before 2s, this implies that a smaller code length can be used. Small codes are easier to search for and also lead to a larger on/off switching time in a practical implementation.

Next, we plot the SNR as s is increased from 1 to n for a fixed n. At low light levels, SNR ∝ (s/n)/f_CE.
At low s, the increase in noise due to low light overwhelms the reduction in deconvolution noise. At high s, deconvolution noise dominates. Thus, there is a distinct valley for each curve and s = n/2 is a good choice, as shown in Figure 4 (middle). However, now consider the effect of signal-dependent noise at high light levels. In this case, SNR ∝ √(s/n)/f_CE, and the plots are shown in Figure 4 (right). Notice that for a given n, the performance is good for a range of s and not just for s = n/2. Thus, a code with a smaller s can be used. Since the size of the search space is of the order of C(n, s), this leads to a faster search for small s. For example, for n = 40, C(40, 20) ≈ 1.4 × 10¹¹, while C(40, 8) ≈ 7.7 × 10⁷.

Choice of t: Now we analyze the performance of coded exposure for different exposure times t by considering a code that depends on the blur size. However, in practice, the object speed (blur size) is not known a priori. The analysis of how a code generalizes to different object velocities is done in Section 4. Figure 3 (middle) plots SNR (∝ √(ts/n)/f_CE) for coded exposure versus t for different velocities v_i, where for every blur size k = t v_i the best code was used. Note that with signal-dependent noise, the SNR does not decrease with exposure time. Thus, the exposure time can be increased, which is useful since the static parts of the scene can then be captured with higher SNR.

Motion invariant photography

In MIP, the camera is moved with a constant acceleration so that objects moving with speeds within [−v_r, v_r] and in the same direction as the camera motion result in the same motion PSF. Intuitively, for every object velocity v ∈ [−v_r, v_r], the moving camera spends an equal amount of time moving with velocity v, thus capturing the object sharply for that time period. The PSF is thus peaked at zero and has low deconvolution noise compared to the box (flat) PSF of a traditional capture. Let the acceleration of the camera be a and let T = t/2.
For a velocity range [−v_r, v_r] and exposure time t, a = v_r/(2T) = v_r/t [13] for good performance. The SNR of the deconvolved image for MIP is given by

SNR_d^MIP = ī_o t / (f_MIP √(σ²_gray + Ct)),    (8)

where f_MIP depends on the PSF as modified by the camera motion.

Choice of t: We first analyze the performance of MIP for a given velocity v and exposure time t, using a = v/t.
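Under the parabolic camera path above, the MIP PSF for a given object speed can be sketched by histogramming the object-minus-camera displacement over the exposure. The sketch below is a discrete illustration with our own parameter values, not the authors' implementation; it reproduces the peaked PSF for speeds inside the camera's velocity sweep and the box-like PSF for speeds far outside it.

```python
import numpy as np

def mip_psf(v, a, t, bins=64, samples=200_000):
    """Discrete MIP PSF for an object of speed v under a camera moving with
    constant acceleration a: histogram of the relative displacement over
    the exposure interval [-T, T]."""
    T = t / 2.0
    tau = np.linspace(-T, T, samples)
    disp = v * tau - 0.5 * a * tau ** 2   # object minus camera displacement
    hist, _ = np.histogram(disp, bins=bins)
    return hist / hist.sum()

t, v_r = 30.0, 3.0
a = v_r / t                               # a = v_r / (2T) = v_r / t, as above
# Speed inside the camera's velocity sweep: the PSF has a sharp peak at the
# instant the camera velocity matches the object (stationary point of disp).
inside = mip_psf(v=1.0, a=a, t=t)
# Speed well outside the sweep: no stationary point, so the PSF approaches
# the flat box of a traditional capture (cf. Figure 2, right).
outside = mip_psf(v=9.0, a=a, t=t)
print(inside.max(), outside.max())
```

The stationary point of the displacement is exactly the "camera momentarily tracks the object" intuition; its absence outside the covered velocity range is why the PSF, and hence the deconvolution performance, degrades sharply there.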

Figure 4. Choice of optimal n and s for coded exposure. (Left) For a given s, DNF decreases as n is increased, and the knee in each curve occurs before n = 2s (marked with a square). This indicates that s < n/2 could be used. (Middle) For signal-independent noise, the SNR is maximized around s = n/2. (Right) However, for signal-dependent noise, a smaller s can be used.

Note that in practice the speed and direction of the object are not known a priori; how the performance generalizes is described in Section 4. Figure 3 (right) plots SNR (∝ √t/f_MIP) at high signal-dependent noise (Ct ≫ σ²_gray) with respect to t for various speeds, assuming a known motion direction. Note that the SNR does not decrease as the exposure time increases. Thus, the exposure time for MIP can be increased for capture, similar to the coded exposure camera.

4. Comparisons and performance analysis

First we compare the different capture strategies for the same amount of captured light. This ensures that the capture noise is similar for all three strategies, allowing direct comparison of the DNFs. Note that to keep the same light level, t is decreased by a factor of n/s for MIP and the traditional camera. This leads to more blur in the coded exposure image by the same factor. In [13], coded exposure deblurring was visually compared with MIP using synthetic data, but [13] does not state the code and blur size used for the comparisons. Thus, it is difficult to fairly evaluate the performance reported in [13]. In addition, the captured light level is not the same across the comparisons in [13].

DNF comparison: Figure 5 compares DNFs with t for different velocities. For coded exposure, the motion direction is not known, but the speed was assumed to be known for computing the optimal code.
In contrast, for MIP, the motion direction was assumed to be known and the maximum speed was set to v_r = 3. In addition, a = v_r/t was used separately for each t for the best performance. While at lower speeds (v < v_r) MIP gives low deconvolution noise, as the speed approaches v_r, coded exposure performs better than MIP.

Figure 5. Comparison of DNF for the various capture strategies at the same light level. t represents the exposure used for the coded exposure camera. At lower speeds (v < v_r), MIP gives lower deconvolution noise than coded exposure, but coded exposure becomes better as v approaches v_r.

Performance generalization for motion magnitude: The acceleration parameter a for MIP is set based on the desired velocity range [−v_r, v_r]. We first analyze how a particular choice of a performs when the object velocity is outside this range. Note that we assume that the camera can still choose a different a based on the exposure time. Figure 6 (middle) shows DNF for the velocity range [0, 2v_r] with t, using v_r = 3. Within the velocity range [0, v_r] the deconvolution noise is low, but it increases as the object speed increases. Intuitively, the reason for the good performance of MIP is that for some amount of time within the exposure, the camera moves with the same speed as the object. Thus, the PSF for all object speeds in [−v_r, v_r] is highly peaked. However, when the speed of the object lies outside [−v_r, v_r], this does not happen, and the PSF becomes more like a box function, as shown in Figure 2. Using the frequency domain analysis in [13], the image due to MIP corresponds to a parabola-shaped slice in the f_x-f_t space. The parabola lies within the wedge given by the velocity range [−v_r, v_r] and is optimal for [−v_r, v_r]. However, an object speed v outside [−v_r, v_r] corresponds to a line outside the wedge, and the parabolic slice does not capture high frequency information for those speeds (Figure 2). Thus, the performance degrades rapidly.
In contrast, the coded exposure camera is optimized for a
particular velocity v_p instead of a velocity range. Similar to the above, we assume that the camera can still choose a different code based on the exposure time t. Thus, for each t, a code of length k = v_p t is chosen, while the actual object velocity v could result in a different amount of blur, vt. Figure 6 (left) plots DNF versus t for different object speeds assuming v_p = 3. Note that the deconvolution noise does not increase as v increases beyond v_p. However, since the blur can only be resolved within one chop (a single 1) of the code, the minimum resolvable feature size increases with v. In the frequency domain, coded exposure modulates along the f_t direction using the frequency transform of the chosen code, and the image corresponds to the horizontal slice (f_t = 0). Even if the object velocity v differs from v_p, broadband modulation along f_t allows high frequency information to be captured in the horizontal slice, leading to good performance.

Figure 6. Performance generalization: (Left) As the object speed increases beyond the assumed speed, the deconvolution noise does not increase for coded exposure, but the minimum resolvable feature size increases. For MIP, performance degrades rapidly as the motion magnitude v increases beyond v_r = 3 (middle) and as the motion direction differs from the camera motion direction (right).

Another interesting observation is that MIP optimizes the capture bandwidth for all speeds within [−v_r, v_r]. In a practical situation, however, the speed of the object may not vary from 0 to v_r, but rather lie in a small range centered around a speed greater than zero. In such cases, the MIP capture process does not remain optimal.

Performance generalization for motion direction: Coded exposure makes the PSF invertible for any motion direction, but the direction needs to be known for deblurring, as shown in [16].
For MIP, the camera needs to be moved along the object motion direction, while the magnitude of the motion is not required for deblurring since the PSF becomes invariant. However, as the object motion direction differs from the camera motion direction, performance degrades sharply for MIP, since the PSF does not remain invertible or invariant. Let θ denote the difference between the camera and object motion directions. Figure 6 (right) plots DNF for θ ranging from 0 to 90 degrees. Note that the noise increases sharply with θ and all curves meet at v = 0 (static scene).

Static scene parts: For coded exposure and the traditional camera, the static parts of the scene are captured without any degradation (for the same light level). For MIP, PSF estimation is not required for static scene parts, but they are also blurred due to the camera motion, leading to SNR degradation.

In conclusion, if the motion direction is known exactly and the motion magnitude lies within a (possibly unknown) range, the MIP solution should be used for capture. However, if the motion direction is unknown, coded exposure is the optimal choice. Moreover, the performance degrades slowly for coded exposure as the object speed/direction differs from the assumed values, but degrades sharply for MIP.

5. Implementation and results

We capture a high speed video and simulate the various capture strategies for comparison. A traditional camera image can be obtained by simply averaging the frames of the high speed video, and the image corresponding to a coded exposure camera can be obtained by averaging the frames corresponding to the 1s of the code. MIP can be simulated by shifting the images according to the camera motion before averaging. For these experiments, the individual high speed camera images are sufficiently above the noise bed of the camera (high signal-dependent noise). Thus, averaging N high speed camera images has noise characteristics similar to a single N-times longer exposure image, since both increase the capture SNR by √N.
Using a high speed video enables us to evaluate the performance of all three cameras on the same data, which would otherwise require a complicated hardware setup. In addition, images corresponding to different exposure times and camera motions can easily be obtained for the same data. In general, there could be an integration gap between the frames of a high speed camera. We use the Photron FASTCAM-X high speed camera, in a mode that allows the frame integration time to equal the inverse of the frame rate. Thus, this gap does not have any significant effect. Moreover, any such effect would be identical for all three techniques. For all techniques, we deblur simply by solving the linear system (without any regularization) to analyze the effects of deconvolution noise clearly.
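The averaging procedure above can be sketched on a synthetic 1-D high speed stack. In the fragment below the code bits, object speed and camera acceleration are illustrative values of ours, not the experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, width = 13, 128
scene = rng.random(width)              # 1-D stand-in for a sharp scene line

# Synthetic high speed stack: the object translates 1 pixel per frame
# (circular shifts keep the example self-contained).
stack = np.stack([np.roll(scene, f) for f in range(n_frames)])

# (a) Traditional camera: average every frame of the stack.
traditional = stack.mean(axis=0)

# (b) Coded exposure: average only the frames where the code is 1.
code = np.array([1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1])   # illustrative code
coded = stack[code == 1].mean(axis=0)

# (c) MIP: shift each frame by the parabolic camera displacement before
#     averaging, as described above.
tau = np.arange(n_frames) - (n_frames - 1) / 2.0
a_cam = 0.2                            # illustrative acceleration, px/frame^2
cam = np.round(0.5 * a_cam * tau ** 2).astype(int)
mip = np.stack([np.roll(stack[f], -cam[f])
                for f in range(n_frames)]).mean(axis=0)

print(traditional.shape, coded.shape, mip.shape)
```

Averaging all frames reproduces a box blur of the full exposure, the coded average reproduces the fluttered-shutter blur, and the pre-shifted average reproduces the camera-motion blur of MIP, all from the same underlying data.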

Setup: A high speed video of a moving resolution chart is captured at 1000 fps. The speed is determined manually to be v = 0.28 pixels/frame and is fairly low, allowing accurate simulation of the various strategies.

Figure 7. Comparison of the three approaches for the same light level: traditional (t = 27 ms), coded exposure (t = 39 ms) and MIP (t = 27 ms). (Top row) Blurred images. (Bottom row) Corresponding deblurred images. DNFs were empirically estimated to be 19.8, 2.41 and 1.5 dB for traditional, coded exposure and MIP respectively. Visually, the coded exposure deblurring is sharper than the MIP deblurring.

Figure 8. As the object speed v increases beyond the desired speed v_p, the performance of coded exposure does not degrade in terms of deconvolution noise, but the size of the minimum resolvable feature increases. The deconvolution noise increase is 2.61, 2.65 and 2.69 dB respectively.

Comparisons: Figure 7 shows comparisons of the three techniques. The blurred image for the coded exposure camera was generated using the code (n = 13, s = 9) and an exposure time of 39 ms (chop time of 3 ms). For the traditional camera (box) and MIP, the exposure time t was reduced to 39 × 9/13 = 27 ms to give the same light level. For MIP, a = 2v/t was used to get a good PSF (v_r = 2v). As expected, both MIP and coded exposure result in good deblurring. DNF was calculated empirically over a homogeneous region (shown in the yellow box) as the ratio of the variances in the deblurred and blurred images. The DNF values were 19.8, 2.41 and 1.5 dB for traditional, coded exposure and MIP respectively. Visually, the coded exposure deblurring result is sharper than the MIP deblurring.

Coded exposure performance: An easy way to simulate a faster moving object for coded exposure is to increase the object displacement within each 1 of the code by adding consecutive frames. We simulate resolution chart motion with speeds of 1.07, 1.61 and 2.1 pixels/ms by adding 4, 6 and 8 consecutive images for each chop respectively, using the same n = 13 code (optimal for v_p = 1).
Figure 8 shows that as the speed of the object increases, the deblurring results do not show deconvolution artifacts, but the size of the minimum resolvable feature increases (the vertical lines at the bottom are not resolved clearly). However, the deconvolution noise remains almost constant: the values were 2.61, 2.65 and 2.69 dB respectively. This is a very useful property, since only the effective resolution of the deblurred object is decreased, without any deconvolution artifacts, leading to a gradual performance decay.

Figure 9. Performance of MIP degrades sharply as the object velocity increases beyond the assumed limit. Results are shown for v = 0.28 and v_r = 0.28, 0.14 and 0.09 pixels/ms respectively. Note that the deblurring shows increased noise when v is greater than v_r. The corresponding DNFs are 2.99, 10.7 and 13.1 dB respectively.

MIP performance: Figure 9 shows that the MIP deblurring performance degrades rapidly as the object velocity increases beyond v_r. For these results, the blurred images were generated using a = v/t, a = v/2t and a = v/3t, effectively setting v_r to v, v/2 and v/3 respectively. For v_r = v, the DNF was low (2.99 dB) as expected, but it increases to 10.7 and 13.1 dB for v_r = v/2 and v_r = v/3 respectively. Figure 10 shows that as the object motion direction differs from the camera motion, the deblurring performance degrades sharply for MIP due to the vertical component of the motion blur. Even though the vertical blur is smaller than 4 pixels in all three cases, the deblurring results show artifacts.
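The empirical DNF used above, i.e. the variance ratio over a homogeneous patch expressed in decibels, can be sketched as follows (the function name is ours, and the 20 log₁₀ amplitude convention for decibels is assumed):

```python
import numpy as np

def empirical_dnf_db(blurred_patch, deblurred_patch):
    """Empirical DNF in dB: variance of a homogeneous patch after
    deblurring relative to the same patch before deblurring."""
    ratio = np.var(deblurred_patch) / np.var(blurred_patch)
    return 10.0 * np.log10(ratio)        # variance ratio = 20*log10 on std

# Synthetic sanity check: amplifying the noise std by a factor of 3
# should report 20*log10(3) dB.
rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 10_000)
print(empirical_dnf_db(noise, 3.0 * noise))   # ≈ 9.54 dB
```

Because a homogeneous patch has no scene texture, its variance is essentially noise variance, so this ratio directly measures how much deconvolution amplified the noise.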

Figure 10. MIP performance degrades as the camera motion direction differs from the object motion direction by the angle θ. Although the vertical blur is small (under 4 pixels), the deblurring shows artifacts since the resulting PSF does not remain invertible.

6. Conclusions

We posed the problem of optimal single image capture for motion deblurring as maximizing the SNR of the deconvolved object, taking into account capture noise, light level and deconvolution noise. We showed that increasing the exposure time to gain more light is not beneficial for a traditional camera in the presence of motion blur and signal-dependent noise. For both coded exposure and MIP, the exposure time can be increased without SNR degradation on moving objects. Coded exposure is optimal for any unknown motion direction with known motion magnitude, and its performance degrades gradually as the motion magnitude differs from the desired one. MIP is optimal if the motion direction is known and the motion magnitude is within a known range, but its performance degrades rapidly as the motion magnitude and direction differ, along with increased noise on the static scene parts. We showed that optimal codes for coded exposure need not be 50% on/off if signal-dependent noise is taken into account. We presented an evaluation on real datasets, allowing the design of an optimal capture strategy for single image motion deblurring. Our analysis could also be extended to compare other capture strategies for motion/defocus blur using single/multiple images and more complicated blur functions.

Acknowledgements

We thank the anonymous reviewers and several members of MERL for their suggestions. We also thank Jay Thornton, Keisuke Kojima, and Haruhisa Okuda, Mitsubishi Electric, Japan, for help and support.

References

[1] R. N. Clark. Digital camera sensor performance summary.
[2] S. Dai and Y. Wu. Motion from blur. In Proc. Conf. Computer Vision and Pattern Recognition, pages 1-8, June 2008.
[3] E. R. Dowski and W. Cathey.
Extended depth of field through wavefront coding. Appl. Optics, 34(11): , Apr [4] E. Fenimore and T. Cannon. Coded aperture imaging with uniformly redundant arrays. Appl. Optics, 17: , [5] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman. Removing camera shake from a single photograph. ACM Trans. Graph., 25(3): , 06. [6] M. Grossberg and S. Nayar. High Dynamic Range from Multiple Images: Which Exposures to Combine? In ICCV Workshop on Color and Photometric Methods in Computer Vision (CPMCV), Oct 03. [7] G. E. Healey and R. Kondepudy. Radiometric ccd camera calibration and noise estimation. IEEE Trans. Pattern Anal. Machine Intell., 16(3): , [8] S. Hiura and T. Matsuyama. Depth measurement by the multi-focus camera. In Proc. Conf. Computer Vision and Pattern Recognition, pages , [9] P. Jansson. Deconvolution of Image and Spectra. Academic Press, 2nd edition, [] J. Jia. Single image motion deblurring using transparency. In Proc. Conf. Computer Vision and Pattern Recognition, pages 1 8, June 07. [11] N. Joshi, R. Szeliski, and D. Kriegman. PSF estimation using sharp edge prediction. In Proc. Conf. Computer Vision and Pattern Recognition, June 08. [12] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Image and depth from a conventional camera with a coded aperture. ACM Trans. Graph., 26(3):70, 07. [13] A. Levin, P. Sand, T. S. Cho, F. Durand, and W. T. Freeman. Motion-invariant photography. ACM Trans. Graph., 27(3):1 9, 08. [14] L. Lucy. An iterative technique for the rectification of observed distributions. J. Astronomy, 79: , [15] H. Nagahara, S. Kuthirummal, C. Zhou, and S. Nayar. Flexible Depth of Field Photography. In Proc. European Conf. Computer Vision, Oct 08. [16] R. Raskar, A. Agrawal, and J. Tumblin. Coded exposure photography: motion deblurring using fluttered shutter. ACM Trans. Graph., 25(3): , 06. [17] N. Ratner and Y. Y. Schechner. Illumination multiplexing within fundamental limits. In Proc. Conf. 
Computer Vision and Pattern Recognition, June 07. [18] W. Richardson. Bayesian-based iterative method of image restoration. J. Opt. Soc. of America, 62(1):55 59, [19] Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur. A theory of multiplexed illumination. In Proc. Int l Conf. Computer Vision, volume 2, pages , 03. [] Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from a single image. ACM Trans. Graph., 27(3):1, 08. [21] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin. Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Trans. Graph., 26(3):69, 07. [22] L. Yuan, J. Sun, L. Quan, and H.-Y. Shum. Progressive interscale and intra-scale non-blind image deconvolution. ACM Trans. Graph., 27(3):1, 08.

More information

Image Enhancement of Low-light Scenes with Near-infrared Flash Images

Image Enhancement of Low-light Scenes with Near-infrared Flash Images IPSJ Transactions on Computer Vision and Applications Vol. 2 215 223 (Dec. 2010) Research Paper Image Enhancement of Low-light Scenes with Near-infrared Flash Images Sosuke Matsui, 1 Takahiro Okabe, 1

More information

SURVEILLANCE SYSTEMS WITH AUTOMATIC RESTORATION OF LINEAR MOTION AND OUT-OF-FOCUS BLURRED IMAGES. Received August 2008; accepted October 2008

SURVEILLANCE SYSTEMS WITH AUTOMATIC RESTORATION OF LINEAR MOTION AND OUT-OF-FOCUS BLURRED IMAGES. Received August 2008; accepted October 2008 ICIC Express Letters ICIC International c 2008 ISSN 1881-803X Volume 2, Number 4, December 2008 pp. 409 414 SURVEILLANCE SYSTEMS WITH AUTOMATIC RESTORATION OF LINEAR MOTION AND OUT-OF-FOCUS BLURRED IMAGES

More information

Comparison of Reconstruction Algorithms for Images from Sparse-Aperture Systems

Comparison of Reconstruction Algorithms for Images from Sparse-Aperture Systems Published in Proc. SPIE 4792-01, Image Reconstruction from Incomplete Data II, Seattle, WA, July 2002. Comparison of Reconstruction Algorithms for Images from Sparse-Aperture Systems J.R. Fienup, a * D.

More information

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University CS534 Introduction to Computer Vision Linear Filters Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What are Filters Linear Filters Convolution operation Properties of Linear Filters

More information

Realistic Image Synthesis

Realistic Image Synthesis Realistic Image Synthesis - HDR Capture & Tone Mapping - Philipp Slusallek Karol Myszkowski Gurprit Singh Karol Myszkowski LDR vs HDR Comparison Various Dynamic Ranges (1) 10-6 10-4 10-2 100 102 104 106

More information

Total Variation Blind Deconvolution: The Devil is in the Details*

Total Variation Blind Deconvolution: The Devil is in the Details* Total Variation Blind Deconvolution: The Devil is in the Details* Paolo Favaro Computer Vision Group University of Bern *Joint work with Daniele Perrone Blur in pictures When we take a picture we expose

More information

IMAGE TAMPERING DETECTION BY EXPOSING BLUR TYPE INCONSISTENCY. Khosro Bahrami and Alex C. Kot

IMAGE TAMPERING DETECTION BY EXPOSING BLUR TYPE INCONSISTENCY. Khosro Bahrami and Alex C. Kot 24 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP) IMAGE TAMPERING DETECTION BY EXPOSING BLUR TYPE INCONSISTENCY Khosro Bahrami and Alex C. Kot School of Electrical and

More information

Computational Photography Introduction

Computational Photography Introduction Computational Photography Introduction Jongmin Baek CS 478 Lecture Jan 9, 2012 Background Sales of digital cameras surpassed sales of film cameras in 2004. Digital cameras are cool Free film Instant display

More information

Removal of Glare Caused by Water Droplets

Removal of Glare Caused by Water Droplets 2009 Conference for Visual Media Production Removal of Glare Caused by Water Droplets Takenori Hara 1, Hideo Saito 2, Takeo Kanade 3 1 Dai Nippon Printing, Japan hara-t6@mail.dnp.co.jp 2 Keio University,

More information

Photographic Color Reproduction Based on Color Variation Characteristics of Digital Camera

Photographic Color Reproduction Based on Color Variation Characteristics of Digital Camera KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS VOL. 5, NO. 11, November 2011 2160 Copyright c 2011 KSII Photographic Color Reproduction Based on Color Variation Characteristics of Digital Camera

More information

Defocusing and Deblurring by Using with Fourier Transfer

Defocusing and Deblurring by Using with Fourier Transfer Defocusing and Deblurring by Using with Fourier Transfer AKIRA YANAGAWA and TATSUYA KATO 1. Introduction Image data may be obtained through an image system, such as a video camera or a digital still camera.

More information

Receiver Design for Passive Millimeter Wave (PMMW) Imaging

Receiver Design for Passive Millimeter Wave (PMMW) Imaging Introduction Receiver Design for Passive Millimeter Wave (PMMW) Imaging Millimeter Wave Systems, LLC Passive Millimeter Wave (PMMW) sensors are used for remote sensing and security applications. They rely

More information

Progressive Inter-scale and Intra-scale Non-blind Image Deconvolution

Progressive Inter-scale and Intra-scale Non-blind Image Deconvolution Progressive Inter-scale and Intra-scale Non-blind Image Deconvolution Lu Yuan 1 Jian Sun 2 Long Quan 1 Heung-Yeung Shum 2 1 The Hong Kong University of Science and Technology 2 Microsoft Research Asia

More information