Robust image restoration for rotary motion blur based on frequency analysis


Robust image restoration for rotary motion blur based on frequency analysis

Zheng Yuan,* Jianxun Li
Shanghai Jiao Tong University, Department of Automation, School of Electronic, Information, and Electrical Engineering, Dongchuan Rd. 800#, Shanghai 200240, China

Zhengfu Zhu
Optical Signature of Targets and Environments Key Laboratory, Yongding Rd. 52#, Haidian District, Beijing 100854, China

Abstract. Most previous spatial methods for deblurring rotary motion blur raise an overregularization problem in the solution of the deconvolution. We construct a frequency-domain framework to formulate the rotary motion blur. The well-conditioned frequency components are protected so as to avoid the overregularization. Then, Wiener filtering is applied to yield the optimal estimation of the original pixels under different noise levels. The identification of the rotary motion parameters is also presented. To detect the rotary center, we develop a zero-interval searching method that works on the degraded pixel spectrum and is robust to noise. The blur angle is iteratively calibrated by a novel divide-and-conquer method, which is computationally efficient. Furthermore, this paper presents a shape-recognition and linear surface fitting method to interpolate the missing pixels caused by circular fetching. Experimental results illustrate that the proposed algorithm outperforms spatial algorithms by 0.5 to 4 dB in the peak signal-to-noise ratio and the improvement of signal-to-noise ratio, and prove the methods for missing-pixel interpolation and parameter identification effective. © 2008 Society of Photo-Optical Instrumentation Engineers.

Subject terms: frequency response; regularization; rotary motion blur; shape recognition; Wiener filter; divide-and-conquer algorithm; power spectrum.

Paper 72R received Dec. 26, 2007; revised manuscript received Jun. 7, 2008; accepted for publication Jul. 2008; published online Sep. 8, 2008.

1 Introduction

Rotary motion blur is the result of the relative rotary motion between the camera and the scene during the integration time of image acquisition. This kind of blur is encountered in many areas where cameras are mounted on rotary platforms. Rotary motion blur also happens when the foreground object rotates with high velocity. Very often, the rotary motion blur is simply an undesired effect. It has plagued photography since its early days and is still considered an effect that can significantly degrade image quality. The rotary blur not only causes unpleasing effects for human eyes but also invalidates many recognition algorithms that demand high-quality images. Accordingly, serious problems emerge in tasks such as feature correspondence extraction and object recognition.

Rotary motion deblurring is used to recover the scene from the degraded image. Two considerations deserve particular attention in this work. The first issue is the noise incurred during image acquisition. It is well known that image restoration is mostly an ill-posed problem, i.e., noise can easily be amplified while the blur is being removed. The other issue is that the rotary motion parameters, including the rotary center and the blur angle (the rotary angle spanned within the camera exposure time), may not be precisely available. Thus, it is desirable for an algorithm to be robust to these parameters or, more preferably, able to recover them.

* yuanzheng625@sjtu.edu.cn; yuanzheng625@126.com
© 2008 SPIE.
1.1 Previous Work

During the past decades, the methods [1-3] to deblur motion-blurred images can be divided into two categories according to the motion-blur model, i.e., spatially invariant and spatially variant motion deblurring. For spatially invariant deblurring, if the point spread function (PSF) is known, the deblurring can be regarded as a deconvolution problem with a known convolution kernel. Accordingly, several algorithms, such as regularized least squares, Wiener filtering, and Lucy-Richardson deconvolution, can be employed to infer the clear image. The difficulty in the deconvolution is that the convolution kernel of motion blur is often severely ill-posed. Thus, Kang and Katsaggelos [4] propose to add a regularization term when minimizing the mean square error between the blurred image and the one resulting from the inferred clear image. Gull [5] discusses a maximum entropy constraint to deblur an astronomical image. Tsumuraya et al. [6] apply the Lucy-Richardson algorithm to reconstruct the precise image. Zarowin [7] modifies Van Cittert deconvolution into a more computationally efficient version. On the other hand, when the PSF is not available or is inaccurate, blind deconvolution methods should be employed. Normally, a two-step strategy is adopted: a roughly estimated blur kernel is applied and the image is then deconvolved. Li and Lii [8] propose an iterative algorithm that refines the blur kernel gradually until an acceptable version of the inferred image is obtained, together with the finalized blur kernel. Fergus et al. [9] present an ensemble learning method with prior knowledge modeling the distribution of the clear image gradient. The spatially variant motion deblurring is even more difficult.

The model of its blur kernel is far more complicated, i.e., the numerous variables in the kernel expand the solution space considerably and cannot produce a reliable kernel estimation, because the optimization easily gets stuck in local minima. Hansen et al. [10] decompose the spatially variant blur model into several local spatially invariant components. Their method simplifies the large linear equation system modeling the spatially variant blur kernel but does not explicitly reflect the physical mechanism of the motion blur. For the model of rotary motion blur, Sawchuk and Peyrovian [11] discuss the blur distribution as a radiation model, i.e., a pixel degrades to a greater extent the further it lies from the rotary center. Ribaric et al. [12] propose a coordinate transform restoration (CTR) method to transplant the well-established spatially invariant deblurring methods into the rotary blur field. However, the geometric transform requires the interpolation of a large number of pixels, which gives rise to serious estimation errors. Besides, the method is computationally expensive. In recent years, Hong and Zhang [13] introduce a fast method by utilizing the block circulant property of the blur kernel in the deconvolution. The method employs the Bresenham algorithm to fetch pixels aligned circularly and thus reduces the number of missing pixels caused by the conflict between the polar coordinates of a pixel and its square alignment during image acquisition.

Generally speaking, the methods [11-13] develop their algorithms based on spatial-domain analysis. Indeed, they achieve some robustness to noise by applying regularized least squares in the deconvolution. Nevertheless, these methods [12,13] only yield a compromised version of the inferred image in terms of clearness, i.e., although the rotary motion blur kernel is ill-posed, the regularization methods do not differentiate the ill-conditioned components from the kernel matrix. Thus, the regularization term often imposes an overregularization force on the well-conditioned part of the blur kernel. In addition, fetching pixels circularly still leaves a portion of pixels missing. Therefore, the interpolation of missing pixels should be considered as an integral part of the whole rotary motion deblurring. As for robustness to the rotary motion parameters, including the rotary center and the blur angle, Hong and Zhang [13] present a cross-correlation method to identify these parameters. However, their method gradually becomes invalid as the amount of noise increases.

Fig. 1 Formation of the rotary motion blur.

1.2 Overview of This Work

In this work, we propose a frequency-domain algorithm to deblur the rotary motion blur from a single image. Our algorithm tolerates inaccuracy in the rotary velocity, which is the most important factor shaping the PSF. In order to transfer the rotary motion blur into a combination of locally spatially invariant blurs, we divide the degraded image into a series of concentric circles around the rotary center. On every circle, the pixels undergo an equivalent amount of rotary blur. Our method performs on a circle basis. For each circle, a specific PSF is estimated from the radius and the rotary velocity. The method analyzes the rotary blur mechanism in the frequency-domain framework. Thus, the blur kernel is formulated through its frequency response.
Noting that the differentiation of the ill-conditioned components of the blur kernel is quantifiable in the frequency domain, our approach avoids the overregularization problem of the spatial-domain regularized least-squares methods. It further introduces Wiener filtering to achieve an optimal estimation of the clear image in the mean-square sense.

Our approach further identifies the rotary center and the rotary velocity (equivalently, the blur angle), thereby achieving robustness to them. Using the frequency-domain framework, it relocates the translated rotary center and calibrates a rough blur angle to its precise version. Based on the fact that the power spectrum of the pixels on each circle demonstrates uniformly distributed minima, we propose a zero-interval searching method to judge the rotary center within a searching window. This method is strongly tolerant to noise. Then, we recover the blur angle indirectly through the blur length. A divide-and-conquer algorithm is presented to iteratively calibrate the blur angle. This method effectively narrows the searching range of the blur length of each circle and is computationally efficient. In addition, to estimate the missing pixels resulting from circular fetching, we propose a hierarchical interpolation scheme, including shape recognition and linear surface fitting. The method utilizes the texture information of a neighborhood, so edges can be well preserved.

The rest of this paper is organized as follows. We formulate the rotary motion model through its frequency response in Section 2. The Wiener filtering deconvolution is described in Section 3. In Section 4, we explain the pixel interpolation scheme. Identification of the rotary motion blur parameters is discussed in Section 5. We show the results in Section 6. Finally, we conclude our methods in Section 7.

2 Analysis of the Rotary Motion Blur Mechanism

This section constructs a geometric model to represent the relative motion between the camera and the scenery. In this work, we consider their relative motion as rotation around a center with constant velocity and take the camera as the stationary reference object. As shown in Fig. 1, let o be the rotary center and a (marked by a rectangle) be a single pixel on the observed image plane. Many scenery points (marked by crosses) may pass through the image pixel a within the camera exposure time. All their luminosities contribute to the final intensity recorded by the pixel a.

Particularly, under the circumstance of rotation with constant velocity, the intensity of a degraded pixel can be seen as the luminosity integral of the scenery points passing through it. Obviously, these scenery points follow an arc-shaped trace, whose length is proportional to both its radius and the angle spanned within the exposure time T. Therefore, it is reasonable to calculate the degraded pixel intensity in a polar coordinate system,

g(r,\theta) = \frac{1}{T}\int_0^T f(r,\theta-\omega t)\,dt + \eta(r,\theta),   (1)

where g(r,\theta) is the intensity of a degraded pixel with coordinate (r,\theta), f(r,\theta-\omega t) represents the luminosity of the scenery points passing through, \omega denotes the angular velocity of the rotary motion, and \eta(r,\theta) is additive Gaussian white noise. Evidently, Eq. (1) represents the PSF of the rotary motion blur spatially. In brief, under rotary motion blur the pixel intensity degrades from the luminosity of just one scenery point to an average over many different scenery points along an arc. Noticeably, the amount of degradation is indicated by the number of scenery points passing through, which is determined by three prior parameters: the rotary radius r, the camera angular velocity \omega, and the camera exposure time T.

In a degraded image, pixels at different locations share the same values of \omega and T but different r, which accounts for the fact that the rotary motion raises spatially variant degradation. However, for those pixels aligned on the same circle (the same r), the rotary motion blur exerts an equivalent amount of degradation. Therefore, we disintegrate the degraded image into a group of concentric circles around the rotary center. Then, the rotary blur becomes locally spatially invariant on each circle, with a specific PSF. Provided that we perform similar deblurring procedures on different circles, it is sufficient to detail the deblurring algorithm within one circle. Thus, we remove the radius r from Eq. (1), replace the angle \theta with the arc length s = r\theta, and change t to l = r\omega t, which yields

g(s) = \frac{1}{L}\int_0^L f(s-l)\,dl + \eta(s),   (2)

where L denotes the length of the arc-shaped trace. Given the discrete property of a digital image, it makes sense to approximate the integral in Eq. (2) by a discrete summation. In this work, the deblurring is performed on the pixel level. Hence, we replace the continuous integral element dl with the discrete pixel index i to obtain the discrete form of the blur function,

g(n) = \frac{1}{L}\sum_{i=0}^{L-1} f(n-i) + \eta(n),   (3)

where g(n) denotes the n-th degraded pixel along a circle and f(n-i) is the luminosity of the scenery point located i pixels away from g(n), circularly. Here, the length L of the arc-shaped trace can be interpreted as the number of scenery points passing through during the camera exposure. Since L relates to the blur amount of each circle, we name it the blur length. Determining how to retrieve the original pixel intensity f(n) from its degraded version g(n) is the core task of our work.

In our work, we choose the Bresenham algorithm [14] to fetch discrete pixels, circle by circle, from the degraded image, because it demonstrates a promising accuracy in assembling a series of orthogonally aligned points into a circle. However, for a whole image covered by multiple concentric circles, there is a series of pixels that cannot be covered by any of the circles. To tackle this problem, we develop an interpolation method, discussed in Section 4.
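To make the circle-by-circle formulation concrete, here is a minimal Python sketch (my own illustration, not code from the paper): a midpoint-circle rasterizer stands in for the Bresenham fetch, and the circular averaging of Eq. (3) is applied to the samples collected along one circle. The helper names bresenham_circle and rotary_blur_circle, and the use of NumPy, are assumptions.

```python
import numpy as np

def bresenham_circle(cx, cy, r):
    """Rasterize a circle of radius r around (cx, cy) with the midpoint
    (Bresenham-type) algorithm and return the pixels ordered by polar angle,
    so that consecutive entries are neighbours along the circle."""
    pts = set()
    x, y, d = r, 0, 1 - r
    while x >= y:
        for ox, oy in ((x, y), (y, x), (-y, x), (-x, y),
                       (-x, -y), (-y, -x), (y, -x), (x, -y)):
            pts.add((cx + ox, cy + oy))          # (x, y) = (column, row) offsets
        y += 1
        if d < 0:
            d += 2 * y + 1
        else:
            x -= 1
            d += 2 * (y - x) + 1
    return sorted(pts, key=lambda p: np.arctan2(p[1] - cy, p[0] - cx))

def rotary_blur_circle(f_circle, L):
    """Discrete rotary blur of Eq. (3): every output sample is the average of
    the sample itself and the L-1 samples preceding it along the circle."""
    N = len(f_circle)
    h = np.zeros(N)
    h[:L] = 1.0 / L                               # kernel h(n): L ones scaled by 1/L
    # circular convolution via the DFT, because the blur wraps around the circle
    return np.real(np.fft.ifft(np.fft.fft(f_circle) * np.fft.fft(h)))
```

Applying rotary_blur_circle to the samples returned by bresenham_circle for every radius reproduces the spatially variant blur of Eq. (1) circle by circle.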
3 Frequency Domain Wiener Filtering

This section discusses the deblurring with known rotary parameters, i.e., the rotary center and the precise blur angle. As for the acquisition of these parameters in real cases, Section 5 elaborates the methods to identify them.

To formulate the rotary motion blur in the frequency domain, the discrete Fourier transform (DFT) is applied to Eq. (3). In the frequency-domain framework, the conventional deconvolution that estimates the original pixels is implemented by inverse filtering, with the filter response equal to the reciprocal of the rotary blur frequency response. Because the rotary blur frequency response can be precisely derived from the rotary velocity and the camera exposure time, we may easily distinguish the zero (ill-conditioned) frequencies from the nonzero (well-conditioned) frequencies. Accordingly, the ill-conditioned frequencies are filtered out directly, whereas the other frequencies are adjusted optimally under a certain amount of noise. In brief, we obtain an optimal estimation of the frequency spectrum of the original pixels in the mean-square sense. Since our approach effectively protects the well-conditioned frequencies, it does not impose overregularization on the deblurring solution.

Equation (3) can be reorganized in convolution form,

g(n) = h(n) * f(n) + \eta(n),   (4)

where * is the circular convolution operator and the convolution kernel h(n) = (1/L)[1, 1, ..., 1, 0, ..., 0] consists of L ones (scaled by 1/L) and N-L zeros. Transforming Eq. (4) into the frequency domain using the DFT yields

G(k) = H(k)F(k) + \eta(k),   (5)

where \eta(k) denotes the DFT of the noise and H(k) is the frequency response of the rotary motion blur,

H(k) = \frac{1}{L}\sum_{i=0}^{L-1} e^{-j(2\pi/N)ik} = \frac{1}{L}\,\frac{1-\exp[-j(2\pi/N)Lk]}{1-\exp[-j(2\pi/N)k]},
H(0) = 1, \qquad H(k) = 0 \ \text{for}\ k = N/L,\ 2N/L,\ \ldots,   (6)

where N is the circumference of the circle and the blur length L equals the product of N and the blur angle expressed as a fraction of a full revolution.
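The zeros of Eq. (6) can be located numerically. The sketch below (an illustration under my own naming, not the paper's code) evaluates H(k) directly from its definition and reports the ill-conditioned frequencies k = N/L, 2N/L, ...

```python
import numpy as np

def blur_frequency_response(N, L):
    """Frequency response of Eq. (6): H(k) = (1/L) * sum_{i=0}^{L-1} exp(-j*2*pi*i*k/N).
    Returns H and the indices of its ill-conditioned zeros."""
    k = np.arange(N)
    i = np.arange(L)
    H = np.exp(-2j * np.pi * np.outer(k, i) / N).sum(axis=1) / L
    zeros = np.where(np.isclose(np.abs(H), 0.0, atol=1e-10))[0]
    return H, zeros

# example: a circle of N = 360 pixels blurred over L = 30 pixels
H, zeros = blur_frequency_response(360, 30)
print(zeros[:5])   # multiples of N/L = 12: [12 24 36 48 60]
```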

It should be noted that F(k), G(k), and H(k) denote the input, the output, and the rotary motion blur system, respectively. At the specific frequency points k where kL/N equals an integer, H(k) equals zero. As a result, the information contained in F(k) can never be delivered to the output G(k), because H(k) decays F(k) to zero. Thus, these frequency zeros are called ill-conditioned frequencies. On the other hand, when H(k) does not equal zero, the rotary motion blur serves as a linear factor that maps the input information proportionally to the output. Noticeably, the noise term contaminates the linearity between the output G(k) and F(k). We can possibly retrieve the input F(k) from G(k) if the noise is confined within a certain range.

The regularization here is precisely oriented. We first screen out the ill-conditioned frequencies to prevent the amplification of noise. Then, for the well-conditioned frequencies, the statistical properties of the noise and the degraded pixels are utilized to attain the largest possible removal of noise. In this work, we suppose the signal-to-noise ratio (SNR) can be roughly estimated. Following Wiener filtering, we obtain an optimal recovery of the original pixels in the mean-square sense,

\hat f(n) = w(n) * g(n),   (7)

where w(n) is the Wiener filter kernel and \hat f(n) denotes the inferred pixel intensity approximating the original pixel f(n) at the same coordinates on the image array. Considering the noise incurred in most natural scenarios, we assume it conforms to the Gaussian white-noise model with zero mean and covariance \sigma^2. The noise model is a stationary ergodic process with a roughly constant power spectral density. Also, the incurred noise does not correlate with the rotary motion; thereby,

E[(h*f)(n)\,\eta(n)] = E[(h*f)(n)]\,E[\eta(n)] = 0.   (8)

Under the constraint that the filter output is the optimal estimation of the original signal in the mean-square sense, the Wiener filter is subject to the following equation [15]:

E\{[f(n) - (w*g)(n)]\,g(m)\} = 0,   (9)

which means that the error of the inferred pixels with respect to their real intensity values is statistically uncorrelated with the degraded pixel intensity. To yield the explicit form of the Wiener filter, Eq. (9) is deduced as follows:

E[f(n)g(m)] - E[(w*g)(n)\,g(m)] = 0,
E[f(n)g(m)] - \sum_i w(i)\,E[g(n-i)g(m)] = 0,
R_{fg}(n) - w(n) * R_g(n) = 0,   (10)

where R_{fg}(n) is the cross-correlation function of the original pixels with their degraded versions and R_g(n) denotes the autocorrelation function of the degraded pixels. Note that the correlation function and the power spectral density (PSD) form a DFT pair; thus, applying the DFT to Eq. (10),

S_{fg}(k) - W(k)S_g(k) = 0,   (11)

where S_{fg}(k) is the cross spectrum of the original pixels with their degraded versions, S_g(k) is the PSD of the degraded pixels, and W(k) is the frequency response of the optimal Wiener filter,

W(k) = \frac{S_{fg}(k)}{S_g(k)}.   (12)

The cross spectrum S_{fg} can be calculated from the cross-correlation of the original pixels f with the degraded pixels g,

E[g(n)f(m)] = E[(h*f)(n)f(m)] + E[\eta(n)f(m)] = \sum_i h(i)\,E[f(n-i)f(m)],
R_{fg}(n-m) = h(m-n) * R_f(n-m), \qquad S_{fg}(k) = H^*(k)\,S_f(k).   (13)

Also, S_g can be retrieved from the autocorrelation of the degraded pixels g,

E[g(n)g(m)] = E[(h*f)(n)(h*f)(m)] + E[\eta(n)\eta(m)],
R_g(n-m) = [h * h^-](n-m) * R_f(n-m) + R_\eta(n-m), \qquad S_g(k) = |H(k)|^2 S_f(k) + S_\eta(k),   (14)

where h^-(n) = h(-n). Integrating Eqs. (12)-(14), the optimal Wiener filter is equal to

W(k) = \frac{S_{fg}(k)}{S_g(k)} = \frac{H^*(k)S_f(k)}{|H(k)|^2 S_f(k) + S_\eta(k)} = \frac{1}{H(k)}\,\frac{|H(k)|^2}{|H(k)|^2 + S_\eta(k)/S_f(k)}.   (15)
In practice, the power spectrum S_f of the original pixels is unavailable, especially its values at the ill-posed frequency points. Hence, we utilize the prior knowledge of the SNR and replace S_\eta/S_f with S_\eta/S_g to produce the parametric Wiener filter W_p. Its frequency response is slightly larger than that of the optimal Wiener filter in the high-frequency domain,

W_p(k) = \frac{1}{H(k)}\,\frac{|H(k)|^2}{|H(k)|^2 + 1/\mathrm{SNR}} = \frac{1}{H(k)}\,\frac{|H(k)|^2}{|H(k)|^2 + S_\eta(k)/S_g(k)}.   (16)
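A compact sketch of Eq. (16) is given below, assuming the blur length L and a rough SNR estimate are supplied; the function name parametric_wiener_deblur and the tolerance eps used to screen out the ill-conditioned frequencies are my own choices, not the paper's.

```python
import numpy as np

def parametric_wiener_deblur(g_circle, L, snr, eps=1e-8):
    """Deblur the pixels of one circle with the parametric Wiener filter of
    Eq. (16).  `snr` is the (roughly estimated) signal-to-noise power ratio.
    Ill-conditioned frequencies, where H(k) is essentially zero, are screened
    out instead of being inverted."""
    g_circle = np.asarray(g_circle, dtype=float)
    N = len(g_circle)
    h = np.zeros(N)
    h[:L] = 1.0 / L
    H = np.fft.fft(h)                      # frequency response of the blur, Eq. (6)
    G = np.fft.fft(g_circle)
    W = np.zeros(N, dtype=complex)
    good = np.abs(H) > eps                 # well-conditioned frequencies only
    W[good] = (1.0 / H[good]) * (np.abs(H[good])**2 /
                                 (np.abs(H[good])**2 + 1.0 / snr))
    F_hat = W * G                          # zeros of H contribute nothing
    return np.real(np.fft.ifft(F_hat))
```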

Within the low-frequency domain of the degraded pixel power spectrum, the blur term is the main contributor owing to its high ratio to the noise power. Thus, the ratio S_\eta/S_f is almost equal to S_\eta/S_g, and

W_p(k) \approx \frac{S_{fg}(k)}{S_g(k)} = W(k) \quad \text{at low frequencies}.   (17)

Indeed, the Gaussian white noise dominates the high-frequency domain and overrides the blur term there, so the error between S_\eta/S_f and S_\eta/S_g is considerable at high frequencies. However, the high-frequency power of S_g is trivial compared to the low-frequency components and is negligible when added to the low-frequency spectrum. The same holds for S_{fg}, considering that its high-frequency spectrum is nearly zero, as indicated by Eq. (13). Finally, we examine the constraint of the optimal filter in Eq. (10) under the parametric filter,

R_{fg}(n) - (w_p * R_g)(n) = \sum_{k \in \mathrm{low}} [S_{fg}(k) - W_p(k)S_g(k)]\,e^{j(2\pi/N)kn} + \sum_{k \in \mathrm{high}} [S_{fg}(k) - W_p(k)S_g(k)]\,e^{j(2\pi/N)kn}
 \approx \sum_{k \in \mathrm{low}} [S_{fg}(k) - W_p(k)S_g(k)]\,e^{j(2\pi/N)kn} \approx 0,   (18)

which means the parametric Wiener filter W_p only slightly perturbs the constraint satisfied by the optimal filter. In conclusion, the high-frequency error of W_p is well confined, and the filter output still approximates the original pixels with sufficient accuracy in the mean-square sense.

Table 1 presents the procedure of the proposed frequency-domain algorithm to deblur a rotary-motion-degraded image.

Table 1 The procedure of the frequency-domain algorithm.

Step 1. Disintegrate the degraded image into a group of concentric circles; initialize g(n) as the intensity of the pixels on the first circle (the innermost circle of the group) and calculate its DFT G(k).
Step 2. Calculate the blur length L of the circle according to its circumference and the blur angle.
Step 3. Deduce the blur function according to Eq. (6) and eliminate the zeros of its frequency spectrum.
Step 4. Produce the parametric Wiener filter for this circle according to Eq. (16) and calculate the filtering output \hat F(k).
Step 5. Calculate the deblurred pixels by the inverse discrete Fourier transform (IDFT) of \hat F(k).
Step 6. If g(n) denotes the pixels on the last circle, stop; otherwise, update g(n) as the pixels on the next circle and go to Step 2.
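The loop in Table 1 might be driven as in the following sketch, which reuses the hypothetical helpers bresenham_circle and parametric_wiener_deblur from the earlier sketches; it assumes the circles stay inside the image and leaves the missing-pixel interpolation of Section 4 to a later pass.

```python
import numpy as np

def deblur_rotary_image(degraded, center, blur_angle_deg, snr, max_radius):
    """Per-circle procedure of Table 1 (sketch).  `degraded` is a 2-D array
    indexed [row, col]; the circles fetched around `center` must lie inside it."""
    restored = np.array(degraded, dtype=float)
    cx, cy = center
    for r in range(1, max_radius + 1):                     # Steps 1/6: circle loop
        coords = bresenham_circle(cx, cy, r)               # (x, y) = (col, row)
        g = np.array([degraded[y, x] for (x, y) in coords])
        N = len(g)
        L = max(1, int(N * blur_angle_deg / 360.0))        # Step 2: blur length
        f_hat = parametric_wiener_deblur(g, L, snr)        # Steps 3-5
        for (x, y), v in zip(coords, f_hat):
            restored[y, x] = v
    return restored
```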
4 Interpolation of Missing Pixels

Before the degraded pixels are deblurred using Wiener filtering, discrete pixels have to be fetched circularly. This procedure is performed by the Bresenham algorithm. The computer graphics literature [14] shows that this algorithm guarantees that the pixels fetched within one circle have neither overlap nor loss. However, when different concentric circles are integrated into one image, a portion of the pixels cannot be covered by any of the circles. This pixel-missing problem is shown in Fig. 2. It is caused by the conflict between the circular alignment of the fetched pixels and their essentially orthogonal alignment during image acquisition. Hence, the distribution of the missing pixels depends exclusively on the Bresenham algorithm rather than on the content of the image. Statistical analysis shows that, in a square image, 50% of the missing pixels have a 3 × 3 neighborhood containing no other missing pixels, and the rest have only one or two other missing pixels in their neighborhoods. In this section, a missing pixel is interpolated based on a hierarchical strategy that preserves the texture information of its neighborhood. If the neighborhood of a missing pixel is intact, the shape-recognition method is adopted; otherwise, the surface-fitting method is applied.

Fig. 2 The missing pixels (black pixels on the white background) caused by Bresenham circular fetching.

4.1 Shape Recognition with Eight Neighboring Pixels

Figure 3 depicts a 3 × 3 neighborhood centered on the missing pixel.

Fig. 3 Four kinds of shapes in a 3 × 3 neighborhood unit: (a) H, (b) V, (c) PD, and (d) ND.

The shape-recognition method first recognizes the shape contained in the neighborhood and then interpolates the missing pixel based on the recognized shape. Because shape is an expression of the texture information of a neighborhood, this method is especially powerful in preserving object edges against the background. We start by considering a neighborhood situated right on the edge of an object, and then show that the method extends to the case where the neighborhood lies inside an object region.

In our work, the intensity of a missing pixel is viewed as a random variable f. Compared to the size of the whole image, a 3 × 3 neighborhood is small enough to be viewed as a primitive shape. Thus it contains only four kinds of edge shapes: a horizontal line (H), a vertical line (V), a positive diagonal line (PD), and a negative diagonal line (ND), shown in Fig. 3. Four parameters are defined as the numeric features of the different shapes that the 3 × 3 unit may contain:

G_H = |f_6 - f_4| + |f_4 - f_1| + |f_7 - f_2| + |f_8 - f_5| + |f_5 - f_3|,
G_V = |f_3 - f_2| + |f_2 - f_1| + |f_5 - f_4| + |f_8 - f_7| + |f_7 - f_6|,
G_PD = |f_4 - f_2| + |f_6 - f_3| + |f_7 - f_5|,
G_ND = |f_5 - f_2| + |f_8 - f_1| + |f_7 - f_4|,   (19)

where G_H, G_V, G_PD, and G_ND are the numeric features for the shapes H, V, PD, and ND, respectively, and f_i is the intensity of pixel i (pixel indices shown in Fig. 3). In essence, these numeric features are Laplace-type edge indicators. The higher the value of a numeric feature, the more likely that an edge of the corresponding shape is present. We transform them into probabilities by normalizing,

P_H = G_H / (G_H + G_V + G_PD + G_ND),
P_V = G_V / (G_H + G_V + G_PD + G_ND),
P_PD = G_PD / (G_H + G_V + G_PD + G_ND),
P_ND = G_ND / (G_H + G_V + G_PD + G_ND).   (20)

They all lie in the interval [0, 1]. Because the missing pixel correlates strongly with the edge shape rather than with the entire 3 × 3 neighborhood, we present the interpolation equations

H:  f_{H-lost} = (f_4 + f_5)/2,
V:  f_{V-lost} = (f_2 + f_7)/2,
PD: f_{PD-lost} = (f_1 + f_8)/2,
ND: f_{ND-lost} = (f_3 + f_6)/2.   (21)

The intensity of the missing pixel can then be estimated by its mathematical expectation,

\hat f = \sum_i P_i f_i = P_H f_{H-lost} + P_V f_{V-lost} + P_PD f_{PD-lost} + P_ND f_{ND-lost}.   (22)

Combining Eq. (21) with Eq. (22) yields

\hat f = \frac{1}{2}\,\frac{G_H(f_4+f_5) + G_V(f_2+f_7) + G_PD(f_1+f_8) + G_ND(f_3+f_6)}{G_H + G_V + G_PD + G_ND}.   (23)

The proposed shape-recognition method is also justified when the 3 × 3 neighborhood lies in the interior region of an object. Because an interior region usually has a smooth texture, which implies that the neighboring pixels have almost the same intensity, it is reasonable to predict the missing pixel intensity as the mean of the surrounding pixels. On the other hand, according to Eq. (19), G_H, G_V, G_PD, and G_ND are then nearly equal as well, and Eq. (23) regresses to the average of the eight surrounding pixels. Therefore, Eq. (23) is in accordance with this prediction. In general, Eq. (23) can estimate the intensity of the missing pixels.
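A direct transcription of Eqs. (19)-(23) into code could look as follows; the 3 × 3 index layout in the comment is my reading of Fig. 3 (not reproduced here) and should be adjusted if the actual layout differs.

```python
import numpy as np

# Assumed index layout around the missing centre pixel:
#   f1 f2 f3
#   f4  .  f5
#   f6 f7 f8
def shape_recognition_interpolate(nb):
    """nb is a 3x3 NumPy array whose centre value is missing and whose eight
    neighbours are valid.  Implements Eqs. (19)-(23): the edge-shape features
    G_H, G_V, G_PD, G_ND weight the four directional interpolants."""
    f1, f2, f3 = nb[0, 0], nb[0, 1], nb[0, 2]
    f4, f5 = nb[1, 0], nb[1, 2]
    f6, f7, f8 = nb[2, 0], nb[2, 1], nb[2, 2]

    G_H  = abs(f6-f4) + abs(f4-f1) + abs(f7-f2) + abs(f8-f5) + abs(f5-f3)
    G_V  = abs(f3-f2) + abs(f2-f1) + abs(f5-f4) + abs(f8-f7) + abs(f7-f6)
    G_PD = abs(f4-f2) + abs(f6-f3) + abs(f7-f5)
    G_ND = abs(f5-f2) + abs(f8-f1) + abs(f7-f4)

    total = G_H + G_V + G_PD + G_ND
    if total == 0:                      # perfectly flat neighbourhood
        return np.mean([f1, f2, f3, f4, f5, f6, f7, f8])
    # Eq. (23): weighted combination of the directional estimates of Eq. (21)
    return 0.5 * (G_H*(f4+f5) + G_V*(f2+f7) +
                  G_PD*(f1+f8) + G_ND*(f3+f6)) / total
```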
4.2 Linear Surface Fitting with Six or Seven Neighboring Pixels

Assume that the point (m_i, I_i) denotes one of the remaining pixels, where m_i is a vector denoting its coordinates in the neighborhood and I_i is its intensity. Since a 3 × 3 neighborhood is a small domain, it is justified to consider that the pixels inside it have close intensity values. Therefore, we fit a linear surface to these points and accordingly estimate the missing pixel intensity, as shown in Fig. 4.

Fig. 4 The linear surface used to estimate the missing pixel.

In this work, we fit the linear surface by least squares. The linear surface constructed from the intensities and coordinates of the neighboring pixels is modeled as

\hat f_i = x_1 m_{i1} + x_2 m_{i2} + x_3 = m_i^T x, \qquad x = (x_1, x_2, x_3)^T, \qquad m_i = (m_{i1}, m_{i2}, 1)^T,   (24)

where \hat f_i is the estimated intensity of pixel i, x is the coefficient (normal) vector of the linear surface, and m_i is the augmented coordinate vector of pixel i in the neighborhood. To solve for the coefficient vector x, the objective function is chosen as the minimization of the difference between the estimated and actual intensities, as defined by the least-squares method [16],

F(x) = \sum_{i=1}^{N} (\hat f_i - I_i)^2,   (25)

where I_i is the actual intensity of pixel i and N is the number of remaining neighboring pixels; thus N equals 6 or 7. Based on the least-squares fitting solution [17], we derive the surface coefficient vector as

\hat x = \arg\min_x F(x) = (A^T A)^{-1} A^T I,   (26)

where A = (m_1, ..., m_N)^T is the coordinate matrix of the remaining pixels and I = (I_1, ..., I_N)^T collects their intensities. Given the coordinate vector m^* of the missing pixel, we combine Eq. (24) with Eq. (26) to retrieve its intensity,

\hat f^* = (m^*)^T \hat x.   (27)
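Equations (24)-(27) amount to an ordinary least-squares plane fit, e.g., as sketched below with NumPy's lstsq; the function name and the (row, column) coordinate convention are assumptions.

```python
import numpy as np

def surface_fit_interpolate(coords, intensities, missing_coord):
    """Least-squares plane fit of Eqs. (24)-(27).  `coords` holds the (row, col)
    positions of the 6 or 7 remaining neighbours, `intensities` their values,
    and `missing_coord` the position of the pixel to estimate."""
    coords = np.asarray(coords, dtype=float)
    I = np.asarray(intensities, dtype=float)
    A = np.column_stack([coords, np.ones(len(coords))])    # rows m_i^T = (m_i1, m_i2, 1)
    x, *_ = np.linalg.lstsq(A, I, rcond=None)              # x = (A^T A)^{-1} A^T I, Eq. (26)
    m_star = np.append(np.asarray(missing_coord, float), 1.0)
    return float(m_star @ x)                               # Eq. (27): f* = m*^T x
```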
5 Rotary Motion Parameter Identification

The rotary center and the blur angle are the parameters that determine the PSF of the rotary motion blur as well as the deblurring performance. The rotary center guarantees that the intensities of the circularly fetched pixels are subject to Eq. (3). The blur angle relates to the blur length, which is a direct parameter of the Wiener filter.

5.1 Identification of the Rotary Center

Because the pixels fetched circularly around the rotary center correlate with each other through one uniform rotary blur function, their spectrum should demonstrate a regular pattern. On the contrary, for pixels on circles fetched around other points, the intensities do not degrade in accordance with the physical mechanism of the rotary motion. Thus, they may be regarded as randomly selected, and no regular pattern is contained in their intensities.

In the frequency-domain framework, we look for the pattern in the power spectrum of the degraded pixels on a circle. The pattern is a series of minima distributed uniformly along the frequency axis with a constant interval, and this interval remains unchanged among the spectra of different circles. As illustrated by Eq. (14), the degraded pixel spectrum S_g includes the blur term |H(k)|^2 S_f and the noise term S_\eta. Since the frequency zeros are distributed uniformly over H(k), at the points where kL/N = 1, 2, ..., L-1, the blur term also contains uniform zeros at these frequencies, and the interval of the zeros \Delta k equals N/L. N is proportional to L with a factor given by the blur angle, which is constant among different circles. Hence, different circles share equivalent zero intervals in their blur terms. In the presence of Gaussian white noise with a power spectrum that is roughly constant over all frequencies, the zeros in the blur term are raised by an equal amount and become minima in the degraded pixel spectrum. Because these minima originate from the zeros of H(k), we still call them zeros, and their spacing the zero interval, for convenience. In contrast, for circles around other points, the correlation of the degraded pixels declines drastically as the radius grows. Hence, no obvious zero interval exists in their spectra. Therefore, we identify the rotary center based on the existence of a constant zero interval on the concentric circles.

An interval-searching method is introduced to evaluate the existence of the zero interval on a circle. The method is robust to noise because noise with a constant power spectrum only affects the value of the minima rather than their interval. Nevertheless, the power spectrum of the noise may fluctuate slightly along the frequency axis. Indeed, the fluctuation may impair a particular minimum by making it not low enough to be distinguished from the neighboring frequencies, which makes a direct search for minima unreliable. However, because the noise conforms to the Gaussian white-noise model, the random fluctuation of its power spectrum has an expectation of zero; in other words, the fluctuation may also sharpen the salience of other minima by enhancing their neighboring frequencies. Therefore, the mean value of a frequency group with a constant interval is chosen to evaluate the existence of a valid zero interval; the mean is much more robust to noise. Its definition is given by Eq. (28). The minimum is attained when a candidate interval equals the zero interval. Moreover, the mean decreases to its minimum value when H(k), i.e., the blur term, equals zero, so that only the noise term remains in Eq. (14). Thus, the minimum value equals the covariance of the noise,

\bar S_g(\Delta k) = \frac{1}{M}\sum_{i=1}^{M} S_g(i\,\Delta k), \qquad M = \mathrm{int}(N/\Delta k), \qquad \min_{\Delta k}\bar S_g(\Delta k) = S_\eta = \sigma^2,   (28)

\Delta k_{zero} = \arg\min_{\Delta k} \bar S_g(\Delta k).   (29)

Fig. 5 The standard image cars.
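One way to implement the spectrum-mean test of Eqs. (28) and (29) is sketched below; the search range for candidate intervals, the PSD normalization, and the acceptance factor (1.5 times the noise covariance, anticipating the threshold used in the experiments of Section 6.3) are assumptions to be tuned.

```python
import numpy as np

def zero_interval_search(g_circle, noise_var, factor=1.5, dk_min=4):
    """Spectrum-mean test of Eqs. (28)-(29): for every candidate interval dk,
    average the degraded-pixel power spectrum at dk, 2*dk, ...; accept the
    best interval when its mean is close to the noise covariance."""
    g = np.asarray(g_circle, dtype=float)
    N = len(g)
    S_g = np.abs(np.fft.fft(g))**2 / N          # PSD estimate (normalization must match noise_var)
    best_dk, best_mean = None, np.inf
    for dk in range(dk_min, N // 2):
        idx = np.arange(dk, N // 2, dk)         # frequency group dk, 2*dk, ... (one-sided)
        m = S_g[idx].mean()
        if m < best_mean:
            best_dk, best_mean = dk, m
    valid = best_mean < factor * noise_var      # e.g. 1.5 * sigma^2
    return best_dk, best_mean, valid
```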

We search over each candidate zero interval \Delta k and calculate the mean of the degraded pixel spectrum at the uniform frequencies produced by \Delta k. If the minimum mean is close to the covariance of the noise, we assume a valid zero interval exists. Then, each circle votes on the existence of the zero interval, and the candidate with the highest number of ballots is judged to be the rotary center. The zero-interval searching method is outlined as follows:

for each possible rotary center candidate
    for each concentric circle
        for each possible zero interval
            extract the frequency group produced by the zero interval and calculate the mean value of the degraded pixel spectrum at these frequencies;
        end
        search out the zero interval with the minimum mean;
        if the minimum is close to the noise covariance
            a valid zero interval exists on this circle; vote for the interval existence;
        otherwise
            no valid zero interval exists on this circle; vote against the interval existence;
        end
    end
    compute the number of circles voting for the existence of a valid zero interval;
end
judge the center candidate with the highest ballot as the correct rotary center.

5.2 Identification of the Blur Angle

The blur angle is identified indirectly by calibrating the blur length of each circle. In the context of a digital image, the blur length of a circle is nearly linear in the blur angle,

L = \mathrm{int}(N\theta/360), \qquad \theta = \omega T,   (30)

where int(.) rounds a number down to its nearest integer, N is the circle circumference, and \theta denotes the blur angle, which is the product of the rotary angular velocity \omega and the exposure time T. We propose a divide-and-conquer algorithm that iteratively rectifies the calculated blur length of each circle and ultimately yields the precise blur angle. In the divide step, we decompose a large error set of the calculated blur length into several small subsets; thus, searching for the error of a blur length requires investigating only one such subset. In the conquer step, we rule out false blur lengths based on the spectrum of the pixels in the inferred image. This algorithm is computationally efficient.

Fig. 6 Degraded images with SNR of (a) 40, (b) 30, and (c) 20 dB.

The blur angle can be roughly calculated from the identified zero interval. According to Eq. (30), if a blur angle suffers from some error, the resultant deviation in the blur length is amplified by the circumference N. As the parameter of the Wiener filter, the blur length L is the most essential factor shaping the filter response. Because L is represented by an integer in the context of a digital image, its error is denoted by integers as well. For inner circles with small circumferences, the error of the blur angle merely engenders a moderate deviation of the blur length. However, for outer circles, the error in the blur angle is amplified by a large N, and a large deviation of the blur length L arises. Given a rough blur angle and the circle circumference, we may calculate the rough blur length of a circle. Then, an error set (a set of integers) is assigned to denote the deviation of the rough blur length from its correct value. For a certain error range of the rough blur angle, the corresponding error range of the blur length is deduced through multiplication by the circle circumference.

Fig. 7 Deblurring results: (a)-(c) deblurred images using the spatial algorithm; (d)-(f) deblurred images using the proposed frequency algorithm; SNR of (a), (d) 40, (b), (e) 30, and (c), (f) 20 dB.

Noting that the circumference increases gradually from one circle to the next away from the rotary center, the deviation accumulates slowly rather than hopping abruptly. In other words, the error in the blur length of an outer circle is larger than that of its adjacent inner circle, but the increase is small. We demarcate the degraded image into several adjacent ring belts (see Fig. 13), subject to the following three conditions:

1. Classification condition: each belt is made of several circles with the same error set, which is also the belt error set. Circles in different belts have different error sets.
2. Adjacency condition (for iteration): for adjacent belts, the error set of the outer belt is enlarged from that of the inner belt by only two additional elements, namely the maximum of the inner belt plus one and the minimum minus one.
3. Initialization condition: the innermost belt has an error set of three elements, namely -1, 0, and 1.

Explicitly, the i-th belt has the error set {-i, ..., -1, 0, 1, ..., i}. Given the rough blur angle \theta_r and its error range \Delta\theta_r, the circumferences N of the circles on belt i satisfy

\frac{N(\theta_r + \Delta\theta_r)}{360} - \frac{N(\theta_r - \Delta\theta_r)}{360} \le 2i, \qquad \text{i.e.,} \qquad (i-1)\,\frac{360}{\Delta\theta_r} < N \le i\,\frac{360}{\Delta\theta_r},   (31)

which means that the difference between the maximum and the minimum possible blur length on the i-th belt is not more than 2i, so the deviation from the rough blur length is at most i. Admittedly, the constraint imposed by Eq. (31) is slightly loose, because the maximum-minimum blur length difference is larger than the actual blur-length error, thereby expanding the error set. However, the searching efficiency is not impaired, because the algorithm narrows the search for the blur-length error to one small subset with two or three elements. The adjacency condition guarantees that the blur-length error increases by at most one between adjacent belts. Suppose belt i has error j; we decompose the error set of belt i+1, {-(i+1), ..., -1, 0, 1, ..., i+1}, into the subsets {-(i+1), -i}, ..., {-2, -1}, {-1, 0}, {0, 1}, {1, 2}, ..., {i, i+1}. In fact, only {j, j+1} ({j-1, j} when j is negative) needs to be investigated.
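The divide step can be summarized in a few lines: map a circle to its belt and enumerate the candidate blur lengths for the error subset currently under test. The sketch below is an illustration under the stated belt definition; the function names are my own.

```python
import math

def belt_index(N, delta_theta_deg):
    """Divide step: with belts of width 360/delta_theta in circumference
    (cf. Eq. (31)), a circle of circumference N falls on belt i when
    (i-1)*360/delta_theta < N <= i*360/delta_theta."""
    width = 360.0 / delta_theta_deg          # e.g. 72 for an error range of +/- 5 deg
    return int(math.ceil(N / width))

def candidate_blur_lengths(N, theta_rough_deg, error_subset):
    """Rough blur length from Eq. (30) shifted by the error subset currently
    under investigation, e.g. (-1, 0, 1) on belt 1 or (j, j+1) on an outer belt."""
    L_rough = int(N * theta_rough_deg / 360.0)
    return [L_rough + e for e in error_subset]
```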

Under the initialization condition, we start by searching for the error of belt 1 within {-1, 0, 1} and repeat the search iteratively until the error of the last belt has been rectified. Meanwhile, the blur angle is identified by dividing the calibrated blur length of the outermost circle by its circumference,

\theta = 360\,L/N,   (32)

where \theta is the rectified blur angle, N is the circumference of the outermost circle, and L denotes its rectified blur length. Accordingly, given a blur length L calculated from the rough blur angle, we only need to decide whether the correct blur length is L-1, L, or L+1.

The conquer step uses an inverse-filter method to rule out the false blur lengths and thus yield the blur-length error of a circle. The frequency zeros of these inverse filters are removed to prevent abnormal amplification of the noise. Since the noise is outweighed by the power spectrum of the blurred pixels in the median frequency band, the trivial effects of the noise there are neglected. Let L denote the true blur length and L' a candidate; the inverse-filtering result is

\hat F(k) = \frac{H(k,L)}{H(k,L')}\,F(k) = \frac{L'}{L}\,\frac{1-\exp[-j(2\pi/N)Lk]}{1-\exp[-j(2\pi/N)L'k]}\,F(k), \qquad k_{max} = \frac{N}{L'},\ \frac{2N}{L'},\ \ldots,\ \frac{nN}{L'},   (33)

where k_{max} are the frequencies that maximize the ratio of the two blur functions. At the median frequencies in k_{max}, the ratio grows very large, because its small denominator is not cancelled by the numerator. The large ratio results in abnormal peaks at the corresponding frequencies of the deblurred pixel spectrum, which should otherwise be around zero. On the other hand, if we deblur with the correct blur length, the ratio stays constant at one and no frequency is amplified. Therefore, the blur length is distinguished by investigating whether the median spectrum of the deblurred pixels at k_{max} is extremely high.

Fig. 8 The deblurred image corrupted by missing pixels.

6 Experimental Results

A group of simulations is carried out to assess the performance of the proposed frequency-domain deblurring method. The deblurring results are compared to their counterparts obtained with the previous spatial algorithm. Moreover, we pay special attention to the missing-pixel interpolation method of Section 4. Meanwhile, the rotary motion parameter identification results are shown. In these experiments, the test images are 8-bit gray-scale images with a resolution of 256 × 256. Thus, the intensity value of each pixel lies in the range from 0 to 255.

6.1 Deblurring Performance Using the Frequency Algorithm with Known Blur Parameters

We use the standard gray-scale image cars, shown in Fig. 5, to test the overall performance of the proposed algorithm in rotary motion deblurring. The degraded versions of the standard image are synthetic, in order to bypass problems that are beyond the scope of this paper, such as turbulent blur. We blur cars with a blur angle of 5 deg to produce its degraded versions and contaminate them with additive noise of SNR 40, 30, and 20 dB, respectively. The degraded images are depicted in Fig. 6. Given the prior knowledge of the blur angle and the SNR, the parametric Wiener filter is developed according to Eq. (16). Then the three degraded images are processed according to the procedure of the frequency algorithm outlined in Table 1 and according to the spatial algorithm [13].

Table 2 PSNR (dB) comparison between the frequency algorithm and the spatial algorithm, for blur angles of 5, 15, and 25 deg under SNR of 40, 30, and 20 dB.

Fig. 9 Uniformly aligned minima of the degraded pixel power spectrum, with SNR of (a), (b) 40, (c) 30, and (d) 20 dB.

In addition, based on the scheme in Section 4, we interpolate the missing pixels during the integration of the multiple circles. Figure 7 compares the deblurring results using the proposed frequency algorithm and the spatial algorithm. As shown in Figs. 7(a)-7(c), the spatially deblurred images still exhibit obvious rotary blur. In contrast, the frequency-domain deblurred images in Figs. 7(d)-7(f) indicate effective removal of the rotary motion degradation. From the perspective of human vision, the proposed frequency method outperforms the spatial algorithm. To give a numeric illustration of this result, the peak SNR (PSNR) is used as the primary measure of the performance of the two deblurring algorithms. The improvement of SNR (ISNR) is also presented as a secondary measure. The PSNR and ISNR are defined as

\mathrm{PSNR} = 10\log_{10}\frac{255^2}{\frac{1}{N^2}\sum_{i=1}^{N^2}[f(i)-\hat f(i)]^2},   (34)

Table 3 ISNR (dB) comparison between the frequency algorithm and the spatial algorithm, for blur angles of 5, 15, and 25 deg under SNR of 40, 30, and 20 dB.

Fig. 10 The correct zero intervals indicated by the minimum of the spectrum mean, with SNR of (a), (b) 40, (c) 30, and (d) 20 dB.

\mathrm{ISNR} = 10\log_{10}\frac{\sum_{i=1}^{N^2}[f(i)-g(i)]^2}{\sum_{i=1}^{N^2}[f(i)-\hat f(i)]^2},   (35)

where f(i) and \hat f(i) denote the original and the deblurred pixels, g(i) represents the degraded pixels, and N^2 is the total number of pixels. Since the resolution of the test image is 256 × 256, its internally tangent circle covers an area of 51,472 pixels, which is the valid domain of the deblurring. Thus, N^2 is taken as 51,472 here. The PSNR and ISNR results are presented in Tables 2 and 3. As indicated, our algorithm achieves higher PSNR and ISNR and generally outperforms the spatial least-squares method by 0.5 to 4 dB.

6.2 Performance of the Interpolation Method

This section evaluates the accuracy of the hierarchical estimation scheme for missing-pixel interpolation discussed in Section 4. The tests are also implemented on the standard image cars. The corruption of the deblurred image by missing pixels is depicted in Fig. 8, and the reconstructed version obtained by interpolation can be seen in Fig. 6. We measure the accuracy of the proposed method in terms of the mean square error (MSE) and the relative MSE (RMSE), as defined in Eq. (36). For an image size of 256 × 256, statistical analysis shows that a total of 5328 pixels were lost; thus, N^2 = 5328. The RMSE is defined as the ratio of the MSE to the pixel range, which is from 0 to 255:

\mathrm{MSE} = \frac{1}{N^2}\sum_{i=1}^{N^2}[\hat f(i)-f(i)]^2, \qquad \mathrm{RMSE} = \frac{\mathrm{MSE}}{255}\times 100\%,   (36)

where all the symbols have the same meanings as in Eq. (34). The resultant MSE equals 0.66, and the RMSE is 0.3%. Since 0.3% is only a slight deviation from the precise intensity, it is reasonable to assert that the interpolation method effectively reconstructs the missing pixels.
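The quality measures of Eqs. (34)-(36) are straightforward to compute; the following sketch assumes 8-bit images stored as NumPy arrays of identical shape.

```python
import numpy as np

def psnr(f, f_hat):
    """Eq. (34), for 8-bit images (peak value 255)."""
    mse = np.mean((np.asarray(f, float) - np.asarray(f_hat, float))**2)
    return 10.0 * np.log10(255.0**2 / mse)

def isnr(f, g, f_hat):
    """Eq. (35): improvement of SNR of the deblurred image over the degraded one."""
    f, g, f_hat = (np.asarray(a, float) for a in (f, g, f_hat))
    return 10.0 * np.log10(np.sum((f - g)**2) / np.sum((f - f_hat)**2))

def interpolation_error(f, f_interp):
    """Eq. (36): MSE over the interpolated pixels and its percentage of the 0-255 range."""
    mse = np.mean((np.asarray(f, float) - np.asarray(f_interp, float))**2)
    return mse, 100.0 * mse / 255.0
```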

6.3 Identification of the Rotary Center

The degraded image for this test is rotated around the point (128, 128). In a practical scenario, the rotary center is not precisely known (if it were known, we would not have to identify it); we use this prior knowledge only for comparison with the identified rotary center, so as to evaluate the effectiveness of the identification method. We search for the rotary center within a 5 × 5 window and apply the zero-interval searching method to each candidate in the window. As shown in the results, the method to identify the rotary center is effective, because the voting result shows that the point (128, 128) is the rotary center.

Fig. 11 The spectrum mean versus candidate intervals for the circle fetched around a displaced rotary center, with SNR of (a), (b) 40, (c) 30, and (d) 20 dB.

Given a candidate rotary center, pixels are fetched circularly as concentric circles from the degraded image. Then, we generate the spectrum of the degraded pixels for each circle and apply the zero-interval searching method on a circle basis. Since the searching method is the same for different circles, we detail the implementation on a typical circle at the radius of 60; the other circles follow a similar procedure. Given that 332 pixels align circularly on this circle, its circumference is taken as 332, and the power spectrum of the degraded pixels spans 332 frequency points as well. Figure 9 illustrates the profiles of the power spectra under different levels of additive Gaussian white noise. Then, according to Eq. (28), we calculate the spectrum means with respect to different intervals. When the candidate rotary center is (128, 128), the resultant spectrum mean versus the candidate intervals is illustrated in Fig. 10. For comparison, with the rotary center candidate (128, 129), the spectrum mean with respect to the intervals is depicted in Fig. 11.

On the basis of Eq. (28), a zero interval is considered valid on this circle if the minimum of the spectrum mean is close to the covariance of the noise, which can be estimated from the minimum value of the degraded pixel power spectrum S_g in Eq. (14). A threshold is then set to cut off the invalid zero intervals; we take the threshold as 1.5 times the noise covariance, and the thresholds for the SNR levels of 40, 30, and 20 dB are set accordingly. As shown in Fig. 10, when the interval equals 38, the spectrum mean decreases to its valley and the value is less than the threshold; thus, we judge that a valid zero interval of 38 exists on this circle. Figure 10 also shows that the zero-interval searching method is robust to noise, since all the valleys are salient under the different noise levels. In contrast, Fig. 11 shows that no valley of the spectrum mean is less than the threshold, which implies that no valid zero interval exists on that circle. Therefore, the circle at the radius of 60 votes for (128, 128) as the rotary center and against the candidate (128, 129).

The same procedure is implemented on the other circles, from the radius of 60 to 120, because the circumferences of these circles are long enough to allow a fair evaluation. Each circle gives a vote for a rotary center candidate. Figure 12 illustrates the statistics of the final ballots for each rotary center candidate under the different noise levels. At all noise levels, the candidate (128, 128) receives the highest number of ballots and is judged to be the rotary center.
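The voting procedure of Section 5.1, as exercised in this experiment, can be sketched as follows; it reuses the hypothetical helpers bresenham_circle and zero_interval_search from the earlier sketches, and the choice of radii and the noise-variance estimate are inputs the user must supply.

```python
import numpy as np

def identify_rotary_center(degraded, candidates, radii, noise_var):
    """Section 5.1 voting (sketch): every circle around a candidate centre
    votes on whether a valid zero interval exists; the candidate with the
    most positive votes is taken as the rotary centre."""
    best_center, best_votes = None, -1
    for (cx, cy) in candidates:
        votes = 0
        for r in radii:                                   # e.g. range(60, 121)
            coords = bresenham_circle(cx, cy, r)
            g = np.array([degraded[y, x] for (x, y) in coords])
            _, _, valid = zero_interval_search(g, noise_var)
            votes += int(valid)
        if votes > best_votes:
            best_center, best_votes = (cx, cy), votes
    return best_center
```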

Fig. 12 The ballots of the candidate rotary centers (candidates around (128, 128)) under SNR of 40, 30, and 20 dB.

6.4 Identification of the Blur Angle and the Blur Length

We blur the standard test image cars with a blur angle of 12 deg, which is treated as unknown during the deblurring. Supposing that the calculated (rough) blur angle is 10 deg with an error range from -5 to 5 deg, we derive that the correct blur angle lies in the range from 5 to 15 deg. Since the size of the degraded image is 256 × 256, the radii of the concentric circles range up to 128. On the basis of the divide step in Section 5.2, we demarcate the degraded image into ten belts, which meet the three conditions of Section 5.2. The demarcation is shown in Fig. 13. The circumference N of a circle on belt i ranges from 72(i-1) to 72i.

Fig. 13 The demarcation of the degraded image into ten ring belts.

On belt 1, the error set of the blur length includes three candidates: -1, 0, and 1. We detail the operation only on the circle at the radius of 13, because the procedure on the other circles is the same. This circle contains 68 pixels; thereby, its circumference is taken as 68. Given the rough blur angle, the calculated blur length of this circle equals 2. With the initial error set {-1, 0, 1}, we produce the candidate blur length set {1, 2, 3}. Using an inverse filter (after removal of its zeros) with each of the three blur lengths, the spectra of the deblurred pixels are illustrated in Fig. 14. The spectra obtained with blur lengths 1 and 2 show median peaks; thus, they are ruled out, and we assert blur length 3 to be the correct one. As for the other circles on this belt, we mainly consider the outer circles, because their circumferences are long enough to yield reliable results. Because there are many circles on the same belt, a voting strategy is adopted: each circle votes for a candidate blur length, and the one with the highest ballot is judged to be the correct blur length. Accordingly, belt 1 in this experiment evaluates the blur length as 3 and its error as 3 - 2 = 1.

Fig. 14 The spectrum of the deblurred pixels on the circle at the radius of 13: (a) peaks with blur length 1, (b) peaks with blur length 2, and (c) normal median spectrum with the correct blur length.

Depending on the adjacency condition, the error set for belt 2 is {1, 2}. Similarly, consider a circle in belt 2 whose calculated blur length is 3; the candidate blur lengths are then {4, 5}. A similar operation is performed on this circle, and finally the candidate blur length 5 is ruled out. Then we rectify the circles on the other belts iteratively. The resultant rectified blur lengths for the ten belts are 3, 5, 8, 10, 12, 15, 17, 20, 22, and 24. For the last circle, at the circumference of 716, based on Eq. (32) we rectify the blur angle as 360/716 × 24 = 12.06 deg. Therefore, the blur angle is effectively recognized.

Figure 15 shows the deblurring results using the presented method and the spatial algorithm [13], respectively, both with the blur parameter of 10 deg. As shown in Fig. 15(a), the degradation is barely restored, which means the spatial algorithm cannot deblur effectively under the erroneous parameter. In contrast, the presented method tolerates the blur-parameter error and demonstrates good performance and strong robustness.
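For completeness, the conquer-step test of Eq. (33) used above can be sketched as follows; the definition of the "median" frequency band and the peak-salience score are my own heuristics rather than the paper's.

```python
import numpy as np

def select_blur_length(g_circle, candidates, eps=1e-8):
    """Conquer step of Eq. (33) (sketch): deblur the circle with the inverse
    filter of every candidate blur length (zeros of H removed) and score the
    salience of abnormal peaks in a mid-frequency band; the correct length
    produces no abnormal amplification, so the lowest score wins."""
    g = np.asarray(g_circle, dtype=float)
    N = len(g)
    G = np.fft.fft(g)
    scores = {}
    for L in candidates:
        h = np.zeros(N)
        h[:L] = 1.0 / L
        H = np.fft.fft(h)
        good = np.abs(H) > eps
        F_hat = np.zeros(N, dtype=complex)
        F_hat[good] = G[good] / H[good]                   # inverse filter, zeros removed
        band = np.abs(F_hat[N // 8: 3 * N // 8])          # assumed "median" frequency band
        scores[L] = band.max() / (np.median(band) + eps)  # peak salience
    return min(scores, key=scores.get)
```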


On the evaluation of edge preserving smoothing filter On the evaluation of edge preserving smoothing filter Shawn Chen and Tian-Yuan Shih Department of Civil Engineering National Chiao-Tung University Hsin-Chu, Taiwan ABSTRACT For mapping or object identification,

More information

Image Restoration using Modified Lucy Richardson Algorithm in the Presence of Gaussian and Motion Blur

Image Restoration using Modified Lucy Richardson Algorithm in the Presence of Gaussian and Motion Blur Advance in Electronic and Electric Engineering. ISSN 2231-1297, Volume 3, Number 8 (2013), pp. 1063-1070 Research India Publications http://www.ripublication.com/aeee.htm Image Restoration using Modified

More information

A Study of Slanted-Edge MTF Stability and Repeatability

A Study of Slanted-Edge MTF Stability and Repeatability A Study of Slanted-Edge MTF Stability and Repeatability Jackson K.M. Roland Imatest LLC, 2995 Wilderness Place Suite 103, Boulder, CO, USA ABSTRACT The slanted-edge method of measuring the spatial frequency

More information

Chapter 6. [6]Preprocessing

Chapter 6. [6]Preprocessing Chapter 6 [6]Preprocessing As mentioned in chapter 4, the first stage in the HCR pipeline is preprocessing of the image. We have seen in earlier chapters why this is very important and at the same time

More information

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Ashill Chiranjan and Bernardt Duvenhage Defence, Peace, Safety and Security Council for Scientific

More information

Image Enhancement using Histogram Equalization and Spatial Filtering

Image Enhancement using Histogram Equalization and Spatial Filtering Image Enhancement using Histogram Equalization and Spatial Filtering Fari Muhammad Abubakar 1 1 Department of Electronics Engineering Tianjin University of Technology and Education (TUTE) Tianjin, P.R.

More information

Image Enhancement in Spatial Domain

Image Enhancement in Spatial Domain Image Enhancement in Spatial Domain 2 Image enhancement is a process, rather a preprocessing step, through which an original image is made suitable for a specific application. The application scenarios

More information

This content has been downloaded from IOPscience. Please scroll down to see the full text.

This content has been downloaded from IOPscience. Please scroll down to see the full text. This content has been downloaded from IOPscience. Please scroll down to see the full text. Download details: IP Address: 148.251.232.83 This content was downloaded on 10/07/2018 at 03:39 Please note that

More information

SURVEILLANCE SYSTEMS WITH AUTOMATIC RESTORATION OF LINEAR MOTION AND OUT-OF-FOCUS BLURRED IMAGES. Received August 2008; accepted October 2008

SURVEILLANCE SYSTEMS WITH AUTOMATIC RESTORATION OF LINEAR MOTION AND OUT-OF-FOCUS BLURRED IMAGES. Received August 2008; accepted October 2008 ICIC Express Letters ICIC International c 2008 ISSN 1881-803X Volume 2, Number 4, December 2008 pp. 409 414 SURVEILLANCE SYSTEMS WITH AUTOMATIC RESTORATION OF LINEAR MOTION AND OUT-OF-FOCUS BLURRED IMAGES

More information

A Comparative Review Paper for Noise Models and Image Restoration Techniques

A Comparative Review Paper for Noise Models and Image Restoration Techniques Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology ISSN 2320 088X IMPACT FACTOR: 6.017 IJCSMC,

More information

Recent Advances in Image Deblurring. Seungyong Lee (Collaboration w/ Sunghyun Cho)

Recent Advances in Image Deblurring. Seungyong Lee (Collaboration w/ Sunghyun Cho) Recent Advances in Image Deblurring Seungyong Lee (Collaboration w/ Sunghyun Cho) Disclaimer Many images and figures in this course note have been copied from the papers and presentation materials of previous

More information

A Recognition of License Plate Images from Fast Moving Vehicles Using Blur Kernel Estimation

A Recognition of License Plate Images from Fast Moving Vehicles Using Blur Kernel Estimation A Recognition of License Plate Images from Fast Moving Vehicles Using Blur Kernel Estimation Kalaivani.R 1, Poovendran.R 2 P.G. Student, Dept. of ECE, Adhiyamaan College of Engineering, Hosur, Tamil Nadu,

More information

An Efficient Noise Removing Technique Using Mdbut Filter in Images

An Efficient Noise Removing Technique Using Mdbut Filter in Images IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834,p- ISSN: 2278-8735.Volume 10, Issue 3, Ver. II (May - Jun.2015), PP 49-56 www.iosrjournals.org An Efficient Noise

More information

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University CS534 Introduction to Computer Vision Linear Filters Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What are Filters Linear Filters Convolution operation Properties of Linear Filters

More information

Blurred Image Restoration Using Canny Edge Detection and Blind Deconvolution Algorithm

Blurred Image Restoration Using Canny Edge Detection and Blind Deconvolution Algorithm Blurred Image Restoration Using Canny Edge Detection and Blind Deconvolution Algorithm 1 Rupali Patil, 2 Sangeeta Kulkarni 1 Rupali Patil, M.E., Sem III, EXTC, K. J. Somaiya COE, Vidyavihar, Mumbai 1 patilrs26@gmail.com

More information

A moment-preserving approach for depth from defocus

A moment-preserving approach for depth from defocus A moment-preserving approach for depth from defocus D. M. Tsai and C. T. Lin Machine Vision Lab. Department of Industrial Engineering and Management Yuan-Ze University, Chung-Li, Taiwan, R.O.C. E-mail:

More information

DIGITAL IMAGE PROCESSING UNIT III

DIGITAL IMAGE PROCESSING UNIT III DIGITAL IMAGE PROCESSING UNIT III 3.1 Image Enhancement in Frequency Domain: Frequency refers to the rate of repetition of some periodic events. In image processing, spatial frequency refers to the variation

More information

Measurement of Texture Loss for JPEG 2000 Compression Peter D. Burns and Don Williams* Burns Digital Imaging and *Image Science Associates

Measurement of Texture Loss for JPEG 2000 Compression Peter D. Burns and Don Williams* Burns Digital Imaging and *Image Science Associates Copyright SPIE Measurement of Texture Loss for JPEG Compression Peter D. Burns and Don Williams* Burns Digital Imaging and *Image Science Associates ABSTRACT The capture and retention of image detail are

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Filter Design Circularly symmetric 2-D low-pass filter Pass-band radial frequency: ω p Stop-band radial frequency: ω s 1 δ p Pass-band tolerances: δ

More information

ARRAY PROCESSING FOR INTERSECTING CIRCLE RETRIEVAL

ARRAY PROCESSING FOR INTERSECTING CIRCLE RETRIEVAL 16th European Signal Processing Conference (EUSIPCO 28), Lausanne, Switzerland, August 25-29, 28, copyright by EURASIP ARRAY PROCESSING FOR INTERSECTING CIRCLE RETRIEVAL Julien Marot and Salah Bourennane

More information

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT

More information

Toward Non-stationary Blind Image Deblurring: Models and Techniques

Toward Non-stationary Blind Image Deblurring: Models and Techniques Toward Non-stationary Blind Image Deblurring: Models and Techniques Ji, Hui Department of Mathematics National University of Singapore NUS, 30-May-2017 Outline of the talk Non-stationary Image blurring

More information

Midterm Examination CS 534: Computational Photography

Midterm Examination CS 534: Computational Photography Midterm Examination CS 534: Computational Photography November 3, 2015 NAME: SOLUTIONS Problem Score Max Score 1 8 2 8 3 9 4 4 5 3 6 4 7 6 8 13 9 7 10 4 11 7 12 10 13 9 14 8 Total 100 1 1. [8] What are

More information

Interpolation of CFA Color Images with Hybrid Image Denoising

Interpolation of CFA Color Images with Hybrid Image Denoising 2014 Sixth International Conference on Computational Intelligence and Communication Networks Interpolation of CFA Color Images with Hybrid Image Denoising Sasikala S Computer Science and Engineering, Vasireddy

More information

Improved motion invariant imaging with time varying shutter functions

Improved motion invariant imaging with time varying shutter functions Improved motion invariant imaging with time varying shutter functions Steve Webster a and Andrew Dorrell b Canon Information Systems Research, Australia (CiSRA), Thomas Holt Drive, North Ryde, Australia

More information

Image Denoising Using Statistical and Non Statistical Method

Image Denoising Using Statistical and Non Statistical Method Image Denoising Using Statistical and Non Statistical Method Ms. Shefali A. Uplenchwar 1, Mrs. P. J. Suryawanshi 2, Ms. S. G. Mungale 3 1MTech, Dept. of Electronics Engineering, PCE, Maharashtra, India

More information

The Use of Non-Local Means to Reduce Image Noise

The Use of Non-Local Means to Reduce Image Noise The Use of Non-Local Means to Reduce Image Noise By Chimba Chundu, Danny Bin, and Jackelyn Ferman ABSTRACT Digital images, such as those produced from digital cameras, suffer from random noise that is

More information

THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS

THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS 1 LUOYU ZHOU 1 College of Electronics and Information Engineering, Yangtze University, Jingzhou, Hubei 43423, China E-mail: 1 luoyuzh@yangtzeu.edu.cn

More information

DIGITAL IMAGE PROCESSING Quiz exercises preparation for the midterm exam

DIGITAL IMAGE PROCESSING Quiz exercises preparation for the midterm exam DIGITAL IMAGE PROCESSING Quiz exercises preparation for the midterm exam In the following set of questions, there are, possibly, multiple correct answers (1, 2, 3 or 4). Mark the answers you consider correct.

More information

SUPER RESOLUTION INTRODUCTION

SUPER RESOLUTION INTRODUCTION SUPER RESOLUTION Jnanavardhini - Online MultiDisciplinary Research Journal Ms. Amalorpavam.G Assistant Professor, Department of Computer Sciences, Sambhram Academy of Management. Studies, Bangalore Abstract:-

More information

A Novel Method for Enhancing Satellite & Land Survey Images Using Color Filter Array Interpolation Technique (CFA)

A Novel Method for Enhancing Satellite & Land Survey Images Using Color Filter Array Interpolation Technique (CFA) A Novel Method for Enhancing Satellite & Land Survey Images Using Color Filter Array Interpolation Technique (CFA) Suma Chappidi 1, Sandeep Kumar Mekapothula 2 1 PG Scholar, Department of ECE, RISE Krishna

More information

High-speed Noise Cancellation with Microphone Array

High-speed Noise Cancellation with Microphone Array Noise Cancellation a Posteriori Probability, Maximum Criteria Independent Component Analysis High-speed Noise Cancellation with Microphone Array We propose the use of a microphone array based on independent

More information

Blur Detection for Historical Document Images

Blur Detection for Historical Document Images Blur Detection for Historical Document Images Ben Baker FamilySearch bakerb@familysearch.org ABSTRACT FamilySearch captures millions of digital images annually using digital cameras at sites throughout

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part 2: Image Enhancement Digital Image Processing Course Introduction in the Spatial Domain Lecture AASS Learning Systems Lab, Teknik Room T26 achim.lilienthal@tech.oru.se Course

More information

Design of Practical Color Filter Array Interpolation Algorithms for Cameras, Part 2

Design of Practical Color Filter Array Interpolation Algorithms for Cameras, Part 2 Design of Practical Color Filter Array Interpolation Algorithms for Cameras, Part 2 James E. Adams, Jr. Eastman Kodak Company jeadams @ kodak. com Abstract Single-chip digital cameras use a color filter

More information

Image Denoising Using Different Filters (A Comparison of Filters)

Image Denoising Using Different Filters (A Comparison of Filters) International Journal of Emerging Trends in Science and Technology Image Denoising Using Different Filters (A Comparison of Filters) Authors Mr. Avinash Shrivastava 1, Pratibha Bisen 2, Monali Dubey 3,

More information

Stochastic Image Denoising using Minimum Mean Squared Error (Wiener) Filtering

Stochastic Image Denoising using Minimum Mean Squared Error (Wiener) Filtering Stochastic Image Denoising using Minimum Mean Squared Error (Wiener) Filtering L. Sahawneh, B. Carroll, Electrical and Computer Engineering, ECEN 670 Project, BYU Abstract Digital images and video used

More information

EE4830 Digital Image Processing Lecture 7. Image Restoration. March 19 th, 2007 Lexing Xie ee.columbia.edu>

EE4830 Digital Image Processing Lecture 7. Image Restoration. March 19 th, 2007 Lexing Xie ee.columbia.edu> EE4830 Digital Image Processing Lecture 7 Image Restoration March 19 th, 2007 Lexing Xie 1 We have covered 2 Image sensing Image Restoration Image Transform and Filtering Spatial

More information

Prof. Vidya Manian Dept. of Electrical and Comptuer Engineering

Prof. Vidya Manian Dept. of Electrical and Comptuer Engineering Image Processing Intensity Transformations Chapter 3 Prof. Vidya Manian Dept. of Electrical and Comptuer Engineering INEL 5327 ECE, UPRM Intensity Transformations 1 Overview Background Basic intensity

More information

Lab Report 3: Speckle Interferometry LIN PEI-YING, BAIG JOVERIA

Lab Report 3: Speckle Interferometry LIN PEI-YING, BAIG JOVERIA Lab Report 3: Speckle Interferometry LIN PEI-YING, BAIG JOVERIA Abstract: Speckle interferometry (SI) has become a complete technique over the past couple of years and is widely used in many branches of

More information

Image Restoration. Lecture 7, March 23 rd, Lexing Xie. EE4830 Digital Image Processing

Image Restoration. Lecture 7, March 23 rd, Lexing Xie. EE4830 Digital Image Processing Image Restoration Lecture 7, March 23 rd, 2008 Lexing Xie EE4830 Digital Image Processing http://www.ee.columbia.edu/~xlx/ee4830/ thanks to G&W website, Min Wu and others for slide materials 1 Announcements

More information

Multi-Image Deblurring For Real-Time Face Recognition System

Multi-Image Deblurring For Real-Time Face Recognition System Volume 118 No. 8 2018, 295-301 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Multi-Image Deblurring For Real-Time Face Recognition System B.Sarojini

More information

Improved Fusing Infrared and Electro-Optic Signals for. High Resolution Night Images

Improved Fusing Infrared and Electro-Optic Signals for. High Resolution Night Images Improved Fusing Infrared and Electro-Optic Signals for High Resolution Night Images Xiaopeng Huang, a Ravi Netravali, b Hong Man, a and Victor Lawrence a a Dept. of Electrical and Computer Engineering,

More information

CHAPTER 6 INTRODUCTION TO SYSTEM IDENTIFICATION

CHAPTER 6 INTRODUCTION TO SYSTEM IDENTIFICATION CHAPTER 6 INTRODUCTION TO SYSTEM IDENTIFICATION Broadly speaking, system identification is the art and science of using measurements obtained from a system to characterize the system. The characterization

More information

Image Demosaicing. Chapter Introduction. Ruiwen Zhen and Robert L. Stevenson

Image Demosaicing. Chapter Introduction. Ruiwen Zhen and Robert L. Stevenson Chapter 2 Image Demosaicing Ruiwen Zhen and Robert L. Stevenson 2.1 Introduction Digital cameras are extremely popular and have replaced traditional film-based cameras in most applications. To produce

More information

Midterm Review. Image Processing CSE 166 Lecture 10

Midterm Review. Image Processing CSE 166 Lecture 10 Midterm Review Image Processing CSE 166 Lecture 10 Topics covered Image acquisition, geometric transformations, and image interpolation Intensity transformations Spatial filtering Fourier transform and

More information

CCD Automatic Gain Algorithm Design of Noncontact Measurement System Based on High-speed Circuit Breaker

CCD Automatic Gain Algorithm Design of Noncontact Measurement System Based on High-speed Circuit Breaker 2016 3 rd International Conference on Engineering Technology and Application (ICETA 2016) ISBN: 978-1-60595-383-0 CCD Automatic Gain Algorithm Design of Noncontact Measurement System Based on High-speed

More information

Frequency Domain Enhancement

Frequency Domain Enhancement Tutorial Report Frequency Domain Enhancement Page 1 of 21 Frequency Domain Enhancement ESE 558 - DIGITAL IMAGE PROCESSING Tutorial Report Instructor: Murali Subbarao Written by: Tutorial Report Frequency

More information

ME scope Application Note 01 The FFT, Leakage, and Windowing

ME scope Application Note 01 The FFT, Leakage, and Windowing INTRODUCTION ME scope Application Note 01 The FFT, Leakage, and Windowing NOTE: The steps in this Application Note can be duplicated using any Package that includes the VES-3600 Advanced Signal Processing

More information

PERFORMANCE ANALYSIS OF LINEAR AND NON LINEAR FILTERS FOR IMAGE DE NOISING

PERFORMANCE ANALYSIS OF LINEAR AND NON LINEAR FILTERS FOR IMAGE DE NOISING Impact Factor (SJIF): 5.301 International Journal of Advance Research in Engineering, Science & Technology e-issn: 2393-9877, p-issn: 2394-2444 Volume 5, Issue 3, March - 2018 PERFORMANCE ANALYSIS OF LINEAR

More information

Noise and Restoration of Images

Noise and Restoration of Images Noise and Restoration of Images Dr. Praveen Sankaran Department of ECE NIT Calicut February 24, 2013 Winter 2013 February 24, 2013 1 / 35 Outline 1 Noise Models 2 Restoration from Noise Degradation 3 Estimation

More information

Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography

Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography Xi Luo Stanford University 450 Serra Mall, Stanford, CA 94305 xluo2@stanford.edu Abstract The project explores various application

More information

Computation Pre-Processing Techniques for Image Restoration

Computation Pre-Processing Techniques for Image Restoration Computation Pre-Processing Techniques for Image Restoration Aziz Makandar Professor Department of Computer Science, Karnataka State Women s University, Vijayapura Anita Patrot Research Scholar Department

More information

Amplitude and Phase Distortions in MIMO and Diversity Systems

Amplitude and Phase Distortions in MIMO and Diversity Systems Amplitude and Phase Distortions in MIMO and Diversity Systems Christiane Kuhnert, Gerd Saala, Christian Waldschmidt, Werner Wiesbeck Institut für Höchstfrequenztechnik und Elektronik (IHE) Universität

More information

Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks

Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks Chapter 2 Distributed Consensus Estimation of Wireless Sensor Networks Recently, consensus based distributed estimation has attracted considerable attention from various fields to estimate deterministic

More information

Non-Uniform Motion Blur For Face Recognition

Non-Uniform Motion Blur For Face Recognition IOSR Journal of Engineering (IOSRJEN) ISSN (e): 2250-3021, ISSN (p): 2278-8719 Vol. 08, Issue 6 (June. 2018), V (IV) PP 46-52 www.iosrjen.org Non-Uniform Motion Blur For Face Recognition Durga Bhavani

More information

Non Linear Image Enhancement

Non Linear Image Enhancement Non Linear Image Enhancement SAIYAM TAKKAR Jaypee University of information technology, 2013 SIMANDEEP SINGH Jaypee University of information technology, 2013 Abstract An image enhancement algorithm based

More information

Attacking localized high amplitude noise in seismic data A method for AVO compliant noise attenuation

Attacking localized high amplitude noise in seismic data A method for AVO compliant noise attenuation Attacking localized high amplitude noise in seismic data A method for AVO compliant noise attenuation Xinxiang Li and Rodney Couzens Sensor Geophysical Ltd. Summary The method of time-frequency adaptive

More information

Direction based Fuzzy filtering for Color Image Denoising

Direction based Fuzzy filtering for Color Image Denoising International Research Journal of Engineering and Technology (IRJET) e-issn: 2395-56 Volume: 4 Issue: 5 May -27 www.irjet.net p-issn: 2395-72 Direction based Fuzzy filtering for Color Denoising Nitika*,

More information

Image Measurement of Roller Chain Board Based on CCD Qingmin Liu 1,a, Zhikui Liu 1,b, Qionghong Lei 2,c and Kui Zhang 1,d

Image Measurement of Roller Chain Board Based on CCD Qingmin Liu 1,a, Zhikui Liu 1,b, Qionghong Lei 2,c and Kui Zhang 1,d Applied Mechanics and Materials Online: 2010-11-11 ISSN: 1662-7482, Vols. 37-38, pp 513-516 doi:10.4028/www.scientific.net/amm.37-38.513 2010 Trans Tech Publications, Switzerland Image Measurement of Roller

More information

1.Explain the principle and characteristics of a matched filter. Hence derive the expression for its frequency response function.

1.Explain the principle and characteristics of a matched filter. Hence derive the expression for its frequency response function. 1.Explain the principle and characteristics of a matched filter. Hence derive the expression for its frequency response function. Matched-Filter Receiver: A network whose frequency-response function maximizes

More information

2D Barcode Localization and Motion Deblurring Using a Flutter Shutter Camera

2D Barcode Localization and Motion Deblurring Using a Flutter Shutter Camera 2D Barcode Localization and Motion Deblurring Using a Flutter Shutter Camera Wei Xu University of Colorado at Boulder Boulder, CO, USA Wei.Xu@colorado.edu Scott McCloskey Honeywell Labs Minneapolis, MN,

More information

Design of practical color filter array interpolation algorithms for digital cameras

Design of practical color filter array interpolation algorithms for digital cameras Design of practical color filter array interpolation algorithms for digital cameras James E. Adams, Jr. Eastman Kodak Company, Imaging Research and Advanced Development Rochester, New York 14653-5408 ABSTRACT

More information

Artifacts Reduced Interpolation Method for Single-Sensor Imaging System

Artifacts Reduced Interpolation Method for Single-Sensor Imaging System 2016 International Conference on Computer Engineering and Information Systems (CEIS-16) Artifacts Reduced Interpolation Method for Single-Sensor Imaging System Long-Fei Wang College of Telecommunications

More information

A Spatial Mean and Median Filter For Noise Removal in Digital Images

A Spatial Mean and Median Filter For Noise Removal in Digital Images A Spatial Mean and Median Filter For Noise Removal in Digital Images N.Rajesh Kumar 1, J.Uday Kumar 2 Associate Professor, Dept. of ECE, Jaya Prakash Narayan College of Engineering, Mahabubnagar, Telangana,

More information

Reference Free Image Quality Evaluation

Reference Free Image Quality Evaluation Reference Free Image Quality Evaluation for Photos and Digital Film Restoration Majed CHAMBAH Université de Reims Champagne-Ardenne, France 1 Overview Introduction Defects affecting films and Digital film

More information

(i) Understanding the basic concepts of signal modeling, correlation, maximum likelihood estimation, least squares and iterative numerical methods

(i) Understanding the basic concepts of signal modeling, correlation, maximum likelihood estimation, least squares and iterative numerical methods Tools and Applications Chapter Intended Learning Outcomes: (i) Understanding the basic concepts of signal modeling, correlation, maximum likelihood estimation, least squares and iterative numerical methods

More information

Guided Image Filtering for Image Enhancement

Guided Image Filtering for Image Enhancement International Journal of Research Studies in Science, Engineering and Technology Volume 1, Issue 9, December 2014, PP 134-138 ISSN 2349-4751 (Print) & ISSN 2349-476X (Online) Guided Image Filtering for

More information

IMAGE ENHANCEMENT IN SPATIAL DOMAIN

IMAGE ENHANCEMENT IN SPATIAL DOMAIN A First Course in Machine Vision IMAGE ENHANCEMENT IN SPATIAL DOMAIN By: Ehsan Khoramshahi Definitions The principal objective of enhancement is to process an image so that the result is more suitable

More information

Fast identification of individuals based on iris characteristics for biometric systems

Fast identification of individuals based on iris characteristics for biometric systems Fast identification of individuals based on iris characteristics for biometric systems J.G. Rogeri, M.A. Pontes, A.S. Pereira and N. Marranghello Department of Computer Science and Statistic, IBILCE, Sao

More information

Data Embedding Using Phase Dispersion. Chris Honsinger and Majid Rabbani Imaging Science Division Eastman Kodak Company Rochester, NY USA

Data Embedding Using Phase Dispersion. Chris Honsinger and Majid Rabbani Imaging Science Division Eastman Kodak Company Rochester, NY USA Data Embedding Using Phase Dispersion Chris Honsinger and Majid Rabbani Imaging Science Division Eastman Kodak Company Rochester, NY USA Abstract A method of data embedding based on the convolution of

More information

Blur Estimation for Barcode Recognition in Out-of-Focus Images

Blur Estimation for Barcode Recognition in Out-of-Focus Images Blur Estimation for Barcode Recognition in Out-of-Focus Images Duy Khuong Nguyen, The Duy Bui, and Thanh Ha Le Human Machine Interaction Laboratory University Engineering and Technology Vietnam National

More information

Comparison of Q-estimation methods: an update

Comparison of Q-estimation methods: an update Q-estimation Comparison of Q-estimation methods: an update Peng Cheng and Gary F. Margrave ABSTRACT In this article, three methods of Q estimation are compared: a complex spectral ratio method, the centroid

More information