Photo-Consistent Motion Blur Modeling for Realistic Image Synthesis


Huei-Yung Lin and Chia-Hong Chang
Department of Electrical Engineering, National Chung Cheng University, 168 University Rd., Min-Hsiung, Chia-Yi 621, Taiwan, R.O.C.

Abstract. Motion blur is an important visual cue for the illusion of object motion. It has many applications in computer animation, virtual reality and augmented reality. In this work, we present a nonlinear imaging model for synthetic motion blur generation. It is shown that the response of the image sensor is determined by the optical parameters of the camera and can be derived by a simple photometric calibration process. Based on the nonlinear behavior of the image response, photo-realistic motion blur can be obtained and combined with real scenes with the least visual inconsistency. Experiments have shown that the proposed method generates more photo-consistent results than the conventional motion blur model.

1 Introduction

In the past few years, we have witnessed the convergence of computer vision and computer graphics [1]. Although traditionally regarded as inverse problems of each other, image-based rendering and modeling share the common ground of these two research fields. One of the major topics is how to synthesize computer images for graphics representations based on the knowledge of a given vision model. This problem commonly arises in the application domains of computer animation, virtual reality and augmented reality [2]. For computer animation and virtual reality, synthetic images are generated from existing graphical models for rendering purposes. Augmented reality, on the other hand, requires the image composition of virtual objects and real scenes in a natural way. The ultimate goal of these applications is usually to make the synthetic images look as realistic as those actually filmed by cameras.
Most of the previous research on rendering synthetic objects into real scenes deals with static image composition, even when used for generating synthetic video sequences. When modeling a scene containing a fast moving object during the finite camera exposure time, it is not possible to insert the object directly into the scene by simple image overlay. In addition to the geometric and photometric consistency imposed on the object for the given viewpoint, motion blur or temporal aliasing due to the relative motion between the camera and the scene usually has to be taken into account. It is a very important visual cue to human

L.-W. Chang, W.-N. Lie, and R. Chiang (Eds.): PSIVT 2006, LNCS 4319, pp. 1273–1282, 2006. © Springer-Verlag Berlin Heidelberg 2006

perception for the illusion of object motion, and it is commonly used in photography to illustrate the dynamic features in the scene. For computer-generated or stop motion animations with a limited temporal sampling rate, unpleasant effects such as jerky or strobing appearance might be present in the image sequence if motion blur is not modeled appropriately. Early research on the simulation of motion blur suggested a method of convolving the original image with the linear optical system transfer function derived from the motion path [3,4]. The uniform point spread function (PSF) was demonstrated in their work, but high-degree resampling filters were later adopted to further improve the results of temporal anti-aliasing [5]. More recently, Sung et al. introduced the visibility and shading functions in the spatial-temporal domain for motion blur image generation [6]. Brostow and Essa proposed a frame-to-frame motion tracking approach to simulate motion blur for stop motion animation [7]. Apart from the generation of realistic motion blur, some researchers have also focused on real-time rendering using hardware acceleration for interactive graphics applications [8,9]. Although the results are smooth and visually consistent, they are only approximations due to the oversimplified image formation model. It is commonly believed that the image acquisition process can be approximated by a linear system, and that motion blur can thus be obtained from the convolution with a given PSF. However, the nonlinear behavior of image sensors becomes prominent when the light source changes rapidly during the exposure time [10]. In this case, the conventional method using a simple box filter cannot create the photo-realistic or photo-consistent motion blur phenomenon. This might not be a problem in purely computer-generated animation, but the inconsistency will certainly be noticeable in the image when combining virtual objects with real scenes.
Thus, in this work we propose a nonlinear imaging model for synthetic motion blur generation. The image formation model is modified to incorporate a nonlinear response function. More photo-consistent simulation results are then obtained by using the calibrated parameters of the given camera settings.

2 Image Formation Model

The process of image formation is determined by the optical parameters of the lens, the geometric parameters of the camera projection model, and the photometric parameters associated with the environment and the CCD image sensor. To synthesize an image from the same viewpoint as the real scene image, however, only the photometric aspect of image formation has to be considered. From basic radiometry, the relationship between scene radiance L and image irradiance E is given by

E = L (π/4) (d/f)^2 cos^4 α    (1)

where d, f and α are the aperture diameter, focal length and the angle between the optical axis and the line of sight, respectively [11]. Since the image

intensity is commonly used to represent the image irradiance, it is in turn assumed to be proportional to the scene radiance for a given set of camera parameters. Thus, most existing algorithms adopt a simple pinhole camera model for synthetic image generation of real scenes. Linear motion blur is generated by convolving the original image with a box filter or a uniform PSF [3]. Although image synthesis or composition is relatively easy to implement based on the above image formation, the results are usually not satisfactory when compared to the real images captured by a camera. Consequently, photo-realistic scene modeling cannot be accomplished by this simplified imaging model. One major issue not explicitly considered in the previous approach is the nonlinear behavior of the image sensors. It is commonly assumed that the image intensity increases linearly with the camera exposure time for any given scene point. However, nonlinear sensors are generally designed to have the output voltage proportional to the log of the light energy for high dynamic range imaging [12,13]. Furthermore, from our observation, the response function of the image sensors is also affected by the F-number of the camera. To illustrate this phenomenon, an image printout with white, gray and black stripes is used as a test pattern. Image intensity values under different camera exposures are calibrated for various F-number settings. The plots of intensity versus exposure time for both the black and gray image stripes¹ are shown in Figure 1. The figures demonstrate that, prior to saturation, the intensity values increase nonlinearly with the exposure time. Although the nonlinear behaviors are not severe for large F-numbers (i.e., small aperture diameters), they are conspicuous for smaller F-numbers.
Another important observation is that, even with different scene radiance, the response curves for the black and gray patterns are very similar if the time axis is scaled by a constant. Figure 2 shows the response curves for several F-numbers normalized with respect to the gray and black image patterns. The results suggest that the intensity values of a scene point under different exposure times are governed by the F-number. To establish a more realistic image formation model from the above observations, a monotonically increasing function with nonlinear behavior determined by additional parameters should be adopted. Since the response curves shown in Figure 1 cannot be easily fitted by gamma or log functions for the various F-numbers, we model the intensity accumulation versus exposure using an operation similar to the capacitor charging process. The intensity value of an image pixel I(t) is modeled as an inverse exponential function of the integration time t, given by

I(t) = I_max (1 − e^(−k d^2 ρ t))  for t ≤ T    (2)

where T, I_max, d, k and ρ are the exposure time, the maximum intensity, the aperture diameter, a camera constant, and a parameter related to the reflectance property of the object surface, respectively. If all the parameters in Eq. (2) are known,

¹ The response of the white image pattern is not shown because it saturates within a small exposure range.

Fig. 1. Nonlinear behavior of the response curves with different F-numbers (F = 2.4, 5, 7.1, 9, 11): (a) intensity versus exposure time for the black (left) and gray (right) patterns; (b) a small exposure range clearly shows the nonlinear behavior of the intensity.

then it is possible to determine the intensity value of the image pixel for any exposure time less than T. For a general 8-bit greyscale image, the maximum intensity I_max is 255 and I(t) is always less than I_max. The aperture diameter d is the focal length divided by the F-number, and can be obtained from the camera settings. The parameters k and ρ are constants for any fixed scene point in the image. Thus, Eq. (2) can be rewritten as

I(t) = I_max (1 − e^(−k t))    (3)

for a given set of camera parameters. The single remaining parameter k can then be determined by an appropriate calibration procedure with different camera settings. To verify Eq. (3), we first observe that I(0) = 0, as expected for any camera settings. The intensity value saturates as t → ∞, and the larger the parameter k is, the faster the saturation occurs. This is consistent with the physical model: k contains the reflectance of the scene point and thus represents the irradiance of the image point. Figures 1 and 2 illustrate that the nonlinear responses are not noticeable for small apertures, but they are evident for large aperture sizes. In either case, the response function can be modeled by Eq. (3)

Fig. 2. Normalized response curves for different F-numbers: (a) F-2.4, (b) F-5, (c) F-7.1, (d) F-11; the fitted constant k differs per F-number (e.g., k = 0.19 for F-11).

with some constant k. Thus, the most important aspect of the equation is to characterize the intensity accumulation versus integration time based on the fixed camera parameters. For a given intensity value, it is not possible to determine the exposure time, since the image irradiance also depends on the reflectance property of the object. However, it is possible to calculate the image intensity of a scene point under any exposure if an intensity-exposure pair is given and the normalized response curve is known for the specific camera parameter settings. This is one of the requirements for generating space-variant motion blur, as described in the following section. To obtain the normalized response function up to an unknown scale factor in the time domain, the images of the calibration patterns are captured with various exposures, followed by least-squares fitting to find the parameter k for different F-numbers. As shown in Figure 2, the resulting fitted curves (black dashed lines) for any given F-number provide a good approximation to the actual measurements for both the black and gray patterns. This curve fitting and parameter estimation process can be referred to as photometric calibration of the response function. It should be noted that only the shape of the response curve is significant; the resulting function is normalized in the time axis by an arbitrary scale factor. Given the intensity value of an

image pixel with known camera exposure, the corresponding intensity of the scene point under a different amount of exposure can be calculated by Eq. (3).

3 Synthetic Motion Blur Image Generation

Motion blur arises when the relative motion between the scene and the camera is fast during the exposure time of the imaging process. The most commonly used model for motion blur is given by

g(x, y) = ∫_0^T f(x − x_0(t), y − y_0(t)) dt    (4)

where g(x, y) and f(x, y) are the blurred and ideal images, respectively, T is the duration of the exposure, and x_0(t) and y_0(t) are the time-varying components of motion in the x and y directions, respectively [3]. If only uniform linear motion in the x-direction is considered, the motion blurred image can be generated by taking the average of the line integral along the motion direction. That is,

g(x, y) = (1/R) ∫_0^R f(x − ρ, y) dρ    (5)

where R is the extent of the motion blur. Eq. (5) essentially describes the blurred image as the convolution of the original (ideal) image with a uniform PSF

h(x, y) = 1/R for |x| ≤ R/2, and 0 otherwise    (6)

This model is de facto the most widely adopted method for generating motion blur images. Its discrete counterpart used for computation is given by

g[m, n] = (1/K) Σ_{i=0}^{K−1} f[m − i, n]    (7)

where K is the number of blurred pixels. As an example of using the above image degradation model, motion blur of an ideal step edge can be obtained by performing a spatial domain convolution with the PSF given by Eq. (6). The synthetic result is a ramp edge with the width of the motion blur extent R. If this motion blur model is applied to a real edge image, however, the result is generally different from the recorded motion blur image. Figures 3(a), 3(b) and 3(c) illustrate the images and profiles of an ideal step edge, a motion blur edge created using Eq. (7), and a real motion blur edge captured by a camera, respectively.
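As a minimal Python sketch of the discrete uniform-PSF model in Eq. (7) (border pixels are handled here by replicating the nearest valid column, a detail the text does not specify):

```python
def uniform_motion_blur(image, K):
    """Eq. (7): horizontal motion blur with a uniform PSF of K pixels.

    `image` is a list of rows of greyscale values; pixels left of the
    image border are taken from the nearest valid column (replication).
    """
    blurred = []
    for row in image:
        out = []
        for m in range(len(row)):
            # Average the K pixels trailing position m along the motion path.
            acc = sum(row[max(m - i, 0)] for i in range(K))
            out.append(acc / K)
        blurred.append(out)
    return blurred

# An ideal step edge becomes a linear ramp of width K, as described above.
step = [[0, 0, 0, 0, 255, 255, 255, 255]]
ramp = uniform_motion_blur(step, 4)
```

Applied to the step edge above, the result is the symmetric ramp profile of Figure 3(b), which is exactly what fails to match the asymmetric real profile of Figure 3(c).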
As shown in Figure 3(c), the profile indicates that there exists non-uniform weighting on the pixel intensities of real motion blur. Since the curve is not symmetric with respect to its midpoint, this nonlinear response is clearly not due to the optical defocus of the camera and cannot be described by a Gaussian process.
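The photometric calibration of Section 2 can be sketched as follows (an illustration, not the authors' implementation; the measurement values below are synthetic, generated from an assumed k, since the paper's raw data is not available):

```python
import math

I_MAX = 255.0  # maximum intensity of an 8-bit greyscale image

def response(t, k, i_max=I_MAX):
    """Eq. (3): pixel intensity after integration time t."""
    return i_max * (1.0 - math.exp(-k * t))

def fit_k(times, intensities, i_max=I_MAX):
    """Least-squares estimate of k from intensity/exposure pairs.

    Linearise Eq. (3): -ln(1 - I/I_max) = k * t, then fit the slope
    of a line through the origin: k = sum(t*y) / sum(t*t).
    """
    ys = [-math.log(1.0 - i / i_max) for i in intensities]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

# Hypothetical calibration measurements generated from k = 0.15 (noise-free).
times = [1.0, 2.0, 5.0, 10.0, 20.0]
meas = [response(t, 0.15) for t in times]
k_est = fit_k(times, meas)  # recovers k = 0.15 on noise-free data
```

Saturated (white) measurements must be excluded before fitting, since the logarithm in the linearisation is undefined at I = I_max; this mirrors the footnote about the white pattern.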

Fig. 3. Motion blur synthesis of an ideal edge image: (a) ideal step edge and the corresponding profile; (b) motion blur edge generated using Eq. (7); (c) real motion blur edge captured by a camera; (d) motion blur edge synthesized by the proposed method.

In this work, motion blur is modeled using the nonlinear response of the image sensor as discussed in the previous section. For an image position under uniform motion blur, its intensity value is given by the integration of the image irradiance associated with different scene points during the exposure time. Every scene point in the motion path thus contributes intensity for a smaller yet equal exposure time. Although these partial intensity values can be derived from the static image with full exposure by linear interpolation in the time domain, the nonlinear behavior of the intensity response should also be taken into account. Suppose the monotonic response function is I(t); then the motion blur image g(x, y) is given by

g(x, y) = I( (1/R) ∫_0^R I^{−1}(f(x − ρ, y)) dρ )    (8)

where R is the motion blur extent and I^{−1}(·) is the inverse function of I(t). The discrete counterpart of Eq. (8) is given by

g[m, n] = I( (1/K) Σ_{i=0}^{K−1} I^{−1}(f[m − i, n]) )    (9)

where K is the number of blurred pixels. If we consider the special case that the response function I(t) is linear, then Eqs. (8) and (9) simplify to Eqs. (5) and (7), respectively. Figure 3(d) shows the synthetic motion blur edge of Figure 3(a) and the corresponding profile of the image scanlines generated using Eq. (9). The response function I(t) is given by Eq. (3) with F-5 and k = 0.15. Comparing the generated images and profiles with those given by the real motion blur, the proposed model clearly gives more photo-consistent results than the one synthesized using the uniform PSF. The fact that brighter scene points contribute more to the image pixel intensities, as shown in Figure 3(c), is successfully modeled by the nonlinear response curve.

4 Results

Figure 4 shows the experimental results of a real scene. The camera is placed about 1 meter in front of the object (a tennis ball). The static image shown in Figure 4(a) is taken at F-5 with an exposure time of 1/8 second. Figure 4(b) shows the motion blur image taken under 3 mm/sec
lateral motion of the camera using the same set of camera parameters. The blur extent in the image is 18 pixels, which is used for synthetic motion blur image generation. Figure 4(c) illustrates the motion blur synthesized using the widely adopted uniform PSF for image convolution. The result generated using the proposed nonlinear response function is shown in Figure 4(d). For color images, the red, green and blue channels are processed separately. Motion blur images are first created for each channel using the same response curve, and then combined to form the final result.
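Eq. (9) can be sketched in Python as follows (assuming the response model of Eq. (3) with k = 0.15 as in the text; the border replication and the saturation guard are illustrative choices, not taken from the paper):

```python
import math

I_MAX = 255.0

def response(t, k):
    """Eq. (3): intensity after (scaled) integration time t."""
    return I_MAX * (1.0 - math.exp(-k * t))

def inverse_response(i, k):
    """I^-1: map an intensity back to a scaled integration time."""
    i = min(i, I_MAX - 1e-6)  # guard: the inverse is undefined at saturation
    return -math.log(1.0 - i / I_MAX) / k

def nonlinear_motion_blur(image, K, k=0.15):
    """Eq. (9): map intensities into the time domain, average, map back."""
    blurred = []
    for row in image:
        out = []
        for m in range(len(row)):
            acc = sum(inverse_response(row[max(m - i, 0)], k) for i in range(K))
            out.append(response(acc / K, k))
        blurred.append(out)
    return blurred

# Compared with the uniform PSF, brighter pixels dominate the blurred edge.
step = [[0.0, 0.0, 0.0, 0.0, 200.0, 200.0, 200.0, 200.0]]
blur = nonlinear_motion_blur(step, 4)
```

At the midpoint of the blurred edge (two dark and two bright contributing pixels), the uniform model of Eq. (7) would yield 100, while this averaging in the time domain yields a noticeably higher value, reproducing the asymmetric profile of Figure 3(c).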

Fig. 4. Experimental results of a real scene: (a) static image; (b) real motion blur image; (c) motion blur generated using Eq. (7); (d) motion blur by the proposed method.

Fig. 5. Motion blur generated with a small F-number (large aperture size)

With careful examination of Figure 4, it is not difficult to find that the image shown in Figure 4(d) is slightly better than Figure 4(c). The image scanline profiles of Figure 4(d) are very close to those exhibited in the real motion blur image. Figure 5 (left) shows another example taken at F-2.4 with an exposure time of 1/8 second. The middle and right figures are the results using Eq. (7) and the proposed method, respectively. It is clear that the nonlinear behavior becomes prominent and has to be considered for more realistic motion blur synthesis.
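The per-channel processing described in Section 4 can be sketched as below (a simple uniform blur stands in for the blur operator for brevity; the paper applies the same nonlinear-response blur of Eq. (9) to each channel):

```python
def blur_channel(channel, K):
    """Uniform horizontal blur of one channel (stand-in blur operator)."""
    return [
        [sum(row[max(m - i, 0)] for i in range(K)) / K for m in range(len(row))]
        for row in channel
    ]

def blur_color(rgb, K):
    """Blur the red, green and blue channels separately, then recombine."""
    return tuple(blur_channel(ch, K) for ch in rgb)

# A 1x4 color image: each channel is a list of rows.
rgb = ([[255, 0, 0, 0]], [[0, 255, 0, 0]], [[0, 0, 0, 0]])
r, g, b = blur_color(rgb, 2)
```

Processing channels independently with the same response curve is what keeps the synthesized blur consistent across colors; using different curves per channel would shift the hue along the blurred edge.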

5 Conclusion

Image synthesis or composition with the motion blur phenomenon has many applications in computer graphics and visualization. Most existing works generate motion blur by convolving the image with a uniform PSF. The results are usually not photo-consistent due to the nonlinear behavior of the image sensors. In this work, we have presented a nonlinear imaging model for synthetic motion blur generation. More photo-realistic motion blur can be obtained and combined with real scenes with the least visual inconsistency. Thus, our approach can be used to illustrate dynamic motion for still images, or to render fast object motion with a limited frame rate for computer animation.

References

1. Lengyel, J.: The convergence of graphics and vision. Computer 31(7) (1998)
2. Kutulakos, K.N., Vallino, J.R.: Calibration-free augmented reality. IEEE Transactions on Visualization and Computer Graphics 4(1) (1998)
3. Potmesil, M., Chakravarty, I.: Modeling motion blur in computer-generated images. In: Proceedings of the 10th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press (1983)
4. Max, N.L., Lerner, D.M.: A two-and-a-half-D motion-blur algorithm. In: SIGGRAPH '85: Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, ACM Press (1985)
5. Dachille, F., Kaufman, A.: High-degree temporal antialiasing. In: CA '00: Proceedings of the Computer Animation (2000)
6. Sung, K., Pearce, A., Wang, C.: Spatial-temporal antialiasing. IEEE Transactions on Visualization and Computer Graphics 8(2) (2002)
7. Brostow, G., Essa, I.: Image-based motion blur for stop motion animation. In: SIGGRAPH '01 Conference Proceedings, ACM SIGGRAPH (2001)
8. Wloka, M.M., Zeleznik, R.C.: Interactive real-time motion blur. The Visual Computer 12(6) (1996)
9. Meinds, K., Stout, J., van Overveld, K.: Real-time temporal anti-aliasing for 3D graphics. In Ertl, T., ed.: VMV, Aka GmbH (2003)
10. Rush, A.: Nonlinear sensors impact digital imaging.
Electronics Engineer (1998)
11. Forsyth, D., Ponce, J.: Computer Vision: A Modern Approach. Prentice-Hall (2003)
12. Debevec, P.E., Malik, J.: Recovering high dynamic range radiance maps from photographs. In: SIGGRAPH '97: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press (1997)
13. Schanz, M., Nitta, C., Bussman, A., Hosticka, B.J., Wertheimer, R.K.: A high-dynamic-range CMOS image sensor for automotive applications. IEEE Journal of Solid-State Circuits 35(7) (2000)


More information

Goal of this Section. Capturing Reflectance From Theory to Practice. Acquisition Basics. How can we measure material properties? Special Purpose Tools

Goal of this Section. Capturing Reflectance From Theory to Practice. Acquisition Basics. How can we measure material properties? Special Purpose Tools Capturing Reflectance From Theory to Practice Acquisition Basics GRIS, TU Darmstadt (formerly University of Washington, Seattle Goal of this Section practical, hands-on description of acquisition basics

More information

Introduction. Related Work

Introduction. Related Work Introduction Depth of field is a natural phenomenon when it comes to both sight and photography. The basic ray tracing camera model is insufficient at representing this essential visual element and will

More information

Restoration of Motion Blurred Document Images

Restoration of Motion Blurred Document Images Restoration of Motion Blurred Document Images Bolan Su 12, Shijian Lu 2 and Tan Chew Lim 1 1 Department of Computer Science,School of Computing,National University of Singapore Computing 1, 13 Computing

More information

Camera Models and Optical Systems Used in Computer Graphics: Part I, Object-Based Techniques

Camera Models and Optical Systems Used in Computer Graphics: Part I, Object-Based Techniques Camera Models and Optical Systems Used in Computer Graphics: Part I, Object-Based Techniques Brian A. Barsky 1,2,3,DanielR.Horn 1, Stanley A. Klein 2,3,JeffreyA.Pang 1, and Meng Yu 1 1 Computer Science

More information

Superfast phase-shifting method for 3-D shape measurement

Superfast phase-shifting method for 3-D shape measurement Superfast phase-shifting method for 3-D shape measurement Song Zhang 1,, Daniel Van Der Weide 2, and James Oliver 1 1 Department of Mechanical Engineering, Iowa State University, Ames, IA 50011, USA 2

More information

A Review over Different Blur Detection Techniques in Image Processing

A Review over Different Blur Detection Techniques in Image Processing A Review over Different Blur Detection Techniques in Image Processing 1 Anupama Sharma, 2 Devarshi Shukla 1 E.C.E student, 2 H.O.D, Department of electronics communication engineering, LR College of engineering

More information

Burst Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 7! Gordon Wetzstein! Stanford University!

Burst Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 7! Gordon Wetzstein! Stanford University! Burst Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 7! Gordon Wetzstein! Stanford University! Motivation! wikipedia! exposure sequence! -4 stops! Motivation!

More information

Distributed Algorithms. Image and Video Processing

Distributed Algorithms. Image and Video Processing Chapter 7 High Dynamic Range (HDR) Distributed Algorithms for Introduction to HDR (I) Source: wikipedia.org 2 1 Introduction to HDR (II) High dynamic range classifies a very high contrast ratio in images

More information

ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES

ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES Petteri PÖNTINEN Helsinki University of Technology, Institute of Photogrammetry and Remote Sensing, Finland petteri.pontinen@hut.fi KEY WORDS: Cocentricity,

More information

Module 3: Video Sampling Lecture 18: Filtering operations in Camera and display devices. The Lecture Contains: Effect of Temporal Aperture:

Module 3: Video Sampling Lecture 18: Filtering operations in Camera and display devices. The Lecture Contains: Effect of Temporal Aperture: The Lecture Contains: Effect of Temporal Aperture: Spatial Aperture: Effect of Display Aperture: file:///d /...e%20(ganesh%20rana)/my%20course_ganesh%20rana/prof.%20sumana%20gupta/final%20dvsp/lecture18/18_1.htm[12/30/2015

More information

Deconvolution , , Computational Photography Fall 2018, Lecture 12

Deconvolution , , Computational Photography Fall 2018, Lecture 12 Deconvolution http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 12 Course announcements Homework 3 is out. - Due October 12 th. - Any questions?

More information

Supplementary Material of

Supplementary Material of Supplementary Material of Efficient and Robust Color Consistency for Community Photo Collections Jaesik Park Intel Labs Yu-Wing Tai SenseTime Sudipta N. Sinha Microsoft Research In So Kweon KAIST In the

More information

Blind Single-Image Super Resolution Reconstruction with Defocus Blur

Blind Single-Image Super Resolution Reconstruction with Defocus Blur Sensors & Transducers 2014 by IFSA Publishing, S. L. http://www.sensorsportal.com Blind Single-Image Super Resolution Reconstruction with Defocus Blur Fengqing Qin, Lihong Zhu, Lilan Cao, Wanan Yang Institute

More information

Chapter 18 Optical Elements

Chapter 18 Optical Elements Chapter 18 Optical Elements GOALS When you have mastered the content of this chapter, you will be able to achieve the following goals: Definitions Define each of the following terms and use it in an operational

More information

Coded Computational Photography!

Coded Computational Photography! Coded Computational Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 9! Gordon Wetzstein! Stanford University! Coded Computational Photography - Overview!!

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Filter Design Circularly symmetric 2-D low-pass filter Pass-band radial frequency: ω p Stop-band radial frequency: ω s 1 δ p Pass-band tolerances: δ

More information

IMAGE FORMATION. Light source properties. Sensor characteristics Surface. Surface reflectance properties. Optics

IMAGE FORMATION. Light source properties. Sensor characteristics Surface. Surface reflectance properties. Optics IMAGE FORMATION Light source properties Sensor characteristics Surface Exposure shape Optics Surface reflectance properties ANALOG IMAGES An image can be understood as a 2D light intensity function f(x,y)

More information

THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS

THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS THE RESTORATION OF DEFOCUS IMAGES WITH LINEAR CHANGE DEFOCUS RADIUS 1 LUOYU ZHOU 1 College of Electronics and Information Engineering, Yangtze University, Jingzhou, Hubei 43423, China E-mail: 1 luoyuzh@yangtzeu.edu.cn

More information

ME 6406 MACHINE VISION. Georgia Institute of Technology

ME 6406 MACHINE VISION. Georgia Institute of Technology ME 6406 MACHINE VISION Georgia Institute of Technology Class Information Instructor Professor Kok-Meng Lee MARC 474 Office hours: Tues/Thurs 1:00-2:00 pm kokmeng.lee@me.gatech.edu (404)-894-7402 Class

More information

Image Enhancement in Spatial Domain

Image Enhancement in Spatial Domain Image Enhancement in Spatial Domain 2 Image enhancement is a process, rather a preprocessing step, through which an original image is made suitable for a specific application. The application scenarios

More information

A simulation tool for evaluating digital camera image quality

A simulation tool for evaluating digital camera image quality A simulation tool for evaluating digital camera image quality Joyce Farrell ab, Feng Xiao b, Peter Catrysse b, Brian Wandell b a ImagEval Consulting LLC, P.O. Box 1648, Palo Alto, CA 94302-1648 b Stanford

More information

Sequential Algorithm for Robust Radiometric Calibration and Vignetting Correction

Sequential Algorithm for Robust Radiometric Calibration and Vignetting Correction Sequential Algorithm for Robust Radiometric Calibration and Vignetting Correction Seon Joo Kim and Marc Pollefeys Department of Computer Science University of North Carolina Chapel Hill, NC 27599 {sjkim,

More information

High dynamic range imaging and tonemapping

High dynamic range imaging and tonemapping High dynamic range imaging and tonemapping http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 12 Course announcements Homework 3 is out. - Due

More information

Improved motion invariant imaging with time varying shutter functions

Improved motion invariant imaging with time varying shutter functions Improved motion invariant imaging with time varying shutter functions Steve Webster a and Andrew Dorrell b Canon Information Systems Research, Australia (CiSRA), Thomas Holt Drive, North Ryde, Australia

More information

Be aware that there is no universal notation for the various quantities.

Be aware that there is no universal notation for the various quantities. Fourier Optics v2.4 Ray tracing is limited in its ability to describe optics because it ignores the wave properties of light. Diffraction is needed to explain image spatial resolution and contrast and

More information

Fast and High-Quality Image Blending on Mobile Phones

Fast and High-Quality Image Blending on Mobile Phones Fast and High-Quality Image Blending on Mobile Phones Yingen Xiong and Kari Pulli Nokia Research Center 955 Page Mill Road Palo Alto, CA 94304 USA Email: {yingenxiong, karipulli}@nokiacom Abstract We present

More information

Section 7.2 Logarithmic Functions

Section 7.2 Logarithmic Functions Math 150 c Lynch 1 of 6 Section 7.2 Logarithmic Functions Definition. Let a be any positive number not equal to 1. The logarithm of x to the base a is y if and only if a y = x. The number y is denoted

More information

Image Processing. Image Processing. What is an Image? Image Resolution. Overview. Sources of Error. Filtering Blur Detect edges

Image Processing. Image Processing. What is an Image? Image Resolution. Overview. Sources of Error. Filtering Blur Detect edges Thomas Funkhouser Princeton University COS 46, Spring 004 Quantization Random dither Ordered dither Floyd-Steinberg dither Pixel operations Add random noise Add luminance Add contrast Add saturation ing

More information

What will be on the midterm?

What will be on the midterm? What will be on the midterm? CS 178, Spring 2014 Marc Levoy Computer Science Department Stanford University General information 2 Monday, 7-9pm, Cubberly Auditorium (School of Edu) closed book, no notes

More information

Analysis and design of filters for differentiation

Analysis and design of filters for differentiation Differential filters Analysis and design of filters for differentiation John C. Bancroft and Hugh D. Geiger SUMMARY Differential equations are an integral part of seismic processing. In the discrete computer

More information

Deblurring. Basics, Problem definition and variants

Deblurring. Basics, Problem definition and variants Deblurring Basics, Problem definition and variants Kinds of blur Hand-shake Defocus Credit: Kenneth Josephson Motion Credit: Kenneth Josephson Kinds of blur Spatially invariant vs. Spatially varying

More information

STREAK DETECTION ALGORITHM FOR SPACE DEBRIS DETECTION ON OPTICAL IMAGES

STREAK DETECTION ALGORITHM FOR SPACE DEBRIS DETECTION ON OPTICAL IMAGES STREAK DETECTION ALGORITHM FOR SPACE DEBRIS DETECTION ON OPTICAL IMAGES Alessandro Vananti, Klaus Schild, Thomas Schildknecht Astronomical Institute, University of Bern, Sidlerstrasse 5, CH-3012 Bern,

More information

Automatic Selection of Brackets for HDR Image Creation

Automatic Selection of Brackets for HDR Image Creation Automatic Selection of Brackets for HDR Image Creation Michel VIDAL-NAQUET, Wei MING Abstract High Dynamic Range imaging (HDR) is now readily available on mobile devices such as smart phones and compact

More information

SHAPE FROM FOCUS. Keywords defocus, focus operator, focus measure function, depth estimation, roughness and tecture, automatic shapefromfocus.

SHAPE FROM FOCUS. Keywords defocus, focus operator, focus measure function, depth estimation, roughness and tecture, automatic shapefromfocus. SHAPE FROM FOCUS k.kanthamma*, Dr S.A.K.Jilani** *(Department of electronics and communication engineering, srinivasa ramanujan institute of technology, Anantapur,Andrapradesh,INDIA ** (Department of electronics

More information

CS 775: Advanced Computer Graphics. Lecture 12 : Antialiasing

CS 775: Advanced Computer Graphics. Lecture 12 : Antialiasing CS 775: Advanced Computer Graphics Lecture 12 : Antialiasing Antialiasing How to prevent aliasing? Prefiltering Analytic Approximate Postfiltering Supersampling Stochastic Supersampling Antialiasing Textures

More information

Computational Approaches to Cameras

Computational Approaches to Cameras Computational Approaches to Cameras 11/16/17 Magritte, The False Mirror (1935) Computational Photography Derek Hoiem, University of Illinois Announcements Final project proposal due Monday (see links on

More information

(12) United States Patent

(12) United States Patent USOO8208048B2 (12) United States Patent Lin et al. (10) Patent No.: US 8,208,048 B2 (45) Date of Patent: Jun. 26, 2012 (54) (75) (73) (*) (21) (22) (65) (51) (52) (58) METHOD FOR HIGH DYNAMIC RANGE MAGING

More information

High Dynamic Range Images

High Dynamic Range Images High Dynamic Range Images TNM078 Image Based Rendering Jonas Unger 2004, V1.2 1 Introduction When examining the world around us, it becomes apparent that the lighting conditions in many scenes cover a

More information

Blind Dereverberation of Single-Channel Speech Signals Using an ICA-Based Generative Model

Blind Dereverberation of Single-Channel Speech Signals Using an ICA-Based Generative Model Blind Dereverberation of Single-Channel Speech Signals Using an ICA-Based Generative Model Jong-Hwan Lee 1, Sang-Hoon Oh 2, and Soo-Young Lee 3 1 Brain Science Research Center and Department of Electrial

More information

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University

CS534 Introduction to Computer Vision. Linear Filters. Ahmed Elgammal Dept. of Computer Science Rutgers University CS534 Introduction to Computer Vision Linear Filters Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What are Filters Linear Filters Convolution operation Properties of Linear Filters

More information

Coded photography , , Computational Photography Fall 2018, Lecture 14

Coded photography , , Computational Photography Fall 2018, Lecture 14 Coded photography http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 14 Overview of today s lecture The coded photography paradigm. Dealing with

More information

Today. Defocus. Deconvolution / inverse filters. MIT 2.71/2.710 Optics 12/12/05 wk15-a-1

Today. Defocus. Deconvolution / inverse filters. MIT 2.71/2.710 Optics 12/12/05 wk15-a-1 Today Defocus Deconvolution / inverse filters MIT.7/.70 Optics //05 wk5-a- MIT.7/.70 Optics //05 wk5-a- Defocus MIT.7/.70 Optics //05 wk5-a-3 0 th Century Fox Focus in classical imaging in-focus defocus

More information

Distance Estimation with a Two or Three Aperture SLR Digital Camera

Distance Estimation with a Two or Three Aperture SLR Digital Camera Distance Estimation with a Two or Three Aperture SLR Digital Camera Seungwon Lee, Joonki Paik, and Monson H. Hayes Graduate School of Advanced Imaging Science, Multimedia, and Film Chung-Ang University

More information

Vignetting Correction using Mutual Information submitted to ICCV 05

Vignetting Correction using Mutual Information submitted to ICCV 05 Vignetting Correction using Mutual Information submitted to ICCV 05 Seon Joo Kim and Marc Pollefeys Department of Computer Science University of North Carolina Chapel Hill, NC 27599 {sjkim, marc}@cs.unc.edu

More information

USE OF HISTOGRAM EQUALIZATION IN IMAGE PROCESSING FOR IMAGE ENHANCEMENT

USE OF HISTOGRAM EQUALIZATION IN IMAGE PROCESSING FOR IMAGE ENHANCEMENT USE OF HISTOGRAM EQUALIZATION IN IMAGE PROCESSING FOR IMAGE ENHANCEMENT Sapana S. Bagade M.E,Computer Engineering, Sipna s C.O.E.T,Amravati, Amravati,India sapana.bagade@gmail.com Vijaya K. Shandilya Assistant

More information

Improving Signal- to- noise Ratio in Remotely Sensed Imagery Using an Invertible Blur Technique

Improving Signal- to- noise Ratio in Remotely Sensed Imagery Using an Invertible Blur Technique Improving Signal- to- noise Ratio in Remotely Sensed Imagery Using an Invertible Blur Technique Linda K. Le a and Carl Salvaggio a a Rochester Institute of Technology, Center for Imaging Science, Digital

More information

SYSTEMATIC NOISE CHARACTERIZATION OF A CCD CAMERA: APPLICATION TO A MULTISPECTRAL IMAGING SYSTEM

SYSTEMATIC NOISE CHARACTERIZATION OF A CCD CAMERA: APPLICATION TO A MULTISPECTRAL IMAGING SYSTEM SYSTEMATIC NOISE CHARACTERIZATION OF A CCD CAMERA: APPLICATION TO A MULTISPECTRAL IMAGING SYSTEM A. Mansouri, F. S. Marzani, P. Gouton LE2I. UMR CNRS-5158, UFR Sc. & Tech., University of Burgundy, BP 47870,

More information

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES

4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES 4 STUDY OF DEBLURRING TECHNIQUES FOR RESTORED MOTION BLURRED IMAGES Abstract: This paper attempts to undertake the study of deblurring techniques for Restored Motion Blurred Images by using: Wiener filter,

More information

Basic principles of photography. David Capel 346B IST

Basic principles of photography. David Capel 346B IST Basic principles of photography David Capel 346B IST Latin Camera Obscura = Dark Room Light passing through a small hole produces an inverted image on the opposite wall Safely observing the solar eclipse

More information

Aliasing and Antialiasing. What is Aliasing? What is Aliasing? What is Aliasing?

Aliasing and Antialiasing. What is Aliasing? What is Aliasing? What is Aliasing? What is Aliasing? Errors and Artifacts arising during rendering, due to the conversion from a continuously defined illumination field to a discrete raster grid of pixels 1 2 What is Aliasing? What is Aliasing?

More information

HDR imaging Automatic Exposure Time Estimation A novel approach

HDR imaging Automatic Exposure Time Estimation A novel approach HDR imaging Automatic Exposure Time Estimation A novel approach Miguel A. MARTÍNEZ,1 Eva M. VALERO,1 Javier HERNÁNDEZ-ANDRÉS,1 Javier ROMERO,1 1 Color Imaging Laboratory, University of Granada, Spain.

More information

Image Acquisition Hardware. Image Acquisition and Representation. CCD Camera. Camera. how digital images are produced

Image Acquisition Hardware. Image Acquisition and Representation. CCD Camera. Camera. how digital images are produced Image Acquisition Hardware Image Acquisition and Representation how digital images are produced how digital images are represented photometric models-basic radiometry image noises and noise suppression

More information

Solution Set #2

Solution Set #2 05-78-0 Solution Set #. For the sampling function shown, analyze to determine its characteristics, e.g., the associated Nyquist sampling frequency (if any), whether a function sampled with s [x; x] may

More information

Filtering. Image Enhancement Spatial and Frequency Based

Filtering. Image Enhancement Spatial and Frequency Based Filtering Image Enhancement Spatial and Frequency Based Brent M. Dingle, Ph.D. 2015 Game Design and Development Program Mathematics, Statistics and Computer Science University of Wisconsin - Stout Lecture

More information

Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing

Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing Ashok Veeraraghavan, Ramesh Raskar, Ankit Mohan & Jack Tumblin Amit Agrawal, Mitsubishi Electric Research

More information

A Study of Slanted-Edge MTF Stability and Repeatability

A Study of Slanted-Edge MTF Stability and Repeatability A Study of Slanted-Edge MTF Stability and Repeatability Jackson K.M. Roland Imatest LLC, 2995 Wilderness Place Suite 103, Boulder, CO, USA ABSTRACT The slanted-edge method of measuring the spatial frequency

More information

Effective Pixel Interpolation for Image Super Resolution

Effective Pixel Interpolation for Image Super Resolution IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-iss: 2278-2834,p- ISS: 2278-8735. Volume 6, Issue 2 (May. - Jun. 2013), PP 15-20 Effective Pixel Interpolation for Image Super Resolution

More information

Optical Coherence: Recreation of the Experiment of Thompson and Wolf

Optical Coherence: Recreation of the Experiment of Thompson and Wolf Optical Coherence: Recreation of the Experiment of Thompson and Wolf David Collins Senior project Department of Physics, California Polytechnic State University San Luis Obispo June 2010 Abstract The purpose

More information

OPTICAL IMAGE FORMATION

OPTICAL IMAGE FORMATION GEOMETRICAL IMAGING First-order image is perfect object (input) scaled (by magnification) version of object optical system magnification = image distance/object distance no blurring object distance image

More information

Coded Aperture for Projector and Camera for Robust 3D measurement

Coded Aperture for Projector and Camera for Robust 3D measurement Coded Aperture for Projector and Camera for Robust 3D measurement Yuuki Horita Yuuki Matugano Hiroki Morinaga Hiroshi Kawasaki Satoshi Ono Makoto Kimura Yasuo Takane Abstract General active 3D measurement

More information

Computational Camera & Photography: Coded Imaging

Computational Camera & Photography: Coded Imaging Computational Camera & Photography: Coded Imaging Camera Culture Ramesh Raskar MIT Media Lab http://cameraculture.media.mit.edu/ Image removed due to copyright restrictions. See Fig. 1, Eight major types

More information