Sequential Algorithm for Robust Radiometric Calibration and Vignetting Correction


Seon Joo Kim and Marc Pollefeys
Department of Computer Science, University of North Carolina, Chapel Hill, NC
{sjkim, marc}@cs.unc.edu

Abstract

In many computer vision systems, it is assumed that the image brightness directly reflects the scene radiance. However, the assumption does not hold in most cases due to the nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics, where colors look inconsistent and noticeable boundaries appear. In this paper, we propose a sequential algorithm for robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function, we approach each process in a manner that is robust to outliers and derive closed-form solutions. Applying our method, we were able to successfully remove radiometric artifacts in image mosaics, and we also compare our method with a previous method based on simultaneous nonlinear optimization.

1. Introduction

What determines the brightness at a certain point in an image? How is the image brightness related to the actual scene brightness? Scene brightness can be defined by the term radiance, which is the power per unit foreshortened area emitted into a unit solid angle by a surface [13]. After passing through the lens system, the power of radiant energy falling on the image plane is called the irradiance. Irradiance is then transformed to image brightness (Fig. 1). In many computer vision systems, it is assumed that the image brightness directly reflects the scene radiance. However, the assumption does not hold in most cases, as shown in Fig. 2. A nonlinear function called the radiometric response function, which describes the relationship between irradiance and image brightness, is responsible, along with camera exposure, for the color inconsistency from image to image in the mosaic.
The lens falloff phenomenon, in which the amount of light (radiance) hitting the image plane varies spatially, causes the sharp intensity variations, or bands, at the image boundaries in the mosaic.

Figure 1. Illustration of the basic radiometric concepts [23]
Figure 2. An image mosaic showing the effects of vignetting and exposure changes [9]

There are several factors behind the lens falloff phenomenon. The cosine-fourth law is one of the effects responsible for lens falloff. It defines the relationship between radiance (L) and irradiance (E) using a simple camera model consisting of a thin lens and an image plane [13]. Eq. 1 shows that irradiance is proportional to radiance but decreases as the fourth power of the cosine of the angle α that a ray makes with the optical axis. In the equation, d is the radius of the lens and f denotes the distance between the lens and the image plane.

E = L (π d² / 4f²) cos⁴ α   (1)

Most cameras are designed to compensate for the cosine-fourth effect [2], and the most dominant factor for irradiance falloff in the image periphery is vignetting. The vignetting effect refers to the gradual fading-out of an image at points near its periphery due to the blocking of a part of the incident ray bundle by the effective size of the aperture [25]. The effect of vignetting increases as the size of the aperture increases and vice versa; with a pinhole camera, there would be no vignetting. Another phenomenon, pupil aberration, has been described as a third important cause of the fall in irradiance away from the image center, in addition to the cosine-fourth law and vignetting [1]. Pupil aberration is caused by nonlinear refraction of the rays, which results in a significantly nonuniform light distribution across the aperture.

In this paper, we use a vignetting model that explains the observed irradiance falloff behavior rather than trying to physically model this radiometric distortion, which is caused by a combination of different factors. While there are multiple causes of the irradiance falloff, we call the process of correcting this distortion vignetting correction, since vignetting is the most dominant factor and to conform with previous works.

2. Previous Works

Recently, a lot of work has been done on finding the relationship between scene radiance and image intensity. The majority of research assumes linearity between radiance and irradiance (no vignetting), concentrating on estimating the radiometric response function [8, 12, 16, 18]. Conventional methods for correcting vignetting involve taking a reference image of a non-specular object, such as a sheet of paper, under uniform white illumination. This reference image is then used to build a correction lookup table or to approximate a parametric correction function. In [2], Asada et al. proposed a camera model using a variable cone that accounts for vignetting effects in zoom lens systems. Parameters of the variable-cone model were estimated by taking images of a uniform radiance field. Yu et al.
proposed using a hypercosine function to represent the pattern of the vignetting distortion for each scanline in [26]. They expanded their work to a 2D hypercosine model in [25] and also introduced an anti-vignetting method based on wavelet denoising and decimation. Other vignetting models include a simple form using radial distance and focal length [24], a third-order polynomial model [3], a first-order Taylor expansion [21], and an empirical exponential function [7]. In [10] and [15], vignetting was used for camera calibration; in these works, the radiometric response function was ignored and vignetting was modelled in the image intensity domain rather than in the irradiance domain. Schechner and Nayar [22] exploited the vignetting effect to capture high dynamic range intensity values. In their work, they calibrate the intentional vignetting using a linear least-squares fit on the image data itself rather than using a reference image. Their work assumes either a linear response function or a known response function. Jia and Tang [14] recently presented a method to correct global and local intensity variation using tensor voting. In [19, 20], Litvinov and Schechner presented a solution for simultaneously estimating the unknown response function, exposures, and vignetting from a normal image sequence. They achieve this by a nonparametric linear least-squares method using the common areas between images. Most closely related to our work, Goldman and Chen presented a solution for estimating the response function, gain, and vignetting factor and applied it to radiometrically align images for seamless mosaics [9]. They use the empirical model of response (EMoR) [12] to model the response function and a polynomial model for vignetting, and estimate the model parameters simultaneously by a nonlinear optimization method.

3. Our Approach

The radiometric process of image formation shown in Fig. 1 can be mathematically stated as follows:

I_x = f(k M(r_x) L_X)   (2)

L_X is the radiance of a scene point X towards the camera, I_x is the image intensity value at the projected image point x, k is the exposure, f() is the radiometric response function, M() is the vignetting function, and r_x is the normalized radius of x from the center of vignetting. We assume in this paper that the center of vignetting coincides with the center of the image. Eq. 2 can be rewritten as follows:

ln(f⁻¹(I_x)) = ln k + ln M(r_x) + ln L_X   (3)

g(I_x) = K + ln M(r_x) + ln L_X   (4)

The goal of our work is to estimate f() (or g()), M(), and k (or K) given a set of images with correspondences. For correspondence, homographies between images are computed using the software Autostitch [4]. While the camera response function and the vignetting function were estimated simultaneously in [9] and [20], we approach the problem differently by robustly computing the response function and the vignetting function separately. Separating the two is possible by decoupling the vignetting process from the radiometric response function estimation. By separating the two processes, we derive a closed-form solution for each process that is robust against outliers by approaching the problem in a maximum-likelihood fashion. This is also a big advantage over using nonlinear optimization as in [9], saving a lot of computation time, avoiding issues with potential local minima, and increasing accuracy by allowing a more complicated model.
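As a concrete illustration, the imaging model of Eqs. 2-4 can be sketched in code. The gamma-style response and the polynomial coefficients below are hypothetical stand-ins (the paper uses the EMoR model for f), chosen only to make the additive log-domain form of Eq. 4 checkable.

```python
import numpy as np

# Hypothetical stand-ins for illustration only: a gamma-style response f
# (the paper actually uses the EMoR model) and example coefficients c_n
# for the even-polynomial vignetting model of Eq. 8.
GAMMA = 2.2
C = (-0.3, 0.05)

def f(E):
    """Radiometric response: irradiance -> image intensity."""
    return np.clip(E, 0.0, 1.0) ** (1.0 / GAMMA)

def f_inv(I):
    """Inverse response."""
    return np.clip(I, 0.0, 1.0) ** GAMMA

def M(r):
    """Vignetting model (Eq. 8): M(r) = 1 + sum_n c_n r^(2n)."""
    return 1.0 + sum(c_n * r ** (2 * (n + 1)) for n, c_n in enumerate(C))

def image_intensity(L, r, k):
    """Forward model (Eq. 2): I_x = f(k * M(r_x) * L_X)."""
    return f(k * M(r) * L)

def g(I):
    """g = ln(f^-1), so that Eq. 4 reads g(I_x) = K + ln M(r_x) + ln L_X."""
    return np.log(f_inv(I))
```

For any radiance L, radius r, and exposure k with k·M(r)·L in [0, 1], g(image_intensity(L, r, k)) equals ln k + ln M(r) + ln L, which is exactly the additive decomposition of Eq. 4 that the rest of the algorithm exploits.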

3.1. Estimating the radiometric response function

Eq. 2 shows that the response function f() cannot be recovered without knowledge of the vignetting function M(), and vice versa. Hence, one way to solve the problem is to estimate both functions simultaneously, either in a linear [20] or a nonlinear [9] way. But if we use corresponding points affected by the same amount of vignetting, we can decouple the vignetting effect from the process and estimate the response function from Eq. 2 without worrying about vignetting. Let x_i and x_j be image points of a scene point X in image i and image j respectively. If r_xi = r_xj, then M(r_xi) = M(r_xj), since we have already made the assumption that the vignetting model is the same for all images. Hence, by using corresponding points that are at equal distance from the center of each image, which form a line between the two images, we can decouple the vignetting effect (Eq. 5). In practice, we allow correspondences that are close to equal distance from the center rather than strictly enforcing equal distance; this provides more data and still yields good results, since vignetting varies slowly (Fig. 3). With these correspondences, we obtain the following equation from Eq. 4:

g(I_xi) − g(I_xj) = K_i − K_j   (5)

Figure 3. Decoupling the vignetting effect: the figure shows three images stitched into a mosaic. Only corresponding points in the colored bands (red for the first pair and blue for the second) are used for estimating the radiometric response function.

As mentioned, there are many ways to solve the above equation. In this work, we adopt the method proposed in [16] due to its robustness against noise and mismatches. Rather than solving the problem in a least-squares sense using all of the correspondences, which may include many mismatches and much noise, it approaches the problem in a maximum-likelihood fashion, computing by dynamic programming the brightness transfer function that explains how brightness changes from one image to another. The empirical model of response (EMoR) [12] is then used to model the response function.

One modification has to be made to the algorithm presented in [16]. Because only a small region of each image is used at this stage, the intensity distribution may be sparse. This might result in the robustly computed brightness transfer function being valid only for the parts of the curve that are sufficiently supported by observations. This can easily be dealt with by using a version of Eq. 5 that is weighted according to the number of observations supporting each point on the robust brightness transfer function.

3.2. Vignetting Correction

After estimating the response function f and the exposure k, each image intensity value is transformed to an irradiance value E in order to compute the vignetting function M:

E_x = f⁻¹(I_x) / k = M(r_x) L_x   (6)

Since the radiance L_x is the same for x_i and x_j, we get

E_xi / M(r_xi) = E_xj / M(r_xj)   (7)

As presented in Section 2, many models for vignetting exist. In this paper, we chose the polynomial model used in [9], where a third-order polynomial vignetting model was estimated together with the response function by a nonlinear optimization method. By computing the response function independently of vignetting in our first step, we can now compute the polynomial vignetting model linearly. This saves a great deal of computation time compared to the nonlinear optimization scheme used in [9], avoids issues with potential local minima, and also enables us to easily use a much higher-order polynomial for more accuracy. The vignetting model is given by

M(r) = 1 + Σ_{n=1..N} c_n r^(2n)   (8)

Let a = E_xi / E_xj; then combining the model with Eq. 7 yields the following equation:

Σ_{n=1..N} c_n (a r_xj^(2n) − r_xi^(2n)) = 1 − a   (9)

One obvious choice for solving for the N unknowns c_n is least squares, since each corresponding pair of points in the given image pairs provides an additional equation of the form of Eq. 9. But in the presence of many outliers, least squares will not give a robust solution. We propose to approach the problem similarly to the way we computed the response function in the first stage: rather than solving in a least-squares sense, we once again solve in a robust fashion. For each combination of r_xi and r_xj (discretized), we estimate â(r_xi, r_xj).
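This robust estimation — stacking the ratio a per discretized (r_xi, r_xj) pair, taking the median, and then solving the weighted linear system of Eq. 9 — can be sketched as follows. The input format (E_i, E_j, r_i, r_j) and all names are illustrative assumptions, not from the paper.

```python
import numpy as np
from collections import defaultdict

def solve_vignetting(matches, N=3, n_bins=100):
    """Fit the coefficients c_n of M(r) = 1 + sum_n c_n r^(2n) (Eq. 8).

    `matches` is an iterable of tuples (E_i, E_j, r_i, r_j): the
    exposure-normalized irradiances of a corresponding point pair and
    their normalized radii (a hypothetical input format for this sketch).
    """
    # Stack the ratios a = E_i / E_j per discretized (r_i, r_j) pair.
    stacks = defaultdict(list)
    for E_i, E_j, r_i, r_j in matches:
        if r_i < r_j:                      # by symmetry, keep only r_i >= r_j
            E_i, E_j, r_i, r_j = E_j, E_i, r_j, r_i
        key = (int(round(r_i * n_bins)), int(round(r_j * n_bins)))
        stacks[key].append(E_i / E_j)

    # One equation of the form of Eq. 9 per (r_i, r_j) bin, using the
    # median ratio a_hat and weighted by the number of supporting matches.
    rows, rhs, weights = [], [], []
    for (bi, bj), vals in stacks.items():
        a_hat = np.median(vals)
        r_i, r_j = bi / n_bins, bj / n_bins
        rows.append([a_hat * r_j ** (2 * n) - r_i ** (2 * n)
                     for n in range(1, N + 1)])
        rhs.append(1.0 - a_hat)
        weights.append(len(vals))

    # Weighted least squares (Eq. 10), solved via SVD inside lstsq.
    w = np.sqrt(np.asarray(weights, dtype=float))
    A = np.asarray(rows) * w[:, None]
    b = np.asarray(rhs) * w
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c
```

On synthetic, noise-free matches generated from a known polynomial this recovers the coefficients exactly; with mismatches mixed in, the per-bin median keeps the estimate stable where plain least squares over all pairs would not.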

Figure 4. Example of estimating the irradiance ratio â. For every matching pair with r_xi = r_1, r_xj = r_2, the value a is stacked in s(r_1, r_2). â(r_1, r_2) is computed as the median of the stacked values, which is the robust estimate of the ratio a for the given r_xi and r_xj.

For every matching point pair in the image sequence, the value a is computed and stacked at the corresponding s(r_xi, r_xj). In the end, â(r_xi, r_xj) is computed as the median of the stacked values s(r_xi, r_xj) (see Fig. 4). Notice that we only have to keep track of cases where r_xi > r_xj, due to symmetry. With the discretisation used, we now have fewer than 5000 equations of the form of Eq. 9 instead of one equation per matching pair of points. By using the median, the approach is less sensitive to outliers caused by small misalignments.

We now build the matrix formulation of the problem, Ac = b. The n-th column of a row of the matrix A is (â(r_xi, r_xj) r_xj^(2n) − r_xi^(2n)) and the corresponding element of the vector b is 1 − â(r_xi, r_xj). Note that we weight each row of A and b by the number of elements in the stack s(r_xi, r_xj). Finally, the model parameter vector c is the solution to the following least-squares problem, which can be solved using the singular value decomposition (SVD):

ĉ = arg min_c ||Ac − b||²   (10)

3.3. Exponential Ambiguity

The process of radiometric calibration explained thus far is subject to an exponential ambiguity, sometimes called the γ ambiguity [11, 20]. This ambiguity means that if f̂, k̂, and M̂ are solutions to Eq. 2, then the whole family f̂^α, k̂^α, and M̂^α is also a solution. However, this is not a problem for most applications, unless absolute quantitative measurements are required, since the whole family of solutions generates the same intensities. In this work, the scale of the solution is fixed in the first stage by fixing the exposure ratio of one image pair.

4. Experiments

In this section, we evaluate the performance of our proposed method. The measure we are interested in is the consistency of intensity values: with the correct radiometric response function, exposure values, and vignetting function (and assuming a Lambertian scene), the intensity values of corresponding pixels should be the same across images. For this purpose, we radiometrically align all images in a given sequence to a common exposure value. From Eq. 2, the intensity value I at pixel x_i in image i with f(), k_i, and M() is changed to a new value as in Eq. 11. Note that for each image we estimated three exposure values, one per color channel, to account for white balance changes.

I_new_xi = f( k_new f⁻¹(I_xi) / (k_i M(r_xi)) )   (11)

We first compare our result with the result from [9]. The first mosaic in Fig. 5 is constructed from images which are taken with different exposures and are affected by vignetting. Image mosaics constructed from images corrected by the method in [9] and by our method are also shown in Fig. 5. For a fair comparison, the same number of parameters was used (N = 3). The mosaic constructed by our method shows much more consistent color. The two methods are also compared in terms of error histograms. Fig. 6 shows histograms of the intensity differences of all corresponding points in the sequence with gradient less than a certain value. While both methods greatly reduce the error of the original images, the errors are reduced more by our method.

There is another set of methods, called image blending or feathering, that try to make image mosaics color-consistent [5, 6, 17]. A comparison of the mosaic constructed using our method and the mosaic constructed using multi-band blending [4, 5] is shown in Fig. 5 and Fig. 7. The mosaic constructed by directly blending the original images still shows inconsistency in parts of the mosaic.
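The per-image correction of Eq. 11 is straightforward to apply once f, k_i, and M are known. The sketch below uses illustrative function and argument names (not from the paper) and assumes the vignetting center is the image center, as the paper does.

```python
import numpy as np

def radius_map(H, W):
    """Normalized radius of every pixel, measured from the image center
    (the paper assumes the vignetting center coincides with it)."""
    y, x = np.mgrid[0:H, 0:W]
    r = np.hypot(y - (H - 1) / 2.0, x - (W - 1) / 2.0)
    return r / r.max()

def radiometrically_align(I, r, k_i, k_new, f, f_inv, M):
    """Eq. 11: I_new = f( k_new * f^-1(I) / (k_i * M(r)) ).

    Undoes the response, the exposure k_i, and the vignetting, then
    re-applies the response at the common exposure k_new.
    """
    E = f_inv(I) / (k_i * M(r))   # scene radiance up to a global scale
    return f(k_new * E)
```

As a sanity check, with a linear response an image of a flat Lambertian scene taken at exposure k_i with vignetting maps exactly to the flat image that exposure k_new would have produced.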
As can be seen from the figures, the mosaic constructed by applying blending to the images corrected by our method does not show any visible artifacts. As suggested in [9, 20], these two approaches complement each other very well rather than being in competition.

5. Discussion

We have proposed a sequential method for estimating the radiometric response function, exposures, and vignetting. One of the key features of our method is the decoupling of the vignetting effect from the estimation of the response function. By decoupling the two, we can approach each problem sequentially with an estimation method that is robust to noise and outliers. Our method was verified by building image mosaics in which visible seams and color inconsistencies were successfully eliminated. Our method was also compared with an existing method that solves the problem simultaneously in a nonlinear way. Mosaics and error histograms

showed a significant reduction of error. In addition, our approach is significantly faster. In the future, we would also like to compare our method to the method proposed in [20].

Note that since we use an estimation process that is robust to outliers at each stage, our method can be used for images taken with a moving camera rather than just a rotating camera. While correspondences can still be obtained in this case using stereo, non-Lambertian scene effects and mismatches result in a larger percentage of outliers, and the performance of non-robust approaches will degrade significantly. We are planning to apply this method to full radiometric calibration from images taken with a freely moving camera.

There are a few other problems that we would also like to explore in the future. We have assumed, as most other works have, that the center of vignetting coincides with the center of the image. We would like to also estimate the center of vignetting, since observations show that vignetting centers are sometimes offset from the image center. Finally, we would also like to handle the case where the vignetting is not the same for all images in the sequence.

References

[1] M. Aggarwal, H. Hua, and N. Ahuja. On cosine-fourth and vignetting effects in real lenses. Proc. IEEE Int. Conf. on Computer Vision, July.
[2] N. Asada, A. Amano, and M. Baba. Photometric calibration of zoom lens systems. Proc. IEEE Int. Conf. on Pattern Recognition, Aug.
[3] C. M. Bastuscheck. Correction of video camera response using digital techniques. J. of Optical Engineering, 26(12).
[4] M. Brown and D. Lowe. Autostitch.
[5] M. Brown and D. Lowe. Recognising panoramas. Proc. IEEE Int. Conf. on Computer Vision, Oct.
[6] P. Burt and E. H. Adelson. A multiresolution spline with application to image mosaics. ACM TOG, 2.
[7] Y. P. Chen and B. K. Mudunuri. An anti-vignetting technique for superwide field of view mosaicked images. J. of Imaging Technology, 12(5).
[8] P. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. Proc. SIGGRAPH 97.
[9] D. Goldman and J. Chen. Vignette and exposure calibration and compensation. Proc. IEEE Int. Conf. on Computer Vision, Oct.
[10] M. D. Grossberg and S. K. Nayar. A general imaging model and a method for finding its parameters. Proc. IEEE Int. Conf. on Computer Vision, July.
[11] M. D. Grossberg and S. K. Nayar. What can be known about the radiometric response function from images? Proc. European Conference on Computer Vision, May.
[12] M. D. Grossberg and S. K. Nayar. Modeling the space of camera response functions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(10), Oct.
[13] B. K. P. Horn. Robot Vision. Cambridge, Mass.
[14] J. Jia and C. Tang. Tensor voting for image correction by global and local intensity alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(1).
[15] S. B. Kang and R. Weiss. Can we calibrate a camera using an image of a flat, textureless lambertian surface? Proc. of the European Conference on Computer Vision, July.
[16] S. J. Kim and M. Pollefeys. Radiometric alignment of image sequences. Proc. IEEE Conference on Computer Vision and Pattern Recognition, June.
[17] A. Levin, A. Zomet, S. Peleg, and Y. Weiss. Seamless image stitching in the gradient domain. Proc. of the European Conference on Computer Vision.
[18] S. Lin, J. Gu, S. Yamazaki, and H. Shum. Radiometric calibration from a single image. Proc. IEEE Conference on Computer Vision and Pattern Recognition, June.
[19] A. Litvinov and Y. Schechner. Addressing radiometric nonidealities: A unified framework. Proc. IEEE Conference on Computer Vision and Pattern Recognition, June.
[20] A. Litvinov and Y. Schechner. Radiometric framework for image mosaicking. Journal of the Optical Society of America (JOSA), 22(5), May.
[21] A. A. Sawchuk. Real-time correction of intensity nonlinearities in imaging systems. IEEE Transactions on Computers, 26(1).
[22] Y. Schechner and S. Nayar. Generalized mosaicing: High dynamic range in a wide field of view. International Journal of Computer Vision, 53(3).
[23] E. Trucco and A. Verri. Introductory Techniques for 3-D Computer Vision. Prentice Hall, NJ.
[24] M. Uyttendaele, A. Criminisi, S. B. Kang, S. Winder, R. Hartley, and R. Szeliski. Image-based interactive exploration of real-world environments. IEEE Computer Graphics and Applications, 24(3), June.
[25] W. Yu. Practical anti-vignetting methods for digital cameras. IEEE Transactions on Consumer Electronics, 50(4), Nov.
[26] W. Yu, Y. Chung, and J. Soh. Vignetting distortion correction method for high quality digital imaging. Proc. IEEE Int. Conf. on Pattern Recognition, Aug.

Figure 5. From the top: image mosaics of the original images, images corrected by the method proposed in [9], images corrected by the proposed method, original images blended using [4], and images corrected by the proposed method and blended using [4]. Images are from the authors of [9]. Reviewers: please view this figure on a monitor rather than in hard copy.

Figure 6. Error histograms for the image sequence in Fig. 5 (from left to right): original images, images corrected by the method of [9], and images corrected by the proposed method.

Figure 7. From the top: image mosaics of the original images, original images blended, images corrected by the proposed method, and corrected images blended. Data from mbrown/panorama/panorama.html. Reviewers: please view this figure on a monitor rather than in hard copy.


OPTICAL IMAGING AND ABERRATIONS OPTICAL IMAGING AND ABERRATIONS PARTI RAY GEOMETRICAL OPTICS VIRENDRA N. MAHAJAN THE AEROSPACE CORPORATION AND THE UNIVERSITY OF SOUTHERN CALIFORNIA SPIE O P T I C A L E N G I N E E R I N G P R E S S A

More information

Continuous Flash. October 1, Technical Report MSR-TR Microsoft Research Microsoft Corporation One Microsoft Way Redmond, WA 98052

Continuous Flash. October 1, Technical Report MSR-TR Microsoft Research Microsoft Corporation One Microsoft Way Redmond, WA 98052 Continuous Flash Hugues Hoppe Kentaro Toyama October 1, 2003 Technical Report MSR-TR-2003-63 Microsoft Research Microsoft Corporation One Microsoft Way Redmond, WA 98052 Page 1 of 7 Abstract To take a

More information

LENSES. INEL 6088 Computer Vision

LENSES. INEL 6088 Computer Vision LENSES INEL 6088 Computer Vision Digital camera A digital camera replaces film with a sensor array Each cell in the array is a Charge Coupled Device light-sensitive diode that converts photons to electrons

More information

Automatic Selection of Brackets for HDR Image Creation

Automatic Selection of Brackets for HDR Image Creation Automatic Selection of Brackets for HDR Image Creation Michel VIDAL-NAQUET, Wei MING Abstract High Dynamic Range imaging (HDR) is now readily available on mobile devices such as smart phones and compact

More information

Overview. Pinhole camera model Projective geometry Vanishing points and lines Projection matrix Cameras with Lenses Color Digital image

Overview. Pinhole camera model Projective geometry Vanishing points and lines Projection matrix Cameras with Lenses Color Digital image Camera & Color Overview Pinhole camera model Projective geometry Vanishing points and lines Projection matrix Cameras with Lenses Color Digital image Book: Hartley 6.1, Szeliski 2.1.5, 2.2, 2.3 The trip

More information

Using Spatially Varying Pixels Exposures and Bayer-covered Photosensors for High Dynamic Range Imaging

Using Spatially Varying Pixels Exposures and Bayer-covered Photosensors for High Dynamic Range Imaging IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 1 Using Spatially Varying Pixels Exposures and Bayer-covered Photosensors for High Dynamic Range Imaging Mikhail V. Konnik arxiv:0803.2812v2

More information

A moment-preserving approach for depth from defocus

A moment-preserving approach for depth from defocus A moment-preserving approach for depth from defocus D. M. Tsai and C. T. Lin Machine Vision Lab. Department of Industrial Engineering and Management Yuan-Ze University, Chung-Li, Taiwan, R.O.C. E-mail:

More information

Cameras. CSE 455, Winter 2010 January 25, 2010

Cameras. CSE 455, Winter 2010 January 25, 2010 Cameras CSE 455, Winter 2010 January 25, 2010 Announcements New Lecturer! Neel Joshi, Ph.D. Post-Doctoral Researcher Microsoft Research neel@cs Project 1b (seam carving) was due on Friday the 22 nd Project

More information

Coded Aperture for Projector and Camera for Robust 3D measurement

Coded Aperture for Projector and Camera for Robust 3D measurement Coded Aperture for Projector and Camera for Robust 3D measurement Yuuki Horita Yuuki Matugano Hiroki Morinaga Hiroshi Kawasaki Satoshi Ono Makoto Kimura Yasuo Takane Abstract General active 3D measurement

More information

Simulated Programmable Apertures with Lytro

Simulated Programmable Apertures with Lytro Simulated Programmable Apertures with Lytro Yangyang Yu Stanford University yyu10@stanford.edu Abstract This paper presents a simulation method using the commercial light field camera Lytro, which allows

More information

Realistic Image Synthesis

Realistic Image Synthesis Realistic Image Synthesis - HDR Capture & Tone Mapping - Philipp Slusallek Karol Myszkowski Gurprit Singh Karol Myszkowski LDR vs HDR Comparison Various Dynamic Ranges (1) 10-6 10-4 10-2 100 102 104 106

More information

Image Formation. Dr. Gerhard Roth. COMP 4102A Winter 2014 Version 1

Image Formation. Dr. Gerhard Roth. COMP 4102A Winter 2014 Version 1 Image Formation Dr. Gerhard Roth COMP 4102A Winter 2014 Version 1 Image Formation Two type of images Intensity image encodes light intensities (passive sensor) Range (depth) image encodes shape and distance

More information

The ultimate camera. Computational Photography. Creating the ultimate camera. The ultimate camera. What does it do?

The ultimate camera. Computational Photography. Creating the ultimate camera. The ultimate camera. What does it do? Computational Photography The ultimate camera What does it do? Image from Durand & Freeman s MIT Course on Computational Photography Today s reading Szeliski Chapter 9 The ultimate camera Infinite resolution

More information

Computer Generated Holograms for Testing Optical Elements

Computer Generated Holograms for Testing Optical Elements Reprinted from APPLIED OPTICS, Vol. 10, page 619. March 1971 Copyright 1971 by the Optical Society of America and reprinted by permission of the copyright owner Computer Generated Holograms for Testing

More information

Revisiting Image Vignetting Correction by Constrained Minimization of log-intensity Entropy

Revisiting Image Vignetting Correction by Constrained Minimization of log-intensity Entropy Revisiting Image Vignetting Correction by Constrained Minimization of log-intensity Entropy Laura Lopez-Fuentes, Gabriel Oliver, and Sebastia Massanet Dept. Mathematics and Computer Science, University

More information

Image Formation: Camera Model

Image Formation: Camera Model Image Formation: Camera Model Ruigang Yang COMP 684 Fall 2005, CS684-IBMR Outline Camera Models Pinhole Perspective Projection Affine Projection Camera with Lenses Digital Image Formation The Human Eye

More information

GEOMETRICAL OPTICS AND OPTICAL DESIGN

GEOMETRICAL OPTICS AND OPTICAL DESIGN GEOMETRICAL OPTICS AND OPTICAL DESIGN Pantazis Mouroulis Associate Professor Center for Imaging Science Rochester Institute of Technology John Macdonald Senior Lecturer Physics Department University of

More information

Lecture Notes 10 Image Sensor Optics. Imaging optics. Pixel optics. Microlens

Lecture Notes 10 Image Sensor Optics. Imaging optics. Pixel optics. Microlens Lecture Notes 10 Image Sensor Optics Imaging optics Space-invariant model Space-varying model Pixel optics Transmission Vignetting Microlens EE 392B: Image Sensor Optics 10-1 Image Sensor Optics Microlens

More information

TSBB09 Image Sensors 2018-HT2. Image Formation Part 1

TSBB09 Image Sensors 2018-HT2. Image Formation Part 1 TSBB09 Image Sensors 2018-HT2 Image Formation Part 1 Basic physics Electromagnetic radiation consists of electromagnetic waves With energy That propagate through space The waves consist of transversal

More information

RESOLUTION ENHANCEMENT FOR COLOR TWEAK IN IMAGE MOSAICKING SOLICITATIONS

RESOLUTION ENHANCEMENT FOR COLOR TWEAK IN IMAGE MOSAICKING SOLICITATIONS RESOLUTION ENHANCEMENT FOR COLOR TWEAK IN IMAGE MOSAICKING SOLICITATIONS G.Annalakshmi 1, P.Samundeeswari 2, K.Jainthi 3 1,2,3 Dept. of ECE, Alpha college of Engineering and Technology, Pondicherry, India.

More information

Using Line and Ellipse Features for Rectification of Broadcast Hockey Video

Using Line and Ellipse Features for Rectification of Broadcast Hockey Video Using Line and Ellipse Features for Rectification of Broadcast Hockey Video Ankur Gupta, James J. Little, Robert J. Woodham Laboratory for Computational Intelligence (LCI) The University of British Columbia

More information

HDR imaging Automatic Exposure Time Estimation A novel approach

HDR imaging Automatic Exposure Time Estimation A novel approach HDR imaging Automatic Exposure Time Estimation A novel approach Miguel A. MARTÍNEZ,1 Eva M. VALERO,1 Javier HERNÁNDEZ-ANDRÉS,1 Javier ROMERO,1 1 Color Imaging Laboratory, University of Granada, Spain.

More information

FOG REMOVAL ALGORITHM USING ANISOTROPIC DIFFUSION AND HISTOGRAM STRETCHING

FOG REMOVAL ALGORITHM USING ANISOTROPIC DIFFUSION AND HISTOGRAM STRETCHING FOG REMOVAL ALGORITHM USING DIFFUSION AND HISTOGRAM STRETCHING 1 G SAILAJA, 2 M SREEDHAR 1 PG STUDENT, 2 LECTURER 1 DEPARTMENT OF ECE 1 JNTU COLLEGE OF ENGINEERING (Autonomous), ANANTHAPURAMU-5152, ANDRAPRADESH,

More information

Projection. Readings. Szeliski 2.1. Wednesday, October 23, 13

Projection. Readings. Szeliski 2.1. Wednesday, October 23, 13 Projection Readings Szeliski 2.1 Projection Readings Szeliski 2.1 Müller-Lyer Illusion by Pravin Bhat Müller-Lyer Illusion by Pravin Bhat http://www.michaelbach.de/ot/sze_muelue/index.html Müller-Lyer

More information

Image Formation. Dr. Gerhard Roth. COMP 4102A Winter 2015 Version 3

Image Formation. Dr. Gerhard Roth. COMP 4102A Winter 2015 Version 3 Image Formation Dr. Gerhard Roth COMP 4102A Winter 2015 Version 3 1 Image Formation Two type of images Intensity image encodes light intensities (passive sensor) Range (depth) image encodes shape and distance

More information

Image Formation and Camera Design

Image Formation and Camera Design Image Formation and Camera Design Spring 2003 CMSC 426 Jan Neumann 2/20/03 Light is all around us! From London & Upton, Photography Conventional camera design... Ken Kay, 1969 in Light & Film, TimeLife

More information

High-Resolution Interactive Panoramas with MPEG-4

High-Resolution Interactive Panoramas with MPEG-4 High-Resolution Interactive Panoramas with MPEG-4 Peter Eisert, Yong Guo, Anke Riechers, Jürgen Rurainsky Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institute Image Processing Department

More information

Announcements. Image Formation: Outline. The course. How Cameras Produce Images. Earliest Surviving Photograph. Image Formation and Cameras

Announcements. Image Formation: Outline. The course. How Cameras Produce Images. Earliest Surviving Photograph. Image Formation and Cameras Announcements Image ormation and Cameras CSE 252A Lecture 3 Assignment 0: Getting Started with Matlab is posted to web page, due Tuesday, ctober 4. Reading: Szeliski, Chapter 2 ptional Chapters 1 & 2 of

More information

LENSLESS IMAGING BY COMPRESSIVE SENSING

LENSLESS IMAGING BY COMPRESSIVE SENSING LENSLESS IMAGING BY COMPRESSIVE SENSING Gang Huang, Hong Jiang, Kim Matthews and Paul Wilford Bell Labs, Alcatel-Lucent, Murray Hill, NJ 07974 ABSTRACT In this paper, we propose a lensless compressive

More information

TOWARDS RADIOMETRICAL ALIGNMENT OF 3D POINT CLOUDS

TOWARDS RADIOMETRICAL ALIGNMENT OF 3D POINT CLOUDS TOWARDS RADIOMETRICAL ALIGNMENT OF 3D POINT CLOUDS H. A. Lauterbach, D. Borrmann, A. Nu chter Informatics VII Robotics and Telematics, Julius-Maximilians University Wu rzburg, Germany (helge.lauterbach,

More information

BROADCAST ENGINEERING 5/05 WHITE PAPER TUTORIAL. HEADLINE: HDTV Lens Design: Management of Light Transmission

BROADCAST ENGINEERING 5/05 WHITE PAPER TUTORIAL. HEADLINE: HDTV Lens Design: Management of Light Transmission BROADCAST ENGINEERING 5/05 WHITE PAPER TUTORIAL HEADLINE: HDTV Lens Design: Management of Light Transmission By Larry Thorpe and Gordon Tubbs Broadcast engineers have a comfortable familiarity with electronic

More information

ECEN 4606, UNDERGRADUATE OPTICS LAB

ECEN 4606, UNDERGRADUATE OPTICS LAB ECEN 4606, UNDERGRADUATE OPTICS LAB Lab 2: Imaging 1 the Telescope Original Version: Prof. McLeod SUMMARY: In this lab you will become familiar with the use of one or more lenses to create images of distant

More information

Restoration of Motion Blurred Document Images

Restoration of Motion Blurred Document Images Restoration of Motion Blurred Document Images Bolan Su 12, Shijian Lu 2 and Tan Chew Lim 1 1 Department of Computer Science,School of Computing,National University of Singapore Computing 1, 13 Computing

More information

CPSC 425: Computer Vision

CPSC 425: Computer Vision 1 / 55 CPSC 425: Computer Vision Instructor: Fred Tung ftung@cs.ubc.ca Department of Computer Science University of British Columbia Lecture Notes 2015/2016 Term 2 2 / 55 Menu January 7, 2016 Topics: Image

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 22: Computational photography photomatix.com Announcements Final project midterm reports due on Tuesday to CMS by 11:59pm BRDF s can be incredibly complicated

More information

Lenses, exposure, and (de)focus

Lenses, exposure, and (de)focus Lenses, exposure, and (de)focus http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 15 Course announcements Homework 4 is out. - Due October 26

More information

CS 443: Imaging and Multimedia Cameras and Lenses

CS 443: Imaging and Multimedia Cameras and Lenses CS 443: Imaging and Multimedia Cameras and Lenses Spring 2008 Ahmed Elgammal Dept of Computer Science Rutgers University Outlines Cameras and lenses! 1 They are formed by the projection of 3D objects.

More information

Distance Estimation with a Two or Three Aperture SLR Digital Camera

Distance Estimation with a Two or Three Aperture SLR Digital Camera Distance Estimation with a Two or Three Aperture SLR Digital Camera Seungwon Lee, Joonki Paik, and Monson H. Hayes Graduate School of Advanced Imaging Science, Multimedia, and Film Chung-Ang University

More information

Acquisition Basics. How can we measure material properties? Goal of this Section. Special Purpose Tools. General Purpose Tools

Acquisition Basics. How can we measure material properties? Goal of this Section. Special Purpose Tools. General Purpose Tools Course 10 Realistic Materials in Computer Graphics Acquisition Basics MPI Informatik (moving to the University of Washington Goal of this Section practical, hands-on description of acquisition basics general

More information

Panoramic Image Mosaics

Panoramic Image Mosaics Panoramic Image Mosaics Image Stitching Computer Vision CSE 576, Spring 2008 Richard Szeliski Microsoft Research Full screen panoramas (cubic): http://www.panoramas.dk/ Mars: http://www.panoramas.dk/fullscreen3/f2_mars97.html

More information

Digital photography , , Computational Photography Fall 2017, Lecture 2

Digital photography , , Computational Photography Fall 2017, Lecture 2 Digital photography http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 2 Course announcements To the 14 students who took the course survey on

More information

Photographing Long Scenes with Multiviewpoint

Photographing Long Scenes with Multiviewpoint Photographing Long Scenes with Multiviewpoint Panoramas A. Agarwala, M. Agrawala, M. Cohen, D. Salesin, R. Szeliski Presenter: Stacy Hsueh Discussant: VasilyVolkov Motivation Want an image that shows an

More information

EC-433 Digital Image Processing

EC-433 Digital Image Processing EC-433 Digital Image Processing Lecture 2 Digital Image Fundamentals Dr. Arslan Shaukat 1 Fundamental Steps in DIP Image Acquisition An image is captured by a sensor (such as a monochrome or color TV camera)

More information

Denoising and Effective Contrast Enhancement for Dynamic Range Mapping

Denoising and Effective Contrast Enhancement for Dynamic Range Mapping Denoising and Effective Contrast Enhancement for Dynamic Range Mapping G. Kiruthiga Department of Electronics and Communication Adithya Institute of Technology Coimbatore B. Hakkem Department of Electronics

More information

Projection. Projection. Image formation. Müller-Lyer Illusion. Readings. Readings. Let s design a camera. Szeliski 2.1. Szeliski 2.

Projection. Projection. Image formation. Müller-Lyer Illusion. Readings. Readings. Let s design a camera. Szeliski 2.1. Szeliski 2. Projection Projection Readings Szeliski 2.1 Readings Szeliski 2.1 Müller-Lyer Illusion Image formation object film by Pravin Bhat http://www.michaelbach.de/ot/sze_muelue/index.html Let s design a camera

More information

ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES

ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES Petteri PÖNTINEN Helsinki University of Technology, Institute of Photogrammetry and Remote Sensing, Finland petteri.pontinen@hut.fi KEY WORDS: Cocentricity,

More information

Performance Evaluation of Different Depth From Defocus (DFD) Techniques

Performance Evaluation of Different Depth From Defocus (DFD) Techniques Please verify that () all pages are present, () all figures are acceptable, (3) all fonts and special characters are correct, and () all text and figures fit within the Performance Evaluation of Different

More information

High Dynamic Range Video with Ghost Removal

High Dynamic Range Video with Ghost Removal High Dynamic Range Video with Ghost Removal Stephen Mangiat and Jerry Gibson University of California, Santa Barbara, CA, 93106 ABSTRACT We propose a new method for ghost-free high dynamic range (HDR)

More information

Single Camera Catadioptric Stereo System

Single Camera Catadioptric Stereo System Single Camera Catadioptric Stereo System Abstract In this paper, we present a framework for novel catadioptric stereo camera system that uses a single camera and a single lens with conic mirrors. Various

More information

ABSTRACT 1. INTRODUCTION

ABSTRACT 1. INTRODUCTION The role of aberrations in the relative illumination of a lens system Dmitry Reshidko* and Jose Sasian College of Optical Sciences, University of Arizona, Tucson, AZ, 857, USA ABSTRACT Several factors

More information

Cameras, lenses and sensors

Cameras, lenses and sensors Cameras, lenses and sensors Marc Pollefeys COMP 256 Cameras, lenses and sensors Camera Models Pinhole Perspective Projection Affine Projection Camera with Lenses Sensing The Human Eye Reading: Chapter.

More information

Why learn about photography in this course?

Why learn about photography in this course? Why learn about photography in this course? Geri's Game: Note the background is blurred. - photography: model of image formation - Many computer graphics methods use existing photographs e.g. texture &

More information

1.Discuss the frequency domain techniques of image enhancement in detail.

1.Discuss the frequency domain techniques of image enhancement in detail. 1.Discuss the frequency domain techniques of image enhancement in detail. Enhancement In Frequency Domain: The frequency domain methods of image enhancement are based on convolution theorem. This is represented

More information

DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 2002

DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 2002 DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 22 Topics: Human eye Visual phenomena Simple image model Image enhancement Point processes Histogram Lookup tables Contrast compression and stretching

More information

Issues in Color Correcting Digital Images of Unknown Origin

Issues in Color Correcting Digital Images of Unknown Origin Issues in Color Correcting Digital Images of Unknown Origin Vlad C. Cardei rian Funt and Michael rockington vcardei@cs.sfu.ca funt@cs.sfu.ca brocking@sfu.ca School of Computing Science Simon Fraser University

More information

HIGH DYNAMIC RANGE MAP ESTIMATION VIA FULLY CONNECTED RANDOM FIELDS WITH STOCHASTIC CLIQUES

HIGH DYNAMIC RANGE MAP ESTIMATION VIA FULLY CONNECTED RANDOM FIELDS WITH STOCHASTIC CLIQUES HIGH DYNAMIC RANGE MAP ESTIMATION VIA FULLY CONNECTED RANDOM FIELDS WITH STOCHASTIC CLIQUES F. Y. Li, M. J. Shafiee, A. Chung, B. Chwyl, F. Kazemzadeh, A. Wong, and J. Zelek Vision & Image Processing Lab,

More information

Image Filtering in Spatial domain. Computer Vision Jia-Bin Huang, Virginia Tech

Image Filtering in Spatial domain. Computer Vision Jia-Bin Huang, Virginia Tech Image Filtering in Spatial domain Computer Vision Jia-Bin Huang, Virginia Tech Administrative stuffs Lecture schedule changes Office hours - Jia-Bin (44 Whittemore Hall) Friday at : AM 2: PM Office hours

More information

lecture 24 image capture - photography: model of image formation - image blur - camera settings (f-number, shutter speed) - exposure - camera response

lecture 24 image capture - photography: model of image formation - image blur - camera settings (f-number, shutter speed) - exposure - camera response lecture 24 image capture - photography: model of image formation - image blur - camera settings (f-number, shutter speed) - exposure - camera response - application: high dynamic range imaging Why learn

More information

multiframe visual-inertial blur estimation and removal for unmodified smartphones

multiframe visual-inertial blur estimation and removal for unmodified smartphones multiframe visual-inertial blur estimation and removal for unmodified smartphones, Severin Münger, Carlo Beltrame, Luc Humair WSCG 2015, Plzen, Czech Republic images taken by non-professional photographers

More information

Making a Panoramic Digital Image of the Entire Northern Sky

Making a Panoramic Digital Image of the Entire Northern Sky Making a Panoramic Digital Image of the Entire Northern Sky Anne M. Rajala anne2006@caltech.edu, x1221, MSC #775 Mentors: Ashish Mahabal and S.G. Djorgovski October 3, 2003 Abstract The Digitized Palomar

More information

Fast Focal Length Solution in Partial Panoramic Image Stitching

Fast Focal Length Solution in Partial Panoramic Image Stitching Fast Focal Length Solution in Partial Panoramic Image Stitching Kirk L. Duffin Northern Illinois University duffin@cs.niu.edu William A. Barrett Brigham Young University barrett@cs.byu.edu Abstract Accurate

More information

Some of the important topics needed to be addressed in a successful lens design project (R.R. Shannon: The Art and Science of Optical Design)

Some of the important topics needed to be addressed in a successful lens design project (R.R. Shannon: The Art and Science of Optical Design) Lens design Some of the important topics needed to be addressed in a successful lens design project (R.R. Shannon: The Art and Science of Optical Design) Focal length (f) Field angle or field size F/number

More information

Automatic High Dynamic Range Image Generation for Dynamic Scenes

Automatic High Dynamic Range Image Generation for Dynamic Scenes Automatic High Dynamic Range Image Generation for Dynamic Scenes IEEE Computer Graphics and Applications Vol. 28, Issue. 2, April 2008 Katrien Jacobs, Celine Loscos, and Greg Ward Presented by Yuan Xi

More information

Coding and Modulation in Cameras

Coding and Modulation in Cameras Coding and Modulation in Cameras Amit Agrawal June 2010 Mitsubishi Electric Research Labs (MERL) Cambridge, MA, USA Coded Computational Imaging Agrawal, Veeraraghavan, Narasimhan & Mohan Schedule Introduction

More information

Recognizing Panoramas

Recognizing Panoramas Recognizing Panoramas Kevin Luo Stanford University 450 Serra Mall, Stanford, CA 94305 kluo8128@stanford.edu Abstract This project concerns the topic of panorama stitching. Given a set of overlapping photos,

More information

Dual-fisheye Lens Stitching for 360-degree Imaging & Video. Tuan Ho, PhD. Student Electrical Engineering Dept., UT Arlington

Dual-fisheye Lens Stitching for 360-degree Imaging & Video. Tuan Ho, PhD. Student Electrical Engineering Dept., UT Arlington Dual-fisheye Lens Stitching for 360-degree Imaging & Video Tuan Ho, PhD. Student Electrical Engineering Dept., UT Arlington Introduction 360-degree imaging: the process of taking multiple photographs and

More information

Demosaicing Algorithm for Color Filter Arrays Based on SVMs

Demosaicing Algorithm for Color Filter Arrays Based on SVMs www.ijcsi.org 212 Demosaicing Algorithm for Color Filter Arrays Based on SVMs Xiao-fen JIA, Bai-ting Zhao School of Electrical and Information Engineering, Anhui University of Science & Technology Huainan

More information

Adding Realistic Camera Effects to the Computer Graphics Camera Model

Adding Realistic Camera Effects to the Computer Graphics Camera Model Adding Realistic Camera Effects to the Computer Graphics Camera Model Ryan Baltazar May 4, 2012 1 Introduction The camera model traditionally used in computer graphics is based on the camera obscura or

More information

This document explains the reasons behind this phenomenon and describes how to overcome it.

This document explains the reasons behind this phenomenon and describes how to overcome it. Internal: 734-00583B-EN Release date: 17 December 2008 Cast Effects in Wide Angle Photography Overview Shooting images with wide angle lenses and exploiting large format camera movements can result in

More information

Overview. Image formation - 1

Overview. Image formation - 1 Overview perspective imaging Image formation Refraction of light Thin-lens equation Optical power and accommodation Image irradiance and scene radiance Digital images Introduction to MATLAB Image formation

More information

Lenses. Overview. Terminology. The pinhole camera. Pinhole camera Lenses Principles of operation Limitations

Lenses. Overview. Terminology. The pinhole camera. Pinhole camera Lenses Principles of operation Limitations Overview Pinhole camera Principles of operation Limitations 1 Terminology The pinhole camera The first camera - camera obscura - known to Aristotle. In 3D, we can visualize the blur induced by the pinhole

More information