Sequential Algorithm for Robust Radiometric Calibration and Vignetting Correction

Seon Joo Kim and Marc Pollefeys
Department of Computer Science, University of North Carolina, Chapel Hill, NC 27599
{sjkim, marc}@cs.unc.edu

Abstract

In many computer vision systems, it is assumed that the image brightness directly reflects the scene radiance. However, the assumption does not hold in most cases, due to the nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics, where colors look inconsistent and notable boundaries exist. In this paper, we propose a sequential algorithm for robust estimation of the radiometric response function, exposures, and vignetting. By decoupling the effect of vignetting from the response function, we approach each process in a manner that is robust to outliers and derive closed-form solutions. Applying our method, we were able to successfully remove radiometric artifacts in image mosaics, and we also present a comparison of our method with a previous method based on simultaneous nonlinear optimization.

1. Introduction

What determines the brightness at a certain point in an image? How is the image brightness related to the actual scene brightness? Scene brightness can be defined by the term radiance, which is the power per unit foreshortened area emitted into a unit solid angle by a surface [13]. After passing through the lens system, the power of radiant energy falling on the image plane is called the irradiance. Irradiance is then transformed to image brightness (Fig. 1).

Figure 1. Illustration of the basic radiometric concepts [23].

Figure 2. An image mosaic showing the effects of vignetting and exposure changes [9].

In many computer vision systems, it is assumed that the image brightness directly reflects the scene radiance. However, the assumption does not hold in most cases, as shown in Fig. 2. A nonlinear function called the radiometric response function, which describes the relationship between irradiance and image brightness, is, together with the camera exposure, responsible for the color inconsistency from image to image in the mosaic. The lens falloff phenomenon, in which the amount of light (radiance) hitting the image plane varies spatially, causes the sharp intensity variations, or bands, at the image boundaries in the mosaic.

There are several factors behind the lens falloff phenomenon. The cosine-fourth law is one of the effects responsible for the falloff. It defines the relationship between radiance (L) and irradiance (E) using a simple camera model consisting of a thin lens and an image plane [13]. Eq. 1 shows that irradiance is proportional to radiance but decreases as the cosine-fourth of the angle \alpha that a ray makes with the optical axis. In the equation, d is the radius of the lens and f denotes the distance between the lens and the image plane.

E = L \frac{\pi d^2}{4 f^2} \cos^4 \alpha \qquad (1)

Most cameras are designed to compensate for the cosine-fourth effect [2], and the most dominant factor for irradiance falloff in the image periphery is vignetting.
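Before turning to vignetting, a quick numeric illustration of Eq. 1 (our own sketch, not part of the original paper) shows how much light the cosine-fourth term alone removes; the relative falloff E/E_0 depends only on \alpha:

```python
import numpy as np

def cos4_falloff(alpha_deg):
    """Relative irradiance E/E_0 predicted by the cosine-fourth law (Eq. 1),
    normalized so a ray along the optical axis (alpha = 0) gives 1."""
    return np.cos(np.deg2rad(alpha_deg)) ** 4

# Falloff toward the image periphery for increasingly oblique rays.
for alpha in (0, 10, 20, 30):
    print(f"alpha = {alpha:2d} deg -> E/E0 = {cos4_falloff(alpha):.3f}")
# At 30 degrees the prediction is about 0.563: nearly half the light is lost.
```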

The vignetting effect refers to the gradual fading of an image toward its periphery, caused by the blocking of part of the incident ray bundle by the effective size of the aperture [25]. The effect of vignetting increases as the size of the aperture increases, and vice versa; with a pinhole camera there would be no vignetting. Another phenomenon, called pupil aberration, has been described as a third important cause of the fall in irradiance away from the image center, in addition to the cosine-fourth law and vignetting [1]. Pupil aberration is caused by nonlinear refraction of the rays, which results in a significantly nonuniform light distribution across the aperture.

In this paper, we use a vignetting model that explains the observed irradiance falloff behavior rather than trying to physically model this radiometric distortion, which is caused by a combination of different factors. While there are multiple causes for the irradiance falloff, we will call the process of correcting this distortion vignetting correction, since vignetting is the most dominant factor and to conform with previous works.

2. Previous Works

Recently, much work has been done on finding the relationship between scene radiance and image intensity. The majority of research assumes linearity between radiance and irradiance (no vignetting), concentrating on estimating the radiometric response function [8, 12, 16, 18]. Conventional methods for correcting vignetting involve taking a reference image of a non-specular object, such as a sheet of paper, under uniform white illumination. This reference image is then used to build a correction lookup table or to fit a parametric correction function. In [2], Asada et al. proposed a camera model using a variable cone that accounts for vignetting effects in zoom lens systems; the parameters of the variable-cone model were estimated from images of a uniform radiance field. Yu et al. proposed a hypercosine function to represent the vignetting distortion of each scanline in [26]. They extended their work to a 2D hypercosine model in [25] and also introduced an anti-vignetting method based on wavelet denoising and decimation. Other vignetting models include a simple form using radial distance and focal length [24], a third-order polynomial model [3], a first-order Taylor expansion [21], and an empirical exponential function [7].

In [10] and [15], vignetting was used for camera calibration. In these works, the radiometric response function was ignored and vignetting was modelled in the image intensity domain rather than in the irradiance domain. Schechner and Nayar [22] exploited the vignetting effect to capture high dynamic range intensity values. In their work, they calibrate the intended vignetting using a linear least-squares fit on the image data itself rather than using a reference image; their work assumes either a linear response function or a known one. Jia and Tang [14] recently presented a method to correct global and local intensity variation using tensor voting. In [19, 20], Litvinov and Schechner presented a solution for simultaneously estimating the unknown response function, exposures, and vignetting from a normal image sequence. They achieve this with a nonparametric linear least-squares method using the common areas between images. Most closely related to our work, Goldman and Chen presented a solution for estimating the response function, gain, and vignetting factor, and applied it to radiometrically align images for seamless mosaics [9].
They use the empirical model of response (EMoR) [12] to model the response function and a polynomial model for vignetting, and they estimate the model parameters simultaneously by nonlinear optimization.

3. Our Approach

The radiometric process of image formation shown in Fig. 1 can be stated mathematically as follows:

I_x = f(k \, M(r_x) \, L_X) \qquad (2)

L_X is the radiance of a scene point X towards the camera, I_x is the image intensity at the projected image point x, k is the exposure, f() is the radiometric response function, M() is the vignetting function, and r_x is the normalized radius of x from the center of vignetting. We assume in this paper that the center of vignetting coincides with the center of the image. Eq. 2 can be rewritten as follows:

\ln(f^{-1}(I_x)) = \ln k + \ln M(r_x) + \ln L_X \qquad (3)

g(I_x) = K + \ln M(r_x) + \ln L_X \qquad (4)

The goal of our work is to estimate f() (or g()), M(), and k (or K) given a set of images with correspondences. For correspondence, homographies between images are computed using the software Autostitch [4].

While the camera response function and the vignetting function were estimated simultaneously in [9] and [20], we approach the problem differently by robustly computing the response function and the vignetting function separately. Separating the two processes is possible by decoupling the vignetting process from the radiometric response function estimation. By separating the two, we derive a closed-form solution for each process that is robust against outliers by approaching the problem in a maximum likelihood fashion. This is also a big advantage over nonlinear optimization as used in [9]: it saves a lot of computation time, avoids issues with potential local minima, and increases accuracy by allowing a more complicated model.
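To make the decomposition of Eqs. 2-4 concrete, the sketch below (our own illustration) simulates the forward model with a hypothetical gamma-curve response and a hypothetical vignetting profile; both are stand-ins, since the paper estimates f with the EMoR model and M with a polynomial. It verifies that taking g = \ln f^{-1} turns the model into the additive form of Eq. 4:

```python
import numpy as np

# A minimal forward simulation of the radiometric model of Eqs. 2-4.
# The gamma response and the vignetting profile are illustrative only.
def f(e):
    """Response function: irradiance -> image intensity (a gamma curve)."""
    return e ** (1.0 / 2.2)

def g(i):
    """g = ln(f^{-1}), as defined for Eq. 4."""
    return np.log(i ** 2.2)

def M(r):
    """Example vignetting profile with M(0) = 1."""
    return 1.0 - 0.3 * r ** 2

k, L, r = 0.5, 0.8, 0.6      # exposure, scene radiance, normalized radius
I = f(k * M(r) * L)          # Eq. 2: the observed image intensity

# Eq. 4: the log of the inverse response separates into additive terms.
lhs = g(I)
rhs = np.log(k) + np.log(M(r)) + np.log(L)
print(np.isclose(lhs, rhs))  # True
```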

Figure 3. Decoupling the vignetting effect: the figure shows three images stitched into a mosaic. Only corresponding points in the colored bands (red for the first pair and blue for the second) are used for estimating the radiometric response function.

3.1. Estimating the radiometric response function

Eq. 2 shows that the response function f() cannot be recovered without knowledge of the vignetting function M(), and vice versa. Hence, one way to solve the problem is to estimate both functions simultaneously, either in a linear [20] or in a nonlinear [9] way. But if we use corresponding points affected by the same amount of vignetting, we can decouple the vignetting effect from the process and estimate the response function from Eq. 2 without worrying about vignetting.

Let x_i and x_j be the image points of a scene point X in image i and image j respectively. If r_{x_i} = r_{x_j} then M(r_{x_i}) = M(r_{x_j}), since we have already assumed that the vignetting model is the same for all images. Hence, by using corresponding points that are at equal distance from the center of each image (such points form a line between the two images), we can decouple the vignetting effect. In practice, we allow correspondences that are close to equal distance from the center rather than strictly enforcing equal distances; this provides more data and still yields good results, since vignetting varies slowly (Fig. 3). With these correspondences, we obtain the following equation from Eq. 4:

g(I_{x_i}) - g(I_{x_j}) = K_i - K_j \qquad (5)

As mentioned, there are many ways to solve the above equation. In this work, we adopt the method proposed in [16] due to its robustness against noise and mismatches. Rather than solving the problem in a least-squares sense using all of the correspondences, which may include many mismatches and much noise, it approaches the problem in a maximum likelihood fashion, computing by dynamic programming the brightness transfer function that explains how brightness changes from one image to another. The empirical model of response (EMoR) is then used to model the response function [12].

One modification has to be made to the algorithm presented in [16]. Because only a small region of the images is used at this stage, the intensity distribution may be sparse. This might result in the robustly computed brightness transfer function only being valid for the parts of the curve that were sufficiently supported by observations. This can easily be dealt with by using a version of Eq. 5 that is weighted according to the number of observations supporting a particular point on the robust brightness transfer function.
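The correspondence filtering of Section 3.1 can be sketched as below: keep only matches whose points lie at (nearly) the same normalized radius from their respective image centers, so that M(r_{x_i}) = M(r_{x_j}) cancels and Eq. 5 holds regardless of vignetting. The function name, the tolerance, and the normalization are our own illustrative choices, not prescribed by the paper:

```python
import numpy as np

def equal_radius_pairs(pts_i, pts_j, center_i, center_j, max_radius,
                       tol=0.02):
    """Return a boolean mask over corresponding point pairs, keeping those
    with |r_xi - r_xj| < tol. pts_i and pts_j are (N, 2) pixel coordinates;
    radii are normalized to [0, 1] by max_radius (e.g. half the diagonal)."""
    r_i = np.linalg.norm(pts_i - np.asarray(center_i), axis=1) / max_radius
    r_j = np.linalg.norm(pts_j - np.asarray(center_j), axis=1) / max_radius
    return np.abs(r_i - r_j) < tol
```

For the surviving pairs, g(I_{x_i}) - g(I_{x_j}) = K_i - K_j irrespective of M, which is exactly what allows the response function to be estimated first.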
3.2. Vignetting Correction

After estimating the response function f and the exposure k, each image intensity value is transformed into an irradiance value E in order to compute the vignetting function M:

E_x = \frac{f^{-1}(I_x)}{k} = M(r_x) L_X \qquad (6)

Since the radiance L_X is the same for x_i and x_j, we get

\frac{E_{x_i}}{M(r_{x_i})} = \frac{E_{x_j}}{M(r_{x_j})} \qquad (7)

As presented in Section 2, many models for vignetting exist. In this paper, we chose the polynomial model used in [9]. In [9], a third-order polynomial was used for the vignetting model, and it was estimated together with the response function by nonlinear optimization. By computing the response function independently of vignetting in our first step, we can now compute the polynomial vignetting model linearly. This saves a great deal of computation time compared to the nonlinear optimization scheme used in [9], avoids issues with potential local minima, and also enables us to easily use a much higher-order polynomial for more accuracy. The vignetting model is given by

M(r) = 1 + \sum_{n=1}^{N} c_n r^{2n} \qquad (8)

Let a = E_{x_i} / E_{x_j}; then Eq. 7 gives a M(r_{x_j}) = M(r_{x_i}), and substituting the model of Eq. 8 yields

\sum_{n=1}^{N} c_n (a \, r_{x_j}^{2n} - r_{x_i}^{2n}) = 1 - a \qquad (9)

One obvious choice for solving for the N unknowns c_n is least squares, since each corresponding pair of points in the given image pairs provides an additional equation of the form of Eq. 9. But in the presence of many outliers, least squares will not give us a robust solution. We propose to approach the problem in a manner similar to the way we computed the response function in the first stage: rather than solving the problem in a least-squares sense, we once again solve it in a robust fashion. For each combination of r_{x_i} and r_{x_j} (discretized), we estimate \hat{a}(r_{x_i}, r_{x_j}).

Figure 4. Example of estimating the irradiance ratio \hat{a}: for every matching pair with r_{x_i} = r_1 and r_{x_j} = r_2, the value a is stacked in s(r_1, r_2). \hat{a}(r_1, r_2) is computed as the median of the stacked values, which is a robust estimate of the ratio a for the given r_{x_i} and r_{x_j}.

For every pair of matching points in the image sequence, the value a is computed and stacked at the corresponding s(r_{x_i}, r_{x_j}). In the end, \hat{a}(r_{x_i}, r_{x_j}) is computed as the median of the stacked values s(r_{x_i}, r_{x_j}) (see Fig. 4). Notice that we only have to keep track of the cases where r_{x_i} > r_{x_j} due to symmetry. If a discretization of 100 × 100 is used, we now have fewer than 5000 equations of the form of Eq. 9 instead of one equation per matching pair of points. By using the median, the approach is less sensitive to outliers caused by small misalignments.

Now we build the matrix formulation of the problem, Ac = b. The n-th column of a row of the matrix A is (\hat{a}(r_{x_i}, r_{x_j}) r_{x_j}^{2n} - r_{x_i}^{2n}) and the n-th element of the vector b is 1 - \hat{a}(r_{x_i}, r_{x_j}). Note that we weight each row of the matrix A and of b by the number of elements in the stack s(r_{x_i}, r_{x_j}). Finally, the model parameter vector c is the solution of the following least-squares problem, which can be solved using the singular value decomposition (SVD):

\hat{c} = \arg\min_c \|Ac - b\|^2 \qquad (10)
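The following is a minimal sketch of this robust fit (Eqs. 8-10), assuming the irradiances and normalized radii of the matching points are already available; the bin count and all names are illustrative choices. np.linalg.lstsq solves the weighted system via the SVD internally:

```python
import numpy as np
from collections import defaultdict

def fit_vignetting(E_i, E_j, r_i, r_j, N=3, bins=100):
    """Robust fit of the vignetting coefficients c_1..c_N (Eqs. 8-10)."""
    stacks = defaultdict(list)                 # the stacks s(r1, r2)
    for e1, e2, ra, rb in zip(E_i, E_j, r_i, r_j):
        if ra < rb:                            # keep only r_xi > r_xj (symmetry)
            e1, e2, ra, rb = e2, e1, rb, ra
        key = (int(ra * (bins - 1)), int(rb * (bins - 1)))
        stacks[key].append(e1 / e2)            # a = E_xi / E_xj

    rows, rhs = [], []
    for (b1, b2), vals in stacks.items():
        a = np.median(vals)                    # robust ratio estimate a-hat
        w = len(vals)                          # weight: size of the stack
        r1, r2 = b1 / (bins - 1), b2 / (bins - 1)
        rows.append([w * (a * r2 ** (2 * n) - r1 ** (2 * n))
                     for n in range(1, N + 1)])   # one Eq. 9 row of A
        rhs.append(w * (1.0 - a))                 # matching element of b
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return coeffs                              # Eq. 10, solved via the SVD

def M(r, c):
    """Evaluate the polynomial vignetting model of Eq. 8."""
    return 1.0 + sum(cn * r ** (2 * (n + 1)) for n, cn in enumerate(c))
```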

3.3. Exponential Ambiguity

The radiometric calibration process explained thus far is subject to an exponential ambiguity, sometimes called the γ ambiguity [11, 20]. This ambiguity means that if \hat{f}, \hat{k}, and \hat{M} are solutions of Eq. 2, then the whole family \hat{f}^\alpha, \hat{k}^\alpha, and \hat{M}^\alpha also solves the problem. However, this is not a problem for most applications, unless absolute quantitative measurements are required, since all solutions in the family generate the same intensity. In this work, the scale of the solution is fixed in the first stage by fixing the exposure ratio of one image pair.

4. Experiments

In this section, we evaluate the performance of our proposed method. The measure we are interested in is the consistency of intensity values: with the correct radiometric response function, exposure values, and vignetting function, the intensity values of corresponding pixels in different images should be the same (assuming the scene is Lambertian). For this purpose, we radiometrically align all images in a given sequence to a common exposure value. From Eq. 2, the intensity value I at pixel x_i in image i with f(), k_i, and M() is mapped to a new value as in Eq. 11 (a code sketch of this correction follows below). Note that for each image we estimate three exposure values, one per color channel, to account for white balance changes.

I^{new}_{x_i} = f\left( \frac{k_{new} \, f^{-1}(I_{x_i})}{k_i \, M(r_{x_i})} \right) \qquad (11)

We first compare our result with the result from [9]. The first mosaic in Fig. 5 is constructed from images that were taken with different exposures and are affected by vignetting. Image mosaics constructed from images corrected by the method in [9] and by our method are also shown in Fig. 5; for a fair comparison, the same number of parameters was used (N = 3). The mosaic constructed by our method shows much more consistency in color. The two methods are also compared in terms of error histograms. Fig. 6 shows histograms of the intensity differences (I_{x_i} - I_{x_j}) of all corresponding points in the sequence with gradient less than a certain value. While both methods greatly reduce the error of the original images (rms error of 27.1243), errors are reduced more by our method ([9]: rms error of 10.2710; our method: rms error of 8.0988).
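Below is a small sketch of the per-pixel alignment of Eq. 11, assuming f, its inverse f_inv, the vignetting model M, and the exposures come from the earlier stages; for color images this would be applied per channel with the per-channel exposures. The names are our own:

```python
import numpy as np

def radius_map(shape):
    """Normalized distance of every pixel from the image center."""
    h, w = shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(x - (w - 1) / 2.0, y - (h - 1) / 2.0)
    return r / r.max()

def align_image(I, k_i, k_new, f, f_inv, M):
    """Map image I taken at exposure k_i to the common exposure k_new
    (Eq. 11): undo response, exposure, and vignetting, then re-expose."""
    E = f_inv(I) / (k_i * M(radius_map(I.shape)))  # radiance up to scale
    return f(k_new * E)
```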
There is another set of methods, called image blending or feathering, that try to make image mosaics color-consistent [5, 6, 17]. A comparison of the mosaic constructed using our method and the mosaic constructed using multi-band blending [4, 5] is shown in Fig. 5 and Fig. 7. The mosaic constructed by directly blending the original images still shows inconsistency in parts of the mosaic. As can be seen from the figures, the mosaic constructed by applying blending to the images corrected by our method does not show any visible artifacts. As suggested in [9, 20], the two approaches complement each other very well rather than being in competition.

5. Discussion

We have proposed a sequential method for estimating the radiometric response function, exposures, and vignetting. One of the key features of our method is the decoupling of the vignetting effect from the estimation of the response function. By decoupling the two, we can approach each problem sequentially with an estimation method that is robust to noise and outliers. Our method was verified by building image mosaics in which visible seams and color inconsistencies were successfully eliminated. Our method was also compared with an existing method that solves the problem simultaneously in a nonlinear way: the mosaics and error histograms showed a significant reduction of error, and in addition our approach is significantly faster. In the future, we would also like to compare our method to the method proposed in [20].

Note that since we use an estimation process that is robust to outliers at each stage, our method can also be used for images taken with a moving camera rather than just a rotating camera. While correspondences can still be obtained in this case using stereo, outliers and non-Lambertian scene content result in a larger percentage of outliers, and the performance of non-robust approaches would significantly degrade. We are planning to apply this method to full radiometric calibration from images taken with a freely moving camera.

There are a few other problems that we would also like to explore in the future. We have assumed that the center of vignetting coincides with the center of the image, as most other works have. We would like to include the center of vignetting as a quantity that also has to be estimated, since observations show that the vignetting center is sometimes offset from the image center. Finally, we would also like to solve the case where the vignetting is not the same for all images in the sequence.

References

[1] M. Aggarwal, H. Hua, and N. Ahuja. On cosine-fourth and vignetting effects in real lenses. Proc. IEEE Int. Conf. on Computer Vision, July 2001.
[2] N. Asada, A. Amano, and M. Baba. Photometric calibration of zoom lens systems. Proc. IEEE Int. Conf. on Pattern Recognition, pages 186-190, Aug. 2001.
[3] C. M. Bastuscheck. Correction of video camera response using digital techniques. J. of Optical Engineering, 26(12):1257-1262, 1987.
[4] M. Brown and D. Lowe. AutoStitch. http://www.autostitch.net.
[5] M. Brown and D. Lowe. Recognising panoramas. Proc. IEEE Int. Conf. on Computer Vision, Oct. 2003.
[6] P. Burt and E. H. Adelson. A multiresolution spline with application to image mosaics. ACM TOG, 2, 1983.
[7] Y. P. Chen and B. K. Mudunuri. An anti-vignetting technique for super wide field of view mosaicked images. J. of Imaging Technology, 12(5):293-295, 1986.
[8] P. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. Proc. SIGGRAPH 97, pages 369-378, 1997.
[9] D. Goldman and J. Chen. Vignette and exposure calibration and compensation. Proc. IEEE Int. Conf. on Computer Vision, Oct. 2005.
[10] M. D. Grossberg and S. K. Nayar. A general imaging model and a method for finding its parameters. Proc. IEEE Int. Conf. on Computer Vision, July 2001.
[11] M. D. Grossberg and S. K. Nayar. What can be known about the radiometric response function from images? Proc. European Conference on Computer Vision, May 2002.
[12] M. D. Grossberg and S. K. Nayar. Modeling the space of camera response functions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(10), Oct. 2004.
[13] B. K. P. Horn. Robot Vision. Cambridge, Mass., 1986.
[14] J. Jia and C. Tang. Tensor voting for image correction by global and local intensity alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(1), 2005.
[15] S. B. Kang and R. Weiss. Can we calibrate a camera using an image of a flat, textureless Lambertian surface? Proc. of the European Conference on Computer Vision, July 2000.
[16] S. J. Kim and M. Pollefeys. Radiometric alignment of image sequences. Proc. IEEE Conference on Computer Vision and Pattern Recognition, June 2004.
[17] A. Levin, A. Zomet, S. Peleg, and Y. Weiss. Seamless image stitching in the gradient domain. Proc.
of the European Conference on Computer Vision, 2004.
[18] S. Lin, J. Gu, S. Yamazaki, and H. Shum. Radiometric calibration from a single image. Proc. IEEE Conference on Computer Vision and Pattern Recognition, June 2004.
[19] A. Litvinov and Y. Schechner. Addressing radiometric nonidealities: A unified framework. Proc. IEEE Conference on Computer Vision and Pattern Recognition, June 2005.
[20] A. Litvinov and Y. Schechner. Radiometric framework for image mosaicking. Journal of the Optical Society of America (JOSA), 22(5), May 2005.
[21] A. A. Sawchuk. Real-time correction of intensity nonlinearities in imaging systems. IEEE Transactions on Computers, 26(1), 1977.
[22] Y. Schechner and S. Nayar. Generalized mosaicing: High dynamic range in a wide field of view. International Journal of Computer Vision, 53(3), 2003.
[23] E. Trucco and A. Verri. Introductory Techniques for 3-D Computer Vision. Prentice Hall, NJ, 1998.
[24] M. Uyttendaele, A. Criminisi, S. B. Kang, S. Winder, R. Hartley, and R. Szeliski. Image-based interactive exploration of real-world environments. IEEE Computer Graphics and Applications, 24(3), June 2004.
[25] W. Yu. Practical anti-vignetting methods for digital cameras. IEEE Transactions on Consumer Electronics, 50(4), Nov. 2004.
[26] W. Yu, Y. Chung, and J. Soh. Vignetting distortion correction method for high quality digital imaging. Proc. IEEE Int. Conf. on Pattern Recognition, Aug. 2004.

Figure 5. From the top: image mosaics of the original images, of images corrected by the method proposed in [9], of images corrected by the proposed method, of the original images blended using [4], and of images corrected by the proposed method and blended using [4]. Images are from the authors of [9]. (Reviewers: please view this figure on a monitor rather than in hard copy.)

Figure 6. Error histograms for the image sequence in Fig. 5 (from left to right): original images (rms error = 27.1243), images corrected by the method of [9] (rms error = 10.2710), and images corrected by the proposed method (rms error = 8.0988).

Figure 7. From the top: image mosaics of the original images, of the original images blended, of images corrected by the proposed method, and of the corrected images blended. Data from http://www.cs.ubc.ca/~mbrown/panorama/panorama.html. (Reviewers: please view this figure on a monitor rather than in hard copy.)