Panoramic Mosaicing with a 180° Field of View Lens


CENTER FOR MACHINE PERCEPTION, CZECH TECHNICAL UNIVERSITY

Panoramic Mosaicing with a 180° Field of View Lens
Hynek Bakstein and Tomáš Pajdla
{bakstein, pajdla}@cmp.felk.cvut.cz

REPRINT

Hynek Bakstein and Tomáš Pajdla, Panoramic Mosaicing with a 180° Field of View Lens, in Proceedings of the Omnidirectional Vision Workshop, pp. 60-68, June 2002. Copyright: IEEE Computer Society. Available at ftp://cmp.felk.cvut.cz/pub/cmp/articles/bakstein/bakstein-pajdla-omnivis22.pdf

Center for Machine Perception, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University, Technická 2, 166 27 Prague 6, Czech Republic, fax +420 2 2435 7385, phone +420 2 2435 7637, www: http://cmp.felk.cvut.cz

Panoramic Mosaicing with a 180° Field of View Lens

Hynek Bakstein    Tomáš Pajdla
Center for Machine Perception, Dept. of Cybernetics
Faculty of Electrical Eng., Czech Technical University
121 35 Prague, Czech Republic
{bakstein,pajdla}@cmp.felk.cvut.cz

Abstract

We present a technique for 360° x 360° mosaicing with a very wide field of view fish eye lens. Standard camera calibration is extended to lenses with a field of view larger than 180°. We demonstrate the calibration on a Nikon FC-E8 fish eye converter, which is an example of a low-cost lens with a 183° field of view. We illustrate the use of this lens in one application, the 360° x 360° mosaic, which provides a 360° field of view in both the vertical and the horizontal direction.

1. Introduction

There are many ways to enhance the field of view and obtain an omnidirectional sensor. These approaches include the use of mirrors [2, 6, 5], multicamera devices [13, 1], rotating cameras [19, 18, 7], lenses [4, 23, 18], or combinations of the previous methods [16]. The shape of the mirror determines its field of view, the mapping of the light rays [9, 12], and other features such as the single effective viewpoint [21, 1]. On the other hand, focusing of a lens is easier than focusing on a mirror and the resulting setup may be simpler.

We concentrate on the use of a special lens, the Nikon FC-E8 fish eye converter [8], which provides a FOV of 183°. This lens provides an omnidirectional image by itself, but we use it in a practical realization of a 360° x 360° mosaic [16], where the mosaic is composed by rotating an omnidirectional camera. The resulting mosaic then covers 360° in both the horizontal and the vertical direction. We mounted this lens on a Pulnix digital camera equipped with a standard 12.5 mm lens, as depicted in Figure 1. Our experiments also show that such a lens provides better results than the mirrors which were often used to build 360° x 360° mosaics [16], and the setup of the mosaicing camera is also simpler.

This work was supported by the following grants: MSM 212313, GACR 12/1/971, MSMT KONTAKT 21/9.

Figure 1. Nikon FC-E8 fish eye converter mounted on a Pulnix digital camera with a standard 12.5 mm lens.

For many computer vision tasks, the relationship between the light rays entering the camera and the pixels in the image has to be known. In order to find this relationship, the camera has to be calibrated. A suitable camera model has to be chosen for this task. It turns out that the pinhole camera model with a planar retina is not sufficient for sensors with a large FOV [14]. An image point (u, v) defines a light ray as the vector connecting the camera center with the image point lying on an image plane at a certain distance from the camera center, see Figure 2(a). This is a straightforward approach; however, it limits the field of view of the camera to less than 180°. Previous approaches to fish eye calibration used a planar retina and the pinhole model [3, 4, 22, 23]. In [20], a stereographic projection was employed, but the experiments were evaluated on lenses with a FOV smaller than 180°. We introduce a spherical retina, see Figure 2(b), and a method for calibration from a single image of one known 3D calibration target with iterative refinement of the parameters of our camera model with a spherical retina. The light rays emanate from the camera center and are determined by a radially symmetric mapping between the pixel coordinates (u, v) and the angle between the light ray and the optical axis of the camera, as depicted in Figure 2(b).

Figure 2. From image coordinates to light rays: (a) a directional and (b) an omnidirectional camera.

The main contribution of this work is the introduction of a proper omnidirectional camera model, i.e. the spherical retina, and the choice of a proper projection function, the radially symmetric mapping between the light rays and pixels, for one particular lens. In contrast to other methods [3, 4, 22, 23], we test our approach on a lens with an a priori unknown projection function. This lens is an off-the-shelf cheap lens and therefore it does not have to precisely follow any of the projection models listed in [14]. Moreover, this lens has a field of view larger than 180° and thus the standard camera model cannot be used.

In the next section, we introduce a camera model with a spherical retina. Then we discuss various models describing the relationship between the light rays and pixels in Section 3. Section 4 is devoted to the determination of this model for the case of the Nikon FC-E8 converter. A summary of the presented method is given in Section 5. Experimental results are presented in Section 6.

2 Camera Model

The camera model describes how a 3D scene is transformed into a 2D image. It has to incorporate the orientation of the camera with respect to some scene coordinate system and also the way the light rays in the camera centered coordinate system are projected into the image. The orientation is expressed by the extrinsic camera parameters, while the latter relationship is determined by the intrinsic parameters of the camera.

The intrinsic parameters can be divided into two groups. The first one includes the parameters of the mapping between the rays and ideal orthogonal square pixels. We will discuss these parameters in the next section. The second group contains the parameters describing the relationship between the ideal orthogonal square pixels and the real pixels of the image sensor. Let (u', v') denote the coordinates of a point in the image measured in an orthogonal basis, as shown in Figure 3. CCD chips often have a different spacing between pixels in the vertical and the horizontal direction. This results in images unequally scaled in the horizontal and vertical direction. This distortion causes circles to appear as ellipses in the image, as shown in Figure 3. Therefore, we introduce a parameter β representing the ratio between the scales of the horizontal and the vertical axis. A matrix expression of the distortion can be written in the following form:

K = | 1   0    u0  |
    | 0   β    βv0 |        (1)
    | 0   0    1   |

This matrix is a simplified intrinsic calibration matrix of a pinhole camera [11]. The displacement of the center of the image is expressed by the terms u0 and v0; the skewness of the image axes is neglected in our case, because cameras usually have orthogonal pixels.

Figure 3. A circle in the image plane is distorted due to the different scales of the image axes. Therefore we observe an ellipse instead of a circle in the image.
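To make the role of β and the image center concrete, here is a minimal Python sketch (ours, not part of the paper; all numeric values are made-up placeholders) that builds the matrix K of Equation (1) and applies it to an ideal orthogonal image point, which is exactly the matrix multiplication used later in Equation (6).

import numpy as np

# Hypothetical intrinsic values, for illustration only (not calibration results).
beta = 1.05            # ratio between the scales of the horizontal and vertical axis
u0, v0 = 320.0, 240.0  # image center in pixels

# Simplified intrinsic calibration matrix K from Equation (1).
K = np.array([[1.0, 0.0,  u0],
              [0.0, beta, beta * v0],
              [0.0, 0.0,  1.0]])

# An ideal orthogonal image point (u', v') in homogeneous coordinates.
u_ideal = np.array([10.0, -25.0, 1.0])

# Real pixel coordinates, Equation (6): u = K u'.
u_real = K @ u_ideal
print(u_real[:2])      # the unequal axis scaling is what turns circles into ellipses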
3 Projection Models

Models of the projection between the light rays and the pixels are discussed in this section. The most commonly used approach is to describe these models by a radially symmetric function that maps the angle θ between the incoming light ray and the optical axis to some distance r from the image center, see Figures 7(a) and 7(b). This function typically has one parameter k. As stated before, the perspective projection, which can be expressed as r = k tan θ, is not suitable for modeling cameras with a large FOV. Several other projection models exist [14]: the stereographic projection r = k tan(θ/2), the equidistant projection r = kθ, the equisolid angle projection r = k sin(θ/2), and the sine law projection r = k sin θ.

Figure 4 shows graphs of the above projection functions for the angle θ varying from 0 to 180 degrees. The vertical axis of the graph represents the value of the respective model function for the corresponding angle θ. All functions were scaled so that they have a value of 1 at θ = 50°. The figure illustrates how the projection functions develop with varying θ. It can be noticed that the perspective projection cannot cope with angles θ near 90°. It can also be noticed that most of the models can be approximated by the equidistant projection for small angles θ; however, as the FOV of the lens increases, the models differ significantly. In the next section we describe a procedure for selecting the appropriate model for the Nikon FC-E8 converter.

Figure 4. Values of the projection functions (perspective, stereographic, sine law, equisolid angle, equidistant) for the angle θ in the range of 0 to 180 degrees. All functions were scaled so that they have a value of 1 at θ = 50°.

4 Model Determination

The model describing the mapping between the pixels and the light rays differs from lens to lens. Some lenses are manufactured so that they follow a certain model; for other lenses this information is unavailable, which is the case of the Nikon FC-E8 converter. Also, the assumption that the light rays emanate from one point does not have to hold for some lenses, which requires additional model parameters determining the position of the ray origin. All of the above situations can be incorporated into our framework. We demonstrate the procedure on one particular lens.

In order to derive the projection model for the Nikon FC-E8, we investigated how light rays with a constant increment in the angle θ are imaged on the image plane. We performed the following experiment. The camera was observing a cylinder with circles seen by light rays with a known angle θ, as depicted in Figure 5(a). These circles correspond to an increment in the angle θ set to 5° for rays imaged to the peripheral parts of the image (θ = 90°..70°) and to 10° for the rays imaged to the central part of the image. Figure 5(b) shows the grid which, after wrapping around the cylinder, produced the circles. Figure 5(c) shows an image of this cylinder. It can be seen that the circles are imaged to approximate circles and that a constant increment in the angles results in a slowly increasing increment in the radii of the circles in the image. Note that the circles at the border have an angular distance of 5°, while the distance near the center is 10°. The camera has to be positioned so that its optical axis is identical with the rotational axis of the cylinder and the circle corresponding to θ = 90° must be imaged precisely. In our case we used the assumption of radial symmetry and the known field of view of the lens for manual positioning of the lens with respect to the calibration cylinder. This setup is sufficient for the model determination; however, for a full camera calibration, the parameters determining this positioning (rotation and translation with respect to the scene coordinate system) have to be included in the computation, as described in Section 5.

We fitted all of the models mentioned in the previous section to the detected projections of the light rays into the image. The model fit error was the Euclidean distance between the pixels observed in the image and the pixel coordinates predicted by the model function.
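The fitting step can be illustrated in a few lines of code. The sketch below is our own illustration with synthetic data, not the authors' implementation: it fits each one-parameter candidate model to measured (θ, r) pairs by least squares and reports the residual, which is the kind of comparison summarized later in Figure 6. The perspective model is omitted because it diverges near θ = 90°.

import numpy as np
from scipy.optimize import curve_fit

# Candidate radially symmetric projection models r(theta), each with one parameter k.
models = {
    "stereographic": lambda th, k: k * np.tan(th / 2.0),
    "equidistant":   lambda th, k: k * th,
    "equisolid":     lambda th, k: k * np.sin(th / 2.0),
    "sine law":      lambda th, k: k * np.sin(th),
}

# Synthetic measurements: ray angles theta (radians) and detected radii r (pixels).
theta = np.deg2rad(np.linspace(5.0, 90.0, 18))
r_meas = 250.0 * np.tan(theta / 2.1) + np.random.normal(0.0, 0.3, theta.size)  # made-up lens

for name, f in models.items():
    k_hat, _ = curve_fit(f, theta, r_meas, p0=[200.0])
    rms = np.sqrt(np.mean((f(theta, k_hat[0]) - r_meas) ** 2))
    print(f"{name:14s} k = {k_hat[0]:7.2f}   RMS fit error = {rms:.2f} px")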
The stereographic projection with two parameters,

r = a tan(θ/b),

provided the best fit, but there was still a systematic error, see Figure 6. Therefore, we extended the model, which resulted in a combination of the stereographic projection with the equisolid angle projection. This improved model has four parameters, see Equation (3), and provides the best fit with no systematic error, as depicted in Figure 6. This holds even when the parameters are estimated using only one half of the detected points and then used to predict the other half of the points; the prediction error was less than half a pixel in the worst case. An initial fit of the parameters is discussed in the following section.

5 Complete Camera Model

Under the above observations, we can formulate the model of the camera. Provided with a scene point X = (x, y, z)ᵀ, we are able to compute its coordinates X̃ = (x̃, ỹ, z̃)ᵀ in the camera centered coordinate system:

X̃ = RX + T,    (2)

where R represents a rotation and T stands for a translation. The standard rotation matrix R has three degrees of freedom and T is expressed by the vector T = (t1, t2, t3)ᵀ. Then the angle θ, see Figure 7(a), between the light ray through the point X̃ and the optical axis can be computed. This angle determines the distance r of the pixel from the center of the image:

r = a tan(θ/b) + c sin(θ/d),    (3)

where a, b, c, and d are the parameters of the projection model.

Figure 5. (a) Camera observing a cylinder with a calibration pattern (b) wrapped around the cylinder. Note that the lines correspond to light rays with an increment in the angle θ set to 5° (the bottom 4 intervals) and 10° (the 5 upper intervals). (c) Image of circles with radii set to a tangent of a constantly incremented angle; this results in concentric circles with an almost constant increment in radii in the image.

Figure 6. Model fit error (in pixels) as a function of the angle θ for the stereographic projection a tan(θ/b) and for the combined stereographic and equisolid angle projection a tan(θ/b) + c sin(θ/d).

Together with the angle ϕ between the light ray reprojected to the xy plane and the x axis of the camera centered coordinate system, the distance r is sufficient to calculate the pixel coordinates u' = (u', v', 1)ᵀ in an orthogonal image coordinate system, see Figure 7(b), as

u' = r cos ϕ,    (4)
v' = r sin ϕ.    (5)

Figure 7. (a) Camera coordinate system and its relationship to the angles θ and ϕ. (b) From polar coordinates (r, ϕ) to orthogonal coordinates (u', v').

In this case the vector u' does not represent a light ray from the camera center as in a pinhole camera model; it is just a vector augmented by 1 so that we can write an affine transform of the image points compactly by one matrix multiplication (6). The real pixel coordinates u = (u, v, 1)ᵀ are obtained as

u = Ku'.    (6)
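For reference, the whole chain (2)-(6) fits in one short function. The following Python sketch is our own illustration under assumed parameter values (identity pose and made-up lens constants), not the authors' code; it projects a scene point X into real pixel coordinates with the spherical-retina model.

import numpy as np

def project(X, R, T, a, b, c, d, beta, u0, v0):
    """Project a 3D scene point X into real pixel coordinates using Equations (2)-(6)."""
    Xc = R @ X + T                                     # (2) camera centered coordinates
    theta = np.arccos(Xc[2] / np.linalg.norm(Xc))      # angle between the ray and the optical axis
    phi = np.arctan2(Xc[1], Xc[0])                     # angle in the xy plane
    r = a * np.tan(theta / b) + c * np.sin(theta / d)  # (3) distance from the image center
    u_ideal = np.array([r * np.cos(phi),               # (4)
                        r * np.sin(phi),               # (5)
                        1.0])
    K = np.array([[1.0, 0.0,  u0],                     # (1)
                  [0.0, beta, beta * v0],
                  [0.0, 0.0,  1.0]])
    return (K @ u_ideal)[:2]                           # (6) real pixel coordinates

# Example with made-up parameters: identity pose and a roughly stereographic lens.
R, T = np.eye(3), np.zeros(3)
print(project(np.array([0.5, 0.2, 1.0]), R, T,
              a=250.0, b=2.0, c=0.0, d=1.0, beta=1.0, u0=320.0, v0=240.0))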

The complete camera model parameters, including the extrinsic and intrinsic parameters, can be recovered from measured coordinates of calibration points by minimizing

J(R, T, β, u0, v0, a, b, c, d) = Σ_{i=1}^{N} ||ũ_i − u_i||,    (7)

where ||·|| denotes the Euclidean norm, N is the number of points, ũ_i are the coordinates of the points measured in the image, and u_i are their coordinates reprojected by the camera model. A MATLAB implementation of the Levenberg-Marquardt [15] minimization was employed in order to minimize the objective function (7). The rotation matrix R and the translation vector T, see (2), both have three degrees of freedom. The image center, the scale ratio of the image axes β, and the four parameters of the mapping between the light rays and pixels (3) give 7 intrinsic parameters. This yields a total of 13 parameters of our model.

When minimizing the objective function (7), we initialize the image center to the center of the circle (ellipse) bounding the image, see Figure 5. This is possible because the Nikon FC-E8 lens is a so-called circular fish eye, where this circle is visible. Assuming that the mapping between the light rays and pixels (3) is radially symmetric, the center of this circle should lie approximately at the image center. The parameters of the model were initially set to an ideal stereographic projection, which means that b = 2, c = 0, d = 1, and a was initialized using the ratio between the coordinates of points corresponding to the light rays with the angle θ equal to 0 and 90 degrees. The value of the β parameter was initialized to 1. The initial camera position was set to the center of the scene coordinate system with the z axis coincident with the optical axis of the camera.
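The paper reports a MATLAB implementation of the Levenberg-Marquardt minimization [15]. The sketch below is our own rough equivalent using SciPy's least_squares with its "lm" method, which minimizes the sum of squared residuals, i.e. the usual least-squares variant of (7). The rotation-vector parameterization of R and all variable names are our choices, and the point arrays are assumed to come from the manual detection step.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(p, X_world, u_detected):
    """Stacked reprojection residuals for the objective (7).
    p = (rotation vector (3), translation (3), beta, u0, v0, a, b, c, d) -- 13 parameters."""
    rvec, T = p[0:3], p[3:6]
    beta, u0, v0, a, b, c, d = p[6:13]
    R = Rotation.from_rotvec(rvec).as_matrix()
    Xc = (R @ X_world.T).T + T                               # (2)
    theta = np.arccos(Xc[:, 2] / np.linalg.norm(Xc, axis=1))
    phi = np.arctan2(Xc[:, 1], Xc[:, 0])
    r = a * np.tan(theta / b) + c * np.sin(theta / d)        # (3)
    u = r * np.cos(phi) + u0                                 # (4) and (6)
    v = beta * (r * np.sin(phi) + v0)                        # (5) and (6)
    return np.concatenate([u - u_detected[:, 0], v - u_detected[:, 1]])

# X_world: N x 3 calibration points, u_detected: N x 2 detected pixels (from the experiments).
# Initialization as described above: stereographic start (b = 2, c = 0, d = 1), beta = 1,
# image center from the bounding circle of the fish eye image, pose at the scene origin.
# p0 = np.r_[np.zeros(3), np.zeros(3), 1.0, u0_init, v0_init, a_init, 2.0, 0.0, 1.0]
# sol = least_squares(residuals, p0, method="lm", args=(X_world, u_detected))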
6 Experimental Results

We performed two calibration experiments. In the first experiment, the calibration points were located on a cylinder around the optical axis and the camera was looking down into that cylinder, see Figure 5(a). The points had the same depth for the same value of θ. The second experiment employed a 3D calibration object with points located on a half cylinder. The object was realized such that a line of calibration points was rotated on a turntable, as depicted in Figure 8. Here, the points with the same angle θ had different depths.

Figure 8. (a) Experimental setup for the half cylinder experiment. (b) One of the images. (c) The calibration target is located 90° to the left of the camera; note the significant distortion.

The first experimental setup was also used to determine the projection model, as described in Section 4. A total of 72 points was detected manually. One half of the circles of points was used for the estimation of the parameters, while the second half was used to compute the reprojection errors. A similar approach was used in the second experiment, where the number of calibration points was 285. Again, all points were detected manually.

Figure 9 shows the reprojection of the points, computed with the parameters estimated during the calibration, compared with their coordinates detected in the image. The lines representing the errors between the respective points are scaled 20 times to make the distances clearly visible. The same error is shown in Figure 10 for all the points. It can be noticed that the error is small compared to the precision of the manual detection, where the images of some lines spanned several pixels while others were too far to be imaged as continuous circles, see Figure 5(c). Therefore we performed another experiment, where the calibration points were checkerboard patterns.

Similar graphs illustrate the results of the second experiment. Figure 11 shows the comparison between the reprojected points and their coordinates detected in the image. Again, the lines representing the distance between these two sets of points are scaled 20 times. Figure 12(a) depicts this reprojection error for each calibration point. Note that the error is bigger for points in the corners of the image, which is natural, since the resolution there is higher and therefore one pixel corresponds to a smaller change in the angle θ.

To verify the randomness of the reprojection error, we performed the following test. Because the points in the image were detected manually, we suppose that the detection error has a normal distribution in both image axes.

Therefore, a sum of squares of these errors, normalized to unit variance, should be described by a χ² distribution [17]. Figure 12(b) shows a histogram of the detection errors together with a graph of the χ² density. Note that the χ² distribution describes the calibration error distribution well.

Figure 9. Reprojection of points for the cylinder experiment. The distances between the reprojected and the detected points are scaled 20 times.

Figure 10. Reprojection error for each point for the cylinder experiment.

Figure 11. Reprojection of points for the half cylinder experiment. The distances between the reprojected and the detected points are scaled 20 times.

Figure 12. (a) Reprojection error for each point for the half cylinder experiment. (b) Histogram of the sum of squares of the normalized detection errors together with a χ² density marked by the curve.

Finally, we show that we are able to select the pixels in the image which correspond to the light rays lying in one plane passing through the camera center. The angle θ between these rays and the optical axis equals π/2 and, because this situation is circularly symmetric, the corresponding pixels should form a circle centered at the image center (u0, v0) obtained by minimizing (7). The radius of the circle is determined from (3) for θ = π/2 and the parameters a, b, c, and d obtained by minimizing (7). Due to the difference in scale of the image axes β, see Equation (1), the pixels actually form an ellipse, whose center again corresponds to the image center. As noted before, these light rays lie in one plane, which is crucial for the employment of the proposed sensor in a realization of the 360° x 360° mosaic [16]. The selection of the proper pixels (the ellipse) ensures that corresponding points in the mosaic pair lie on the same image rows, which simplifies the correspondence search algorithms.

There are two possible approaches to selecting the light rays with a specific angle θ. The one originally proposed in [16] uses mirrors, see Figure 13(a). The camera-mirror rig must be set up very precisely to get reliable results. Moreover, focusing on the mirror is not easy, because one has to focus on a virtual scene, not on the mirror, nor on the real scene. Therefore, we propose another approach employing optics with a FOV larger than 180°, depicted in Figure 13(b). Figure 14 shows the right and the left eye mosaics, respectively. Note the significant disparity of objects in the scene. Enlarged parts of the mosaics showing one corresponding point can be found in Figures 15(a) and 15(c) for the right mosaic and Figures 15(b) and 15(d) for the left mosaic.
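Given the calibrated parameters, the ring of pixels used for the mosaic can be generated directly. The short sketch below (our illustration with placeholder parameter values, not from the paper) samples the ellipse of pixels whose rays satisfy θ = π/2, following Equation (3) and the axis scaling of Equation (1).

import numpy as np

# Placeholder calibration results, for illustration only.
a, b, c, d = 250.0, 2.0, 5.0, 1.0
beta, u0, v0 = 1.05, 320.0, 240.0

# Radius for rays perpendicular to the optical axis: Equation (3) with theta = pi/2.
theta = np.pi / 2.0
r = a * np.tan(theta / b) + c * np.sin(theta / d)

# Sample the ring of pixels; the scale ratio beta turns the circle into an ellipse.
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
u = r * np.cos(phi) + u0
v = beta * (r * np.sin(phi) + v0)
ring = np.stack([u, v], axis=1)   # pixels whose light rays lie in the plane theta = pi/2
print(ring[:3])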

These figures represent the worst case, i.e. the corresponding point pair with the largest deviation from the ideal situation in which the corresponding points lie on the same image row. Figures 15(a) and 15(b) were acquired using a conical mirror observed by a telecentric lens, while Figures 15(c) and 15(d) come from a mosaic acquired with the Nikon FC-E8 lens. Notice that in the upper row, showing the images taken with the mirror, the images are more blurry and the points do not lie on the same image row; this difference is quite significant, although some other points actually were on the same image row. This is in contrast with the images in the bottom row, taken with the Nikon FC-E8 converter, where all corresponding points lie on the same image row and the images are sharper. This condition is satisfied for all pixels with a tolerance smaller than 0.5 pixel.

Figure 13. Two possible realizations of the 360° x 360° mosaic: (a) a telecentric camera and a conical mirror, (b) a central camera with the Nikon fish eye converter.

Figure 14. Right (upper) and left (lower) eye mosaics.

7 Conclusion

We have proposed a camera model for lenses with a FOV larger than 180°. The model is based on a spherical retina and a radially symmetric mapping between the incoming light rays and the pixels in the image. We proposed a method for identification of the mapping function, which led to a combination of two mapping functions. A complete calibration procedure, involving a single image of a 3D calibration target, was then presented. Finally, we demonstrated the theory in two experiments and one application, all using the Nikon FC-E8 fish eye converter. We believe that the ability to correctly describe and calibrate the Nikon FC-E8 fish eye converter opens the way to many new applications of very wide angle lenses.

References

[1] P. Baker, C. Fermüller, Y. Aloimonos, and R. Pless. A spherical eye from multiple cameras (makes better models of the world). In A. Jacobs and T. Baldwin, editors, Proceedings of the CVPR 2001 conference, volume 1, pages 576-583, Los Alamitos, CA, USA, Dec. 2001. IEEE Computer Society.
[2] S. Baker and S. K. Nayar. A theory of single-viewpoint catadioptric image formation. International Journal of Computer Vision, 35(2):175-196, 1999.
[3] A. Basu and S. Licardie. Alternative models for fish-eye lenses. Pattern Recognition Letters, 16(4):433-441, 1995.
[4] S. S. Beauchemin, R. Bajcsy, and G. Givaty. A unified procedure for calibrating intrinsic parameters of fish-eye lenses. In Vision Interface (VI'99), pages 272-279, May 1999.
[5] R. Benosman, E. Deforas, and J. Devars. A new catadioptric sensor for the panoramic vision of mobile robots. In IEEE Workshop on Omnidirectional Vision (OMNIVIS 2000), Hilton Head, South Carolina, pages 112-116, June 2000.
[6] A. M. Bruckstein and T. J. Richardson. Omniview cameras with curved surface mirrors. In IEEE Workshop on Omnidirectional Vision (OMNIVIS 2000), Hilton Head, South Carolina, pages 79-84, June 2000.
[7] J. Chai, S. B. Kang, and H.-Y. Shum. Rendering with non-uniform approximate concentric mosaics. In Second Workshop on Structure from Multiple Images of Large Scale Environments, SMILE 2000, July 2000.
[8] Nikon Corp. Nikon WWW pages: http://www.nikon.com, 2000.
[9] S. Derrien and K. Konolige. Approximating a single effective viewpoint in panoramic imaging devices. In IEEE Workshop on Omnidirectional Vision (OMNIVIS 2000), Hilton Head, South Carolina, pages 85-90, June 2000.
[10] C. Geyer and K. Daniilidis. A unifying theory for central panoramic systems and practical implications. In D. Vernon, editor, European Conference on Computer Vision ECCV 2000, Dublin, Ireland, volume 2, pages 445-462, June-July 2000.
[11] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge, UK, 2000.

Figure 15. Detail of a corresponding pair of points, (a) and (c) in the right mosaic and (b) and (d) in the left mosaic, representing the largest difference from the ideal case in which the corresponding points lie on the same image row. The upper row shows the worst case acquired using the mirror, the bottom row the Nikon FC-E8 fish eye converter. Note the blurred images and that the points do not lie on the same image row in the case of the mirror, while the lens provides focused and aligned images.

[12] R. A. Hicks and R. Bajcsy. Catadioptric sensors that approximate wide-angle perspective projections. In IEEE Workshop on Omnidirectional Vision (OMNIVIS 2000), Hilton Head, South Carolina, pages 97-103, June 2000.
[13] H. Hua and N. Ahuja. A high-resolution panoramic camera. In A. Jacobs and T. Baldwin, editors, Proceedings of the CVPR 2001 conference, volume 1, pages 960-967, Los Alamitos, CA, USA, Dec. 2001. IEEE Computer Society.
[14] M. M. Fleck. Perspective projection: the wrong imaging model. Technical Report TR 95-01, Computer Science, University of Iowa, 1995.
[15] J. Moré. The Levenberg-Marquardt algorithm: Implementation and theory. In G. A. Watson, editor, Numerical Analysis, Lecture Notes in Mathematics 630, pages 105-116. Springer Verlag, 1977.
[16] S. K. Nayar and A. Karmarkar. 360 x 360 mosaics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Hilton Head, South Carolina, volume 2, pages 388-395, June 2000.
[17] A. Papoulis. Probability and Statistics. Prentice-Hall, 1990.
[18] S. Peleg and M. Ben-Ezra. Stereo panorama with a single camera. In IEEE Conference on Computer Vision and Pattern Recognition, pages 395-401, June 1999.
[19] H.-Y. Shum, A. Kalai, and S. M. Seitz. Omnivergent stereo. In Proc. of the International Conference on Computer Vision (ICCV'99), Kerkyra, Greece, volume 1, pages 22-29, September 1999.
[20] D. E. Stevenson and M. M. Fleck. Robot aerobics: Four easy steps to a more flexible calibration. In International Conference on Computer Vision, pages 34-39, 1995.
[21] T. Svoboda, T. Pajdla, and V. Hlaváč. Epipolar geometry for panoramic cameras. In H. Burkhardt and B. Neumann, editors, the fifth European Conference on Computer Vision, Freiburg, Germany, number 1406 in Lecture Notes in Computer Science, pages 218-232, Berlin, Germany, June 1998.
[22] R. Swaminathan and S. Nayar. Non-metric calibration of wide-angle lenses. In DARPA Image Understanding Workshop, pages 179-184, 1998.
[23] Y. Xiong and K. Turkowski. Creating image based VR using a self-calibrating fisheye lens. In IEEE Computer Vision and Pattern Recognition (CVPR'97), pages 237-243, 1997.