ON THE CREATION OF PANORAMIC IMAGES FROM IMAGE SEQUENCES

Petteri PÖNTINEN
Helsinki University of Technology, Institute of Photogrammetry and Remote Sensing, Finland
petteri.pontinen@hut.fi

KEY WORDS: Cocentricity, Panorama, Projective, Transformation, Least Squares.

ABSTRACT

This paper considers the creation of panoramic images from image sequences. The main guideline is to create physically correct panoramic images, where the focusing surface is a cylinder and all the light rays from the object to the focusing surface are straight and cross on the axis of this cylinder. This requires that the images of the sequence have a common projection center. The camera parameters and the relative orientations of the images of the sequence also need to be known. If these conditions are fulfilled, the resulting panoramic images will be consistent.

1 INTRODUCTION

Panoramic images have a history of more than 150 years. One of the pioneers of panoramic imaging, Joseph Puchberger from Austria, patented his swing lens panoramic camera as early as 1843 (IAPP, 1999). Many other inventors in different countries all over the world were also working with panoramic imaging around that time. Because most of them worked independently, their devices were quite different. The basic solutions, however, were similar: either very wide angle optics, a swinging lens or a rotating camera. The first devices were hand-driven and the first panoramic images were exposed on curved glass plates.

The application areas of panoramic images vary from art to aerial surveillance. Artists and photographers are probably the biggest user groups of panoramic images. People who work with virtual environments also utilize panoramic imaging. There have been some studies on the use of panoramic images in photogrammetry (Antipov and Kivaev, 1984, Hartley, 1993), but the topic hasn't been very popular among photogrammetrists in general.

The old panoramic techniques are still in use, but modern technology also presents other possibilities. One alternative, considered in this paper, is to create panoramic views from digital image sequences. The main guideline is to create physically correct panoramic images, where the focusing surface is a cylinder and all the light rays from the object are straight and cross on the axis of this cylinder. It is not enough to just stitch the adjacent images together so that the result looks nice.

The panoramic camera model is introduced briefly in section 2, together with some general demands for the image sequence. The procedures for combining the images are presented in section 3. Two alternative ways are considered: the first is based on the rotations between the images and the second on the two-dimensional projective transformations between the images. Section 4 presents an example and section 5 contains the conclusions.

2 PANORAMIC IMAGE

2.1 Panoramic camera model

The main feature of a panoramic camera is its wide field of view, usually more than 90 degrees. In spite of the different constructions of different panoramic cameras, they can all be modelled as a camera with a cylindrical focal surface (Hartley, 1993) and a projection center that lies on the axis of the cylinder. Figure 1 illustrates the major difference between a standard camera and a panoramic camera. The field of view of a standard camera never exceeds 180 degrees and is usually much less. This means that with panoramic techniques the object can be photographed from a shorter distance.
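To make the cylindrical model concrete, the following sketch maps a point from a planar focal surface onto the cylindrical one of Figure 1. It is only an illustration of the geometry, not code from the paper: the centered image coordinates, the camera constant c and the cylinder radius r follow the notation used later in sections 2.2 and 3, and a vertical cylinder axis through the projection center is assumed.

```python
import numpy as np

def plane_to_cylinder(x, y, c, r):
    """Map a centered image point (x, y) on a planar focal surface to
    cylindrical panorama coordinates (u, v).

    x, y : image coordinates with the principal point at the origin
    c    : camera constant (principal distance)
    r    : radius of the image cylinder (a free choice)

    The cylinder axis is assumed vertical and passing through the
    projection center; u is arc length along the cylinder, v is height.
    """
    theta = np.arctan2(x, c)        # horizontal angle of the image ray
    u = r * theta                   # arc length on the cylinder surface
    v = r * y / np.hypot(x, c)      # height where the ray meets the cylinder
    return u, v
```

Because u grows linearly with the ray angle rather than with its tangent, the cylindrical surface can record a field of view approaching 360 degrees, which a single image plane cannot.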

[Figure 1: Two projections. On the left the focusing surface is a plane and on the right a cylinder.]

2.2 Demands for a correct panoramic image

If the panoramic image is constructed from an image sequence by combining adjacent images, some conditions must be fulfilled. One of the conditions is that the sequence must be cocentric, which means that the camera must be rotated around its projection center. If this condition is not fulfilled (i.e. if the projection center moves during the camera rotation), the adjacent images have been taken from different viewpoints, so different things are visible in their overlapping areas. This makes the combining of the images in principle impossible.

The first thing to do is to mount the camera on a tripod so that it rotates around its projection center. So-called pano-heads are commercially available for this purpose for certain camera and lens combinations. The same thing can be done, for example, with the help of a theodolite and a special rotation tool that allows the camera to be moved freely in two dimensions on a tripod (see Figure 2). The rotation axis of the tool and the vertical axis of the theodolite should coincide when they are mounted on the tripod. The procedure is as follows:

1. Using the theodolite and four poles, construct two lines that intersect on the vertical axis of the theodolite (see Figure 3).
2. Replace the theodolite with the camera mounted on the rotation tool. Now the two lines intersect on the rotation axis of the rotation tool.
3. Move the camera on the rotation tool so that the poles seem to be in line (see Figure 4). This moves the projection center to the rotation axis.

[Figure 2: Camera mounted on a special rotation tool.]
[Figure 3: Arrangement for the camera adjustment. The two lines intersect on the vertical axis of the theodolite and also on the rotation axis of the rotation tool.]
[Figure 4: Views through the camera. When the poles are in line (right) the correct position has been found.]

Another condition that has to be fulfilled is that the camera parameters (camera constant, principal point coordinates and lens distortions) must be known. Otherwise the original shapes of the bundles of image rays are not known and the creation of a correct panoramic image is impossible. The values of the camera parameters can be found by calibration. The principal point coordinates can also be derived directly from the cocentric images (Hartley, 1994).
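As an illustration of the resampling step implied by known lens distortions, the sketch below uses the common two-term polynomial radial model. The paper does not state which distortion model or coefficients were used, so both the model and the names k1 and k2 are assumptions made here.

```python
def ideal_to_distorted(x, y, k1, k2):
    """Map ideal (distortion-free) centered image coordinates to the
    distorted coordinates at which the camera actually recorded them.

    k1, k2 : radial distortion coefficients from camera calibration
             (illustrative two-term polynomial model, an assumption here)

    To build a distortion-free image, loop over its pixels, convert them
    to centered coordinates, and sample the original image at the returned
    (distorted) positions, e.g. with bilinear interpolation.
    """
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f
```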

3 COMBINING OF IMAGES

This section introduces two alternative methods for combining the single images of the sequence. The first method is based on the rotations between the adjacent images and the second on the two-dimensional projective transformations between the images.

3.1 Combining based on rotations

Because the image sequence is assumed to be cocentric, only the three rotations between the images need to be solved. Although the projection center is fixed, the three rotations allow more or less free trajectories for the images. One way to solve the orientation problem is to force the camera to rotate along a known trajectory with predefined angles. This often requires extra equipment and work. A more convenient way is to solve the unknown rotations based on the images themselves. This task is not too difficult thanks to the assumed cocentricity. The corresponding image vectors, defined by the projection center and the corresponding image points, must point in the same direction, as shown in Figures 5 and 6. This means that

$\mu\,\mathbf{b} = R\,\mathbf{a}$   (1)

where $\mathbf{a}$ and $\mathbf{b}$ are the corresponding image vectors, $\mu$ is a scale factor and $R$ is the unknown rotation matrix.

[Figure 5: Two corresponding image vectors.]
[Figure 6: The corresponding image vectors rotated so that they point to the same direction.]

It is quite obvious that only two corresponding image vectors are needed to fix the three rotations. The whole overlapping area can also be used to determine the rotation angles $\omega$, $\varphi$ and $\kappa$. The idea is to have

$g_1(x_1, y_1) = g_2(x_2, y_2)$   (2)

where $g_1$ and $g_2$ are the gray values on the different images and $(x_1, y_1)$ and $(x_2, y_2)$ are the centered image coordinates of the corresponding points. The connection between equations (1) and (2) is

$\mathbf{a} = (x_1, y_1, -c)^T, \quad \mathbf{b} = (x_2, y_2, -c)^T$   (3)

where $c$ is the camera constant. Using the least squares principle, the optimal rotation matrix, which minimizes the squared sum of gray value differences in corresponding points, can be found (a sketch follows below).

If the relative rotations of the images of the sequence are known, the creation of a panoramic image is simple. The gray values of the individual images just have to be projected to a chosen cylinder surface along the relatively oriented image rays. The radius of the cylinder can be chosen freely, but its axis must go through the projection center.

Lens distortions, non-cocentricity of the sequence, errors in the camera constant and the principal point coordinates, and errors in the orientation parameters cause inconsistencies in the resulting image. If the overlapping areas are averaged from the source images, the errors can be seen as a blurring of these areas. Tests with synthetic images showed that the consistency was more sensitive to errors in the orientation parameters than to errors in the camera parameters. For example, an error of 10% in the camera constant had no clear influence, but an error of only 1% in the rotations was already enough to blur the overlapping area.
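As an illustration of this least-squares solution, the following sketch parametrizes the rotation with the three angles and minimizes the gray-value residuals of equation (2) over the overlap. It is a minimal sketch under stated assumptions, not the author's program: the angle parametrization, the generic scipy solver, bilinear sampling and equal image sizes are all choices made here.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import least_squares

def rotation(omega, phi, kappa):
    """Rotation matrix from three angles (one common parametrization;
    the paper does not state which it used)."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def refine_rotation(img1, img2, c, x1, y1, angles0):
    """Refine the three rotation angles by minimizing the squared sum of
    gray-value differences (eq. 2) over overlap points (x1, y1) of img1.

    img1, img2 : gray-value arrays, assumed here to be the same size
    c          : camera constant in pixels
    x1, y1     : centered coordinates (floats) of pixels in the overlap
    angles0    : initial angles, e.g. from two corresponding points
    """
    h, w = img1.shape
    a = np.stack([x1, y1, np.full(x1.shape, -float(c))])  # image vectors, eq. (3)
    g1 = map_coordinates(img1, [y1 + h / 2, x1 + w / 2], order=1)

    def residuals(angles):
        b = rotation(*angles) @ a          # eq. (1): rotate the image vectors
        x2 = -c * b[0] / b[2]              # rescale so the third component is -c
        y2 = -c * b[1] / b[2]
        g2 = map_coordinates(img2, [y2 + h / 2, x2 + w / 2], order=1)
        return g1 - g2                     # gray-value differences of eq. (2)

    return least_squares(residuals, angles0).x
```

Two well-distributed corresponding points would already fix the three angles, as noted above; using every pixel of the overlap simply turns the same condition into an overdetermined least-squares problem.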

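With the rotations solved, the projection to the cylinder surface can be sketched in the same spirit. This is again only an illustration: a vertical cylinder axis through the projection center, the image-vector convention of equation (3) and nearest-neighbour sampling are assumed; a real implementation would interpolate and average the overlapping gray values as described above.

```python
import numpy as np

def project_to_cylinder(img, R, c, r, pano_w, pano_h, fov=2 * np.pi):
    """Resample one relatively oriented image onto a cylindrical panorama.

    img : 2-D gray-value array, principal point assumed at the image center
    R   : rotation of the image relative to the panorama frame (eq. 1)
    c   : camera constant in pixels
    r   : radius of the chosen cylinder, in pixels; its axis passes
          through the projection center, as required
    """
    pano = np.full((pano_h, pano_w), np.nan)
    h, w = img.shape
    thetas = (np.arange(pano_w) / pano_w - 0.5) * fov   # ray angle per column
    heights = np.arange(pano_h) - pano_h / 2.0          # cylinder heights (pixels)
    for j, th in enumerate(thetas):
        # rays from the projection center through one panorama column,
        # expressed in the panorama frame (viewing direction along -z)
        d = np.stack([np.full_like(heights, np.sin(th)),
                      heights / r,
                      np.full_like(heights, -np.cos(th))])
        dc = R.T @ d                        # the same rays in the camera frame
        vis = dc[2] < 0                     # keep rays in front of the camera
        x = -c * dc[0, vis] / dc[2, vis] + w / 2.0   # centered -> pixel coords
        y = -c * dc[1, vis] / dc[2, vis] + h / 2.0
        ok = (x >= 0) & (x < w) & (y >= 0) & (y < h)
        rows = np.where(vis)[0][ok]
        pano[rows, j] = img[y[ok].astype(int), x[ok].astype(int)]  # nearest pixel
    return pano
```

Calling this once per image of the sequence with its own rotation, and averaging the non-empty overlaps, reproduces the compositing step described above.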
3.2 Combining based on two-dimensional projective transformation

A picture taken with a traditional camera is in principle a central projection of the target. How the picture looks depends on the location of the image plane relative to the projection center (see Figure 7). If two different planes intersect the same image rays, there is a correspondence between the image point coordinates. The correspondence is formulated as (Wang, 1990)

$x' = \dfrac{a_1 x + a_2 y + a_3}{c_1 x + c_2 y + 1}$   (4)

$y' = \dfrac{b_1 x + b_2 y + b_3}{c_1 x + c_2 y + 1}$   (5)

where $(x, y)$ and $(x', y')$ are the image coordinates on the different planes and $a_1, a_2, a_3, b_1, b_2, b_3, c_1, c_2$ are the eight transformation parameters. These parameters can be solved if the image coordinates of at least four corresponding points are known on both planes and if no three of the points lie on the same line. After the parameters have been solved, any image point can be transformed to the other plane. Instead of using a set of points, the whole overlapping area can be utilized to determine the transformation parameters, as in the previous subsection. The initial transformation parameters can be solved using the coordinates of four corresponding points and then adjusted by least squares so that the sum of squared gray level differences in corresponding points is minimized.

[Figure 7: Two image planes intersecting the same image rays.]

If the images are cocentric and have sufficient overlap, they can be combined into one image using the two-dimensional projective transformation. One of the images can be chosen as a reference image and the other images transformed to it. The combined image can then be projected to a chosen cylinder surface. If the camera has been rotated very much (in the extreme case over 360 degrees), all the images cannot be transformed to one reference image, because the combined image grows, in the worst case infinitely (see Figure 8). In that case, the panoramic image must be created in stages. In the first stage, the reference image is chosen and two or three images are transformed to it. After that, the combined image is projected to a cylinder surface. In the next stage, a new reference image is made by projecting part of the cylindrical image back to a plane. Then the next two or three images are combined to the new reference image and the result is projected to the previously chosen cylinder. If there are more images to be projected, a new reference image is created and the procedure is repeated. This continues until all the images are on the surface of the cylinder.

[Figure 8: The size of the combined image depends on the rotation between the images.]

If the created panoramic image covers over 360 degrees, the perimeter of the cylinder (i.e. the distance between the same point on the different ends of the image) should be $2\pi r$, where $r$ is the radius of the chosen cylinder. If the perimeter differs from this, it indicates that the camera constant used has been erroneous (assuming that no other errors affect the image simultaneously).

4 AN EXAMPLE

Figure 9 shows three images of a workshop. They were taken with an Olympus Camedia C-1400 L digital camera. The image size was 1280x1024 pixels. The camera was calibrated using a testfield and the lens distortions were eliminated by resampling the images (see Figure 10). As can be seen, the images overlap by approximately 50%. The corners of the overlap areas of the images were given as source data to a software program which solved the eight transformation parameters between the images.
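A linear solution for the eight parameters of equations (4) and (5) can be sketched as follows: multiplying both equations by their common denominator makes them linear in the unknowns. This is a generic direct solution, not the software used in the paper; with exactly four points the least-squares solve reduces to an exact one, and the result would then serve as the initial values for the gray-value refinement described in section 3.2.

```python
import numpy as np

def solve_projective(src, dst):
    """Solve the eight parameters of equations (4) and (5) from point
    correspondences (at least four, no three on a line).

    src, dst : (n, 2) arrays of corresponding image coordinates
    Returns (a1, a2, a3, b1, b2, b3, c1, c2).
    """
    A, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # Multiplying eq. (4) by its denominator gives
        # a1*x + a2*y + a3 - xp*c1*x - xp*c2*y = xp, linear in the
        # parameters; eq. (5) is handled the same way.
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); rhs.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); rhs.append(yp)
    p, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(rhs, float),
                            rcond=None)
    return p

def apply_projective(p, x, y):
    """Transform a point with equations (4) and (5)."""
    a1, a2, a3, b1, b2, b3, c1, c2 = p
    den = c1 * x + c2 * y + 1.0
    return (a1 * x + a2 * y + a3) / den, (b1 * x + b2 * y + b3) / den
```

Applying apply_projective to the coordinates of every pixel of one image warps it onto the plane of the reference image, after which the gray values of the overlaps can be averaged.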

[Figure 9: The original three images.]
[Figure 10: The images after the lens distortion corrections.]
[Figure 11: The left and right images combined to the middle image.]

[Figure 12: A zoomed detail of the middle source image.]
[Figure 13: A zoomed detail of the combined image.]
[Figure 14: Zoomed details of the original image (on the left), the panoramic image where lens distortions were eliminated (in the middle) and the panoramic image where lens distortions were not eliminated (on the right).]
[Figure 15: The combined image projected to a cylinder surface.]

The iterative calculation converged nicely and the result of the combination can be seen in Figure 11. The calculations took about 20 minutes on a 200 MHz computer with 32 MB of memory. The calculation time can easily be reduced by using fewer pixels, for example every second or third pixel, for determining the transformation parameters. The gray values of the overlapping areas were averaged from the two source images.

Figure 12 shows one detail from the middle source image. Figure 13 shows the same detail grabbed from the combined image of Figure 11. The brightnesses of the original images were different, which is why the image edges are visible in the combined image. Otherwise the result is satisfactory: there are no discontinuities or blurring.

Because the eight-parameter projective transformation was used in this example, the camera constant and principal point coordinates were not needed for the combination of the images, only for the correct projection to a cylinder surface. This means that errors in the camera constant and principal point coordinates do not cause any blurring. Neglecting the lens distortions, in contrast, does cause blurring. Figure 14 shows three zoomed pictures of roughly the same detail. The picture on the left is from one of the original images, the one in the middle is a part of the panoramic image where lens distortions were taken into account, and the one on the right is a part of the image where distortions were neglected. As can be seen, the quality of the last image is clearly worse than that of the other two. The difference between the first two pictures is quite small, although the picture in the middle has gone through three interpolations and one averaging. The projection to a cylinder surface is shown in Figure 15.

5 CONCLUSIONS

This paper has described how to make panoramic images from cocentric image sequences so that the central projections of the original images are preserved. It has been shown that only the camera parameters and sufficient overlap between the images are needed. The two combination methods presented here are based on the fact that the image sequence is cocentric. The first method solves the relative rotations of the images and then projects the images to a cylinder surface. The second method doesn't solve the rotations explicitly; instead, it combines the images using the two-dimensional projective transformation before the projection to a cylinder surface. Both the rotations and the two-dimensional transformation parameters can be derived from the overlapping areas of the images. When the camera parameters used were correct, the resulting panoramic image was consistent. The use of the whole overlapping area ties the images strongly together. The bigger the overlap the better, but a bigger overlap also means that more images are needed for a certain view. One interesting question, to be studied in the near future, is whether it is also possible to solve all the camera parameters during the panoramic image creation process.

REFERENCES

Hartley, R., 1993. Photogrammetric techniques of panoramic cameras. SPIE Proceedings, Integrating Photogrammetric Techniques with Scene Analysis and Machine Vision, Vol. 1944, Orlando, USA.

Hartley, R., 1994. Self-calibration from multiple views with a rotating camera. In: J.-O. Eklund (ed.), Lecture Notes in Computer Science, Computer Vision - ECCV 94, Vol. 800, Springer-Verlag, Berlin Heidelberg.
IAPP, 1999. International Association of Panoramic Photographers, http://www.panphoto.com.

Antipov, I. T., Kivaev, A. I., 1984. Panoramic photographs in close range photogrammetry. International Archives of Photogrammetry and Remote Sensing, Vol. XXV, Part A5, Rio de Janeiro, Brazil.

Wang, Z., 1990. Principles of Photogrammetry (with Remote Sensing). Press of Wuhan Technical University of Surveying and Mapping, Publishing House of Surveying and Mapping, Beijing.