DEVELOPMENT AND APPLICATION OF AN EXTENDED GEOMETRIC MODEL FOR HIGH RESOLUTION PANORAMIC CAMERAS

D. Schneider, H.-G. Maas
Dresden University of Technology, Institute of Photogrammetry and Remote Sensing
Mommsenstr. 13, 01062 Dresden, Germany
danilo.schneider@mailbox.tu-dresden.de, hmaas@rcs.urz.tu-dresden.de

Commission V, WG V/4

KEY WORDS: Panoramic camera, High resolution, Geometric modelling, Bundle adjustment, Three-dimensional model

ABSTRACT:

Digital panoramic photography has become a popular tool to record landscapes, city squares or indoor scenes in a single image with a full 360° view. In photogrammetric applications, a digital panoramic camera may present an interesting alternative to conventional image acquisition techniques such as large-format film or plate-based cameras. Advantages of such cameras are their very high resolution, which allows the recognition of fine object detail, and the possibility of recording panorama-like object geometries such as city squares or indoor scenes with only a few images. This paper describes and investigates a strict mathematical model for rotating line panoramic cameras, developed at the Institute of Photogrammetry and Remote Sensing of the Dresden University of Technology. The model accuracy was improved considerably through additional parameters, which describe deviations from the basic geometric model of the cylinder projection. Furthermore, the mathematical model was successfully implemented in different photogrammetric data processing routines, such as a self-calibrating bundle adjustment of panoramic image data. Based on this work, the combination of rotating line panoramic cameras and the mathematical model constitutes a very suitable tool for precise 3D modelling of indoor scenes, city squares or long façades, in combination with the analysis of very high resolution texture information. The paper concentrates on results of a self-calibrating bundle adjustment of panoramic image data using the developed geometric model. Furthermore, some examples of high-resolution 3D models generated with the panoramic camera EYESCAN M3 will be presented, as well as other applications derived from the geometric model, such as epipolar line geometry for stereo or multi-image matching.

Fig. 1: Panorama of Theaterplatz, Dresden (camera: KST EYESCAN M3, image format: 53,800 × 10,200 pixels; further panoramas in Schneider, 2004)

1. INTRODUCTION

Stereoscopic image acquisition of indoor scenes or city squares with conventional cameras may be rather laborious, since many images must be captured to achieve sufficient overlap for the subsequent analysis. Rotating line cameras (digital panoramic cameras) may therefore present an interesting alternative to conventional methods, because panorama-like object geometries can be captured completely with only a few images, which at the same time offer a very high resolution (Tecklenburg & Luhmann, 2003). Analogue panoramic cameras have existed for a fairly long time, but they were primarily used for purely photographic purposes.

Most photogrammetric imaging techniques are based on the central perspective principle. The geometry of panoramic images deviates from the central perspective, as the image data is projected onto a cylinder. To use this kind of imagery, it was necessary to establish a generic geometric model for digital panoramic cameras (Schneider & Maas, 2003a). Based on knowledge of the mechanical and optical properties of the camera, the model was successively extended by additional parameters. The mathematical model was initially implemented in a spatial resection and tested with the camera EYESCAN M3, made by KST (Kamera & System Technik, Dresden) in a joint venture with the German Aerospace Centre (DLR). Information about the EYESCAN camera can also be found in (Scheibe et al., 2001). Based on the geometric model, a self-calibrating bundle adjustment was developed, and other photogrammetric methods were adapted to the panoramic geometry. Thus a detailed and accurate 3D reconstruction of objects such as indoor scenes, city squares or long façades from digital panoramic imagery is possible.

2. ROTATING LINE CAMERA

2.1 Principle of image acquisition

The principle of image acquisition of a digital panoramic camera is similar to that of a flatbed scanner, where an RGB CCD linear array sensor moves linearly over an object. In the case of a digital panoramic camera, the sensor moves around a fixed rotation axis and describes a cylindrical surface. Therefore this kind of camera is also called a rotating line camera.

Fig. 2: Principle of digital panoramic image acquisition

Fig. 3: Digital panoramic camera EYESCAN M3

In this manner the space around the camera can be scanned with a horizontal angle of up to 360°. The vertical opening angle depends on the sensor length as well as on the focal distance of the lens. As the sensor consists of one CCD line for each colour channel, true RGB information is recorded without using colour filter patterns and interpolation techniques. A disadvantage of this imaging principle is the long recording time compared to CCD arrays; moving objects are therefore not represented correctly in the image.

2.2 Panoramic line camera EYESCAN M3

The EYESCAN M3 is equipped with a CCD sensor with 3 × 10,200 pixels. Similar sensors are also used in cameras on airborne platforms such as the ADS40. The size of a 360° panoramic image also depends on the focal length and can reach values between 300 and 900 megapixels (uncompressed, 16 bit per colour). Further details on the camera configuration are given in (Schneider & Maas, 2003b). Some technical data are summarised in the following table.

Lens                                  35 mm    45 mm    60 mm    100 mm
Number of image columns (360°)        31,400   40,400   53,800   89,700
Vertical opening angle                90°      80°      60°      40°
Data volume (360°, 48 bit)            1.7 GB   2.3 GB   3.1 GB   5.1 GB
Recording time (360°, 8 ms/column)    3 min    4.5 min  6 min    10 min

Sensor: linear array RGB CCD with 10,200 pixels per colour channel, length 72 mm, radiometric resolution 16 bit per colour channel.

Tab. 1: Basic parameters of the EYESCAN M3

The resolution potential of the camera is illustrated by Fig. 4, where a small part of a panorama of the Zwinger Dresden is shown in 4 zoom steps.

Fig. 4: Resolution potential of the camera EYESCAN M3 illustrated through 4 zoom steps

Optionally it is possible to mount an illumination system on the camera head (see Fig. 5), which projects a light line onto the object. This system consists of a light source and an optical fibre cable, which transmits the light to 3 light profile adapters. This illumination system can be very useful for capturing retro-reflecting targets, e.g. for camera calibration purposes.
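The data volumes in Tab. 1 follow directly from the sensor geometry. As a quick plausibility check (assuming 3 colour channels × 16 bit = 6 bytes per pixel, uncompressed):

```python
# Uncompressed data volume of a 360° panorama: columns × rows × bytes/pixel.
# Assumes 3 colour channels × 16 bit = 6 bytes per pixel (48 bit), as in Tab. 1.
ROWS = 10_200            # pixels per CCD colour line
BYTES_PER_PIXEL = 6      # 48 bit

columns_per_lens = {"35 mm": 31_400, "45 mm": 40_400,
                    "60 mm": 53_800, "100 mm": 89_700}

for lens, cols in columns_per_lens.items():
    volume_gib = cols * ROWS * BYTES_PER_PIXEL / 2**30
    print(f"{lens:>6}: {volume_gib:.1f} GiB")   # same order as the Tab. 1 figures
```

The results reproduce the published figures to within rounding, confirming that the table assumes uncompressed 48-bit storage.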

Fig. 5: Camera EYESCAN M3 with glass-fibre illumination system

3. MATHEMATICAL MODEL

3.1 Basic model approach

The mapping of object points onto a cylindrical surface, described by the rotation of the linear array sensor, complies with the known central perspective principle in one image direction only. Therefore it was necessary to develop a geometric model for the geometry of digital panoramic cameras. This model is based on transformations between four coordinate systems. As also described in (Lisowski & Wiedemann, 1998), an object coordinate system, a Cartesian and a cylindrical camera system as well as the pixel coordinate system were defined. Through the transformations between these coordinate systems we obtain the basic observation equations (equations 1 and 2), in analogy to the collinearity equations, which describe the observations (image coordinates) as a function of object coordinates, camera orientation and possibly additional parameters.

Fig. 6: Geometrical model (definition of the variables used in Schneider & Maas, 2003b)

3.2 Additional parameters and accuracy potential

The geometric model complies only approximately with the actual physical reality of the image forming process. Therefore the correction terms dm and dn, in which additional parameters for the compensation of systematic effects are considered, are crucial for the accuracy potential of the model. These parameters are explained in more detail in Schneider & Maas (2003b). Figures 7 and 8 illustrate three of the additional parameters.

Fig. 7: Model deviations (e₁: eccentricity of the projection centre)

Fig. 8: Model deviations (γ₁, γ₂: non-parallelism of the CCD line, 2 components)

As a first step, the geometrical model was implemented in a spatial resection. The resulting standard deviation of unit weight and other output parameters of the resection were analysed to assess the effect of every additional parameter individually.
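The observation equations (1) and (2) referenced in section 3.1 are not legible in this copy. A plausible general form, assuming the standard cylindrical model with an azimuth-proportional column index (the exact parameterization, including the correction terms dm and dn, is given in Schneider & Maas, 2003b):

```latex
% Object point X transformed into the Cartesian camera system:
%   (\bar x, \bar y, \bar z)^T = R^T (X - X_0)
% Column index m (azimuth direction) and row index n (along the CCD line):
m = \frac{1}{\Delta\varphi}\,\arctan\!\left(\frac{\bar y}{\bar x}\right) + m_0 + dm
\qquad (1)
n = n_0 - c\,\frac{\bar z}{\sqrt{\bar x^{2} + \bar y^{2}}} + dn
\qquad (2)
```

where Δφ is the horizontal angle per image column, c the principal distance, (m₀, n₀) the principal point, and dm, dn the correction terms of section 3.2.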
The following table (Tab. 2) shows how σ̂₀ changed as additional parameters were inserted successively. The spatial resection is based on approx. 360 reference points around the camera position in a calibration room, courtesy of AICON 3D Systems GmbH.
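The object-space precision figures quoted below can be verified with a few lines; a sketch assuming the Tab. 1 sensor parameters (10,200 pixels over 72 mm, i.e. a pixel pitch of about 7.06 µm) and a 35 mm lens:

```python
# Sanity check: translate an image-space precision of 0.24 pixel into
# object space for the EYESCAN M3 (sensor values from Tab. 1).
pixel_pitch = 72e-3 / 10_200   # sensor length / pixel count -> ~7.06e-6 m
sigma_px = 0.24                # std. dev. of unit weight [pixel]
c = 35e-3                      # principal distance (35 mm lens)

for distance in (2.0, 10.0):   # object distances [m]
    # lateral precision = image-space precision scaled by distance / principal distance
    sigma_obj = sigma_px * pixel_pitch * distance / c
    print(f"{distance:4.0f} m -> {sigma_obj * 1000:.2f} mm")

# Relative precision referred to the 10,200 px CCD line:
print(f"1 : {10_200 / sigma_px:,.0f}")
```

This reproduces the 0.1 mm (at 2 m) and 0.5 mm (at 10 m) figures and a relative precision of roughly 1:42,000.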

Parameter                                       σ̂₀ [pixel]
Exterior orientation                            25.20
Interior orientation                             5.88
Eccentricity of projection centre                5.63
Non-parallelism of CCD line (2 components)       1.15
Lens distortion                                  0.60
Affinity                                         0.45
Non-uniform rotation (periodical deviations)     0.24

Tab. 2: σ̂₀ of the spatial resection

Translating the resulting σ̂₀ of 0.24 pixel into object space, we obtain a lateral point precision between 0.1 mm (at 2 m distance) and 0.5 mm (at 10 m distance) when using a 35 mm lens. Referred to the length of the CCD line of 10,200 pixels, this value corresponds to a relative precision of 1:42,000.

In (Amiri Parian & Grün, 2003) further parameters, in addition to physically defined parameters, are used for the compensation of local systematic effects. For this purpose the panorama is divided into pieces, in which a polynomial approach is then used for the compensation of local remaining systematics. Thus σ̂₀ = 0.23 pixels was reached, which corresponds to the order of magnitude shown here.

Fig. 9: Principle drawing of the bundle adjustment for panoramas

4. IMPLEMENTATION OF THE MODEL

The mathematical model was implemented into different photogrammetric applications, with the primary focus on a bundle block adjustment for panoramic images.

4.1 Panoramic bundle adjustment

Using the bundle adjustment, it is possible to determine object points, orientations and camera parameters simultaneously from two or more panoramas. An important goal during the development of the panoramic bundle block adjustment was user friendliness, which means, among other things, that the computation should require as few approximate values as possible. The implemented solution requires only three object points to procure approximate values for the orientation of the panoramas and successively for all object points. These three object points can be realized, for example, by a small reference triangle placed in the object space.
The adjustment can be accomplished alternatively with a minimum datum solution, with a certain number of control points, or as a free network adjustment. Tab. 3 summarises the results of two computations, an adjustment with minimum datum and a free network adjustment, both with 364 object points and 5 camera positions. As expected, the standard deviations of the object coordinates are better in the free network adjustment. This effect can be explained by the datum point distribution.

Fig. 10: Object points of the calibration room of AICON 3D Systems GmbH, incl. camera positions

                 Minimum datum    Free network adjustment
σ̂₀ [pixel]          0.22
σ_X [mm]            0.48             0.33
σ_Y [mm]            0.45             0.27
σ_Z [mm]            1.01             0.15

Tab. 3: Results of the panoramic bundle block adjustment of points in the calibration room

Remaining systematic errors of the camera might result in object point coordinate errors and not show up in the results of the bundle adjustment. Therefore the computed object coordinates were compared with the reference coordinates of the calibration room. For the stabilization of the block geometry, four well-distributed control points were used. From Tab. 4 it becomes obvious that the average value of the deviations amounts to ca. 0.5 mm for all three coordinate directions. It is not certain, however, whether the small discrepancy between bundle results and checkpoint deviations can be interpreted as a limitation of the accuracy potential of the camera, or whether the deviations are caused by the limited precision of the reference coordinates.
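The adjustment machinery can be sketched in a few lines: a spatial resection of one panorama station from reference points, using a basic cylindrical projection without additional parameters. All numerical values and the simplified model are illustrative assumptions, not the authors' implementation; the bundle adjustment additionally treats the object points and camera parameters as unknowns:

```python
import numpy as np
from scipy.optimize import least_squares

# Simplified cylindrical projection (basic model, no correction terms).
DPHI = 2 * np.pi / 31_400        # horizontal angle per column (35 mm lens)
C, N0 = 4_958.0, 5_100.0         # principal distance [px], principal row

def project(points, station):
    """station = (X0, Y0, Z0, kappa): projection centre and azimuth offset."""
    d = points - station[:3]
    az = np.arctan2(d[:, 1], d[:, 0]) - station[3]
    m = np.mod(az, 2 * np.pi) / DPHI                     # column index
    n = N0 - C * d[:, 2] / np.hypot(d[:, 0], d[:, 1])    # row index
    return np.column_stack([m, n])

# Synthetic reference points on a ring around the station (a calibration-room
# stand-in) and noisy observations of them.
ang = np.linspace(0, 2 * np.pi, 12, endpoint=False)
points = np.column_stack([4 * np.cos(ang), 4 * np.sin(ang),
                          np.linspace(-1.0, 2.0, 12)])
true_station = np.array([0.2, -0.1, 1.5, 0.3])
obs = project(points, true_station)
obs += np.random.default_rng(0).normal(0, 0.2, obs.shape)  # 0.2 px noise

def residuals(s):
    r = project(points, s) - obs
    half = np.pi / DPHI                 # wrap column residuals at +-180 deg
    r[:, 0] = (r[:, 0] + half) % (2 * half) - half
    return r.ravel()

fit = least_squares(residuals, x0=np.zeros(4))
print(fit.x)   # close to true_station
```

Wrapping the column residuals is essential here, because the column index is periodic over the full 360° circle.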

X    0.58 mm
Y    0.44 mm
Z    0.36 mm

Tab. 4: Average deviations between calculated object points and reference points

Further bundle adjustments were calculated using panoramic imagery of an inner courtyard on the campus of the Dresden University of Technology. Fig. 11 shows one of 4 panoramas. This courtyard has a dimension of approximately 45 × 45 m in the ground view, and the building height is ca. 20 m.

Fig. 11: Panorama of an inner courtyard on the campus of the Dresden University of Technology

A first calculation was carried out using image coordinates of 120 signalized points, which could be measured semi-automatically with subpixel precision. The free network adjustment resulted in σ̂₀ = 0.24 pixel. The mean standard deviations of the object points are summarised in the following table.

Signalized points (120 points):
σ̂₀ [pixel]    0.24
σ_X [mm]      2.6
σ_Y [mm]      2.5
σ_Z [mm]      2.8

Tab. 5: Results of the panoramic bundle block adjustment of signalized points of a real-world object

The same dataset was processed with 48 additional natural object points, which were measured manually. Tab. 6 shows the mean standard deviations of these natural points.

Natural points:
σ̂₀ [pixel] (all 168 points)      0.41
σ_X [mm] (48 natural points)     4.2
σ_Y [mm] (48 natural points)     4.7
σ_Z [mm] (48 natural points)     5.3

Tab. 6: Results of the panoramic bundle block adjustment of natural points of a real-world object

4.2 Object model generation

After the calculation of the coordinates of points representing the object geometry, such as edges or corners, via bundle block adjustment or spatial intersection, it is possible to generate 3D models of the 360° surroundings. After creating surfaces from these discrete points, the model can be filled with high-resolution texture from the panoramic images. This texture mapping can be achieved by projecting image data of an oriented and calibrated panorama onto the object surface planes using the accurate mathematical model described in chapter 3. Fig. 12 illustrates the principle of this projection.

Fig. 12: Principle of the projection of panoramic texture into a 3D model

Using the 3D object geometry resulting from the geometric processing of panoramic imagery, it is then possible to generate precise photo-realistic 3D models of objects such as city squares, rooms or courtyards, e.g. with CAD software (Fig. 13). Some virtual reality models (VRML) can be found in (Schneider, 2004).

Fig. 13: 3D modelling with AutoCAD using object geometry and texture from panoramic imagery

4.3 Epipolar line geometry

The developed mathematical model was further used to describe the epipolar line geometry of panoramic images. As evident from Fig. 14, in most cases the epipolar lines are actually not straight lines but rather epipolar curves in the image.

Fig. 14: Epipolar line geometry of panoramas
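The construction in Fig. 14 is straightforward to reproduce numerically: sample points along the viewing ray of a pixel in one panorama and project them into a second panorama with a basic cylindrical model. The parameter values and the simplified model are assumptions for illustration; the full model adds the correction terms of section 3.2:

```python
import numpy as np

DPHI = 2 * np.pi / 31_400       # horizontal angle per column
C, N0 = 4_958.0, 5_100.0        # principal distance [px], principal row

def project(P, X0):
    """Map object points P (N, 3) into a panorama centred at X0 (axes aligned)."""
    d = P - X0
    m = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi) / DPHI
    n = N0 - C * d[:, 2] / np.hypot(d[:, 0], d[:, 1])
    return np.column_stack([m, n])

def ray_points(X0, m, n, depths):
    """Object points along the viewing ray of pixel (m, n) of the panorama at X0."""
    phi = m * DPHI
    direction = np.array([np.cos(phi), np.sin(phi), (N0 - n) / C])
    direction /= np.linalg.norm(direction)
    return X0 + depths[:, None] * direction

O1, O2 = np.array([0.0, 0.0, 0.0]), np.array([3.0, 1.0, 0.0])   # two stations
depths = np.linspace(2.0, 50.0, 200)
curve = project(ray_points(O1, 4_000, 4_000, depths), O2)
# 'curve' traces the epipolar locus (m, n) in the second panorama; in general
# it is a curve, not a straight image line.
```

Plotting `curve` shows the characteristic bending of panoramic epipolar lines in the cylindrical image plane.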

The epipolar line geometry can be computed from the orientation parameters of a set of panoramic images, taking the additional parameters into consideration. It may be used to support the search for corresponding points during interactive stereo mapping, and it is indispensable for semi- or fully automatic processing of panoramic image data. Proceeding from a point in one panorama, the homologous point in a second panorama can be searched for along the epipolar curve. The same point in a third panorama can be found at the intersection point of two epipolar curves.

4.4 Tangential projection

The RGB information on the cylindrical surface of a panoramic image can be projected onto a tangential plane by a tangential projection. In this process the panoramic image is converted into a central perspective view using the developed geometrical model, allowing its use in conventional photogrammetric software tools. With this method only panorama sectors can be converted whose panoramic angle is significantly smaller than 180°. The procedure can, for example, be used in architectural applications such as capturing façades (Fig. 15).

5. FUTURE PROSPECTS

The presented algorithms for the photogrammetric processing of panoramic images were programmed in the form of functions and have already been partly integrated into a user software package of the company fokus GmbH Leipzig. This integration will be completed in the next months, so that a comprehensive software package will finally be available for high-precision three-dimensional processing of panoramic images.

ACKNOWLEDGEMENTS

The results presented in this paper were developed in the context of the project "Terrestrial rotating line wide-angle camera for digital close range photogrammetry", which is funded by resources of the European Fund for Regional Development (EFRE) 2000-2006 and by resources of the State of Saxony.
The authors would like to thank the companies KST (Kamera System Technik Dresden GmbH) and fokus GmbH Leipzig, which are also participants in the project. In addition, special thanks go to AICON 3D Systems GmbH for the supply of the calibration room.

Fig. 15: Panorama sector of a façade and its tangential projection

The panoramic image projected onto the tangential plane was rectified by means of a projective transformation (Fig. 16). The advantage of this procedure over the use of area sensor cameras is the very high resolution of rotating line panoramic cameras, which allows for the recognition and mapping of very fine object detail in the image. The original image of the panoramic sector shown in Fig. 15 has ca. 64 million pixels.

Fig. 16: Rectified image

REFERENCES

Amiri Parian, J., Grün, A., 2003. A sensor model for panoramic cameras. In: Grün/Kahmen (Eds.): Optical 3-D Measurement Techniques VI, Volume II, pp. 130-141.

Lisowski, W., Wiedemann, A., 1998. Auswertung von Bilddaten eines Rotationszeilenscanners [Analysis of image data of a rotating line scanner]. Publikationen der DGPF, No. 7/1998, pp. 183-189.

Scheibe, K., Korsitzky, H., Reulke, R., Scheele, M., Solbrig, M., 2001. EYESCAN - a high resolution digital panoramic camera. In: Klette/Peleg/Sommer (Eds.): Robot Vision 2001, LNCS 1998. Springer Verlag, pp. 87-83.

Schneider, D., Maas, H.-G., 2003a. Geometrische Modellierung und Kalibrierung einer hochauflösenden digitalen Rotationszeilenkamera [Geometric modelling and calibration of a high-resolution digital rotating line camera]. In: Photogrammetrie, Laserscanning, Optische 3D-Messtechnik - Beiträge der Oldenburger 3D-Tage 2003. Wichmann Verlag, pp. 57-64.

Schneider, D., Maas, H.-G., 2003b. Geometric modelling and calibration of a high resolution panoramic camera. In: Grün/Kahmen (Eds.): Optical 3-D Measurement Techniques VI, Volume II, pp. 122-129.

Schneider, D., 2004. Information about the research project "High-resolution digital panoramic camera - geometric modelling, calibration and photogrammetric applications". Institute of Photogrammetry and Remote Sensing, Dresden University of Technology, Germany. http://www.tu-dresden.de/fghgipf/forschung/panocam/ (29 April 2004)

Tecklenburg, W., Luhmann, T., 2003. Potential of panoramic view generated from high-resolution frame images and rotation line scanners. In: Grün/Kahmen (Eds.): Optical 3-D Measurement Techniques VI, Volume II, pp. 114-121.