Novel Hemispheric Image Formation: Concepts & Applications

Simon Thibault, Pierre Konen, Patrice Roulet, and Mathieu Villegas
ImmerVision, 2020 University St., Montreal, Canada H3A 2A5

ABSTRACT

Panoramic and hemispheric lens technologies represent new and exciting opportunities in both imaging and projection systems. Such lenses offer intriguing applications for the transportation/automotive industry, in the protection of civilian and military areas, and in the ever-evolving entertainment business. In this paper we describe a new optical design technique that provides a greater degree of freedom in producing a variety of hemispheric spatial light distribution areas. This innovative optical design strategy, of generating and controlling image mapping, has been successful in producing high-resolution imaging and projection systems. This success has subsequently generated increased interest in the high-resolution camera/projector and in the concept of absolute measurement with high-resolution wide-angle lenses. The technique described in this paper uses optimization to improve the performance of a customized wide-angle lens for a specific application. By adding a custom angle-to-pixel ratio at the optical design stage, the customized optical system provides ideal image coverage while reducing and optimizing signal processing. This novel image formation technique requires the development of new algorithms in order to view the panoramic image on a display without residual distortion.

Keywords: panoramic, omnidirectional, panomorph, hemispheric, image forming, rendering, 3D rendering.

1. INTRODUCTION

Natural or artificial vision systems process the images collected with the system's eyes or cameras to capture the information required for navigation, surveillance, tracking, recognition and other tasks. Since the way images are captured determines the degree of difficulty in performing a task, and since most systems have to cope with limited resources, the image mapping on the system's sensor should be designed to optimize the image resolution and processing related to particular tasks. Different ways of sampling light, i.e., through different camera lenses, may be more or less powerful with respect to specific competencies. This seems intuitively evident in view of the variety of eye designs in the biological world.

Over the last several years, ImmerVision's research team has focused on the imaging process and the development of a new type of panoramic imager that is optimized to provide superior image mapping with respect to specific applications. We have shown, for surveillance scenarios [1] as an example, that the camera system can be improved by increasing the resolution in the zone of interest relative to the system's overall capabilities and costs. This first application demonstrates a new way of constructing powerful imaging devices which, compared to conventional cameras, are better suited to particular tasks in various wide-angle vision applications, thus leading to a new camera technology.

2. HARDWARE: PANOMORPH LENS CONCEPT

Panoramic imaging is of growing importance in many applications. While primarily valued for its ability to image a very large field of view (180° × 360°), other characteristics, such as the ability to reduce the number of sensors, are equally important benefits of panoramic imaging. In addition, ImmerVision's panomorph lenses offer distortion control, which is considered a major enhancement to panoramic vision [2].
Specifically, the panoramic imager, equipped with a panomorph lens, can be designed to increase the number of pixels in the zones of interest using a patented distortion-control process. The main advantage of the ImmerVision patent is its custom-design approach: panoramic lens applications need to be designed to meet real and very specific needs. By integrating specific distortion control during the optical design stage, ImmerVision technology can produce a unique and highly efficient panoramic lens.

The panomorph lens provides a full hemispheric field of view. In contrast with other types of panoramic imagers, which suffer from blind zones (catadioptric cameras), low image numerical aperture and high distortion, the panomorph lens is designed to use distortion as a design parameter, with the effect of producing high-resolution coverage where needed, i.e., in the zone of interest. In the design of an efficient panoramic lens, the coverage area is divided into different zones. A specific resolution requirement, as well as a particular field of view, is defined for each individual zone. Figure 1 shows a typical surveillance scenario.

Figure 1: Specific security zones.

Figure 2: The ideal FOV (α) vs. the position (d) on the sensor for the scenario presented in Figure 1.

For this particular scenario, the panoramic coverage area is divided into five adjacent and continuous zones; zones B and C are symmetrical about the vertical axis. The five adjacent zones, while together still providing full hemispheric coverage, each feature a different resolution requirement, as the most significant objects are in Zone B. (Zone B in a surveillance application enables facial recognition and identification.) An object in Zone B is also more distant from the camera than an object in Zone A, which means that the relative angular resolution (pixels/degree) in Zones A and B should be different. For example, a human face in Zone B (located at 60 degrees from the vertical axis) subtends an angle half as large as it would in Zone A (directly above the camera). To get the same number of pixels per face in both zones, the pixels/degree in Zone B must be twice the pixels/degree in Zone A, and thus the number of pixels required on the sensor to image Zone B is twice the number required to image Zone A. It is difficult to evaluate the exact resolution as a function of the sensor, because this depends on the resolution chosen for the zone of interest. However, if we define n zones (i = 1 to n), where each zone covers an angle θ_i with a number of pixels N_i, we can describe the resolution R_i for each zone:

R_i = N_i / θ_i ,  (1)

with the following limit conditions:

∑_{i=1}^{n} N_i = ∑_{i=1}^{n} R_i θ_i = # pixels ,  (2)

and

∑_{i=1}^{n} θ_i = θ_max ,  (3)

showing that if you increase the resolution in zone i, the result is less resolution in the other zones. In the next section we will see some examples of this.
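As a worked illustration of equations (1) to (3), the following sketch splits a fixed pixel budget across zones from their angular extents θ_i and relative resolution weights. It uses a simplified three-zone radial profile loosely based on the scenario of Figure 1; all numeric values are assumptions for illustration, not design data from the paper.

```python
# Minimal sketch of the pixel budget implied by equations (1)-(3).
# Zone extents and relative resolution weights are illustrative only.

def allocate_pixels(zones, total_pixels):
    """zones: list of (name, theta_deg, relative_resolution).

    Each zone i receives N_i proportional to R_i * theta_i, so that
    sum(N_i) = total pixels (eq. 2) while sum(theta_i) covers the full
    field of view (eq. 3).
    """
    weight = sum(r * theta for _, theta, r in zones)
    return {name: total_pixels * r * theta / weight
            for name, theta, r in zones}

# Hypothetical radial profile from the zenith (0 deg) to the horizon
# (90 deg): zone B gets twice the relative resolution of zones A and C.
zones = [("A", 40.0, 1.0), ("B", 30.0, 2.0), ("C", 20.0, 1.0)]
budget = allocate_pixels(zones, total_pixels=480)
for name, theta, _ in zones:
    n_i = budget[name]
    print(f"zone {name}: N_i = {n_i:5.1f} px, R_i = {n_i / theta:.2f} px/deg")
```

Doubling zone B's weight doubles its pixels/degree (8 vs. 4 in this run), and, because the total is fixed, every extra pixel in zone B comes out of zones A and C, exactly the trade-off that equations (2) and (3) impose.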

Table 1 summarizes how the modern distortion-corrected panomorph lens offers a real benefit over other types of panoramic imagers (with at least a 180-degree field of view). As a comparison matrix we used the following: the zone of interest is defined as 20° to 60° below the horizon (0° being the horizon), representing a typical zone of interest for a surveillance application. This comparison is valid for any number of pixels on the sensor. As Table 1 shows, mirror, PAL and fisheye panoramic imagers use less than 60% of the sensor area to image the FOV. In comparison, using an anamorphic design, the panomorph lens uses up to 80% of the sensor to image the FOV, about 30% more than any other panoramic imager on the market.

Table 1: Panoramic image formation comparison.

Imager type                  | Sensor surface used | Pixels used in the zone of interest | Blind zone | Compactness
Mirror imager                | 57%                 | 18%                                 | Yes        | No
PAL (Panoramic Annular Lens) | 51%                 | 28%                                 | Yes        | Yes
Fisheye lens                 | 59%                 | 29%                                 | No         | Yes
Panomorph lens               | 79%                 | 50%                                 | No         | Yes

(The original table's last column showed each imager's sensor footprint when viewing an interior scenario.)

3. SOFTWARE: FROM PANORAMIC PICTURE TO STANDARD VISUALIZATION

To be effective, the panoramic video-viewing library corrects the image distortion of cameras equipped with a panomorph lens for display and control of one or more standard views, such as a PTZ view (Figure 3), in real time. The viewing library allows simultaneous display of as many views as desired, from one or more cameras (Figure 4).

Figure 3: Real-time distortion-free display (left: original image produced by the panomorph lens).

Figure 4: Four PTZ views (left), and two strips displaying a total 360-degree camera surround in one single view (right).

Consequently, the viewing process must unwrap the image in real time in order to provide views that reproduce real-world proportions and geometrical information. The algorithms can be customized and adapted for each specific application, whether related to human vision (display) or artificial vision (analytic functions). The viewing process can be decomposed into three main steps (outlined in the sketch below):

- the definition of the panomorph geometrical model (PGM) associated with each custom panomorph lens application;
- the projection of the recorded image onto the PGM to provide a discretized mapping based on the recorded pixel positions on the sensor;
- finally, the rendering, which uses well-known standard rendering techniques.
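To keep the data flow clear before the detailed subsections, here is a minimal outline of those three steps in code. Function names and signatures are hypothetical; the paper does not publish the viewing library's actual API.

```python
# Hypothetical outline of the three-step viewing process (names and
# signatures are illustrative, not the ImmerVision SDK API).

def build_pgm(lens_profile):
    """Step 1: define the Panomorph Geometrical Model (PGM) from the
    lens parameters (FOV, anamorphic ratio, distortion profile)."""
    raise NotImplementedError  # detailed in sections 3.1.1 to 3.1.4

def project_onto_pgm(pgm, panomorph_image):
    """Step 2: map each recorded pixel onto a discrete PGM element."""
    raise NotImplementedError  # detailed in section 3.2

def render(mapped_pgm, theta, phi, zoom):
    """Step 3: render a distortion-free view from a virtual camera at the
    PGM centre, pointed in direction (theta, phi)."""
    raise NotImplementedError  # detailed in section 3.3

def view_frame(lens_profile, frame, theta, phi, zoom=1.0):
    """One display update: model, projection, then rendering."""
    pgm = build_pgm(lens_profile)
    return render(project_onto_pgm(pgm, frame), theta, phi, zoom)
```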

3.1 Panomorph Geometrical Model (PGM)

The image produced by each panomorph lens is unique to its application. The image mapping can be defined by a unique 3D geometrical model (the Panomorph Geometrical Model, or PGM), which reproduces the panomorph lens design characteristics. The PGM is a geometric representation (a surface) of the relative magnification of the lens as a function of the angles, expressed in spherical or polar coordinates (R, θ, φ). In other words, if the surface is represented by the vector R, the length of the vector is proportional to the lens magnification (resolution) in the direction defined by the polar angles. This model depends on lens parameters such as the anamorphic ratio, the field of view, and the position, size and magnification of the zones of interest. The PGM is a mathematical transformation of the image footprint I(u,v) into a surface S(R,θ,φ) representation using spherical coordinates:

I(u,v) → S(R,θ,φ) .  (4)

3.1.1 Anamorphic ratio

The anamorphic ratio is used only as a scale factor, which is a function of the angle φ (Figure 5). This angle defines the azimuth direction of the recorded image taken by the panomorph lens.

Figure 5: Panomorph elliptical footprint I(u,v); scaling defined with the φ angle.

3.1.2 Field of view

The field of view, or FOV, determines the angular limit (θ) of the PGM. The FOV of the panomorph lens is about 180 degrees, but can be more or less depending on the application. Figure 6 shows two schematic PGMs with 180-degree and 250-degree FOVs respectively.

Figure 6: PGMs with 180- and 250-degree FOVs respectively.
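To make equation (4) and the role of the anamorphic ratio concrete, the sketch below inverts a pixel position on the elliptical footprint I(u,v) into the angles (θ, φ). It assumes a simple linear radius-to-angle law as a stand-in for the real distortion profile of section 3.1.3, and the footprint geometry is illustrative.

```python
import math

def pixel_to_angles(u, v, cx, cy, a, b, theta_max_deg):
    """Map a pixel (u, v) on an elliptical image footprint to (theta, phi).

    (cx, cy): footprint centre in pixels.
    (a, b):   semi-axes of the ellipse in pixels (anamorphic ratio a/b).
    theta_max_deg: angular limit of the FOV (90 deg for a hemisphere).

    A linear radius-to-angle law is assumed for illustration only.
    """
    # Undo the anamorphic scaling so the footprint becomes circular.
    x = (u - cx) / a
    y = (v - cy) / b
    rho = math.hypot(x, y)          # normalized radius, 1.0 at the FOV edge
    if rho > 1.0:
        return None                 # outside the image footprint
    phi = math.atan2(y, x)          # azimuth direction in the scene
    theta = rho * math.radians(theta_max_deg)   # linear law (assumption)
    return theta, phi

# Example: a 4:3 elliptical footprint on a 640x480 sensor, 180-degree FOV.
print(pixel_to_angles(500, 240, 320, 240, 320, 240, theta_max_deg=90))
```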

3.1.3 Distortion

The panomorph lens uses distortion as a design parameter in order to provide high-resolution coverage in a specific zone of interest. In other words, the FOV can be divided into different zones, each with a defined resolution, which all satisfy equations (1) to (3). Each zone is characterized by its specific angular extent and relative resolution. To illustrate the impact of the distortion profile on the PGM, we study two examples. In both, the FOV is 180 degrees wide, the zone of interest is 30 degrees wide, and the resolution is two times greater in the zone of interest than in the rest of the FOV (2:1). From one example to the other, only the position of the zone of interest changes.

Example 1: The first example is based on the design of a front-view camera (Figure 7). In this case, the zone of interest is the central part of the image, even though the entire 180-degree FOV is still recorded. A panomorph lens with this feature can be used on a cell phone (for video conferencing) or on an ATM surveillance camera.

Figure 7: Panomorph lens for a front-view camera.

The panomorph lens resolution in the central zone is twice the resolution in the periphery. Figure 8 shows the image footprint with the proper resolution for each zone. On the right of Figure 8 is a Cartesian plot of the resolution as a function of the view angle (defined from the centre). Note that a transition zone exists between the central and peripheral areas. Theoretically, this transition can be very small, but because the panomorph lens is a real product, the transition can extend over about 10 degrees.

Figure 8: Image footprint (left) and resolution graph (right) for the front-view panomorph lens.

As defined, the PGM in polar coordinate space represents the resolution of the panomorph lens: a surface in space where the spatial resolution is constant in terms of the azimuthal (θ) direction. Mathematically, this means that the Cartesian graph (Figure 8, right side) is transposed into spherical coordinates. Figure 9 shows the 3D PGM representation.

Figure 9: 3D PGM (left), 2D view in the Y-Z plane (right).
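The front-view profile of Example 1 can be sketched numerically. The code below encodes the 2:1 ratio and 30-degree-wide central zone stated above, assumes a 10-degree linear transition (the text only bounds its extent), and integrates the profile to obtain the radial position d on the sensor for each field angle, i.e., the kind of angle-to-position mapping shown in Figure 2.

```python
def resolution(theta_deg, zone_half=15.0, transition=10.0, ratio=2.0):
    """Relative resolution (px/deg) vs. field angle for the front-view lens:
    2x inside the 30-degree central zone (|theta| < 15 deg), 1x in the
    periphery, blended linearly over an assumed 10-degree transition."""
    t = abs(theta_deg)
    if t <= zone_half:
        return ratio
    if t >= zone_half + transition:
        return 1.0
    return ratio - (ratio - 1.0) * (t - zone_half) / transition

def image_position(theta_deg, step=0.1):
    """Radial position d on the sensor (arbitrary units): the integral of
    the resolution profile from the centre out to theta_deg."""
    d, t = 0.0, 0.0
    while t < theta_deg:
        d += resolution(t) * step
        t += step
    return d

for angle in (0, 15, 25, 45, 90):
    print(f"theta={angle:2d} deg  R={resolution(angle):.2f}  "
          f"d={image_position(angle):6.1f}")
```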

Example 2: The second example demonstrates a panomorph lens optimized for video conferencing, where the zone of interest is not in the centre but at the edge of the field of view. Figures 10, 11 and 12 show the image footprint, the resolution graph and the corresponding PGM, respectively.

Figure 10: Panomorph lens for video conferencing.

Figure 11: Image footprint (left) and resolution graph (right) for the video-conferencing panomorph lens.

Figure 12: 3D PGM (left), 2D view in the Y-Z plane (right).
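Under the same assumptions, Example 2 only moves the zone of interest. A variant of the earlier `resolution()` sketch (parameters again illustrative) places the 2x zone at the edge of the field, matching the trend of the resolution graph in Figure 11.

```python
def edge_resolution(theta_deg, zone_start=60.0, transition=10.0, ratio=2.0):
    """Example 2 variant of resolution(): the zone of interest sits at the
    edge of the FOV (here theta from 60 to 90 deg), as in Figure 10."""
    t = abs(theta_deg)
    if t <= zone_start:
        return 1.0                  # periphery: baseline resolution
    if t >= zone_start + transition:
        return ratio                # zone of interest at the edge: 2x
    return 1.0 + (ratio - 1.0) * (t - zone_start) / transition
```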

3.1.4 Sensor resolution

A panomorph lens can be used with any sensor format (VGA, XGA, etc.) as long as the lens performance matches the Nyquist frequency of the sensor. The number of pixels affects the discretization of the PGM: up to now the PGM has been defined as a continuous mathematical surface, but the sensor has a finite number of pixels, so the continuous PGM is sampled by them. The number of pixels required to map the entire surface of the PGM is equal to the number of pixels on the sensor. Figure 13 shows a 2D sampling of the PGM using only 22 elements. Note that the pixel dimension is constant over the entire PGM, and the pixels are always perpendicular to the viewing direction (the direction of the vector R). With a higher number of pixels, the discrete PGM is closer to the continuous PGM, as shown in Figure 14.

Figure 13: Discrete PGM with a 22-unit (pixel) sampling.

Figure 14: Discrete PGM with a 44-unit (pixel) sampling.

3.2 Projection of the panomorph image onto the PGM

The image I(u,v) from the panomorph lens is projected onto the PGM, as shown in Figures 13 and 14. The final result is a discrete surface: the PGM is mapped with the panomorph image and can then be viewed using any classical 3D visualization technique. Each pixel of the panomorph image is projected onto a discrete element of the PGM. The pixel position in 3D space (on the surface) represents the real object position in the recorded scene. The projection uses the adapted azimuthal projection technique [4], with anamorphosis and distortion parameters added.

3.3 Standard rendering of the PGM

The final goal is to visualize the recorded scene without distortion. The PGM can be used to achieve this goal with a standard rendering algorithm [3]. A virtual camera is placed at the central position (0,0,0). Viewing the scene with this virtual camera first requires selecting the viewing-direction angles (θ,φ). Figure 15 shows two cameras pointing in two different directions. The camera pointed at the centre of the PGM sees a total of four elements (in 1D; 16 elements in 2D), while the camera pointed at the edge of the PGM sees only two. This is the distortion effect: the resolution at the centre is twice the resolution at the edge. A zoom can also be applied to change θ and provide virtual zoom functionality.
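A compact sketch of this rendering step: a pinhole virtual camera at the PGM centre (0,0,0) turns each display pixel into a viewing direction, which is mapped back to a source pixel in the recorded panomorph image. The inverse lens mapping reuses the illustrative linear law and anamorphic scaling from the earlier sketch; a real implementation would invert the lens's actual distortion profile and apply the adapted azimuthal projection of section 3.2.

```python
import math

def angles_to_pixel(theta, phi, cx, cy, a, b, theta_max=math.pi / 2):
    """Inverse lens mapping: the same illustrative linear radius-to-angle
    law and anamorphic scaling (semi-axes a, b) as the earlier sketch."""
    rho = theta / theta_max
    return cx + a * rho * math.cos(phi), cy + b * rho * math.sin(phi)

def render_virtual_view(sample, view_w, view_h, theta0, phi0, f, lens):
    """Pinhole virtual camera at the PGM centre, pointed at (theta0, phi0).

    sample(u, v) reads the recorded panomorph image; f is the virtual
    focal length in pixels (a larger f gives a narrower view, i.e. zoom).
    """
    # Orthonormal basis: forward along the viewing direction, z toward zenith.
    fwd = (math.sin(theta0) * math.cos(phi0),
           math.sin(theta0) * math.sin(phi0),
           math.cos(theta0))
    right = (-math.sin(phi0), math.cos(phi0), 0.0)
    up = (fwd[1] * right[2] - fwd[2] * right[1],   # up = fwd x right
          fwd[2] * right[0] - fwd[0] * right[2],
          fwd[0] * right[1] - fwd[1] * right[0])
    view = []
    for j in range(view_h):
        for i in range(view_w):
            x, y = i - view_w / 2, j - view_h / 2
            # Ray through the virtual pinhole for display pixel (i, j).
            ray = [x * right[k] + y * up[k] + f * fwd[k] for k in range(3)]
            n = math.sqrt(sum(c * c for c in ray))
            theta = math.acos(max(-1.0, min(1.0, ray[2] / n)))  # from zenith
            phi = math.atan2(ray[1], ray[0])                    # azimuth
            view.append(sample(*angles_to_pixel(theta, phi, *lens)))
    return view

# Dummy sampler returning source coordinates; a real one reads the frame.
view = render_virtual_view(lambda u, v: (round(u), round(v)),
                           90, 60, math.radians(45), 0.0,
                           f=60, lens=(320, 240, 320, 240))
```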

Figure 15: Virtual camera at the centre of the mapped PGM.

Figure 16 shows the final projection of each virtual view onto a 2D plane. This 2D view can be sent to a display monitor.

Figure 16: Viewing pixels as a function of the pointing direction of the virtual camera (left = centre, right = edge).

4. CONCLUSION

Panomorph lens development has led to a new type of panoramic imager that can be customized to enhance any panoramic imaging application. The design features full hemispheric coverage, better use of the sensor area and increased resolution in the zone of interest. During the last decade, the ImmerVision research team has developed a custom viewing process perfectly adapted to the panomorph lens. The viewing process is composed of three steps. The first step is the definition of the panomorph geometrical model (PGM) associated with each custom panomorph lens application. The second step is the projection of the recorded image onto the PGM to provide a discretized mapping based on the recorded pixel positions on the sensor. The third is a final rendering based on an azimuthal projection technique. The algorithms developed over the years have been optimized to run with small CPU and memory footprints, enabling embedded processing. The algorithms are available through an SDK running on Linux and Windows operating systems, and can be ported to many processors and systems.

REFERENCES

1. Thibault, S., "Enhanced Surveillance System Based on Panomorph Panoramic Lenses," Proc. SPIE Vol. 6540, paper 65400E, 2007.
2. Thibault, S., "Distortion Control Offers Optical System Design a New Degree of Freedom," Photonics Spectra, May 2005, pp. 80-82.
3. Horaud, R. and Monga, O., Vision par Ordinateur: Outils fondamentaux, Ed. Hermès, 1995.
4. Weisstein, Eric W., "Azimuthal Equidistant Projection," from MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/azimuthalequidistantprojection.html

Authors' contact:

Simon Thibault, M.Sc., Ph.D., P.Eng.
Director, Optics Division & Principal Optical Designer
ImmerVision
simon.thibault@immervision.com

Patrice Roulet
Technical Director
ImmerVision
patrice.roulette@immervision.com