Development of Hybrid Image Sensor for Pedestrian Detection


AUTOMOTIVE

Hiroaki Saito*, Kenichi Hatanaka and Toshikatsu Hayasaki

To reduce traffic accidents and serious injuries at intersections, development of cooperative driving support systems and related sensors has been promoted. Along with this movement, the authors have developed a hybrid image sensor that consists of a visible-ray camera and a far-infrared-ray (FIR) camera that compensate for each other. The images taken by these cameras are processed simultaneously, thereby covering various conditions including nighttime, shadows, and high temperature. The authors have also established an algorithm to detect pedestrians. This paper outlines the developed hybrid image sensor and pedestrian detection algorithm along with their evaluation results.

Keywords: pedestrian detection, far-infrared-ray camera

1. Introduction

With the aim of reducing traffic accidents, vehicle-infrastructure cooperative driving safety support systems (Fig. 1) have been actively developed. These systems comprise a sensor installed on the roadside to detect nearby pedestrians, bicycles or other objects and warn approaching vehicle drivers. The sensor must accurately detect pedestrians and other objects in all environments, 24 hours a day, 365 days a year. The authors have worked for many years on developing a far-infrared-ray (FIR) camera*1 and a pedestrian detection algorithm*2 (1). The FIR camera uses a sintered ZnS lens made of our original far-infrared optical material to take images of pedestrian-emitted heat. With the aim of reducing traffic accidents at crossings, we have now developed a hybrid image sensor combining FIR and visible-ray cameras, together with a pedestrian detection algorithm that uses the image characteristics of both cameras. This paper outlines the newly developed hybrid image sensor, details the pedestrian detection algorithm, and presents the sensor's pedestrian detection performance evaluation results.

Fig. 1. Vehicle-Infrastructure Cooperative Driving Safety Support System (a sensor located on the roadside detects a pedestrian and sends a radio signal to an approaching vehicle; major target causes of traffic accidents: driver carelessness, sudden running of pedestrians into traffic, right/left-turn hazards)

2. Outline of Hybrid Image Sensor

Table 1 compares the working principle and performance of sensors used to detect pedestrians. Visible-ray cameras are widely used in vehicle detection applications, for example traffic control systems; however, it is difficult for a visible-ray camera to detect pedestrians at night when they are not illuminated (vehicles can still be detected because their headlights are on), and such cameras can mistake the shadows of pedestrians for the pedestrians themselves. In contrast, since FIR cameras produce images by detecting the heat that objects emit, this type of camera can be used even in the dark of night.

Table 1. Comparison of Pedestrian Detection Sensors
- Visible-ray camera: detects reflected environment light or headlight.
- Far-infrared-ray camera: detects far-infrared rays (heat) emitted from an object; detectable wavelength range 8-12 μm.
- Hybrid image sensor: detects pedestrians via both thermal and visible-ray images.
(The performance of each type is compared for day, night, and bad weather.)

FIR cameras can also take images even when water drops or snowflakes adhere to the lens; in various other adverse environments as well, they offer promise as pedestrian detection tools.
However, one drawback of FIR cameras is that they cannot create an accurate image when there is insufficient thermal contrast between a pedestrian and the background (road surface), especially during the daytime in summer. To compensate for these shortcomings of visible-ray and FIR cameras, and thereby enhance pedestrian detection accuracy, we have developed a hybrid image sensor combining both types of camera.

The external view and internal configuration of the hybrid image sensor are shown in Fig. 2. The optical performance (such as refractive index and transmission factor) of the sintered ZnS (Photo 1) used as the FIR lens is slightly inferior to that of Ge, a typical FIR-transparent material. To overcome this disadvantage, our proven lens design technology was used to ensure the necessary spatial frequency response (MTF), distortion factor and other optical performance. Since sintered ZnS lenses reduce material and production costs, they also reduce the market price of FIR cameras. The specifications of the hybrid image sensor are shown in Table 2.

Fig. 2. External View and Internal Configuration of Hybrid Image Sensor (protective cover (glass), protective cover (ZnS), visible-ray camera, far-infrared-ray camera, control board, enclosure)

Photo 1. Sintered ZnS Lens

Table 2. Specifications of Hybrid Image Sensor
- View angle: H24° × V18°
- Resolution: far-infrared-ray camera 320 × 240 pixels; visible-ray camera 640 × 480 pixels
- Gradation: 8-bit (256 levels) gray scale
- Dimensions: H152 × W186 × D325 mm

3. Pedestrian Detection Algorithm

The basic process flowchart of the developed pedestrian detection algorithm is shown in Fig. 3. This algorithm not only processes the visible-ray and FIR images independently, it also unifies the image regions, taking into account the characteristic differences between the two images, to enhance pedestrian detection accuracy. The details of each process in this algorithm are described below.

Fig. 3. Process Flowchart of Pedestrian Detection Algorithm (visible-ray and far-infrared-ray image input → background difference (use of distance information) → candidate region extraction (superposition of both images, elimination of road surface reflection region, time-series tracking) → pedestrian determination → result output)

3-1 Background difference

Background difference is applied to the input images to extract non-background regions. After difference pixels have been extracted by comparing the input images with background images, adjacent pixels are grouped together and the blanks in each grouped region are filled in, so that the group can be extracted as a non-background region (a code sketch of this step is shown just after the overview in 3-2 below). Although time subtraction (interframe difference) is another possible method of extracting a non-background region, this technique was unsuitable for this study: pedestrians move only a short distance per unit time compared with vehicles, making it difficult to detect the movement of pedestrians walking far from the camera.

3-2 Candidate region extraction

The candidate pedestrian region is extracted by unifying the non-background regions extracted by the background difference method and eliminating unnecessary regions. In addition to candidate region extraction based on distance information, the following three data processing methods have been developed to improve candidate region extraction accuracy:
- Unifying non-background regions by superposing both images
- Eliminating unnecessary background regions by detecting the road surface reflection region
- Extracting the candidate region by time-series tracking
The details of each method are described below.
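Before those details, here is a minimal Python/OpenCV sketch of the background-difference step in 3-1: difference pixels are extracted against a background image, adjacent pixels are grouped, holes inside each group are filled, and each group is returned as a bounding box. The threshold, kernel size and minimum area are illustrative assumptions, not the values used in the actual sensor, and the inputs are assumed to be 8-bit grayscale images.

    import cv2
    import numpy as np

    def extract_non_background_regions(frame_gray, background_gray,
                                       diff_threshold=20, min_area=50):
        """Sketch of the background-difference step (Sec. 3-1)."""
        # Difference pixels: compare the input image with the background image
        diff = cv2.absdiff(frame_gray, background_gray)
        _, mask = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
        # Group adjacent pixels and fill the blanks inside each group
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        # Each remaining group is extracted as a non-background region
        num, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        regions = []
        for i in range(1, num):  # label 0 is the image background
            x, y, w, h, area = stats[i]
            if area >= min_area:
                regions.append((x, y, x + w, y + h))  # (x0, y0, x1, y1)
        return regions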

(1) Unifying non-background regions by superposing both images

A pedestrian region extracted from a visible-ray image is easily affected by the pedestrian's shadow, road surface reflection and other factors; the region therefore often becomes larger than the actual size of the pedestrian. In addition, since a pedestrian's shadow cast on the road is affected by the movement of the sun, the direction of pedestrian region expansion is indefinite. For FIR images, on the other hand, the background difference method cannot extract the portion of the pedestrian where thermal contrast with the background (road surface) is small. As a result, the extracted non-background region is often smaller than the actual pedestrian region (Fig. 4). An exception is road surface reflection on a rainy day, when the extracted non-background region often becomes larger than the actual region of the pedestrian. Even in such an exceptional case, the region expands toward the pedestrian's feet, due to the characteristics of road surface reflection.

Fig. 4. Extraction of Non-Background Region (left: visible-ray image, right: far-infrared-ray image; the extracted non-background regions are marked in each image)

The above discussion can be summarized as follows: if the visible-ray and FIR cameras are fixed at nearly the same view angle and nearly the same shooting direction, a satisfactory candidate region for a pedestrian can very likely be extracted by grouping together the FIR camera's non-background regions that fall within the visible-ray camera's single non-background region, and eliminating the lower part of the region (see (2) below) as needed. A candidate region extracted by this processing method is shown in Fig. 5.

Fig. 5. Unification of Non-Background Regions by Superposing Camera Images (FIR non-background regions completely included in the visible-ray non-background region (dotted line) are unified into a single region (solid line))

(2) Eliminating unnecessary background regions by detecting the road surface reflection region

As described above, in such undesirable ambient conditions as rain, road surface reflection appears in the lower part of the pedestrian images captured by the visible-ray and FIR cameras. The presence of the reflection, as well as its extent and brightness, depends largely on the distribution of rainwater film thickness on the road surface. We took a large number of FIR pedestrian images and analyzed them in detail, focusing on the features common to these images. The analysis showed that extracting the edge of a pedestrian's image, including the reflected region, effectively defines an inflection point of edge intensity at the boundary between the pedestrian and the reflected region (the toe) (Fig. 6). This finding has enabled accurate elimination of a reflected region from a candidate region. In more detail, the toe coordinates are first estimated from the upper coordinates of the extracted region and the pedestrian height expected at that distance from the camera. An edge-intensity inflection point is then sought in the vertical neighborhood of the estimated toe coordinates, and the region below the inflection point is eliminated.

Fig. 6. Road Surface Reflection Region Detection Method (edge image of a pedestrian with a reflecting region; the edge-intensity inflection point marks the boundary between pedestrian and reflection)
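The region unification in (1) and the reflection trimming in (2) can be sketched as follows, assuming axis-aligned bounding boxes in the form (x0, y0, x1, y1) and already co-registered visible-ray and FIR images. The Sobel-based vertical edge profile, the search margin, and the way the expected toe row is supplied are assumptions made for illustration; in the sensor itself the toe position is estimated from the pedestrian height expected at the measured distance.

    import cv2
    import numpy as np

    def unify_regions(visible_box, fir_boxes):
        """(1) Group the FIR non-background boxes that lie completely inside
        the single visible-ray non-background box into one candidate box."""
        x0, y0, x1, y1 = visible_box
        inside = [b for b in fir_boxes
                  if b[0] >= x0 and b[1] >= y0 and b[2] <= x1 and b[3] <= y1]
        if not inside:
            return visible_box
        xs0, ys0, xs1, ys1 = zip(*inside)
        return (min(xs0), min(ys0), max(xs1), max(ys1))

    def trim_reflection(gray, box, expected_toe_row, search_margin=15):
        """(2) Cut the candidate box at the edge-intensity inflection point
        (the toe) so that the reflection region below it is discarded.
        expected_toe_row is the row inside the box where the toe is expected."""
        x0, y0, x1, y1 = box
        roi = gray[y0:y1, x0:x1]
        # Vertical edge-intensity profile of the candidate region
        edges = cv2.Sobel(roi, cv2.CV_32F, 0, 1, ksize=3)
        profile = np.abs(edges).sum(axis=1)
        lo = max(1, expected_toe_row - search_margin)
        hi = min(len(profile) - 1, expected_toe_row + search_margin)
        if hi <= lo:
            return box
        # Strongest change of the profile near the expected toe = inflection point
        inflection = lo + int(np.argmax(np.abs(np.diff(profile[lo - 1:hi]))))
        return (x0, y0, x1, y0 + inflection)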
(3) Extracting the candidate region by time-series tracking

A pedestrian image at a single instant is insufficient for accurate extraction of a candidate region when the image contains two or more pedestrians overlapping one another. To ensure high-accuracy extraction of a candidate region even in such an undesirable condition, a candidate region search technique that uses past pedestrian detection results is introduced in addition to the candidate region extraction technique described above. Specifically, the newly introduced technique defines a search range that takes into account the moving speed of pedestrians (including bicycles), uses the image initially captured within the pedestrian detection frame as a reference image, and performs a similarity calculation. The search image whose maximum correlation value exceeds a predetermined level is then selected as the candidate region.
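A minimal sketch of this tracking step, assuming OpenCV normalized cross-correlation as the similarity measure; the per-frame displacement bound and the acceptance threshold are illustrative assumptions, not the values used in the sensor.

    import cv2

    def track_candidate(frame_gray, reference_patch, prev_box,
                        max_shift_px=12, min_correlation=0.6):
        """Search around the previous detection within a range set by the
        assumed pedestrian/bicycle speed, and accept the best match only if
        its correlation exceeds a predetermined level."""
        x0, y0, x1, y1 = prev_box
        h, w = frame_gray.shape[:2]
        # Search window grown by the maximum displacement per frame
        sx0, sy0 = max(0, x0 - max_shift_px), max(0, y0 - max_shift_px)
        sx1, sy1 = min(w, x1 + max_shift_px), min(h, y1 + max_shift_px)
        search = frame_gray[sy0:sy1, sx0:sx1]
        th, tw = reference_patch.shape[:2]
        if search.shape[0] < th or search.shape[1] < tw:
            return None
        # Normalized cross-correlation between reference image and search window
        result = cv2.matchTemplate(search, reference_patch, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val < min_correlation:
            return None  # no sufficiently similar region in this frame
        px, py = sx0 + max_loc[0], sy0 + max_loc[1]
        return (px, py, px + tw, py + th)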

3-3 Pedestrian determination

Whether an extracted candidate region is the detection target is judged by the following procedure. After the candidate region has been roughly screened using the pedestrian's size condition, a support vector machine (SVM)*3 is applied to the candidate regions in the visible-ray and FIR images to determine whether or not each region represents a pedestrian. The region is finally determined to be a pedestrian when either of the two regions is classified as a pedestrian. The histogram of oriented gradients (HOG) (2)*4 is used as the feature value for SVM determination. Since an extracted candidate region may be slightly displaced from the actual position of the pedestrian, additional candidate regions are generated around the extracted one to prevent detection errors. Furthermore, the size of the extracted candidate region must be normalized before SVM determination. Because the aspect ratio of the candidate region of a standing pedestrian differs from that of a sideways bicycle, the pedestrian region is separated from the sideways-bicycle region based on their geometrical features before the size normalization.
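The determination step can be sketched as follows with scikit-image HOG features and a linear SVM per camera; the normalized window size, the HOG parameters, and the use of two separately trained classifiers are assumptions made for illustration, since the paper specifies HOG features and an SVM but not these details.

    import numpy as np
    import cv2
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    WINDOW = (64, 128)  # assumed normalized (width, height) of a candidate region

    def hog_feature(region_gray):
        """HOG feature of a candidate region resized to the normalized window."""
        resized = cv2.resize(region_gray, WINDOW)
        return hog(resized, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    def train_pedestrian_svm(positive_regions, negative_regions):
        """Train a linear SVM on HOG features of pedestrian / non-pedestrian
        candidate regions (one classifier per camera in this sketch)."""
        X = np.array([hog_feature(r) for r in positive_regions + negative_regions])
        y = np.array([1] * len(positive_regions) + [0] * len(negative_regions))
        return LinearSVC().fit(X, y)

    def is_pedestrian(svm_visible, svm_fir, region_visible, region_fir):
        """The candidate is accepted when either the visible-ray region or
        the FIR region is classified as a pedestrian."""
        vis = svm_visible.predict([hog_feature(region_visible)])[0] == 1
        fir = svm_fir.predict([hog_feature(region_fir)])[0] == 1
        return vis or fir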
4. Pedestrian Detection Performance Evaluation

To evaluate the effectiveness of the newly developed hybrid image sensor and pedestrian detection algorithm, pedestrian detection testing was carried out on images captured around Japan throughout the year. The detection rates used as performance evaluation criteria are defined in Table 3, and the evaluation results are given in Table 4. Under every ambient condition, the false-negative rate and false-positive rate of the sensor were lower than 10%, within acceptable ranges.

Table 3. Definition of Detection Performance Criteria
- False-negative rate: number of non-detected pedestrians / actual number of pedestrians × 100
- False-positive rate: number of detection errors / total number of detected pedestrians × 100
(* Target: pedestrians, bicycles, wheelchairs)

Table 4. Detection Performance Evaluation Result
- False-negative rate: sunny/day 4.8%, fine/twilight 3.2%, clear/night 2.9%, rain/day 6.2%, rain/twilight 6.3%, rain/night 6.9%
- False-positive rate: sunny/day 8.0%, fine/twilight 3.5%, clear/night 7.1%, rain/day 5.6%, rain/twilight 8.7%, rain/night 3.9%

Most instances of false negatives were attributable to (a) pedestrians in a group or (b) a pedestrian overlapping another object (especially when the overlap continued from the moment they appeared in the image). Considering that pedestrians enter a crossing from relatively specific positions and move in relatively limited directions, the additional use of this information is expected to improve the false-negative rate. False positives were mostly attributed to capturing part of a motorcycle or vehicle; they may be reduced by making effective use of the engine temperature information acquired by the FIR camera.

5. Conclusion

A hybrid image sensor and its pedestrian detection algorithm have been developed to overcome the disadvantages of conventional pedestrian detection sensors. Pedestrian detection performance evaluation testing of the sensor has demonstrated acceptable false-negative and false-positive rates of less than 10%. Our future challenges are to further reduce the false-negative and false-positive rates and to improve the accuracy of pedestrian direction and speed measurements.

Technical Terms
*1 Far-infrared-ray (FIR) camera: A camera that detects the heat (electromagnetic waves with wavelengths of 8-12 µm) emitted from an object. It offers the advantage of measuring the temperature of an object or taking its picture even in the dark.
*2 Pedestrian detection algorithm: A technique for detecting the presence of pedestrians in a detection area by processing input sensor signals in software. Its performance is evaluated using two indicators: (a) the false-negative rate, the percentage of non-detected pedestrians, and (b) the false-positive rate, the rate at which other objects are mistaken for pedestrians.
*3 SVM (support vector machine): A pattern recognition technique that uses empirical rules learned from past data.
*4 HOG (histograms of oriented gradients): An image feature value used for object detection. After the gradient of an input image has been determined, the gradient is divided into local regions to calculate a histogram for each.

References
(1) H. Saito, T. Hagihara, K. Hatanaka and T. Sawai, "Development of Pedestrian Detection System Using Far-Infrared Ray Camera," SEI Technical Review, No. 72, pp. 112-117 (2011)
(2) N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection," IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 886-893 (2005)

Contributors (the lead author is indicated by an asterisk)
H. Saito* - Automotive Technology R&D Laboratories. Engaged in the research and development of charging systems for electric vehicles.
K. Hatanaka - Group Manager, Automotive Technology R&D Laboratories
T. Hayasaki - General Manager, Automotive Technology R&D Laboratories