Enhancing thermal video using a public database of images


H. Qadir, S. P. Kozaitis, E. A. Ali
Department of Electrical and Computer Engineering, Florida Institute of Technology
150 W. University Blvd., Melbourne, FL 32901

ABSTRACT

We present a system to display nighttime imagery with natural colors using a public database of images. We initially combined two spectral bands of imagery, thermal and visible, to enhance night vision imagery; however, the fused image had an unnatural color appearance. Therefore, a color transfer based on a look-up table (LUT) was used to replace the false-color appearance with a colormap derived from a daytime reference image, obtained from a public database using the GPS coordinates of the vehicle. Because of the computational demand of deriving a colormap from a reference image, we created an additional local database of colormaps. Reference images from the public database were compared to a compact local database to retrieve one of a limited number of colormaps representing several driving environments. Each colormap in the local database was stored with the image from which it was derived. To retrieve a colormap, we compared the histogram of the fused image with the histograms of images in the local database. The colormap of the best match was then used for the fused image. Continuously selecting and applying colormaps in this way offers a convenient way to color night vision imagery.

Keywords: color night vision, histogram matching, image database, image fusion, thermal video

1. INTRODUCTION

Night vision systems generally display images with different colors than one would see in daytime. Thermal infrared (IR) and low-light-level (LLL) visible cameras are the most popular nighttime imaging systems and are widely used for military, surveillance, reconnaissance, and security applications [1]. A thermal camera provides an output proportional to temperature and is useful for objects radiating thermal energy in a dark area, against a busy background, or when seeing through fog, while a LLL visible camera provides data about objects reflecting visible and near-infrared light in great detail [2,3]. A popular idea for achieving a better description of a scene at night is to combine thermal and visible images into a single fused image, the goal being that all perceptually important information present in the individual thermal and visible images is preserved in the fused result.

A common way to represent night vision imagery is with a gray- or green-scale representation. Using a full-color representation of night imagery could lead to advances such as greater safety in night vehicle navigation, and several methods have been proposed for giving night vision imagery a natural daytime color appearance. Most of these focus on multi-band night vision and image fusion [2-4], while other methods deal with single-band image colorization [5-7]. Welsh et al. [7] introduced a general technique to colorize a grayscale image by borrowing colors from a reference daytime image. They used the same concept as Reinhard et al. [8], but since the grayscale image was represented by a one-dimensional distribution, they matched only the luminance channels of the reference color image and the grayscale target image. Toet [5] showed that Reinhard et al.'s [8] color transfer method can be applied to transfer the natural color characteristics of a daytime color image to fused multiband night vision images.
Because color transfer in lαβ color space is often computationally expensive, it can be difficult to realize in real time. Therefore, other algorithms have been developed, based on the methods proposed by Reinhard and Toet, to colorize night images using different color spaces in real-time applications [1-3, 4, 9]. One problem with this statistical approach is that a large object in the reference image can dominate the color mapping. Another is that these approaches generally address only the global color characteristics of the depicted scene.
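For context, the core of this statistical approach can be stated in a few lines. The following is a minimal sketch, not code from any of the cited papers: it matches each channel's mean and standard deviation to those of the reference, which is the essence of Reinhard et al.'s transfer [8], applied here to a generic three-channel float array standing in for the lαβ representation.

import numpy as np

def stat_transfer(target, reference):
    """Reinhard-style transfer: match per-channel mean/std to the reference."""
    out = np.empty_like(target, dtype=np.float64)
    for c in range(target.shape[-1]):
        t, r = target[..., c], reference[..., c]
        # Normalize the target channel, then rescale to the reference statistics.
        out[..., c] = (t - t.mean()) / (t.std() + 1e-8) * r.std() + r.mean()
    return out

Because the transfer uses only global statistics, a single dominant object in the reference shifts every channel's mean and standard deviation, which is exactly the failure mode noted above.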

Hogervorst et al. [4] described an alternative, lookup-table-based method that alleviates the drawbacks of this statistical approach. They derived a color mapping from the combination of a nighttime false-color fused image and a corresponding daylight color image. Although the method proposed by Hogervorst et al. [4] can be deployed in real time, it cannot necessarily be used for night navigation because a daylight image is not available. In this paper, we extend this approach for use in night navigation by using daylight imagery from a public database of images. A public database such as Google Earth is a ready asset that is continuously updated from multiple sources and contains images of a large number of scenes from around the world. In addition, since the environment changes during navigation, a single colormap is not sufficient for colorization. Because of the computational demand of deriving a colormap from a reference image, we created an additional local database of colormaps. Reference images from the public database were compared to a more compact local database of images and colormaps to retrieve one of a limited number of colormaps, one that represented the current driving environment. Each colormap in the local database was stored with the image from which it was derived. To retrieve a colormap, we compared the histogram of the fused image with the histograms of images in the local database. The colormap of the best match was then applied to the fused image. This approach allowed us to obtain a colormap sufficient for the location of interest.

2. SYSTEM ARCHITECTURE

The system architecture of the color night navigation vision system developed in this paper is shown in Fig. 1. Other than the public database, the entire system can be implemented in a stand-alone fashion, with the central portion implemented on a computer. The system has four inputs and two outputs. Two of the inputs are from cameras, one is from a GPS sensor, and the other receives images from a public database. The two outputs consist of a connection to the public database that sends requests for images and the display of the final result of the night scene. Such an architecture can form the basis of other systems related to vehicle navigation with enhanced imagery.

The two cameras image the same scene with different sensors. In our case, one camera was a DRS Technologies Tamarisk IR thermal camera with 320 x 240 pixel resolution and a field-of-view (FOV) of 40 degrees. The other was an Everfocus EQ700 that amplifies light over a spectral range from 400 to 800 nm; it has an analog video output at a rate of 30 fps with a resolution of 640 x 480 pixels. These cameras provided responses in two distinct spectral regions.

We used a GPS sensor to gather the location data needed to identify specific images from the public database. For this purpose, we used a Garmin GPS 18x sensor with a USB connection to a computer. The device provides location data at a rate of 1 Hz and has an accuracy of better than 15 meters when stationary. Google Maps is a public database that provides 360-degree image data from points all over the world and can be accessed easily. By providing location and angle information, requests can be sent over a standard internet connection and a Google Street View image returned for a specific location and angle [10, 11], as sketched below.

The image preprocessing section plays an important role: it adjusts image values, aligns the images, and then combines them prior to colormap adjustment.
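As an illustration, the following minimal sketch forms such a request from a GPS fix. It assumes the current Street View Static API; the size, fov, and api_key parameters are illustrative assumptions, since the paper does not give its exact request format.

import urllib.parse
import urllib.request

def fetch_street_view(lat, lng, heading, api_key, size="640x480"):
    """Request one Street View image at a location and compass heading."""
    params = urllib.parse.urlencode({
        "size": size,                # image width x height in pixels
        "location": f"{lat},{lng}",  # GPS coordinates from the sensor
        "heading": heading,          # compass heading in degrees
        "fov": 40,                   # matched to the thermal camera's 40-degree FOV (assumption)
        "key": api_key,              # API key, required by the current API
    })
    url = "https://maps.googleapis.com/maps/api/streetview?" + params
    with urllib.request.urlopen(url) as resp:
        return resp.read()           # JPEG bytes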
This section takes the two camera streams as inputs and produces a single false-color image as an output. Images acquired from the cameras cannot be combined directly, for several reasons. For example, video acquired by the visible camera in low light is usually noisy, so a denoising algorithm was used to enhance the output of that camera; a video implementation of the BM3D filter was chosen as the noise reduction filter [12]. The BM3D algorithm has been found to work remarkably well compared to other denoising filters. It assumes white noise of constant strength, so only a single value of the noise standard deviation σ needs to be provided. Since the camera was generally used under the same conditions, we used a value of σ = 20 (for 8-bit integer image values) in all our work, and this seemed to work well. The output of the thermal camera was adjusted so that the warmest values were mapped to the darkest gray levels and the coldest to white. The visible and thermal cameras were mounted on top of each other, giving a fixed position difference between them. Even though the cameras differed in parameters such as position, resolution, and FOV, these differences did not change over time, so an affine transformation was adopted to register the images. The two registered images were then fed into a dual-band image fusion stage, which maps the thermal image into the R channel of an RGB image and the visible image into the G channel, and sets the B channel to zero.
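The following minimal sketch illustrates this preprocessing and fusion stage, assuming OpenCV, 8-bit grayscale inputs, and a precalibrated 2x3 affine matrix (possible here because the camera geometry is fixed). It is not the authors' implementation; in particular, OpenCV's fast non-local means denoiser stands in for the BM3D video filter used in the paper.

import cv2
import numpy as np

def preprocess_and_fuse(visible, thermal, affine):
    """visible, thermal: uint8 grayscale frames; affine: 2x3 registration matrix."""
    # Denoise the low-light visible frame (stand-in for BM3D with sigma = 20).
    visible = cv2.fastNlMeansDenoising(visible, None, 20.0)
    # Invert the thermal frame so the warmest pixels become the darkest.
    thermal = 255 - thermal
    # Warp the thermal frame onto the visible frame's geometry.
    h, w = visible.shape
    thermal = cv2.warpAffine(thermal, affine, (w, h))
    # Fuse: thermal -> R, visible -> G, B = 0, giving the false-color image.
    return np.dstack([thermal, visible, np.zeros((h, w), np.uint8)])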

The resulting false-color fused image was then fed to a color correction unit. The fused image from the preprocessing stage has unnatural colors that need to be replaced with more realistic ones. In the color correction unit, the false colormap of the fused image is swapped with an improved colormap for display of the final image. The color correction unit takes the GPS sensor data and the false-color image as inputs, as well as the output of the public image database. One of its outputs carries requests made to the public database, and the other is the final fused image with the improved colormap. Location data from the GPS sensor was put into the format of a request to Google Street View, and a single image was returned for each request. A new image is needed only when the scene changes appreciably, but in our case the request was set to update periodically.

Extracting a colormap from a public database image in real time is demanding. Therefore, an additional but much smaller database of images and their colormaps was created and saved locally. Ideally, this local database stores representative images of the various regional road scenes corresponding to the general location reported by the GPS; for example, sample images would be stored for typical scenes such as rural areas, urban areas, and highways. Note that the local database images are used only for colormap swapping and not for navigation. Along with each image in the local database, its colormap is also stored; the goal of the local database is to hold a reasonable set of scenes representing the range of colormaps for the general region where the vehicle is operating. Once an image from the public database was returned, it was compared to the local database of images. This comparison can be performed efficiently using a histogram-based method. For each public database image, a similar image in the local database was found, and its associated colormap was used to color the false-color fused image. Using this process, a night-vision system can produce color imagery in real time based on the set of stored colormaps.

3. COLOR CORRECTION

Although the intermediate fused image may be better than the originals in terms of features, it has an unnatural color appearance. Therefore, a method is required to develop and apply an appropriate colormap that makes the final result appear as a daytime image. We use an existing method for extracting and applying the colormap to the intermediate fused image; however, we developed a new method for choosing the colormap when it must be supplied in real time.

3.1 Colormap Generation

We used a color look-up table (LUT) to apply an appropriate colormap to the false-color image [4]. To derive a colormap with this method, the false-color image and its corresponding daytime reference image must be registered, since pixel values determine which false color maps to which color in the reference image. The steps for creating a colormap can be summarized as follows (a sketch follows the list):

1. Convert the false-color image to an indexed color image, where a single index is assigned to each pixel. Each index value represents an RGB value in a color LUT; in our case, the LUT contains only combinations of R and G values.
2. Derive the natural-color equivalent of each color index by locating the pixels in the indexed false-color image with that index, and find the corresponding pixels in a registered daytime image of the same scene.

3. Calculate the average of this group of pixels in lαβ color space; that is, the RGB values are transformed to decorrelated lαβ values before averaging. This ensures that the computed average color reflects the perceptual average color [4].

4. Convert back to RGB space, and assign the resulting values to the new color LUT entries for the indexed false-color image.

5. Replace the luminance of the corrected image with a grayscale version of the false-color image.
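The sketch below illustrates steps 1-4 for a pair of registered uint8 RGB images, under two stated assumptions: the (R, G) pair of each false-color pixel is quantized to a fixed number of levels to form the index, and OpenCV's CIE Lab conversion stands in for the lαβ space used in the paper (both are decorrelated color spaces, but they are not identical).

import cv2
import numpy as np

def derive_colormap(false_color, daytime, levels=32):
    """Return (index image, LUT) mapping quantized (R, G) pairs to natural colors."""
    # Step 1: index each false-color pixel by its quantized (R, G) pair.
    step = 256 // levels
    r = false_color[..., 0] // step
    g = false_color[..., 1] // step
    index = (r * levels + g).ravel()
    # Steps 2-3: average the corresponding daytime pixels in a decorrelated space.
    lab = cv2.cvtColor(daytime, cv2.COLOR_RGB2LAB).reshape(-1, 3).astype(np.float64)
    lut = np.zeros((levels * levels, 3), np.float64)
    for i in np.unique(index):
        lut[i] = lab[index == i].mean(axis=0)  # perceptual average for this index
    # Step 4: convert the averaged colors back to RGB (unused indices stay dark).
    lut = cv2.cvtColor(lut.astype(np.uint8)[None, :, :], cv2.COLOR_LAB2RGB)[0]
    return index.reshape(false_color.shape[:2]), lut

Applying the colormap is then a single lookup, colored = lut[index]; step 5 replaces the luminance of the result with a grayscale version of the false-color image.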

After this process, the new LUT is applied to the false-color image to generate the final result. For example, Figs. 2(a) and 2(b) show the IR and visible images of a scene from our sensors, respectively. Fig. 2(c) shows the result of fusing the images as described earlier, and Fig. 2(d) shows a reference image taken from the same viewpoint in daylight. The false-color colormap of Fig. 2(c) and the newly derived colormap are both shown in Fig. 2(e). The final result is shown in Fig. 2(f).

Figure 1. System architecture: the thermal and visible cameras feed a preprocessing stage with registration and fusion; the color correction unit, connected to the public database and a local database of reference images and colormaps, displays night images with a natural color appearance.

Figure 2. Images of the same scene used in the example: (a) preprocessed IR image, (b) preprocessed visible image, (c) fused false-color image, (d) reference image, (e) false-color (left) and derived (right) colormaps, (f) final result.

3.2 Database of colormaps

The reference image from which a colormap is derived should contain the same set of colors desired in the final result, so different reference images are needed as the scene changes. For example, a colormap derived from an urban area may not work properly when applied to a rural scene. Fortunately, a public database such as Google Street View is

available to provide reference images. Since it is difficult to produce colormaps in real time, we stored a number of precalculated colormaps in a local database; only a relatively small number of colormaps had to be stored because many scenes in a region contain similar colors. This approach allowed the colormap for the final image to be applied in real time. We selected colormaps by comparing a Google Street View image with the images in our local database; when a match was found, the associated colormap was used to color the final image. To demonstrate our approach, we chose 23 street scenes from around Melbourne, FL and stored those images and their colormaps to create the database. The images are shown in Fig. 3.

Figure 3. Images in the local database from which the colormaps were derived.

3.3 Colormap retrieval

The retrieval algorithm used in the proposed system must be fully automatic, without any human interaction. The color correction unit provides an automated process to retrieve a colormap based on the content of an image from the

public database. During navigation, the GPS sensor continuously reports the vehicle's location coordinates to the system, which initiates the process. An HTTP request is generated to obtain an image of the location from Google Street View. The next step is to extract the histogram of the returned image and compare it with the stored histograms of the reference images in the local database. Images in the local database are ranked by their similarity to the returned image, and the colormap of the closest match is used to correct the false colors in the false-color image (a sketch appears at the end of this subsection).

We show some example results of the image retrieval process in Fig. 4. Figures 4(a), (c), and (e) show query images from Google Street View, and Figs. 4(b), (d), and (f) show the corresponding retrieved images. Note that the retrieved images are also from Google Street View but do not correspond to the coordinates of the query images. From the results, most of the colors present in each query image can also be found in the corresponding retrieved image. Thus, it is reasonable to use the colormaps derived from the retrieved images to correct the false colors in intermediate fused images captured at the query locations.

Figure 4. Examples of retrieved images: (a), (c), (e) query images from Google Street View; (b), (d), (f) corresponding images retrieved from the local database.
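A minimal sketch of this comparison is given below, assuming uint8 RGB images and a local database stored as (histogram, colormap) pairs. The paper does not name the similarity measure it used; histogram intersection is one reasonable choice.

import cv2
import numpy as np

def rgb_histogram(img, bins=8):
    """Joint RGB histogram, normalized to sum to 1."""
    hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3, [0, 256] * 3)
    return hist.ravel() / hist.sum()

def retrieve_colormap(query_img, database):
    """database: list of (histogram, colormap) pairs; return best match's colormap."""
    q = rgb_histogram(query_img)
    # Histogram intersection: higher means more shared color mass.
    scores = [np.minimum(q, h).sum() for h, _ in database]
    return database[int(np.argmax(scores))][1]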

4. EXPERIMENTAL RESULTS

The image shown in Fig. 5(a) was obtained from Google Street View by sending the coordinate data (longitude, latitude, and compass heading) of the vehicle location from the GPS sensor. After comparing its histogram with the stored histograms of the images in the local database, the image shown in Fig. 5(b) was retrieved as the best match. The false-color multiband fused image is shown in Fig. 5(c), and the final image is shown in Fig. 5(d); its color appearance is close to natural. Figs. 6 and 7 show two more examples. Again, the retrieval system successfully found the best matches in both cases. In the resulting images shown in Figs. 6(d) and 7(d), some objects appear with the wrong colors, but that should not affect a driver's performance because the road and the trees can still be clearly recognized.

Figure 5. Example results: (a) query image, (b) retrieved image from database, (c) false-color image at the query location, (d) retrieved colormap applied to the false-color image.

Figure 6. Example results: (a) query image, (b) retrieved image from database, (c) false-color image at the query location, (d) retrieved colormap applied to the false-color image.

Figure 7. Example results: (a) query image, (b) retrieved image from database, (c) false-color image at the query location, (d) retrieved colormap applied to the false-color image.

5. CONCLUSION

We presented a new approach to coloring night vision imagery. A color transfer method based on look-up tables was used to replace the false colors in multiband fused images with natural colors derived from a daytime reference image. Since a single colormap is not sufficient for navigation, because the environment changes, colormaps for different environments were derived in advance. Each colormap was stored in a local database along with its reference image. To retrieve a colormap from this database, GPS coordinates of the vehicle location were used to retrieve an image from Google Street View. That image was compared to our local database, and the colormap of the best-matching image was used to color the false-color image. The resulting colorized nighttime images improve situational awareness because they closely resemble daytime reference images, so they could be used in real time to aid nighttime vehicle navigation. The derivation of a colormap may require some time, but once a colormap is derived, it can be employed in a real-time implementation because the swapping process requires a minimal amount of processing time.

REFERENCES

[1] Zhang, J., Han, Y., Chang, B., Yuan, Y., Qian, Y. and Qiu, Y., "Real-time color image fusion for infrared and low-light-level cameras," Proc. SPIE 7383, 73833B (2009).
[2] Gu, X., Sun, S. Y. and Fang, J., "Real-time color night-vision for visible and thermal images," IEEE IITAW'08, 612-615 (2008).
[3] Liu, G. and Huang, G., "Color fusion based on EM algorithm for IR and visible image," IEEE ICCA, vol. 2, 253-258 (2010).
[4] Hogervorst, M. A. and Toet, A., "Method for applying daytime colors to nighttime imagery in realtime," Proc. SPIE 6974, 69740 (2008).
[5] Toet, A., "Natural colour mapping for multiband nightvision imagery," Information Fusion, 4(3), 155-166 (2003).
[6] Toet, A., "Colorizing single band intensified nightvision images," Displays, 26(1), 15-21 (2005).
[7] Welsh, T., Ashikhmin, M. and Mueller, K., "Transferring color to greyscale images," ACM Transactions on Graphics, 21(3), 277-280 (2002).
[8] Reinhard, E., Ashikhmin, M., Gooch, B. and Shirley, P., "Color transfer between images," IEEE Computer Graphics and Applications, 21(5), 34-41 (2001).
[9] Wang, L., Zhao, Y., Jin, W., Shi, S. and Wang, S., "Real-time color transfer system for low-light level visible and infrared images in YUV color space," Proc. SPIE Defense and Security Symposium, 65671G (2007).
[10] Salmen, J., Houben, S. and Schlipsing, M., "Google Street View images support the development of vision-based driver assistance systems," 2012 IEEE Intelligent Vehicles Symposium, Alcala de Henares, Spain, 891-895 (2012).
[11] https://developers.google.com/maps/
[12] Dabov, K., Foi, A., Katkovnik, V. and Egiazarian, K., "Image denoising by sparse 3D transform-domain collaborative filtering," IEEE Trans. on Image Processing, 16(8), 2080-2095 (2007).