Improving Image Quality by Camera Signal Adaptation to Lighting Conditions


Mihai Negru and Sergiu Nedevschi
Technical University of Cluj-Napoca, Computer Science Department
Mihai.Negru@cs.utcluj.ro, Sergiu.Nedevschi@cs.utcluj.ro

Abstract

Image quality is of extreme importance in real-time vision: without high-quality images, processing of the outside world can introduce severe errors. Image quality depends on many environmental factors, such as weather and lighting conditions (sun, rain, snow, fog, mist), entering and exiting tunnels, shadows, and car headlights. This paper presents a new approach for adapting the camera response to the environment's lighting conditions. For this we model a digital camera's response as a function of its parameters. The resulting mathematical model is essential for adapting the cameras to the environment's lighting conditions. The most widely used camera parameters are the exposure time and the amplification gain. By adjusting these parameters, the image acquisition system becomes less dependent on the environment's lighting conditions and can provide better-quality images for further processing.

Keywords: camera model, radiometric calibration, camera adaptation, lighting conditions, stereo vision

I. INTRODUCTION

In real-world applications, where the observed environment changes rapidly, it is very hard to obtain high-quality images. In order to improve the quality of the acquired images we must modify some of the camera parameters (amplification gain, exposure time, focus). In real-time systems, the focal distance and the camera aperture are chosen in accordance with the application domain and are kept constant. Since the environment is not static, we need a method of adapting the camera signal to the outside lighting conditions, i.e. we must model the response of the digital camera.
This response must be a function of the camera parameters and can be modeled by a mathematical function that relates the observed intensity (I) of a pixel to the intensity of light (q) falling on the corresponding sensor, the current exposure time (e) and the current gain (g) settings. Thus, a camera model is a function of the type presented in equation (1). By adjusting these parameters in real time, the acquisition system becomes less dependent on the lighting conditions.

I = f(q, e, g) (1)

In [1] we modeled two mathematical camera response functions: the linear camera model and the square gain camera model. We also proved that the square gain camera model is more accurate than the linear one and that it can be used to estimate the camera response with minimal errors. In order to adapt the cameras to the outside lighting conditions we have developed an algorithm that tries to mimic the human visual system. The human eye can adapt to light intensities over a wide dynamic range, letting people perceive information in almost any lighting conditions. Our algorithm is designed to use only one metric, namely image brightness, i.e. the average intensity of an image.

A. Related Work

The output of an imaging system is a brightness (or intensity) image. An image acquired by a camera consists of measured intensity values which are related to scene radiance by a function called the camera response function. Knowledge of this response is necessary for computer vision algorithms which depend on scene radiance. Brightness values of pixels in an image are related to image irradiance by a non-linear function, called the radiometric response function [2]. Several investigators have described methods for recovering the radiometric response from multiple exposures of the same scene. The radiometric response function is derived from the correspondence of gray levels between images of the same static scene taken at different exposures.
In [3], Mitsunaga and Nayar approximated the function by a low-degree polynomial. They were then able to obtain the coefficients of the polynomial and the exposure ratios, from rough estimates of those ratios, by alternating between recovering the response and the exposure ratios. At a single point in the image, an intensity value is related to the scene radiance by a nonlinear function. This camera response function is assumed to be the same for each point in the image; moreover, it is monotonically increasing [4]. Nonparametric estimation of the function is performed in [5], where the estimation process is constrained by assuming the smoothness of the response function. Grossberg and Nayar [6] estimated the parameters by projecting the response function onto a low-dimensional space of response functions obtained from a database of real-world response functions. Using this model they estimate the camera response from images of an arbitrary scene taken at different exposures. In [7], a method is presented to estimate the radiometric response functions (of the R, G and B channels) of a color camera directly from images of an arbitrary scene taken under different illumination conditions. The illumination conditions are not assumed to be known. The response function of a channel is modeled as a gamma curve and is recovered by a constrained nonlinear minimization approach, exploiting the fact that the material properties of the scene remain constant in all the images. A unified framework for dealing with non-ideal radiometric aspects, such as spatial non-uniformity, variations due to automatic gain control (AGC), and the non-linear response of the sensor, is presented in [8].

[9] and [10] present methods to obtain high dynamic range images from images of the same scene captured at different exposures. They reconstruct the original image from several images taken with different exposure values and then interpolate them to obtain a higher-quality image. In [11], an algorithm for image quality improvement by adaptive exposure correction techniques is presented. Their approach is to analyze CCD/CMOS Bayer data, identify relevant image features and adjust the exposure level according to a camera-response-like function. In consumer digital cameras, some of the primary tasks in the image capture data path include automatic focus, automatic exposure (AE) determination and auto white balance (AWB). The main objective of an automatic exposure system is to compute the correct exposure necessary to record an acceptable, good-quality image in the camera [12]. The algorithm that performs this analysis is a generalized color balancing, or illumination adaptation. This illumination adaptation is done gradually, after several frames have been acquired. Advanced driving assistance systems based on cameras are available on a wide category of vehicles. These systems follow different strategies for handling different lighting conditions. Some avoid camera control altogether and use a higher bit depth, while others use a round-robin process with fixed camera control settings, requiring several frames to adapt the cameras to the environment's lighting conditions. Such a long adaptation process is not suitable for real traffic situations: entering or leaving tunnels, going under bridges, shadows from buildings or surrounding vehicles in urban scenarios, and so on. A better approach is therefore to obtain a high-quality image as fast as possible by acquiring only one frame and performing the adaptation to lighting conditions, such that the next frame will be ready for further processing.

978-1-4577-1481-8/11/$26.00 © 2011 IEEE
The quality of an image captured with a digital camera is determined by many factors: illumination conditions, the lens and the camera parameters. The aim of this work is to build a robust algorithm for adapting the cameras of a stereo system to the outside lighting conditions. This algorithm should follow the behavior of the human visual system, which is able to adapt to lighting conditions over a very large range. This large variation is accomplished by changes in the overall sensitivity, a phenomenon called brightness adaptation. In any situation the visual system is adapted to a certain light intensity, called the brightness-adaptation level; it is most sensitive to intensities around that level and totally insensitive to intensities at some distance below it, which are all perceived as black. Sensing much higher intensities shifts the adaptation level to a higher point. Hence, the visual system is most sensitive to luminance changes around the adaptation level.

We have investigated two types of digital cameras, the JAI A10 CL and the JAI M4 CL. Both cameras can use automatic exposure and automatic gain control to manage the quality of the acquired images. The problem is that these two functions require several frames until the quality of the image is suitable for image processing. The presented work is in the field of real-time stereo vision, so we need a fast method to adapt the camera signal to the outside lighting conditions. Our approach is to model the camera response as a function of the two most important camera parameters: the amplification gain and the exposure time. Such a mathematical model can be described as a low-degree polynomial. In order to compute the parameters of this radiometric model we perform a process called radiometric calibration. Thus, radiometric calibration becomes the process of computing the values of the parameters of this polynomial.
During calibration the illumination conditions are not known, but they are assumed to be constant. Our purpose is to build an accurate mathematical model for estimating the camera response and to use this model to perform real-time adaptation of the camera signal according to the environment's lighting conditions. This real-time algorithm computes the exposure and gain values needed for acquiring the next frame, such that the quality of the next acquired image is optimal for further processing. The algorithm is based on one image metric, the brightness of the image. Thus we need to acquire only one image with the digital camera in order to perform the adaptation to the environment's lighting conditions, and the next frame acquired will have an optimal quality for processing.

B. Paper Structure

Section II presents the square gain camera model and how it can be used for adapting the cameras to the outside lighting conditions. Section III describes the adaptation to lighting conditions algorithm. The experimental results are shown in Section IV. Finally, Section V presents the conclusions of the paper and outlines some future work proposals.

II. SQUARE GAIN CAMERA MODEL

As described in [1], the square gain camera model is a second-degree polynomial that relates the observed intensity (I) to the intensity of light (q), the current exposure time (e) and the current gain (g). The formula of the square gain model is the following:

I(q, e, g) = e·((a·q + m)·g^2 + q·g + b·q + n) + c (2)

The coefficients of the camera model, a, b, c, m and n, are computed offline through a radiometric calibration procedure. During calibration the illumination conditions are not known, but they are assumed to be constant. This process is carried out by placing the camera in a static environment (constant light intensity and static scene) and taking a set of frames with different exposure and gain values.
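Given such a set of calibration frames, the coefficients can be recovered by linear least squares, because I - e·g·q is linear in (a, m, b, n, c). The sketch below is illustrative only: it assumes the light intensity q of each calibration patch is known, and it synthesizes the frames from hypothetical coefficients (not the values calibrated in the paper).

```python
import numpy as np

# Hypothetical coefficients, used only to synthesize calibration data.
TRUE = dict(a=0.02, b=30.0, c=12.0, m=0.002, n=1.5)

def model(q, e, g, a, b, c, m, n):
    # Square gain camera model, equation (2):
    # I(q, e, g) = e*((a*q + m)*g^2 + q*g + b*q + n) + c
    return e * ((a * q + m) * g ** 2 + q * g + b * q + n) + c

def fit_square_gain(I, q, e, g):
    # Rearranged, the model is linear in the coefficients:
    #   I - e*g*q = a*(e*g^2*q) + m*(e*g^2) + b*(e*q) + n*e + c
    # Assumes q is known per sample (a simplification for illustration;
    # the paper treats q as unknown but constant over time).
    A = np.column_stack([e * g ** 2 * q, e * g ** 2, e * q, e, np.ones_like(e)])
    coeffs, *_ = np.linalg.lstsq(A, I - e * g * q, rcond=None)
    a, m, b, n, c = coeffs
    return dict(a=a, b=b, c=c, m=m, n=n)

# Synthetic calibration sweep: varying exposure/gain over a static scene whose
# patches have different (but temporally constant) light intensities.
rng = np.random.default_rng(0)
e = rng.uniform(0.1, 1.0, 300)           # exposure settings
g = rng.uniform(0.0, 1.0, 300)           # gain settings
q = rng.uniform(0.2, 1.0, 300)           # per-patch light intensities
I = model(q, e, g, **TRUE) + rng.normal(0.0, 0.001, 300)  # observed intensities

est = fit_square_gain(I, q, e, g)
```

On this synthetic sweep the fit recovers the generating coefficients almost exactly; with real frames the residual reflects sensor noise and model error.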
Then, the camera model is fitted to the set of observed values using a least-squares method. Having this mathematical equation, we can estimate the new intensity of an image when an exposure and/or gain setting is performed. First, we rewrite the above equation, factoring out the unknown light intensity (q). We obtain the following:

I = e·(q·((a·g + 1)·g + b) + m·g^2 + n) + c (3)

We introduce two new functions, LUT1(g) and LUT2(g):

LUT1(g) = (a·g + 1)·g + b (4)
LUT2(g) = m·g^2 + n

By replacing these functions in equation (3) we obtain:

I = e·LUT1(g)·q + e·LUT2(g) + c (5)

If we know the current exposure and gain values (e1, g1), the current image brightness (I1) and the next exposure and gain values (e2, g2), we can estimate what the next image brightness (I2) will be, taking into account that the light intensity (q) is constant. To accomplish this, we construct the following system of equations:

I1 = e1·LUT1(g1)·q + e1·LUT2(g1) + c (6)
I2 = e2·LUT1(g2)·q + e2·LUT2(g2) + c

Now we extract q from the first equation and replace it in the second:

q = (I1 - (e1·LUT2(g1) + c)) / (e1·LUT1(g1)) (7)

I2 = (e2·LUT1(g2) / (e1·LUT1(g1)))·I1 - (e2·LUT1(g2)·(e1·LUT2(g1) + c)) / (e1·LUT1(g1)) + e2·LUT2(g2) + c (8)

We have obtained a formula that can estimate, with minimal error, the brightness of the next acquired image, provided that the light intensity (q) does not change. Using this formula we were able to design a robust algorithm for adapting the cameras to the outside lighting conditions.

III. CAMERA ADAPTATION ALGORITHM

The algorithm for camera adaptation to lighting conditions uses only one metric, namely image brightness. The image brightness is computed as the average intensity level of the pixels belonging to the region of interest used in the adaptation process. The algorithm must take some constraints into account:
- reduce oscillations as much as possible;
- if possible, exposure settings should be used more often than gain settings, since high gain values introduce noise in the images.

Our current frame grabber is configured to use the pulse-width trigger mode. In this mode the trigger to acquire the next image is equal (in time) to the value of the exposure parameter, thus exposure settings are very fast.
On the other hand, gain settings require several messages to be sent through the serial communication channel of the Camera Link interface, so they take longer. With these constraints in mind, and knowing that the cost of setting the exposure is lower than the cost of setting the gain, the algorithm tries to adapt the cameras using only the exposure. When this is no longer possible, or the exposure has exceeded an admissible range, the gain is adjusted so as to obtain the desired intensity and, if possible, the exposure is brought back to the middle of the admissible range. By using this strategy, a hysteresis is obtained, such that there is no operating point at which slight changes in the luminosity of the scene cause repeated gain settings. For simplicity, we represent the exposure and gain values as percentages. Using this approach, and taking into account the square gain camera model, we must find suitable values for the gain and exposure such that the desired brightness is reached. The block diagram of the adaptation to lighting conditions algorithm is presented in figure 1.

The adaptation to lighting conditions algorithm starts by computing the image histogram and the brightness of the input image, called the current brightness (CB). Because the algorithm uses only one metric, the image brightness, we have to know what the desired brightness (DB) of the image should be. If the current brightness is lower than the desired brightness, we must increase one or both of the camera parameter values in order to reach the desired brightness; if the current brightness is higher than the desired brightness, we must decrease the values of the camera parameters; and if the current brightness is equal to the desired brightness, we do not need to change the acquisition parameters, so the algorithm stops.
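These decisions rely on the brightness prediction of equation (8). A minimal sketch, with hypothetical coefficient values for illustration:

```python
def lut1(g, a, b):
    # LUT1(g) = (a*g + 1)*g + b, equation (4)
    return (a * g + 1.0) * g + b

def lut2(g, m, n):
    # LUT2(g) = m*g^2 + n, equation (4)
    return m * g ** 2 + n

def predict_brightness(I1, e1, g1, e2, g2, a, b, c, m, n):
    # Equation (8): recover the (assumed constant) light intensity q from the
    # current frame via equation (7), then evaluate equation (5) at (e2, g2).
    q = (I1 - (e1 * lut2(g1, m, n) + c)) / (e1 * lut1(g1, a, b))
    return e2 * lut1(g2, a, b) * q + e2 * lut2(g2, m, n) + c

# Hypothetical coefficients and settings, for illustration only.
coef = dict(a=0.02, b=30.0, c=12.0, m=0.002, n=1.5)
# Brightness of a frame taken at e1=0.4, g1=0.3 with light intensity q=0.5,
# built from equation (5) so the round trip can be checked.
I1 = 0.4 * lut1(0.3, coef["a"], coef["b"]) * 0.5 \
     + 0.4 * lut2(0.3, coef["m"], coef["n"]) + coef["c"]
# Predicted brightness after doubling the exposure at the same gain.
I2 = predict_brightness(I1, 0.4, 0.3, 0.8, 0.3, **coef)
```

Because q is eliminated algebraically, the prediction matches the forward model exactly whenever the scene illumination really is constant between the two frames.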
The algorithm uses two other assumptions: high gain values introduce a lot of noise in the image, making further processing quite hard and introducing many errors (especially stereo reconstruction errors), while high exposure values increase the acquisition time of the next image. In the first case, the algorithm tries to use only the exposure to reach the desired brightness, increasing the exposure up to a maximum exposure percentage. If the exposure reaches the maximum exposure percentage, then the algorithm will try to increase the gain until it reaches the maximum gain percentage. The algorithm stops whenever the current brightness reaches the desired brightness. If the maximum exposure and gain percentages have been reached but the current brightness is still lower than the desired brightness, the algorithm will further increase these values until they reach 100%. In the second case, when the current brightness is higher than the desired brightness, the algorithm will first try to decrease the gain until it reaches a minimum gain value. If the desired brightness has not been met, the algorithm will try to decrease the exposure time until it reaches the minimum exposure value. As in the first case, if the desired brightness is still not reached, the gain and exposure values will be decreased until they reach 0%.
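The exposure-first policy above can be sketched as a single decision step, in percent units. This is a simplified illustration of the decision order described in the text, not the authors' exact implementation; the limit values are the ones used in the experiments (80% exposure, 50% gain).

```python
def adapt_step(current_b, desired_b, exp, gain,
               exp_max=80.0, gain_max=50.0, step=1.0):
    # One coarse adaptation step. Exposure is cheap to change, while gain
    # adds noise, so gain is only touched once exposure is exhausted.
    if current_b < desired_b:                    # too dark: raise exposure first
        if exp < exp_max:
            exp = min(exp + step, exp_max)
        elif gain < gain_max:                    # then raise gain
            gain = min(gain + step, gain_max)
        elif exp < 100.0:                        # soft limits exhausted: push on
            exp = min(exp + step, 100.0)
        else:
            gain = min(gain + step, 100.0)
    elif current_b > desired_b:                  # too bright: drop gain first
        if gain > 0.0:
            gain = max(gain - step, 0.0)
        else:                                    # then drop exposure
            exp = max(exp - step, 0.0)
    return exp, gain
```

Applying the step repeatedly walks the exposure toward its ceiling before the gain is raised, mirroring the two cases described above.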

If both the gain and the exposure time have reached their limits (100% or 0%) and the desired brightness was not reached, the algorithm stops, preserving these values. This means that the algorithm was not able to find an (exposure, gain) pair such that the acquired image is suitable for further processing, due to factors that are not related to the camera system: a very dark night with no street lighting (100%), or the sun shining directly into the cameras (0%). In order to reduce the oscillations that can appear when the algorithm runs in real time, we introduced an admissible brightness range around the desired brightness, such that when the current brightness lies in this interval no exposure and/or gain settings are performed.

Fig. 1. The block diagram of the adaptation algorithm.

There are two approaches for finding a new exposure and/or gain value. In the first, when an exposure increase is necessary to reach the desired brightness, the algorithm tries to find a suitable value for the exposure using 1% increments. When the desired brightness is above the predicted brightness interval, the algorithm rewinds one step and, using a finer granularity (0.1%), tries to find the point for which the desired brightness is closest to the middle of the predicted interval.

The second approach is to treat this as a search problem. The idea is to search along a line segment of constant gain, with limits [E1, E2], for the exposure value such that the desired brightness is closest to the middle of the predicted intensity range. For this we compute the distance (D1) between the brightness we would obtain with exposure E1 and the desired brightness, the distance (D2) between the brightness we would obtain with exposure E2 and the desired brightness, and the distance (DM) between the brightness we would obtain with exposure EM and the desired brightness, where EM = (E1 + E2)/2 is the middle of the exposure interval. Figure 2 presents this search methodology.

Fig. 2. Methodology for searching the desired brightness.

After computing these distances, the search for a new exposure value is carried out recursively on the correct exposure interval until the desired brightness is reached: if (D1 < 0 and DM > 0) or (D1 > 0 and DM < 0), the interval [E1, EM] is chosen; otherwise [EM, E2] is chosen. The exposure decrease, gain increase and gain decrease are computed using the same methodology. With this approach, the necessary exposure and/or gain values are found much faster, decreasing the acquisition time between two frames.

IV. EXPERIMENTAL RESULTS

In order to be able to adapt the cameras to the outside lighting conditions, we must first obtain the mathematical model of the cameras. The process of estimating the camera response functions is called radiometric calibration. This is achieved by placing the camera in a static environment (constant light intensity and static scene) and taking a set of frames with different exposure and gain values. Then, the camera model is fitted to the set of observed values so as to minimize the error between the camera response and the intensity predicted by the mathematical model. Radiometric calibration is performed only once, when the system is set up; the camera model is then loaded every time the system is started. We have evaluated the square gain camera model on two types of digital stereo systems: one composed of two JAI A10 CL cameras and the other of two JAI M4 CL cameras. Table I presents the radiometric calibration results for each of the two digital stereo systems. The left column of the table lists the parameters of the square gain camera model: a, b, c, m and n. The values presented in the table are the results obtained for each parameter, after radiometric calibration, for each of the four digital sensors: the left and right cameras of the first digital stereo system and the left and right cameras of the second digital stereo system. The square gain camera model's average estimation errors, over the entire image, are below 5 intensity levels (4.59) [1], which means that the estimation errors are below 2%, so we can use this model for adapting the cameras to the outside environment's lighting conditions.

TABLE I. RADIOMETRIC CALIBRATION RESULTS

         JAI A10 CL            JAI M4 CL
         Left      Right       Left      Right
  a      0.024     0.084       0.010     0.012
  b      27.773    27.132      35.423    35.381
  c      13.305    13.493      10.921    10.724
  m      0.002     0.002       0.001     0.001
  n      1.693     1.947       0.463     0.458

In order to increase the effectiveness of our algorithm, and because the stereo system is mounted inside the vehicle, close to the windshield and facing the road, we do not compute the average brightness over the whole image, but rather over a subset of the image, an adaptation region of interest. This region of interest starts from the front of the vehicle, above the vehicle's hood, covers the whole field of view of the cameras and does not include the sky, thus avoiding large saturated areas in the image. In this manner we focus on the area of the image that contains relevant information and that will be further processed. With our current digital cameras it is not possible to set a region of interest when using automatic exposure and gain control. The resulting images thus take into consideration the sky (which on sunny days can be very bright, producing saturated areas in the image) and the vehicle's front hood (very dark, under-exposed areas in the image), so the resulting images have poor brightness over the region of interest. Figure 3 presents the results of the camera adaptation to lighting conditions algorithm. The image represents a real traffic scenario.
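The recursive interval search described in Section III can be sketched as follows. Here predict(e, g) stands in for the model-based brightness prediction of equation (8), and brightness is assumed to be monotonic in exposure; the toy linear response in the usage example is purely illustrative.

```python
def search_exposure(e1, e2, gain, desired_b, predict, tol=0.5, max_iter=40):
    # Bisect the exposure interval [e1, e2] at constant gain until the
    # predicted brightness is within tol of the desired brightness.
    for _ in range(max_iter):
        em = 0.5 * (e1 + e2)
        d1 = predict(e1, gain) - desired_b     # D1
        dm = predict(em, gain) - desired_b     # DM
        if abs(dm) <= tol:
            break
        # Keep the half-interval whose endpoints bracket the target
        # (D1 and DM have opposite signs), as in Section III.
        if (d1 < 0 < dm) or (dm < 0 < d1):
            e2 = em
        else:
            e1 = em
    return em

# Toy response: brightness = 2 * exposure. A desired brightness of 100 is
# then reached near exposure 50%.
e_star = search_exposure(0.0, 100.0, 10.0, 100.0, lambda e, g: 2.0 * e)
```

Each iteration halves the interval, so the search reaches the tolerance in a logarithmic number of model evaluations rather than the linear number required by 1% stepping.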
We first captured an under-exposed image, having an average brightness value, over the region of interest, of only 31. After performing the adaptation to lighting conditions and changing the camera parameters, we obtained the second image. It can easily be seen that the quality of the second image is much higher than that of the first; its average brightness value is 101. Figure 4 presents the results of applying the adaptation to lighting conditions to a saturated image. The image represents the same traffic scenario, only this time the average brightness was 218. After performing the adaptation to lighting conditions, the second image was captured. Its brightness value over the region of interest is 102.

Fig. 3. Camera auto-adaptation to lighting conditions results: (a) under-exposed scene, brightness = 31; (b) same scene after adaptation, brightness = 101.

Fig. 4. Camera auto-adaptation to lighting conditions results: (a) saturated scene, brightness = 218; (b) same scene after adaptation, brightness = 102.

In order to reduce oscillations that may appear due to repeated exposure and/or gain settings, we use an admissible brightness interval around the desired brightness. Our adaptation to lighting conditions algorithm performs quite well; it proved to have no oscillations from repeated exposure and/or gain settings. The parameters used for adapting the cameras to the environment's conditions are presented in table II. These parameters were chosen after several experiments performed in real traffic scenes. The minimum and maximum brightness define the admissible brightness interval in which we consider that there is no need to change the camera parameters. The maximum exposure considered was 80%, in order to avoid motion blur, while the maximum gain considered was 50%, in order to avoid noise in the images.

TABLE II. CAMERA ADAPTATION PARAMETERS

  Minimum Brightness    80
  Desired Brightness    100
  Maximum Brightness    120
  Minimum Exposure      0%
  Maximum Exposure      80%
  Minimum Gain          0%
  Maximum Gain          50%

Our cameras are able to provide 10-bit images, so an improvement to the accuracy of the algorithm would be to exploit the whole dynamic range provided by the digital cameras. This is currently not possible, since our frame grabber provides only 8-bit images at the output. However, the frame grabber has a look-up table setting that relates a 10-bit pixel in the acquired image to an 8-bit pixel that can be processed in software. By using this functionality we can take advantage of a subset of the camera's dynamic range. The look-up table implements the functionality of a right shifter:

S0 => input bits D0...D9, output bits D0...D7
S1 => input bits D0...D9, output bits D1...D8
S2 => input bits D0...D9, output bits D2...D9

Using these settings, the input gray-scale values from the 10-bit image are converted to 8-bit values in the range [0, 255]. The input values that are mapped by this look-up table are presented below:

S0 => 0...255
S1 => 2·(0...255) = 0, 2, 4, ..., 510
S2 => 4·(0...255) = 0, 4, 8, ..., 1020

So, when the exposure and gain values reach their limits, we can set this look-up table and exploit the high dynamic range provided by the cameras. For example, on a very sunny day setting S2 should be used, while during the evening or at night setting S0 should be used.

The main characteristic of a stereo vision system is the number of 3D points it can process after the 3D stereo reconstruction. For this reason we drove our vehicle through a real traffic scenario for two loops, one with the adaptation to lighting conditions running and one with the algorithm turned off. We recorded all the images in these two real traffic scenarios and compared the number of 3D points obtained with and without the adaptation to lighting conditions algorithm. Figure 5 plots the 3D points obtained by our stereo system with and without the adaptation algorithm. As can be seen, the number of 3D points is higher by between 7 and 12% when using the adaptation to lighting conditions algorithm. This is due to the fact that the quality of the images is greatly improved; they are sharper, so the 3D reconstruction algorithm is able to find more matching points between the two stereo images. The more 3D points we have for a pair of images, the better the detection and classification of the objects in the scene.

Fig. 5. Number of 3D reconstructed points obtained by the system with and without the adaptation to lighting conditions algorithm.

V. CONCLUSIONS AND FUTURE WORKS

A. Conclusions

Although commercial digital cameras have automatic exposure and gain control functionality, we have come to the conclusion that these solutions are not the best for real-time stereo vision processing, because they require many frames until the quality of the image is improved and thus they do not work well in real urban traffic scenes where tunnels and shadows from buildings and trees are present. Another drawback of these algorithms is the inability to set a region of interest on which to perform the adaptation to lighting conditions; they therefore take into account the sky and the hood of the vehicle instead of focusing on the scene that is to be processed. For these reasons we have presented a robust algorithm for adapting the signal of digital cameras to the environment's lighting conditions. We have used a mathematical model that estimates the camera's response taking into account two of the most important camera parameters: the amplification gain and the exposure time. The camera model, called the square gain camera model, was designed for CCD gray-scale digital cameras, but can also be extended to

CMOS sensors. It proves to be quite accurate and is able to estimate the camera response with minimal error. Using the camera model we can estimate the average intensity of the image in the next frame when changing the exposure and/or gain values. The adaptation to lighting conditions algorithm performs quite well in almost any lighting conditions; we have tested it on very sunny days as well as during the evening and at night. When adaptation to lighting conditions is no longer possible, we use the limited capabilities of the frame grabber to set the look-up table that shifts the output bits of the image.

Adaptation to lighting conditions is a very important task in a real-time stereo vision system. When it is enabled, the quality of the images is improved and thus the number of 3D reconstructed points is higher, increasing the robustness of the other processing modules: 3D points grouping, lane detection, object detection and classification, and so on. Having better quality images implies a significant increase in the detection and classification rate.

The algorithm is very flexible; its initial exposure and gain values are not crucial. We selected 0% for the gain and 20% for the exposure parameter. Our algorithm is very robust: it does not suffer from drift, and the adaptation to lighting conditions is done over a single frame, i.e. it does not require several frames to be captured until the desired quality is reached, as is the case for the automatic exposure and gain control functionalities present in digital cameras.

B. Future Works

An improvement to the adaptation algorithm would be to acquire the 10-bit images directly from the frame grabber and perform high dynamic range compression in software. This would increase the robustness of the whole system.
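As a rough illustration of how such a next-frame prediction can drive the adaptation, the sketch below assumes the square gain model means that average brightness scales as exposure times gain squared (B ≈ k · E · G²), as the model's name suggests. The update strategy (cover as much of the correction as possible with exposure, then with gain) and the function names are our own illustrative choices, not the paper's exact algorithm; the numeric limits come from Table II.

```python
def predict_brightness(b_cur, e_cur, g_cur, e_new, g_new):
    # Square-gain assumption: B is proportional to E * G^2, so next-frame
    # brightness follows the ratio of E * G^2 between the two settings.
    # Requires nonzero current exposure and gain.
    return b_cur * (e_new * g_new ** 2) / (e_cur * g_cur ** 2)

def adapt(b_cur, e_cur, g_cur,
          b_lo=80, b_hi=120, b_des=100,   # admissible interval, target (Table II)
          e_max=80.0, g_max=50.0):        # limits against motion blur / noise
    """Return (exposure, gain) for the next frame; unchanged if B is admissible."""
    if b_lo <= b_cur <= b_hi:
        return e_cur, g_cur               # inside the interval: no change
    factor = b_des / b_cur                # required brightness ratio
    e_new = min(e_cur * factor, e_max)    # adjust exposure first
    residual = factor / (e_new / e_cur)   # ratio still to cover with gain
    g_new = min(g_cur * residual ** 0.5, g_max)  # gain enters squared
    return e_new, g_new
```

For the underexposed example above (brightness 31 at 20% exposure), this sketch raises the exposure to about 64.5% and predicts a next-frame brightness of 100; when the exposure limit of 80% is reached, the remaining correction is shifted onto the gain.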
Another improvement would be to mount a light sensor inside the vehicle, on the windshield below the cameras, in order to obtain the luminance of the ambient light and use it to set the look-up table in our current frame grabber. We are also investigating other types of sensors: both CCD and CMOS cameras with a high dynamic range, able to capture 10-, 12-, or 14-bit images.

In the future we will focus on designing and implementing a custom-made Camera Link stereo frame grabber, in FPGA technology, that will be able to process high dynamic range images and provide other useful functions for adapting the sensor to the environment's lighting conditions. We will then port the implementations of the radiometric calibration and adaptation to lighting conditions algorithms to this new architecture. We intend to develop a new stereo vision machine able to perform several preprocessing algorithms, including image rectification, 3D reconstruction, optical flow, and adaptation to lighting conditions, and to provide very good images for further processing.
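The right-shifter look-up table described earlier can be sketched in a few lines; this is a minimal illustration of the S0–S2 bit-window mappings, with the function name being our own:

```python
def lut_shift(pixel10, setting):
    """Map a 10-bit pixel value to 8 bits by selecting a bit window.

    setting 0 (S0): output bits D0..D7 (values 0..255 pass through unchanged)
    setting 1 (S1): output bits D1..D8 (right shift by 1; covers 0..510)
    setting 2 (S2): output bits D2..D9 (right shift by 2; covers 0..1020)
    """
    if not 0 <= pixel10 <= 1023:
        raise ValueError("expected a 10-bit value")
    if setting not in (0, 1, 2):
        raise ValueError("expected setting S0, S1, or S2")
    return (pixel10 >> setting) & 0xFF
```

For example, `lut_shift(1020, 2)` yields 255, matching the S2 mapping 4 * 255 -> 1020; S0 is the identity on the dark range 0..255, which is why it suits evening and night scenes.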