
Multi-sensor Super-Resolution

Assaf Zomet    Shmuel Peleg
School of Computer Science and Engineering, The Hebrew University of Jerusalem, 91904 Jerusalem, Israel
E-mail: {zomet,peleg}@cs.huji.ac.il

Abstract

Image sensing is usually done with multiple sensors, like the RGB sensors in color imaging, or the IR and EO sensors in surveillance and satellite imaging. The resolution of each sensor can be increased by considering the images of the other sensors, using the statistical redundancy among the sensors. In particular, we use the fact that most discontinuities in the image of one sensor correspond to discontinuities in the other sensors. Two applications are presented: increasing the resolution of a single color image by using the correlation among the three color channels, and enhancing noisy IR images.

Keywords: Super-Resolution, Demosaicing, Color, Multi-Sensor, Restoration.

1 Introduction

Image sensing is usually done with multiple sensors. A color image, for example, is a combination of three sensors: red, green, and blue. In visual surveillance and satellite imaging, even more diverse sensors are often used, e.g. some sensors in the visible domain and others in the infrared domain. Combined depth-color cameras (e.g. [1]) are also becoming available. In most multi-sensor applications it is assumed that the images are aligned. Otherwise the motion between the images is computed [9, 20, 5], and the images are aligned by warping. The resampling done for warping degrades the quality of the combined image. This paper presents a new way to combine the information from different non-registered sensors. Given a set of images from possibly different sensors viewing the same scene, the resolution of one image is improved by using the other images. With RGB sensors, for example, the red channel is enhanced using the green and blue channels; similarly, the green channel is enhanced using the red and blue channels.
The result of combining the enhanced resolution channels is a higher resolution color image.

Figure 1. The Bayer pattern, a common way to organize the red, green and blue sensors in a grid.

We adapt the super resolution engine [10, 15, 6, 18], developed for same-sensor images, to combine information from different sensors. The multi-sensor extension is enabled by exploiting statistical redundancy among the sensors. In particular, we use the fact that most discontinuities in the image of one sensor correspond to discontinuities in the other sensors, and find a local affine mapping between the intensities of different sensors along the edges. We address the validity of this model both analytically and experimentally. Clearly, for very different sensors, such as medical modalities [20], this model is less useful. One application of multi-sensor super-resolution is the improvement of resolution in a single multi-sensor image whose channels are not registered. This is the typical case in 1-CCD color cameras (see Fig. 1), where each pixel location has a sensor of a single color. It may also occur in 3-CCD cameras, where each color has a full CCD, and there is no perfect registration of the three sensors.

1.1 Previous Work

There has been much work on the combination of different sensors, and particularly on the recovery of color image values from noisy samples. One approach is color image restoration [2, 12, 19, 17], where the combination is usually achieved by a joint-channel regularization term, aiming mainly to reduce noise. Since super resolution is a generalization of image restoration [6], the adaptation of these algorithms to super resolution is straightforward. The approach of this paper is inherently different, since it minimizes a projection error rather than forcing spatial inter-channel smoothness constraints on the solution. When prior knowledge of the scene is available [4, 14], regularization terms [2, 12] can be combined in the algorithm. A comparison of the results of multi-sensor super resolution and multi-sensor restoration is shown in the experiments. Another body of related work is demosaicing [13, 11, 16, 7, 14], aiming at recovery of the missing samples in a 1-CCD color filter array (Fig. 1). The proposed approach is more general, allowing an arbitrary transformation between the different sensors. It is interesting to compare the presented method to multi-sensor fusion [3], where the visual information captured by various sensors is combined into a single image. This combined image includes the information (i.e. edges) from all sensors. In the presented method, one of the input images is set as the photometric reference image, and the resulting image contains only the edges which appeared in this reference image. An edge that is included in another sensor, but not in the reference image, will not appear in the enhanced image.

2 Multi-Sensor Super Resolution

Existing super resolution techniques present the process in the following way [10, 15, 6, 21, 18]: The low resolution input images, all captured by the same sensor, are the result of imaging some high resolution (unknown) image. The imaging model usually includes geometric warping, camera blur, decimation, and additive noise. The goal is to find the high resolution image which, when imaged into the lattice of the input images according to the respective imaging model, predicts the low resolution input images well.
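Concretely, the prediction error of Eq. (1) can be sketched in a few lines of numpy/scipy. This is an illustrative reconstruction, not the authors' code: the warp is simplified to a pure translation, and the blur width and decimation factor are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def simulate_low_res(x_hat, dx, dy, blur_sigma=1.0, decim=2):
    """Apply the imaging model of Eq. (1): warp, blur, then decimate."""
    warped = shift(x_hat, (dy, dx), order=1, mode="nearest")  # W_k (translation)
    blurred = gaussian_filter(warped, blur_sigma)             # B_k (camera blur)
    return blurred[::decim, ::decim]                          # D   (decimation)

def prediction_error(i_k, x_hat, dx, dy):
    """e_k = I_k - D B_k W_k X_hat; classical SR uses e_k to update X_hat."""
    return i_k - simulate_low_res(x_hat, dx, dy)
```

An iterative super-resolution scheme in the style of [10] would repeatedly back-project e_k onto the solution estimate; the multi-sensor variant described next replaces e_k with a virtual prediction error.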
Let $\hat X$ be an estimate of the unknown image $X$. Then the prediction error for image $I_k$ is:

$$e_k = I_k - D B_k W_k \hat X \qquad (1)$$

where $W_k$, $B_k$ are the geometric warping and blurring operators respectively, and $D$ is decimation. $B_k$ is the combination of optical blur, sensor blur, and motion blur. $W_k$ is based on the estimated displacement between the images. Most super resolution algorithms [10, 15, 6, 21, 18] minimize the $L_2$ norm of the error $e_k$ (possibly with regularization), by iteratively simulating the imaging process and using $e_k$ to update the solution estimate.

The proposed multi-sensor super-resolution algorithm uses the same framework. $\hat X$ is assumed to be a high resolution version of $I_1$ (w.l.o.g.). Since $I_k$, $k = 2, \dots, N$, were possibly captured by different sensors than $I_1$, no imaging process has created $I_k$ from $X$. Still, aligned images of different sensors are statistically related. Using a model $M$ relating the projected image $P_k = D B_k W_k \hat X$ with the input image $I_k$, a virtual input image $\tilde I_k$ and a virtual prediction error image $\tilde e_k$ can be defined:

$$\tilde I_k = M(P_k, I_k), \qquad \tilde e_k = \tilde I_k - P_k$$

The virtual prediction error $\tilde e_k$ replaces the prediction error $e_k$ in the super resolution algorithm, e.g. [10, 15, 18]. We use an affine relation between the intensities of different sensors in a local neighborhood. We select corresponding neighborhoods in two images $P$ and $I$, possibly taken by different sensors. For this neighborhood we estimate an affine transformation relating the intensity values of $I$ to the intensity values of $P$: $P(x,y) \approx a(x,y)\,I(x,y) + b(x,y)$. This mapping is used to compute the virtual input image $\tilde I(x,y) = a(x,y)\,I(x,y) + b(x,y)$, and the virtual prediction error $\tilde e(x,y) = \tilde I(x,y) - P(x,y)$. To simplify notation in the following equations, we will use $a$, $b$ to represent $a(x,y)$, $b(x,y)$. Assuming the image $I$ is contaminated with zero-mean white noise, the optimal estimator for the affine relation between a region in the projected image $P$ and a region in the input image $I$ minimizes the following squared error: $\sum_{u,v} \left( a\,I(u,v) + b - P(u,v) \right)^2$. In Section 2.1
we discuss the validity of the local affine model for relating the intensity values of different sensors, and explain why it can be useful in the context of super resolution. Still, this model fails in some cases, and thus it is important to check its validity before using it. The absolute average-centralized normalized cross correlation $\rho$ is a good measure for affine similarity: the maximal absolute value of $\rho$ indicates two signals that are related by an affine transformation. In summary, the virtual input image $\tilde I = M(P, I)$ can be computed for pixel $(x,y)$ as follows. Estimate the absolute value of the centralized-normalized cross correlation $\rho$ in a weighted window around $(x,y)$:

$$\rho(x,y) = \frac{\left| \overline{PI}(x,y) - \bar P(x,y)\,\bar I(x,y) \right|}{\sqrt{\left( \overline{P^2}(x,y) - \bar P(x,y)^2 \right)\left( \overline{I^2}(x,y) - \bar I(x,y)^2 \right)}} \qquad (2)$$

where $\bar P(x,y)$ is a weighted average of $P$ in a neighborhood around $(x,y)$:

$$\bar P(x,y) = \sum_{u,v} w(u,v)\, P(x+u, y+v)$$

and similarly for $\bar I$, $\overline{PI}$, $\overline{P^2}$ and $\overline{I^2}$. Weighting the window by a distance-decreasing kernel reduces spatial discontinuities in the affine parameters.
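The windowed statistics of Eq. (2) can be computed densely for all pixels at once by Gaussian filtering, the Gaussian playing the role of the distance-decreasing kernel w. A minimal sketch; the function name and the window width sigma are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def affine_correlation(p, i, sigma=1.5, eps=1e-8):
    """Absolute centralized, normalized cross correlation of Eq. (2),
    computed over Gaussian-weighted windows for every pixel at once."""
    m = lambda x: gaussian_filter(x, sigma)   # weighted local mean (kernel w)
    p_m, i_m = m(p), m(i)
    cov = m(p * i) - p_m * i_m                # weighted local covariance
    var_p = m(p * p) - p_m ** 2               # weighted local variances
    var_i = m(i * i) - i_m ** 2
    return np.abs(cov) / np.sqrt(np.maximum(var_p * var_i, eps))
```

Two signals related by an exact affine map give a correlation of 1 wherever the local variance is non-negligible.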

Figure 2. An example of a projection error image $e_k$, as defined in Eq. (1). The non-zero values concentrate along the edges of the car.

If the correlation $\rho(x,y)$ is below some threshold $\rho_0$, skip the affine estimation and assign $\tilde I(x,y) = P(x,y)$. Otherwise, compute the least-squares estimate of the affine similarity in the weighted window:

$$a(x,y) = \frac{\overline{PI}(x,y) - \bar P(x,y)\,\bar I(x,y)}{\overline{I^2}(x,y) - \bar I(x,y)^2}, \qquad b(x,y) = \bar P(x,y) - a(x,y)\,\bar I(x,y) \qquad (3)$$

The affine estimate $\tilde I(x,y) = a(x,y)\,I(x,y) + b(x,y)$ is then used for estimating the virtual prediction error $\tilde e$, which is reprojected onto the solution estimate. The threshold $\rho_0$ expresses the validity of the affine approximation. It was experimentally verified that modifying $\rho_0$ within a range of values had negligible influence on the results.

2.1 Discussion: Local Affine Model for Inter-sensor Prediction

The use of a local affine model for estimating $\tilde e$ from the input image $I$ is based on two simple observations: First, most of the information used by super resolution algorithms is concentrated along the edges. Second, in many practical cases, images of different sensors have edges at corresponding locations. Fig. 2 shows a typical prediction error image of a single-sensor super resolution algorithm. Most of the details for resolution enhancement are concentrated along the edges; therefore, the most important regions to estimate in the virtual prediction image $\tilde e$ are along the edges. A local affine relation between the intensity values of different sensors has been used by Irani and Anandan as a distance measure for image alignment [9]. While this assumption does not hold for general sensors, it is useful in several practical cases, e.g. for different color sensors (neglecting minor misalignments due to color aberration [8]). To test the validity of the model between different color sensors, we computed statistics over various images of different types (natural, urban, faces, etc.). The results, presented in Fig. 4, show that typically more than 90% of the edges follow the affine model (correlation above 0.8). This translates into a low model prediction error (Fig. 4-b). The model validity can also be shown analytically. Looking at small neighborhoods, most edges can be modeled as abrupt transitions between two relatively homogeneous regions. Under the simplifying assumption that these regions are homogeneous, it can easily be shown that the linearity of the blurring operator implies an affine transformation between the intensity values of different sensors near an edge. The full proof is omitted due to space limitations. The affine approximation fails in some cases, for example when the imaged region contains more than two homogeneous colors, or when the variations in intensity are mainly due to noise, as demonstrated in Fig. 3. It is therefore important to test the validity of the model using the correlation measure (Eq. 2) before using it.

Figure 3. The correlation values between different sensors: the red channel, the green channel, and the absolute value of the local correlation between the two colors. Most regions have large correlation, except regions containing more than two colors (corners) and uniform regions where most intensity variations are due to noise.

3 Experiments

We first tested the algorithm on color samples organized in a Bayer pattern, as presented in Fig. 1. The algorithm was applied to each of the sensors, improving for example the red image using the blue and green images, etc. For the local photometric affine alignment stage we used in all our experiments a window weighted by a Gaussian kernel, with fixed values of the standard deviation and of the correlation threshold $\rho_0$; the algorithm was not sensitive to changes in these values. For the blurring operator we used an isotropic Gaussian. We examined whether adding information from other sensors increases the quality of the image.
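The whole per-pixel computation of the virtual input image, combining the correlation test of Eq. (2) with the least-squares affine fit of Eq. (3), can be sketched as below. The window width and the threshold rho_0 are placeholders, since the exact experimental values did not survive transcription:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def virtual_input(p, i, sigma=1.5, rho_0=0.75, eps=1e-8):
    """Return the virtual input image I~ and the virtual error I~ - P.
    Where the local correlation (Eq. 2) exceeds rho_0, I is mapped into the
    photometric space of the projection P by the affine fit of Eq. (3);
    elsewhere the model is distrusted and P itself is used."""
    m = lambda x: gaussian_filter(x, sigma)   # weighted local mean
    p_m, i_m = m(p), m(i)
    cov = m(p * i) - p_m * i_m
    var_p = m(p * p) - p_m ** 2
    var_i = m(i * i) - i_m ** 2
    rho = np.abs(cov) / np.sqrt(np.maximum(var_p * var_i, eps))
    a = cov / np.maximum(var_i, eps)          # Eq. (3): slope
    b = p_m - a * i_m                         # Eq. (3): offset
    i_tilde = np.where(rho >= rho_0, a * i + b, p)
    return i_tilde, i_tilde - p
```

When P and I come from the same sensor the fit degenerates to a = 1, b = 0, and the virtual error reduces to the ordinary prediction error of Eq. (1).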
The algorithm was applied to the red image, using the green and blue channels as input, and the result was compared to single-image super resolution on the red channel image, which is equivalent to a high-pass filter on the image. The experiment results are presented in Fig. 7. To demonstrate the quality of the combination of the three super-resolved channels, we compared it with three demosaicing methods: bilinear interpolation, Freeman's method [7], and color restoration [12]. The results are presented in Fig. 6. It can be seen that the presented method eliminates the zipper artifacts along the edges without oversmoothing the image. Note that most of the intensity information is captured by the green channel, for which the resolution enhancement is minor; thus the process serves mainly as an anti-aliasing filter. In a second experiment, we enhanced a 2-5um IR image using 8-12um IR images of higher resolution (Fig. 5). Due to the low SNR in the input images, we used a larger neighborhood for the local affine model, with standard deviation 5.2.

Figure 4. Measuring the validity of the local photometric affine model between different sensors. Fig. 4-a shows a typical cumulative histogram of the normalized cross correlation in regions where the gradient is larger than 10 gray values (for 8-bit images). Fig. 4-b shows the cumulative histogram of the prediction error of the local affine model. Fig. 4-c shows the image on which these statistics were computed. These results repeated on different images and among all pairs of RGB sensors.

Figure 5. Results of multi-sensor super resolution on IR images of different spectral sensitivity. Fig. 5-a shows the input image (bilinearly interpolated), and Fig. 5-b shows the result of the multi-sensor super resolution.

4 Summary

This paper has introduced a new way to combine information from different image sensors. Given non-registered images captured by different sensors, the algorithm improves their resolution using a local photometric affine alignment along the edges.
Acknowledgments

The authors would like to thank Minolta for providing raw samples from their camera, Elop and Eli Shechtman for providing IR input data and useful comments, and Danny Keren for applying his algorithm to our input.

Figure 6. A comparison of multi-sensor super resolution to other demosaicing techniques on a Bayer pattern: (a) bilinear interpolation; (b) Freeman's method [7]; (c) multichannel restoration [12]; (d) the presented multi-sensor super resolution. Image (d) contains no high-frequency zipper artifacts, and is sharper than (b) and (c).

Figure 7. The results on the red image of the Books example. Magnified regions from the input red image: (a) bilinear interpolation; (b) the multi-sensor super resolution result; (c) single image super resolution (high-pass filter). The contrast of all images was enhanced for visualization.

References

[1] www.3dvsystems.com.
[2] P. Blomgren and T. Chan. Color TV: Total variation methods for restoration of vector-valued images. IEEE Trans. Image Processing, 7(3):304-309, March 1998.
[3] P. Burt and R. Kolczynski. Enhanced image capture through fusion. In ICCV, pages 173-182, 1993.
[4] D. Capel and A. Zisserman. Super-resolution enhancement of text image sequences. In ICPR, pages Vol I: 600-605, September 2000.
[5] Y. Caspi and M. Irani. Alignment of non-overlapping sequences. In ICCV, pages II: 76-83, July 2001.
[6] M. Elad and A. Feuer. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans. Image Processing, 6(12):1646-1658, December 1997.
[7] W. Freeman. Median filter for reconstructing missing color samples. United States Patent 4,724,395, 1998.
[8] B. Funt and J. Ho. Color from black and white. In ICCV, December 1988.
[9] M. Irani and P. Anandan. Robust multi-sensor image alignment. In ICCV, pages 959-966, January 1998.
[10] M. Irani and S. Peleg. Improving resolution by image registration. GMIP, 53:231-239, 1991.
[11] D. Keren and Y. Hel-Or. Image processing system using image demosaicing. Patent EP1050847, 2000-11-08.
[12] D. Keren and M. Osadchy. Restoring subsampled color images. MVA, 11(4):197-202, 1999.
[13] R. Kimmel. Demosaicing: Image reconstruction from color CCD samples. IEEE Trans. Image Processing, 8(9):1221-1228, September 1999.
[14] S. K. Nayar and S. G. Narasimhan. Assorted pixels: Multisampled imaging with structural models. In ECCV, May 2002.
[15] A. Patti, M. Sezan, and A. Tekalp. Superresolution video reconstruction with arbitrary sampling lattices and nonzero aperture time. IEEE Trans. Image Processing, 6(8):1064-1076, August 1997.
[16] R. Ramanath, W. Snyder, G. Bilbro, and W. Sander. Demosaicking methods for Bayer color arrays. Journal of Electronic Imaging, 11(3), July 2002.
[17] R. Schultz and R. Stevenson. Stochastic modeling and estimation of multispectral image data. IEEE Trans. Image Processing, 4(8):1109-1119, December 1995.
[18] R. Schultz and R. Stevenson. Extraction of high-resolution frames from video sequences. IEEE Trans. Image Processing, 5(6):996-1011, June 1996.
[19] B. Tom and A. Katsaggelos. Multi-channel image identification and restoration using the expectation maximization algorithm. Optical Engineering, 35(1):241-254, January 1996.
[20] P. Viola and W. Wells, III. Alignment by maximization of mutual information. IJCV, 24(2):137-154, September 1997.
[21] A. Zomet and S. Peleg. Efficient super-resolution and applications to mosaics. In ICPR, volume I, pages 579-583, September 2000.