A Novel Hybrid Exposure Fusion Using Boosting Laplacian Pyramid


S. Abdulrahaman, M.Tech (DECS), G. Pullaiah College of Engineering & Technology, Nandikotkur Road, Kurnool, A.P-518452.
Mr. G. Ramarao, M.Tech, M.I.E, Associate Professor, Department of ECE, G. Pullaiah College of Engineering & Technology, Nandikotkur Road, Kurnool, A.P-518452.

Abstract: The dynamic range of a natural scene often spans a much larger scope than the capture range of common digital cameras. A single exposure captures only a certain dynamic range of the scene, and some regions are invisible due to under-exposure or over-exposure. Variable exposure photography captures multiple images of the same scene with different exposure settings while maintaining a constant aperture. To recover the full dynamic range and make all details visible in one image, high dynamic range (HDR) imaging techniques reconstruct one HDR image from an input exposure sequence. These HDR images usually have higher fidelity than conventional low dynamic range (LDR) images and have been widely applied in computer vision and image processing applications such as physically-based realistic image rendering and photography enhancement. On the other hand, current displays can handle only a very limited dynamic range. This project proposes a new exposure fusion approach for producing a high-quality image from multiple exposure images. A novel hybrid exposure weight measurement is developed, based on a local weight and a global weight that consider the exposure quality between different exposure images, together with a just-noticeable-distortion (JND) based saliency weight. This hybrid weight is guided not only by a single image's exposure level but also by the relative exposure level between different exposure images.
The core of the approach is our novel boosting Laplacian pyramid (BLP), which boosts the detail and base signals respectively, with the boosting process guided by the proposed exposure weight. Our approach can effectively blend multiple exposure images of static scenes while preserving both color appearance and texture structure. Our experimental results demonstrate that the proposed approach produces visually pleasing exposure fusion images with better color appearance and more texture details than existing exposure fusion techniques and tone mapping operators.

Introduction: Exposure fusion is currently a very active research area in computer vision, as it recovers the full dynamic range from an input exposure sequence. The task differs slightly from traditional HDR imaging: it does not need to reconstruct a single HDR image from a set of images captured under different exposure settings of the same scene. Traditional image fusion techniques are the most relevant to our algorithm, but those algorithms focus on preserving details and can be viewed as an analogy to alpha blending. The purpose of exposure fusion is to acquire the full dynamic range of a scene by blending multiple exposure images into a single high-quality composite image, preserving detail and texture information as much as possible.

General image fusion approaches usually take multisensor or multispectral images as input, while exposure fusion methods take multiple exposure images as input. Given a sequence of differently exposed images of a scene, our approach produces a detail- and texture-preserving fusion result using the boosting Laplacian pyramid. The proposed method needs neither to generate an HDR image nor to perform tone mapping.

RELATED WORK: In recent years, several multiple exposure fusion approaches have been proposed and are widely used for their simplicity and effectiveness. Mertens et al. presented an exposure fusion approach using Gaussian and Laplacian pyramids in a multi-resolution fashion, which belongs to the pixel-level fusion approaches. Raman and Chaudhuri employed edge-preserving bilateral filters to generate an exposure fusion result from multi-exposure inputs. Raman et al. further proposed an automatic multi-exposure fusion approach that avoids ghost artifacts by detecting the moving objects. Recently, Zhang and Cham presented a tone-mapping-like exposure fusion method guided by gradient-based quality assessment. Later, Song et al. synthesized an exposure fusion image using a probabilistic model, which preserves the luminance levels and suppresses reversals in the image luminance gradients. More recently, Hu et al. presented a novel registration and fusion approach for exposure images in the presence of both live scenes and camera motion. Compared with traditional tone mapping techniques, the proposed exposure fusion approach does not need to recover the camera response curve or record the exposure times of the input sequence. As described earlier, the conventional two stages, HDR image creation and tone mapping, usually require complex user interaction and tend to lose some color and texture details in the fusion result.
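The multi-resolution blending scheme of Mertens et al. that our BLP builds on can be sketched in plain numpy. This is a minimal illustration, not the paper's implementation: the function names, the 5-tap binomial blur, and the nearest-neighbour upsampling are my own simplifying choices, and the boosting of base/detail layers described later is omitted here.

```python
import numpy as np

def blur(img):
    """Separable 5-tap binomial blur with reflect padding (kernel sums to 1)."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    for axis in (0, 1):
        pad = [(2, 2) if a == axis else (0, 0) for a in range(img.ndim)]
        p = np.pad(img, pad, mode="reflect")
        img = sum(k[i] * np.take(p, np.arange(i, i + img.shape[axis]), axis=axis)
                  for i in range(5))
    return img

def downsample(img):
    return blur(img)[::2, ::2]

def upsample(img, shape):
    """Nearest-neighbour expansion back to `shape` (a simple stand-in for
    the usual zero-insert-and-blur expand step)."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels - 1):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))  # detail layer
        cur = down
    pyr.append(cur)                                  # coarsest base layer
    return pyr

def fuse(images, weights, levels=3):
    """Blend each Laplacian level of the inputs using the Gaussian pyramid
    of the (per-pixel normalised) weight maps, then collapse the result."""
    wsum = np.sum(weights, axis=0) + 1e-12
    weights = [w / wsum for w in weights]
    fused = None
    for img, w in zip(images, weights):
        lp = laplacian_pyramid(img, levels)
        gp = gaussian_pyramid(w, levels)
        blended = [l * g for l, g in zip(lp, gp)]
        fused = blended if fused is None else [f + b for f, b in zip(fused, blended)]
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = upsample(out, lvl.shape) + lvl
    return out
```

Because the same `upsample` is used for analysis and collapse, the pyramid reconstructs an image exactly when a single input (or identical weights) is fused; color inputs would be processed per channel.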
Therefore, it is desirable to produce the fusion result directly from a multiple-exposure input sequence, which is more efficient and effective. The purpose of our exposure fusion technique is to directly produce visually pleasing photographs for display while preserving the details and enhancing the contrast.

Proposed Method:

Fig. 1. Block diagram of a novel exposure fusion approach using BLP to produce a high-quality image from multiple exposure images.

Exposure fusion is a fairly new concept: the process of creating a low dynamic range (LDR) image from a series of bracketed exposures. In short, EF takes the best bits from each image in the sequence and seamlessly combines them to create a final fused image. More technically, the fusing process assigns weights to the pixels of each image in the sequence according to luminosity, saturation and contrast, then, depending on these weights, includes or excludes them from the final image. Because exposure fusion relies on these qualities, no extra data is required; indeed, if you wanted to, you could include a flash image to bring darker areas to life. Exposure fusion also has one other trick up its virtual sleeve: it can create extended depth-of-field (DOF) images by fusing together a sequence of images with different DOFs. This can be quite handy, say, if lighting conditions at the time don't allow the full DOF to be captured in one shot, or if you're simply limited by the DOF of your lens. This process can also be used creatively to combine different DOFs in one image.
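The per-pixel luminosity, saturation and contrast measures mentioned above can be sketched as follows, in the spirit of Mertens et al.'s quality measures. This is an illustrative version only; the paper's hybrid weight (introduced below) replaces these purely local cues, and the Laplacian-response contrast and `sigma = 0.2` well-exposedness bump are assumed conventions, not the paper's formulas.

```python
import numpy as np

def quality_weights(rgb, sigma=0.2, eps=1e-12):
    """Per-pixel quality map for one exposure image with values in [0, 1]:
    contrast * saturation * well-exposedness (Mertens-style measures)."""
    gray = rgb.mean(axis=2)
    # contrast: magnitude of a discrete Laplacian response (wrap-around edges)
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4.0 * gray)
    contrast = np.abs(lap)
    # saturation: standard deviation across the RGB channels
    saturation = rgb.std(axis=2)
    # well-exposedness: Gaussian bump around mid-grey 0.5, per channel
    wexp = np.prod(np.exp(-((rgb - 0.5) ** 2) / (2.0 * sigma ** 2)), axis=2)
    return (contrast + eps) * (saturation + eps) * wexp
```

A nearly clipped exposure (values close to 1) receives a much smaller well-exposedness score than a mid-grey one, which is exactly how the fusion discards over-exposed pixels.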

Fig. 2. Example of extended depth-of-field images created by fusing together a sequence of images with different DOFs.

Exposure fusion algorithm: Our goal is to automatically find the useful visibility information in each input exposure image and then combine these regions to generate a high-quality fusion result with more details. We propose three different guidance measures to identify each pixel's contribution to the final fusion components, considering both the global and local exposure weight for multiple exposure fusion. Our method differs from the approach in [23], which emphasizes local cues such as contrast, saturation and exposure measurement to determine the weight maps. In contrast, we measure the exposure weight by computing the exposure levels from both local and global information.

A. Local exposure weight: This exposure quality assessment Q(x, y) sets the lightest and darkest regions to zero, while assigning other regions values between zero and one. Our exposure weight map E(x, y) is the grayscale image of Q(x, y) and represents the exposure quality of the input image.

B. Global exposure weight: The aforementioned exposure weight is a local weight map, which computes the exposure level from only a single exposure image of the input sequence. However, this local weight map does not exploit the global relationship of measuring the exposure level between different exposure images. Hence we define a global exposure weight V(x, y) to make a better exposure measurement by considering the other exposure images in the sequence; finally, we multiply the weight map of each exposure image to obtain its final global exposure level over the input sequence.

C. JND-based saliency weight: JND refers to the maximum distortion that the human visual system (HVS) does not perceive, and defines a perceptual threshold to guide perceptual image quality measurement. The JND model helps us represent the HVS sensitivity when observing an image [12], and is an important visual saliency cue for image quality measurement. We employ the JND model to define the saliency weight J(x, y), which picks up the pixels in the different exposure images with good color contrast and well-saturated regions. Furthermore, we utilize a function based on the saliency weight map to estimate the level of boosting in our BLP. Texture masking is usually determined by the local spatial gradients around a pixel, so edge and non-edge regions should be well distinguished to obtain a more accurate JND estimate.

Boosting Laplacian Pyramid: Our boosting process is guided by the aforementioned local exposure weight and JND-based saliency weight of the different exposure images. This guidance is very useful for correctly selecting the salient regions to boost, and the boosting level is controlled by the exposure quality measurement.
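A JND threshold of the kind referenced here combines luminance adaptation with texture masking. The sketch below uses the widely cited Chou-Li style constants as an assumption; the paper's actual model (with explicit edge / non-edge separation) is richer, and deriving the saliency weight J(x, y) from this threshold is left out.

```python
import numpy as np

def jnd_map(gray, t0=17.0, gamma=3.0 / 128.0, lam=0.5):
    """Rough per-pixel JND threshold for an 8-bit grayscale image:
    max of a luminance-adaptation term and a texture-masking term."""
    # background luminance: mean over a 3x3 neighbourhood (wrap-around edges)
    bg = sum(np.roll(np.roll(gray, dy, 0), dx, 1)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    # luminance adaptation: thresholds rise in dark and very bright regions
    la = np.where(bg <= 127.0,
                  t0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                  gamma * (bg - 127.0) + 3.0)
    # texture masking: proportional to the local gradient magnitude
    gx = np.roll(gray, -1, 1) - gray
    gy = np.roll(gray, -1, 0) - gray
    tm = lam * np.sqrt(gx ** 2 + gy ** 2)
    return np.maximum(la, tm)
```

The larger the threshold at a pixel, the less visible a distortion there, which is why the text uses the JND model to decide where boosting is perceptually safe.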

To avoid color-casting artifacts, we multiply the RGB triplet by a scalar, which keeps the chromaticity unchanged before and after signal boosting and reduces color distortion. Fig. 3 illustrates our boosting of the magnitude of the RGB color vector along the direction of that vector. The direction of the RGB color vector lies along the 45-degree diagonal, which means the direction of the color vector does not change before and after applying our boosting Laplacian pyramid (Fig. 3, third row). At the same time, the magnitude of the RGB color vector increases nonlinearly through the boosting process (Fig. 3, bottom row). Our boosting method preserves the structure and texture details so as to obtain a natural and visually pleasing fusion result.

Fig. 4. Performance comparisons on the input sequence with the exposure fusion approach and the gradient-directed fusion approach. (a) Input sequence. (b) Result 1. (c) Result 2. (d) Our result. (e) Close-up of (b). (f) Close-up of (c). (g) Close-up of (d).

The detail layer is obtained by subtracting the Gaussian pyramid signal from the original signal, as in the standard image Laplacian pyramid [1]. Our BLP has the new advantage of boosting the base-layer and detail-layer signals according to the boosting guidance, which is computed from our local exposure weight and JND-based saliency weight introduced in Section III.

Boosting guidance: As mentioned before, our boosting process is guided by two image quality measurements: the local exposure weight guidance and the JND-based saliency weight guidance. We first use the exposure weight guidance to select the well-exposed regions to be boosted by our BLP. This strategy avoids visual artifacts.
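The chromaticity-preserving scaling described above amounts to multiplying each RGB triplet by one per-pixel scalar. A small sketch, where the gain schedule `1 + beta * guidance` is an illustrative assumption (the paper's nonlinear boosting function is not spelled out in this text):

```python
import numpy as np

def boost_rgb(rgb, guidance, beta=0.5):
    """Scale the magnitude of every RGB colour vector by a per-pixel scalar
    gain while leaving its direction (chromaticity) unchanged.
    `guidance` is a map in [0, 1] from the exposure/saliency weights."""
    gain = 1.0 + beta * guidance          # scalar gain per pixel
    return rgb * gain[..., np.newaxis]    # same multiplier for R, G and B
```

Since all three channels share one multiplier, the normalised colour vector before and after boosting is identical, which is precisely the "direction of the RGB color vector does not change" property of Fig. 3.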
It is beneficial to boost the base and detail layers to different extents for each pixel according to the guidance map, since the well-exposed regions and the under- or over-exposed regions of the sequence should be enhanced with different amplification values during the boosting process. The output is formed as a video file from the sequence of single output images and saved in the current folder.

Experimental Results: These three weights capture different aspects of image quality measurement, and the hybrid weight maps denote the overall contribution of the different exposure images to the final fusion result. As mentioned before, the basic principle of exposure fusion is to select the well-exposed pixels from the input images and preserve the detail and texture information as much as possible during the fusion process. Therefore, we define a local weight E(x, y) and a global weight V(x, y) to measure the exposure level of the input images. E(x, y) selects the well-exposed pixels while V(x, y) eliminates the badly exposed regions of the current image; thus V(x, y) helps E(x, y) to compensate the exposure level measurement. J(x, y) measures the visual saliency according to the background luminance and texture information.

Fig. 5. Contribution of the different exposure images to the final fusion result.
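The combination of E(x, y), V(x, y) and J(x, y) into one hybrid weight per exposure can be sketched as below. Multiplying the three cues and normalising across the sequence is an assumption consistent with the text ("we multiply the weight map of each exposure image"); the paper may weight or smooth the cues differently.

```python
import numpy as np

def hybrid_weights(E, V, J, eps=1e-12):
    """Given lists of local exposure weights E, global exposure weights V and
    JND-based saliency weights J (one 2-D map per exposure image), return
    hybrid weight maps that sum to one at every pixel across the sequence."""
    raw = [e * v * j for e, v, j in zip(E, V, J)]   # combine the three cues
    total = np.sum(raw, axis=0) + eps               # per-pixel normaliser
    return [r / total for r in raw]
```

These normalised maps are what the pyramid blending consumes: at each pixel the exposures' contributions form a convex combination, so no pixel is over- or under-counted in the fused result.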

Conclusion: This paper has presented a novel exposure fusion approach using BLP to produce a high-quality image from multiple exposure images. Our novel BLP algorithm is based on boosting the detail and base signals respectively, and can effectively blend multiple exposure images while preserving both color appearance and texture structure. We therefore believe that our fusion method suffices to produce results with fine details for most practical applications. A comprehensive perceptual study and analysis of exposure fusion algorithms will make an interesting subject for future work. For instance, we could create a benchmark of input exposure images and conduct a user study to compare a representative number of state-of-the-art exposure fusion methods.

References:
[1] M. I. Smith and J. P. Heather, "Review of image fusion technology in 2005," in Proc. SPIE, vol. 5782, 2005, pp. 29-45.
[2] E. A. Khan, A. O. Akyuz, and E. Reinhard, "Ghost removal in high dynamic range images," in Proc. ICIP, 2006, pp. 2005-2008.
[3] A. Eden, M. Uyttendaele, and R. Szeliski, "Seamless image stitching of scenes with large motions and exposure differences," in Proc. IEEE CVPR, 2006, pp. 2498-2505.
[4] R. Fattal, M. Agrawala, and S. Rusinkiewicz, "Multiscale shape and detail enhancement from multi-light image collections," ACM Trans. Graph., vol. 26, no. 3, article 51, 2007.
[5] S. Raman and S. Chaudhuri, "A matte-less, variational approach to automatic scene compositing," in Proc. IEEE ICCV, 2007, pp. 1-6.
[6] K. Jacobs, C. Loscos, and G. Ward, "Automatic high-dynamic range image generation for dynamic scenes," IEEE Comput. Graph. Appl., vol. 28, no. 2, pp. 84-93, Mar.-Apr. 2008.
[7] O. Gallo, N. Gelfand, W. Chen, M. Tico, and K. Pulli, "Artifact-free high dynamic range imaging," in Proc. IEEE ICCP, 2009, pp. 1-7.
[8] T. Mertens, J. Kautz, and F. V. Reeth, "Exposure fusion: A simple and practical alternative to high dynamic range photography," Comput. Graph. Forum, vol. 28, no. 1, pp. 161-171, 2009.
[9] A. R. Varkonyi-Koczy, A. Rovid, and T. Hashimoto, "Gradient-based synthesized multiple exposure time color HDR image," IEEE Trans. Instrum. Meas., vol. 57, no. 8, pp. 1779-1785, Aug. 2008.
[10] S. Raman and S. Chaudhuri, "Bilateral filter based compositing for variable exposure photography," in Proc. Eurographics, 2009, pp. 1-4.
[11] S. Raman, V. Kumar, and S. Chaudhuri, "Blind de-ghosting for automatic multi-exposure compositing," in Proc. SIGGRAPH ASIA Posters, 2009, article 44.
[12] W. Zhang and W. K. Cham, "Gradient-directed composition of multi-exposure images," in Proc. IEEE CVPR, 2010, pp. 530-536.
[12] Q. Shan, J. Jia, and M. S. Brown, "Globally optimized linear windowed tone mapping," IEEE Trans. Vis. Comput. Graph., vol. 16, no. 4, pp. 663-675, Jul.-Aug. 2010.
[13] M. Granados, B. Ajdin, M. Wand, C. Theobalt, H. P. Seidel, and H. P. A. Lensch, "Optimal HDR reconstruction with linear digital cameras," in Proc. IEEE CVPR, 2010, pp. 215-222.
[14] T. H. Wang, C. W. Fang, M. C. Sung, and J. J. J. Lien, "Photography enhancement based on the fusion of tone and color mappings in adaptive local region," IEEE Trans. Image Process., vol. 19, no. 12, pp. 3089-3105, Dec. 2010.
[15] A. Liu, W. Lin, M. Paul, C. Deng, and F. Zhang, "Just noticeable difference for images with decomposition model for separating edge and textured regions," IEEE Trans. Circuits Syst. Video Technol., vol. 20, no. 11, pp. 1648-1652, Nov. 2010.
[16] E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display and Image-Based Lighting, 2nd ed. San Mateo, CA, USA: Morgan Kaufmann, 2010.