Global Color Saliency Preserving Decolorization


pp. 133-140, http://dx.doi.org/10.14257/astl.2016.134.23

Jie Chen 1, Xin Li 1, Xiuchang Zhu 1, Jin Wang 2

1 Key Lab of Image Processing and Image Communication of Jiangsu Province, Nanjing University of Posts and Telecommunications (NUPT), Nanjing 210003, China
{hucjtt@163.com, lixin@njupt.edu.cn, zhuxc@njupt.edu.cn}
2 College of Information Engineering, Yangzhou University, Yangzhou, China
jinwang@yzu.edu.cn

Abstract. The process of transforming a three-channel color image into a single-channel grayscale image is called decolorization, and it inevitably entails information loss. In this paper, we propose a method that obtains the grayscale image best preserving the global saliency of the color image. First, we convert the color image to the YUV color space, which separates the luminance and chrominance channels, and apply a parametric linear mapping to the Y, U, and V channels, selecting different parameters to generate different candidate grayscale images. Then, we compute global-contrast-based saliency maps for the color image and for each candidate grayscale image. Finally, we use the Normalized Cross-Correlation metric to select, as the decolorization result, the candidate whose saliency map is most similar to that of the color image. Experimental results show that our method retains part of the chrominance information, prevents contrast degradation among isoluminant colors, and, through its global saliency preserving objective, reduces abrupt changes and distortion around edge areas.

Keywords: Decolorization, Color-to-gray, Global saliency, Parametric linear map, Normalized Cross-Correlation.

1 Introduction

Decolorization is the process of converting a color image to its grayscale form. The most intuitive approach is to take the luminance channel of the color image (e.g., the Y channel in the YUV color space) as the decolorization result [1]. As shown in Fig. 1, this approach cannot preserve the contrast among isoluminant colors. How to preserve the structure and contrast of the color image during color-to-gray conversion is therefore an important research question for decolorization methods.

ISSN: 2287-1233 ASTL
Copyright 2016 SERSC

Fig. 1. Column (a): the original color images. Column (b): results of taking the luminance channel as the grayscale image. Column (c): results of our method.

The purpose of decolorization is to preserve as much visually meaningful information from the color image as possible, while producing a perceptually natural and pleasing grayscale image [2]. Choosing which features to preserve during the decolorization process is crucial to achieving this purpose, and largely determines the performance and effect of a method. The work of [4] chose the apparent lightness of the image to maintain the perceptual accuracy of the conversion. The method of [5] defined three visual cues, namely color spatial consistency, image structure information, and color channel perception priority, as the basis of the conversion. [6] chose the natural order of hues as the feature to preserve. The work of [7] relaxed the strict color-order constraint and aimed to maximally preserve the original color contrast. Although these decolorization methods share the same main purpose, because each makes a different choice of features, no single method is robust enough to produce satisfactory results for all kinds of images; failure cases can be found for each of them. To address this problem, decolorization methods have tried to consider multiple features simultaneously, which makes the algorithms increasingly complex. Meanwhile, research on visual saliency estimation, which is closely related to the human visual system, has drawn great attention. Saliency originates from visual uniqueness, unpredictability, rarity, or surprise, and is often ascribed to variations in image attributes such as color, gradient, edges, and boundaries [3].
Using image saliency as the preserved feature in color-to-gray conversion can cover multiple attributes of the image simultaneously, and is thus a comparatively ideal approach to image decolorization. The methods in [1][8-11] all use visual saliency as the feature guiding the color-to-gray conversion. [9] computed the global color saliency of each pixel to guide parameter selection when fusing luminance and chrominance information; this method can enhance chrominance contrast in the grayscale image, but it uses only a few points with the largest saliency values as guidance, which makes it difficult to maintain the contrast of the whole image. The methods proposed in [1][8][10] and [11] compute the region saliency of the image to guide parameter selection or adjustment to

convert the image; the first step of these methods is image segmentation, which degrades their results in the edge areas of the image. In this paper, we compute the global saliency value of every pixel without image segmentation, and use the saliency maps of the color and grayscale images to obtain the decolorized image. Experimental results show that our method can reduce the contrast loss among isoluminant colors and prevent abrupt changes or discontinuities around edge areas.

2 Global color saliency preserving decolorization method

Our decolorization method consists of three steps: parametric linear mapping, computation of global saliency maps, and similarity measurement. The whole framework is shown in Fig. 2.

Fig. 2. Framework of the proposed decolorization method.

2.1 Parametric linear mapping function

Compared with the RGB color space, the YUV color space separates the luminance information (Y) from the chrominance information (U, V), which conforms to human visual perception. We convert the input color image to the YUV color space and apply a parametric linear mapping (PLM) to the Y, U, and V channels. Because human visual perception is more sensitive to the contrast between adjacent pixels than to their absolute values, we relax the strict color-order constraint and define the mapping function as

G = w·Y + α1·U + α2·(1 − U) + β1·V + β2·(1 − V), (1)

where Y, U, V are the vectors formed by ordering the values of each channel, normalized to [0, 1], and w, α1, α2, β1, β2 are the parameters of the

mapping function, subject to the constraints α1·α2 = 0 and β1·β2 = 0, which prevent the two parameterized terms of the same chrominance channel from offsetting each other. Different parameter combinations yield different candidate grayscale images from the mapping; denote their number by n. We choose the YUV color space for this process because it lets us intuitively supplement the image's luminance information with its chrominance components.

2.2 Computation of the global contrast based visual saliency map

Existing saliency-based or saliency-referenced decolorization methods generally use the local region saliency of the image, which is affected by region segmentation and edge distortion. In this paper, we adopt the global contrast based salience estimation (GCSE) algorithm [3] to obtain the global visual saliency map without image segmentation or edge detection. The original GCSE algorithm is designed to compute saliency values for a color image; since we also need to compute them for grayscale images, we make a small adjustment to meet this demand. The GCSE algorithm computes the color distance D(·,·) in the CIELab color space:

D(c_r, c_s) = sqrt( (L_r − L_s)² + (a_r − a_s)² + (b_r − b_s)² ), (2)

where c_r, c_s are two colors in the image and L_r, a_r, b_r are the three components of color c_r in the CIELab color space. Let N_c denote the number of colors in the image and f_r the occurrence frequency of color c_r, which can be read directly from the color histogram. The saliency value of a pixel P_l with color c_r is then defined as

S(P_l) = S(c_r) = Σ_{i=1..N_c} f_i · D(c_r, c_i). (3)

To reduce the time complexity, the image colors go through RGB color-space quantization and partial discarding before Eq. (3) is evaluated. First, each of the R, G, B channels is uniformly quantized to 12 values, which reduces the number of colors from 256³ in the true color space to 12³ = 1728.
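As a concrete illustration of the 12-level channel quantization together with Eqs. (2)-(3), the histogram-based saliency computation can be sketched as follows. This is a minimal pure-Python sketch with hypothetical function names; for brevity it measures color distance directly in quantized RGB rather than converting to CIELab as the paper does, and it omits the color-discarding and smoothing refinements described below.

```python
import math
from collections import Counter

def quantize_channel(v, levels=12):
    """Uniformly quantize an 8-bit channel value into one of `levels` bins."""
    return min(v * levels // 256, levels - 1)

def color_saliency(pixels, levels=12):
    """Histogram-based global contrast saliency in the spirit of Eqs. (2)-(3).
    `pixels` is a list of (R, G, B) tuples.
    NOTE: distance is taken in quantized RGB here, not CIELab as in the paper."""
    quant = [tuple(quantize_channel(c, levels) for c in p) for p in pixels]
    hist = Counter(quant)
    n = len(quant)
    freq = {c: k / n for c, k in hist.items()}          # occurrence frequency f_r

    def dist(c1, c2):                                   # Eq. (2), simplified to RGB
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

    # Eq. (3): S(c_r) = sum over colors c_i of f_i * D(c_r, c_i)
    return {cr: sum(fi * dist(cr, ci) for ci, fi in freq.items()) for cr in freq}
```

Because the sum in Eq. (3) runs over distinct histogram colors rather than pixels, the cost is O(N_c²) instead of O(pixels²), which is exactly what the quantization and discarding steps are meant to keep small.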
Then the low-frequency colors are discarded: the reserved high-frequency colors must cover more than 95% of the image's pixels, and the discarded colors, which cover less than 5% of the pixels, are replaced by their nearest colors in the reserved set. Through this channel quantization and color discarding, the color number N_c can generally be reduced to fewer than 90.

After the reduced set of colors has been substituted into the equations to obtain their saliency values, the algorithm applies a smoothing procedure in the CIELab color space to refine them and reduce the influence of quantization noise: each color's saliency value is replaced by the weighted average of the saliency values of its m most similar colors, with similarity measured by the CIELab color distance D(·,·). Denoting the

color to be smoothed as c_t, its number of similar colors m is fixed at m = N_c/4, and the weight of its similar color c_j is defined as

Weight(c_t, c_j) = T_t − D(c_t, c_j), (4)

where

T_t = Σ_{j=1..m} D(c_t, c_j). (5)

The refined saliency value of color c_t is then

S_s(c_t) = (1 / ((m − 1) · T_t)) · Σ_{j=1..m} (T_t − D(c_t, c_j)) · S(c_j), (6)

where 1/((m − 1) · T_t) is the normalization factor. After the saliency values of the image colors have been obtained, substituting them into the corresponding pixels yields the whole saliency map.

Computing the grayscale image's saliency map follows almost the same procedure as the GCSE algorithm, except that the color histogram is replaced by the grayscale histogram and the channel quantization step is dropped in favor of using the 256-level grayscale representation directly. To keep the color and grayscale saliency maps consistent, the grayscale process likewise keeps the frequently occurring values that cover more than 95% of the pixels, replaces the low-frequency values covering less than 5%, and smooths the resulting grayscale saliency values.

2.3 Selecting the output grayscale image

To obtain the desired grayscale image from the n candidates, we use the Normalized Cross-Correlation (NCC) metric to measure the similarity between the saliency map of the color image and that of each candidate grayscale image:

NCC = ( Σ_{(x,y)} S_c(x,y) · S_g(x,y) ) / sqrt( Σ_{(x,y)} S_c(x,y)² · Σ_{(x,y)} S_g(x,y)² ), (7)

where (x, y) are the pixel coordinates and S_c, S_g are the saliency maps of the color image and of a candidate grayscale image, respectively. We choose the candidate with the highest NCC value as the output grayscale image.

3 Experiment set and analysis

The computational cost of our method lies mainly in the saliency-map estimation. In the experiments, we fix the factor w to 1, and constrain the parameters α1, α2, β1, β2 to the range [0, 0.5] to reduce n and to emphasize the importance of luminance among the three channels.
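Under these settings, the candidate generation of Eq. (1) and the NCC selection of Eq. (7) can be sketched as follows. This is a minimal pure-Python illustration with hypothetical function names: channels are flattened lists normalized to [0, 1], and the parameter grid uses the 0.1 step of the experiments.

```python
import math

def plm_params(step=0.1, pmax=0.5):
    """Pairs (p1, p2) on a grid over [0, pmax] with p1*p2 = 0, i.e. at most
    one of the pair nonzero (the constraint on (a1, a2) and on (b1, b2))."""
    grid = [round(i * step, 10) for i in range(int(round(pmax / step)) + 1)]
    return [(p, 0.0) for p in grid] + [(0.0, p) for p in grid[1:]]

def plm(y, u, v, a1, a2, b1, b2, w=1.0):
    """Eq. (1), per pixel: G = w*Y + a1*U + a2*(1-U) + b1*V + b2*(1-V)."""
    return [w * yi + a1 * ui + a2 * (1 - ui) + b1 * vi + b2 * (1 - vi)
            for yi, ui, vi in zip(y, u, v)]

def ncc(sc, sg):
    """Eq. (7): normalized cross-correlation of two flattened saliency maps."""
    num = sum(a * b for a, b in zip(sc, sg))
    den = math.sqrt(sum(a * a for a in sc) * sum(b * b for b in sg))
    return num / den
```

With a 0.1 step, each chrominance channel admits 11 valid pairs, so the full candidate set has 11 × 11 = 121 members, matching the value of n derived below.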
Because empirical results show that slightly varying the factors does not change the grayscale appearance much [12],

we discretize the parameters α1, α2, β1, β2 over the range [0, 0.5] with an interval of 0.1. Under the constraints α1·α2 = 0 and β1·β2 = 0, the number of candidate grayscale images is n = 121.

We use Cadik's decolorization dataset [13], the publicly available decolorization benchmark, to evaluate our algorithm; it contains 24 color images of various resolutions. We implemented our algorithm in Matlab; for a 390×390 image, it takes about 20 seconds on a computer with a 2.4 GHz Intel Core i5 CPU. We compare our results with those of CIE Y, Grundland07 [14], Lu12 [7], Liu13 [10], and Du15 [11]; the images are shown in Fig. 3. Our method obtains satisfactory results on all 24 images. In particular, our result for image 4 distinguishes all the color differences among the balls, which is superior to the results of CIE Y, Grundland07 [14], Lu12 [7], and Liu13 [10], without excessively enhancing the contrast between the two green balls as Du15 [11] does. Our results for images 7 and 8 unambiguously differentiate the different color regions, better than all the other compared methods. Our result for image 22 is the only one that clearly preserves the difference between the red and green leaves on the right side of the color image.

Fig. 3. Comparison of the original color images with the results of CIE Y, Grundland07 [14], Lu12 [7], Liu13 [10], Du15 [11], and ours.

For quantitative evaluation, we employ the color contrast preserving ratio (CCPR) metric, proposed by [7] and widely adopted by subsequent methods:

CCPR = #{(x, y) | (x, y) ∈ Ω, |g_x − g_y| ≥ τ} / ‖Ω‖, (8)

where Ω is the set of all pixel pairs (x, y) of the original color image whose color difference satisfies δ(x, y) ≥ τ, τ is a preset threshold, ‖Ω‖ is the number of pixel pairs in Ω, and #{(x, y) | (x, y) ∈ Ω, |g_x − g_y| ≥ τ} is the number of pixel pairs in Ω whose difference after decolorization still satisfies |g_x − g_y| ≥ τ. We evaluate the different methods by CCPR on the 24 color images of Cadik's dataset, varying τ from 1 to 40, and compute the average CCPR over the whole dataset, shown in Fig. 4. Although the average CCPR of Du15 [11] is higher than that of our method, that algorithm is based on image separation, which introduces abrupt changes between different regions, as shown for image 17 of Fig. 3; the relevant crops are enlarged in Fig. 5. Moreover, because the Du15 [11] algorithm takes the middle points of the separated regions to represent each region, it needs a large number of separations to ensure the accuracy of the representation, which incurs a high computational cost.

Fig. 4. Average CCPR on Cadik's dataset, over thresholds τ from 5 to 40, for CIE Y, Grundland07, Lu12, Liu13, Du15, and our method.

Fig. 5. Example in which the Du15 [11] algorithm turns gradually changing colors into abrupt changes in the decolorization result, while our algorithm follows the change better (left to right: original, Du15 [11], our result).

4 Concluding remarks

This paper presents a global saliency based decolorization algorithm that mimics human visual perception and tries to preserve perceptual uniformity between the color image and its grayscale version. We take global saliency values into account to avoid the influence of local block artifacts and edge distortion. The computational cost of the algorithm is nearly linear in the number n of candidate grayscale images; how to further reduce n and thus improve the algorithm's efficiency is a meaningful open problem.
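For reference, the CCPR metric of Eq. (8) used in the evaluation above can be sketched as follows; this minimal illustration (names are our own) assumes the per-pair color differences δ(x, y) and grayscale differences |g_x − g_y| have already been collected into two aligned lists.

```python
def ccpr(color_diffs, gray_diffs, tau):
    """Eq. (8): among pixel pairs whose original color difference is >= tau,
    the fraction whose grayscale difference is still >= tau after conversion.
    `color_diffs` and `gray_diffs` are per-pair difference lists in the same order."""
    omega = [g for c, g in zip(color_diffs, gray_diffs) if c >= tau]  # pairs in Omega
    if not omega:
        return 1.0  # no contrast to preserve at this threshold
    return sum(1 for g in omega if g >= tau) / len(omega)
```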

Acknowledgments. This work is partially supported by the National Natural Science Foundation of China (61071166, 61172118, 61071091, 61471201) and the Jiangsu Province Postgraduate Innovative Program of Scientific Research (CXLX12_0474). The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.

References

1. Zhou, M.Q., Sheng, B., Ma, L.Z.: Saliency preserving decolorization. Multimedia and Expo, IEEE, Chengdu, China (2014)
2. Ma, K., Zhao, T., Zeng, K., Wang, Z.: Objective quality assessment for color-to-gray image conversion. IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 4673-4685 (2015)
3. Cheng, M.M., Zhang, G.X., Mitra, N.J.: Global contrast based salient region detection. Computer Vision and Pattern Recognition, IEEE, Providence, RI (2011)
4. Smith, K., Landes, P., Thollot, J., Myszkowski, K.: Apparent greyscale: a simple and fast conversion to perceptually accurate images and video. Computer Graphics Forum, vol. 27, no. 2, pp. 193-200 (2008)
5. Song, M., Tao, D., Chen, C., Li, X., Chen, C.W.: Color to gray: visual cue preservation. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1537-1552 (2010)
6. Hsin, C.H., Le, H.N., Shin, S.J.: Color to grayscale transform preserving natural order of hues. Electrical Engineering and Informatics, IEEE, Bandung (2011)
7. Lu, C.W., Xu, L., Jia, J.Y.: Contrast preserving decolorization. Computational Photography, IEEE, Seattle, WA (2012)
8. Gooch, A., Olsen, S., Tumblin, J., Gooch, B.: Color2Gray: salience-preserving color removal. ACM Transactions on Graphics (SIGGRAPH), vol. 24, pp. 634-639 (2005)
9. Ancuti, C.A., Ancuti, C., Bekaert, P.: Enhancing by saliency-guided decolorization. Computer Vision and Pattern Recognition, IEEE, Providence, RI (2011)
10. Liu, C.W., Liu, T.L.: A sparse linear model for saliency-guided decolorization. Image Processing, IEEE, Melbourne, VIC (2013)
11. Du, H., He, S.F., Sheng, B., Ma, L.Z., Lau, R.W.H.: Saliency-guided color-to-gray conversion using region-based optimization. IEEE Transactions on Image Processing, vol. 24, no. 1, pp. 434-443 (2015)
12. Lu, C.W., Xu, L., Jia, J.Y.: Real-time contrast preserving decolorization. SIGGRAPH Asia Technical Briefs, vol. 110, no. 2, pp. 1-7 (2012)
13. Cadik, M.: Perceptual evaluation of color-to-grayscale image conversions. Pacific Graphics, vol. 27, no. 7, pp. 1745-1754 (2008)
14. Grundland, M., Dodgson, N.A.: Decolorize: fast, contrast enhancing, color to grayscale conversion. Pattern Recognition, vol. 40, no. 11, pp. 2891-2896 (2007)