IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 26, NO. 5, MAY 2017

Color Fringe Correction by the Color Difference Prediction Using the Logistic Function

Dong-Won Jang and Rae-Hong Park, Senior Member, IEEE

Abstract: This paper proposes a new color fringe correction method that preserves the object color well through color difference prediction using the logistic function. We observe two characteristics that distinguish a normal edge (NE) from an edge degraded (DE) by color fringe: 1) the DE has relatively smaller R-G and B-G correlations than the NE, and 2) the color difference in the NE can be fitted by the logistic function. The proposed method adjusts the color difference of the DE to the logistic function by maximizing the R-G and B-G correlations in the corrected color fringe image. The generalized logistic function with four parameters requires a high computational load to select the optimal parameters; in experiments, a one-parameter optimization corrects color fringe gracefully with a reduced computational load. Experimental results show that the proposed method restores the original object color in the DE well, whereas existing methods give monochromatic or distorted color.

Index Terms: Chromatic aberration, color fringe, color fringe image, color difference model, color fringe correction, cross-color correlation, logistic function fitting.

Manuscript received December 4, 2015; revised August 13, 2016 and March 6, 2017; accepted March 19, 2017. Date of publication March 24, 2017; date of current version April 11, 2017. This work was supported by Samsung Electronics Co., Ltd. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Gene Cheung. (Corresponding author: Rae-Hong Park.) The authors are with the Department of Electronic Engineering, School of Engineering, Sogang University, Seoul 04107, South Korea (e-mail: javelot87@sogang.ac.kr; rhpark@sogang.ac.kr). Digital Object Identifier 10.1109/TIP.2017.2687125

I. INTRODUCTION

CHROMATIC aberration (CA) commonly arises from the difference in refraction indices at different wavelengths of light. CA is commonly divided into two classes: axial chromatic aberration (ACA) and lateral chromatic aberration (LCA). ACA produces defocus blur, whereas LCA produces a magnification error due to the misalignment of refracted light [1]. This classification is based on Gaussian optics and the paraxial approximation, which neglect some properties of physical optics with multiple lenses. If the physical-optics properties needed to reduce the CA in an input image are not known, Gaussian optics and the paraxial approximation can be used [2]–[10]. Signal processing-based correction methods [2]–[12] are more practical than optics-based ones when the physical properties, e.g., the lens system and the sensor, are unknown. Signal-based methods can be divided into two categories [12]: image-based approaches [2]–[10] and demosaicking-based approaches [11], [12]. Although color fringes appear over the entire image, they are most commonly observed at high-contrast edges [12]. In commercial digital cameras, most image-based methods [2], [3], [5]–[10] therefore deal with color fringes at high-contrast edges to reduce the computation time with acceptable image quality. These methods usually fail to correct color fringes at the boundaries of color objects [12]. Recently, physically proper post-processing methods for CA correction have been proposed [12]–[14]. These methods correct CA not only at edges but also over the entire image.
The proposed method is one of the image-based approaches that deal with color fringes at edges while preserving the original colors of objects. Image-based approaches perform color fringe correction on full-color images, in which the false colors caused by chromatic aberration or by demosaicking failure are treated as color fringes. These approaches commonly use color difference models [2]–[5]. Color fringes have two properties in the color difference model [3]: 1) large magnitude and 2) non-monotonic behavior at high-contrast edges. Chung et al.'s method [3] clips the color difference using boundary pixels at high-contrast edges. Chang et al.'s method [4] focuses on reducing the color difference to make edges monochromatic. Jang et al.'s method reduces the color difference by fusing low-exposure images [5]. Kang's method converts purple fringes to grayscale [6]. These methods successfully remove color fringes in monochromatic objects but distort the color in color objects (see Section II-B). Kang et al. [10] proposed a color fringe correction method using a partial differential equation (PDE). The PDE-based method matches the gradients of the R and B channels with those of the G channel at edges; identical R, G, and B gradients indicate constant color differences. This method removes color fringes with the original color preserved, but it needs many iterations, which is computationally expensive [4]. Some image-based methods correct the color fringe in the CIE xyY color space [7], [8] or the YCbCr color space [9]. In full-color images, the degraded edge (DE) shows non-linearity in CIE xyY, whereas the normal edge (NE) shows linearity [15]. The NE denotes a region with no color fringe and the DE denotes a region with color fringe. Bi and Fang [7] corrected purple fringing by interpolating the xy chromaticity in the purple fringe region. Ju and Park [9] corrected the color fringe in the CbCr plane by modifying the color degree using neighboring pixels. Color fringe correction has also been modeled as a color prediction problem using neighboring pixels [5], which is similar to image concealment [16] and super-resolution [17].

In this paper, we propose a new color fringe correction method based on the color difference prediction using the logistic function of the cross-color correlations. Two characteristics are observed from a large number of transition regions detected by Chung et al.'s method [3] (see Section II-A). First, the DE has relatively smaller R-G and B-G correlations than the NE. Second, the color difference of the NE can be fitted well by the logistic function. The proposed method estimates the color difference in the DE using the logistic function.

The rest of the paper is organized as follows. In Section II, the properties of color fringe and the existing correction methods are reviewed. Section III describes the proposed color fringe correction method using the logistic function that gives the maximum R-G and B-G correlations. Experimental results and discussions are presented in Section IV. Finally, Section V concludes the paper.

Fig. 1. Characteristics of the DE and the NE. (a) Left: sample image with the DE (solid box) and the NE (dotted box); middle and right: cropped and enlarged images of the DE and the NE, respectively. (b) Intensity profiles of the DE between green leaf and sky (dash-dot line in the solid box). (c) Intensity profiles of the NE between green leaf and sky (dash-dot line in the dotted box).

II. RELATED WORK

Color fringe correction can be modeled as a color prediction problem in the sense that an original color value is estimated from the possibly distorted color values of neighboring pixels. Most digital cameras have a single charge-coupled device (CCD) with a color filter array (CFA) to reduce the sensor cost. A color difference model is used in color interpolation to estimate the color values that are missing because of the single-CCD structure [18]. The color difference model is defined as

$$K_R = R - G, \qquad K_B = B - G, \qquad (1)$$

where R, G, and B represent the color channel values of an image. Since the filter and sensor sensitivities generally overlap, the R, G, and B color channels are correlated along the wavelength dimension [19]. Based on the high correlations between color channels, the color differences K_R and K_B are flat over small regions [18].

Fig. 1 shows the characteristics of the color difference in the DE and the NE. In Fig. 1(a), the solid and dotted boxes indicate the DE and the NE, respectively. In the DE, a distorted color, red, is observed at the edge of the green leaf, whereas in the NE there is no distorted color at the edge of the green leaf. These observations can be explained by the intensity profiles in terms of the color difference model. Fig. 1(b) shows the intensity profiles of the DE, where the color differences take large positive values in the transition region, whereas Fig. 1(c) shows those of the NE, where the color differences increase monotonically in the transition region. At the top of the intensity profiles in Figs. 1(b) and 1(c), the corresponding color images are shown.

In this paper, we use the following notation. The color components of an input image I are represented by R(i, j), G(i, j), and B(i, j), where (i, j) denotes the pixel location. To detect the DE, we use the gradient value E_C(i, j) for each color channel C, C ∈ {R, G, B}. The gradient value is calculated with the vertical Sobel mask, defined as

$$E_C(i, j) = C(i-1, j-1) + 2C(i, j-1) + C(i+1, j-1) - C(i-1, j+1) - 2C(i, j+1) - C(i+1, j+1). \qquad (2)$$
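The color difference model (1) and the gradient (2) are straightforward to compute. The following minimal NumPy sketch illustrates them under stated assumptions (a float RGB image of shape (H, W, 3), hypothetical function names); it is an illustration, not the authors' implementation.

```python
import numpy as np

def color_differences(img):
    """Color difference model of (1): K_R = R - G, K_B = B - G.

    img: float array of shape (H, W, 3) in R, G, B order.
    Returns (K_R, K_B), each of shape (H, W).
    """
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    return R - G, B - G

def vertical_sobel(channel):
    """Gradient E_C of (2) for one color channel (vertical Sobel mask).

    Border pixels are left at zero for simplicity.
    """
    C = np.asarray(channel, dtype=float)
    E = np.zeros_like(C)
    E[1:-1, 1:-1] = (C[:-2, :-2] + 2.0 * C[1:-1, :-2] + C[2:, :-2]
                     - C[:-2, 2:] - 2.0 * C[1:-1, 2:] - C[2:, 2:])
    return E
```

With intensities normalized to [0, 1], the gradient of the G channel feeds the transition region detection thresholds T_1 and T_2 described in Section II-A.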
Although processing only along rows and columns may generate artifacts, many existing methods [2], [3], [7] remove color fringes properly using 1-D processing. Similar to these methods, the proposed method also operates separately in the horizontal and vertical directions, and thus we write the 1-D color components with the row index i fixed during horizontal processing, i.e.,

$$C(i, j) \rightarrow C(j), \qquad E_C(i, j) \rightarrow E_C(j). \qquad (3)$$

For the vertical processing, the horizontal Sobel mask is used. In the rest of the paper, the 1-D component index for horizontal processing is used; vertical processing is performed with the directions interchanged. Many existing methods [2]–[4], [7] correct color fringes in the horizontal and vertical directions. In experiments, even when a transition region is neither vertical nor horizontal, both the existing method [3] and the proposed method reduce color fringes well.

Most existing methods [2]–[5], [10] correct color fringes independently for the R and B channels. Our algorithm likewise processes the color differences K_R and K_B separately. In the rest of the paper, processing of the color difference K_R is described; processing of K_B is the same as that of K_R.

A. Existing Color Fringe Correction Methods

In this section, we describe the existing fringe correction methods [3], [4], [6]. Although these methods correct the color fringe in different ways, e.g., by clipping the color difference [3], reducing its absolute value [4], or converting to grayscale [6], all of them increase the absolute values of the R-G and B-G correlations.

Chung et al.'s method [3] clips the color difference K_R when it is larger than a maximum value K_max or smaller than a minimum value K_min at boundary pixels in the transition region, which is expressed as

$$\hat{K}_R(k) = \begin{cases} K_{\min}, & \text{if } K_R(k) < K_{\min} \\ K_{\max}, & \text{if } K_R(k) > K_{\max} \\ K_R(k), & \text{otherwise,} \end{cases} \qquad (4)$$

where K_max and K_min are defined by

$$K_{\max} = \max\{K_R[l(p)], K_R[r(p)]\}, \qquad K_{\min} = \min\{K_R[l(p)], K_R[r(p)]\}, \qquad (5)$$

with l(p) and r(p) representing the boundary pixels explained below. A clipped K_R is constant in the clipped region, and from (1) a constant K_R indicates that the variation of the R color equals that of the G color; in other words, the absolute value of the R-G correlation is equal to 1. Processing of K_B is the same as that of K_R.

The transition region is detected by Chung et al.'s method [3], the steps of which are summarized as follows (horizontal processing):
1. Calculate the gradient value E_C(i, j) using the vertical Sobel mask;
2. Find an initial pixel p along the horizontal line that satisfies |E_G(j)| ≥ T_1;
3. Search for pixels x, starting from p along the left and right directions, that satisfy H(x; p) ≥ T_2, where H(x; p) = max{s(p)E_R(x), s(p)E_G(x), s(p)E_B(x)} is the largest gradient magnitude among R, G, and B at x with the same gradient sign as s(p) = sgn[E_G(p)];
4. Find the left boundary pixel l(p) and the right boundary pixel r(p), at which the inequality H(x; p) ≥ T_2 does not hold for the first time.

For a more detailed description of transition region detection, interested readers are referred to [3]. Because we implemented the proposed method with the intensity range normalized to [0, 1], T_1 and T_2 were set to 0.8 and 0.15, respectively. These values are similar to those used in Chung et al.'s method [2], in which 200 and 40 were used with the intensity range 0 to 255.
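As an illustration of the clipping in (4) and (5), here is a minimal sketch (assumed function name and array conventions, not the authors' implementation) that clips the color difference of one detected transition region, given its boundary indices l(p) and r(p).

```python
import numpy as np

def chung_clip(K_row, left, right):
    """Clip the color difference of one transition region per (4)-(5).

    K_row : 1-D NumPy array of color differences K_R (or K_B) along a row.
    left, right : boundary pixel indices l(p) and r(p) of the region.
    Returns a copy of K_row with the region [left, right] clipped to
    [K_min, K_max] taken from the two boundary pixels.
    """
    K_max = max(K_row[left], K_row[right])
    K_min = min(K_row[left], K_row[right])
    out = K_row.copy()
    out[left:right + 1] = np.clip(K_row[left:right + 1], K_min, K_max)
    return out
```

Adding the clipped difference back to G, per (1), gives the corrected R values in the region.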
Chang et al.'s method [4] consists of two steps: 1) transient improvement (TI) and 2) false color filtering. In the TI step, most ACAs are corrected by boosting the high-frequency components of the R and B colors. In the false color filtering step, LCAs and the remaining ACAs are corrected by reducing the color differences; the corrected color differences are obtained by a weighted average of clipped neighboring color differences. Chang et al.'s method is applied without detecting transition regions. In a (2L+1) × (2L+1) window centered at pixel (i, j), the clipping algorithm for the horizontal direction is defined as

$$\mathrm{clip}\{K_R(j+k)\} = \begin{cases} \min\{K_R(j+k), K_R(j)\}, & \text{if } K_R(j) > 0 \\ \max\{K_R(j+k), K_R(j)\}, & \text{if } K_R(j) < 0 \\ K_R(j+k), & \text{otherwise,} \end{cases} \qquad (6)$$

where k represents the position index, −L ≤ k ≤ L. The clipping algorithm for the vertical direction is defined similarly.
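A sketch of the windowed clipping in (6) follows; it is an illustrative reading of the equation for the horizontal direction only, with an assumed function name, not Chang et al.'s released code.

```python
import numpy as np

def chang_window_clip(K_row, j, L):
    """Clip neighboring color differences around pixel j per (6).

    K_row : 1-D NumPy array of color differences along a row.
    j     : center pixel index of the (2L+1)-wide horizontal window.
    L     : half window size.
    Returns the clipped differences for positions j-L .. j+L
    (truncated at the row borders).
    """
    lo, hi = max(j - L, 0), min(j + L, len(K_row) - 1)
    window = K_row[lo:hi + 1].copy()
    center = K_row[j]
    if center > 0:
        window = np.minimum(window, center)   # min{K(j+k), K(j)}
    elif center < 0:
        window = np.maximum(window, center)   # max{K(j+k), K(j)}
    return window
```

In [4], such clipped windows are then combined by a weighted average to form the corrected color difference at the window center.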

Fig. 2. Intensity profiles in the DE. The DE is divided into three sub-regions: object (O), change (C), and background (B) sub-regions. From top to bottom, the intensity profiles of a red object (Flower), a green object (Leaf), and a blue object (Book) are shown. (a) Intensity profiles of the input images. (b) Intensity profiles of the recovered images using Chung et al.'s method [3]. (c) Intensity profiles of the recovered images using Chang et al.'s method [4].

Kang's method [6] removes purple fringes by converting RGB into grayscale. Since R, G, and B are the same in the corrected region, the absolute value of the R-G correlation is equal to 1. In other words, Kang's method corrects purple fringes by maximizing the absolute values of the R-G and B-G correlations.

B. Color Fringe Correction of Color Objects

As shown in Fig. 2, the existing methods [3], [4] successfully reduce the color fringe. However, these methods convert the color fringe to grayscale and thus cannot preserve the original object color. The transition region detected by Chung et al.'s method [3] is divided into three sub-regions depending on the color difference values: object (O), change (C), and background (B) sub-regions. The color differences in a color object sub-region are typically positive (or negative) values, whereas those in a monochromatic object sub-region are commonly zero. In the background sub-region, the color differences are typically small due to saturation. In the DE, the color differences in the change sub-region are large positive (or negative) values, whereas those of the NE are small. Grayscale artifacts occur when the color difference of the object sub-region is negative while that of the change sub-region is positive; the opposite case, a positive color difference in the color object sub-region with a negative color difference in the change sub-region, is also observed.

Fig. 2 shows the intensity profiles of color fringes in a red object (Flower), a green object (Leaf), and a blue object (Book). Fig. 2(a) shows the intensity profiles of the input images, whereas Figs. 2(b) and 2(c) show the corrected images using Chung et al.'s method [3] and Chang et al.'s method [4], respectively. In the first row of Fig. 2, where the color difference K_B of the object sub-region is positive and the color difference K_B of the change sub-region is negative, Chung et al.'s and Chang et al.'s methods clip the color difference K_B to zero in the change sub-region; thus, the recovered image looks gray in the DE. In the last two rows of Fig. 2, where the color difference K_R of the object sub-region is negative and the color difference K_R of the change sub-region is positive, the two existing methods convert the color fringe to gray. Since Chung et al.'s and Chang et al.'s methods clip the color differences in the DE, the color difference values in the change sub-region are typically set to zero, which is the same as the color difference of the background sub-region, K_max. However, in this case, to restore the original color, it is desirable that the color difference of the change sub-region take the same value as the color difference of the object sub-region, K_min. By reducing the absolute value of the color difference, the existing methods [3], [4] successfully remove color fringes in monochromatic objects while distorting the original color in color objects.

III. PROPOSED COLOR FRINGE CORRECTION

Color fringe correction methods are divided into two classes: 1) methods that detect candidate regions and then correct them [2], [3], [5]–[10] and 2) methods that correct all regions without candidate region detection [4]. The proposed method belongs to the first class. This paper proposes a color fringe correction method that predicts the color difference K_R (or K_B) using the logistic function that maximizes the absolute value of the inter-color correlation (R-G or B-G correlation). The existing color fringe correction methods aim to achieve a monotonically increasing (decreasing) color difference curve [3], [4] with small magnitudes [4]. However, these goals are not sufficient to restore the original object color accurately, as shown in Fig. 2. This paper instead pursues the following goals to remove color fringes with the original object color preserved: 1) a monotonically increasing (decreasing) color difference, fitted by the logistic function, and 2) a maximized absolute value of the R-G correlation (or the B-G correlation).

A. Characteristics of the DE and the NE

To analyze the characteristics of the NE, we manually classified the NE and the DE using the characteristics of the color differences from a large number of transition regions detected by Chung et al.'s method [3]. About 30,000 NEs and 10,000 DEs were obtained from 20 real images captured with two different cameras. We observe two characteristics of the NE that differ from those of the DE: 1) the absolute values of the R-G and B-G correlations of the NE are larger than those of the DE; 2) the color differences of the NE can be fitted by the logistic function, whereas those of the DE cannot.

Fig. 3. Histograms of the R-G correlation in the NE and the DE. The histogram of the B-G correlation is similar to that of the R-G correlation. (a) NE. (b) DE.

In this paper, we modify the R and B channels to correct color fringes, using the G channel as the reference.
Most existing color fringe correction methods [2]–[6] use the G channel as the reference; e.g., the Bayer CFA, the most frequently used CFA pattern, assigns a higher resolution to G pixels to increase the quality of the recovered image [20]. Fig. 3 shows the histograms of the R-G correlation in the NEs and the DEs. In the transition region, the R-G correlation is defined as

$$\rho(R, G) = \frac{\sum_{k=l(p)}^{r(p)} \left(R(k) - m_R\right)\left(G(k) - m_G\right)}{\sqrt{\sum_{k=l(p)}^{r(p)} \left(R(k) - m_R\right)^2}\,\sqrt{\sum_{k=l(p)}^{r(p)} \left(G(k) - m_G\right)^2}}, \qquad (7)$$

where l(p) and r(p) are the pixel indices of the left and right boundary pixels of the transition region, m_R and m_G indicate the average intensities of the R and G channels in the transition region, respectively, and k denotes the pixel index. The B-G correlation is defined similarly. Since the filter and sensor sensitivities generally overlap, the R, G, and B color channels are correlated along the wavelength dimension [19]. Most demosaicking methods use the high correlations between color channels, which are not satisfied when CA appears [12]. As shown in Fig. 3, the correlations between color channels in the NEs are nearly one, whereas those in the DEs are relatively small. Thus, the first characteristic of the NE is that the absolute values of the R-G and B-G correlations are larger than those of the DE.

The second characteristic of the NE is that the color difference is well fitted by the logistic function. In experiments, we observe that the color difference in the NE is well fitted by the logistic function, whereas that in the DE is not. Fig. 4 shows the color difference K_R in the NE and the DE as well as the corresponding fitted logistic functions. The horizontal and vertical axes indicate the normalized pixel position and the normalized color difference value, respectively. In Fig. 4, the color difference value at l(p) is set to zero and that at r(p) is set to one. Similar to K_R, the color difference K_B in the NE can be fitted well by the logistic function, whereas the color difference K_B in the DE cannot. Table I shows the average L1 error of the fitted logistic function for the color differences K_R and K_B using 30,000 NEs and 10,000 DEs. In terms of the average L1 error, the color difference in the NEs is well fitted by the logistic function. Based on these characteristics of the color differences in the NEs, this paper proposes the color difference prediction using the logistic function.
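The correlation in (7) is a Pearson correlation restricted to the pixels of a transition region; a minimal sketch (assumed function name) is shown below.

```python
import numpy as np

def transition_correlation(R_row, G_row, left, right):
    """R-G correlation of (7) over the transition region [left, right]."""
    r = np.asarray(R_row, dtype=float)[left:right + 1]
    g = np.asarray(G_row, dtype=float)[left:right + 1]
    rc = r - r.mean()                     # deviations from m_R
    gc = g - g.mean()                     # deviations from m_G
    denom = np.sqrt((rc * rc).sum() * (gc * gc).sum())
    return float((rc * gc).sum() / denom) if denom > 0 else 0.0
```

According to Fig. 3, the magnitude of this correlation is close to 1 for NEs and noticeably smaller for DEs; the same routine applies to the B-G correlation.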

Fig. 4. Typical color difference K_R in the NE and the DE. Solid and dotted lines show the color differences and their corresponding fitted logistic functions, respectively. Similar to K_R, the color difference K_B in the NE can be fitted by the logistic function, whereas the color difference K_B in the DE cannot. The horizontal and vertical axes indicate the normalized pixel position and the normalized color difference value, respectively. (a) Color difference in the NE. (b) Color difference in the DE.

TABLE I: Average L1 error of fitted logistic functions in the NEs and the DEs.

Fig. 5. Block diagram of the proposed color fringe correction method.

B. Proposed Color Fringe Correction

Since the color fringe is observed independently in the R and B channels, optimization of the R and B color channels is performed independently in each transition region of the color difference channels K_R and K_B, respectively. In the rest of the paper, processing of the color difference K_R is described; processing of K_B is the same as that of K_R. Fig. 5 shows a block diagram of the proposed color fringe correction method, which consists of four main blocks: color transform, transition region detection, color difference optimization, and color fringe correction.

1) Color Transform and Transition Region Detection: In the experiments on the Bird image, we observed that several color fringes could not be removed using Chung et al.'s detection method; because of gradient inversion, the detected transition regions are not sufficient to remove the color fringes. We therefore simply modified Chung et al.'s termination condition:
4. Find the left boundary pixel l(p) and the right boundary pixel r(p), at which the inequality H(x; p) ≥ T_2 does not hold at two consecutive pixels for the first time.

Fig. 6. Detection of the transition region with gradient inversion. A double arrow indicates a selected (or detected) transition region. (a) Manually selected transition region. (b) Automatically detected transition region using Chung et al.'s method [3]. (c) Automatically detected transition region using the proposed method.

Fig. 6 shows the intensity profiles of a transition region with gradient inversion. Fig. 6(a) shows a manually selected transition region (indicated by an arrow), whereas Figs. 6(b) and 6(c) show the regions automatically detected by Chung et al.'s method and the proposed method, respectively. When gradient inversion exists, Chung et al.'s method terminates the transition region at the pixel of gradient inversion, whereas the proposed method does not. Since the transition region detected by Chung et al.'s method already satisfies the properties of the color differences in the NE, the color fringes remain after color fringe correction. When the gradient inversion extends over more than two successive pixels, the proposed method has the same problem; however, in experiments, this case is rarely observed. The transition region detection is not the key contribution of the proposed algorithm and can thus be replaced by Chung et al.'s method [3] or Kang et al.'s method [10].
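To make the detection procedure concrete, the sketch below grows a region to the left and right of one seed pixel p along a row and stops only after the gradient test fails at two consecutive pixels, as in the modified step 4 above. It is a plausible reading of the described procedure under stated assumptions (normalized intensities, a seed already satisfying the T_1 test), not the authors' code.

```python
import numpy as np

def detect_transition_region(E_R, E_G, E_B, p, T2=0.15, consec=2):
    """Grow a transition region around a seed pixel p (|E_G[p]| >= T1 assumed).

    E_R, E_G, E_B : 1-D gradient profiles of one row, from (2)-(3).
    consec=1 reproduces Chung et al.'s original termination; consec=2 is
    the modified step 4, which tolerates a single gradient inversion.
    Returns (l, r): the boundary pixels at which H(x; p) >= T2 first
    fails on the left and on the right of p.
    """
    s = np.sign(E_G[p])

    def H(x):                              # largest same-signed gradient at x
        return max(s * E_R[x], s * E_G[x], s * E_B[x])

    def grow(step):
        x, misses, first_miss = p, 0, p
        while 0 <= x + step < len(E_G):
            x += step
            if H(x) >= T2:
                misses = 0
            else:
                if misses == 0:
                    first_miss = x         # first pixel of a failing run
                misses += 1
                if misses >= consec:
                    return first_miss      # boundary pixel l(p) or r(p)
        return x                           # hit the end of the row

    return grow(-1), grow(+1)
```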
2) Color Difference Prediction and Color Fringe Correction: As shown in Fig. 3, the absolute values of the R-G and B-G correlations in the NE are nearly 1. In the detected transition region, the proposed method therefore finds a set of color values whose absolute value of the R-G correlation is nearly one, subject to boundary conditions:

$$\hat{R}^{*} = \arg\min_{\hat{R}} \left\{ 1 - \left|\rho(\hat{R}, G)\right| + \lambda \left[ \left(\hat{R}(l(p)) - R(l(p))\right)^2 + \left(\hat{R}(r(p)) - R(r(p))\right)^2 \right] \right\}, \qquad (8)$$

where $\hat{R}$ represents the set of estimated color values. Using (1) with the G channel as the reference, the estimated color values are obtained by

$$\hat{R} = \hat{K}_R + G, \qquad (9)$$

where $\hat{K}_R$ is the set of estimated color differences. Since, in the NE, the color difference can be fitted by the logistic function, the set of estimated color differences $\hat{K}_R$ is represented by the logistic function

$$\hat{K}_R(x) = K_{\min} + \frac{K_{\max} - K_{\min}}{1 + e^{-\gamma(x - \tau)}}. \qquad (10)$$
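Equations (8)–(10) amount to fitting a two-parameter logistic curve to the color difference of a transition region so that the rebuilt R channel correlates maximally with G while matching R at the two boundary pixels. A minimal sketch using SciPy's Nelder-Mead simplex optimizer [21] is given below; the boundary weight `lam`, the function names, and the handling of degenerate regions are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def logistic_diff(x, gamma, tau, K_min, K_max):
    """Logistic color-difference model of (10)."""
    return K_min + (K_max - K_min) / (1.0 + np.exp(-gamma * (x - tau)))

def correct_region(R_row, G_row, left, right, lam=10.0):
    """Fit (8)-(10) over one transition region and return corrected R values.

    lam is an assumed weight for the boundary terms of (8); the paper does
    not report the value it uses.
    """
    R = np.asarray(R_row, dtype=float)[left:right + 1]
    G = np.asarray(G_row, dtype=float)[left:right + 1]
    x = np.arange(left, right + 1, dtype=float)
    K = R - G                                           # K_R from (1)
    K_max, K_min = max(K[0], K[-1]), min(K[0], K[-1])   # asymptotes, as in (5)

    def cost(params):
        gamma, tau = params
        R_hat = logistic_diff(x, gamma, tau, K_min, K_max) + G   # (9), (10)
        if R_hat.std() < 1e-12 or G.std() < 1e-12:
            rho = 0.0                                   # degenerate flat region
        else:
            rho = np.corrcoef(R_hat, G)[0, 1]           # rho(R_hat, G), cf. (7)
        boundary = (R_hat[0] - R[0]) ** 2 + (R_hat[-1] - R[-1]) ** 2
        return 1.0 - abs(rho) + lam * boundary          # cost of (8)

    # Initial values reported in Sec. III-D: gamma = 1, tau at mid-region.
    x0 = np.array([1.0, left + 0.5 * (right - left)])
    res = minimize(cost, x0, method='Nelder-Mead', options={'maxiter': 20})
    gamma, tau = res.x
    return logistic_diff(x, gamma, tau, K_min, K_max) + G
```

Fixing the sharpness at a typical value (the paper reports γ = 1.2) and optimizing τ alone gives the one-parameter variant used in the experiments of Section IV.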

Algorithm 1: Pseudocode of the proposed color fringe correction for horizontal processing.

The generalized logistic function has four parameters. This paper optimizes only two of them (γ and τ) to reduce the computational load, with the upper and lower asymptotes set to the maximum value K_max and the minimum value K_min, respectively, which are obtained by (5). The optimum parameters γ and τ are calculated for each transition region. Using (8)–(10), the color value can be estimated by adjusting the parameters in (10), which are optimized with the Nelder-Mead method [21]. There is no guarantee that the Nelder-Mead method finds the global optimum of the cost function; in experiments, however, it usually does. When the color difference K_R is nearly constant in the transition region, the logistic parameters γ and τ that minimize the cost function are not unique, but in this case the color intensity error caused by different values of γ and τ is small. Algorithm 1 summarizes the proposed color fringe correction for horizontal processing; processing along the vertical direction is defined similarly.
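Since the pseudocode of Algorithm 1 is not reproduced here, the sketch below outlines one plausible horizontal pass that ties the earlier pieces together, using the hypothetical helpers `vertical_sobel`, `detect_transition_region`, and `correct_region` from the previous sketches; the thresholds and traversal order are assumptions, not the authors' exact procedure.

```python
import numpy as np

def correct_horizontal(img, T1=0.8, T2=0.15):
    """One horizontal pass of logistic-function color fringe correction.

    img : float RGB image normalized to [0, 1], shape (H, W, 3).
    The R and B channels are corrected row by row; G is the reference.
    """
    out = img.copy()
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    E_R, E_G, E_B = (vertical_sobel(c) for c in (R, G, B))

    for i in range(img.shape[0]):
        seeds = np.where(np.abs(E_G[i]) >= T1)[0]   # step 2: strong G edges
        for p in seeds:
            l, r = detect_transition_region(E_R[i], E_G[i], E_B[i], p, T2)
            if r - l < 2:
                continue                            # too short to fit (10)
            # Correct R and B independently in the transition region.
            out[i, l:r + 1, 0] = correct_region(R[i], G[i], l, r)
            out[i, l:r + 1, 2] = correct_region(B[i], G[i], l, r)
    return np.clip(out, 0.0, 1.0)
```

A complete correction would add the analogous vertical pass using the horizontal Sobel mask, as noted after (3).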
C. Logistic Function for Color Fringe Correction

In real-world images, the color differences K_R and K_B are relatively uniform over small regions [18] and increase rather monotonically in edge regions [3]. From these properties, the color difference in the NE may be flat over the object (O) and background (B) sub-regions while increasing monotonically in the center of the change (C) sub-region. The distribution of color differences may therefore be fitted by a sigmoid function, which is flat over both sides of the DE and increases monotonically between the object and background sub-regions. A sigmoid function is commonly used in neural networks as a transfer function [22]; in digital image applications, it is used for surface fitting [23]. In the NE, the color difference can be fitted by a sigmoid function.

In this paper, some typical sigmoid functions are considered for selecting the monotonically increasing (decreasing) function. A candidate function must provide: 1) flexible sharpness and 2) flexible offsets along the x-axis and y-axis. The logistic function is one such sigmoid function. The hyperbolic tangent function, for example, can be represented by the logistic function, and thus similar types of functions are not considered. As another sigmoid function, the algebraic function is selected.

TABLE II: Candidate sigmoid functions.

Table II shows the two sigmoid functions, logistic and algebraic. Each can be adjusted using four parameters: upper asymptote, lower asymptote, sharpness, and offset. The algebraic function is a sigmoid function only when the sharpness γ is an even number, and thus the step size of the sharpness of the logistic function is smaller than that of the algebraic function; in other words, the logistic function is more flexible than the algebraic function.

Fig. 7. Original and recovered images in the NE. (a) Intensity profiles of the original image. (b) Intensity profiles of the recovered image using the logistic function. (c) Intensity profiles of the recovered image using the algebraic function.

Fig. 7 shows the original and recovered intensity profiles in the NE using (8). Fig. 7(a) shows the original intensity profiles of the NE, and Figs. 7(b) and 7(c) show the corresponding recovered results using the logistic and algebraic functions, respectively. The logistic function preserves the original color of the NE well. This paper therefore uses the logistic function to estimate the color difference in the change sub-region of the DE. The proposed correction method is performed separately in the horizontal and vertical directions.

D. Implementation Details

The proposed method solves (8)–(10) using the Nelder-Mead method [21], which has a high computational complexity. The Nelder-Mead method can be accelerated by using proper initial values. Optimization is terminated within 20 iterations, which takes 1.2 ms per transition region on average. The proposed method was implemented in MATLAB on an Intel i5 3.4 GHz CPU with 4 GB of memory. In experiments, the initial values of the sharpness γ and the offset τ are set to 1 and half the length of the transition region (|l(p) − r(p)|/2), respectively.

Fig. 8. Histograms of the optimized sharpness γ and offset τ. (a) Optimized sharpness. (b) Optimized offset.

Fig. 9. Original and recovered Book images. (a) Input image with the DE. (b) Recovered image by a one-parameter optimization. (c) Recovered image by a two-parameter optimization.

Fig. 8 shows the histograms of the sharpness γ and the offset τ optimized over 20 real images. As shown in Fig. 8, the values of the sharpness γ are typically 1.2 in all real images, whereas the values of the offset τ are spread out over the range [0, 1]. To reduce the computational load, we therefore optimize only the offset τ in (10), with the sharpness γ set to 1.2. A one-parameter optimization usually takes 0.7 ms per transition region. The one-parameter optimization, in which only the offset parameter is optimized, corrects color fringe gracefully while reducing the computational load by roughly a factor of two compared with the two-parameter optimization. Fig. 9 shows the images recovered using the two-parameter and one-parameter optimizations; both the offset and sharpness parameters are selected in the two-parameter optimization, whereas only the offset parameter is selected in the one-parameter optimization. Using the one-parameter optimization, we reduce the computation time by about 40% on average with a slight PSNR degradation of about 0.02 dB.

IV. EXPERIMENTAL RESULTS AND DISCUSSIONS

In this section, the performance of the color fringe correction methods is evaluated. For quantitative comparison, the peak signal-to-noise ratio (PSNR) [3] is used, which is defined by

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\sigma^2}\right), \qquad (11)$$

where the peak intensity value is 255 (i.e., quantized to 8 bits) and σ represents the standard deviation of intensity in the manually selected regions. Chung et al. used a small patch for the PSNR calculation: if no color fringes appear, either side of an edge can be considered a homogeneous area with small variance [3], and in small regions one side of an edge is usually flat, as shown in Fig. 10. Similar to Chung et al.'s method, we show the masks used for the PSNR calculation; the larger the PSNR, the better the recovered image. White regions in the selected binary masks represent manually detected color fringe regions used for performance comparison in terms of the PSNR (see Table III).
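The PSNR of (11) is computed from the intensity standard deviation inside a manually selected mask rather than against a reference image; a minimal sketch (assumed function name and mask convention) is given below.

```python
import numpy as np

def mask_psnr(channel, mask):
    """PSNR of (11) for one color channel inside a binary mask.

    channel : 2-D array of 8-bit intensities (0..255).
    mask    : boolean array of the same shape; True marks the
              manually selected fringe region.
    """
    values = np.asarray(channel, dtype=float)[mask]
    sigma = values.std()
    if sigma == 0:
        return float('inf')          # perfectly flat region
    return 10.0 * np.log10(255.0 ** 2 / sigma ** 2)
```

Applied to the manually selected masks of Fig. 10, this measure yields the values compared in Table III.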
Three existing methods are used for performance comparison with the proposed method: Chung et al.'s method [3], Chang et al.'s method [4], and Kang et al.'s method [10]. All of these methods remove the color fringes in RGB space. The recovered images of the three existing methods [3], [4], [10] are obtained with our own implementations. The proposed method uses the one-parameter optimization. Fig. 10 shows a comparison of the three existing methods with the proposed method for a monochromatic object (Bird) and color objects (Book, Fence, Flower, and Leaves). In Fig. 10, from top to bottom, the input images, the images recovered using Chung et al.'s method, Chang et al.'s method, Kang et al.'s method, and the proposed method, and the selected mask regions for the PSNR calculation are shown. For each image, a cropped and enlarged version is shown. Fig. 11 shows the intensity profiles of each image.

In the Bird image, which contains a monochromatic object with color fringe, the three existing methods incompletely remove the red fringe, whereas the proposed method removes the red fringe on the chest of the bird relatively well. Kang et al.'s method removes color fringes well on the neck of the bird. Chung et al.'s method produces a horizontal color fringe that comes from incorrectly determined transition regions. In the images recovered by Kang et al.'s method, Chang et al.'s method, and the proposed method, the color fringe is removed well with little difference, as shown in Figs. 10(a) and 11(a). For color objects, the two existing methods [3], [4] remove color fringe by converting color to gray, as shown in Figs. 10(b)–10(e), whereas the proposed method and Kang et al.'s method correct color fringe with the color of the chromatic objects preserved. In the Book image in Figs. 10(b) and 11(b), which contains a blue object with color fringe, the two existing methods [3], [4] remove the color fringe but lose the original object color, whereas the proposed method preserves the original object color.

Fig. 10. Comparison of three existing methods with the proposed method. (a) Bird. (b) Book. (c) Fence. (d) Flower. (e) Leaves.

In the Fence image in Figs. 10(c) and 11(c), which contains a green object with color fringe, the two existing methods [3], [4] convert the color fringe to grayscale and lose the original object color, whereas the proposed method and Kang et al.'s method preserve the original object color. The Book and Fence images are failure cases of the two existing methods because the color differences are negative in the color object. In the Book image, K_B is negative in the blue object region, whereas K_B has a large positive value in the DE. Similarly, in the Fence image, K_R and K_B have negative values in the green object region and large positive values in the DE. Because the two existing methods [3], [4] reduce the magnitudes of the color differences in the DE, they convert the color fringe to grayscale in the DE of a color object (green or blue), as shown in Fig. 11. In the Flower image in Figs. 10(d) and 11(d), which contains a red object with color fringe, K_B has a positive value in the red object sub-region and a large negative value in the DE. The Flower image has a more complex DE structure than the Book and Fence images. Chung et al.'s method does not completely reduce the color fringe, and Chang et al.'s method removes the color fringe with a small color change. The proposed method corrects color fringes by converting the degraded color into the correct color; qualitatively, it corrects color fringes well without introducing color artifacts in the color object. In the Leaves image in Figs. 10(e) and 11(e), which contains green objects with color fringe, the two existing methods [3], [4] convert the color fringe to grayscale and Kang et al.'s method leaves purple fringes, whereas the proposed method removes color fringes with the original object color preserved.

Fig. 11. Intensity profiles of Fig. 10, shown with the same layout as Fig. 10. (a) Bird. (b) Book. (c) Fence. (d) Flower. (e) Leaves.

TABLE III: Performance comparison of four methods in terms of the PSNR (unit: dB).

Table III shows the PSNR comparison of the three existing methods and the proposed method in the selected regions of the recovered images, using the corresponding masks shown in Fig. 10. PSNR_G indicates the desirable PSNR of the non-degraded color, green, whereas PSNR_D is calculated by averaging the standard deviations of the R, G, and B colors and applying (11) [3]. In the NE, the color of the object sub-region changes little, so the standard deviation is small; in other words, the PSNR in the NE is larger than that in the DE. Chung et al. [3] observed that the PSNR of the recovered image is similar to PSNR_G. In a color object, since the standard deviation of the dominant color is smaller than that of green, the PSNR of the recovered image is larger than PSNR_G. For the Book and Flower images, whose object sub-regions have blue and red colors, respectively, the PSNRs of the recovered images are larger than PSNR_G. In Table III, the best and second-best results are shown in bold and underlined, respectively, for each image. The proposed method shows better color fringe correction performance than the three existing methods in all cases, especially when the color fringe appears in color objects.

V. CONCLUSION

This paper proposes a new color fringe correction method based on the color difference prediction using the logistic function that gives the maximum R-G and B-G correlations. The correction is based on our observation that the color difference in the NE is fitted well by the logistic function, which gives the maximum R-G and B-G correlations. The proposed method not only removes color fringe but also restores the original color well in the DE. Qualitative and quantitative performance comparisons show that the proposed method outperforms the existing methods in terms of preserving the original color and the PSNR. Future work will focus on efficient region-based color difference prediction.

REFERENCES

[1] P. Mouroulis and J. Macdonald, Geometrical Optics and Optical Design. London, U.K.: Oxford Univ. Press, 1997.
[2] S.-W. Chung, B.-K. Kim, and W.-J. Song, "Detecting and eliminating chromatic aberration in digital images," in Proc. IEEE Int. Conf. Image Process., Nov. 2009, pp. 3905–3908.
[3] S.-W. Chung, B.-K. Kim, and W.-J. Song, "Removing chromatic aberration by digital image processing," Opt. Eng., vol. 46, no. 6, pp. 067002-1–067002-10, Jun. 2010.
[4] J. Chang, H. Kang, and M. G. Kang, "Correction of axial and lateral chromatic aberration with false color filtering," IEEE Trans. Image Process., vol. 22, no. 3, pp. 1186–1198, Mar. 2013.
[5] D.-W. Jang, H.-S. Kim, C.-D. Jung, and R.-H. Park, "Color fringe correction based on image fusion," in Proc. IEEE Int. Conf. Image Process., Oct. 2014, pp. 1817–1821.
[6] S. B. Kang, "Automatic removal of purple fringing from images," U.S. Patent 7 577 292, Aug. 18, 2009.
[7] J. Bi and Z. Fang, "Detection and correction of purple fringing in narrow region," in Proc. IEEE Int. Conf. Signal Process. Commun. Comput., Aug. 2013, pp. 1–4.
[8] D.-K. Lee, B.-K. Kim, and R.-H. Park, "Colourisation in Yxy colour space for purple fringing correction," IET Image Process., vol. 6, no. 7, pp. 891–900, Oct. 2012.
[9] H.-J. Ju and R.-H. Park, "Colour fringe detection and correction in YCbCr colour space," IET Image Process., vol. 7, no. 4, pp. 1397–1409, Jun. 2013.
[10] H. Kang, S.-H. Lee, J. Chang, and M. G. Kang, "Partial differential equation based approach for removal of chromatic aberration with local characteristics," J. Electron. Imag., vol. 19, no. 3, pp. 033016-1–033016-8, Sep. 2010.
[11] C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf, "Non-stationary correction of optical aberrations," in Proc. IEEE Int. Conf. Comput. Vis., Nov. 2011, pp. 659–666.
[12] J. T. Korneliussen and K. Hirakawa, "Camera processing with chromatic aberration," IEEE Trans. Image Process., vol. 23, no. 10, pp. 4539–4552, Oct. 2014.
[13] A. Fazekas and L. Tóth, "Filtering chromatic aberration for wide acceptance angle electrostatic lenses," IEEE Trans. Image Process., vol. 23, no. 7, pp. 2834–2841, Jul. 2014.
[14] A. Fazekas, H. Daimon, H. Matsuda, and L. Tóth, "Filtering chromatic aberration for wide acceptance angle electrostatic lenses II: Experimental evaluation and software-based imaging energy analyzer," IEEE Trans. Image Process., vol. 25, no. 8, p. 3638, Aug. 2016.
[15] I. Yerushalmy and H. Hel-Or, "Digital image forgery detection based on lens and sensor aberration," Int. J. Comput. Vis., vol. 92, no. 1, pp. 71–91, Mar. 2011.
[16] J. Rombaut, A. Pizurica, and W. Philips, "Locally adaptive intrasubband interpolation of lost low frequency coefficients in wavelet coded images," in Proc. IEEE Int. Conf. Image Process., Sep./Oct. 2007, pp. IV-257–IV-260.
[17] F. Zhou, W. Yang, and Q. Liao, "Interpolation-based image super-resolution using multisurface fitting," IEEE Trans. Image Process., vol. 21, no. 7, pp. 3312–3318, Jul. 2012.
[18] S.-C. Pei and I.-K. Tam, "Effective color interpolation in CCD color filter arrays using signal correlation," IEEE Trans. Circuits Syst. Video Technol., vol. 13, no. 6, pp. 503–513, Jun. 2003.
[19] D. Alleysson, S. Süsstrunk, and J. Hérault, "Linear demosaicing inspired by the human visual system," IEEE Trans. Image Process., vol. 14, no. 4, pp. 439–449, Apr. 2005.
[20] J. Adams, K. Parulski, and K. Spaulding, "Color processing in digital cameras," IEEE Micro, vol. 18, no. 6, pp. 20–30, Nov./Dec. 1998.
[21] J. A. Nelder and R. Mead, "A simplex method for function minimization," Comput. J., vol. 7, no. 4, pp. 308–313, 1965.
[22] W. Duch and N. Jankowski, "Survey of neural transfer functions," Neural Comput. Surv., vol. 2, no. 1, pp. 163–213, 1999.
[23] D. R. Iskander, "A parametric approach to measuring limbus corneae from digital images," IEEE Trans. Biomed. Eng., vol. 53, no. 6, pp. 1134–1140, Jun. 2006.

Dong-Won Jang received the B.S. and M.S. degrees in electronics engineering from Sogang University, Seoul, South Korea, in 2012 and 2015, respectively, where he is currently pursuing the Ph.D. degree in electronics engineering. His current research interests include deep learning-based color fringe correction, color image dehazing, and image fusion.

Rae-Hong Park (S'76-M'84-SM'99) received the B.S. and M.S. degrees in electronics engineering from Seoul National University, Seoul, South Korea, in 1976 and 1979, respectively, and the M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, USA, in 1981 and 1984, respectively. In 1984, he joined the faculty of the Department of Electronic Engineering, Sogang University, Seoul, where he is currently a Professor. In 1990, he spent his sabbatical year as a Visiting Associate Professor with the Computer Vision Laboratory, Center for Automation Research, University of Maryland at College Park. In 2001 and 2004, he spent sabbatical semesters with the Digital Media Research and Development Center (DTV image/video enhancement), Samsung Electronics Co., Ltd., Suwon, South Korea. In 2012, he spent a sabbatical year with the Digital Imaging Business (Research and Development Team) and the Visual Display Business (Research and Development Office), Samsung Electronics Co., Ltd., Suwon, South Korea. His current research interests include video communication, computer vision, and pattern recognition. He served as the Editor of the Korea Institute of Telematics and Electronics (KITE) Journal of Electronics Engineering from 1995 to 1996. Dr. Park was a recipient of a 1990 Post-Doctoral Fellowship presented by the Korea Science and Engineering Foundation, the 1987 Academic Award presented by the KITE, the 2000 Haedong Paper Award presented by the Institute of Electronics Engineers of Korea, the 1997 First Sogang Academic Award, and the 1999 Professor Achievement Excellence Award presented by Sogang University. He was a co-recipient of the Best Student Paper Award at the IEEE International Symposium on Multimedia (ISM 2006) and the IEEE International Symposium on Consumer Electronics (ISCE 2011).
