A Model of Retinal Local Adaptation for the Tone Mapping of CFA Images
Laurence Meylan,¹ David Alleysson,² and Sabine Süsstrunk¹

¹ School of Computer and Communication Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
² Université Pierre-Mendès-France (UPMF), Grenoble, France

laurence.meylan@a3.epfl.ch; david.alleysson@upmf-grenoble.fr; sabine.susstrunk@epfl.ch

We present a tone mapping algorithm that is derived from a model of retinal processing. Our approach offers two major improvements over existing methods. First, tone mapping is applied directly to the mosaic image captured by the sensor, analogous to the human visual system, which applies a non-linearity to the color signals captured by the cone mosaic. This reduces the number of necessary operations by a factor of three. Second, we introduce a variation of the center/surround class of local tone mapping algorithms, which are known to increase the local contrast of images but tend to create artifacts. Our method yields a good improvement in contrast while avoiding halos and maintaining a good global appearance. Like traditional center/surround algorithms, our method uses a weighted average of surrounding pixel values. Instead of being used directly, however, this weighted average serves as a variable in the Naka-Rushton equation, which models the photoreceptor non-linearity. Our algorithm provides pleasing results on various images with different scene content, key, and dynamic range.

© 2007 Optical Society of America

1. Introduction

Most of today's digital cameras are composed of a single sensor with a color filter array (CFA) placed in front of it to select the spectral band that is captured at each spatial position (Fig. 2, left). Since only one chromatic component is retained at each spatial location (pixel),
a color reconstruction must be performed to obtain the full-resolution color image with three chromatic components per pixel. In traditional color processing work-flows [15], this color reconstruction, called demosaicing (Fig. 1, a), usually takes place before applying any rendering operations. The mosaiced image captured through the CFA is first demosaiced to obtain an RGB image with three chromatic components per spatial location. Color rendering operations, which include white balancing, color matricing, and tone mapping, are performed afterwards.

Instead of the work-flow shown in Fig. 1 (a), we propose a solution where demosaicing is the last step of the color processing work-flow. Color rendering operations are thus performed directly on the CFA image (Fig. 1, b). One motivation for such a work-flow is that it is more analogous to the retinal processing of the human visual system (HVS), as discussed in Section 2. Another motivation is that applying the tone mapping on the CFA image requires only one third of the operations. This, in addition to the use of small filters, makes our method relatively fast compared to other existing local tone mapping algorithms. Finally, because the rendering operations are performed directly on the values captured by the sensor, there is no loss of information prior to rendering.

Our tone mapping algorithm takes inspiration from the non-linear adaptation that occurs in the retina, which efficiently improves local contrast while conserving a good global appearance. Fig. 1 shows an example of applying our method to a high dynamic range image (i.e., one containing high contrast and details in both dark and bright areas). The left image shows the result obtained with standard tone mapping (a gamma operator) and the right image shows the result obtained with our algorithm. Our method successfully enhances detail visibility in the center of the image; the details are well rendered without requiring an additional sharpening operation.
We tested our algorithm on various kinds of captured scenes (high dynamic range and low dynamic range, low key and high key). The results show that our tone mapping operator provides visually pleasing reproductions for all the tested images.

This article is structured as follows. Section 2 provides background on tone mapping and on the model of retinal adaptation on which we base our method. Section 3 presents the algorithm: we first explain the global work-flow and then describe each step separately. Section 4 shows the results obtained with our proposed work-flow, and Section 5 discusses the key differences between our algorithm and other existing methods. Section 6 concludes the article.
Fig. 1. Top (a): Traditional image processing work-flow. Center (b): Our proposed work-flow. Bottom left (c): Image rendered with a global tone mapping operator (gamma). Bottom right (d): Image rendered with our method.
2. Background

2.A. Model of Retinal Processing

Historically, many analogies with the HVS have been used to develop image processing and computer vision applications. For example, there is a correspondence between trichromacy (the ability of human vision to distinguish different colors, given by the interaction of three kinds of photoreceptors) and the three color channels that constitute a color image. Another equivalence exists between the spatio-chromatic sampling of the cone mosaic and the Bayer CFA (Fig. 2) [7].

Fig. 2. Bayer CFA (left) and the spatio-chromatic sampling of the cone mosaic (right).

Our proposed work-flow (Fig. 1, b) exploits another analogy with human vision, namely the non-linear adaptation taking place in the retina. We make a correspondence between this non-linear processing and the tone mapping operation of the image processing work-flow. One role of tone mapping is to non-linearly process the captured image to mimic the retina's non-linear adaptation and render the image as if the HVS had processed it. In traditional work-flows, this non-linear coding is usually applied to the RGB color image, while in the HVS the non-linear adaptation takes place in the retina before color reconstruction. We know that the sampled color signals are still a multiplexing of spatial and chromatic information at the ganglion cell level [17] (see Fig. 3). Thus, we propose a new processing work-flow where the non-linear encoding (tone mapping) is performed directly on the mosaic image provided by the Bayer pattern.

Fig. 3 shows the model of the retinal cell layers on which we base our algorithm. We use the fact that the retina is composed of two functional layers, the outer plexiform layer
Fig. 3. Simplified model of the retina, showing (from input to output) the cones, horizontal cells, bipolar cells, amacrine cells, and ganglion cells, organized into the outer plexiform layer (OPL) and the inner plexiform layer (IPL).
(OPL) and the inner plexiform layer (IPL), which both apply an adaptive non-linearity to the input signal. These two layers comprise the cones, the horizontal and amacrine cells, which provide the horizontal connectivity, and the bipolar and ganglion cells. When light enters the retina, it is sampled by the cones into a mosaic of chromatic components. The horizontal cells measure the spatial average of several cone responses, which determines the cones' adaptation factors through a feedback loop [18]. The color signals are then passed through the bipolar cells to the ganglion cells. We assume that the role of the bipolar cells is simply to pass the color signal from the OPL to the IPL. In the IPL, a similar non-linear processing is applied: we assume that the amacrine cells provide a feedback that modulates the adaptive non-linearity of the ganglion cells. Thus, our tone mapping algorithm also applies two non-linear processing steps to the CFA image, in imitation of the OPL and IPL functions. The non-linear operations are based on the work of Naka and Rushton [24,29], who developed a model for the photoreceptor non-linearities and adaptation to incoming light. Spitzer et al. [30,31] also proposed a biological model for color contrast that uses similar adaptation mechanisms. The non-linear mosaic image is then demosaiced to reconstruct the RGB tone-mapped image.

2.B. Tone Mapping

Tone mapping is the operation in the image processing work-flow that matches scene radiances to display luminances. The goal of tone mapping may vary, but the intent is often to reproduce visually pleasing images that correspond to the expectations of the observer. Tone mapping algorithms can be either global (spatially invariant) or local (spatially variant). A global tone mapping is a function that maps an input pixel value to a display value without taking into account the spatial position of the treated pixel (one input value corresponds to one and only one output value).
A typical global tone mapping function can be logarithmic, a power law (often referred to as a gamma function), or a sigmoid (also called an s-shape). More sophisticated global tone mapping methods vary the function parameters depending on global characteristics of the image [8,16,26,32]. The key of the image can be used to determine the exponent of the gamma function [26]. In [8] and [16], an s-shaped function is defined by image statistics such as the mean and the variance. In [32], the histogram distribution is used to construct an image-dependent global function.

With local tone mapping algorithms, one input pixel value can lead to different output values depending on the pixel's surround. A local tone mapping operator is used when it is necessary to change local features in the image, such as increasing the local contrast to improve detail visibility. Many local tone mapping algorithms have been proposed; they can be grouped into classes sharing common features (see [10] and [27] for a review).
Center/surround methods compute the new pixel values by taking the difference between the input pixel values and the average of surrounding pixels in the log domain [5,23,25,26]. They take inspiration from the HVS receptive fields and lateral inhibition. Their common drawbacks are the creation of halos along high-contrast edges and the graying-out of low-contrast areas, which were addressed in [23,25]. Gradient-based methods [13] work directly on the image gradient to increase the local contrast by weighting high and low gradient values differently, taking surrounding data into account. One difficulty of this technique is integrating the gradient to recover the treated image. Frequency-based methods have also been developed, such as the bilateral filter algorithm of Durand and Dorsey [12]: the image is separated into low- and high-frequency bands, and the low-frequency band, which is assumed to approximately correspond to the illuminant, is compressed. Although all these methods provide pleasing results for a certain set of images, none of them is satisfying for all kinds of scenes, and evaluation methods lead to different conclusions [19,21] depending on the task and on scene content.

Which tone mapping operation should be performed depends on the dynamic range of the scene, defined as the ratio between the brightest and the darkest object luminance in the scene, and on that of the display. In the case of a low dynamic range scene, the input image's dynamic range is smaller than that of the display and thus needs to be expanded. In the opposite case of a high dynamic range scene, whose dynamic range exceeds that of the display, the luminance ratios must be compressed. With high dynamic range scenes, it is often necessary to apply a local tone mapping in addition to the global compression in order to increase the local contrast and thus obtain visually pleasing images in which details remain visible.
3. A Local Tone Mapping Algorithm for CFA Images

Our local tone mapping method processes images according to the retinal model described in Section 2.A. The input mosaic image (or CFA image), which has one chromatic component per spatial location, is treated by two consecutive non-linear operations. Finally, demosaicing is applied to obtain a color image with three color components per pixel. Each of these steps is described in the following sections.

3.A. Adaptive Non-Linearity

Our model of the OPL and IPL non-linearities takes inspiration from the Naka-Rushton equation [24,29]
Y = X / (X + X0),   (1)

where X represents the input light intensity, X0 is the adaptation factor, and Y is the adapted signal. In the original formulation [24], the adaptation factor X0 is determined by the average light reaching the entire field of view. In our method, X0 varies for each pixel: it is a local variable given by the average light intensity in the neighborhood of that pixel. Fig. 4 illustrates the Naka-Rushton function for different values of X0. If X0 is small, the cell output has increased sensitivity; if X0 is large, there is little change in sensitivity. In our model, the Naka-Rushton equation is used to compute the non-linearities of both the OPL and the IPL. X0 is given by the output of the horizontal cells or the amacrine cells, respectively, and modulates the sensitivities of the cones and of the ganglion cells.

Fig. 4. Naka-Rushton function with different adaptation factors X0.

3.B. Properties of a CFA Image

The two non-linearities described above are applied directly to the CFA image. In our implementation, the CFA image is obtained by capturing the light through a Bayer pattern [7], which results in a spatio-chromatic sampling of the scene. This mosaic image has certain properties that allow treating the luminance and the chrominance of the image separately.
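Since Eq. (1) drives both non-linearities of the algorithm, a minimal Python sketch may make its behavior concrete; the adaptation factors below are illustrative values, not outputs of the processing pipeline:

```python
def naka_rushton(x, x0):
    """Adapted signal Y = X / (X + X0) of Eq. (1).

    x  : input light intensity (non-negative)
    x0 : adaptation factor, e.g. the average intensity
         in the pixel's neighborhood
    """
    return x / (x + x0)

# Small X0 (dark surround): dim inputs are strongly amplified.
print(naka_rushton(0.1, 0.06))   # → 0.625
# Large X0 (bright surround): the mapping is almost linear in x,
# since x / (x + x0) ≈ x / x0 when x0 >> x.
print(naka_rushton(0.1, 10.0), naka_rushton(0.2, 10.0))
```

Doubling the input roughly doubles the output when X0 is large, while for small X0 most of the output range is allocated to the darkest inputs.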
Alleysson et al. [4] showed that if we analyze the amplitude of the Fourier spectrum of a Bayer CFA image, the luminance is located in the center of the spectrum and the chrominance is located at the borders (Fig. 5). The luminance is present at full resolution, while the chrominance is down-sampled and encoded as opponent colors. Considering the frequency-domain representation of Fig. 5, a wide-band low-pass filter can be used to recover the luminance, and a high-pass or band-pass filter can recover the down-sampled chrominance. Choosing the appropriate filters allows implementing an efficient demosaicing algorithm. Their method was refined by [11] and [22], who propose a more accurate estimation of the luminance.

Fig. 5. Amplitude of the Fourier spectrum of a CFA image obtained with a Bayer pattern. The luminance is located in the center and represented at full resolution. The chrominance is located at the borders; it is down-sampled and encoded as opponent colors.

In Section 3.E, we will apply such filters for demosaicing. For now, we use this property of localized luminance and chrominance when computing the responses of the horizontal and amacrine cells, as a guarantee that using a low-pass filter will indeed provide the average of the luminance in a surrounding area. In other words, we apply the non-linearities only to the luminance signals, not to any chromatic components.
3.C. The First Non-Linearity

The first non-linear operation of our method simulates the adaptive non-linearity of the OPL. The adaptation factors, which correspond to the response of the horizontal cells, are computed for each pixel by applying a low-pass filter to the input CFA image:

H(p) = ((I_CFA ∗ G_H)(p) + Ī_CFA) / 2,   (2)

where p is a pixel in the image, H(p) is the adaptation factor at pixel p, I_CFA is the mosaic input image, ∗ denotes the convolution operation, and G_H is a low-pass filter that models the transfer function of the horizontal cells. G_H is a two-dimensional Gaussian filter (Fig. 6) with spatial constant σ_H:

G_H(x) = exp(-x² / (2σ_H²)),   (3)

For the images shown in this article, we used σ_H = 3. The term Ī_CFA corresponds to the mean value of the CFA image pixel intensities. The factor (here 1/2) weights between global and local effects, and could be adjusted according to image characteristics.

The input image I_CFA is then processed according to the Naka-Rushton equation using the adaptation factors given by H. A graphical representation is given in Fig. 6.

I_bip(p) = I_CFA(p) / (I_CFA(p) + H(p)),   (4)

3.D. The Second Non-Linearity

A second, similar non-linear operation that models the behavior of the IPL is applied to the image I_bip to obtain the tone-mapped image I_ga:

I_ga(p) = I_bip(p) / (I_bip(p) + A(p)),   (5)

where A simulates the output of the amacrine cells; I_ga models the output color signal transferred from the ganglion cells to the visual cortex. As in (2), A is a low-pass version of the image luminance. It is computed by convolving the mosaic image I_bip with a Gaussian filter G_A of spatial constant σ_A (we used σ_A = 1.5):

A(p) = ((I_bip ∗ G_A)(p) + Ī_bip) / 2,   (6)
Fig. 6. Simulation of the OPL adaptive non-linear processing. The input signal is processed by the Naka-Rushton equation, whose adaptation factors are given by filtering the CFA image with a low-pass filter. The second non-linearity, which models the IPL layer, works similarly.
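The OPL stage of Eqs. (2)-(4) can be sketched in plain Python as follows. This is a minimal illustration, not the authors' implementation: the separable Gaussian, its truncation radius, and the edge clamping are assumptions, and pixel values are assumed positive. The IPL stage of Eqs. (5)-(7) would reuse the same code with I_bip as input and σ_A = 1.5.

```python
import math

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian exp(-x^2 / (2 sigma^2)) (cf. Eq. (3)),
    normalized to sum 1; applied separably to form the 2-D G_H."""
    if radius is None:
        radius = max(1, int(3 * sigma))  # truncation is an implementation choice
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def lowpass(img, sigma):
    """Separable Gaussian low-pass over a list-of-rows image,
    clamping indices at the borders."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    h, w = len(img), len(img[0])
    def conv1d(line):
        n = len(line)
        return [sum(k[j + r] * line[min(max(i + j, 0), n - 1)]
                    for j in range(-r, r + 1)) for i in range(n)]
    rows = [conv1d(row) for row in img]                      # filter rows
    cols = [conv1d([rows[y][x] for y in range(h)])           # then columns
            for x in range(w)]
    return [[cols[x][y] for x in range(w)] for y in range(h)]

def opl_stage(cfa, sigma_h=3.0):
    """Eqs. (2)-(4): H averages the local low-pass response and the
    global mean, then each CFA value is passed through Naka-Rushton."""
    h, w = len(cfa), len(cfa[0])
    mean = sum(sum(row) for row in cfa) / (h * w)            # Ī_CFA
    low = lowpass(cfa, sigma_h)                              # I_CFA ∗ G_H
    return [[cfa[y][x] / (cfa[y][x] + (low[y][x] + mean) / 2.0)
             for x in range(w)] for y in range(h)]
```

On a uniform image the low-pass response equals the global mean, so every pixel adapts to its own value and maps to 1/2, which is the fixed point of Eq. (4).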
G_A(x) = exp(-x² / (2σ_A²)).   (7)

The resulting mosaic image I_ga has now been processed by a local tone mapping; the local contrast has been increased. The next step before displaying the result is to recover three chromatic components per spatial location, which can be performed by any demosaicing algorithm.

3.E. Demosaicing

We use the demosaicing algorithm described by Alleysson et al. [4], which first obtains the luminance image using a wide-band low-pass filter (Fig. 5). Although some high frequencies are removed by this method [11], the filter is sufficiently accurate to estimate the luminance well. We denote the luminance estimation filter by F_dem:

F_dem =   (8)

Then

L(p) = (I_ga ∗ F_dem)(p),   (9)

where I_ga is the tone-mapped CFA image and L represents the non-linearly encoded luminance, which we call lightness. Note that in [4], L corresponds to the luminance, while here L is non-linear and corresponds to perceived lightness. Nevertheless, the properties of the Fourier spectrum remain the same. We will use the term lightness to refer to L in the rest of the article. The chrominance is then obtained by subtracting the lightness from the mosaiced image I_ga:

C(p) = I_ga(p) - L(p)   (10)

C is also a mosaic and contains the down-sampled chrominance: each pixel of C only contains information for one spectral band. C can be separated into three down-sampled chrominance channels using the modulation functions m_R, m_G, m_B of Eq. (11), as illustrated in Fig. 7.
Fig. 7. The chrominance channels are separated before performing the interpolation.

m_R(x, y) = (1 + cos(πx)) (1 + cos(πy)) / 4
m_G(x, y) = (1 - cos(πx) cos(πy)) / 2
m_B(x, y) = (1 - cos(πx)) (1 - cos(πy)) / 4,   (11)

where (x, y) are the coordinates of a pixel p in the image. The chrominance channels are given by:

C1(x, y) = C(x, y) · m_R(x, y)
C2(x, y) = C(x, y) · m_G(x, y)
C3(x, y) = C(x, y) · m_B(x, y)   (12)

In C1, C2, C3, the missing pixels (those with zero value) must be reconstructed to recover the full-resolution image. This is done using simple bilinear interpolation. Although more sophisticated methods exist, we deem it sufficient, as the HVS is not highly sensitive to chromatic variations. After interpolation, the treated RGB image is obtained by adding the lightness and the chrominance channels together:
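At integer pixel coordinates, the modulation functions of Eq. (11) act as indicator functions, selecting one chromatic plane per Bayer site (R at even-even coordinates, B at odd-odd, G elsewhere, under the indexing convention assumed here). A minimal sketch of Eqs. (11)-(12):

```python
import math

def modulation(x, y):
    """m_R, m_G, m_B of Eq. (11); for integer coordinates exactly
    one of the three is 1 and the other two are 0."""
    cx, cy = math.cos(math.pi * x), math.cos(math.pi * y)
    m_r = (1 + cx) * (1 + cy) / 4
    m_g = (1 - cx * cy) / 2
    m_b = (1 - cx) * (1 - cy) / 4
    return m_r, m_g, m_b

def split_chrominance(c):
    """Eq. (12): multiply the mosaic chrominance C = I_ga - L by each
    modulation function, yielding three down-sampled planes whose
    zero-valued sites are later filled by bilinear interpolation and
    added back to the lightness."""
    h, w = len(c), len(c[0])
    c1 = [[c[y][x] * modulation(x, y)[0] for x in range(w)] for y in range(h)]
    c2 = [[c[y][x] * modulation(x, y)[1] for x in range(w)] for y in range(h)]
    c3 = [[c[y][x] * modulation(x, y)[2] for x in range(w)] for y in range(h)]
    return c1, c2, c3
```

Because cos(πn) = (-1)^n for integer n, the three functions sum to 1 at every pixel, so no chrominance sample is lost or counted twice in the separation.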
R(p) = L(p) + C1(p)
G(p) = L(p) + C2(p),   (13)
B(p) = L(p) + C3(p)

where R(p), G(p), B(p) are the RGB channels of the image, L is the lightness (9), and C1, C2, C3 are the interpolated chrominance channels. The results of applying our algorithm to natural images are shown in Section 4.

4. Results

We tested our method on a large image database with varied scene content. Since our algorithm is designed to be applied to the CFA image directly, we captured a series of images with a Canon camera in order to obtain their RAW (mosaic) representation. Moreover, to increase the number of images in our database and the variety of content, we simulated RAW images from legacy images that had already been color rendered: we inverted the original non-linearity, assuming a power function (gamma) [3] of 2.4, and recreated the mosaic according to the Bayer pattern. This allowed us to test our algorithm on all kinds of images. Another advantage of simulating the mosaic is that the other rendering operations (color matricing, white balance) have already been performed, so we can focus on the effect of tone mapping only.

In the first part of this section, we present the results obtained with legacy images and simulated CFAs. The four examples of Fig. 8 and Fig. 9 show that for high dynamic range images, the local contrast is increased in the shadow areas without compressing the highlights. Moreover, applying our method to standard dynamic range images also gives visually pleasing results.

The second series of tests was performed on the RAW images captured with the Canon camera. The format provided by the camera manufacturer is called CRW (Canon RAW) and can only be read by special software. To retrieve the RAW data, we used the free program DCRAW [9], which can handle RAW formats from nearly all camera manufacturers. No color matricing and no white balancing are performed prior to tone mapping. Fig. 10 shows the results obtained by our algorithm on RAW images.
They are compared with images obtained with the Canon RAW converter software. The only color rendering operation that we perform is tone mapping. Since our goal here is to demonstrate the benefit of our tone mapping algorithm, we did not implement the color matricing part. However, with our work-flow, color matricing or other color rendering operations could also be integrated before the color interpolation. The integration of other rendering operations in our proposed framework is considered future work.
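The RAW simulation used to build the legacy part of the database can be sketched as follows; values are assumed normalized to [0, 1], and the Bayer site layout (R at even-even coordinates, B at odd-odd, G elsewhere) follows the convention of Eq. (11), which may differ from a particular camera:

```python
def simulate_cfa(rgb, gamma=2.4):
    """Turn an already-rendered RGB image (rows of (r, g, b) tuples
    in [0, 1]) into a simulated RAW mosaic: invert the assumed
    power-law rendering, then keep one Bayer component per pixel."""
    h, w = len(rgb), len(rgb[0])
    cfa = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r, g, b = (c ** gamma for c in rgb[y][x])  # linearize
            if y % 2 == 0 and x % 2 == 0:
                cfa[y][x] = r          # red site
            elif y % 2 == 1 and x % 2 == 1:
                cfa[y][x] = b          # blue site
            else:
                cfa[y][x] = g          # green sites (two per 2x2 block)
    return cfa
```

The exponent 2.4 is the assumption stated above for the legacy images' rendering non-linearity; images rendered with a different transfer curve would need a matching inverse.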
Fig. 8. Results with legacy images. Left: Original tone-mapped image. Right: Image processed by our method.
Fig. 9. Results with legacy images. Left: Original tone-mapped image. Right: Image processed by our method.
Fig. 10. Results with RAW images. Left: Image obtained using the Canon RAW converter. Right: Image processed by our method.
5. Discussion

5.A. A New Center/Surround Method

Our tone mapping algorithm shares similarities with center/surround local tone mapping methods [20,23,25], which take inspiration from the retina's receptive fields. Traditionally, center/surround algorithms compute the treated pixel values by taking the difference in the log domain between each pixel value and a weighted average of the pixel values in its surround. The weighted average is computed by filtering the image with a low-pass filter. This can be applied to each color channel separately or to the luminance channel only:

I'_c(p) = log(I_c(p)) - log((I_c ∗ G)(p)),   (14)

where p is a pixel in the image, I_c is one color channel (it could also be the luminance channel), ∗ denotes the convolution operation, and G is a low-pass filter (often a Gaussian). A common drawback of center/surround methods is that the increase in local contrast depends greatly on the size of the filter. When a small filter is used, halo artifacts appearing as shadows along high-contrast edges can be visible. When a large filter is used, the increase in local contrast is not sufficient to recover detail visibility in dark or bright areas. Another drawback of center/surround methods is that they tend to gray out (or wash out) low-contrast areas: for example, a plain black area or a bright low-contrast zone will tend to become gray because of the local averaging. Fig. 11 demonstrates the trade-off between the increase in local contrast and the creation of artifacts; halo artifacts and graying-out are both illustrated. These drawbacks have been discussed in the literature [6,14,23,25], and solutions to overcome them have been developed. Rahman et al. [25] introduced a multi-scale method in which the center/surround operation is performed at three different scales so that halo artifacts and graying-out are reduced. However, these artifacts are still visible when the scene contains high contrasts.
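The contrast between the two formulations can be made explicit in a short sketch; the surround images are assumed precomputed by low-pass filtering, and pixel values are assumed strictly positive so the logarithm is defined:

```python
import math

def center_surround(img, surround):
    """Traditional center/surround operator of Eq. (14):
    log-domain difference between each pixel and its low-pass
    surround estimate."""
    h, w = len(img), len(img[0])
    return [[math.log(img[y][x]) - math.log(surround[y][x])
             for x in range(w)] for y in range(h)]

def naka_rushton_surround(img, surround):
    """Our formulation: the surround average is used as the
    adaptation factor of the Naka-Rushton equation, so the
    output stays bounded in (0, 1)."""
    h, w = len(img), len(img[0])
    return [[img[y][x] / (img[y][x] + surround[y][x])
             for x in range(w)] for y in range(h)]
```

In the subtractive form, any pixel equal to its surround maps to zero whether the area was black or white, which is one way to see the graying-out; in our formulation, because the adaptation factors H and A of Eqs. (2) and (6) also mix in the global mean, a uniformly dark area keeps a dark output.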
Meylan and Süsstrunk [23] introduced an adaptive filter whose shape follows the high-contrast edges in the image and thus prevents halo artifacts. Graying-out is avoided by using a sigmoid weighting function that conserves black and white low-contrast areas. That method is good at retrieving details in dark areas, but it tends to compress the highlights too much. In general, existing tone mapping methods work well only for a small set of images (Section 5.B).

The method presented here suffers from neither halos nor graying-out, as it is not based on the same general equation (14). The key difference is in the way the local information is used. With traditional methods, the local information is averaged and subtracted from the value of the treated pixel (14), which leads to halos and graying-out. Our method also uses
Fig. 11. Example of the trade-off between the increase in local contrast and image rendering. Left: Image treated with a center/surround method and a small filter. The increase in local contrast is significant, but halo artifacts are visible along high-contrast edges and the black t-shirt looks washed out; the shadow on the face of the person is a halo artifact. Right: Image treated with a center/surround method and a large filter. There is no halo artifact and no graying-out, but the increase in local contrast is not sufficient. The image is courtesy of Xiao et al.
Fig. 12. Illustration of the local tone mapping function that is applied to each pixel depending on the value of X0. Left: dark surround, small X0. Right: bright surround, large X0.

the average of the surrounding pixel values, given by H or A. But instead of subtracting this average directly from the treated pixel, it is used as a variable of the Naka-Rushton equation (the adaptation factor), which is then applied to the treated pixel. If the treated pixel lies in a dark area, the adaptation factor is small, and thus the output value range allocated to dark input values is large (Fig. 12, left). In a bright area, the adaptation factor is large, and thus the mapping function between the input pixel value and the output pixel value is almost linear (Fig. 12, right). Fig. 12 shows the two Naka-Rushton curves obtained for the minimum and maximum adaptation factors X0 obtained when processing the car image.

5.B. Comparison with Other Methods

Our method aims to achieve pleasing reproductions of images. This cannot be measured objectively; pleasing reproduction should be evaluated using psychovisual experiments with human subjects. As we believe that comparisons of a limited set of images with existing methods are not always comprehensive, we do not provide a complete comparison here. Instead, we make the code available on-line [1] so that figures and results are reproducible [28]. We illustrate some key features of our method using examples obtained with other center/surround algorithms. Fig. 13 shows the results of MSRCR, developed in [25], of our previously developed adaptive filter method [23], and of the algorithm proposed here. The MSRCR image was obtained with the
free version of the PhotoFlair software using the default settings [2], which puts demo tags across the image. The advantage of our method is that it provides pleasing images regardless of the characteristics of the input image, while other methods are often restricted to a set of images having common features (dynamic range, key, content). For example, MSRCR provides pleasing images when the dynamic range is standard or slightly high, but it tends to generate artifacts when the input image has a very high dynamic range, such as the one of Fig. 13. In the image rendered with MSRCR (Fig. 13, top), there is a shadow on the person near the window and on the red dog; moreover, the black t-shirt tends to become gray. In the image rendered by our method (center), there is no halo on the face of the person, and the graying-out of the black t-shirt is avoided. Avoiding halo and graying-out artifacts was the focus of Meylan and Süsstrunk [23], who proposed the use of an adaptive filter for that purpose. In Fig. 13 (bottom), we can see that there are indeed no halo or graying-out artifacts; however, the details in the roof are washed out. The local contrast is lost in the highlights due to the strong compression of bright areas, which makes the adaptive filter method inappropriate for standard or low dynamic range scenes.

In addition to being suitable for all kinds of images, another advantage of our method is that it is quite fast compared to other existing local tone mapping algorithms. First, the operation is performed on the CFA image, which divides the computation time by three. Second, the fact that relatively small filters can be used for tone mapping ensures that the algorithm has a reasonably low complexity.

5.C. Filter Sizes

We saw in Section 5.A that our way of using the local information as a variable of the Naka-Rushton equation prevents graying-out and halos.
Another advantage of this technique is that the resulting image does not change much with different filter sizes, which makes our algorithm robust to varying parameters. In our implementation, we used σ_H = 3 and σ_A = 1.5; however, other values can be used without corrupting the results. Fig. 14 shows an example of our method applied with different filter sizes: (σ_H = 1; σ_A = 1) for the left image and (σ_H = 3; σ_A = 5) for the right image. Although the filters are quite different, there is no tone difference between the two resulting images; the slight difference between them is due to the different sharpening effect induced by the difference in filter size. The size of the demosaicing filter also plays a role in the final appearance of the image; since this was discussed in [4], we do not address the issue here.
Fig. 13. Top: Image treated with MSRCR [25]. Center: Image treated with our method. Bottom: Image treated with the Retinex-based adaptive filter method [23].
Fig. 14. Example of our method applied with different filter sizes. Left: small filters (σ_H = 1 and σ_A = 1). Right: large filters (σ_H = 3 and σ_A = 5).

6. Conclusion

We present a color image processing work-flow that is based on a model of retinal processing. The principle of our work-flow is to perform the color rendering steps before color reconstruction (demosaicing), which is coherent with the HVS. Our focus is on the tone mapping part of the general problem of color rendering: we propose a tone mapping algorithm that is performed directly on the CFA image. Our method shares similarities with center/surround algorithms but is not subject to halo artifacts or graying-out. The proposed algorithm is relatively fast compared to existing tone mapping methods and provides pleasing results for all kinds of images.

References

1. Additional material. material/index.html.
2. PhotoFlair was developed by TruView Imaging Company.
3. IEC 61966-2-1, Multimedia systems and equipment - Colour measurement and management - Part 2-1: Colour management - Default RGB colour space - sRGB.
4. D. Alleysson, S. Süsstrunk, and J. Herault. Linear demosaicing inspired by the human visual system. IEEE Transactions on Image Processing, 14(4), April.
5. M. Ashikhmin. A tone mapping algorithm for high contrast images. In Proc. EUROGRAPHICS 2002, pages 1-11.
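The mosaic-domain tone mapping summarized above can be illustrated with a minimal NumPy sketch: a Gaussian-weighted average of the surrounding pixel values sets the local semi-saturation level of the Naka-Rushton non-linearity, applied directly to the CFA mosaic. This is an illustrative simplification, not the paper's exact operator — the actual method uses two filter scales (σ_H and σ_A in Fig. 14), and the function names and the single `sigma_surround` parameter here are assumptions for the sketch.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (kernel truncated at 3*sigma), used as the surround."""
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1, dtype=np.float64)
    k = np.exp(-(t ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 0, img, k, mode="same")   # columns
    out = np.apply_along_axis(np.convolve, 1, out, k, mode="same")   # rows
    return out

def tone_map_cfa(cfa, sigma_surround=3.0):
    """Naka-Rushton-style local tone mapping applied directly to the mosaic."""
    x = cfa.astype(np.float64)
    x = x / x.max()                           # normalize intensities to [0, 1]
    adapt = gaussian_blur(x, sigma_surround)  # weighted average of the surround
    x0 = adapt + 1e-6                         # local semi-saturation level
    # Naka-Rushton non-linearity x / (x + x0), rescaled so that x = 1 maps to 1
    return x * (1.0 + x0) / (x + x0)
```

Because x * (1 + x0) / (x + x0) ≥ x for x in [0, 1], the mapping lifts dark regions more strongly where the local surround is dark, which is the source of the local contrast improvement; working on the mosaic means the non-linearity is applied once per pixel instead of once per color channel.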