Photometric image processing for high dynamic range displays


J. Vis. Commun. Image R. 18 (2007)

Photometric image processing for high dynamic range displays

Matthew Trentacoste a,*, Wolfgang Heidrich a, Lorne Whitehead a, Helge Seetzen a,b, Greg Ward b

a The University of British Columbia, 2329 West Mall, Vancouver BC, V6T 1Z4 Canada
b Dolby Canada, 1310 Kootenay Street, Vancouver BC, V5K 4R1 Canada

Received 16 November 2006; accepted 12 June 2007
Available online 17 July 2007

Abstract

Many real-world scenes contain brightness levels exceeding the capabilities of conventional display technology by several orders of magnitude. Through the combination of several existing technologies, new high dynamic range displays have been constructed recently. These displays are capable of reproducing a range of intensities much closer to that of real environments. We present several methods of reproducing photometrically accurate images on this new class of devices, and evaluate these methods in a perceptual framework.
© 2007 Elsevier Inc. All rights reserved.

Keywords: High dynamic range; Displays; Image processing; Photometry

1. Introduction

The high dynamic range (HDR) imaging pipeline has been the subject of considerable interest from the computer graphics and imaging communities in recent years. The intensities and dynamic ranges found in many scenes and applications vastly exceed those of conventional imaging techniques, and the established practices and methods of addressing those images are insufficient. Researchers have developed additions and modifications to existing methods of acquiring, processing, and displaying images to accommodate contrasts that exceed the limitations of conventional, low dynamic range (LDR) techniques and devices. Methods exist for acquiring HDR images and video from multiple LDR images [4,13]. New cameras are capable of capturing larger dynamic ranges in a single exposure [1]. File formats have been designed to accommodate the additional data storage requirements [7,8,18].
* Corresponding author. E-mail addresses: mmt@cs.ubc.ca (M. Trentacoste), heidrich@cs.ubc.ca (W. Heidrich), whitehead@physics.ubc.ca (L. Whitehead), helge.seetzen@dolby.com (H. Seetzen), gward@lmi.net (G. Ward).

Most relevant to this paper, high dynamic range display systems have been developed to accurately reproduce a much wider range of luminance values. The work done by Ward [17] and Seetzen et al. [14,15] has provided devices that vastly exceed the dynamic range of conventional displays. These devices are capable of higher intensity whites and lower intensity blacks, while maintaining adequately low quantization across the entire luminance range. HDR displays are constructed by optically combining a standard LCD panel with a second, typically much lower resolution, spatial light modulator, such as an array of individually controlled LEDs [14]. The latter replaces the constant intensity backlight of normal LCD assemblies. Due to this design, pixel intensities in HDR displays cannot be controlled independently of each other. Dependencies are introduced since every LED overlaps hundreds of LCD pixels, and thus contributes to the brightness of all of them. It is therefore necessary to employ image processing algorithms to factor an HDR image into LDR pixel values for the LCD panel, as well as LDR intensities for the low resolution LED array. In this paper, we discuss algorithms to perform this separation and to accurately reproduce photometric images. Achieving this goal entails designing efficient algorithms

to produce the best images possible; characterizing the monitor; and calibrating it to reproduce the most faithful approximation of appearance, compared to the input image. We evaluate our methods by comparing the output image to the input using perceptual models of the human visual system.

The remainder of this paper is structured as follows: Section 2 covers the topics related to the work presented. Section 3 describes the task of rendering images and details the difficulties faced in doing so. Section 4 details the measurements required to correct for the actual hardware and calibrate the output, and how those measurements are incorporated into the image processing methods. Section 5 presents the results of the work, and evaluates them using a perceptually-based metric.

2. Related work

2.1. Veiling glare and local contrast perception

Any analysis of the display of images includes an inherent discussion about the viewer: the perceptual makeup of the human observer. While the human visual system is an amazing biological sensor, it does have shortcomings that can be exploited for the purpose of creating display devices. One such shortcoming is that, while humans can see a vast dynamic range across a scene, they are unable to see more than a small portion of it within a small angle subtended by the eye. This inherent limitation, called veiling glare, can be explained by the scattering properties of the cornea, lens, and vitreous fluid, and by inter-reflection from the retina, all of which reduce the visibility of low contrast features in the neighborhood of bright light sources. Veiling glare depends on a large number of parameters including spatial frequency, wavelength, pupil size as a function of adaptation luminance [10], and subject age.
While different values are reported for the threshold past which we cannot discern high contrast boundaries, most agree that the maximum perceivable local contrast is in the neighborhood of 150:1. Scene contrast boundaries above this threshold appear blurry and indistinct, and the eye is unable to judge the relative magnitudes of the adjacent regions. From Moon and Spencer's original work on glare [11], we know that any high contrast boundary will scatter at least 4% of its energy on the retina to the darker side of the boundary, obscuring the visibility of the edge and details within a few degrees of it. When the edge contrast reaches a value of 150:1, the visible contrast on the dark side is reduced by a factor of 12, rendering details indistinct or invisible. This limitation of the human visual system is central to the operating principle of HDR display technology, as we will discuss in the following section.

2.2. HDR display technology

In a conventional LCD display, two polarizers and a liquid crystal are used to modulate the light coming from a uniform backlight, typically a fluorescent tube assembly. The light is polarized by the first polarizer and transmitted through the liquid crystal, where the polarization of the light is rotated in accordance with the control voltages applied to each pixel of liquid crystal. Finally, the light exits the LCD by transmission through the second polarizer. The luminance level of the light transmitted at each pixel is controlled by the polarization state of the liquid crystal. It is important to note that, even in the darkest state of an LCD pixel, some remaining light is transmitted. The dynamic range of an LCD is defined by the ratio between the light transmitted in the brightest state and the light transmitted in the darkest state. For a typical color LCD display, this ratio is usually around 300:1. Monochromatic specialty LCDs have a contrast ratio of 700:1, with numbers exceeding 2000:1 reported in some cases.
The luminance level of the display can be easily adjusted by controlling the brightness of the backlight, but the contrast ratio will remain the limiting factor. In order to maintain a reasonable black level of about 1 cd/m², the LCD is thus limited to a maximum brightness of about 300 cd/m². Approaches such as the "dynamic contrast" advertised in recent LCD televisions can overcome this problem to a degree and increase the apparent contrast across multiple frames. However, such methods can only adjust the intensity of the entire backlight for each frame displayed, depending on its average luminance, and provide no benefit for static images or scenes without fast-moving action. The fundamental principle of HDR displays is to use an LCD panel as an optical filter of programmable transparency to modulate a high intensity but low resolution image formed by a second spatial light modulator. This setup effectively multiplies the contrast of the LCD panel with that of the second light modulator, such that global contrast ratios in excess of 100,000:1 can be achieved [14]. In the case of an HDR display, each element of the rear modulator is individually controllable, and together these elements represent a version of the 2D input image. Currently, this second modulator consists of an array of LEDs placed behind the LCD panel, as depicted in the upper left panel of Fig. 1. The array of LEDs is placed on a hexagonal grid for optimal packing, and the upper right panel of Fig. 1 shows LEDs of different intensities in the hexagonal arrangement that forms the backlight. In order to ensure uniform illumination of the LCD, the LED grid is placed behind a diffuser that blurs the discrete points into a smoothly varying field.
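The contrast multiplication behind this dual-modulation principle can be stated in two lines of code. This is an illustrative sketch, not from the paper; the 334:1 backlight dimming range is an assumed round number chosen so that the product reaches the reported order of magnitude.

```python
# Minimal numeric illustration: the two modulators multiply optically,
# so their contrast ratios multiply as well.
def combined_contrast(lcd_contrast: float, backlight_contrast: float) -> float:
    """Global contrast ratio of an LCD stacked on a modulated backlight."""
    return lcd_contrast * backlight_contrast

# A 300:1 LCD over a backlight dimmable across an assumed ~334:1 range
# already exceeds the 100,000:1 global contrast reported in [14].
print(combined_contrast(300, 334))  # -> 100200
```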
This low-frequency, diffused illumination reduces artifacts caused by misalignment of the LCD and LED grid, and by parallax when the display is viewed from oblique angles; such artifacts would be very difficult to compensate for, and would be perceptually much more noticeable than low-frequency errors. The width of the point spread function (PSF) is quite large compared to the spacing of the LEDs, as seen in the lower left panel of Fig. 1, which shows the point spreads of two adjacent

Fig. 1. (a) Arrangement of the LED grid positioned behind the LCD panel. (b) Representation of the hexagonal LED arrangement with different intensities. (c) Plot of the shape of an LED point spread, with another copy of the point spread positioned at the location of an adjacent LED. (d) Photograph of the BrightSide/Dolby HDR display.

LEDs. This overlapping of PSFs implies not only that the peak intensity of the display is greater than that of any individual LED, but also that any attempt to derive LED values from an input image requires some form of deconvolution. Local contrast from one pixel to its neighbor is limited to roughly the contrast of the LCD panel (300:1), since the illumination pattern produced by the low resolution LED array varies only minimally at the scale of the LCD pixel size. Perceptually, this limitation does not impair image quality, since local contrast perception around an edge is limited to about 150:1 (Section 2.1). From the psychophysical theory mentioned in Section 2.1 we can establish the largest possible spacing of the backlight LEDs for a given viewing distance. Veiling glare is therefore central to the operating principle of HDR displays, in that it allows for the use of an LED array with significantly reduced resolution compared to the LCD panel. It is important to note that, as long as the local contrast is below the maximum contrast of the LCD panel, relative (and even absolute) luminance can be maintained, and edges can be reproduced at full sharpness. Only once this contrast range of the LCD panel is exceeded is some fidelity lost near high contrast boundaries, but this effect is below the detectable threshold, as has been verified in user studies [15]. The lower right panel of Fig. 1 shows an HDR display based on these principles, the BrightSide/Dolby DR37-P. This is the display we use in our experiments for this paper.

3. HDR image processing algorithms

3.1.
Problem statement and reference algorithm

This section details the primary contribution of the paper: algorithms for processing images to drive HDR displays. We first discuss the overall challenge and formulate a high-level approach. Working from that method, we identify practical algorithms that can be used to drive HDR displays in real-time. Given an image within the luminance range of the HDR display, the goal of our work is to determine LED driving values and an LCD panel image so as to minimize the perceptual difference between the input image and the one formed by the display. This process must take into account the physical limitations of the display hardware, including the limited contrast of the LCD panel, the feasible intensity range of the LEDs, and the finite precision of both due to quantization. Fig. 2 shows a sample of the desired output of the algorithm. The LED backlight image is a low-frequency, black-and-white version of the input image and contains the major features of the input. The LCD panel contains the color information and the high frequency detail, adjusted for the backlight. Similar to the tone-mapping operator of Chiu et al. [2], the panel has reverse gradients around light sources to compensate for the light leaking across the edge in the backlight. While this effect is undesirable in a tone-mapped image, it is beneficial when processing images for display. These artifacts compensate for the blur inherent in the backlight, such that when the two are optically combined, the result is close to the original. Since the input image and the final output are HDR images, they have been tone-mapped for printing, while the LCD image and the backlight are both 8-bit images and are shown directly.
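The reverse gradients described above can be reproduced with a small synthetic experiment. The scanline, Gaussian kernel width, and clamping range below are illustrative assumptions for this sketch, not the display's actual parameters.

```python
import numpy as np

# Toy 1D scanline: a bright strip on a dark background.
I = np.full(200, 0.05)
I[80:120] = 1.0

# Stand-in for the diffused LED backlight: a heavy Gaussian blur of the
# input (the kernel width is an arbitrary choice for this sketch).
kernel = np.exp(-np.linspace(-3.0, 3.0, 61) ** 2)
kernel /= kernel.sum()
B = np.maximum(np.convolve(I, kernel, mode="same"), 1e-4)

# LCD compensation: per-pixel division by the backlight, clamped to the
# panel's physical transmission range [0, 1].
P = np.clip(I / B, 0.0, 1.0)

# Optical multiplication reconstructs the input exactly wherever the
# division did not clip; fidelity is lost only inside the bright strip,
# where the blurred backlight cannot supply enough light.
reconstructed = P * B

# The panel image dips just outside the strip: a "reverse gradient".
print(P[50] > 0.9, P[79] < 0.3)  # -> True True
```

Plotting `P` shows the dark dip hugging the bright strip, the 1D analogue of the gradients visible in the LCD image of Fig. 2.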
The image processing task we face can be framed as a constrained non-linear optimization problem, where the objective function is based on a complex metric of perceptual image differences, such as the visible difference predictor [3] or its HDR counterpart [9]. Although this approach is possible, and in fact results in a high-quality reference solution [16], it is not attractive in a practical setting. While the visible difference predictor is a very powerful method of comparing two images, it is very slow (on the order of minutes per image). It is therefore not feasible to evaluate it in real-time on image sequences or live video streams using hardware that can be incorporated into a display. Precomputation is also not an attractive option, since the processing is heavily parameterized on the characteristics of the HDR display, such as its luminance range and LED layout. We therefore desire an efficient algorithm that can process full frame images at 60 Hz, either using the graphics processing unit (GPU) of a control computer, or using a signal processor or field-programmable gate array (FPGA) within the display.

Fig. 2. Left: original HDR image, tone-mapped for print. Center left: tone-mapped HDR image produced by the HDR display after processing with the algorithms presented in this paper. Center right: low-frequency luminance image of the LED backlight used to produce this image. Right: the corresponding LCD image compensated for the backlight. The top row represents a real-world example, while the bottom row shows a synthetic test case along with the intensity distribution on a single scanline.

3.2. Real-time algorithm

In order to develop such an algorithm, we have to abandon the global optimization and perceptual error metric in favor of a more efficient local optimization and a less expensive error metric such as least squares. Not only is the full global optimization across all LEDs and LCD pixels too computationally intense to be performed in real-time by GPUs and FPGAs, but the hardware architecture is also inefficient at solving linear systems. Instead, we frame the algorithm as an image processing problem that is more amenable to implementation on real-time hardware. However, it is still important to ensure that the perceptual error is small, which is not necessarily the case even if the mean square error is small. For this reason, we verify all our algorithms by evaluating them on a set of test images with the HDR visible difference predictor of Mantiuk et al. [9] (see Section 5). A very effective way of improving the performance is to determine the LED driving values and the LCD panel image in two separate stages. Suppose there is a way of determining the LED values first. One can then use the high resolution LCD panel to compensate for the blur caused by the low-resolution nature of the LED array. Specifically, given the driving values for the LEDs, one can compute the spatial light distribution B of the backlight, taking into account both the geometric layout of the LEDs and their PSFs.
One can then compute target pixel values P for the LCD panel as a pixel-by-pixel division of the target image I by the backlight B:

P = f^(-1)(I / B),   (1)

where f represents the physical response of the LCD panel, including the 8-bit control signal quantization, the non-linear response function, and the inability to reproduce quantities of light outside its dynamic range. Since the HDR display optically multiplies f(P) and B, the resulting image I' = f(P) · B is the closest possible approximation of the input image I for the selected LED values. The full algorithm thus consists of the following steps, also illustrated in Fig. 3:

(1) Given the desired image I, determine a desired backlight distribution B.
(2) Determine the LED driving levels d that most closely approximate B.
(3) Given d, simulate the resulting backlight B'.
(4) Determine the LCD panel image P that corrects for the low resolution of the backlight B'.

The individual stages are explained in detail in the following section. Fig. 4 shows the tone-mapped image of

an input image for which we will demonstrate the results of the individual stages.

3.2.1. Target backlight

The first stage is to process the desired image I and to generate a target light distribution B on the backlight. The input I should be in photometric units, and its color representation should use the same chromaticity, white point, and primaries as the HDR display. The output B is a black-and-white image in photometric units. Treating the task of deriving the set of LED driving intensities as a system of equations, we note that it is significantly over-constrained, as the number of pixels in the input image is greater than the number of LEDs being solved for. Over-constrained problems require more complicated and computationally intensive solver methods to guarantee that the solution best satisfies all the constraints. Instead, we choose to reduce the number of constraints by downsampling the image, so that the resolution of the output corresponds to the resolution of the LED grid rather than the resolution of the LCD panel. The resulting number of pixels in B directly corresponds to the degrees of freedom available in controlling the LED intensities, simplifying the solution process and reducing the computational requirements of the subsequent stages. Determining the target backlight itself requires three sub-stages:

(1) Since the LCD panel only absorbs light, the array of white LEDs needs to produce at least as much light as is required to produce each individual color channel at each point on the image plane. The target backlight luminance for each pixel is therefore set to the maximum of all color channels for that pixel.

(2) A nonlinear function is applied to these target luminance values in order to divide the dynamic range evenly between the LED array and the LCD panel, and to spread quantization errors uniformly across both components.
We experimentally verified that a square root function works best, meaning that the LED array and the LCD panel are both responsible for producing roughly the square root of the target intensity, such that the optical multiplication of the two values results in a good approximation of the target image.

(3) The final step is to downsample the image to the resolution of the LED grid. There are several ways to implement this step. On the display FPGA, the step is implemented as the average of neighborhoods of pixels around LED positions. On a GPU, the same algorithm is implemented as recursive block averages forming an image pyramid, in order to work within the finite number of texture accesses available.

Fig. 4. Tone-mapped original HDR image for reference.

Fig. 5 shows the output of this stage: a monochrome, low-resolution sampling of the square root of the original image.

3.2.2. Deriving LED intensities

The output from the previous stage is a target light distribution for the backlight, which is already at the resolution of the LED array. However, the pixels in that image cannot be used directly as LED driving values, since the LEDs are significantly blurred optically. In fact, the point spread function of each LED is roughly Gaussian in shape, with significant overlap between the PSFs of

Fig. 3. Flowchart of the stages of the implementation. The HDR input image is used to determine a desired backlight configuration, which in turn is used to determine the LED driving values. The actual backlight is simulated from these driving values, and this simulation determines the LCD image.

neighboring LEDs (see Fig. 1). As discussed, this blur is the result of a deliberate design decision at the level of the display optics, since it avoids hard edges in the backlight illumination. In order to derive LED driving values d, we must therefore compensate for the optical blur, which we do by implementing an approximate deconvolution of the target backlight with the PSF of the LEDs. This operation can be described as minimizing the error of the linear system

min_d || W d - B ||^2   (2)

subject to the physical constraints on d, where W is a matrix describing the optical blur. More specifically, W contains the intensity of the PSF of each LED at each pixel location, such that multiplying it by a vector of LED intensities results in the simulated backlight. In order to efficiently find an approximate solution to this system, we make use of the fact that this matrix is sparse and band-diagonal. We therefore expect that the pixels in the target backlight B are already fairly close to the driving values d, and we can obtain good results with a small computational investment. We chose one of the simplest iterative solvers, the Gauss-Seidel method, on which to base our implementation. The basic Gauss-Seidel iteration

d_j^(k) = ( B_j - sum_{i<j} w_ji d_i^(k) - sum_{i>j} w_ji d_i^(k-1) ) / w_jj   (3)

is the result of reordering the system in Eq. (2) and solving for the unknowns d_j. At every step, a new estimate d^(k) of the solution is chosen by comparing the current value of the system to the desired value. The new solution estimate is used to update the value of the system. We make several modifications to this formulation to suit our purposes. Instead of considering all other LEDs for each LED, we use a smaller neighborhood N(d_j), and only perform a single iteration. The resulting computation is a weighted average of the neighborhood of LEDs.
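For reference, the unmodified iteration of Eq. (3) can be written down directly. The 3-LED system below is a made-up stand-in for overlapping PSFs, not display data; the display implementation restricts this to a single iteration over a small neighborhood.

```python
import numpy as np

def gauss_seidel(W, B, iters=50):
    """Plain Gauss-Seidel iteration for W d = B: already-updated
    estimates are used for i < j, previous ones for i > j."""
    d = np.zeros(len(B))
    for _ in range(iters):
        for j in range(len(B)):
            s = W[j, :j] @ d[:j] + W[j, j + 1:] @ d[j + 1:]
            d[j] = (B[j] - s) / W[j, j]
    return d

# Tiny diagonally dominant system standing in for three overlapping LEDs.
W = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
B = np.array([1.0, 1.3, 1.0])
d = gauss_seidel(W, B)
print(np.allclose(W @ d, B))  # -> True
```

Because the PSF matrix is diagonally dominant (each LED is brightest at its own position), the iteration converges quickly, which is what makes the single-iteration approximation viable.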
Given a desired backlight image B, it tries to account for light contributions from other LEDs, weighted according to their PSFs. By choosing d^(0) = B, Eq. (3) collapses to

d_j = ( B_j - sum_{i in N(d_j)} w_ji B_i ) / w_jj   (4)

for a given LED j, where w_jj is the value of the point spread function of that LED at its own position (the peak of the PSF). Then, for a given LED j, the desired luminance value of the backlight at its position is compared to the luminance coming from the surrounding LEDs. The value of LED j is chosen to compensate for any disparity between the desired backlight and the illumination already present. The results are clamped to [0, 1] and passed to the subsequent simulation stage and the LED controller in the display.

Fig. 5. Output of target backlight pass.

Fig. 6 shows the output of this stage. While it has a lower resolution than the original input image (Fig. 4), it shows more contrast than the target backlight (Fig. 5) to compensate for the optical blur.

3.2.3. Backlight simulation

At this point, the LED values have been determined, and the remaining stages are aimed at computing the LCD pixel values. To this end, we first need to determine the actual light distribution B' of the backlight at the full resolution of the LCD panel. B' should be similar to the target distribution B, but in general there will be minor differences due to quantization and clamping of the driving values, as well as approximations in the computation of those values (Section 3.2.2). Simulating the actual light distribution B' involves convolving the driving values d with the LED point spread function. On an FPGA, we directly evaluate each pixel by reading the value of the PSF for the distance to the current pixel from a lookup table (LUT) and modulating it by the current driving value. On GPUs, we use a splatting approach, and simply draw screen-aligned quadrilaterals with textures of the PSF into the framebuffer. Each texture is modulated by its driving value, and we use alpha blending to accumulate the results.
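The splatting step can be mimicked in software. The Gaussian PSF, grid layout, and normalization below are assumptions for this sketch, not measured display data.

```python
import numpy as np

def simulate_backlight(led_values, led_positions, psf, shape):
    """Accumulate one copy of the PSF per LED, scaled by its driving
    value: a software analogue of drawing alpha-blended PSF textures on
    the GPU. `led_positions` are (row, col) pixel coordinates of the LED
    centers; `psf` is a square array with its peak at the center."""
    r = psf.shape[0] // 2
    padded = np.zeros((shape[0] + 2 * r, shape[1] + 2 * r))
    for (y, x), v in zip(led_positions, led_values):
        padded[y:y + psf.shape[0], x:x + psf.shape[1]] += v * psf
    return padded[r:r + shape[0], r:r + shape[1]]

# Two overlapping LEDs on a 21x31 "panel", with an assumed Gaussian PSF.
g = np.exp(-(np.arange(-7, 8) ** 2) / 18.0)
psf = np.outer(g, g)
B = simulate_backlight([1.0, 0.5], [(10, 10), (10, 20)], psf, (21, 31))
```

Pixels between the two LEDs receive light from both splats, reproducing the PSF overlap of Fig. 1 that the subsequent division stage must compensate for.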
Since the PSF is very smooth, both methods can be implemented at a lower resolution, and the results can be upsampled. Fig. 7 shows the output of this stage. If the LED values have been chosen appropriately, it should closely resemble the image in Fig. 5 before downsampling, although a perfect match is not expected due to quantization and clamping of the LED intensities to the physically feasible intensity range.

3.2.4. Blur compensation

The final step in the algorithm is to determine the LCD pixel values by dividing the input image by the simulated backlight, according to Eq. (1). The division is performed per pixel and per color channel. The resulting LCD pixel values are processed with the inverse response function of the LCD panel, such that the LCD transparency is controlled linearly. Fig. 8 shows the output of this stage. It displays the same characteristics as the LCD panel image in Fig. 2. Since the result was obtained by dividing by a low-frequency

Fig. 6. Output of pass to determine LED intensities.

Fig. 8. Output of blur correction pass.

version of the original image, the LCD panel contains the same reverse gradients as the work of Chiu et al. [2]. The LCD image still contains all of the high-frequency and color information of the original image, but the low frequencies are damped, since they are generated by the backlight. Fig. 9 shows a tone-mapped reproduction of the final HDR image produced on the HDR display, resulting from all the operations we have described in this section.

3.3. Discussion

The algorithm described in this section is fast enough for real-time processing of large (HDTV resolution) high dynamic range images. It has been implemented in software, on GPUs, and on an FPGA chip that is integrated in the commercial displays by BrightSide Technologies/Dolby. It is in daily use in these implementations. While the algorithm produces images of good quality under most circumstances, it systematically under-estimates the brightness of small bright regions with dark surroundings. This artifact is caused by the use of a downsampled image for determining the LED values (Section 3.2.2), as well as by the approximate solution of Eq. (2). It is worth pointing out that in natural images these artifacts are not easily perceptible. However, for more demanding applications such as medical imaging, a more faithful, albeit more computationally expensive, representation can be desirable.

3.4. Error diffusion

We can alleviate these artifacts for demanding applications by optimizing the LED intensities at full image resolution. As explained above, a full global optimization is computationally not feasible in real-time on the hardware under consideration, and we therefore developed a greedy local optimization scheme that is inspired by the error diffusion algorithm for dithering [5].
Starting from the initial LED estimate (Section 3.2.2), we improve on this solution by processing the LEDs in scanline order, adjusting the intensity of each LED to minimize the per-pixel error over the area of influence of that LED. Specifically, we minimize

min_{Δd_j} || I - a ( B^(j-1) + W_j Δd_j ) ||^2   (5)

over the pixels within a local neighborhood of the jth LED, where Δd_j is the change of intensity that needs to be applied to the LED, W_j is the PSF of the jth LED, and B^(j-1) is the full-resolution, simulated backlight after the first j-1 LEDs have been updated. Note that B^(j-1) can be updated incrementally at reasonable cost: the full update of all LEDs is as expensive as the initial backlight simulation.

Fig. 7. Output of backlight simulation pass. In this image, features corresponding to the LED positions are visible because the image is linearly scaled. These features are not visible when the physical display is viewed, because of human lightness sensitivity.

Fig. 9. Tone-mapped simulation of results.
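A simplified single-pass sketch of this greedy update is shown below. The closed-form least-squares step, the clamping, and all names are illustrative assumptions, not the shipped FPGA code; `alpha` stands for the average LCD pixel transmissivity a appearing in Eq. (5).

```python
import numpy as np

def refine_leds(I, B, psfs, d, alpha=0.5):
    """Greedy per-LED refinement in the spirit of Eq. (5). `psfs[j]` is a
    full-resolution image of LED j's footprint; LEDs are visited in
    scanline order, and the simulated backlight B is updated
    incrementally after each adjustment."""
    B, d = B.astype(float).copy(), d.astype(float).copy()
    for j, w in enumerate(psfs):
        # Closed-form least-squares step for min_dd ||I - alpha*(B + w*dd)||^2.
        dd = np.sum(w * (I / alpha - B)) / np.sum(w * w)
        dd = np.clip(d[j] + dd, 0.0, 1.0) - d[j]   # keep driving value in [0, 1]
        d[j] += dd
        B += dd * w                                 # incremental backlight update
    return d, B

# One LED with a flat footprint: the update drives the backlight so that
# alpha * B matches the target exactly.
I = np.full((4, 4), 0.4)
d, B = refine_leds(I, np.zeros((4, 4)), [np.ones((4, 4))], np.zeros(1))
print(d[0], B[0, 0])  # -> 0.8 0.8
```

In a full implementation the sums would be restricted to the neighborhood of the jth LED, as the text describes, rather than taken over the whole image.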

The parameter a in the above equation corresponds to the average LCD pixel transmissivity in the range [0, 1]. Error diffusion chooses a backlight that results in the maximum number of LCD pixels having an average value of a. A value of 1/2 (corresponding to the LCD at half its maximum transmission) produces the best final image quality, since it provides the distribution of LCD pixel values most able to compensate for the low-frequency backlight B. Keeping the LCD transmission close to 1/2 provides the maximum room to correct for differences in luminance between the backlight and the desired image. If the average pixel value in a region is already close to full white (or black), then additional local changes towards brighter (darker) tones are limited in that region. However, it is worth noting that there may be reasons to choose other, especially larger, values for a. Since large values of a correspond to a more transparent LCD panel, the same display brightness can be achieved with lower LED power. Therefore, it is possible to trade image quality for power savings, which may be interesting for mobile devices. Fig. 10 compares the final image produced by the error diffusion algorithm to that produced by the algorithm from Section 3.2 for a simple test scene. It shows a significant improvement in the reproduction of small bright features with error diffusion.

4. Measurement and calibration

The image processing algorithms described in the previous sections require accurate descriptions of the geometric and optical properties of the HDR display. The quality of this data is of paramount importance. In fact, a full solution of the LEDs and LCD pixels using approximate calibration data almost always looks worse than the approximate solution using accurate calibration data. Many attributes of the display must be measured to ensure that the simulation results are correct.
These include the LCD panel response, the LED array alignment, the peak luminance of each LED, as well as the LED point spread function. All attributes related to light intensities are measured in absolute units, which provides the necessary means of comparing the original image to the simulated result.

4.1. LCD panel response

First, we need to determine the non-linear response of the LCD panel. Most LCD panel controller circuitry approximates a power function with an exponent of 2.5. The production of correct images requires compensating for this nonlinearity. To obtain the inverse, we follow the same procedure as for LDR display calibration: we measure the luminance of each of the LCD panel driving values, and represent the inverse as a fitted function or by using a lookup table (LUT). This calibration procedure is standard for displays, and can be performed with standard tools. Since the LCD panel acts as a modulator, we do not need to capture any absolute measurement of its response, and instead use a normalized function. The response of the DR37-P LCD panel is shown in Fig. 11, compared with an ideal power function with an exponent of 2.5 mapped to the same dynamic range.

4.2. LED array alignment

Since the LEDs in a display are automatically mounted on a circuit board, the relative positioning of the LEDs with respect to each other does not deviate in any significant way from the construction plans. However, in the final assembly, the misalignment between the LCD panel and the LED array is on average around 3 pixels. This offset is calibrated by examining the difference between the locations of several LED PSFs and the corresponding LCD pixel positions.

4.3. LED response

Due to the variance in LED construction and the circuitry that supplies power, the response of the LEDs is neither linear nor the same for each LED. Without calibration, they do not respond linearly to driving values

Fig. 10. Comparison of error diffusion to the original method as a 2D image and intensity profile.
The top circle represents the target image, in the center is the result of the error diffusion algorithm, and at the bottom the result of the algorithm from Section
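The error-diffusion backlight selection compared in Fig. 10 can be illustrated with a minimal one-dimensional sketch in the spirit of Floyd-Steinberg error diffusion [5]. This is only an illustration under simplifying assumptions (the display actually uses a two-dimensional hexagonal LED grid, and the function name is hypothetical): the quantization error of each LED's discrete driving level is carried to the next LED, so that average brightness over a region is preserved.

```python
def diffuse_led_levels(targets, levels=256, peak=1.0):
    # Quantize ideal (continuous) LED intensities in [0, peak] to discrete
    # driving levels, carrying the quantization error forward to the next
    # LED so that regional average brightness is preserved.
    step = peak / (levels - 1)
    drive, error = [], 0.0
    for t in targets:
        want = t + error                        # target plus diffused error
        q = min(levels - 1, max(0, round(want / step)))
        drive.append(q)
        error = want - q * step                 # residual carried forward
    return drive

# With only on/off LEDs, a uniform half-intensity target alternates 0/1,
# so the mean driving level still matches the target:
drive = diffuse_led_levels([0.5] * 4, levels=2)   # → [0, 1, 0, 1]
```

The same idea, applied over LED neighborhoods, lets the backlight hit the average transmissivity target a discussed above even though each LED only has 256 addressable steps.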

Fig. 11. Semilog plot of LCD panel response (solid blue) compared with the response x^2.5 (dashed green). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

The LEDs also have different peak intensities. We measure these differences with a Lumetrix IQCam [6], but multi-exposure HDR imaging could also be used [4,13].

4.4. LED point spread function

The point spread function (PSF) of the LEDs, previously discussed in Section 2.2 and shown in Fig. 1, is the most critical parameter in accurately rendering images. The measurement procedure is straightforward: we turn on a single LED and take a high dynamic range image of the display with a calibrated camera such as the Lumetrix IQCam. Because of the variation in peak intensity of the LEDs, we normalize the measured data, and later multiply it by the peak value computed from calibrating the individual LED intensities and responses. Several sources of measurement error can affect the quality of the imaged PSF: artifacts can appear due to the LCD pixel spacing and camera photosite spacing, as well as noise present in the HDR image. For these and other reasons, we do not use the measured image data directly, but instead fit a function to it. The PSF is similar to a Gaussian, but has a wider tail, so we model it as the sum of several Gaussians of varying scales and widths. We recover these values by solving a least-squares minimization problem for the relative scales and widths of the component Gaussians.

5. Evaluation

To evaluate the quality of the discussed algorithms, we processed a large number of HDR images with the display parameters of a commercial HDR display, the BrightSide/Dolby DR37-P.
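The sum-of-Gaussians fit of Section 4.4 can be sketched as follows. As a simplification, the component widths are held fixed here (the paper also recovers the widths, which makes the problem nonlinear); with fixed widths the scales are linear parameters, obtained from the normal equations A^T A s = A^T v. The function name and the toy data are illustrative only.

```python
import math

def fit_psf_scales(radii, values, sigmas):
    # Fit measured PSF samples with a sum of Gaussians of known widths.
    # With the widths fixed, the scales are linear parameters, so the
    # least-squares solution follows from the normal equations.
    basis = [[math.exp(-r * r / (2 * s * s)) for s in sigmas] for r in radii]
    n = len(sigmas)
    ata = [[sum(row[i] * row[j] for row in basis) for j in range(n)]
           for i in range(n)]
    atv = [sum(row[i] * v for row, v in zip(basis, values)) for i in range(n)]
    # Gauss-Jordan elimination (n is tiny, so no pivoting needed).
    for col in range(n):
        piv = ata[col][col]
        for j in range(col, n):
            ata[col][j] /= piv
        atv[col] /= piv
        for r2 in range(n):
            if r2 != col and ata[r2][col]:
                f = ata[r2][col]
                for j in range(col, n):
                    ata[r2][j] -= f * ata[col][j]
                atv[r2] -= f * atv[col]
    return atv

# Recover known scales from synthetic samples of a two-Gaussian PSF
# (a narrow core plus a wide tail):
radii = [0.0, 1.0, 2.0, 3.0]
samples = [2.0 * math.exp(-r * r / 2.0) + 0.5 * math.exp(-r * r / 18.0)
           for r in radii]
scales = fit_psf_scales(radii, samples, sigmas=[1.0, 3.0])
```

On this synthetic data the fit recovers the generating scales, 2.0 for the core and 0.5 for the tail, up to floating-point error.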
We then inspected the images visually on the actual display, and performed a quantitative analysis using the HDR visual difference predictor (VDP) by Mantiuk et al. [9]. Ideally, the comparison would be based on a user study instead of an algorithmic comparison. However, it is technically extremely challenging to create photometrically accurate, artifact-free reference images for a large variety of test images, so we resort to the VDP approach for now.

The display contains white LEDs on an 18.8 mm hexagonal close-packing matrix, where each LED is individually controlled over its entire dynamic range with 256 addressable steps. The LCD panel is a 37 in Chi Mei Optoelectronics V370H1-L01 panel with a 250:1 simultaneous contrast ratio (1). For a full white box occupying the center third of the screen, the maximum luminance is measured as 4760 cd/m^2. For a black image, the minimum luminance is zero, since all LEDs are off. The minimum luminance is less than 6 cd/m^2 on an ANSI 9 checkerboard (the VESA contrast standard).

The HDR VDP takes both the original and displayed images as input, and computes probabilities for the detection of image differences in local neighborhoods tiling the image. It works with absolute photometric units, and takes into account properties of the human visual system, such as veiling glare (Section 2.1), contrast sensitivity for different spatial frequencies, and non-linear luminance perception, among others. The output of the VDP can be somewhat hard to interpret. For computational efficiency, the VDP implementation only filters the images with respect to a subset of spatial frequencies and orientations. This approximation results in banded areas of detection which, upon first inspection, appear unrelated to the feature (see Fig. 12). The true probability distribution should be much smoother, and if all frequency bands and orientations were used, these features would be wider and more evenly defined.
For our purposes, this is not a serious limitation, since we only desire to infer the existence of a perceivable difference and its spatial extent and magnitude, rather than its exact shape. We visualize the detection probabilities as color-coded regions on top of a black-and-white version of the image. Probabilities over 95% are marked solid red, probabilities between 75% and 95% are shown as a gradient from green to red, and probabilities below 75% are not colored. It is important to note that the VDP computes detection probabilities based on side-by-side comparisons of individual local image regions with a reference. As such, it is a very conservative measure for the practically more relevant situation where a large image is presented without a reference.

5.1. Evaluation results

In the evaluation of our methods, we compare the original image to a simulation of the luminance values output by the display device. The measurements taken during the calibration process provide absolute luminance data, and we make use of them to accurately simulate the luminance values produced by the display hardware. We used version 1.2 of the HDR VDP software, with a simulated viewing distance of 3.5 m, and defaults for the other parameters. We have run our algorithms on a large number of HDR images, from which we choose two test patterns and five photographs as a representative sample set for discussion here. Each set is presented in the same way: the original image is on top, the display output is in the middle, and the VDP probability overlay is at the bottom. Since both the original and displayed images are HDR, for printing in this paper they are first tone-mapped to 8 bits using Reinhard et al.'s photographic tone-mapping operator [12].

Fig. 12. Example of HDR VDP output (see text for explanation).

(1) Display manufacturers often employ various methods of distorting the calculation of dynamic range, such as altering room illumination between measurements. The ANSI 9 checkerboard provides a standard measure of the usable display dynamic range, which we use to determine this number.

5.1.1. Test pattern

The left column of Fig. 13 shows a combination of several features at different frequencies. In the center are vertical and horizontal frequency gratings of different spacings, while the horizontal white bars above and below are linear gradients. There are solid rectangles on the left, and the outlined boxes on the right can be used to check the alignment of the display. The black level is set to 1 cd/m^2 and the peak intensity is set to 2200 cd/m^2. In direct side-by-side comparison, 0.71% of the pixels had more than a 95% probability of detection, and a somewhat larger fraction had more than a 75% probability. This is a very difficult image to reproduce correctly with the display hardware, especially on the right side near the outlined boxes. Several of the issues, especially on the right, stem from the fact that none of the outlined boxes are big enough to produce the required light intensity. The bright area is too small for veiling glare to obscure the excess backlight in the surrounding dark areas.
The bars outside the vertical box and the patches in the dark areas indicate that there is too much backlight. The larger patches are the result of the backlight being too bright over a large area. They do not appear adjacent to the outlined rectangles because veiling glare obscures the differences in those areas.

Fig. 13. Test pattern and frequency ramp sample images.
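The color coding used for the VDP overlays in these figures is a simple mapping from detection probability to color. The small helper below is only an illustration of that mapping (the function name is hypothetical, and it is not part of the VDP itself): solid red above 95%, a green-to-red ramp between 75% and 95%, and no coloring below 75%.

```python
def vdp_overlay_color(p):
    # Map a VDP detection probability p in [0, 1] to an overlay color
    # (r, g, b) in [0, 1], or None for "not colored".
    if p > 0.95:
        return (1.0, 0.0, 0.0)        # solid red: near-certain detection
    if p >= 0.75:
        t = (p - 0.75) / 0.20         # 0 at 75%, 1 at 95%
        return (t, 1.0 - t, 0.0)      # green fades to red
    return None                       # below threshold: leave uncolored
```

Applying this per pixel over a black-and-white version of the image yields overlays like the bottom rows of Figs. 13-15.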

5.1.2. Frequency ramp

The right column of Fig. 13 consists of alternating white and black boxes of various widths and heights, reminiscent of some of the DCT basis functions used by JPEG images. Once again, the black level is set to 1 cd/m^2 and the peak intensity is set to 2200 cd/m^2. In side-by-side comparison, 0.79% of the pixels had more than a 95% probability of detection, and a somewhat larger fraction had more than a 75% probability. Considering the edge contrasts and feature sizes, the algorithm performs well, but shows the common problem of failing to maintain peak intensity towards the edges of features. The red bars inside the white rectangles indicate where the LCD panel switched to full white and caused a perceivable discontinuity. The red bars in the corners of dark areas indicate excessive light being spilled from the two adjacent bright areas. As expected, the differences become more visible in the higher-frequency regions in the top right, where the feature size is smaller than an LED. The hexagonal packing of the LED grid is aligned horizontally, so while thin horizontal features can be accurately depicted, thin vertical features cause a saw-tooth-like vertical pattern that is detectable under certain circumstances. This orientation difference is why the error is detected in the upper right, but not in the lower left, where the same features are present at a different orientation.

5.1.3. Apartment

The left column of Fig. 14 is the first of several photographs of real scenes, and depicts an indoor environment. The values are roughly calibrated to absolute photometric units; the minimum luminance is 10^-2 cd/m^2 and the maximum value is 1620 cd/m^2. 0.16% of the pixels had more than a 95% probability of detection, and a somewhat larger fraction had more than a 75% probability. Compared to the test patterns, it has noticeably less error.
Most natural images do not contain such drastic contrast boundaries as the test patterns, and the result is fewer areas where the display is not able to accurately represent the image. Most of the error is in the small bright reflections on the balcony, or in the reflection of the lamp in the TV. While these differences are predicted to be detectable in direct comparison, the image quality produced by the HDR display is very good (center row), and free of disturbing artifacts.

5.1.4. Moraine

The right column of Fig. 14 is a sample of an outdoor scene. Again, the values are roughly calibrated to absolute photometric units. For this image, the minimum luminance is 0.5 cd/m^2 and the maximum value is 2200 cd/m^2. This image is perfectly represented on the display: 0.0% of the pixels had more than a 75% probability of detection in side-by-side comparison. No boundaries are so extreme that we cannot accurately reproduce luminance and detail on both sides.

Fig. 14. Apartment and Moraine test images.

Finally, Fig. 15 includes an additional three HDR images processed with our algorithm. For the Belgium image, 0.29% of the pixels had more than a 75% probability of detection, while 0.17% had more than a 95% probability. For the Fog image, 0.12% of the pixels had more than a 75% probability of detection, while 0.07% had more than a 95% probability. For the Atrium image, 0.22% of the pixels had more than a 75% probability of detection, while 0.14% had more than a 95% probability.

Fig. 15. Additional image samples. Left: Belgium courtesy Dani Lischinski. Center: Fog courtesy Jack Tumblin. Right: Atrium courtesy Karol Myszkowski.

These percentages are representative of the quality of our algorithm's reproduction of images. Most of the errors observed are in specular highlights, where we are not able to reproduce the luminances of the image. These artifacts are the result of the low-frequency nature of the backlight: we cannot increase the intensity of small features without adversely affecting the quality of surrounding regions. The tests we have performed show that, using the image processing algorithms presented in this paper, the representation of natural images on the HDR display is very faithful to the original HDR image. While it is certainly possible to construct test patterns resulting in detectable image differences, natural images do not usually exhibit this behavior. Furthermore, the fact that the VDP indicates differences detectable in direct comparison with a reference makes it a very conservative measure. Moreover, the differences introduced by the processing are not usually perceived as degrading the image quality. In a visual inspection of real-world images without a reference, even expert viewers miss a large percentage of the areas highlighted by the VDP algorithm, even more so for animated scenes, where visible artifacts are extremely rare.

6. Conclusions

In this paper, we have presented algorithms for the accurate depiction of photometrically calibrated images on dual-modulator HDR displays.
The steady increase in HDR imaging research has created a strong desire to display the additional luminance information that those techniques provide. Display hardware with the potential to fulfill these needs is now available, but due to material limitations the images produced are not pixel-perfect copies of the original image. Instead, the displays exploit fundamental limits of local contrast perception that are well documented in the psychophysics literature. The two image processing algorithms we discussed in this paper are in active use in the commercially available HDR displays,

and have been shown to several thousand people on various occasions, including trade shows.

As discussed in Section 3, we have not addressed the topic of remapping images with pixel values outside the displayable space of the monitor. Hence, there is an opportunity to improve tone-mapping techniques from very high dynamic range images to the HDR range that the monitor supports, as well as color space transformations, given the extra considerations required over larger contrast ranges. These topics and others are all aimed at more accurate color appearance models, which are needed for the accurate display of images. Fundamentally, all the same constraints found with LDR display systems still apply to HDR displays, but they have been loosened. Limits on peak intensity, feasible chromaticities, and other characteristics still exist. Research needs to be conducted into how well current practices work on HDR displays and how they could, or should, be improved.

Our evaluation with the HDR visible difference predictor shows that reproduction of natural images is very good, but limitations of both the hardware and the algorithms can be detected on test patterns and under direct comparison with ground-truth images. Although very difficult to implement in practice, in the future we would like to conduct a formal user study with photometrically calibrated reference scenes. It would also be interesting to design a user study centered around perceived image quality instead of difference with respect to a reference image.

References

[1] P.M. Acosta-Serafini, I. Masaki, C.G. Sodini, Single-chip imager system with programmable dynamic range, U.S. Patent 6,977,685.
[2] K. Chiu, M. Herf, P. Shirley, S. Swamy, C. Wang, K. Zimmerman, Spatially nonuniform scaling functions for high contrast images, in: Proceedings of Graphics Interface 1993 (1993).
[3] S. Daly, The visible differences predictor: an algorithm for the assessment of image fidelity, in: Digital Images and Human Vision (1993).
[4] P. Debevec, J. Malik, Recovering high dynamic range radiance maps from photographs, in: SIGGRAPH 1997: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-Wesley, New York, NY, USA (1997).
[5] R. Floyd, L. Steinberg, An adaptive algorithm for spatial grey scale, in: Proceedings of the SID International Symposium, Digest of Technical Papers (1975).
[6] Lumetrix, IQCam imaging photometer.
[7] R. Mantiuk, A. Efremov, K. Myszkowski, H. Seidel, Backward compatible high dynamic range MPEG video compression, ACM Transactions on Graphics (special issue SIGGRAPH 2006) 25 (3) (2006).
[8] R. Mantiuk, G. Krawczyk, K. Myszkowski, H. Seidel, Perception-motivated high dynamic range video encoding, ACM Transactions on Graphics (special issue SIGGRAPH 2004) 23 (3) (2004).
[9] R. Mantiuk, K. Myszkowski, H. Seidel, Visible difference predicator for high dynamic range images, in: Proceedings of IEEE International Conference on Systems, Man and Cybernetics (2004).
[10] P. Moon, D. Spencer, Visual data applied to lighting design, Journal of the Optical Society of America 34 (605) (1944).
[11] P. Moon, D. Spencer, The visual effect of non-uniform surrounds, Journal of the Optical Society of America 35 (3) (1945).
[12] E. Reinhard, M. Stark, P. Shirley, J. Ferwerda, Photographic tone reproduction for digital images, ACM Transactions on Graphics (special issue SIGGRAPH 2002) 21 (3) (2002).
[13] M. Robertson, S. Borman, R. Stevenson, Dynamic range improvements through multiple exposures, in: Proceedings of the International Conference on Image Processing (ICIP) 1999 (1999).
[14] H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A. Ghosh, A. Vorozcovs, High dynamic range display systems, ACM Transactions on Graphics (special issue SIGGRAPH 2004) 23 (3) (2004).
[15] H. Seetzen, L. Whitehead, G. Ward, A high dynamic range display using low and high resolution modulators, in: Society for Information Display International Symposium Digest of Technical Papers (2003).
[16] M. Trentacoste, Photometric Image Processing for HDR Displays, Master's thesis, The University of British Columbia (2006).
[17] G. Ward, A wide field, high dynamic range, stereographic viewer, in: Proceedings of PICS 2002, April 2002.
[18] G. Ward, M. Simmons, Subband encoding of high dynamic range imagery, in: APGV 2004: Proceedings of the First Symposium on Applied Perception in Graphics and Visualization, ACM Press, New York, NY, USA (2004).


More information

On spatial resolution

On spatial resolution On spatial resolution Introduction How is spatial resolution defined? There are two main approaches in defining local spatial resolution. One method follows distinction criteria of pointlike objects (i.e.

More information

CS 465 Prelim 1. Tuesday 4 October hours. Problem 1: Image formats (18 pts)

CS 465 Prelim 1. Tuesday 4 October hours. Problem 1: Image formats (18 pts) CS 465 Prelim 1 Tuesday 4 October 2005 1.5 hours Problem 1: Image formats (18 pts) 1. Give a common pixel data format that uses up the following numbers of bits per pixel: 8, 16, 32, 36. For instance,

More information

Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images

Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images Performance Evaluation of Edge Detection Techniques for Square Pixel and Hexagon Pixel images Keshav Thakur 1, Er Pooja Gupta 2,Dr.Kuldip Pahwa 3, 1,M.Tech Final Year Student, Deptt. of ECE, MMU Ambala,

More information

Image and Video Processing

Image and Video Processing Image and Video Processing () Image Representation Dr. Miles Hansard miles.hansard@qmul.ac.uk Segmentation 2 Today s agenda Digital image representation Sampling Quantization Sub-sampling Pixel interpolation

More information

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS

SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 - COMPUTERIZED IMAGING Section I: Chapter 2 RADT 3463 Computerized Imaging 1 SECTION I - CHAPTER 2 DIGITAL IMAGING PROCESSING CONCEPTS RADT 3463 COMPUTERIZED IMAGING Section I: Chapter 2 RADT

More information

Noise Characteristics of a High Dynamic Range Camera with Four-Chip Optical System

Noise Characteristics of a High Dynamic Range Camera with Four-Chip Optical System Journal of Electrical Engineering 6 (2018) 61-69 doi: 10.17265/2328-2223/2018.02.001 D DAVID PUBLISHING Noise Characteristics of a High Dynamic Range Camera with Four-Chip Optical System Takayuki YAMASHITA

More information

Camera Image Processing Pipeline: Part II

Camera Image Processing Pipeline: Part II Lecture 13: Camera Image Processing Pipeline: Part II Visual Computing Systems Today Finish image processing pipeline Auto-focus / auto-exposure Camera processing elements Smart phone processing elements

More information

The Science Seeing of process Digital Media. The Science of Digital Media Introduction

The Science Seeing of process Digital Media. The Science of Digital Media Introduction The Human Science eye of and Digital Displays Media Human Visual System Eye Perception of colour types terminology Human Visual System Eye Brains Camera and HVS HVS and displays Introduction 2 The Science

More information

A Wavelet-Based Encoding Algorithm for High Dynamic Range Images

A Wavelet-Based Encoding Algorithm for High Dynamic Range Images The Open Signal Processing Journal, 2010, 3, 13-19 13 Open Access A Wavelet-Based Encoding Algorithm for High Dynamic Range Images Frank Y. Shih* and Yuan Yuan Department of Computer Science, New Jersey

More information

DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 2002

DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 2002 DIGITAL IMAGE PROCESSING (COM-3371) Week 2 - January 14, 22 Topics: Human eye Visual phenomena Simple image model Image enhancement Point processes Histogram Lookup tables Contrast compression and stretching

More information

HIGH DYNAMIC RANGE VERSUS STANDARD DYNAMIC RANGE COMPRESSION EFFICIENCY

HIGH DYNAMIC RANGE VERSUS STANDARD DYNAMIC RANGE COMPRESSION EFFICIENCY HIGH DYNAMIC RANGE VERSUS STANDARD DYNAMIC RANGE COMPRESSION EFFICIENCY Ronan Boitard Mahsa T. Pourazad Panos Nasiopoulos University of British Columbia, Vancouver, Canada TELUS Communications Inc., Vancouver,

More information

CPSC 4040/6040 Computer Graphics Images. Joshua Levine

CPSC 4040/6040 Computer Graphics Images. Joshua Levine CPSC 4040/6040 Computer Graphics Images Joshua Levine levinej@clemson.edu Lecture 04 Displays and Optics Sept. 1, 2015 Slide Credits: Kenny A. Hunt Don House Torsten Möller Hanspeter Pfister Agenda Open

More information

Image Processing Computer Graphics I Lecture 20. Display Color Models Filters Dithering Image Compression

Image Processing Computer Graphics I Lecture 20. Display Color Models Filters Dithering Image Compression 15-462 Computer Graphics I Lecture 2 Image Processing April 18, 22 Frank Pfenning Carnegie Mellon University http://www.cs.cmu.edu/~fp/courses/graphics/ Display Color Models Filters Dithering Image Compression

More information

BBM 413! Fundamentals of! Image Processing!

BBM 413! Fundamentals of! Image Processing! BBM 413! Fundamentals of! Image Processing! Today s topics" Point operations! Histogram processing! Erkut Erdem" Dept. of Computer Engineering" Hacettepe University" "! Point Operations! Histogram Processing!

More information

Firas Hassan and Joan Carletta The University of Akron

Firas Hassan and Joan Carletta The University of Akron A Real-Time FPGA-Based Architecture for a Reinhard-Like Tone Mapping Operator Firas Hassan and Joan Carletta The University of Akron Outline of Presentation Background and goals Existing methods for local

More information

BBM 413 Fundamentals of Image Processing. Erkut Erdem Dept. of Computer Engineering Hacettepe University. Point Operations Histogram Processing

BBM 413 Fundamentals of Image Processing. Erkut Erdem Dept. of Computer Engineering Hacettepe University. Point Operations Histogram Processing BBM 413 Fundamentals of Image Processing Erkut Erdem Dept. of Computer Engineering Hacettepe University Point Operations Histogram Processing Today s topics Point operations Histogram processing Today

More information

BBM 413 Fundamentals of Image Processing. Erkut Erdem Dept. of Computer Engineering Hacettepe University. Point Operations Histogram Processing

BBM 413 Fundamentals of Image Processing. Erkut Erdem Dept. of Computer Engineering Hacettepe University. Point Operations Histogram Processing BBM 413 Fundamentals of Image Processing Erkut Erdem Dept. of Computer Engineering Hacettepe University Point Operations Histogram Processing Today s topics Point operations Histogram processing Today

More information

DISPLAY metrology measurement

DISPLAY metrology measurement Curved Displays Challenge Display Metrology Non-planar displays require a close look at the components involved in taking their measurements. by Michael E. Becker, Jürgen Neumeier, and Martin Wolf DISPLAY

More information

Image Distortion Maps 1

Image Distortion Maps 1 Image Distortion Maps Xuemei Zhang, Erick Setiawan, Brian Wandell Image Systems Engineering Program Jordan Hall, Bldg. 42 Stanford University, Stanford, CA 9435 Abstract Subjects examined image pairs consisting

More information

Image Deblurring and Noise Reduction in Python TJHSST Senior Research Project Computer Systems Lab

Image Deblurring and Noise Reduction in Python TJHSST Senior Research Project Computer Systems Lab Image Deblurring and Noise Reduction in Python TJHSST Senior Research Project Computer Systems Lab 2009-2010 Vincent DeVito June 16, 2010 Abstract In the world of photography and machine vision, blurry

More information

FEATURE. Adaptive Temporal Aperture Control for Improving Motion Image Quality of OLED Display

FEATURE. Adaptive Temporal Aperture Control for Improving Motion Image Quality of OLED Display Adaptive Temporal Aperture Control for Improving Motion Image Quality of OLED Display Takenobu Usui, Yoshimichi Takano *1 and Toshihiro Yamamoto *2 * 1 Retired May 217, * 2 NHK Engineering System, Inc

More information

Cvision 2. António J. R. Neves João Paulo Silva Cunha. Bernardo Cunha. IEETA / Universidade de Aveiro

Cvision 2. António J. R. Neves João Paulo Silva Cunha. Bernardo Cunha. IEETA / Universidade de Aveiro Cvision 2 Digital Imaging António J. R. Neves (an@ua.pt) & João Paulo Silva Cunha & Bernardo Cunha IEETA / Universidade de Aveiro Outline Image sensors Camera calibration Sampling and quantization Data

More information

Visible Light Communication-based Indoor Positioning with Mobile Devices

Visible Light Communication-based Indoor Positioning with Mobile Devices Visible Light Communication-based Indoor Positioning with Mobile Devices Author: Zsolczai Viktor Introduction With the spreading of high power LED lighting fixtures, there is a growing interest in communication

More information

Chapter 9 Image Compression Standards

Chapter 9 Image Compression Standards Chapter 9 Image Compression Standards 9.1 The JPEG Standard 9.2 The JPEG2000 Standard 9.3 The JPEG-LS Standard 1IT342 Image Compression Standards The image standard specifies the codec, which defines how

More information

Visibility of Uncorrelated Image Noise

Visibility of Uncorrelated Image Noise Visibility of Uncorrelated Image Noise Jiajing Xu a, Reno Bowen b, Jing Wang c, and Joyce Farrell a a Dept. of Electrical Engineering, Stanford University, Stanford, CA. 94305 U.S.A. b Dept. of Psychology,

More information

Image Processing. Adrien Treuille

Image Processing. Adrien Treuille Image Processing http://croftonacupuncture.com/db5/00415/croftonacupuncture.com/_uimages/bigstockphoto_three_girl_friends_celebrating_212140.jpg Adrien Treuille Overview Image Types Pixel Filters Neighborhood

More information

Digital Image Processing. Lecture # 6 Corner Detection & Color Processing

Digital Image Processing. Lecture # 6 Corner Detection & Color Processing Digital Image Processing Lecture # 6 Corner Detection & Color Processing 1 Corners Corners (interest points) Unlike edges, corners (patches of pixels surrounding the corner) do not necessarily correspond

More information

Automatic Selection of Brackets for HDR Image Creation

Automatic Selection of Brackets for HDR Image Creation Automatic Selection of Brackets for HDR Image Creation Michel VIDAL-NAQUET, Wei MING Abstract High Dynamic Range imaging (HDR) is now readily available on mobile devices such as smart phones and compact

More information

RGB Laser Meter TM6102, RGB Laser Luminance Meter TM6103, Optical Power Meter TM6104

RGB Laser Meter TM6102, RGB Laser Luminance Meter TM6103, Optical Power Meter TM6104 1 RGB Laser Meter TM6102, RGB Laser Luminance Meter TM6103, Optical Power Meter TM6104 Abstract The TM6102, TM6103, and TM6104 accurately measure the optical characteristics of laser displays (characteristics

More information

Subjective evaluation of image color damage based on JPEG compression

Subjective evaluation of image color damage based on JPEG compression 2014 Fourth International Conference on Communication Systems and Network Technologies Subjective evaluation of image color damage based on JPEG compression Xiaoqiang He Information Engineering School

More information

Image Enhancement for Astronomical Scenes. Jacob Lucas The Boeing Company Brandoch Calef The Boeing Company Keith Knox Air Force Research Laboratory

Image Enhancement for Astronomical Scenes. Jacob Lucas The Boeing Company Brandoch Calef The Boeing Company Keith Knox Air Force Research Laboratory Image Enhancement for Astronomical Scenes Jacob Lucas The Boeing Company Brandoch Calef The Boeing Company Keith Knox Air Force Research Laboratory ABSTRACT Telescope images of astronomical objects and

More information

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Ashill Chiranjan and Bernardt Duvenhage Defence, Peace, Safety and Security Council for Scientific

More information

Burst Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 7! Gordon Wetzstein! Stanford University!

Burst Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 7! Gordon Wetzstein! Stanford University! Burst Photography! EE367/CS448I: Computational Imaging and Display! stanford.edu/class/ee367! Lecture 7! Gordon Wetzstein! Stanford University! Motivation! wikipedia! exposure sequence! -4 stops! Motivation!

More information

Image Filtering. Median Filtering

Image Filtering. Median Filtering Image Filtering Image filtering is used to: Remove noise Sharpen contrast Highlight contours Detect edges Other uses? Image filters can be classified as linear or nonlinear. Linear filters are also know

More information

On Contrast Sensitivity in an Image Difference Model

On Contrast Sensitivity in an Image Difference Model On Contrast Sensitivity in an Image Difference Model Garrett M. Johnson and Mark D. Fairchild Munsell Color Science Laboratory, Center for Imaging Science Rochester Institute of Technology, Rochester New

More information

High dynamic range imaging and tonemapping

High dynamic range imaging and tonemapping High dynamic range imaging and tonemapping http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 12 Course announcements Homework 3 is out. - Due

More information

ICC Votable Proposal Submission Colorimetric Intent Image State Tag Proposal

ICC Votable Proposal Submission Colorimetric Intent Image State Tag Proposal ICC Votable Proposal Submission Colorimetric Intent Image State Tag Proposal Proposers: Jack Holm, Eric Walowit & Ann McCarthy Date: 16 June 2006 Proposal Version 1.2 1. Introduction: The ICC v4 specification

More information

The Quantitative Aspects of Color Rendering for Memory Colors

The Quantitative Aspects of Color Rendering for Memory Colors The Quantitative Aspects of Color Rendering for Memory Colors Karin Töpfer and Robert Cookingham Eastman Kodak Company Rochester, New York Abstract Color reproduction is a major contributor to the overall

More information

Image Processing. Michael Kazhdan ( /657) HB Ch FvDFH Ch. 13.1

Image Processing. Michael Kazhdan ( /657) HB Ch FvDFH Ch. 13.1 Image Processing Michael Kazhdan (600.457/657) HB Ch. 14.4 FvDFH Ch. 13.1 Outline Human Vision Image Representation Reducing Color Quantization Artifacts Basic Image Processing Human Vision Model of Human

More information

Application Note (A13)

Application Note (A13) Application Note (A13) Fast NVIS Measurements Revision: A February 1997 Gooch & Housego 4632 36 th Street, Orlando, FL 32811 Tel: 1 407 422 3171 Fax: 1 407 648 5412 Email: sales@goochandhousego.com In

More information

Denoising and Effective Contrast Enhancement for Dynamic Range Mapping

Denoising and Effective Contrast Enhancement for Dynamic Range Mapping Denoising and Effective Contrast Enhancement for Dynamic Range Mapping G. Kiruthiga Department of Electronics and Communication Adithya Institute of Technology Coimbatore B. Hakkem Department of Electronics

More information

DodgeCmd Image Dodging Algorithm A Technical White Paper

DodgeCmd Image Dodging Algorithm A Technical White Paper DodgeCmd Image Dodging Algorithm A Technical White Paper July 2008 Intergraph ZI Imaging 170 Graphics Drive Madison, AL 35758 USA www.intergraph.com Table of Contents ABSTRACT...1 1. INTRODUCTION...2 2.

More information

Enhanced LWIR NUC Using an Uncooled Microbolometer Camera

Enhanced LWIR NUC Using an Uncooled Microbolometer Camera Enhanced LWIR NUC Using an Uncooled Microbolometer Camera Joe LaVeigne a, Greg Franks a, Kevin Sparkman a, Marcus Prewarski a, Brian Nehring a a Santa Barbara Infrared, Inc., 30 S. Calle Cesar Chavez,

More information

Midterm Examination CS 534: Computational Photography

Midterm Examination CS 534: Computational Photography Midterm Examination CS 534: Computational Photography November 3, 2015 NAME: SOLUTIONS Problem Score Max Score 1 8 2 8 3 9 4 4 5 3 6 4 7 6 8 13 9 7 10 4 11 7 12 10 13 9 14 8 Total 100 1 1. [8] What are

More information

Practical assessment of veiling glare in camera lens system

Practical assessment of veiling glare in camera lens system Professional paper UDK: 655.22 778.18 681.7.066 Practical assessment of veiling glare in camera lens system Abstract Veiling glare can be defined as an unwanted or stray light in an optical system caused

More information

International Journal of Advance Engineering and Research Development. Asses the Performance of Tone Mapped Operator compressing HDR Images

International Journal of Advance Engineering and Research Development. Asses the Performance of Tone Mapped Operator compressing HDR Images Scientific Journal of Impact Factor (SJIF): 4.72 International Journal of Advance Engineering and Research Development Volume 4, Issue 9, September -2017 e-issn (O): 2348-4470 p-issn (P): 2348-6406 Asses

More information

Chapter 2: Digital Image Fundamentals. Digital image processing is based on. Mathematical and probabilistic models Human intuition and analysis

Chapter 2: Digital Image Fundamentals. Digital image processing is based on. Mathematical and probabilistic models Human intuition and analysis Chapter 2: Digital Image Fundamentals Digital image processing is based on Mathematical and probabilistic models Human intuition and analysis 2.1 Visual Perception How images are formed in the eye? Eye

More information

Simulation of film media in motion picture production using a digital still camera

Simulation of film media in motion picture production using a digital still camera Simulation of film media in motion picture production using a digital still camera Arne M. Bakke, Jon Y. Hardeberg and Steffen Paul Gjøvik University College, P.O. Box 191, N-2802 Gjøvik, Norway ABSTRACT

More information

Determining MTF with a Slant Edge Target ABSTRACT AND INTRODUCTION

Determining MTF with a Slant Edge Target ABSTRACT AND INTRODUCTION Determining MTF with a Slant Edge Target Douglas A. Kerr Issue 2 October 13, 2010 ABSTRACT AND INTRODUCTION The modulation transfer function (MTF) of a photographic lens tells us how effectively the lens

More information

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY

INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY INTERNATIONAL JOURNAL OF PURE AND APPLIED RESEARCH IN ENGINEERING AND TECHNOLOGY A PATH FOR HORIZING YOUR INNOVATIVE WORK IMAGE COMPRESSION FOR TROUBLE FREE TRANSMISSION AND LESS STORAGE SHRUTI S PAWAR

More information

Tonemapping and bilateral filtering

Tonemapping and bilateral filtering Tonemapping and bilateral filtering http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2018, Lecture 6 Course announcements Homework 2 is out. - Due September

More information

Reference Free Image Quality Evaluation

Reference Free Image Quality Evaluation Reference Free Image Quality Evaluation for Photos and Digital Film Restoration Majed CHAMBAH Université de Reims Champagne-Ardenne, France 1 Overview Introduction Defects affecting films and Digital film

More information

The Quality of Appearance

The Quality of Appearance ABSTRACT The Quality of Appearance Garrett M. Johnson Munsell Color Science Laboratory, Chester F. Carlson Center for Imaging Science Rochester Institute of Technology 14623-Rochester, NY (USA) Corresponding

More information