High Dynamic Range Imaging: Towards the Limits of the Human Visual Perception


Rafał Mantiuk
Max-Planck-Institut für Informatik, Saarbrücken

1 Introduction

The vast majority of digital images and video material stored today captures only a fraction of the visual information visible to the human eye and does not offer sufficient quality for reproduction on future generations of display devices. The limiting factor is not the resolution, since most consumer-level digital cameras capture images with more pixels than most displays can show. The problem is the limited color gamut and the even more limited dynamic range (contrast) that cameras can capture and that the majority of image and video formats can store. For instance, each pixel value in the JPEG image encoding is represented using three 8-bit integer numbers (0-255) in the YCrCb color space. Such a color space can store only a small part of the visible color gamut (although it contains the colors most often encountered in the real world), as illustrated in Figure 1 (left), and an even smaller part of the luminance range that our eyes can perceive, as illustrated in Figure 1 (right). The reason for this is that the JPEG format was designed to store as much information as can be displayed on the majority of displays, which were Cathode Ray Tube (CRT) monitors at the time when the JPEG compression was developed.

Fig. 1: Left: the color gamut frequently used in traditional imaging (CCIR-709), compared to the full visible color gamut. Right: real-world luminance values compared with the range of luminance that can be displayed on CRT and LCD monitors.

This assumption is no longer valid, as the new generations of LCD and Plasma displays can visualize a much broader color gamut and dynamic range than their CRT ancestors. Moreover, as new display devices become available, there is a need for higher precision of image and video content. Traditional low-dynamic-range and limited-color-gamut imaging, which is confined to three 8-bit integer color channels, cannot offer the precision that is needed for further developments in image capture and display technologies.

High Dynamic Range Imaging (HDRI) overcomes the limitations of traditional imaging by using much higher precision when performing operations on color. Pixel colors in HDR images are specified as a triple of floating point values (usually 32 bits per color channel), so that quantization errors remain far below the visibility threshold of the human eye. Moreover, HDRI operates on the colors of original scenes, instead of their renderings on a particular display medium, as is the case in traditional imaging. Owing to its inherent colorimetric precision, HDRI can represent all colors that can be found in the real world and perceived by the human eye.

HDRI, which originated in the computer graphics field, has recently been gaining momentum and revolutionizing almost all fields of digital imaging. One of the breakthroughs of the HDR revolution was the development of an HDR display, which proved that the visualization of color and luminance ranges close to those of real scenes is possible (Seetzen, Heidrich, Stuerzlinger, Ward, Whitehead, Trentacoste, Ghosh & Vorozcovs 2004). Among the first to adopt HDRI were video game developers together with graphics card vendors. Today most state-of-the-art video game engines perform rendering with HDR precision to deliver more believable and appealing virtual-reality worlds. Computer-generated imagery used in special-effects production strongly depends on HDR techniques.

High-end cinematographic cameras, both analog and digital, already provide a significantly higher dynamic range than most displays today. Their quality can be retained after digitization only if a form of HDR representation is used. HDRI is also a strong trend in digital photography, mostly due to multi-exposure techniques, which can be used to take an HDR image even with a consumer-level digital camera. To catch up with the HDR trend, many software vendors announce their support of HDR image formats, Adobe® Photoshop® CS2 being one example.

Besides its significant impact on the existing imaging technologies that we can observe today, HDRI has the potential to radically change the way imaging data is processed, displayed and preserved in several fields of science. Computer vision algorithms can greatly benefit from the increased precision of HDR images, which lack over- or under-exposed regions that are often the cause of algorithm failures. Medical imaging has already developed image formats (the DICOM format) that can partly cope with the shortcomings of traditional images; however, they are supported only by specialized hardware and software. HDRI offers sufficient precision for medical imaging, and therefore its capture, processing and rendering techniques can also be used in this field. For instance, HDR displays can show even better contrast than high-end medical displays and can therefore facilitate diagnosis based on CT scans. HDR techniques can also find applications in astronomical imaging, remote sensing, industrial design and scientific visualization.

HDRI not only provides higher precision, but also makes it possible to synthesize, store and visualize a range of perceptual cues that are not achievable with traditional imaging. Most imaging standards and color spaces have been developed to match the needs of office or display illumination conditions. When viewing such scenes or images in such conditions, our visual system operates in a mixed day-light and dim-light state, the so-called mesopic vision. When viewing outdoor scenes, we use day-light color perception, the so-called photopic vision. This distinction is important for digital imaging, as the two types of vision show different performance and result in a different perception of colors. HDRI can represent images whose luminance range fully covers both photopic and mesopic vision, thus making the distinction between them possible. One of the differences between mesopic and photopic vision is the impression of the colorfulness of objects. We tend to perceive objects as more colorful when they are brightly illuminated, a phenomenon known as Hunt's effect. To render enhanced colorfulness properly, digital images must preserve information about the actual level of luminance of the original scene, which is not possible in the case of traditional imaging. Real-world scenes are not only brighter and more colorful than their digital reproductions, but also contain much higher contrast, both local, between neighboring objects, and global, between distant objects.

The eye has evolved to cope with such high contrast, and its presence in a scene evokes important perceptual cues. Traditional imaging, unlike HDRI, is not able to represent such high-contrast scenes. Similarly, traditional images can hardly represent such common visual phenomena as self-luminous surfaces (the sun, shining lamps) and bright specular highlights. They also do not contain enough information to reproduce visual glare (the brightening of areas surrounding shining objects) and the short-time dazzle due to a sudden rise of light level (e.g. when exposed to sunlight after staying indoors). To faithfully represent, store and then reproduce all these effects, the original scene must be stored and treated using high-fidelity HDR techniques.

Despite its advantages, the introduction of HDRI in various fields of digital imaging poses serious problems. The biggest is the lack of well-standardized color spaces and image formats, which traditional imaging has in abundance. Such color spaces and image formats would facilitate the exchange of information between HDR applications. Due to the different treatment of color, the introduction of HDRI also requires redesigning the entire imaging pipeline, including acquisition (cameras, computer graphics synthesis), storage (formats, compression algorithms) and display (HDR display devices and display algorithms).

This paper summarizes the work we have done to make the transition from traditional imaging to HDRI smoother. In the next section we describe our implementation of an HDR image and video processing framework, which we created for the purpose of our research projects and which we have made available as an Open Source project. Section 3 describes our contributions in the field of HDR image and video encoding. These include a perceptually motivated color space for efficient encoding of HDR pixels and two extensions of the MPEG standard that allow movies containing the full color gamut and luminance range visible to the human eye to be stored.

2 HDR Imaging Framework

Most traditional image processing libraries store each pixel using limited-precision integer numbers. Moreover, they offer restricted means of colorimetric calibration. To overcome these problems, we have implemented an HDR imaging framework as a package of several command-line programs for reading, writing, manipulating and viewing high dynamic range (HDR) images and video frames. The package was intended to solve our current research problems, therefore simplicity and flexibility were priorities in its design. Since we found the software very useful in numerous projects, we decided to make it available to the research community as an Open Source project licensed under the GPL.

The software is distributed under the name pfstools and its home page can be found at net/. The major role of the software is the integration of several imaging and image format libraries, such as ImageMagick, OpenEXR and NetPBM, into a single framework for processing high-precision images. To provide enough flexibility for a broad range of applications, we have built pfstools on the following concepts:

Images/frames should hold an arbitrary number of channels (layers), which can represent not only color, but also depth, alpha-channel and texture attributes;

Each channel should be stored with high precision, using floating point numbers. If possible, the data should be colorimetrically calibrated and provide a precision that exceeds the performance of the human visual system. Luminance should be stored using physical units of cd/m^2 to distinguish between night- and day-light vision;

There should be user-defined data entries for storing additional, application-specific information (e.g. the colorimetric coordinates of the white point).

pfstools is built around a generic and simple format for storing images, which requires only a few lines of code to read or write. The format offers an arbitrary number of channels, each represented as a 2-D array of 32-bit floating point numbers. There is no compression, as files in this format are intended to be transferred internally between applications without writing them to a disk. A few channels have a predefined function. For example, channels with the IDs X, Y and Z are used to store color data in the CIE XYZ (absolute) color space. This is different from most imaging frameworks, which operate on RGB channels. The advantage of the CIE XYZ color space is that it is precisely defined in terms of spectral radiance, and the full visible color gamut can be represented using only positive values of the color components. The file format also offers a way to include in an image any number of user tags (name and value pairs), which can contain any application-dependent data. A sequence of images is interpreted by all pfs-compliant applications as consecutive frames of an animation, so that video can be processed in the same way as images. The format is described in detail in a separate specification [1].

[1] Specification of the pfs format can be found at:
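To make the role of the X, Y and Z channels concrete, the short sketch below (an illustration written for this text, not part of pfstools; the assumed display peak of 100 cd/m^2 and the use of numpy are our own choices) converts display-referred 8-bit sRGB pixels into absolute CIE XYZ channels stored as 32-bit floats, which is the kind of data the X, Y and Z channels of the pfs format are meant to hold.

```python
import numpy as np

# Rec. 709 / sRGB primaries to CIE XYZ (D65) conversion matrix
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

def srgb_to_absolute_xyz(srgb, peak_luminance=100.0):
    """Convert 8-bit sRGB pixels to absolute CIE XYZ channels (float32).

    `peak_luminance` (cd/m^2) is an assumed calibration of the display the
    image was prepared for; the pfs format expects Y in cd/m^2.
    """
    c = srgb.astype(np.float32) / 255.0
    # Inverse sRGB nonlinearity (gamma correction) -> linear RGB
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> relative XYZ, then scale so that Y=1 maps to the peak
    xyz = linear @ RGB_TO_XYZ.T * peak_luminance
    return xyz.astype(np.float32)   # shape (..., 3): X, Y, Z channels

# Example: a single mid-gray pixel
print(srgb_to_absolute_xyz(np.array([[128, 128, 128]])))
```

Data prepared this way is colorimetrically calibrated in the sense used above: the Y channel carries absolute luminance, and only positive values are needed to cover the full visible gamut.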

pfstools is a set of command-line tools with almost no graphical user interface. This greatly facilitates scripting and lessens the amount of work needed to program and maintain a user interface. The exception is a viewer of HDR images. The main components of pfstools are: programs for reading and writing images in all major HDR and LDR formats (e.g. OpenEXR, Radiance's RGBE, LogLuv TIFF, 16-bit TIFF, PFM, JPEG, PNG, etc.), programs for basic image manipulation (rotation, scaling, cropping, etc.), an HDR image viewer, and a library that simplifies file format reading and writing in C++. The package also includes an interface for Matlab and GNU Octave.

The pfstools framework does not impose any restrictions on the programming language. All programs that exchange data with pfstools must read or write the file format, but there is no need to use any particular library. The typical usage of pfstools involves executing several programs joined by UNIX pipes. The first program transmits the current frame or image to the next one in the chain. The final program should either display an image or write it to a disk. Such a pipeline architecture improves the flexibility of the software and also gives straightforward means for parallel execution of the pipeline components on multiprocessor computers. Some examples of command lines are given below:

pfsin input.exr | pfsfilter | pfsout output.exr
Read the image input.exr, apply the filter pfsfilter and write the output to output.exr.

pfsin input.exr | pfsfilter | pfsview
Read the image input.exr, apply the filter pfsfilter and show the result in an HDR image viewer.

pfsin in%04d.exr --frames 100:2:200 | pfsfilter | pfsout out%04d.hdr
Read the sequence of OpenEXR frames in0100.exr, in0102.exr, ..., in0200.exr, apply the filter pfsfilter and write the result in Radiance's RGBE format to out0000.hdr, out0001.hdr, ...

pfstools is only a base set of tools, which can be easily extended and integrated with other software. For example, pfstools is used to read, write and convert images and video frames for the prototype implementation of our image and video compression algorithms. HDR images can be rendered on existing displays using one of the several tone-mapping algorithms implemented in the pfstmo package [2], which is built on top of pfstools. Using the software from the pfscalibration package [3], which is also based on pfstools, cameras can be calibrated and images rescaled to physical or colorimetric units.

[2] pfstmo home page:
[3] pfscalibration home page: calibration/pfs.html

A computational model of the human visual system, HDR-VDP [4], uses pfstools to read its input from a multitude of image formats. We created pfstools to fill a gap in imaging software, which can seldom handle HDR images. We have found from the e-mails we received and from discussion group contacts that pfstools is used for high-definition HDR video encoding, medical imaging, a variety of tone-mapping projects, texture manipulation and quality evaluation of CG rendering.

3 HDR Image and Video Compression

Wide acceptance of a new imaging technology is hardly possible if there is no image and video content that the users could benefit from. The distribution of digital content is strongly limited if there is no efficient image and video compression and no standard file formats that software and hardware can recognize and read. In this section we propose several solutions to the problem of HDR image and video compression, including a color space for HDR pixels that is used as an extension to the MPEG-4 standard, and a backward-compatible HDR MPEG compression algorithm.

3.1 Color Space for HDR Pixels

Although the most natural representation of HDR images is a triple of floating point numbers, such a representation does not lead to the best image or video compression ratios and adds complexity to compression algorithms. Moreover, since the existing image and video formats, such as MPEG-4 or JPEG2000, can encode only integer numbers, HDR pixels must be represented as integers in order to encode them using these formats. Therefore, it is highly desirable to convert HDR pixels from a triple of 32-bit floating point values to integer numbers. Such an integer encoding of luminance should take into account the limitations of human perception and the fact that the eye can see only a limited number of luminance levels and colors. This section gives an overview of a color space that can efficiently represent HDR pixel values using only integer numbers and a minimal number of bits. More information on this color space can be found in (Mantiuk, Myszkowski & Seidel 2006).

Different applications may require different precision of the visual data. For example, satellite imaging may require multi-spectral techniques to capture information that is not even visible to the human eye. However, for a large number of applications it is sufficient if the human eye cannot notice any encoding artifacts.

[4] HDR-VDP home page: index.html
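To illustrate why an integer encoding of absolute luminance has to respect the limitations of human perception, the small sketch below (our own illustration; the 12-bit budget and the luminance range of 10^-5 to 10^10 cd/m^2 are assumptions taken from the discussion later in this section) compares the relative quantization step of a naive linear mapping with that of a logarithmic one.

```python
import numpy as np

# Compare how finely a 12-bit integer code represents absolute luminance
# when the code values are spaced linearly versus logarithmically.
# The range and bit budget below are assumptions for illustration only.
y_min, y_max, levels = 1e-5, 1e10, 4096
y = np.array([1e-3, 1.0, 1e3, 1e6, 1e9])   # sample luminances in cd/m^2

# Linear spacing: the step is constant in cd/m^2, so the relative error
# explodes for dark scene regions.
lin_step = (y_max - y_min) / (levels - 1)
print("linear, step relative to y:", lin_step / y)

# Logarithmic spacing: the *relative* step is constant (~0.85% per code
# value here), which is closer to, though still not matching, the eye's
# threshold-versus-intensity behaviour discussed in this section.
log_step = (np.log10(y_max) - np.log10(y_min)) / (levels - 1)
print("log, constant relative step:", 10**log_step - 1)
```

Neither spacing matches the visual system exactly, which is what motivates the perceptually derived encoding introduced below.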

It is important to note that low dynamic range formats, like JPEG or simple-profile MPEG, cannot represent the full range of colors that the eye can see. Although the quantization artifacts due to the 8-bit discretization in those formats are hardly visible to our eyes, those encodings can represent only a fraction of the dynamic range and color gamut that the eye can see.

The choice of the color space used for image or video compression has a great impact on the compression performance and on the capabilities of the encoding format. To offer the best trade-off between compression efficiency and visual quality without imposing any assumptions on the display technology, we propose that the color space used for compression should have the following properties:

1. The color space can encode the full color gamut and the full range of luminance that is visible to the human eye. This way the human eye, instead of the current imaging technology, defines the limits of such an encoding.
2. A unit distance in the color space correlates with the Just Noticeable Difference (JND). This offers a more uniform distribution of distortions across an image and simplifies control over distortions for lossy compression algorithms.
3. Only positive integer values are used to encode luminance and color. An integer representation simplifies and improves image and video compression.
4. A half-unit distance in the color space is below 1 JND. If this condition is met, the quantization errors due to rounding to integer numbers are not visible.
5. The correlation between color channels should be minimal. If color channels are correlated, the same information is encoded twice, which worsens the compression performance.
6. There is a direct relation between the encoded integer values and the photometrically calibrated XYZ color values.

There are several color spaces that already meet some of the above requirements, but there is no color space that accommodates them all. For example, the Euclidean distance in the CIE L*u*v* color space correlates with the JND (Property 2), but this color space does not generalize to the full range of visible luminance levels, ranging from scotopic light levels to very bright photopic conditions. Several perceptually uniform quantization strategies have been proposed (Sezan, Yip & Daly 1987, Lubin & Pica 1991), including the grayscale standard display function from the DICOM standard (DICOM PS 2004). However, none of these takes into account as broad a dynamic range and as diversified luminance conditions as required by Property 1.

Most traditional image or video formats use so-called gamma correction to convert luminance or RGB tristimulus values into integer numbers, which can later be encoded. Gamma correction is usually given in the form of a power function, intensity = signal^γ (or signal = intensity^(1/γ) for the inverse gamma correction), where the value of γ is typically around 2.2.

Gamma correction was originally intended to reduce camera noise and to control the current of the electron beam in CRT monitors. Further details on gamma correction can be found in (Poynton 2003). Coincidentally, light intensity values, after being converted into a signal using the inverse gamma correction formula, usually correspond well with our perception of lightness. Therefore such values are also well suited for image encoding, since the distortions caused by image compression are equally distributed across the whole scale of signal values. In other words, altering the signal by the same amount for both small and large signal values should result in the same magnitude of visible change. Unfortunately, this is only true for a limited range of luminance values, usually from 0.1 to 100 cd/m^2. This is because the response characteristic of the human visual system (HVS) to luminance [5] changes considerably above 100 cd/m^2. This is especially noticeable for HDR images, which can span the luminance range from 10^-5 to 10^10 cd/m^2. An ordinary gamma correction is not sufficient in such a case, and a more elaborate model of luminance perception is needed. This problem is solved by the JND encoding, described in this section.

Fig. 2: 28-bit per pixel JND encoding: 12-bit JND luma l, 8-bit u' and 8-bit v'.

JND encoding can be regarded as an extension of gamma correction to HDR pixel values. The name JND encoding is motivated by its design, which makes the encoded values correspond to the Just Noticeable Differences (JND) of luminance. JND encoding requires two bytes to represent color and 12 bits to encode luminance (see Figure 2). Chroma (hue and saturation) is represented using the u' and v' chromaticities, as recommended by the CIE 1976 Uniform Chromaticity Scales (UCS) diagram, and defined by the equations:

u' = 4X / (X + 15Y + 3Z)    (1)
v' = 9Y / (X + 15Y + 3Z)    (2)

[5] The HVS uses both types of photoreceptors, cones and rods, in the luminance range of approximately 0.1 to 100 cd/m^2. Above 100 cd/m^2 only cones contribute to the visual response.
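As a quick illustration of the chroma part of the encoding, the sketch below computes u' and v' from CIE XYZ exactly as in equations (1) and (2) and quantizes them to 8 bits as in Figure 2. The scale factor used for the 8-bit quantization (410, borrowed from the LogLuv encoding) is an assumption made for this illustration, not necessarily the factor used in the published encoding.

```python
import numpy as np

def xyz_to_chroma8(X, Y, Z):
    """Compute CIE 1976 u', v' chromaticities (equations 1 and 2) and
    quantize them to 8 bits as in Figure 2.

    The scale factor 410 is an assumption for illustration; it maps the
    valid u', v' range (roughly 0..0.62) onto 0..255.
    """
    denom = X + 15.0 * Y + 3.0 * Z
    u = 4.0 * X / denom          # equation (1)
    v = 9.0 * Y / denom          # equation (2)
    u8 = np.clip(np.round(410.0 * u), 0, 255).astype(np.uint8)
    v8 = np.clip(np.round(410.0 * v), 0, 255).astype(np.uint8)
    return u8, v8

# D65 white point (X, Y, Z) = (95.047, 100.0, 108.883) -> u' ~ 0.198, v' ~ 0.468
print(xyz_to_chroma8(np.array(95.047), np.array(100.0), np.array(108.883)))
```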

Luma, l, is found from absolute luminance values, y [cd/m^2], using the formula:

l_hdr(y) = a*y              if y < y_l
           b*y^c + d        if y_l <= y < y_h                (3)
           e*log(y) + f     if y >= y_h

There is also a formula for the inverse conversion, from 12-bit luma to luminance:

y(l_hdr) = a'*l_hdr             if l_hdr < l_l
           b'*(l_hdr + d')^c'   if l_l <= l_hdr < l_h        (4)
           e'*exp(f'*l_hdr)     if l_hdr >= l_h

The constants a-f, y_l and y_h, and the corresponding constants a'-f', l_l and l_h of the inverse formula, are fitted numerical values; the complete table is given in (Mantiuk, Myszkowski & Seidel 2006). The above formulas have been derived from psychophysical measurements of luminance detection thresholds [6]. To meet our initial requirements for the HDR color space, in particular Property 4, the derived formulas guarantee that the same difference of luma values l, regardless of whether it occurs in a bright or in a dark region, corresponds to the same visible difference. Neither luminance nor the logarithm of luminance has this property, since the response of the human visual system to luminance is complex and non-linear.

The values of l lie in the range from 0 to 4095 (a 12-bit integer) for the corresponding luminance values from 10^-5 to 10^10 cd/m^2, which is the range of luminance that the human eye can effectively see (although values above 10^6 cd/m^2 can be damaging to the eye and would mostly be useful for representing the luminance of bright light sources). The function l(y) (Equation 3) is plotted in Figure 3 and labelled "JND encoding". Note that both the formula and the shape of the JND encoding are very similar to the nonlinearity (gamma correction) used in the sRGB color space. Both the JND encoding and the sRGB nonlinearity follow a similar curve on the plot, but the JND encoding is more conservative (a steeper curve means that a luminance range is projected onto a larger number of discrete luma values, thus lowering quantization errors). However, the sRGB nonlinearity results in too steep a function for luminance above 100 cd/m^2, which would require too many bits to encode real-world luminance values.

The color space described in this section can be directly used with many existing image and video compression formats, such as JPEG-2000 and MPEG-4.

[6] The full derivation of this function can be found in (Mantiuk, Myszkowski & Seidel 2006). The formulas are derived from the threshold-versus-intensity characteristic measured for human subjects and fitted to an analytical model (CIE 1981).
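The piecewise mapping of equation (3) translates directly into code. Since the numerical constants are not reproduced above, the sketch below (our own illustration, not the published implementation) takes them as parameters; the rounding and clamping to the 12-bit range follow Properties 3 and 4.

```python
import numpy as np

def luma_hdr(y, a, b, c, d, e, f, y_l, y_h):
    """12-bit JND luma l_hdr(y) of equation (3).

    `y` is absolute luminance in cd/m^2; the constants a..f, y_l, y_h are
    the fitted values from the published table (not reproduced here) and
    must be supplied by the caller.
    """
    y = np.asarray(y, dtype=np.float64)
    l = np.where(y < y_l, a * y,
        np.where(y < y_h, b * y**c + d,
                 e * np.log(y) + f))
    # Properties 3 and 4: round to a positive 12-bit integer code value;
    # because a unit step is about 1 JND, the rounding error stays invisible.
    return np.clip(np.round(l), 0, 4095).astype(np.uint16)
```

The inverse mapping of equation (4) would be written analogously with the primed constants.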

Fig. 3: Functions mapping physical luminance y [cd/m^2] to encoded luma values l (pixel gray-value). "JND encoding": the perceptual encoding of luminance; "sRGB": the nonlinearity (gamma correction) used in the sRGB color space; "logarithmic compression": the logarithm of luminance, rescaled to the 12-bit integer range. Note that encoding high luminance values using the sRGB nonlinearity (dashed line) would require a significantly larger number of bits than the perceptual encoding.

Both these formats can encode luminance with 12 or more bits, which makes them fully capable of representing HDR pixel values. As a proof of concept we extended an MPEG-4 compression algorithm to use the proposed color space. The modified video encoder achieved good compression performance, offering the ability to store the full color gamut and the range of luminance that is visible to the human eye (Mantiuk, Krawczyk, Myszkowski & Seidel 2004), as demonstrated in Figure 4. Moreover, the advanced HDR video player, which we created for the purpose of playing back HDR movies, can play video and apply one of several available tone-mapping algorithms in real time (Krawczyk, Myszkowski & Seidel 2005). An additional advantage of HDR content is the possibility to simulate, on traditional displays, the perceptual effects that are normally only evoked when observing scenes of large contrast and luminance range. Examples of such effects are night vision and optically accurate motion blur, demonstrated in Figure 5. More examples can be found at the project page: mpi-inf.mpg.de/resources/hdrvideo/index.html.

The application of the proposed color space is not limited to image and video encoding. Since the color space is approximately perceptually uniform (Property 2), it can be used as a color difference metric for HDR images, similarly to how the CIE L*u*v* color space is commonly used for traditional images. The luminance coding can also approximate the photoreceptor response to light in computational models of the human visual system (Mantiuk, Myszkowski & Seidel 2006). Since the proposed color encoding minimizes the number of bits required to represent color and at the same time does not compromise visual quality, it can be an attractive method of encoding data transmitted digitally from the CPU to a graphics card or from the graphics card to a display device.

Fig. 4: Two screenshots from the advanced HDR video player, showing the extreme dynamic range captured within HDR video sequences. Blue frames represent virtual filters that adjust exposure in the selected regions.

Fig. 5: Screenshots demonstrating the simulation of perceptual and optical effects, possible only for HDR content. Left: simulation of night vision, resulting in limited color vision and a bluish cast of colors. Right: simulation of physically accurate motion blur (right side) compared with the motion blur computed from traditional video material (left side).

3.2 Backward-compatible HDR Video Compression

Since the traditional, low dynamic range (LDR) file formats for images and video, such as JPEG or MPEG, have become widely adopted standards, supported by almost all software and hardware equipment dealing with digital imaging, it cannot be expected that these formats will be immediately replaced with their HDR counterparts. To facilitate the transition from traditional to HDR imaging, there is a need for backward-compatible HDR formats that would be fully compatible with existing LDR formats and, at the same time, support an enhanced dynamic range and color gamut.

Fig. 6: The proposed backward-compatible HDR DVD movie processing pipeline. The high dynamic range content, provided by advanced cameras and CG rendering, is encoded in addition to the low dynamic range (LDR) content in the video stream. The files compressed with the proposed HDR MPEG method can play on traditional LDR and future-generation HDR displays.

Encoding movies in an HDR format is attractive for cinematography, especially since movies are already shot with high-end cameras, both analog and digital, that can capture a much higher dynamic range than typical MPEG compression can store. To encode cinema movies using traditional MPEG compression, the movie must undergo processing called color grading. Part of this process is the adjustment of tones (tone mapping) and colors (gamut mapping), so that they can be displayed on the majority of TV sets (refer to Figure 6). Although such processing can produce high-quality content for typical CRT and LCD displays, the high-quality information from which advanced HDR displays could benefit is lost. To address this problem, the proposed HDR-MPEG encoding can compress both LDR and HDR content into the same backward-compatible movie file (see Figure 6). Depending on the capabilities of the display and the playback hardware or software, either the LDR or the HDR content is displayed. This way HDR content can be added to the video stream at a moderate cost of about 30% of the LDR stream size. Because of such a small overhead, both standard-definition and high-definition (HD) movies can fit on their original storage medium when encoded with HDR information.

Fig. 7: The data flow of the backward-compatible HDR MPEG encoding.

The complete data flow of the proposed backward-compatible HDR video compression algorithm is shown in Figure 7. The encoder takes two sequences of HDR and LDR frames as input. The LDR frames, intended for LDR devices, usually contain a tone-mapped or gamut-mapped version of the original HDR sequence. The LDR frames are compressed using a standard MPEG encoder (MPEG encode in Figure 7) to produce a backward-compatible LDR stream. The LDR frames are then decoded to obtain a distorted (due to lossy compression) LDR sequence, which is later used as a reference for the HDR frames (see MPEG decode in Figure 7). Both the LDR and HDR frames are then converted to compatible color spaces, which minimize the differences between LDR and HDR colors. The reconstruction function (see Find reconstruction function in Figure 7) reduces the correlation between LDR and HDR pixels by giving the best prediction of the HDR pixels based on the values of the LDR pixels. The residual frame is introduced to store the difference between the original HDR values and the values predicted by the reconstruction function. To further improve compression, invisible luminance and chrominance variations are removed from the residual frame (see Filter invisible noise in Figure 7). Such filtering simulates the visual processing performed by the retina in order to estimate the contrast detection threshold at which the eye does not see any differences. The contrast magnitudes that are below this threshold are set to zero. Finally, the pixel values of the residual frame are quantized (see Quantize residual frame in Figure 7) and compressed using a standard MPEG encoder into a residual stream. Both the reconstruction function and the quantization factors are compressed using lossless arithmetic encoding and stored in an auxiliary stream.
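The following sketch summarizes this data flow in simplified form. It is an illustration written for this text, not the authors' implementation: the MPEG codec is replaced by an identity pass-through, the reconstruction function by a per-bin mean, and the invisible-noise filter by a plain threshold, just to show how the backward-compatible, residual and auxiliary streams relate.

```python
import numpy as np

def mpeg_encode_decode(frames):
    # Placeholder for a lossy MPEG encode/decode round trip.
    return frames

def find_reconstruction_function(ldr_luma, hdr_luma, bins=256):
    # Predict HDR luma from LDR luma: mean HDR value for each LDR code value.
    rf = np.zeros(bins, dtype=np.float32)
    for v in range(bins):
        mask = (ldr_luma == v)
        rf[v] = hdr_luma[mask].mean() if mask.any() else rf[v - 1]
    return rf

def encode_hdr_mpeg(ldr_frames, hdr_frames, threshold=1.0):
    """Structural sketch of the backward-compatible encoder of Figure 7.

    Frames are assumed to be single-channel luma arrays: LDR as 8-bit code
    values, HDR as floating-point (or 12-bit) luma in a compatible space.
    """
    ldr_decoded = mpeg_encode_decode(ldr_frames)      # backward-compatible stream
    residuals, aux = [], []
    for ldr, hdr in zip(ldr_decoded, hdr_frames):
        rf = find_reconstruction_function(ldr, hdr)   # stored in auxiliary stream
        residual = hdr - rf[ldr]                      # prediction error
        residual[np.abs(residual) < threshold] = 0    # "filter invisible noise"
        residuals.append(np.round(residual))          # quantize residual frame
        aux.append(rf)
    return ldr_frames, residuals, aux                 # LDR, residual, auxiliary streams
```

In the actual codec the residual frames are themselves MPEG-compressed and the reconstruction function and quantization factors are arithmetic-coded, as described above.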

This subsection is intended to give only an overview of the compression algorithm. Further details can be found in (Mantiuk, Efremov, Myszkowski & Seidel 2006a) or (Mantiuk, Efremov, Myszkowski & Seidel 2006b) and on the project web page: hdr/hdrmpeg/.

We implemented and tested dual video stream encoding for the purpose of backward-compatible HDR encoding; however, we believe that other applications that require encoding multiple streams can partly or fully benefit from the proposed method. For example, a movie could contain a separate video stream for color-blind people. Such a stream could be efficiently encoded because of its high correlation with the original color stream. Movie producers commonly target different audiences with different color appearance (for example, Kill Bill 2 was screened with a different color stylization in Japan). The proposed algorithm could easily be extended so that several color-stylized versions of a movie could be stored on a single DVD. This work is also a step towards an efficient encoding of multiple-viewpoint video, required for 3D video (Matusik & Pfister 2004).

4 Conclusions

In this paper we introduce the concept of HDR imaging, pointing out its advantages over traditional digital imaging. We describe our implementation of image processing software that operates on HDR images and offers the flexibility necessary for research purposes. We believe that the key issue that needs to be resolved to enable wide acceptance of HDRI is efficient image and video compression of HDR content. We address the compression issues by deriving a perceptually motivated HDR color space capable of encoding the entire dynamic range and color gamut visible to the human eye. We also propose two compression algorithms, one being a straightforward extension of the existing MPEG standard and the other offering backward compatibility with traditional video content and equipment. The proposed backward-compatible algorithm facilitates a smooth transition from traditional to high-fidelity HDR DVD content.

In our work we try to realize the concept of an imaging framework that is not restricted by any particular imaging technology and, if storage efficiency is required, is limited only by the capabilities of the human visual system. While traditional imaging is strongly dependent on a particular technology (e.g. the primaries of color spaces based on the red, green and blue phosphors of CRT displays), HDRI can offer a device-independent representation of images and video. However, redesigning existing imaging software and hardware to work with HDR content requires a lot of effort and the definition of new imaging standards. Our mission is to popularize the concept of HDR imaging, develop standard tools and algorithms for processing HDR content and research the aspects of human perception that have a key influence on digital imaging.

Acknowledgements

I would like to thank my advisors, Karol Myszkowski and Hans-Peter Seidel, for supporting my work on HDRI. Special thanks go to Grzegorz Krawczyk and Alexander Efremov for their work on the HDR video compression projects.

References

CIE (1981). An Analytical Model for Describing the Influence of Lighting Parameters Upon Visual Performance, Vol. 1: Technical Foundations, CIE 19/2.1, International Organization for Standardization.

DICOM PS (2004). Part 14: Grayscale standard display function, Digital Imaging and Communications in Medicine (DICOM), National Electrical Manufacturers Association.

Krawczyk, G., Myszkowski, K. & Seidel, H.-P. (2005). Perceptual effects in real-time tone mapping, SCCG '05: Proc. of the 21st Spring Conference on Computer Graphics.

Lubin, J. & Pica, A. (1991). A non-uniform quantizer matched to the human visual performance, Society of Information Display Int. Symposium Technical Digest of Papers (22).

Mantiuk, R., Efremov, A., Myszkowski, K. & Seidel, H.-P. (2006a). Backward compatible high dynamic range MPEG video compression, ACM Transactions on Graphics 25(3).

Mantiuk, R., Efremov, A., Myszkowski, K. & Seidel, H.-P. (2006b). Design and evaluation of backward compatible high dynamic range video compression, MPI Technical Report MPI-I, Max-Planck-Institut für Informatik.

Mantiuk, R., Krawczyk, G., Myszkowski, K. & Seidel, H.-P. (2004). Perception-motivated high dynamic range video encoding, ACM Transactions on Graphics 23(3).

Mantiuk, R., Myszkowski, K. & Seidel, H.-P. (2006). Lossy compression of high dynamic range images and video, Proc. of Human Vision and Electronic Imaging XI, Proceedings of SPIE, SPIE, San Jose, USA.

Matusik, W. & Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes, ACM Trans. on Graph. 23(3).

Poynton, C. (2003). Digital Video and HDTV: Algorithms and Interfaces, Morgan Kaufmann.

Seetzen, H., Heidrich, W., Stuerzlinger, W., Ward, G., Whitehead, L., Trentacoste, M., Ghosh, A. & Vorozcovs, A. (2004). High dynamic range display systems, ACM Trans. on Graph. 23(3).

Sezan, M., Yip, K. & Daly, S. (1987). Uniform perceptual quantization: Applications to digital radiography, IEEE Transactions on Systems, Man, and Cybernetics 17(4).


More information

Camera Image Processing Pipeline: Part II

Camera Image Processing Pipeline: Part II Lecture 14: Camera Image Processing Pipeline: Part II Visual Computing Systems Today Finish image processing pipeline Auto-focus / auto-exposure Camera processing elements Smart phone processing elements

More information

Colour. Electromagnetic Spectrum (1: visible is very small part 2: not all colours are present in the rainbow!) Colour Lecture!

Colour. Electromagnetic Spectrum (1: visible is very small part 2: not all colours are present in the rainbow!) Colour Lecture! Colour Lecture! ITNP80: Multimedia 1 Colour What is colour? Human-centric view of colour Computer-centric view of colour Colour models Monitor production of colour Accurate colour reproduction Richardson,

More information

Module 6 STILL IMAGE COMPRESSION STANDARDS

Module 6 STILL IMAGE COMPRESSION STANDARDS Module 6 STILL IMAGE COMPRESSION STANDARDS Lesson 16 Still Image Compression Standards: JBIG and JPEG Instructional Objectives At the end of this lesson, the students should be able to: 1. Explain the

More information

It should also be noted that with modern cameras users can choose for either

It should also be noted that with modern cameras users can choose for either White paper about color correction More drama Many application fields like digital printing industry or the human medicine require a natural display of colors. To illustrate the importance of color fidelity,

More information

University of British Columbia CPSC 414 Computer Graphics

University of British Columbia CPSC 414 Computer Graphics University of British Columbia CPSC 414 Computer Graphics Color 2 Week 10, Fri 7 Nov 2003 Tamara Munzner 1 Readings Chapter 1.4: color plus supplemental reading: A Survey of Color for Computer Graphics,

More information

Colour. Cunliffe & Elliott, Chapter 8 Chapman & Chapman, Digital Multimedia, Chapter 5. Autumn 2016 University of Stirling

Colour. Cunliffe & Elliott, Chapter 8 Chapman & Chapman, Digital Multimedia, Chapter 5. Autumn 2016 University of Stirling CSCU9N5: Multimedia and HCI 1 Colour What is colour? Human-centric view of colour Computer-centric view of colour Colour models Monitor production of colour Accurate colour reproduction Cunliffe & Elliott,

More information

Color and perception Christian Miller CS Fall 2011

Color and perception Christian Miller CS Fall 2011 Color and perception Christian Miller CS 354 - Fall 2011 A slight detour We ve spent the whole class talking about how to put images on the screen What happens when we look at those images? Are there any

More information

Introduction to Color Science (Cont)

Introduction to Color Science (Cont) Lecture 24: Introduction to Color Science (Cont) Computer Graphics and Imaging UC Berkeley Empirical Color Matching Experiment Additive Color Matching Experiment Show test light spectrum on left Mix primaries

More information

Image Processing by Bilateral Filtering Method

Image Processing by Bilateral Filtering Method ABHIYANTRIKI An International Journal of Engineering & Technology (A Peer Reviewed & Indexed Journal) Vol. 3, No. 4 (April, 2016) http://www.aijet.in/ eissn: 2394-627X Image Processing by Bilateral Image

More information

Acquisition Basics. How can we measure material properties? Goal of this Section. Special Purpose Tools. General Purpose Tools

Acquisition Basics. How can we measure material properties? Goal of this Section. Special Purpose Tools. General Purpose Tools Course 10 Realistic Materials in Computer Graphics Acquisition Basics MPI Informatik (moving to the University of Washington Goal of this Section practical, hands-on description of acquisition basics general

More information

The Effect of Opponent Noise on Image Quality

The Effect of Opponent Noise on Image Quality The Effect of Opponent Noise on Image Quality Garrett M. Johnson * and Mark D. Fairchild Munsell Color Science Laboratory, Rochester Institute of Technology Rochester, NY 14623 ABSTRACT A psychophysical

More information

Images and Graphics. 4. Images and Graphics - Copyright Denis Hamelin - Ryerson University

Images and Graphics. 4. Images and Graphics - Copyright Denis Hamelin - Ryerson University Images and Graphics Images and Graphics Graphics and images are non-textual information that can be displayed and printed. Graphics (vector graphics) are an assemblage of lines, curves or circles with

More information

APPLICATION OF HIGH DYNAMIC RANGE PHOTOGRAPHY TO BLOODSTAIN ENHANCEMENT PHOTOGRAPHY. By Danielle Jennifer Susanne Schulz

APPLICATION OF HIGH DYNAMIC RANGE PHOTOGRAPHY TO BLOODSTAIN ENHANCEMENT PHOTOGRAPHY. By Danielle Jennifer Susanne Schulz APPLICATION OF HIGH DYNAMIC RANGE PHOTOGRAPHY TO BLOODSTAIN ENHANCEMENT PHOTOGRAPHY By Danielle Jennifer Susanne Schulz Bachelor of Forensic and Investigative Science, May 2008, West Virginia University

More information

Lecture 1: image display and representation

Lecture 1: image display and representation Learning Objectives: General concepts of visual perception and continuous and discrete images Review concepts of sampling, convolution, spatial resolution, contrast resolution, and dynamic range through

More information

Image Perception & 2D Images

Image Perception & 2D Images Image Perception & 2D Images Vision is a matter of perception. Perception is a matter of vision. ES Overview Introduction to ES 2D Graphics in Entertainment Systems Sound, Speech & Music 3D Graphics in

More information

Figure 1 HDR image fusion example

Figure 1 HDR image fusion example TN-0903 Date: 10/06/09 Using image fusion to capture high-dynamic range (hdr) scenes High dynamic range (HDR) refers to the ability to distinguish details in scenes containing both very bright and relatively

More information

ABSTRACT 1. PURPOSE 2. METHODS

ABSTRACT 1. PURPOSE 2. METHODS Perceptual uniformity of commonly used color spaces Ali Avanaki a, Kathryn Espig a, Tom Kimpe b, Albert Xthona a, Cédric Marchessoux b, Johan Rostang b, Bastian Piepers b a Barco Healthcare, Beaverton,

More information

ICC Votable Proposal Submission Colorimetric Intent Image State Tag Proposal

ICC Votable Proposal Submission Colorimetric Intent Image State Tag Proposal ICC Votable Proposal Submission Colorimetric Intent Image State Tag Proposal Proposers: Jack Holm, Eric Walowit & Ann McCarthy Date: 16 June 2006 Proposal Version 1.2 1. Introduction: The ICC v4 specification

More information

Tonal quality and dynamic range in digital cameras

Tonal quality and dynamic range in digital cameras Tonal quality and dynamic range in digital cameras Dr. Manal Eissa Assistant professor, Photography, Cinema and TV dept., Faculty of Applied Arts, Helwan University, Egypt Abstract: The diversity of display

More information

High Dynamic Range Image Formats

High Dynamic Range Image Formats High Dynamic Range Image Formats Bernhard Holzer Matr.Nr. 0326825 Institute of Computer Graphics & Algorithms TU Vienna Abstract HDR-image formats are able to encode a much greater range of colors and

More information

Evaluation of High Dynamic Range Content Viewing Experience Using Eye-Tracking Data (Invited Paper)

Evaluation of High Dynamic Range Content Viewing Experience Using Eye-Tracking Data (Invited Paper) Evaluation of High Dynamic Range Content Viewing Experience Using Eye-Tracking Data (Invited Paper) Eleni Nasiopoulos 1, Yuanyuan Dong 2,3 and Alan Kingstone 1 1 Department of Psychology, University of

More information

Towards Real-time Hardware Gamma Correction for Dynamic Contrast Enhancement

Towards Real-time Hardware Gamma Correction for Dynamic Contrast Enhancement Towards Real-time Gamma Correction for Dynamic Contrast Enhancement Jesse Scott, Ph.D. Candidate Integrated Design Services, College of Engineering, Pennsylvania State University University Park, PA jus2@engr.psu.edu

More information

POST-PRODUCTION/IMAGE MANIPULATION

POST-PRODUCTION/IMAGE MANIPULATION 6 POST-PRODUCTION/IMAGE MANIPULATION IMAGE COMPRESSION/FILE FORMATS FOR POST-PRODUCTION Florian Kainz, Piotr Stanczyk This section focuses on how digital images are stored. It discusses the basics of still-image

More information

Image Registration for Multi-exposure High Dynamic Range Image Acquisition

Image Registration for Multi-exposure High Dynamic Range Image Acquisition Image Registration for Multi-exposure High Dynamic Range Image Acquisition Anna Tomaszewska Szczecin University of Technology atomaszewska@wi.ps.pl Radoslaw Mantiuk Szczecin University of Technology rmantiuk@wi.ps.pl

More information

Mahdi Amiri. March Sharif University of Technology

Mahdi Amiri. March Sharif University of Technology Course Presentation Multimedia Systems Color Space Mahdi Amiri March 2014 Sharif University of Technology The wavelength λ of a sinusoidal waveform traveling at constant speed ν is given by Physics of

More information

CS148: Introduction to Computer Graphics and Imaging. Displays. Topics. Spatial resolution Temporal resolution Tone mapping. Display technologies

CS148: Introduction to Computer Graphics and Imaging. Displays. Topics. Spatial resolution Temporal resolution Tone mapping. Display technologies CS148: Introduction to Computer Graphics and Imaging Displays Topics Spatial resolution Temporal resolution Tone mapping Display technologies Resolution World is continuous, digital media is discrete Three

More information

Appearance Match between Soft Copy and Hard Copy under Mixed Chromatic Adaptation

Appearance Match between Soft Copy and Hard Copy under Mixed Chromatic Adaptation Appearance Match between Soft Copy and Hard Copy under Mixed Chromatic Adaptation Naoya KATOH Research Center, Sony Corporation, Tokyo, Japan Abstract Human visual system is partially adapted to the CRT

More information

Image Processing. Michael Kazhdan ( /657) HB Ch FvDFH Ch. 13.1

Image Processing. Michael Kazhdan ( /657) HB Ch FvDFH Ch. 13.1 Image Processing Michael Kazhdan (600.457/657) HB Ch. 14.4 FvDFH Ch. 13.1 Outline Human Vision Image Representation Reducing Color Quantization Artifacts Basic Image Processing Human Vision Model of Human

More information

What is an image? Images and Displays. Representative display technologies. An image is:

What is an image? Images and Displays. Representative display technologies. An image is: What is an image? Images and Displays A photographic print A photographic negative? This projection screen Some numbers in RAM? CS465 Lecture 2 2005 Steve Marschner 1 2005 Steve Marschner 2 An image is:

More information

Image acquisition. In both cases, the digital sensing element is one of the following: Line array Area array. Single sensor

Image acquisition. In both cases, the digital sensing element is one of the following: Line array Area array. Single sensor Image acquisition Digital images are acquired by direct digital acquisition (digital still/video cameras), or scanning material acquired as analog signals (slides, photographs, etc.). In both cases, the

More information

High dynamic range imaging and tonemapping

High dynamic range imaging and tonemapping High dynamic range imaging and tonemapping http://graphics.cs.cmu.edu/courses/15-463 15-463, 15-663, 15-862 Computational Photography Fall 2017, Lecture 12 Course announcements Homework 3 is out. - Due

More information

High Dynamic Range Texture Compression

High Dynamic Range Texture Compression High Dynamic Range Texture Compression Kimmo Roimela Tomi Aarnio Joonas Ita ranta Nokia Research Center Figure 1: Encoding extreme colors. Left to right: original (48 bpp), our method (8 bpp), DXTC-encoded

More information