An Automatic Image Quality Assessment Technique Incorporating Higher Level Perceptual Factors

Wilfried Osberger and Neil Bergmann, Space Centre for Satellite Navigation, Queensland University of Technology, GPO Box 2434, Brisbane, 4001, Australia
Anthony Maeder, School of Engineering, University of Ballarat, Ballarat, 3353, Australia

Abstract

We present an objective image quality assessment technique based on the properties of the human visual system (HVS). It consists of two major components: an early vision model (multi-channel and designed specifically for complex natural images), and a visual attention model which indicates regions of interest in a scene through the use of Importance Maps. Visible errors are then weighted according to the perceptual importance of the region in which they occur. We show that this technique produces a high correlation with subjective test data (0.93), compared to only 0.65 for PSNR. The technique is particularly useful for images coded with spatially varying quality.

1 Introduction

The limited accuracy obtained by simple objective quality metrics such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) has led to the development of more advanced quality assessment techniques. Models based on the early stages of human vision [4, 12] (i.e. up to the primary visual cortex) have shown promise and are useful for determining whether compression errors are visible at each location in an image. However, many compression applications introduce suprathreshold errors and allow variable quality across different parts of the image. Vision models which treat visible errors equally, regardless of their location in the image, may not be powerful enough to accurately predict picture quality in such cases, because we are known to be more sensitive to errors in areas of the scene to which we are paying attention than to errors in peripheral areas [10, 11].
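For reference, the simple metrics discussed above are computed in a few lines; this is a minimal sketch (the function names are ours, not the paper's), assuming 8-bit greyscale images held as NumPy arrays:

```python
import numpy as np

def mse(original, distorted):
    """Mean squared error between two images of equal shape."""
    diff = original.astype(np.float64) - distorted.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB, for the given peak signal value."""
    e = mse(original, distorted)
    if e == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / e)
```

Both metrics treat every pixel error identically regardless of where it occurs, which is precisely the limitation the paper addresses.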
In this paper we revise an HVS-based quality metric for monochrome images that we have recently developed [16] and demonstrate its high correlation with subjective opinion by comparing its predictions with subjective Mean Opinion Score (MOS) data. The HVS model contains both an early vision stage (which detects whether an error is visible, assuming foveal viewing) and a model of human visual attention. The early vision model has been designed specifically to take into account the structure of complex natural images. The attention model takes into account several factors which are known to influence visual attention, to produce an Importance Map (IM) [13, 15]. This map is used to weight the influence of the visible errors produced by the early vision model, in order to obtain a more accurate indication of picture quality.

2 Important HVS Characteristics

Extensive physiological and psychophysical experimentation into the operation of the primate visual system has resulted in a good understanding of the HVS, in particular up to the primary visual cortex (area V1). Important features of our early vision which need to be considered by a vision model include:

- Sensitivity to contrast changes rather than luminance changes (approximated by Weber's law at high luminance, and the de Vries-Rose law at lower luminance levels).
- Frequency-dependent sensitivity to stimuli. This can be modeled by a Contrast Sensitivity Function (CSF), which estimates the visibility threshold for stimuli at different spatial frequencies. However, the shape and threshold of the CSF depend on the type of stimulus used; for naturalistic images, the CSF reduces significantly at low to mid frequencies [22].
- Masking, which refers to our reduced ability to detect a stimulus on a spatially or temporally complex background. Thus errors are less visible along strong edges, in textured areas, or immediately following a scene change. The amount of masking caused by a background depends not
only on the background's contrast, but also on the level of uncertainty created by the background and stimulus [21]. Areas of high uncertainty (e.g. complex areas of a scene, textures) induce higher masking than areas of the same contrast but lower uncertainty (e.g. edges, gratings).

2.1 Attention and Eye Movements

In order to deal efficiently with the tremendous amount of information with which it is presented, the HVS operates with variable resolution. High acuity is obtained only in a very small area called the fovea, which is around 2 degrees of visual angle in diameter; our visual field, however, spans a far greater angle. The fovea is repositioned by rapid, directed eye movements called saccades, which occur every 100-500 milliseconds. Visual attention processes control the location of future saccades by scanning peripheral regions in parallel, looking for important or uncertain areas which require foveal viewing; thus a strong relation exists between eye movements and attention. Studies of eye movement patterns while viewing natural image or video scenes have shown that subjects tend to fixate on similar locations in a scene [20, 23], provided they are not given different instructions prior to viewing. Fixations are generally not distributed evenly over the scene; instead, a few regions tend to receive a disproportionately large number of fixations, while other areas are not foveally viewed at all. Our perception of overall picture quality is heavily influenced by the quality in these areas of interest, while peripheral regions can undergo significant degradation without strongly affecting overall quality [10, 11]. However, when reducing peripheral resolution, care must be taken not to remove future visual attractors or create new ones [6]. A number of factors have been found to influence our eye movements and visual attention.
In general, objects which stand out from their surrounds with respect to a particular factor are likely to attract our attention. Some of the most important factors include:

- Contrast [8, 19]. Regions which have high contrast with their surrounds attract our attention and are likely to be of greater visual importance.
- Size [2, 8]. Larger regions are more likely to attract attention than smaller ones. However, a saturation point exists, after which importance due to size levels off.
- Shape [9, 19]. Long, thin regions have been found to be visual attractors.
- Colour [3, 14]. A strong influence occurs when the colour of a region is distinct from the background colour.
- Motion [14]. Our peripheral vision is highly tuned to detecting changes in motion, and our attention is involuntarily drawn to peripheral areas undergoing motion distinct from their surrounds.
- Location [7]. Eye-tracking experiments have shown that viewers' eyes are directed at the centre 25% of the screen for a majority of viewing material.
- Foreground / background [2, 3]. Viewers are more likely to be attracted to objects in the foreground than to those in the background.
- People [9, 19, 23]. Many studies have shown that we are drawn to focus on people in a scene, in particular their faces, eyes, mouths, and hands.
- Context [9, 23]. Viewers' eye movements can change dramatically depending on the instructions they are given prior to or during the observation of an image.

Figure 1: Block diagram of the image quality assessment technique (the original and distorted images feed a multi-channel early vision model producing a PDM; an importance calculation over 5 factors produces an IM; the PDM weighted by the IM gives the IPDM, which is summed to yield the IPQR).

3 Model Description

A block diagram of our quality assessment technique is shown in Figure 1. The model is based on the approach used in [16]. The multi-channel early vision model is described in detail in [17].
In brief, it consists of the following processes: a conversion from luminance to contrast using Peli's LBC algorithm [18]; application of a CSF using data obtained from naturalistic stimuli; a contrast masking function which raises the CSF threshold more quickly in textured (uncertain) areas than in flat or edge regions; visibility thresholding; and Minkowski summation. This process produces what we term a Perceptual Distortion Map (PDM). This map indicates the parts of the coded picture that contain visible distortions, assuming that each area is viewed foveally.
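The pipeline just described can be sketched as a sequence of array transforms. The skeleton below is purely illustrative scaffolding of ours, not the paper's implementation: it collapses the multi-channel model to a single channel, stands in for Peli's LBC algorithm with a global contrast conversion, and uses a crude activity measure in place of the real masking function:

```python
import numpy as np

def perceptual_distortion_map(original, coded, threshold=1.0):
    """Illustrative single-channel sketch of the PDM pipeline:
    contrast conversion, masking-raised thresholds, visibility thresholding.
    The actual model is multi-channel with a naturalistic-stimulus CSF."""
    # 1. Luminance -> contrast (placeholder for Peli's band-limited contrast)
    def to_contrast(img):
        mean_lum = np.mean(img) + 1e-6
        return (img - mean_lum) / mean_lum

    c_orig = to_contrast(original.astype(np.float64))
    c_coded = to_contrast(coded.astype(np.float64))
    error = np.abs(c_orig - c_coded)

    # 2. Contrast masking: raise the detection threshold in high-activity
    #    (uncertain) areas; local deviation is a crude activity measure
    activity = np.abs(c_orig - np.mean(c_orig))
    local_threshold = threshold * (1.0 + activity)

    # 3. Visibility thresholding: keep only suprathreshold error
    return np.where(error > local_threshold, error - local_threshold, 0.0)
```

The map is zero wherever the coded image is visually indistinguishable from the original (under the sketch's simplified thresholds) and positive where distortion should be visible.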
As discussed in Section 2.1, our perception of overall picture quality is strongly influenced by the quality of the most perceptually important parts of a scene. To give an indication of the location of regions of interest in a scene, we utilise Importance Maps (IMs) [13, 15]. These automatically generated maps identify visually salient regions based on factors known to influence visual attention. A complete description of how IMs are generated can be found in [15]. In brief, the original image is first segmented into homogeneous regions. The salience of each region is then determined with respect to each of 5 factors: contrast, size, shape, location, and foreground/background. The flexible structure of the algorithm allows additional factors, such as colour or a priori knowledge of scene content, to be incorporated easily. The factors are weighted equally and summed to produce an overall IM for the image. Each region is assigned an importance value in the range 0.0-1.0, with 1.0 representing highest importance.

The IM is used to weight the output of the PDM, such that errors in areas classified as being of lower importance have a lower influence than errors in areas of high perceptual importance. This is given by:

    IPDM(x, y) = PDM(x, y) * IM(x, y)^a    (1)

where IPDM(x, y) represents the IM-weighted PDM, and a is an importance scaling power. We have found a value of a = 1.0 to give good results. To produce a single number representing picture quality from the IPDM, Minkowski summation with an exponent of 2 is performed [5]. This summed error value E can be converted to a scale from 1-5, so that correlation with subjective Mean Opinion Score (MOS) data can be determined. This is done using:

    (I)PQR = 5 / (1 + p * E)    (2)

where (I)PQR represents an (IM-weighted) Perceptual Quality Rating and p is a scaling constant. As a result of subjective testing, we have found that a value of p = 0.8 gives good correlation with subjective data.
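Equations (1) and (2) combine into a short function. This is a minimal sketch (the argument names and the per-pixel normalisation inside the Minkowski summation are our choices), assuming the PDM and IM have already been computed as equal-sized arrays:

```python
import numpy as np

def ipqr(pdm, im, alpha=1.0, minkowski_exp=2.0, p=0.8):
    """IM-weighted Perceptual Quality Rating on a 1-5 style scale.
    pdm: perceptual distortion map; im: importance map with values in [0, 1]."""
    # Eq. (1): weight visible errors by the importance of their region
    ipdm = pdm * np.power(im, alpha)
    # Minkowski summation with exponent 2 (normalised per pixel here)
    E = np.power(np.mean(np.power(ipdm, minkowski_exp)), 1.0 / minkowski_exp)
    # Eq. (2): map the summed error onto a quality rating
    return 5.0 / (1.0 + p * E)
```

A distortion-free image (PDM of zeros) gives E = 0 and hence the maximum rating of 5; larger weighted errors drive the rating down.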
4 Results

An example of the outputs produced by this technique can be seen in Fig. 2 for the image lighthouse. The coded image (Fig. 2(c)) is actually a composite of two wavelet coded images: one at a high bitrate (1.02 bits/pixel) and one at a low bitrate (0.20 bits/pixel). Rectangular areas in the regions of interest in the scene (i.e. the lighthouse and surrounding buildings) have been cut from the high bitrate image and pasted onto the lower bitrate image. The result is an image with high quality in the regions of interest, and lower quality in the periphery. This type of image typically has a higher subjective quality than an image of the same bitrate which has been coded at a single quality level.

Table 1: Correlation with MOS achieved by PSNR, PQR and IPQR.

    Images Used             PSNR   PQR    IPQR
    All images              0.65   0.87   0.93
    JPEG & wavelet only     ...    ...    ...
    Variable quality only   ...    ...    ...

If the MSE (Fig. 2(d)) or PDM (Fig. 2(e)) are reduced to a single number representing image quality (PSNR and PQR respectively), they are not capable of predicting the increased subjective quality of this composite coded picture, since they fail to take into account the perceptual importance of the location of the distortion. However, when the IPDM is summed to produce an IPQR, the increase in subjective quality is predicted.

We have performed subjective testing in order to determine the correlation of our technique with subjective opinion. The subjective tests were performed in accordance with the CCIR Rec. 500 DSCQS [1] testing procedure. Eighteen subjects were asked to assess the quality of four different images (announcer, beach, lighthouse, and splash), which were coded using wavelet and JPEG coders at a variety of bitrates (0.15 to 1.6 bits/pixel). This produced a set of 32 compressed images. To provide a more challenging test, we included a further 12 images which were a composite of high bit rate (in an area we chose as the region of interest) and low bit rate (in all other areas) images.
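The composite test images described above are straightforward to construct; a minimal sketch (the rectangle convention and names are ours, not the paper's):

```python
import numpy as np

def composite_image(high_rate, low_rate, roi_rects):
    """Paste rectangular regions of interest from the high-bitrate coded
    image onto the low-bitrate coded image, as in the variable-quality tests.
    roi_rects: list of (top, left, height, width) rectangles."""
    out = low_rate.copy()
    for top, left, h, w in roi_rects:
        out[top:top + h, left:left + w] = high_rate[top:top + h, left:left + w]
    return out
```

The result spends most of its bits where viewers are expected to look, which is why such images score well subjectively at a given average bitrate.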
Plots of the predictions of our technique and of PSNR versus MOS are shown in Fig. 3. Although PSNR generally provides a reasonable correlation with subjective opinion for a particular image and coder, it is not robust across different images and coders, which results in a poor correlation across all tested scenes. However, the predictions of the IPQR metric in Fig. 3(b) show that this technique performs independently of scene, coder, and bitrate. Unlike PSNR, the IPQR gives good results on variable quality images, since it takes into account the location as well as the magnitude of the errors. The correlations with subjective MOS achieved by PSNR, PQR, and IPQR are shown in Table 1. The IPQR technique produced a significantly higher correlation (0.93) than PSNR (0.65), and also improved upon the PQR (0.87). As expected, IPQR was the
Figure 2: Quality assessment for the image lighthouse, wavelet coded at two levels for an average bitrate of 0.35 bits/pixel. (a) original image, (b) Importance Map, with lighter regions representing areas of higher importance, (c) coded image, (d) MSE, (e) PDM, and (f) IPDM.

Figure 3: Quality predictions compared to subjective MOS. (a) PSNR, (b) IPQR. Diamonds = announcer, squares = beach, crosses = lighthouse, circles = splash.
most successful technique when predicting the quality of the variable coded scenes. However, it also provided improved correlation over PQR for standard JPEG and wavelet coded scenes.

5 Discussion

This paper has presented a quality assessment technique which incorporates higher level perceptual factors into a visual model, and has demonstrated the improved quality prediction which can be achieved using this approach. A significant improvement over the commonly used PSNR was achieved. The IPQR metric is particularly useful for predicting quality in images which have spatial variations in picture quality. An extension of the metric to video quality assessment may therefore be useful for assessing the quality of object-based coding schemes such as MPEG-4.

There are several areas in which this technique can be improved or extended. We currently consider only monochrome images, so inclusion of colour in both the early vision model and the IM calculation is an obvious progression. Other visual attractors may also be included in the IM algorithm. For instance, if prior knowledge of the type of scene being viewed were available, it could be used to provide a better prediction of the location of important areas.

References

[1] Methodology for the subjective assessment of the quality of television pictures. ITU-R Recommendation 500-6.
[2] T. Boersema and H. J. G. Zwaga. Searching for routing signs in public buildings: the distracting effects of advertisements. In D. Brogan, editor, Visual Search, pages 151-157. Taylor and Francis.
[3] B. L. Cole and P. K. Hughes. Drivers don't search: they just notice. In D. Brogan, editor, Visual Search, pages 407-417. Taylor and Francis.
[4] S. Daly. The visible difference predictor: an algorithm for the assessment of image fidelity. In A. B. Watson, editor, Digital Images and Human Vision, pages 179-206. MIT Press, Cambridge, Massachusetts.
[5] H. de Ridder. Minkowski-metrics as a combination rule for digital image coding impairments.
In Proceedings of the SPIE - Human Vision, Visual Processing and Digital Display III, volume 1666, pages 16-26, San Jose, February.
[6] A. T. Duchowski and B. H. McCormick. Pre-attentive considerations for gaze-contingent image processing. In Proceedings of the SPIE - Human Vision, Visual Processing and Digital Display VI, volume 2411, pages 128-139, San Jose, February.
[7] G. S. Elias, G. W. Sherwin, and J. A. Wise. Eye movements while viewing NTSC format television. SMPTE Psychophysics Subcommittee white paper, March.
[8] J. M. Findlay. The visual stimulus for saccadic eye movements in human observers. Perception, 9:7-21, September.
[9] A. G. Gale. Human response to visual stimuli. In W. R. Hendee and P. N. T. Wells, editors, The Perception of Visual Information, pages 127-147. Springer-Verlag.
[10] G. A. Geri and Y. Y. Zeevi. Visual assessment of variable-resolution imagery. Journal of the Optical Society of America, 12(10):2367-2375, October.
[11] P. Kortum and W. Geisler. Implementation of a foveated image coding system for image bandwidth reduction. In SPIE - Human Vision and Electronic Imaging, volume 2657, pages 350-360, February.
[12] J. Lubin. A visual discrimination model for imaging system design and evaluation. In E. Peli, editor, Vision Models for Target Detection and Recognition, pages 245-283. World Scientific, New Jersey.
[13] A. Maeder, J. Diederich, and E. Niebur. Limiting human perception for image sequences. In SPIE - Human Vision and Electronic Imaging, volume 2657, pages 330-337, San Jose, February.
[14] E. Niebur and C. Koch. Computational architectures for attention. In R. Parasuraman, editor, The Attentive Brain. MIT Press, Cambridge, MA.
[15] W. Osberger and A. J. Maeder. Automatic identification of perceptually important regions in an image using a model of the human visual system. To appear in 14th International Conference on Pattern Recognition, August.
[16] W. Osberger, A. J. Maeder, and N. W. Bergmann.
A technique for image quality assessment based on a human visual system model. To appear in 9th European Signal Processing Conference (EUSIPCO-98), September.
[17] W. Osberger, A. J. Maeder, and D. McLean. A computational model of the human visual system for image quality assessment. In Proceedings of DICTA-97, pages 337-342, New Zealand, December.
[18] E. Peli. Contrast in complex images. JOSA A, 7(10):2032-2040, October.
[19] J. W. Senders. Distribution of visual attention in static and dynamic displays. In Proceedings of the SPIE - Human Vision and Electronic Imaging II, volume 3016, pages 186-194, February.
[20] L. B. Stelmach, W. J. Tam, and P. J. Hearty. Static and dynamic spatial resolution in image coding: an investigation of eye movements. In Proceedings of the SPIE, volume 1453, pages 147-152, San Jose.
[21] A. B. Watson, R. Borthwick, and M. Taylor. Image quality and entropy masking. In Proceedings of the SPIE - Human Vision and Electronic Imaging II, volume 3016, pages 2-12, February.
[22] M. A. Webster and E. Miyahara. Contrast adaptation and the spatial structure of natural images. JOSA A, 14(9):2355-2366, September.
[23] A. L. Yarbus. Eye Movements and Vision. Plenum Press, New York, 1967.
More informationHuman Vision. Human Vision - Perception
1 Human Vision SPATIAL ORIENTATION IN FLIGHT 2 Limitations of the Senses Visual Sense Nonvisual Senses SPATIAL ORIENTATION IN FLIGHT 3 Limitations of the Senses Visual Sense Nonvisual Senses Sluggish source
More informationABSTRACT. Keywords: Color image differences, image appearance, image quality, vision modeling 1. INTRODUCTION
Measuring Images: Differences, Quality, and Appearance Garrett M. Johnson * and Mark D. Fairchild Munsell Color Science Laboratory, Chester F. Carlson Center for Imaging Science, Rochester Institute of
More informationReview Paper on. Quantitative Image Quality Assessment Medical Ultrasound Images
Review Paper on Quantitative Image Quality Assessment Medical Ultrasound Images Kashyap Swathi Rangaraju, R V College of Engineering, Bangalore, Dr. Kishor Kumar, GE Healthcare, Bangalore C H Renumadhavi
More informationGaze Direction in Virtual Reality Using Illumination Modulation and Sound
Gaze Direction in Virtual Reality Using Illumination Modulation and Sound Eli Ben-Joseph and Eric Greenstein Stanford EE 267, Virtual Reality, Course Report, Instructors: Gordon Wetzstein and Robert Konrad
More informationInsights into High-level Visual Perception
Insights into High-level Visual Perception or Where You Look is What You Get Jeff B. Pelz Visual Perception Laboratory Carlson Center for Imaging Science Rochester Institute of Technology Students Roxanne
More informationSpatial Vision: Primary Visual Cortex (Chapter 3, part 1)
Spatial Vision: Primary Visual Cortex (Chapter 3, part 1) Lecture 6 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Princeton University, Spring 2019 1 remaining Chapter 2 stuff 2 Mach Band
More informationEnhanced image saliency model based on blur identification
Enhanced image saliency model based on blur identification R.A. Khan, H. Konik, É. Dinet Laboratoire Hubert Curien UMR CNRS 5516, University Jean Monnet, Saint-Étienne, France. Email: Hubert.Konik@univ-st-etienne.fr
More informationTRAFFIC SIGN DETECTION AND IDENTIFICATION.
TRAFFIC SIGN DETECTION AND IDENTIFICATION Vaughan W. Inman 1 & Brian H. Philips 2 1 SAIC, McLean, Virginia, USA 2 Federal Highway Administration, McLean, Virginia, USA Email: vaughan.inman.ctr@dot.gov
More informationPart I Introduction to the Human Visual System (HVS)
Contents List of Figures..................................................... List of Tables...................................................... List of Listings.....................................................
More informationGAZE contingent display techniques attempt
EE367, WINTER 2017 1 Gaze Contingent Foveated Rendering Sanyam Mehra, Varsha Sankar {sanyam, svarsha}@stanford.edu Abstract The aim of this paper is to present experimental results for gaze contingent
More informationUncorrelated Noise. Linear Transfer Function. Compression and Decompression
Final Report on Evaluation of Synthetic Aperture Radar (SAR) Image Compression Techniques Guner Arslan and Magesh Valliappan EE381K Multidimensional Signal Processing Prof. Brian L. Evans December 6, 1998
More informationPsychophysics of night vision device halo
University of Wollongong Research Online Faculty of Health and Behavioural Sciences - Papers (Archive) Faculty of Science, Medicine and Health 2009 Psychophysics of night vision device halo Robert S Allison
More informationHuman Vision and Human-Computer Interaction. Much content from Jeff Johnson, UI Wizards, Inc.
Human Vision and Human-Computer Interaction Much content from Jeff Johnson, UI Wizards, Inc. are these guidelines grounded in perceptual psychology and how can we apply them intelligently? Mach bands:
More informationDecoding Natural Signals from the Peripheral Retina
Decoding Natural Signals from the Peripheral Retina Brian C. McCann, Mary M. Hayhoe & Wilson S. Geisler Center for Perceptual Systems and Department of Psychology University of Texas at Austin, Austin
More informationArtefact Characterisation for JPEG and JPEG 2000 Image Codecs: Edge Blur and Ringing
I'.NCINEER- Vol. XXXX, No. 3, pp. 25-3, 27
More informationDecoding natural signals from the peripheral retina
Journal of Vision (2011) 11(10):19, 1 11 http://www.journalofvision.org/content/11/10/19 1 Decoding natural signals from the peripheral retina Brian C. McCann Mary M. Hayhoe Wilson S. Geisler Center for
More informationLinear Gaussian Method to Detect Blurry Digital Images using SIFT
IJCAES ISSN: 2231-4946 Volume III, Special Issue, November 2013 International Journal of Computer Applications in Engineering Sciences Special Issue on Emerging Research Areas in Computing(ERAC) www.caesjournals.org
More informationTransport System. Telematics. Nonlinear background estimation methods for video vehicle tracking systems
Archives of Volume 4 Transport System Issue 4 Telematics November 2011 Nonlinear background estimation methods for video vehicle tracking systems K. OKARMA a, P. MAZUREK a a Faculty of Motor Transport,
More informationRecommendation ITU-R BT.1866 (03/2010)
Recommendation ITU-R BT.1866 (03/2010) Objective perceptual video quality measurement techniques for broadcasting applications using low definition television in the presence of a full reference signal
More informationISO/IEC JTC 1/SC 29 N 16019
ISO/IEC JTC 1/SC 29 N 16019 ISO/IEC JTC 1/SC 29 Coding of audio, picture, multimedia and hypermedia information Secretariat: JISC (Japan) Document type: Title: Status: Text for PDAM ballot or comment Text
More informationHOW CLOSE IS CLOSE ENOUGH? SPECIFYING COLOUR TOLERANCES FOR HDR AND WCG DISPLAYS
HOW CLOSE IS CLOSE ENOUGH? SPECIFYING COLOUR TOLERANCES FOR HDR AND WCG DISPLAYS Jaclyn A. Pytlarz, Elizabeth G. Pieri Dolby Laboratories Inc., USA ABSTRACT With a new high-dynamic-range (HDR) and wide-colour-gamut
More informationLecture 8. Human Information Processing (1) CENG 412-Human Factors in Engineering May
Lecture 8. Human Information Processing (1) CENG 412-Human Factors in Engineering May 30 2009 1 Outline Visual Sensory systems Reading Wickens pp. 61-91 2 Today s story: Textbook page 61. List the vision-related
More informationChapter 2: The Beginnings of Perception
Chapter 2: The Beginnings of Perception We ll see the first three steps of the perceptual process for vision https:// 49.media.tumblr.co m/ 87423d97f3fbba8fa4 91f2f1bfbb6893/ tumblr_o1jdiqp4tc1 qabbyto1_500.gif
More informationECE/OPTI533 Digital Image Processing class notes 288 Dr. Robert A. Schowengerdt 2003
Motivation Large amount of data in images Color video: 200Mb/sec Landsat TM multispectral satellite image: 200MB High potential for compression Redundancy (aka correlation) in images spatial, temporal,
More informationIOC, Vector sum, and squaring: three different motion effects or one?
Vision Research 41 (2001) 965 972 www.elsevier.com/locate/visres IOC, Vector sum, and squaring: three different motion effects or one? L. Bowns * School of Psychology, Uni ersity of Nottingham, Uni ersity
More informationDESIGNING AND CONDUCTING USER STUDIES
DESIGNING AND CONDUCTING USER STUDIES MODULE 4: When and how to apply Eye Tracking Kristien Ooms Kristien.ooms@UGent.be EYE TRACKING APPLICATION DOMAINS Usability research Software, websites, etc. Virtual
More informationA New Metric for Color Halftone Visibility
A New Metric for Color Halftone Visibility Qing Yu and Kevin J. Parker, Robert Buckley* and Victor Klassen* Dept. of Electrical Engineering, University of Rochester, Rochester, NY *Corporate Research &
More informationImage Distortion Maps 1
Image Distortion Maps Xuemei Zhang, Erick Setiawan, Brian Wandell Image Systems Engineering Program Jordan Hall, Bldg. 42 Stanford University, Stanford, CA 9435 Abstract Subjects examined image pairs consisting
More informationCompression of High Dynamic Range Video Using the HEVC and H.264/AVC Standards
Compression of Dynamic Range Video Using the HEVC and H.264/AVC Standards (Invited Paper) Amin Banitalebi-Dehkordi 1,2, Maryam Azimi 1,2, Mahsa T. Pourazad 2,3, and Panos Nasiopoulos 1,2 1 Department of
More informationChapter IV THEORY OF CELP CODING
Chapter IV THEORY OF CELP CODING CHAPTER IV THEORY OF CELP CODING 4.1 Introduction Wavefonn coders fail to produce high quality speech at bit rate lower than 16 kbps. Source coders, such as LPC vocoders,
More informationGraphics and Perception. Carol O Sullivan
Graphics and Perception Carol O Sullivan Carol.OSullivan@cs.tcd.ie Trinity College Dublin Outline Some basics Why perception is important For Modelling For Rendering For Animation Future research - multisensory
More informationRASTA-PLP SPEECH ANALYSIS. Aruna Bayya. Phil Kohn y TR December 1991
RASTA-PLP SPEECH ANALYSIS Hynek Hermansky Nelson Morgan y Aruna Bayya Phil Kohn y TR-91-069 December 1991 Abstract Most speech parameter estimation techniques are easily inuenced by the frequency response
More informationVisually Lossless Coding in HEVC: A High Bit Depth and 4:4:4 Capable JND-Based Perceptual Quantisation Technique for HEVC
Visually Lossless Coding in HEVC: A High Bit Depth and 4:4:4 Capable JND-Based Perceptual Quantisation Technique for HEVC Lee Prangnell Department of Computer Science, University of Warwick, England, UK
More informationGrayscale and Resolution Tradeoffs in Photographic Image Quality. Joyce E. Farrell Hewlett Packard Laboratories, Palo Alto, CA
Grayscale and Resolution Tradeoffs in Photographic Image Quality Joyce E. Farrell Hewlett Packard Laboratories, Palo Alto, CA 94304 Abstract This paper summarizes the results of a visual psychophysical
More informationVision and Color. Reading. Optics, cont d. Lenses. d d f. Brian Curless CSE 557 Autumn Good resources:
Reading Good resources: Vision and Color Brian Curless CSE 557 Autumn 2015 Glassner, Principles of Digital Image Synthesis, pp. 5-32. Palmer, Vision Science: Photons to Phenomenology. Wandell. Foundations
More informationVision and Color. Brian Curless CSE 557 Autumn 2015
Vision and Color Brian Curless CSE 557 Autumn 2015 1 Reading Good resources: Glassner, Principles of Digital Image Synthesis, pp. 5-32. Palmer, Vision Science: Photons to Phenomenology. Wandell. Foundations
More informationThe Perceived Image Quality of Reduced Color Depth Images
The Perceived Image Quality of Reduced Color Depth Images Cathleen M. Daniels and Douglas W. Christoffel Imaging Research and Advanced Development Eastman Kodak Company, Rochester, New York Abstract A
More informationVisual Perception. human perception display devices. CS Visual Perception
Visual Perception human perception display devices 1 Reference Chapters 4, 5 Designing with the Mind in Mind by Jeff Johnson 2 Visual Perception Most user interfaces are visual in nature. So, it is important
More informationModulating motion-induced blindness with depth ordering and surface completion
Vision Research 42 (2002) 2731 2735 www.elsevier.com/locate/visres Modulating motion-induced blindness with depth ordering and surface completion Erich W. Graf *, Wendy J. Adams, Martin Lages Department
More informationLimitations of the Oriented Difference of Gaussian Filter in Special Cases of Brightness Perception Illusions
Short Report Limitations of the Oriented Difference of Gaussian Filter in Special Cases of Brightness Perception Illusions Perception 2016, Vol. 45(3) 328 336! The Author(s) 2015 Reprints and permissions:
More informationEvaluating and Improving Image Quality of Tiled Displays
Evaluating and Improving Image Quality of Tiled Displays by Steven McFadden A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy
More informationCOLOR IMAGE QUALITY EVALUATION USING GRAYSCALE METRICS IN CIELAB COLOR SPACE
COLOR IMAGE QUALITY EVALUATION USING GRAYSCALE METRICS IN CIELAB COLOR SPACE Renata Caminha C. Souza, Lisandro Lovisolo recaminha@gmail.com, lisandro@uerj.br PROSAICO (Processamento de Sinais, Aplicações
More informationDEVELOPMENT OF LOSSY COMMPRESSION TECHNIQUE FOR IMAGE
DEVELOPMENT OF LOSSY COMMPRESSION TECHNIQUE FOR IMAGE Asst.Prof.Deepti Mahadeshwar,*Prof. V.M.Misra Department of Instrumentation Engineering, Vidyavardhini s College of Engg. And Tech., Vasai Road, *Prof
More informationComparing Computer-predicted Fixations to Human Gaze
Comparing Computer-predicted Fixations to Human Gaze Yanxiang Wu School of Computing Clemson University yanxiaw@clemson.edu Andrew T Duchowski School of Computing Clemson University andrewd@cs.clemson.edu
More informationPerceptual Blur and Ringing Metrics: Application to JPEG2000
Perceptual Blur and Ringing Metrics: Application to JPEG2000 Pina Marziliano, 1 Frederic Dufaux, 2 Stefan Winkler, 3, Touradj Ebrahimi 2 Genista Corp., 4-23-8 Ebisu, Shibuya-ku, Tokyo 150-0013, Japan Abstract
More informationAnalysis of Gaze on Optical Illusions
Analysis of Gaze on Optical Illusions Thomas Rapp School of Computing Clemson University Clemson, South Carolina 29634 tsrapp@g.clemson.edu Abstract A comparison of human gaze patterns on illusions before
More informationChapter 3: Psychophysical studies of visual object recognition
BEWARE: These are preliminary notes. In the future, they will become part of a textbook on Visual Object Recognition. Chapter 3: Psychophysical studies of visual object recognition We want to understand
More information