Analysis of the Interpolation Error Between Multiresolution Images


Brigham Young University
BYU ScholarsArchive, All Faculty Publications
1998-10-01

Analysis of the Interpolation Error Between Multiresolution Images
Bryan S. Morse, morse@byu.edu
Follow this and additional works at: https://scholarsarchive.byu.edu/facpub
Part of the Computer Sciences Commons

Original Publication Citation:
B. S. Morse, "Analysis of the interpolation error between multiresolution images," in IEEE International Conference on Image Processing (ICIP), pp. 213-216, October 1998.

BYU ScholarsArchive Citation:
Morse, Bryan S., "Analysis of the Interpolation Error Between Multiresolution Images" (1998). All Faculty Publications. 638. https://scholarsarchive.byu.edu/facpub/638

This Peer-Reviewed Article is brought to you for free and open access by BYU ScholarsArchive. It has been accepted for inclusion in All Faculty Publications by an authorized administrator of BYU ScholarsArchive. For more information, please contact scholarsarchive@byu.edu, ellen_amatangelo@byu.edu.

Analysis of the Interpolation Error Between Multiresolution Images

Bryan S. Morse
Department of Computer Science, Brigham Young University
3361 TMCB, Provo, UT 84602
morse@cs.byu.edu

Abstract

Many rendering or image-analysis systems require calculation of versions of an image at lesser resolutions than the original. Because the filtering required to perform such calculations accurately cannot typically be done in real time, many systems use interpolation between images at precalculated resolutions. This discrete sampling of the scale component of multiresolution image spaces is analogous to spatial sampling in discrete images. This paper quantifies and bounds the error that can be introduced during such interpolation as a function of the scale-space sampling rate used. A method is presented that uses the diffusion equation to relate spatial derivatives to scale derivatives and from there to an error bound.

1. Introduction

Many graphics and image-processing systems require that an initial high-resolution image be calculated (rendered) at some arbitrary lesser resolution. For example, when interactively viewing large, high-resolution images one often needs to view the entire image at some reduced resolution while still being able, when needed, to view parts of the image at higher resolution. In an interactive graphical environment where one can effectively move nearer to or farther from an image, movement away from the image corresponds to a decrease in resolution (and a corresponding increase in field of view), while movement toward the image corresponds to increasing resolution (and a correspondingly decreasing field of view). Alternatively, one may map an image onto the surface of an object that is visible at varying distances from the viewer (e.g., a receding surface where nearer parts of the object are visible at higher resolution and farther parts are visible at lower resolution).
To avoid aliasing artifacts, such multiresolution rendering obviously requires pre-filtering [1]. However, for many applications, on-the-fly filtering for arbitrary resolutions may not be feasible. The most commonly used approach to this problem is to precompute versions of the image at a large number of pre-defined resolutions and to interpolate between them when asked to render the image (or some portion of it) at some arbitrary resolution, thus trading off precomputation and storage for increased interactive performance. When used for graphical texture mapping, this technique is known as MIP mapping [2] and is almost universally available in current graphics hardware and software systems. Similar approaches are also used in multiscale analysis of images, in which a hierarchy of reduced-resolution versions of an image is generated [3, 4, 5, 6]. In many of these techniques, however, one may precompute versions of the image at specified resolutions but find that desired properties exhibit themselves between these sampled resolutions, thus introducing a scale-space sampling question not yet answered or even agreed upon in the image-processing community.

While interpolation between multiresolution images has the advantages of simplicity and speed, it does not always approximate well the actual change in the value of a pixel under continuous change in resolution. Hybrid methods using filtering of precomputed resolutions instead of interpolation have been proposed [7]; although this approach avoids the inaccuracy of interpolation and is much faster than directly filtering the original image, it still requires further filtering of one of the precomputed images at interactive speeds. While the limitations of interpolating between multiple resolutions (e.g., MIP mapping) are well known [7], little work has been published that quantifies or bounds the error in such methods. An example of such errors is illustrated in Figures 1 and 2.
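The precompute-and-interpolate scheme described above can be sketched in a few lines of numpy. This is a one-dimensional stand-in for MIP mapping, not the paper's implementation; all function and variable names here are illustrative, and the blur is a hand-rolled truncated Gaussian:

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    # Truncated, normalized Gaussian kernel applied by direct convolution.
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode='same')

def build_levels(signal, base_sigma=1.0, b=2.0, n_levels=5):
    # Precompute versions of the signal at exponentially sampled scales.
    sigmas = [base_sigma * b**i for i in range(n_levels)]
    return sigmas, [gaussian_blur_1d(signal, s) for s in sigmas]

def render_at_scale(sigmas, levels, sigma):
    # Linearly interpolate between the two precomputed levels bracketing sigma.
    i = np.searchsorted(sigmas, sigma) - 1
    i = max(0, min(i, len(sigmas) - 2))
    t = (sigma - sigmas[i]) / (sigmas[i + 1] - sigmas[i])
    return (1.0 - t) * levels[i] + t * levels[i + 1]

signal = np.zeros(256)
signal[128:] = 255.0                            # a step edge as test content
sigmas, levels = build_levels(signal)
approx = render_at_scale(sigmas, levels, 3.0)   # falls between sigma=2 and sigma=4
```

The lookup mirrors what texture hardware does along the level-of-detail axis: pick the two bracketing precomputed scales and blend linearly.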
Clearly, this error can also be reduced by more finely sampling the scales used to precompute multiresolution versions of the image. But this leads to an important, fundamental, and as yet unanswered question: what constitutes sufficient sampling of multiple resolutions when interpolating between multiresolution images? This paper presents a method for bounding the error in such multiresolution interpolation, thus allowing us either to estimate the resulting error or to find sampling rates that limit the error to a desired bound.

0-8186-8821-1/98 $10.00 © 1998 IEEE
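The claim that finer scale sampling shrinks the interpolation error is easy to check empirically: blur a signal at two precomputed scales, interpolate linearly for a scale in between, and compare against filtering the original directly. A minimal numpy-only sketch (one-dimensional, hand-rolled truncated-Gaussian blur; all names are ours, not the paper's):

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    # Truncated, normalized Gaussian kernel applied by direct convolution.
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode='same')

def interp_error(signal, sigma_lo, sigma_mid, sigma_hi):
    # Worst-case per-sample error of linear scale interpolation at sigma_mid,
    # compared against filtering the original signal at sigma_mid directly.
    lo = gaussian_blur_1d(signal, sigma_lo)
    hi = gaussian_blur_1d(signal, sigma_hi)
    t = (sigma_mid - sigma_lo) / (sigma_hi - sigma_lo)
    approx = (1.0 - t) * lo + t * hi
    exact = gaussian_blur_1d(signal, sigma_mid)
    return np.abs(approx - exact).max()

signal = np.zeros(256)
signal[128:] = 255.0                          # step edge: a hard case for interpolation

coarse = interp_error(signal, 2.0, 3.0, 4.0)  # b = 2.00 (successive doubling)
fine = interp_error(signal, 2.0, 2.25, 2.5)   # b = 1.25 (finer scale sampling)
print(f"b=2.00: {coarse:.2f} grey levels;  b=1.25: {fine:.2f} grey levels")
```

With a quadratic-looking dependence on the scale gap, shrinking the multiplier from 2 to 1.25 cuts the measured error by roughly an order of magnitude on this example.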

[Figure 1: five image panels (Original Image; Three-Quarter Resolution Image; Half-Resolution Image; Interpolated Image; Absolute Difference Image, range 0 to 81).]

Figure 1: Example of errors that occur when interpolating between multiresolution images. LEFT COLUMN: original image (top) and half-resolution version (bottom). RIGHT COLUMN: three-quarter resolution (top), interpolated approximation of three-quarter resolution (middle), and absolute difference (bottom). Notice the artificial contrast enhancement and sharpening introduced in the interpolated image and reflected in the difference image. The difference image is normalized for display and has a maximum value of 81 (nearly one-third of the range of the original image).

2. Scale Spaces

A useful tool for mathematically representing multiresolution spaces is the concept of a scale space (e.g., [4], [5], and many others): the set of all images of the same scene at varying resolutions.

If we assume that the multiresolution images are acquired (generated) from some base image using a scaled, weighted measurement aperture applied uniformly over the image, such a scale space may be written as the convolution of the basis image with scaled versions of the measurement aperture:

    L(x, σ) = L(x, 0) * G(x, σ)

where L(x, 0) denotes the underlying scene (original image or zero-scale basis for the space), * denotes convolution, and G(x, σ) denotes a measurement aperture with size σ. It can be shown that in order to avoid artifacts from spurious resolution (temporary increases in sharpness as resolution decreases), the unique selection of aperture weights is the Gaussian [8]. For this reason, scale spaces are most commonly generated using Gaussian blurring, where the blurring parameter σ is the scale of the image. Measurement scale (defined in this way) and resolution are inversely related.

An important property of Gaussian-generated scale spaces is that Gaussian blurring with scale σ is equivalent to running the diffusion equation for time t:

    ∂L/∂t = ∇²L                                         (1)

where t = σ²/2. This property is key as we attempt to determine the error in linear interpolation in the resolution (scale) dimension of these spaces.

[Figure 2: Errors that can occur in interpolation of one-dimensional Gaussian point-spread functions; (a) the actual and approximated p.s.f., (b) the difference between the two curves in (a). The interpolated function is sharper and higher-contrast than the correct function. Interpolation of two-dimensional point-spread functions behaves similarly.]

3. Error in Interpolation Between Resolutions

The approximation error E in linear interpolation of a function f between two known values separated by h is

    E ≤ (h²/8) |f″(x)|                                  (2)

where x is the intermediate point at which the magnitude of the second derivative of f is greatest [9]. Thus, if we can bound the second derivative of the multiresolution image with respect to scale, we can bound the error in such interpolation.

The key to bounding these derivatives with respect to scale is the diffusion equation. Using Eq. 1 and substituting t = σ²/2 and dt = σ dσ,

    ∂L/∂σ = σ ∇²L

Extending this to second-order derivatives,

    ∂²L/∂σ² = ∇²L + σ² ∇²∇²L

The implication of this is that if we can bound the fourth derivatives with respect to our spatial variables, we can likewise bound the second derivative with respect to scale (resolution). Substituting this into Eq. 2,

    E ≤ (h²/8) |∂²L/∂σ²|                                (3)

where x is now the intermediate point at which the magnitude of the second derivative with respect to scale (fourth derivative with respect to space) is greatest.

If we sample scale exponentially, as is usually the case in scale-space implementations [5] and multiresolution displays [2], the scale σᵢ at step i of the resolution is

    σᵢ = σ₀ bⁱ

for some exponential base b (the multiplying factor from scale to scale). The difference between one sampled scale σ and the next is thus h = σ(b − 1). Substituting this into Eq. 3 gives

    E ≤ (σ²(b − 1)²/8) |∂²L/∂σ²|

If we bound |∇²∇²L| by some value B (with spatial derivatives measured in units of the scale σ), this becomes

    E ≤ ((b − 1)²/8) B

where b − 1 is the percentage increase from one sampled scale to another and B is the estimated bound on the fourth-order spatial derivatives. There are thus two values that control the error in interpolation between multiple resolutions:

1. the percentage increase between scales (b − 1), and
2. the estimated bound on our fourth spatial derivatives, B ≥ |∇²∇²L|.

Example 1: For a discretely-sampled initial image (i.e., ignoring any derivatives higher than those captured by the image discretization), the derivative bound B is four times the number of image grey-levels N. If we sample resolution through successive doubling (b = 2), as is often used in multiresolution methods [2, 3], the error bound per pixel is thus

    E ≤ N/2

or as much as one-half of the image range.

Example 2: Suppose that instead of simply determining the error, we wish to determine a sampling factor that ensures a desired bound on the error. With N = 256 intensity levels, again using B = 4N, and desiring a maximum error of a single intensity level (E ≤ 1), solving for the sampling factor b gives

    b ≤ 1.088

This implies that to ensure an error bound of a single intensity level, one can reduce the height and width of the image by no more than 8.8% at a time, far less than successive halving of each dimension and much closer to the scale multipliers of roughly 1.1 reportedly used in recent multiscale research [6].
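The arithmetic in the two examples can be packed into a few lines, assuming the bound E ≤ (b − 1)²·B/8 that both of the paper's worked numbers satisfy (b = 2, B = 4N gives E ≤ N/2; E ≤ 1, B = 1024 gives b ≤ 1.088). Inverting it for the sampling factor gives b ≤ 1 + √(8E/B); the function and variable names below are ours, not the paper's:

```python
import math

def sampling_factor(max_error, derivative_bound):
    # Invert E <= (b - 1)^2 * B / 8  =>  b <= 1 + sqrt(8 * E / B).
    return 1.0 + math.sqrt(8.0 * max_error / derivative_bound)

N = 256        # number of grey levels
B = 4 * N      # paper's bound on the fourth spatial derivatives of a discrete image
b = sampling_factor(1.0, B)            # allow at most one grey level of error
print(f"scale multiplier b <= {b:.3f}")  # matches the paper's b <= 1.088

# Sanity check of Example 1: successive doubling (b = 2) permits error up to N/2.
E_doubling = (2 - 1)**2 * B / 8
assert E_doubling == N / 2
```

The same helper answers both questions posed in the text: fix b and read off the worst-case error, or fix the error budget and read off the densest sampling required.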
4. Conclusion

Using the diffusion equation as a way to tie second-order spatial derivatives to first-order scale derivatives in scale spaces, we have turned bounds on fourth-order spatial derivatives into a bound on the error in linear interpolation across resolutions. Although the potential for error in the interpolation of resolution has been appreciated for several years [7], the methods presented here provide a basis for quantitative analysis of this error. Similar techniques could also be used for higher-order interpolation functions.

References

[1] E. A. Feibush, M. Levoy, and R. L. Cook. Synthetic texturing using digital filters. In SIGGRAPH, 1980.
[2] L. Williams. Pyramidal parametrics. In SIGGRAPH, 1983.
[3] Peter J. Burt and Edward H. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31:532-540, 1983.
[4] Andrew P. Witkin. Scale space filtering. In Proc. International Joint Conference on Artificial Intelligence (Karlsruhe, West Germany), pages 1019-1023, 1983.
[5] Bart M. ter Haar Romeny and Luc Florack. A multiscale geometric model of human vision. In B. Hendee and P. N. T. Wells, editors, Perception of Visual Information. Springer-Verlag, Berlin, 1991.
[6] Stephen M. Pizer, Bryan S. Morse, David Eberly, and Daniel S. Fritsch. Zoom-invariant vision of figural shape: The mathematics of cores. CVIU, 1998.
[7] P. S. Heckbert. Filtering by repeated integration. In SIGGRAPH, 1986.
[8] J. Babaud, A. P. Witkin, M. Baudin, and R. O. Duda. Uniqueness of the Gaussian kernel for scale-space filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1):26-33, 1986.
[9] Stephen M. Pizer and Victor L. Wallace. To Compute Numerically. Little, Brown, and Company, 1983.