Camera Resolution and Distortion: Advanced Edge Fitting


© 2018, Society for Imaging Science and Technology

Camera Resolution and Distortion: Advanced Edge Fitting
Peter D. Burns, Burns Digital Imaging, and Don Williams, Image Science Associates

Abstract

A frequently used method for camera imaging performance evaluation is based on the ISO standard for resolution and spatial frequency response (SFR). This standard, ISO 12233, defines a method based on a straight edge element in a test chart. While the method works as intended, results can be influenced by lens distortion, due to curvature in the captured edge feature. We interpret this as the introduction of a bias (error) into the measurement, and describe a method to reduce or eliminate its effect. We use a polynomial edge-fitting method, currently being considered for a revised ISO 12233. Evaluation of image distortion is addressed in two more recent standards, ISO 17850 and ISO 19084. Applying these methods alongside the SFR analysis complements the edge-based measurement discussed here.

Introduction

Edge-gradient analysis is a well-established method for evaluating the capture of image detail by an imaging system. Originally developed for optical and photographic systems, it was adapted for the evaluation of digital cameras and scanners, when it was applied to slanted, or rotated, image features. The basic steps are shown in Fig. 1.

Figure 1: Edge-gradient analysis steps: acquire edge profile, compute derivative, discrete Fourier transform.

Edge-SFR Measurement

For the ISO 12233 [1, 2] method there are three basic operations: acquiring an edge profile from the (image) data; computing the derivative in the direction across the edge; and computing the discrete Fourier transform of this derivative array. If we interpret the slanted-edge Spatial Frequency Response (SFR) measurement as an estimation problem, several sources of error can be seen as introducing bias and/or variation into the estimated SFR.
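The three operations above can be sketched in a few lines. This is a minimal illustration only, not the full ISO 12233 algorithm (which forms a 4x-supersampled profile by projecting the data along the fitted edge); the windowing step and the ideal-step example are assumptions for the sketch.

```python
import numpy as np

def simple_sfr(esf):
    """Minimal edge-gradient analysis: edge-spread function (ESF) ->
    derivative (line-spread function, LSF) -> DFT magnitude, normalized
    so that SFR(0) = 1. Returns (frequencies in cycles/pixel, SFR)."""
    lsf = np.diff(esf)                        # derivative across the edge
    lsf = lsf * np.hanning(lsf.size)          # window to reduce truncation error
    spec = np.abs(np.fft.rfft(lsf))
    sfr = spec / spec[0]                      # normalize to unity at DC
    freqs = np.fft.rfftfreq(lsf.size, d=1.0)  # sampling interval of 1 pixel
    return freqs, sfr

# Example: a perfectly sharp (one-sample) step edge has a flat SFR,
# since its discrete LSF is a single impulse.
esf = np.r_[np.zeros(16), np.ones(16)]
freqs, sfr = simple_sfr(esf)
```

The normalization to SFR(0) = 1 mirrors the convention used for the measured curves in the figures below.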
For example, standard software programs do not require precise alignment of the edge feature in the scene with the image sampling array; the edge location is computed (estimated) from the data. An error introduced into the computed slope propagates as a bias error in the resulting SFR or MTF measurement. Most measurement error analysis focuses on variation. In this paper, we address a source of systematic error, and how to reduce it.

The estimation of the direction (slope) of the edge has a direct effect on the computed SFR. This has been modeled in much the same way as microdensitometer aperture misalignment [3]. In the slanted-edge analysis, the processing of the image data by projection along the edge can be approximated by the synthesis of a slit of length m pixels. The effective MTF due to the slope error is [3]

T(u) = sin(π m s δ u) / (π m s δ u),   (1)

where δ is the original data sampling interval, s the slope misalignment error, and u the spatial frequency. Estimation of the edge slope can be influenced by image noise, which is an example of a source of variation leading to a bias in the SFR measurement. The edge slope is computed from the set of line-by-line edge positions, which are computed from the first-derivative vectors of each image line in the region of interest (ROI).

Image Distortion Interaction

With the adoption of several standard imaging performance measures, it is tempting to think of each as distinct. While many are aimed at measuring different imaging characteristics, it is instructive to see how one characteristic can influence the measurement of a quite different attribute. This is the case with distortion and image resolution. The edge-SFR method as widely practiced is based on a straight image feature. However, camera lens distortion will usually bend the edge, so that it is curved when presented for analysis. When the normal SFR analysis is performed on such an edge, an error is introduced.
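Equation (1) is a sinc function and can be evaluated directly; the sketch below (with assumed parameter values for illustration) shows how even a small slope error attenuates the SFR at higher spatial frequencies.

```python
import numpy as np

def slope_error_mtf(u, m, s, delta=1.0):
    """Effective MTF of Eq. (1) for a synthetic slit formed by projecting
    m pixels along an edge whose slope is misestimated by s, with data
    sampling interval delta. np.sinc(x) = sin(pi*x)/(pi*x), so
    T(u) = sinc(m * s * delta * u)."""
    return np.sinc(m * s * delta * u)

# Illustration (assumed values): a 64-line projection with a 1% slope
# error leaves T(0) = 1 but reduces response near the half-sampling rate.
u = np.array([0.0, 0.25, 0.5])     # cycles/pixel
t = slope_error_mtf(u, m=64, s=0.01)
```

Because the attenuation scales with the product m·s·δ·u, longer projections (larger m) make the measurement more sensitive to a given slope error.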
The result is influenced by the curvature of the edge, because the computed edge response is no longer an image profile normal to the edge. So while the SFR measurement is evaluating the system output as presented, the measurement of one attribute is modifying the evaluation of another (the SFR). The influence of distorted edge features on SFR evaluation was discussed in Ref. [2], where analysis of residual edge-fitting errors was also suggested as part of automatic detection of the condition. Baer [4] addressed the SFR evaluation of cameras by introducing circular test edges; for many cameras, a radial variation in image blur can be accommodated by evaluating a centered circular edge. More recently, Cardei et al. [5] also used circular edge features, with polynomial fitting to sections (arcs), for their SFR analysis. They suggest an iterative method based on fitting increasing-order models to the edge shape. The chosen SFR is based on the highest-order polynomial that is consistent with an estimate of over-fitting, based on the set of residual values.

In many cases the camera image-processing path will include distortion correction of the image, and it is possible to perform the SFR analysis after this has been done. However, there can still be residual curvature of the edge feature. In addition, the step of lens correction requires an interpolation and resampling of the image data. Since this is spatial processing, it will also influence the measured SFR. For system analysis of a camera whose path includes this lens correction, SFR evaluation after this resampling is appropriate, since this describes the delivered image. For subsystem evaluation, where the influences of distortion and, e.g., focus or motion blur need to be separated, this will not give the intended measure.

Proc. IS&T Electronic Imaging Symposium 2018, Image Quality and System Performance XV
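An iterative increasing-order fit of the kind described above can be sketched as follows. The stopping rule (stop when the RMS residual no longer improves by an assumed factor) is a rough stand-in for illustration, not the specific over-fitting criterion of Ref. [5].

```python
import numpy as np

def select_edge_fit_order(rows, centers, max_order=5, min_gain=1.5):
    """Fit polynomials of increasing order to per-line edge centers and
    return the highest order that still meaningfully reduces the RMS
    residual. min_gain is an assumed improvement threshold."""
    def rms(order):
        coeffs = np.polyfit(rows, centers, order)
        return float(np.sqrt(np.mean((centers - np.polyval(coeffs, rows)) ** 2)))
    order, best = 1, rms(1)
    for k in range(2, max_order + 1):
        r = rms(k)
        if best / max(r, 1e-15) < min_gain:   # no meaningful improvement: stop
            break
        order, best = k, r
    return order

# Simulated edge traces (assumed values): one straight, one curved by
# distortion; both with small centering noise.
rng = np.random.default_rng(1)
rows = np.arange(50.0)
noise = rng.normal(scale=0.02, size=rows.size)
straight = 3.0 + 0.5 * rows + noise
curved = 10.0 + 0.2 * rows + 1e-3 * rows**2 + noise
```

For the straight trace the search stops at first order; for the curved trace the quadratic term justifies a higher-order model.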

Advanced Edge-fitting

For situations where there is residual curvature in the edge feature, or lens correction is not applied, we can modify the edge-estimation step of the SFR method. When the set of edge locations (computed from the line-derivative data) is used to find the edge, we can adopt a polynomial function rather than the standard line (first-order fit). The second step is to use this fitted function when forming the super-sampled edge profile vector.

When investigating the effectiveness of this approach, it was useful to have a reference, noise-free image file with a known edge profile. This was done using the error function, or integrated Gaussian,

erf(x) = (2/√π) ∫₀ˣ e^(−t²) dt.   (2)

An integrated Gaussian edge centered at x = μ can be written as

e(x; μ, σ) = 1/2 + (1/2) erf((x − μ)/(σ√2)).   (3)

The corresponding line-spread function and resulting SFR will have a Gaussian form. Figure 2 shows the function plotted for μ = 0, σ = 1 pixel; this represents the x-axis profile for a vertical edge feature, with the x-axis plotted in units of pixels. To generate a slanted-edge image array, each row in the array is offset to achieve the desired edge angle. In Eq. 3 the width parameter, σ, can be used to adjust the (spatial-frequency) bandwidth of the SFR image array.

Figure 2: Computed edge function used in the computed reference image.

Figure 3 shows a computed image* with such an edge feature at a 5° angle from vertical. Also shown, superimposed, are the fitted edge (line) and the set of edge-location data. The SFR computed from the image array of Fig. 3 is also shown, with the expected form.

Figure 3: Computed Gaussian edge image with detected edge location, and resulting SFR.

Figure 4 shows a computed, distorted edge, and the result of a polynomial edge-finding method. The SFR computed from the uncorrected edge profile, i.e., by the standard method, is shown in Fig. 5. The distorted edge has caused a widened edge- and line-spread function to be computed; the much lower SFR (Fig. 5) is the result. In other words, the distorted edge has introduced a negative bias into the SFR measurement. However, when a polynomial edge-fitting step is employed, the SFR results show little if any influence of the distortion. Figure 6 shows the results from the corrected edge profile. In this noise-free case, the corrected results are almost identical to those for the undistorted image.

Residual Error Analysis

Up to now we have discussed the use of ideal, Gaussian edges. We now introduce a more realistic element into our SFR measurement: image noise. For our brief investigation here, we simply add a random noise array to the previously computed edge image arrays. The pixel-to-pixel variations are independent, and at a level consistent with well-exposed digital images: a Normal random variable (standard deviation = .3 for a [0-255] signal encoding) was added. Note that we expect this to introduce variation into the line-to-line edge finding and the subsequent projection of the image data when forming the edge profile.

* All computed images were saved as monochrome, 8-bit, uncompressed TIFF files.
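A synthetic slanted Gaussian edge of the kind defined by Eq. 3 can be generated as below, using the standard-library `math.erf`. The image size, angle, and width values are assumptions for the sketch.

```python
import math

def erf_edge(x, mu=0.0, sigma=1.0):
    """Integrated Gaussian edge profile, Eq. 3:
    e(x; mu, sigma) = 1/2 + 1/2 * erf((x - mu) / (sigma * sqrt(2)))."""
    return 0.5 + 0.5 * math.erf((x - mu) / (sigma * math.sqrt(2.0)))

def slanted_edge_image(width=64, height=64, angle_deg=5.0, sigma=1.0):
    """Each row is the same erf profile, offset by tan(angle) per row,
    giving a near-vertical edge slanted by angle_deg from vertical."""
    slope = math.tan(math.radians(angle_deg))
    center = width / 2.0
    return [[erf_edge(x - (center + y * slope), sigma=sigma)
             for x in range(width)] for y in range(height)]

img = slanted_edge_image()
```

Increasing `sigma` widens the edge transition and thereby lowers the spatial-frequency bandwidth of the resulting SFR, matching the role of the width parameter in Eq. 3.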

Given this measurement variation, and the previously discussed bias due to image distortion, we are led to a statistical approach [6]. We consider the fitting of the edge location, whether to a first- or higher-order function, as an estimation problem. As for any statistical modelling effort, examining the remaining residual error is useful. Figure 7 shows the results of the edge-fitting step for the noisy, ideal edge in terms of the residual error. We compute the difference between each line-by-line edge location and the fitted edge (equation), in the distance normal to the edge. We see a uniform, apparently random error and a symmetrical histogram, consistent with a good edge model.

Figure 4: Computed Gaussian distorted edge image with detected edge location: linear (blue dash) and 3rd-order polynomial (red circles).

Figure 5: SFR based on the distorted edge of Fig. 4, computed without correction.

Figure 6: Computed Gaussian distorted edge image with detected edge location, and corrected SFR.

Figure 7: Edge-location residual variation for the ideal edge and linear fit, and corresponding histogram (lower).

For the distorted edge image array, we compute the SFR with an (incorrect) linear fit, and with the third-order polynomial. Figure 8 shows both sets of residual error values. As expected, the residuals for a linear fit to a distorted edge show large, non-random variation. However, when the polynomial function is used, the results are similar to those of Fig. 6 for the straight edge feature. Figure 9 shows the probability histogram for these data. Having looked at the details of the edge finding, we now compare the resulting SFR for both of these cases. Figure 10 shows
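The residual comparison described above can be sketched with `numpy.polyfit`: fit both a first- and a third-order model to simulated per-line edge centers from a curved, noisy edge, and compare the RMS residuals. All numeric values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
rows = np.arange(100.0)
# Simulated line-by-line edge centers: a slanted edge, plus quadratic
# curvature standing in for lens distortion, plus centering noise from
# the line-by-line derivative step.
centers = (20.0 + 0.1 * rows
           + 2e-3 * (rows - 50.0) ** 2
           + rng.normal(scale=0.05, size=rows.size))

def rms_residual(order):
    """RMS residual of a polynomial fit of the given order to the centers."""
    coeffs = np.polyfit(rows, centers, order)
    return float(np.sqrt(np.mean((centers - np.polyval(coeffs, rows)) ** 2)))

rms_line, rms_cubic = rms_residual(1), rms_residual(3)
# The linear fit leaves the systematic curvature in its residuals (large,
# non-random, as in Fig. 8 left); the third-order fit leaves only
# approximately the noise level (as in Fig. 8 right).
```

Plotting `centers - fitted` for each order reproduces the qualitative behavior of Figs. 7-9: random, symmetric residuals for a good model, structured residuals for an under-fit one.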

remarkably consistent results when polynomial edge fitting is used for the distorted data set. To aid the interpretation, every 5th value for the distorted image has been plotted.

Figure 8: Edge residuals for the distorted image of Fig. 4 after linear (left) and third-order (right) edge fitting.

Figure 9: Probability histogram of residual edge-position error for the distorted image after third-order edge fitting.

Figure 10: Results for the ideal and distorted images (with noise) after advanced edge-fitting SFR analysis.

Lens-distortion Correction

Digital processing is often used to correct image distortion introduced by lens aberration. This requires a resampling of the image array, based on known (or estimated) spatial characteristics of the image capture. Resampling involves interpolation of the sampled image, and can be expected to modify the effective SFR of the output image. We can use the polynomial-based SFR analysis to quantify this effect, as in the following example. We started with a computed straight-edge image, as in Fig. 3. We then used Adobe Photoshop software to introduce geometrical distortion (modest barrel distortion); this saved image was taken as the distorted input. It was then corrected by applying the inverse operation (pincushion), and the result was taken as the corrected image.

Edge-SFR analysis was completed for the distorted image. Using the standard linear edge fit, the result is shown in Fig. 11, labeled Distort. first order. As expected, the polynomial edge analysis shows higher SFR results, consistent with the previous results; we take this to be the desired SFR. The distortion-corrected image, with a now-straight edge, was then analyzed, and the results are also shown in Fig. 11.

Figure 11: SFR analysis with lens correction: distorted image with 1st-order and polynomial analysis, and 1st-order analysis for the corrected (resampled) image.

Comparing the SFR results from this experiment allows us to compute the reduction in SFR due to the digital lens correction. This is the difference between the polynomial result for the distorted edge and the result for the corrected image. As we can see, for this example and software, the difference is relatively small (see arrows in the figure).

ISO 17850 and ISO 19084

While it is certainly possible to correct the above curved edge image feature to improve the SFR measurement, it is helpful to consider this in the context of other standard performance measures. ISO recently released the ISO 17850 [7] standard for a
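The distort-then-correct experiment can be mimicked with a simple one-parameter radial model. This sketch is not the actual Photoshop lens-correction algorithm; the model, the value of k, and the normalized coordinates are assumptions. Note that the inverse has no closed form, which is one reason correction requires resampling and interpolation in practice.

```python
import math

def radial_distort(x, y, k, cx=0.0, cy=0.0):
    """One-parameter radial model about center (cx, cy):
    r' = r * (1 + k*r^2). k < 0 gives barrel, k > 0 pincushion."""
    dx, dy = x - cx, y - cy
    factor = 1.0 + k * (dx * dx + dy * dy)
    return cx + dx * factor, cy + dy * factor

def radial_undistort(x, y, k, cx=0.0, cy=0.0, iters=10):
    """Invert the radial model by fixed-point iteration, since the
    inverse has no closed form."""
    dx, dy = x - cx, y - cy
    ux, uy = dx, dy
    for _ in range(iters):
        factor = 1.0 + k * (ux * ux + uy * uy)
        ux, uy = dx / factor, dy / factor
    return cx + ux, cy + uy

# A point pulled inward by barrel distortion (k < 0, assumed value)
# is recovered by the inverse (pincushion-like) mapping.
xd, yd = radial_distort(0.4, 0.3, k=-0.2)
xr, yr = radial_undistort(xd, yd, k=-0.2)
```

Applying `radial_undistort` over a pixel grid requires sampling the distorted image at non-integer positions, i.e., the interpolation step whose SFR cost the experiment above measures.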

geometric distortion measurement, and ISO 19084 [8] for the wavelength- (color-) dependent nature of optical distortion. For camera system performance evaluation, these (macro) image measures will likely give context to their effects on our (micro) edge-based SFR results.

The methods used for the two distortion standards are based on a test chart with a regular array of dots. These are detected in the test image, and measures of dot-to-dot distance variation define the measure. Figure 12 shows quiver and contour plots from the evaluation of a smartphone camera; the quiver plot is from Ref. 9. This (image) field-distortion analysis gives us insight as to where, and to what extent, image distortion is most extreme. Note also that left-to-right asymmetry of the apparent geometrical distortion can indicate misalignment of the camera to the test chart and fixture (keystoning).

Figure 12: Quiver and contour plots representing measured geometric distortion. Each arrow (distance) length is drawn as 5%. (Ref. 9)

Conclusions

ISO 12233 defines a method based on a straight edge element in a test chart. Image distortion, however, introduces a bias error into the result. We can both detect and correct the effect of this image field distortion by generalizing the fit to the detected edge feature. Edge fitting can be based on a polynomial model. Results indicate that the method can be effective, particularly when paired with analysis of the residual errors for the edge-location model. In addition, this field-dependent distortion can be independently evaluated using two other ISO standard methods. For system testing, the results of such macro-distortion measurement can be used to identify image regions of serious distortion, likely to be of concern. In addition, test-fixture alignment can be evaluated and adjusted prior to full system testing.

Acknowledgements

It is a pleasure to acknowledge several helpful discussions with members of ISO/TC42 standards teams, in particular Dietmar Wueller and Norman Koren.

References

[1] ISO 12233:2014, Photography -- Electronic still picture imaging -- Resolution and spatial frequency responses, ISO, 2014.
[2] P. D. Burns and D. Williams, Refined Slanted-Edge Measurement for Practical Camera and Scanner Testing, Proc. IS&T PICS Conference, pg. 191-195, 2002.
[3] R. A. Jones, Photogr. Sci. Eng., 9, 355-359 (1965).
[4] R. L. Baer, The Circular-Edge Spatial Frequency Response Test, Proc. SPIE-IS&T Electronic Imaging Symposium, SPIE vol. 5294, 2004.
[5] V. Cardei, B. Fowler, S. Kavusi and J. Philips, MTF Measurements of Wide Field of View Cameras, Proc. IS&T Symposium on Electronic Imaging, DPMI-6, 2016.
[6] P. D. Burns, Estimation Error in Image Quality Measurements, Proc. SPIE vol. 7867, 2011.
[7] ISO 17850:2015, Photography -- Digital cameras -- Geometric distortion measurements, ISO, 2015.
[8] ISO 19084:2015, Photography -- Digital cameras -- Chromatic displacement measurements, ISO, 2015.
[9] P. D. Burns and D. Williams, Going Mobile: Evaluating Smartphone Capture for Collections, Proc. IS&T Archiving Conf., pg. 2-6, 2016.

Author Biographies

Peter Burns (Burns Digital Imaging) is a consultant supporting digital imaging system and service development, and related intellectual property efforts. Previously he worked for Carestream Health, Eastman Kodak and Xerox Corp. He is a frequent conference speaker, and teaches courses on these subjects.

Don Williams is founder of Image Science Associates, a digital imaging consulting and software group. Their work focuses on quantitative performance metrics for digital capture imaging devices, and imaging fidelity issues for the cultural heritage community. He has taught short courses for many years, and contributes to several imaging standards activities.