A collection of hyperspectral images for imaging systems research

Torbjørn Skauli a,b, Joyce Farrell *a
a Stanford Center for Image Systems Engineering, Stanford CA, USA; b Norwegian Defence Research Establishment, P. O. Box 25, 2027 Kjeller, Norway
*joyce_farrell@stanford.edu

ABSTRACT

A set of hyperspectral image data are made available, intended for use in modeling of imaging systems. The set contains images of faces, landscapes and buildings. The data cover wavelengths from 0.4 to 2.5 micrometers, spanning the visible, NIR and SWIR spectral ranges. The images have been recorded with two HySpex line-scan imaging spectrometers covering the spectral ranges 0.4 to 1 and 1 to 2.5 micrometers. The hyperspectral data set includes measured illuminants and software for converting the radiance data to estimated reflectance. The images are being made available for download at http://scien.stanford.edu

Keywords: Hyperspectral imaging, color imaging, colorimetry

1. INTRODUCTION

Conventional imaging systems, such as color cameras, exhibit relatively broad spectral responses. For predicting the performance of such imaging systems, it is desirable to have a model of the underlying detailed spectral properties of all elements of the imaging chain, including relevant scenes. Such detailed spectral information can be captured using hyperspectral imaging. For each pixel, a hyperspectral imager samples the spectrum in a large number of spectral bands, so that the image can be seen as a "cube" of data with two spatial dimensions and wavelength as the third dimension. Typically, hyperspectral imaging samples the spectrum with sufficient resolution to capture the spectral characteristics of most solids. Hyperspectral data therefore contain spectral information at a level of detail sufficient for detailed modeling of conventional imaging systems.

There are still only a limited number of hyperspectral image collections that are freely available [1-5]. The purpose of the work presented here has been to record hyperspectral images of scenes of interest for image systems development and instructional use. The imagery includes outdoor scenes, several human faces and a "still life" test image with a fruit basket and a color chart. The data span a wide spectral range, from 400 to 2500 nm.

2. HYPERSPECTRAL IMAGE RECORDING

Many technologies exist for hyperspectral imaging. The most common configuration, which is used here, is an imaging spectrometer in which a slit in the focal plane defines one row of pixels whose spectra are projected onto the columns of an image sensor, thus recording a spatial slice of the "hypercube". The imaging spectrometer is typically used in a line-scan configuration where the scanning provides the second spatial dimension. Imaging spectrometers have the significant advantage of recording all bands simultaneously for any given pixel, eliminating spectral artifacts due to movement or non-stationary scenes. On the other hand, the line-scan imaging modality can lead to spatial artifacts from imperfect scan movement or scene movement during recording.

Here we used two HySpex line-scan imaging spectrometers [7] whose characteristics are summarized in Table 1. The HySpex VNIR-1600 covers the visible and near infrared (VNIR) spectral range from 400 to 1000 nm with 3.7 nm resolution and 1600 spatial pixels across the field of view. The HySpex SWIR-320m-e covers the "short wave infrared" (SWIR) spectral range from 970 to 2500 nm with 6 nm spectral resolution. This camera has only 320 pixels across the field of view, essentially due to the limited maturity of detector arrays in the SWIR range. The VNIR camera has a total field of view of 17 degrees, so that the nominal angular extent of a pixel is 0.185 mrad along the field of view. The magnification of the SWIR camera is such that the pixel size is 4 times larger in SWIR than in VNIR. Because of the non-centrosymmetric optics, the magnification of both cameras varies somewhat across the field of view, with the ends of the field of view deviating by about 10% from the mean magnification.
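As a quick consistency check on the geometry above, the nominal pixel footprints in Table 1 follow directly from the total field of view and the pixel count. The short calculation below is purely illustrative (it is not part of the distributed software) and reproduces the tabulated values to within rounding and small optical-design differences.

```python
import numpy as np

def pixel_geometry(fov_deg, n_pixels, distance_m):
    """Approximate per-pixel angular and spatial sampling for a line-scan camera."""
    fov_rad = np.deg2rad(fov_deg)
    ifov_mrad = fov_rad / n_pixels * 1e3            # nominal angular extent of one pixel
    swath_m = 2 * distance_m * np.tan(fov_rad / 2)  # field of view on the object plane
    sample_mm = ifov_mrad * distance_m              # spatial sampling (small-angle approximation)
    return ifov_mrad, swath_m, sample_mm

# HySpex VNIR-1600: 17 degrees over 1600 pixels, with the 1 m closeup lens
print(pixel_geometry(17.0, 1600, 1.0))   # ~ (0.185 mrad, 0.30 m, 0.19 mm)

# HySpex SWIR-320m-e: 13.5 degrees over 320 pixels, with the 3 m closeup lens
print(pixel_geometry(13.5, 320, 3.0))    # ~ (0.74 mrad, 0.71 m, 2.2 mm)
```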

According to the manufacturer, the camera optics are end-to-end corrected to minimize spatial and spectral co-registration errors. Such errors can otherwise lead to significant artifacts in the spectra [8]. The "smile" and "keystone" distortions are specified to be less than 10% of a pixel.

Table 1. Basic specifications for the HySpex cameras used to record the images.

                                     HySpex VNIR-1600          HySpex SWIR-320m-e
  Spectral range                     400-1000 nm               970-2500 nm
  Spectral sampling                  3.7 nm                    6 nm
  No. of bands                       160                       256
  Angular field of view              17 degrees                13.5 degrees
  Spatial resolution along FOV       0.185 mrad                0.74 mrad
  Spatial resolution across FOV      0.37 mrad                 0.74 mrad
  No. of pixels along FOV            1600                      320
  Closeup lens                       1 m         3 m           1 m         3 m
  Field of view                      0.30 m      0.90 m        0.23 m      0.68 m
  Spatial sampling along FOV         0.2 mm      0.6 mm        0.7 mm      2 mm
  Depth of focus                     7 mm        67 mm         28 mm       255 mm

Here, rotation scanning is used to cover the spatial dimension perpendicular to the slit. The two cameras were placed on a rotation stage such that the axis of rotation and the direction of the slit are both vertical, and the scan proceeds in the horizontal direction. The scan speed is chosen to approximate square pixels, where horizontal and vertical pixel sampling intervals are nominally the same. Note that in this version of the VNIR camera, the slit width corresponds to 0.37 mrad in the scan direction, twice the pixel sampling interval. Therefore the images will appear somewhat sharper in the vertical direction. The cameras are placed side by side on the rotation stage with a center-to-center distance of 10 cm. For finite object distances, this displacement leads to slightly different viewing angles for the two cameras. Also, with the particular setup used here, the recording with the two cameras is performed in two successive scans, so that the VNIR and SWIR images are not recorded simultaneously. For these reasons, a simple pixel-to-pixel mapping between the two spectral ranges is not possible.

The cameras have their focus fixed at infinity. For imaging of objects at closer range, a corrective lens is installed at the entrance aperture. Here, corrective lenses for 1 m and 3 m object distance have been used for the indoor recordings. The resulting resolution and field of view are shown in Table 1.

The integration time, which sets a lower limit on the scan speed, was normally chosen so that the brightest areas of the image approached but did not reach saturation. The well capacity of the VNIR camera is about 40000 electrons, so that the peak signal-to-noise ratio is on the order of 200. The signal-to-noise ratio tends to be lower towards the ends of the spectral range due to the wavelength dependence of the illuminant spectrum, optical throughput and detector quantum efficiency. Multi-frame averaging of each scan line was used in some of the images to improve the signal-to-noise ratio.

Images of faces and the "still life" scene were recorded indoors under artificial light from two unfiltered tungsten studio lamps. Bands below 420 nm and above 950 nm were excluded in the VNIR images due to low SNR values, leaving 133 spectral bands. Outdoor images were recorded in daylight, in partially overcast conditions, and the full spectral range of the two cameras was used.
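The quoted SNR figures follow from simple shot-noise statistics. The sketch below is an illustrative calculation under a shot-noise-limited assumption (read noise and dark current ignored), not code from the data set; it shows the expected peak SNR at the stated well capacity and the gain from averaging several exposures per scan line.

```python
import numpy as np

def shot_noise_snr(signal_electrons, n_frames=1):
    """Peak SNR for a shot-noise-limited signal, optionally averaged over n_frames exposures."""
    # For Poisson-distributed photoelectrons, SNR = signal / sqrt(signal) = sqrt(signal);
    # averaging N independent exposures improves the SNR by a further factor sqrt(N).
    return np.sqrt(signal_electrons) * np.sqrt(n_frames)

print(shot_noise_snr(40_000))              # ~200, matching the quoted peak SNR
print(shot_noise_snr(40_000, n_frames=8))  # ~566, e.g. for 8-exposure line averaging
```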
3. DATA PREPARATION AND FORMAT

During recording, the cameras store the images in a lossless "corrected raw data" format with 16 bits per sample, intended for real-time processing [9]. After recording, separate software uses camera calibration data embedded in the raw files to convert the images to 32-bit floating-point radiance values in W/(sr nm m²).
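The details of the HySpex raw format are not described here, but the conversion is conceptually a per-band linear radiometric calibration. The following is a minimal sketch of that idea, assuming dark-signal and gain arrays have already been extracted from the calibration data; the array names and synthetic values are hypothetical and do not represent the actual HySpex or SCIEN software.

```python
import numpy as np

def raw_to_radiance(raw_counts, dark_counts, gain):
    """Convert 16-bit corrected raw counts to floating-point radiance.

    raw_counts  : uint16 array, shape (lines, pixels, bands)
    dark_counts : dark-signal offset per pixel and band
    gain        : radiance per count, W/(sr nm m^2), per pixel and band
    """
    return (raw_counts.astype(np.float32) - dark_counts) * gain

# Hypothetical example with synthetic data (VNIR-like dimensions)
raw = np.random.randint(0, 2**16, size=(100, 1600, 160), dtype=np.uint16)
dark = np.full((1600, 160), 120.0, dtype=np.float32)
gain = np.full((1600, 160), 2.5e-4, dtype=np.float32)
radiance = raw_to_radiance(raw, dark, gain)   # 32-bit floating-point radiance cube
```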

Figure 1. Hyperspectral image of a simple test scene. The image on the left shows the still life scene rendered under the studio tungsten lamps. The spectral power of the illuminant and the color signal of a red surface in this scene (highlighted by a white square) are plotted in the graph in the lower left of the figure. The image on the right shows the scene rendered under D65, and the graph on the right plots the spectral power of D65 and the red surface under this illuminant. The graph in the middle plots the spectral reflectance of the red surface, extracted from the actual image data.

For indoor recordings in artificial light, the data are converted to reflectance as follows. The spatial variation of illumination was characterized by recording images of a large white board placed directly in front of the imaged objects. A low-order polynomial function was fitted to the observed illumination variation and then used to compensate for these variations in the radiance image. Independent measurements of the spectral power of the illuminant were also made using a zinc oxide calibration target. The illuminant spectrum can be divided out of the data to yield images of apparent spectral reflectance. The reflectance estimates obtained this way do not take into account the 3D structure of objects, which leads to shading effects. Also, we have ignored the slight increase of light reflected back from the room when the white board is placed in the object plane. Despite these shortcomings, the images of spectral reflectance realistically represent the apparent scene properties as observed by a camera.

The full set of floating-point image data represents a very large volume of data which cannot realistically be posted for online access at this time. Therefore, we compress the data using the singular value decomposition (SVD) to find a relatively small set of spectral basis functions and corresponding coefficients that account for 99.99% of the variance of each hyperspectral image [10, 11]. Using the SVD we can reduce the file size by a factor of 30 for faces and 10 for the scene illustrated in Figure 1. Figure 2 shows the spectral reflectance of several patches in the Macbeth ColorChecker derived from the full and compressed hyperspectral image data. Some deviations are apparent, but overall this simple compression scheme reproduces the data to good accuracy.
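The distributed decompression code is in Matlab; purely as an illustration of the compression scheme described above, the Python sketch below reshapes a hypercube into a pixels-by-bands matrix, keeps just enough singular vectors to retain 99.99% of the signal energy (used here as a stand-in for the variance criterion in the text), and reconstructs an approximate cube from the retained basis and coefficients.

```python
import numpy as np

def svd_compress(cube, energy_kept=0.9999):
    """Compress a hyperspectral cube (rows, cols, bands) with a truncated SVD."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)                      # one spectrum per row
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)          # cumulative fraction of squared signal
    k = int(np.searchsorted(energy, energy_kept)) + 1
    coeffs = U[:, :k] * s[:k]                        # per-pixel coefficients
    basis = Vt[:k]                                   # spectral basis functions
    return coeffs.reshape(rows, cols, k), basis

def svd_decompress(coeffs, basis):
    """Reconstruct the approximate cube from coefficients and spectral basis functions."""
    rows, cols, k = coeffs.shape
    return (coeffs.reshape(-1, k) @ basis).reshape(rows, cols, -1)

# Example with a small synthetic cube
cube = np.random.rand(64, 64, 160).astype(np.float32)
coeffs, basis = svd_compress(cube)
approx = svd_decompress(coeffs, basis)
print(basis.shape[0], np.max(np.abs(cube - approx)))  # number of basis functions, worst-case error
```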

Figure 2. Effect of data compression. This figure plots the spectral reflectance of patches in the Macbeth ColorChecker in the "still life" scene. The measured spectral reflectance is plotted as solid colored lines, and the reflectance generated from the spectral basis function coefficients is shown as dashed gray lines.

4. SCENES AND IMAGE SETS

4.1 Test scene

The "still life" test scene is shown in Figure 1. It consists of a fruit basket, a Macbeth ColorChecker and a reflectance reference on a light grey table with a dark grey background. The reflectance reference has regions of nominally 5, 50 and 90% reflectance. (In retrospect, it can be noted that secondary illumination from the table will tend to distort the light reflected from the reference.) The images of the still life scene were recorded with averaging of 8 exposures for each scan line to obtain a high SNR. A similar set of images was recorded after the fruit and vegetables had partially decayed from being stored for a week at room temperature.

4.2 Faces

The faces of about 70 subjects were imaged using the VNIR and SWIR cameras. To obtain a good signal-to-noise ratio, a relatively slow scan rate was used, so that the scanning of a face lasted about 20 seconds. The subjects rested their heads against a wall and were instructed to try to hold their breath while the scan passed across the face. Still, the slow scanning leads to some motion artifacts in the images. In particular, blinking artifacts are visible on the eyelids in many cases. Also, it is inevitable that most of the facial expressions are not very lively. A few images were recorded with very long scan times, and then breathing is visible as oscillations on lines in the image.
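The face images in Figure 3 below, like the still-life images in Figure 1, are shown rendered under D65 even though they were recorded under tungsten light. As a rough, hypothetical illustration of how such a rendering can be produced from the reflectance data (the renderings in the figures were made with the authors' own tools, not this code), the sketch below multiplies the reflectance spectra by a D65 spectral power distribution, integrates against CIE 1931 color matching functions, and converts the resulting XYZ values to sRGB. The CMF and illuminant tables are assumed to be loaded from external data, sampled on the same wavelengths as the image bands.

```python
import numpy as np

# Assumed inputs (not provided here):
#   cmf        : (bands, 3) CIE 1931 2-degree color matching functions x, y, z
#   illuminant : (bands,) relative spectral power of D65
# both sampled on the same wavelength grid as the reflectance cube.

def render_under_illuminant(reflectance_cube, cmf, illuminant):
    """Render a reflectance cube (rows, cols, bands) to sRGB under a given illuminant."""
    radiance = reflectance_cube * illuminant                 # color signal per pixel
    xyz = np.tensordot(radiance, cmf, axes=([2], [0]))       # integrate against x, y, z
    xyz /= np.dot(illuminant, cmf[:, 1])                     # normalize so a white surface has Y = 1
    m = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])              # XYZ (D65) to linear sRGB
    rgb = np.clip(xyz @ m.T, 0.0, 1.0)
    # sRGB gamma encoding
    return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb**(1 / 2.4) - 0.055)
```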

Figure 3. Samples taken from the face database. Although the faces were illuminated by studio tungsten lamps, they are shown here rendered under D65.

4.3 Outdoor scenes

Two outdoor scenes were imaged: the Stanford main quad (Figure 4) and the view from a nearby hill overlooking the San Francisco Bay ("the Dish", Figure 5). The rotary stage of the two HySpex imagers was used to record full 360-degree panoramic scans. Unfortunately, for practical reasons only small sections of the recorded images shown in the figures are being made available online at this time. The image of the quad was recorded in difficult conditions with varying cloud cover, where the wall with colorful murals is on the shadow side of the building. The hilltop views were recorded in mostly sunny conditions, with a thin cloud cover in some areas.

5. DATA ACCESS AND USAGE

The data can be accessed through www.scien.stanford.edu. The website contains the compressed hyperspectral data, Matlab source code for decompression, and more detailed technical information about the images. The subjects whose faces are included in the image set have consented to having their hyperspectral images published. Subjects are not to be identified by name. Further details about usage rights are given on the website.

6. DISCUSSION AND CONCLUSIONS

We are aware of only one other hyperspectral face database that is available for public use [6]. Our data extend this previous work by including hyperspectral images of faces captured at relatively high resolution, not only in the visible but also in the NIR and SWIR spectral ranges. The images represent the spectral properties of the scenes with an accuracy which is sufficient, even after compression, for accurate modeling of a variety of imaging systems. The data may also be used to test algorithms for image analysis, with the limitation that "ground truth" information about the true properties of scene materials is not available beyond what can be seen from the images themselves. Not least, we believe that the data will be valuable for instructional use. The images are being made available from the Stanford Center for Image Systems Engineering (SCIEN), together with detailed technical data, including software to read the images into Matlab.

Figure 4. This figure shows part of a panorama scene that includes the outside of the Memorial Church at Stanford University. The left graph shows the spectral energy of the gold paint on the church, and the right graph shows the spectral energy of the sky.

Figure 5. Views towards San Francisco and Stanford from "the Dish".

ACKNOWLEDGMENTS

The HySpex cameras were kindly loaned by the manufacturer, Norsk Elektro Optikk AS. We are of course also grateful to the subjects who have consented to the publication of their images.

REFERENCES

[1] Parraga, C. A. et al., "Color and luminance information in natural scenes: errata," Journal of the Optical Society of America A, vol. 15, pp. 563-569, 1998.
[2] AVIRIS airborne imager data, available at http://aviris.jpl.nasa.gov/data/free_data.html and http://compression.jpl.nasa.gov/hyperspectral/
[3] Chakrabarti, A. and Zickler, T., "Statistics of Real-World Hyperspectral Images," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[4] Foster, D. H. et al., "Information limits on neural identification of coloured surfaces in natural scenes," Visual Neuroscience, vol. 21, pp. 331-336, 2004.
[5] Snyder, D. et al., "Development of a Web-based Application to Evaluate Target Finding Algorithms," Proc. 2008 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Boston, MA, vol. 2, pp. 915-918, 2008.
[6] Wei, D. et al., "Studies on Hyperspectral Face Recognition in Visible Spectrum with Feature Band Selection," IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 40, pp. 1354-1361, 2010.
[7] HySpex Hyperspectral Cameras: An Overview. Available: http://www.neo.no/hyspex/
[8] Skauli, T., "An upper-bound metric for characterizing spectral and spatial coregistration errors in spectral imaging," Optics Express, vol. 20, pp. 918-933, 2012.
[9] Skauli, T., "Sensor noise informed representation of hyperspectral data, with benefits for image storage and processing," Optics Express, vol. 19, pp. 13031-13046, 2011.
[10] Marimont, D. H. and Wandell, B. A., "Linear models of surface and illuminant spectra," Journal of the Optical Society of America A, vol. 9, pp. 1905-1913, 1992.
[11] Wandell, B. A., "The synthesis and analysis of color images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-9, pp. 2-13, 1987.