Introduction course in particle image velocimetry


Olle Törnblom
March 3, 2004

Introduction

Particle image velocimetry (PIV) is a technique which enables instantaneous measurement of the flow velocity at several positions in a plane. The working principle is quite simple (cf. figure 1): the flow is seeded with light-reflecting particles (usually smoke in air and glass beads in water), a light sheet illuminates the particles in the measurement plane and a camera is used to take two exposures of the illuminated plane. The two exposures should be taken within a short interval, so that the same particles are caught in both exposures. They can be recorded either as a double exposure of one image or as two separate images. The double-exposure technique is based on an autocorrelation, which results in a directional ambiguity (it is impossible to tell whether an imaged particle belongs to the first or the second exposure), so the flow can only be in one direction when this method is used. Hence, the method with two images and cross-correlation is more commonly used, and the remainder of this introduction deals with that technique. A good reference book which covers most aspects of PIV is Raffel et al. (1998).

Figure 1: Sketch of a typical setup for PIV measurements (labels in the sketch: smoke seeding, light source, light sheet, imaged area, resulting images 1 and 2).

Figure 2: Double cavity Nd:YAG PIV laser (labels in the sketch: cavity 1, cavity 2, mirrors, beam combiner at λ = 1064 nm, harmonic generator giving λ = 532 nm & 1064 nm, harmonic separator with IR-dump, cylindrical lens, light sheet).

Measurement technique

Seeding

Seeding the flow with light-reflecting particles is necessary in order to image the flow field. The particles should be small enough to follow the flow but large enough to reflect the required amount of light. In general, PIV needs a higher seeding density than LDV (Laser Doppler Velocimetry). A good rule of thumb is that around ten particles should be correlated for each measured velocity vector.

The light sheet

Figure 1 shows a typical PIV setup and figure 2 shows the principal layout of a common PIV laser. The plane where the measurements are to be taken should be illuminated by a light sheet. Commonly a pulsed Nd:YAG laser (Neodymium Yttrium Aluminum Garnet) is used as the light source because of its high light intensity. Pulsed lasers need some time to build up energy before they can deliver a new pulse, and the two images in a PIV image pair have to be taken within a quite short period of time. Therefore it is common in PIV to use a laser with two cavities. The laser pulses have a duration of 5-10 ns and the energy in one pulse can be up to 400 mJ.¹ Nd:YAG lasers emit light with a wavelength of 1064 nm, which is in the infrared range. For PIV purposes light with this wavelength is not very useful, since most cameras have their maximum sensitivity in the blue-green part of the spectrum. It is also disadvantageous not to be able to see the light sheet when positioning it in the measurement section. For these reasons the wavelength of the Nd:YAG laser is halved, using a device called a harmonic generator, so that it becomes 532 nm. The harmonic generator is not 100% efficient and therefore a harmonic separator and an IR-dump are needed to get rid of the remaining IR light. The beam coming out of the laser has an axisymmetric cross-section, which needs to be shaped with a cylindrical lens in order to form a planar light sheet.

¹Note that this duration time and energy level make a power of more than 40 MW possible, so the laser light should be handled with care.
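As a quick sanity check of the footnote, the peak power follows from the pulse energy divided by the pulse duration (using the values above, which are reconstructions and should be checked against the laser's data sheet):

P \approx E/\tau = 0.4\,\mathrm{J} \,/\, (10 \times 10^{-9}\,\mathrm{s}) = 40\,\mathrm{MW}.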

The camera

PIV puts special demands on the camera that is going to be used, especially if the flow velocity is high, the imaged area is small and the particles are small. The first two circumstances require the camera to be able to take two images within a short period of time, in order for the same individual particles to appear in both images. Short inter-exposure times can be achieved either with a high speed camera which continuously records images at a rate of several kHz, or with a camera with a progressive scan architecture. With a high speed camera the shortest inter image time possible today is around 1 µs; this is achieved by letting the first laser pulse come at the end of the exposure of the first image and the second pulse at the beginning of the exposure of the second image, cf. figure 3.

Figure 3: How short inter image times are achieved when using continuous high speed cameras (labels in the sketch: camera shutter, image 1, image 2, light source, time).

Cameras with progressive scan architecture can take two images with less than 1 µs delay. Figure 4 shows the layout of such a sensor. Directly after the first image has been recorded, the charge of each pixel is transferred to its designated position in the interline shift register, and a new image can then be recorded by the pixels. The second image is exposed until the first image has been read out from the interline shift register, and is then transferred to the image buffer in the same way. Because of this procedure the second image is exposed for a longer time than the first; to avoid it becoming overexposed by ambient light, a filter which only passes the laser wavelength can be fitted on the camera lens.

If the particles are small or the light intensity is low, the camera needs to have a high sensitivity to incoming light. The sensitivity of a photon sensor such as a CCD camera is measured by its QE (quantum efficiency), which is the average number of electrons that are released in the sensor when it is hit by a photon. The QE is often wavelength dependent, with a maximum efficiency in the blue-green part of the visible spectrum. The most sensitive cameras on the market today have Peltier-cooled (to reduce thermal noise) CCD sensors (CCD = charge coupled device) and a QE of around 70%.

Figure 4: Architecture of an interline CCD (labels in the sketch: light sensitive pixels, interline shift registers, output shift register).

Since the laser light is monochromatic there is no need to use a colour camera; this also makes the image files smaller.

Image evaluation

Cross-correlation of images

The aim of the cross-correlation is to find the distance that the particle pattern has moved during the inter image time and to translate this into a velocity measure. The relation between velocities u and particle displacements d is simply

u = \frac{d}{M\,\Delta t},    (1)

where M is the magnification and Δt is the inter image time. The cross-correlation function is not calculated on the whole images but on smaller parts of them called interrogation areas (IAs), see figure 6. The cross-correlation of one IA pair results in one velocity vector. The cross-correlation can be seen as finding the relative displacement of the IAs that gives the best pattern match. This displacement should be proportional to the average velocity in the IAs. Cross-correlation functions can be calculated in a number of different ways (cf. e.g. Bendat & Piersol (1986)); the direct procedure to compute an unbiased two-dimensional sample cross-correlation function R_{AB}(x, y), for the M × N-point samples A(m, n) and B(m, n) with |x| < M and |y| < N, is defined by

R_{AB}(x,y) =
\begin{cases}
\dfrac{1}{(M-x)(N-y)} \sum_{m=1}^{M-x} \sum_{n=1}^{N-y} A(m,n)\,B(m+x,\,n+y), & x \ge 0,\ y \ge 0,\\[4pt]
\dfrac{1}{(M+x)(N-y)} \sum_{m=1}^{M+x} \sum_{n=1}^{N-y} A(m-x,\,n)\,B(m,\,n+y), & x < 0,\ y \ge 0,\\[4pt]
\dfrac{1}{(M-x)(N+y)} \sum_{m=1}^{M-x} \sum_{n=1}^{N+y} A(m,\,n-y)\,B(m+x,\,n), & x \ge 0,\ y < 0,\\[4pt]
\dfrac{1}{(M+x)(N+y)} \sum_{m=1}^{M+x} \sum_{n=1}^{N+y} A(m-x,\,n-y)\,B(m,\,n), & x < 0,\ y < 0.
\end{cases}    (2)

Cross-correlation function via finite Fourier transforms

The direct method of computing the cross-correlation quickly becomes very heavy to apply when larger data sets are to be analysed. A more efficient way to estimate cross-correlation functions is to use fast Fourier transforms (FFTs). This reduces the computation from O[N^4] operations to O[N^2 \log_2 N] operations in the case of a two-dimensional correlation. When Fourier transforms are used, one takes advantage of the correlation theorem (see e.g. Bendat & Piersol (1986)), which states that the cross-correlation of two functions is equivalent to a complex-conjugate multiplication of their Fourier transforms:

R_{AB} \Longleftrightarrow \hat{A}\,\hat{B}^{*},    (3)

where \hat{A} and \hat{B} are the Fourier transforms of A and B, respectively, and \hat{B}^{*} represents the complex conjugate of \hat{B}. Using FFTs means treating the data as if they were periodic. The periodicity can give rise to aliasing if the particles have moved a distance larger than half the size of the interrogation area.
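The correlation theorem above can be sketched in a few lines of Matlab (this is my own illustration, not code from the handout; the interrogation areas are synthetic, the inter image time is an example value, and the variable names are made up):

    % Two synthetic 32x32 interrogation areas; B is A shifted by a known amount
    N  = 32;
    A  = rand(N);                          % IA from the first exposure
    dx = 3;  dy = 2;                       % imposed particle-pattern shift [pixels]
    B  = circshift(A, [dy dx]);            % IA from the second exposure (circular shift for the demo)

    % Correlation theorem, eq. (3): conjugate multiplication of the transforms
    R = real(ifft2(fft2(A) .* conj(fft2(B))));
    R = fftshift(R);                       % put zero displacement at the centre of the plane

    % Peak location -> displacement. With this conjugation order the peak sits at
    % minus the displacement, so the displacement is centre minus peak.
    [Rmax, k] = max(R(:));
    [pr, pc]  = ind2sub(size(R), k);       % peak row and column
    centre    = floor(N/2) + 1;
    d_pixels  = [centre - pc, centre - pr];    % recovers [dx dy] = [3 2]

    % Eq. (1): displacement -> velocity
    m_per_px = 37.8e-3/256;                % object-plane metres per pixel (homework image scale)
    dt       = 100e-6;                     % example inter image time [s], not from the handout
    UV       = d_pixels * m_per_px / dt;   % [u v] in m/s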

Figure 5: Effective correlation value weighting in FFT-based cross-correlation calculations (panels: sample weighting; effective weighting of correlation values). The effective weighting function to the right is the convolution of two sample weighting functions such as the one to the left.

The solution to aliasing problems is either to increase the IA size or to reduce the inter image time Δt.

A perhaps more serious problem with the FFTs is that bias errors occur if they are not accounted for. Due to the finite size of the IAs, the overlap of the two IAs becomes smaller with increasing displacement, which results in an underestimation of the peak magnitude for all displacements other than zero. A weighting function should be applied to the cross-correlation function to avoid this bias. This weighting function is found by convolving the sample weighting functions (which are equal to one for all points inside the IA and zero elsewhere). The bias is removed by dividing the correlation function by the effective weighting function, which has a pyramid shape, see figure 5. An instructive example of convolution can be found at http://www.jhu.edu/~signals/convolve/.

It should be noted that many FFT implementations (e.g. the one in Matlab) shuffle the output data so that the DC component is found at index 1 and the highest positive and negative frequencies around index N/2. To get a monotonically increasing spectrum (and resulting displacements) the data have to be reorganised (fftshift in Matlab).
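The pyramid-shaped effective weighting can be built exactly as described, by convolving two sample weighting functions. A minimal Matlab sketch (my own, with assumed variable names; R stands for an fftshifted N × N correlation plane such as the one in the previous sketch):

    N = 32;
    R = fftshift(real(ifft2(fft2(rand(N)) .* conj(fft2(rand(N))))));  % any fftshifted correlation plane

    w     = ones(N);                       % sample weighting: one inside the IA, zero outside
    Wfull = conv2(w, w);                   % (2N-1) x (2N-1) pyramid, maximum N^2 at zero shift
    Wfull = Wfull / N^2;                   % normalise so the zero-shift weight is one (cf. figure 5)

    % Keep the part covering the displacement range -N/2 ... N/2-1 of the
    % fftshifted correlation plane (zero shift at index N/2+1)
    idx = (N - N/2):(N + N/2 - 1);
    W   = Wfull(idx, idx);

    R_corrected = R ./ W;                  % divide away the bias before peak detection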

Figure 6: Illustration of how velocity information is extracted from an image pair (interrogation areas are cross-correlated and the displacement gives [U, V] via [Δx, Δy]/Δt).

Peak detection and subpixel interpolation

When the cross-correlation has been performed, a measure of the displacement is found by detecting the location of the highest correlation peak. Just detecting the peak results in an uncertainty of ±1/2 pixel in the peak location. However, the accuracy can be increased substantially by curve fitting and interpolation. This may sound like inventing new information which hasn't been measured, but the procedure can be defended with the argument that the correlation is based on the images of several particles. If, for example, an interrogation area pair contains ten particle images, and eight particles have a displacement of 3 pixels and two a displacement of 2 pixels, the maximum correlation peak will be located at 3 pixels, but a subpixel interpolation can predict the correct displacement of 2.8 pixels, since the correlation at two pixels will be higher than the one at four pixels.

The most common way to perform the subpixel interpolation is to use a three-point estimator. When the maximum peak has been detected at [i, j], the neighbouring values are used to fit a function to the peak. In the case of a Gaussian peak fit, where the peak is assumed to have the shape f(x) = C \exp[-(x_0 - x)^2/k], the displacements are found by

x_0 = i + \frac{\ln R_{(i-1,j)} - \ln R_{(i+1,j)}}{2\ln R_{(i-1,j)} - 4\ln R_{(i,j)} + 2\ln R_{(i+1,j)}}, \qquad
y_0 = j + \frac{\ln R_{(i,j-1)} - \ln R_{(i,j+1)}}{2\ln R_{(i,j-1)} - 4\ln R_{(i,j)} + 2\ln R_{(i,j+1)}}.

Other ways to interpolate at the subpixel level are the parabolic peak fit and the peak centroid. Overlapping IAs (oversampling) are often used, see figure 6, in order to maximise the use of available information and fulfil the Nyquist sampling theorem.
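As a sketch, the three-point Gaussian estimator can be wrapped in a small Matlab function (my own naming; R is the correlation plane and [i, j] the integer peak location):

    function [x0, y0] = gauss3pt(R, i, j)
    % Three-point Gaussian sub-pixel fit around the integer correlation peak at [i, j],
    % following the estimator above (first index called x, second index called y).
    lnR = @(a, b) log(max(R(a, b), eps));   % guard against log of non-positive values
    x0  = i + (lnR(i-1,j) - lnR(i+1,j)) / (2*lnR(i-1,j) - 4*lnR(i,j) + 2*lnR(i+1,j));
    y0  = j + (lnR(i,j-1) - lnR(i,j+1)) / (2*lnR(i,j-1) - 4*lnR(i,j) + 2*lnR(i,j+1));
    end

The peak must of course not lie on the border of the correlation plane for the three-point fit to be possible.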

Homework assignment

Before the laboratory lesson at KTH you should prepare yourself by writing (preferably in Matlab) your own PIV-image evaluation routine. After you have collected your data in the laboratory lesson you should be able to use your own routine to calculate the velocities. To help you develop a routine, an image pair Image8 A.tif and Image8 B.tif is provided at the course web-site. The image pair was recorded, with an inter image time of 5 µs, behind a notchback car model as can be seen in figure 1. The image size is 256 × 256 pixels and the physical image size is 37.8 × 37.8 mm. For reference, a velocity field computed by the PIV system at Mechanics, KTH, is provided together with a Matlab script which reads and plots the data.

Suggested procedure

If you want, you can start with the following procedure when designing your evaluation routine (a minimal skeleton following these steps is sketched at the end of this section, just before the references). Useful Matlab commands for the different stages are given in parentheses.

- Read in the image data. (imread)
- Convert the data to double precision (native in Matlab) to avoid problems with certain functions. (double)
- Remove the dominating DC component in the images. (A-mean(A(:)))
- Select interrogation areas of 32 × 32 pixels with 50% overlap.
- Compute the FFTs of the IAs. (fft2)
- Compute the cross-spectral density function. (A.*conj(B))
- Compute the cross-correlation by the inverse transformation. (ifft2)
- Shuffle the data so that the origin is at the center. (fftshift)
- Locate the highest correlation peak. ([Y,I]=max(R(:)))
- Convert the displacement to physical units and calculate the corresponding velocities.
- Plot the data. (quiver)
- Compare your result with the provided reference field.

Try to implement subpixel interpolation and see how the results improve. Spurious vectors can be avoided by applying a criterion for the minimum ratio between the highest peak and the second highest peak; a good value of the peak height ratio is 1.2. Don't hesitate to contact me by telephone (8-79 6752) or e-mail (olle@mech.kth.se) if you have any questions regarding the assignment or PIV in general.

Acknowledgement

Parts of this material have been taken from the PIV part of a vehicle aerodynamics course given at KTH by Barbro Muhammad-Klingmann and Johan Westin.
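For completeness, here is a minimal evaluation-routine skeleton following the suggested procedure above. It is a sketch under assumptions, not the reference implementation: the underscore in the file names, the displacement sign convention taken from the correlation-theorem sketch earlier, and the plain integer peak search are all my own choices, while the inter image time and image scale are the values stated in the homework text.

    % Minimal PIV evaluation skeleton (see the hedges in the text above)
    A = double(imread('Image8_A.tif'));        % first exposure (underscore in the name assumed)
    B = double(imread('Image8_B.tif'));        % second exposure
    A = A - mean(A(:));                        % remove the dominating DC component
    B = B - mean(B(:));

    N     = 32;                                % interrogation-area size [pixels]
    step  = N/2;                               % 50% overlap
    scale = 37.8e-3/256;                       % object-plane metres per pixel (homework values)
    dt    = 5e-6;                              % inter image time as stated in the homework [s]
    cen   = floor(N/2) + 1;                    % index of zero displacement after fftshift

    xs = 1:step:size(A,2)-N+1;
    ys = 1:step:size(A,1)-N+1;
    [U, V, X, Y] = deal(zeros(numel(ys), numel(xs)));

    for a = 1:numel(ys)
      for b = 1:numel(xs)
        ia = A(ys(a):ys(a)+N-1, xs(b):xs(b)+N-1);
        ib = B(ys(a):ys(a)+N-1, xs(b):xs(b)+N-1);
        R  = fftshift(real(ifft2(fft2(ia) .* conj(fft2(ib)))));   % eq. (3) via FFTs
        [Rmax, k] = max(R(:));                 % highest correlation peak
        [pr, pc]  = ind2sub(size(R), k);
        X(a,b) = xs(b) + N/2;   Y(a,b) = ys(a) + N/2;
        U(a,b) = (cen - pc)*scale/dt;          % eq. (1), centre minus peak (sign convention)
        V(a,b) = (cen - pr)*scale/dt;
      end
    end
    quiver(X, Y, U, V)

Subpixel interpolation (e.g. the gauss3pt sketch above), the bias correction and the peak-ratio criterion can then be added inside the loop.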

References

Bendat, J. S. & Piersol, A. G. 1986 Random Data: Analysis and Measurement Procedures, 2nd edn. Wiley-Interscience.

Crutchfield, S. 2004 The joy of convolution. http://www.jhu.edu/~signals/convolve/.

Raffel, M., Willert, C. & Kompenhans, J. 1998 Particle Image Velocimetry: A Practical Guide. Springer.