Digital image correlation for analyzing portable electronic products during drop impact tests


Citation for published version (APA): Scheijgrond, P. L. W. (2005). Digital image correlation for analyzing portable electronic products during drop impact tests. (DCT rapporten; Vol ). Eindhoven: Technische Universiteit Eindhoven. Published: 01/01/2005. Document version: Publisher's PDF, also known as Version of Record.

Digital Image Correlation for Analyzing Portable Electronic Products during Drop Impact Tests

P.L.W. Scheijgrond

DCT Traineeship report

Coaches: dr. D.X.Q. Shi, Ir. W.D. van Driel
Supervisors: Prof. dr. H. Nijmeijer, Prof. dr. G.Q. Zhang

Technische Universiteit Eindhoven, Department of Mechanical Engineering, Dynamics and Control Technology Group

Eindhoven, November 2005

This bundle consists of: report, manual, English paper, Chinese paper.

Digital Image Correlation for Analyzing Portable Electronic Products during Drop Impact Tests

P.L.W. Scheijgrond
Philips Mobile Display Systems, Shanghai, China
Eindhoven University of Technology, Department of Mechanical Engineering, Eindhoven, The Netherlands

Coaching: dr. D.X.Q. Shi (Philips Mobile Display Systems, Shanghai, China) and Ir. W.D. van Driel (Philips Semiconductors, Nijmegen, The Netherlands)

Supervisors: Prof. dr. H. Nijmeijer and Prof. dr. G.Q. Zhang (Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands)

Shanghai, 17th June 2005

Abstract

The reliability of portable electronic products is a major issue for manufacturers and customers. The quality of these products is mainly judged on drop test performance at product level. To improve product design, insight into the phenomena that take place during a drop is needed. Most tests done so far are limited to measuring local accelerations and strains. In this paper the feasibility of analyzing a guided free-fall drop of portable electronic products by optical inspection using digital image correlation is studied. This technology can examine the product at any arbitrary place on its surface and allows the product to make guided free-fall drops. For this study, a mobile phone is dropped on a pavement stone under different orientations and from different heights. The phone is prepared with a speckle pattern and the impact is recorded with a high-speed camera. A custom-made program based on digital image correlation is used to calculate the displacement fields during the impact. From these results deformations, strains, G-levels, velocities, energy losses, rotations and bending can be calculated. Both local and global phenomena have been measured for phones dropped from different heights under different orientations. The results were examined and new insights into drop test parameters were obtained.

Contents

1 Introduction
  1.1 Context
  1.2 Literature review
  1.3 Motivation
2 Experiments
  2.1 Preparation
  2.2 Used methods
    Phone drop
  2.3 Data processing
3 Discussion of methods
  3.1 Measurements
    Equipment
    Measurement environment
  3.2 Data processing
    Digital image correlation accuracy
    Post calculation accuracy
4 Results
  Horizontal phone
  Vertical phone
5 Conclusions
  Method evaluation
  Results evaluation
6 Recommendations

A Picture sequence
B Dotplots
C Accuracy budgets
D Verification data
E Bending visualization
F Matlab file

Chapter 1 Introduction

1.1 Context

Portable electronic products are nowadays fully integrated in daily life and most of them are used every day. Drops of portable electronic products happen daily and considerable damage can be done during a drop. Cracking of mobile displays and breakage of solder joints are examples that can cause the whole device to stop working. The reliability of portable electronic products is a major issue for manufacturers and customers. The quality of these products is mainly judged on drop test performance at product level. To improve the product design, insight into the dynamic phenomena that take place during a drop is needed and realistic drop tests need to be done.

1.2 Literature review

Nowadays a growing number of drop tests are performed, but test methods and results vary widely. Tests can be divided into tests on product level, in which the whole product is dropped, and tests on board level, in which a part of the product is dropped. The reliability of a product is often tested by means of tumble tests. In these tests the portable device is placed in a box that turns around, lifts the device to a certain height and drops it on the other side of the box, after which the procedure is repeated. During a tumble test the device lands in an arbitrary position on the ground, and after a user-defined number of drops the product is inspected for failures. These tests provide statistical information about the number of drops a device can withstand, but no information about the phenomena that take place during a drop. On board level many guided drop tests are done [6],[7]. In these tests information about accelerations and strains is acquired using strain gages and accelerometers. The strain gages are placed on the board and the

accelerometers next to the board, according to the JEDEC standard [1]. On product level guided drop tests have been done with a cellular phone [2]. For that test a high-speed camera was used to get an indication of the overall impact behavior and an accelerometer was used to measure the local response during impact. Impacts on PCBs during free fall under different impact orientations have been studied on both product level and board level [3], which shows large variations in accelerations and strains for different impact orientations. All these tests [1], [2], [3], [6], [7] are limited to measuring the acceleration or strain of a small area on the product. In materials technology displacements and strains are often studied using digital image correlation [4], [5]. This technology correlates a pair of digital speckle patterns obtained at two different moments in time and searches for the location of a point within the speckle pattern with the best match in gray-level distribution of a defined subset, by maximizing the correlation coefficient. The location where the match is found indicates the displacement of a pixel after the step in time. This technology provides a wide range of measurement sensitivity and resolution for macro- and microscale displacement measurements and is non-contact.

1.3 Motivation

Drop tests are an important tool to examine the impact of portable electronic products. The results provide insight into phenomena occurring during the impact, which are the basis for:

- evaluation of finite element simulations
- input for finite element simulations
- new design rules

Tests conducted up to now provide information about local phenomena and were mainly guided drops. More insight into the phenomena that take place during a drop would be achieved if local and global movements of the phone could be captured in one drop and compared.
This study explores the use of digital image correlation, in combination with a high-speed camera, to inspect the drop impact of portable electronic products during a guided free fall. This non-contact technology allows the use of guided free falls, which are more realistic than guided drops, and can provide information about the position of every arbitrary point on the surface of a portable electronic product. Based on the calculated positions of different points on the phone, both global and local movements can be examined and different parameters, such as velocity, acceleration, strain, impact time, rotation, bending and energy loss, can be calculated. In this study a guided free fall is

conducted, because this gives the possibility to study different impact orientations, which will give varying results [3]. The results are analyzed and new insights into drop testing are provided. To limit the number of test variations, a cellular phone is examined during vertical and horizontal impact from 1.5 and 2 meters, on one drop test facility, using one type of camera. Which test set-up is used and how the tests are done is described in chapter 2, and the results are presented and discussed in chapter 3. Conclusions about the test method and the acquired results are given in chapter 5 and recommendations for further tests are given in chapter 6.


Chapter 2 Experiments

2.1 Preparation

For digital image correlation the portable electronic product is prepared with a random speckle pattern on its surface, in order to obtain a random gray value distribution in the recorded images. In this test a Nokia 3568i is prepared with a speckle, which must provide sharp contrasts in gray levels between adjacent pixels. The drop orientations are pre-determined and the side that will face the camera is prepared with a speckle. In this test one camera is used, so only 2-dimensional movements are captured, which limits the test to impacts that stay in one plane. During a free fall a phone will always have a 3-dimensional movement, but when the phone is dropped in horizontal or vertical orientation the movement is mainly in one plane and a 2-dimensional movement is assumed. In this test the left side of the phone is prepared with a speckle, because this is the side of the phone that is parallel to the plane in which the 2-dimensional movement takes place. To produce the speckle, the left side of the phone was painted white with a mat white spray can. Mat spray produces a smooth white surface that doesn't reflect light. The speckle needs to have dots about the same size as one pixel in the recording. Dots that are too big will give different pixels in the same area the same gray level, so no contrast can be seen. Dots that are too small will give too little contrast, because each pixel will average the gray levels of dark and light dots. An ordinary spray can produces dots that are too small for this particular test, but the dot size of a ballpoint pen gives a high-contrast speckle, so the speckle on the phone is prepared by placing random dots by hand. Examination of this speckle showed fluctuating gray level intensities of adjacent pixels, which agrees with the definition of a good speckle.
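The adjacent-pixel contrast criterion above can be quantified numerically. The following is an illustrative numpy sketch (not the program used in this report; all names are hypothetical) that scores a speckle image by the mean absolute gray-level difference between neighboring pixels, and compares an ideal pixel-sized dot pattern with a nearly uniform surface:

```python
import numpy as np

def adjacent_pixel_contrast(image):
    """Mean absolute gray-level difference between horizontally and
    vertically adjacent pixels; higher values indicate a sharper speckle."""
    img = image.astype(float)
    dx = np.abs(np.diff(img, axis=1))  # horizontal neighbor differences
    dy = np.abs(np.diff(img, axis=0))  # vertical neighbor differences
    return (dx.mean() + dy.mean()) / 2.0

rng = np.random.default_rng(0)
# Checkerboard: dots exactly one pixel in size, maximal contrast.
checker = 255 * (np.indices((32, 32)).sum(axis=0) % 2)
# Nearly uniform surface: dots much smaller than a pixel average out.
flat = np.full((32, 32), 128) + rng.integers(-2, 3, (32, 32))
```

On the checkerboard every neighbor pair differs by the full gray range, while the uniform surface scores close to zero, matching the report's observation that a good speckle shows fluctuating intensities between adjacent pixels.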
The drop test facility used is the NDT-2000, made by Herstad+Piper A/S, a guided free-fall drop test facility that can control different impact orientations of the phone, see figure 2.1. For safety reasons a transparent cabinet is mounted around the device. To control the orientation, the phone is attached at a certain angle to suction cups, which hold the phone by vacuum, see the enlargement in figure 2.1. When the phone is placed at the desired angle the user can choose the drop height, 1.5 meter or 2 meter, and the phone is dropped along a slider to 40 centimeters above the ground. At this height the vacuum is released and the phone makes an oriented free-fall drop onto a concrete tile.

Figure 2.1: NDT-2000 test set-up

A high-speed camera, the Photron FASTCAM-Ultima 512, is used to record the impact. This camera has a resolution of 512 x 512 pixels and can be set to different frame rates, from 60 fps upwards, and different shutter speeds, from 16.7 ms down to 3.7 µs. At frame rates higher than 2000 fps the maximum feasible resolution decreases linearly with the frame rate, down to a resolution of 512 x 32 at the highest frame rate. The camera head is fixed on a tripod outside the cabinet and is aimed at the position of impact. The user can choose to film the whole phone or zoom in on a specific part of it by placing the camera head further from or closer to the cabinet. The luminance can strongly influence the image quality. In this test two light sources, the LG-III Cold light source and the Fostec LLC light source, in combination with two optical fibres, are used. The phone is placed on the spot where the impact will take place and the position and intensity of the two lights are adjusted until the speckle on the phone has a uniform light intensity.

2.2 Used methods

Phone drop

By preparing the speckle it is defined which side of the phone will be inspected during impact, which limits the way the phone can be clamped. Both horizontal orientation, in which the phone hits the ground with the front cover, and vertical orientation, in which the phone hits the ground with its bottom, are analyzed. To perform a drop test the phone is clamped on the horizontal or vertical vacuum cup, and the user must hold one hand on the button that determines the drop height and one hand on the button that triggers the high-speed camera. After the drop tester is triggered with the desired drop height, 1.5 meter or 2.0 meter, the camera is triggered and the recording is made. On inspecting the recording it can turn out that the triggering was not done properly, or that some adjustments to the test set-up have to be made, after which a new test is conducted. This can be an iterative procedure, in which many parameters, like placement of the camera, focus, shutter time, placement and intensity of the lights, and impact orientation, have to be adjusted. There is a trade-off between frame rate and maximum visible field, because the FASTCAM Ultima-512 works at lower resolutions at higher frame rates. Recordings at high frame rate are therefore often prepared by starting with recordings at low frame rate. For the shutter speed there is a trade-off between motion blur and the amount of information acquired per frame: at slower shutter speeds the picture can easily be blurred, because of the high velocity of the phone, but the more light is captured in one frame, the more accurate the acquired gray level distribution will be. An example of a good recording is given in appendix A. When a good recording is made and the impact position is known, a ruler is placed on the impact position to measure the scale of one pixel.
The longer the recorded ruler, the more accurate the estimate of the pixel scale. In the measurements made with the current test set-up, scales vary from 238 µm/pixel to 173 µm/pixel. The whole procedure of making a recording is summarized in figure 2.2.

Figure 2.2: procedure for making recordings (choose impact orientation, prepare speckle, place camera, choose camera parameters, place optical fibres, clamp phone, conduct drop, check recording, record ruler)

2.3 Data processing

Digital image correlation

The principle of the digital image correlation technique is to compare two digital images made at two different moments in time. The digital images consist of a rectangular array of 8-bit pixels, which provides gray levels varying from 0 to 255. The gray level of a pixel represents the average light intensity at that particular location of the speckle. The initial position of a selected point in the first image is given by (x_r, y_r). The position of the same point in the second image is given by (x_t, y_t). Around (x_r, y_r) a subset A_r of m x m pixels is taken from the first image and mapped onto an m x m subset A_t in the second image, centered at (x_t, y_t). The correlation coefficient C, which measures the match between subsets A_r and A_t, is given by formula

C(u, v) = [ Σ_{i=1..m} Σ_{j=1..m} (f(x_i, y_j) − f̄)(g(x'_i, y'_j) − ḡ) ] / sqrt( Σ_{i=1..m} Σ_{j=1..m} (f(x_i, y_j) − f̄)² · Σ_{i=1..m} Σ_{j=1..m} (g(x'_i, y'_j) − ḡ)² )    (2.1)

in which:

u = displacement in x-direction
v = displacement in y-direction
f(x_i, y_j) = the gray level at point (x_i, y_j) in the subset A_r
g(x'_i, y'_j) = the gray level at point (x'_i, y'_j) in the subset A_t
f̄ = mean gray level of the points (x_i, y_j) in the subset A_r
ḡ = mean gray level of the points (x'_i, y'_j) in the subset A_t
m = length of the subset

Figure 2.3: Coarse search (reference subset in frame 1; search region and target subset in frame 2)

A coarse search is used to correlate the two images. To limit the amount of calculation, a search region, in which the best match is expected to be found, is defined, see figure 2.3. Within this region the correlation coefficient is calculated for every discrete position. This search region must be taken larger than the number of pixels a point can move between two recordings. In the case of a drop test the point with the highest velocity determines

the search region in the vertical direction, which is case dependent and may have to be adjusted when the results show that the highest velocity is higher than assumed. In order to achieve subpixel accuracy, a bicubic spline interpolation is employed to obtain gray level values between the pixels, so the subset can be mapped to positions within one-pixel accuracy. This method uses four basis functions and interpolates the results in a non-linear way, so the first derivative of the surface is continuous. If the subset area around the best matching position of the coarse search is interpolated, the reference subset A_r can be mapped onto new subsets at positions of subpixel accuracy, and a fine search can be conducted to find the highest correlation coefficient among those positions. High interpolation levels provide many extra positions, which take a lot of calculation time to determine and examine. Three ways to perform a fine search have been examined in this study. The first one calculates the interpolated gray values of every possible position of the whole subset, checks the correlation coefficient of every interpolated position and chooses the one with the highest correlation coefficient as the new position. This is a safe method, because it is certain that the best match is found, but it takes a lot of calculation time. The fine search can be sped up by a so-called hill climbing algorithm. This algorithm searches along one row of interpolated positions for the value with the highest correlation coefficient. From this position a search along the corresponding column is applied and the position with the highest correlation coefficient is taken as the new position, after which again a search along the row is applied. This is repeated until the same position is found twice, and this position is taken as the best match. The examined positions are at most half a pixel away from the origin.
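The correlation coefficient of formula 2.1 and the integer-pixel coarse search can be sketched as follows. The report's own implementation is in Matlab (appendix F); this is an illustrative numpy version under assumed conventions (square m x m subsets, a square search region), with hypothetical names:

```python
import numpy as np

def zncc(ref, tgt):
    """Zero-normalized cross-correlation between two equally sized
    subsets (formula 2.1); 1.0 means a perfect match."""
    f = ref - ref.mean()
    g = tgt - tgt.mean()
    return float((f * g).sum() / np.sqrt((f ** 2).sum() * (g ** 2).sum()))

def coarse_search(ref_subset, frame2, center, search):
    """Slide the reference subset over a (2*search+1)^2 region of frame2
    around `center` and return the integer shift (u, v) maximizing ZNCC."""
    m = ref_subset.shape[0]
    h = m // 2
    cy, cx = center
    best, best_uv = -2.0, (0, 0)
    for v in range(-search, search + 1):        # vertical candidate shift
        for u in range(-search, search + 1):    # horizontal candidate shift
            y, x = cy + v, cx + u
            tgt = frame2[y - h:y + h + 1, x - h:x + h + 1]
            c = zncc(ref_subset, tgt)
            if c > best:
                best, best_uv = c, (u, v)
    return best_uv, best

# Synthetic check: frame2 is frame1 shifted 3 px right and 2 px down,
# so the coarse search should recover the shift (u, v) = (3, 2) exactly.
rng = np.random.default_rng(1)
frame1 = rng.random((40, 40))
frame2 = np.roll(frame1, shift=(2, 3), axis=(0, 1))
ref = frame1[16:25, 16:25]                      # 9 x 9 subset around (20, 20)
shift_uv, peak = coarse_search(ref, frame2, (20, 20), search=5)
```

The search region argument plays the role described in the text: it must exceed the largest per-frame motion, here chosen as 5 pixels for the synthetic example.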
This is based on the assumption that if the correlation peak were further away than half a pixel, the coarse search would have found the neighboring pixel as best match. The second fine search method is based on the assumption that the peak in correlation coefficients is rotation-symmetric. This algorithm makes one interpolation, examines the 8 interpolated positions around the original position, compares their correlation coefficients with that of the origin, and takes the position with the highest correlation coefficient as the new position. Around this new position a second interpolation is made, using the values found in the previous interpolation. During this second interpolation the distance between the origin and the interpolated positions is half the distance used in the previous interpolation. Again 9 positions are examined and the position with the highest correlation coefficient is taken as the new position. This is repeated until the desired number of interpolations is reached. In figure 2.4 a possible fine search is visualized: the first interpolation finds the lower-right position as best match, which also appears to be the best match during the second interpolation. At the third interpolation the lower-left position is taken as best match and the fourth finds the middle-right position.

Figure 2.4: Second fine search method

A big advantage of this algorithm is that the matrices used are kept small and only a limited number of correlation coefficients is calculated. After the position with the highest correlation coefficient is found, the subset A_t at this position is saved and used as reference subset A_r for the correlation with the next image. This is done for every pixel, after which the image that was used as target image becomes the reference image to correlate with the next image in the sequence made during the recording of the drop. The update of A_r reduces the effect of intensity fluctuations and the effect of errors in position.

Smoothing

Digital image correlation calculates the positions of dots at discrete positions, but real positions vary in a continuous manner within the calculated area. The discrete nature of the positions can produce errors in velocity, acceleration or strain. Therefore the positions of the dots are smoothed by a smoothing algorithm. In this study an algorithm has been used that makes several least-squares interpolations and estimates the positions by taking the average of these interpolations. A least-squares interpolation is made through the positions of the even-numbered points and estimates

the positions of the odd-numbered points, and the same is done through the odd-numbered points to estimate the positions of the even-numbered points. The interpolations are made through the rows and columns of the matrix with the x-positions of the dots and through the rows and columns of the matrix with the y-positions of the dots. For every point the average of the real value and the estimated value, based on the column fit and the row fit, is calculated. As final smoothed position the average of the row and column fits is taken. Over the time scale the positions of each point are smoothed by a Butterworth filter. This filter is used as a low-pass filter to eliminate the high-frequency fluctuations. The order and the cut-off frequency of the Butterworth filter are case dependent and have to be determined separately for every experiment. For some calculations in this study the average position over a user-defined region is taken to reduce the effect of errors. This is done by averaging the positions over a region of 6 x 6 pixels, which equals a region of about 1 x 1 mm.

Post calculations

The position of a pixel is converted to millimeters by multiplying the position in pixels by the scale that is determined after the recording, as described earlier. The displacements u and v, in the x- and y-directions respectively, are calculated by the following formulas:

u = x_(frame i+1) − x_(frame i)    (2.2)
v = y_(frame i+1) − y_(frame i)    (2.3)

The velocities are calculated via formulas 2.4 and 2.5:

v_x = u · framerate    (2.4)
v_y = v · framerate    (2.5)

The accelerations are calculated via formulas 2.6 and 2.7:

a_x = (v_x,frame i+1 − v_x,frame i) · framerate    (2.6)
a_y = (v_y,frame i+1 − v_y,frame i) · framerate    (2.7)

If the accelerations need to be expressed in G-levels, the results of formulas 2.6 and 2.7 are divided by 9.81 m/s². To give an estimation of the accelerations between the discrete time steps a spline interpolation is applied.
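Formulas 2.2-2.7 amount to scaled finite differences of the smoothed positions. A minimal numpy sketch for one coordinate (the frame rate and the constant-velocity example data are invented for illustration; the report's actual processing is the Matlab program of appendix F):

```python
import numpy as np

G = 9.81            # m/s^2, used to express accelerations as G-levels
framerate = 8000.0  # frames per second (assumed recording setting)

def post_calculate(pos_mm):
    """Finite-difference velocity and acceleration (formulas 2.2-2.7)
    from a 1-D array of smoothed positions in millimetres."""
    pos_m = np.asarray(pos_mm) / 1000.0   # mm -> m
    disp = np.diff(pos_m)                 # displacement per frame (2.2)
    vel = disp * framerate                # m/s (2.4)
    acc = np.diff(vel) * framerate        # m/s^2 (2.6)
    return vel, acc, acc / G              # G-levels

# A body falling at a constant 5 m/s covers 5/8000 m = 0.625 mm per frame.
pos = np.array([0.0, -0.625, -1.25, -1.875])
vel, acc, g_levels = post_calculate(pos)
```

For this constant-velocity input the recovered velocity is −5 m/s at every step and the acceleration is zero, which is a useful sanity check before applying the same differences to real drop recordings.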

Strains can be calculated from the displacements of two adjacent points and the distance between those points via the following formulas:

ε_x = (u_2 − u_1) / Δx    (2.8)
ε_y = (v_2 − v_1) / Δy    (2.9)
γ_xy = (u_2 − u_1) / Δy + (v_2 − v_1) / Δx    (2.10)

in which:

u_1 = displacement of the first pixel in x-direction
u_2 = displacement of the second pixel in x-direction
v_1 = displacement of the first pixel in y-direction
v_2 = displacement of the second pixel in y-direction
Δx = distance between points 1 and 2 in x-direction
Δy = distance between points 1 and 2 in y-direction

A phone that stays in one plane has 3 degrees of freedom: two translations and one rotation. The translations follow directly from the results of the digital image correlation algorithm. For the rotation a straight line is fitted through the (x, y) positions of data points that were on one horizontal line in the first image. The outcome is the slope and constant of every row in the grid of dots. The slope of the phone is taken as the average of the slopes of all rows, after which the rotation angle of the phone, α, is calculated by formula 2.11:

α = arctan(slope_phone) · 360 / (2π)    (2.11)

Bending is best visualized when the rotations and translations are eliminated from the movements of the dots. Therefore a new straight line, whose slope is the calculated averaged slope, is fitted through each row, which gives new constants for every line. The new line fit is subtracted from the position of every dot, which gives the deviation in the vertical direction from the fitted line. A spline can be fitted through the deviations, which visualizes the bending profile of every row of the grid. Finally the elongation of a row is calculated by summing the distances between all the points of the spline and dividing this sum by the original length of the row. The whole data processing program is summarized in the diagram of figure 2.5.

Figure 2.5: Programming steps (define reference pixel, define subset, define search region, coarse search, fine search, go to next pixel, go to next picture, smooth positions; then rotation, bending, displacements, strains, velocities, accelerations; interpolate accelerations)
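The strain formulas 2.8-2.10 and the rotation formula 2.11 can be sketched directly in code. This is an illustrative numpy version (the names and the 2.8-degree test row are invented; the 2.8-degree value echoes the small verification rotation discussed later in the report):

```python
import numpy as np

def strains(u1, u2, v1, v2, dx, dy):
    """Engineering strains between two adjacent points (formulas 2.8-2.10):
    u, v are displacements; dx, dy the initial separations of the points."""
    eps_x = (u2 - u1) / dx
    eps_y = (v2 - v1) / dy
    gamma_xy = (u2 - u1) / dy + (v2 - v1) / dx
    return eps_x, eps_y, gamma_xy

def rotation_angle(x, y):
    """Rigid-body rotation in degrees (formula 2.11): fit a straight line
    through the (x, y) positions of points that were horizontal in the
    first image and convert the slope to an angle."""
    slope = np.polyfit(x, y, 1)[0]
    return np.arctan(slope) * 360.0 / (2.0 * np.pi)

# A row of points rotated rigidly by 2.8 degrees should be recovered.
x = np.arange(10.0)
y = np.tan(np.radians(2.8)) * x + 1.0
angle = rotation_angle(x, y)
eps_x, eps_y, gamma_xy = strains(0.0, 0.02, 0.0, -0.01, dx=2.0, dy=1.0)
```

In the report the phone's slope is the average over all rows of the dot grid; the sketch fits a single row, which corresponds to one term of that average.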

Chapter 3 Discussion of methods

3.1 Measurements

Equipment

Three of the six degrees of freedom of the phone are captured with the high-speed camera: the x-translation, the y-translation and the rotation around the z-axis. During the drop the phone should make no movements in the other degrees of freedom. Vertical and horizontal drops conducted with the NDT stay relatively accurately in one plane, and it is assumed that the phone does not move out of that plane. This assumption leads to errors when the phone does move out of the plane, but because every drop is unique no estimate of the error due to this effect can be made. Recordings of the phone have been made under different test conditions. Different heights, orientations, zoom levels and frame rates give a large number of possible test combinations. For examination of global movements, the whole phone is recorded at a low frame rate, typically 8000 fps. This kind of recording provides information about the difference in displacements at different points on the phone. For more accurate results the camera is placed against the cabinet to get a higher zoom level, which provides a smaller pixel scale. A higher frame rate can be applied to get more data points during the impact, but this comes with a smaller image, so only a small part of the movement can be examined.

Measurement environment

For the proper functioning of the digital image correlation algorithm a good speckle and a good recording of the speckle are crucial. Fluctuations in the lighting cause noise in the recording of the gray levels. The gray values fluctuate over time, so that the subset A_t is never identical to the subset A_r, even when the displacement is mapped perfectly. The effect of noise was measured by recording a non-moving phone. The fluctuation in

gray levels can be seen in figure 3.1. These fluctuations can be caused by fluctuations in other light sources, like TL-lights, or by fluctuations in the intensity of the cold light sources.

Figure 3.1: fluctuation in gray level of one pixel (gray level versus time in ms)

3.2 Data processing

Digital image correlation accuracy

All the results depend on the accuracy of the digital image correlation algorithm. To verify the algorithm, a number of pictures with a known translation or rotation were made and used as a picture sequence. These translations and rotations were made by placing a fine speckle under a microscope and rotating or translating this speckle. The digital image correlation algorithm calculated the displacement of a grid of 6 x 6 pixels and the calculated displacements were compared with the actual displacements. Because not every point made the same displacement, the average calculated displacement minus the real displacement is taken as the error. The standard deviation of the calculated displacements is also determined and shows how consistent the algorithm is in the calculation of the displacements. The results are given in appendix D. Table D.1 shows that the position error initially reduces when more interpolations

are applied, but at interpolation levels higher than 4 the error does not reduce anymore. The standard deviation of the found positions in table D.2 initially gets bigger at higher interpolation levels, because at the first interpolation levels all the displacements are the same. After that the displacements start to fluctuate, which is probably due to badly interpolated gray level values. At every interpolation level it can be seen that the standard deviation grows as the algorithm gets further into the picture sequence. This is because after every correlation the target subset A_t becomes the reference subset A_r. For most calculations made in this report an interpolation level of 5 is applied, which provides an accuracy of 1/32 pixel.

Figure 3.2: rotation of 2.8 degrees (left) and 23.9 degrees (right)

The verification of the rotation shows that the algorithm is capable of detecting small rotations. The bigger the rotation, the more difficult it becomes to find the displacement of a pixel, because non-rotated subsets A_r are mapped onto the rotated subset. The error in positions is best seen by plotting the positions of the pixels when no smoothing algorithm is applied. In figure 3.2 the grid that is rotated 2.8 degrees is still quite regular, but at 23.9 degrees some points fall out of the grid. Large subsets also have more difficulty capturing the rotation than small subsets, because the effect of the rotation is bigger at the edges of the subset than in the center. During a picture sequence the position of the examined pixel can get too close to the border of the image. This can result in a search area that is partly outside the image, which means no complete subset A_t can be found. In order to keep the subset complete and the results accurate, only positions that have their complete subset inside the image are examined here. This results in a limited number of possible positions, which limits the movement range that can be examined.

25 The different fine search methods, that is discussed in paragraph shows to be faster at the different sequences that are examined, but the results are less accurate. Both faster algorithms have a lower average correlation coefficient than the slow method. For the hill climbing method this is most probably due to local optima that are found and assumed to be global optima. The other method becomes less less accurate at high interpolation levels, what can cause lower correlation coefficients. Another problem with this algorithm is that the shape of the correlation peak should be almost rotation symmetric, because if the peak is too far away from the starting point the algorithm will have much difficulties to get that far. A third limitation of this search method is that it will fail to find the global optimum once a local optimum is found. In most of the calculations in this study the slow method is used, to avoid mismatches due to the algorithm Post calculation accuracy The position is the source parameter for all the other parameters during the post calculation. This makes the accuracy of the position very important. It is difficult to give an estimation about the accuracy of the position, because the algorithm depends strongly on the recorded and interpolated gray values, which can contain all kind of errors. Non regarding these errors an accuracy budget can still give insight in the effect on accuracy of the different variables on the results. The accuracy budget of the position can be found with the following formula: position = n scale + scale 2 int (3.1) The accuracy budget of the velocity is given by formula 3.2. velocity = framerate scale + scale 2 int (3.2) The accuracy budget of the acceleration is given by formula 3.3. 
Δacceleration = 2 · framerate² · (Δscale + scale / 2^int)   (3.3)

in which

scale = the scale of a pixel
Δscale = the inaccuracy in the scale
n = the number of the frame
int = the integration level

From these budgets it can be seen that the pixel scale is a very important variable. The smaller the scale, the higher the accuracy in position, velocity and acceleration.
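To make the three budgets concrete, they can be evaluated together in a short script. This is a sketch: the function name and unit conversions are my own, applying the formulas as written above with the 238 µm scale used in the tests:

```python
G = 9.81  # gravitational acceleration [m/s^2], to express accelerations in [G]

def accuracy_budgets(scale, dscale, int_level, framerate, n):
    """Evaluate the accuracy budgets (3.1)-(3.3).

    scale     -- pixel scale [um/pixel]
    dscale    -- inaccuracy in the scale [um/pixel]
    int_level -- integration level (sub-pixel step is scale / 2**int_level)
    framerate -- recording frame rate [frames/s]
    n         -- frame number in the sequence
    """
    base = dscale + scale / 2 ** int_level        # per-frame error [um]
    d_pos = n * base                              # position budget [um]
    d_vel = framerate * base * 1e-6               # velocity budget [m/s]
    d_acc = 2 * framerate ** 2 * base * 1e-6 / G  # acceleration budget [G]
    return d_pos, d_vel, d_acc
```

With scale = 238 µm, Δscale = 2.2 µm and integration level 5, the per-frame term is about 9.6 µm. Because the acceleration budget scales with the square of the frame rate, doubling the frame rate quadruples it, which is why raising the frame rate eventually makes the calculated accelerations useless.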

A smaller scale can be achieved by zooming in on the phone or by recording the image at a higher resolution, of which the latter is preferred. A higher resolution makes it possible to examine the whole movement of the phone at a more accurate scale. Using a higher zoom level limits the recording to a small part of the phone and has a smaller depth of focus, which can easily result in recordings that are not sharp. During the tests the pixel scales varied from 238 ± 2.2 µm to 173 ± 0.45 µm.

Because of the discrete nature of the positions, the smoothing algorithm and the Butterworth smoothing improve the results, and a higher accuracy than the accuracy budgets indicate can be achieved. The effect of the growing standard deviation in a picture sequence, as was mentioned earlier and seen in appendix D, can be explained by the n in the accuracy budget of the position. These accumulating errors do not influence the velocity and acceleration, because for those parameters the difference between positions is taken. High integration levels decrease the error, but it should be noted that high integration levels can easily introduce inaccurate gray level distributions in the subset, so in practice the integration level is limited, as was also seen earlier.

The impact of a phone typically takes 0.5 [ms], which means that at a framerate of 8000 fps only 4 photos of the impact would be taken, which is barely enough to capture the shape of the acceleration peak. So a higher framerate is needed to obtain more recordings during the impact, but this reduces the accuracy of the velocity and especially that of the acceleration. This shows that raising the framerate does not always improve the overall measurement, because the results become very inaccurate at high framerates. For the camera used, the calculated acceleration level at high framerates is too inaccurate, and the scale should be improved first, before the framerate is increased.
Typical results for the accuracy budgets can be found in appendix C. This table shows that at high framerates the accuracy budget of the acceleration is more than 1000 [G], which is unacceptable with results around 1500 [G]. It should be noted that by using smoothing methods as described in paragraph 2.2.2, the accuracy is increased considerably.

The accuracy budgets of the strains are given by formulas 3.4, 3.5 and 3.6:

Δɛ_x = ( (u1 + u2)/(x2 − x1) + 2n(u2 − u1)/(x2 − x1)² ) · (Δscale + scale / 2^int)   (3.4)

Δɛ_y = ( (v1 + v2)/(y2 − y1) + 2n(v2 − v1)/(y2 − y1)² ) · (Δscale + scale / 2^int)   (3.5)

Δγ_xy = ( (u1 + u2)/(y2 − y1) + 2n(u2 − u1)/(y2 − y1)² + (v1 + v2)/(x2 − x1) + 2n(v2 − v1)/(x2 − x1)² ) · (Δscale + scale / 2^int)   (3.6)

in which

scale = the scale of a pixel
Δscale = the inaccuracy in the scale
n = the number of the frame
int = the integration level
x_i = x-position of pixel i
y_i = y-position of pixel i
u_i = x-displacement of pixel i
v_i = y-displacement of pixel i

Because the accuracy budget depends on many parameters it is difficult to calculate a single value for the accuracy, but the budgets tell how to interpret the results of the strains. The following remarks can be made about the accuracy of the strains:

When the points are close to each other, the error increases.
When the displacements are high, the error increases.
The further the sequence progresses, the more inaccurate the results become.

The first item means that the calculation of local strains is less accurate than the calculation of global strains. The second item means that the error during free fall is higher than during the impact. For the other parameters it is very difficult to give an accuracy budget, but the results should always be interpreted carefully.
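The three remarks above follow directly from formula 3.4. A small sketch, implementing the budget for ɛ_x literally as written (the function name and example numbers are illustrative only):

```python
def strain_budget_x(u1, u2, x1, x2, n, scale, dscale, int_level):
    """Accuracy budget (3.4) for epsilon_x.

    u1, u2 -- x-displacements of the two pixels [pixels]
    x1, x2 -- x-positions of the two pixels [pixels]
    n      -- frame number; scale and dscale in consistent units
    """
    base = dscale + scale / 2 ** int_level
    return ((u1 + u2) / (x2 - x1) + 2 * n * (u2 - u1) / (x2 - x1) ** 2) * base

# The three remarks, checked numerically (all arguments illustrative):
wide  = strain_budget_x(1, 2, 0, 50, 5, 238, 2.2, 5)
close = strain_budget_x(1, 2, 0, 10, 5, 238, 2.2, 5)   # closer points -> larger error
large = strain_budget_x(2, 4, 0, 50, 5, 238, 2.2, 5)   # larger displacements -> larger error
late  = strain_budget_x(1, 2, 0, 50, 20, 238, 2.2, 5)  # later frame -> larger error
```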

Chapter 4

Results

4.1 Horizontal phone

The picture sequence in appendix A shows the impact of a phone that is dropped from 2 meters. The recording is made at 8000 fps at a resolution of 128 x 512 pixels and with a scale of 238 [µm/pixel], of which the uncertainty is 2.2 [µm/pixel]. In this recording a grid over the whole phone can be examined, to study the overall movement, and local areas can be examined, so that differences in parameters at different points on the phone can be visualized. A limitation of this recording is that the phone consists of different parts: a back cover, the main body and a front cover. Because the covers come loose from the main body they should not be included in the examined area, which limits the size of the examined grid.

First the movement of a group of pixels, spread over an area of 16 x 330 pixels on the main body of the phone, is calculated. The coordinates of those pixels during the impact have been plotted in appendix B. The axes of the pictures are adjusted to show the deformations more sharply, and it is easy to see where the phone bends most. Because the phone arrives slightly rotated, the grid is taken at an angle with the phone, which explains why the left dots have a lower minimum position than the right dots.

To examine the difference in parameters over the phone, 3 different parts are examined and the displacements of 3 groups of 6 x 6 pixels on the left, middle and right of the main body of the phone are calculated. The vertical movement of these areas is the main and most interesting movement, and to keep an indication of the direction only the movement in the x-direction is discussed here. The different velocity profiles are plotted in figure 4.1. In the recording in appendix A it can be seen that the right side hits the ground first, which agrees with the velocity profiles in figure 4.1. The steep drop in velocity represents the impact, which takes place 0.6 [ms] later on the left side than on the right side.
The peak in the velocity just before the impact is caused by the rotation of the phone after the right side hits the ground.
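The velocity and acceleration profiles discussed here are derived from the tracked positions. A plausible sketch of that post-calculation, assuming simple forward differences (the report's own scheme may differ in detail):

```python
import numpy as np

def velocity_acceleration(pos_px, scale_m, framerate):
    """Differentiate a per-frame pixel position track into velocity [m/s]
    and acceleration [G] by forward differences.

    pos_px    -- position of the tracked pixel per frame [pixels]
    scale_m   -- pixel scale [m/pixel]
    framerate -- recording frame rate [frames/s]
    """
    x = np.asarray(pos_px, dtype=float) * scale_m  # position [m]
    v = np.diff(x) * framerate                     # velocity [m/s]
    a = np.diff(v) * framerate / 9.81              # acceleration [G]
    return v, a
```

A pixel moving one pixel per frame at the 238 µm/pixel scale and 8000 fps corresponds to roughly 1.9 m/s, which is the order of the impact velocities seen in the profiles.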

Figure 4.1: The average velocity of 25 pixels on the left, middle and right of the main body of the phone dropped from 2 meters

In figure 4.2 the acceleration profiles of the 3 spots are plotted. The highest peak in acceleration is 1400 [G] for the left spot, 1600 [G] for the middle spot and 1500 [G] for the right spot.

Another recording, of a phone that is dropped from 1.5 meters, also hits the ground with the right side first, but in a more rotated orientation. The velocity profiles of the left, middle and right of the phone are plotted in figure 4.3. Here it can also be seen that the translational velocity is transferred into rotational velocity, which explains the peaks in the velocity profiles of the left and middle points. The resulting acceleration profiles, plotted in figure 4.4, show large differences in G-levels. The acceleration peak of the left side of the phone differs considerably from the other two acceleration peaks, which shows the need to measure the accelerations at different locations. The right and middle sides show peaks of 1200 [G] and 1600 [G], respectively, but the left side shows a peak of 1900 [G], which is higher than the highest peak of the phone that is dropped from 2 meters.

The impact time can still be a point of discussion. According to the JEDEC standard [1] the impact time is the width of a half-sine-shaped acceleration peak. For the acceleration peaks in figures 4.2 and 4.4 it is difficult to determine the width of one peak, because different peaks interfere with each other.

Figure 4.2: The average acceleration of 25 pixels on the left, middle and right of the main body of the phone dropped from 2 meters

To estimate the acceleration peaks, the first peak is extrapolated; the width of the peak, based on this extrapolation, is 0.64 [ms] for the left, 0.54 [ms] for the middle and 0.57 [ms] for the right, for the phone dropped from 2 meters. For the phone dropped from 1.5 meters the impact times for the left, middle and right side are, respectively, 0.49, 0.48 and 0.47 [ms].

The average velocity profile of the phone dropped from 2 meters is given in figure 4.5. This profile gives information about the velocity before and after impact, from which an approximation of the energy loss can be calculated. This is done via formula 4.1, which for the case mentioned above results in an energy loss of 84 %, which is 1.22 [J].

U = ½ · m · (v_impact² − v_rebound²)   (4.1)

If the rotation and the translation of the phone are removed from the displacements, the bending of the phone can be visualized. This is done for the sequence in appendix A, and appendix E shows the bending of 6 lines on the phone. The scale on the vertical axis is not the same as the scale of the horizontal axis, so the bending is magnified.
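Formula 4.1 amounts to comparing the kinetic energy just before and just after impact. A small sketch (the mass and velocities below are illustrative numbers, not the measured ones):

```python
def energy_loss(mass, v_impact, v_rebound):
    """Energy lost during impact (formula 4.1) and the lost fraction.

    mass      -- mass of the phone [kg]
    v_impact  -- speed just before impact [m/s]
    v_rebound -- speed just after impact [m/s]
    """
    u = 0.5 * mass * (v_impact ** 2 - v_rebound ** 2)   # lost energy [J]
    fraction = 1.0 - (v_rebound / v_impact) ** 2        # lost fraction of kinetic energy
    return u, fraction

# Illustrative: a 0.15 kg phone hitting at 6.0 m/s and rebounding at 2.4 m/s
u, fraction = energy_loss(0.15, 6.0, 2.4)
```

Note that a rebound speed of 40 % of the impact speed already corresponds to an 84 % energy loss, the fraction found above for the 2 meter drop.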

Figure 4.3: The average velocity of 25 pixels on the left, middle and right of the main body of the phone dropped from 1.5 meters

The bending is calculated on the basis of the positions, and the fluctuations in the lines further on in the sequence clearly illustrate the effect of the frame number n on the position, as given in formula 3.1. Another consequence of these fluctuations is that the calculation of the elongation, as discussed at the end of paragraph 2.2.2, gives bad results, which cannot be used. For an analysis of the strains the examined area is preferably taken over a larger part of the phone, so that the dots can be taken further apart and the error is reduced, as discussed in the accuracy remarks above.

Based on the results presented in appendices A, B and E, the area at about 2/3 of the width of the grid, which is about 25 [mm] from the bottom of the phone, shows a lot of deformation, and the strain fields of this area are given at two different moments in time in figures 4.6 till 4.11. The results of the strains over time can fluctuate considerably. The maximum measured strain during the whole sequence is 0.06, but the majority of the strains are smaller than 0.01 and close to 0.005, except for ɛ_y, which represents the strains in the horizontal direction; these are all close to 0.001, which is explained by the fact that the phone drops in the vertical direction. This would mean that when only the plastic cover of the phone deforms, the positions of dots that were next to each other in one row should move quite far apart during a deformation.

Figure 4.4: The average acceleration of 25 pixels on the left, middle and right of the main body of the phone dropped from 1.5 meters

Figure 4.5: The average velocity of a grid of 16 x 330 pixels on the main body of the phone

For a strain of 0.06 and a plastic thickness of 1 mm, this would mean the difference between the dots can be 60 µm. For a dotplot that has a high deformation, the difference between some dots that appear to lie on a smooth line can be 50 µm, which is of the same order of magnitude as 60 µm.

Another way to verify the results is to estimate the force, divide the force by the length and depth of the plastic cover, which gives the average stress in the cover, and divide the average stress by the Young's modulus. This estimate is very rough, because the stress is assumed to be uniform, the dimensions of the plastic are roughly estimated and assumed to be uniform, and the mass distribution is not regarded. The mass is assumed to be the mass of the phone and the acceleration is assumed to be 1500 G, which would result in a force of 2.45 kN. This force is assumed to be applied uniformly over the whole cross-section, which is taken as 10 cm x 1 mm, which gives an average stress of 22.2 MN/m². With the Young's modulus of polycarbonate this would result in an average strain that is of the same order of magnitude as the higher values of the measured strains, at a high acceleration level. When the phone bends locally the strain can be higher. Concluding, it can be said that the measured strain levels appear to be reasonable.

Figure 4.6: ɛ_x during horizontal drop at first impact (0 ms)

Figure 4.7: ɛ_y during horizontal drop at first impact (0 ms)

Figure 4.8: γ_xy during horizontal drop at first impact (0 ms)
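The rough verification chain described above (force from F = m · a, stress from force over area, strain from stress over Young's modulus) can be written out in a few lines. All numbers here are illustrative assumptions, not the report's measured values:

```python
g = 9.81                # [m/s^2]
mass = 0.166            # [kg]    assumed phone mass (illustrative)
accel = 1500 * g        # [m/s^2] assumed peak acceleration of 1500 G
force = mass * accel    # [N]

area = 0.10 * 0.001     # [m^2]   cover cross-section, 10 cm x 1 mm
stress = force / area   # [N/m^2], assuming the stress is uniform

E = 2.4e9               # [N/m^2] typical Young's modulus of polycarbonate (assumed)
strain = stress / E     # dimensionless average strain
```

With these assumptions the average strain comes out in the order of 0.01, consistent with the observation that most measured strains stay below 0.01.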

Figure 4.9: ɛ_x during horizontal drop at 0.75 ms after impact

Figure 4.10: ɛ_y during horizontal drop at 0.75 ms after impact

Figure 4.11: γ_xy during horizontal drop at 0.75 ms after impact

4.2 Vertical phone

At a higher framerate a recording of 70 frames, with a scale of 219 ± 0.6 [µm], is made of a vertical phone that is dropped from 1.5 meters. Appendix C shows that this framerate still has an acceptable accuracy, and more data points during the impact are obtained. From the recordings it can be seen that the telephone first hits the ground with the corner of the front and the bottom of the phone, and after that with the corner of the back and the bottom. This causes accelerations in both the x- and y-direction, so the movement of the phone in both directions needs to be examined. For this, two areas

of 6 x 6 pixels are examined, one at the bottom of the phone and one in the middle. The acceleration graphs in both the x- and y-direction are given in figures 4.12 and 4.13. These plots show that the acceleration peak in the x-direction in the middle of the phone is the highest. The total acceleration at the examined areas can be calculated with formula 4.2; the resulting graphs are given in figure 4.15. This figure shows the magnitude of the absolute acceleration, which is 2700 [G] for the bottom and 2600 [G] for the middle part. Both peaks are considerably higher than the acceleration peaks in the horizontal drops, which shows the need to examine different impact orientations. For the acceleration peaks of the vertical drop it is also difficult to determine the width of the peak, because of interference of different peaks, and no conclusions are drawn about the impact time measured via the JEDEC standard [1].

a_total = √(a_x² + a_y²)   (4.2)

To obtain information about the overall movement of the phone the framerate has to be lowered to 8000 fps, so that a resolution of 128 x 512 pixels is achieved. In this recording a grid of 20 x 230 pixels on the main body of the phone has been analyzed. This analysis gives insight into the bending of the phone and shows that, as during the horizontal drop, most bending takes place about 2.5 [cm] away from the bottom. The overall velocity profile of the phone is given in figure 4.14, and via formula 4.1 the energy loss is calculated. During the vertical drop the loss in energy is 88 % of the original energy, which in this case is equal to 0.93 [J]. The strains again show an irregular pattern. The average of the strains is also higher than during the horizontal drop, so concluding it can be said that the measured impact during a vertical drop is much worse in terms of accelerations and strains than the measured impact during a horizontal drop.
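Formula 4.2 is simply the Euclidean magnitude of the two acceleration components; as a one-line sketch:

```python
import math

def total_acceleration(ax, ay):
    """Magnitude of the acceleration from its x- and y-components (formula 4.2)."""
    return math.hypot(ax, ay)
```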

Figure 4.12: acceleration profile in two directions on the middle of the vertical phone

Figure 4.13: acceleration profile in two directions on the bottom of the vertical phone

Figure 4.14: The average velocity of a grid of 20 x 230 pixels on the main body of the phone

Figure 4.15: absolute acceleration profile on the bottom and the middle part of the vertical phone


Chapter 5

Conclusions

5.1 Method evaluation

The results in this report give a promising indication of what is possible with digital image correlation for analyzing portable products during drop impact tests. A big advantage of this method is that it can analyze any arbitrary point on the phone and is not limited to local measurements. Another big advantage is that the measurement system is non-contact and can analyze guided free drops, which are realistic and allow the study of different impact positions. The recordings can be used to zoom in on different parts of the phone and compare velocity and acceleration profiles or strains, but they can also be used to analyze the overall movement of the phone. Another advantage is that new parameters, such as velocity, rotation and bending, can be examined. The recording itself is very useful for interpreting the results and indicating weak zones in the product.

To keep this method accurate, high quality equipment is needed, which can make the test set-up very expensive, especially the high speed camera, which should have both a high frame rate and a high resolution. The drop tester that is used provided good drops with quite good repeatability. The disadvantage of the free drops the tester provided is that the angle of the recorded plane with the phone can change, which changes the recorded speckle. The lighting needs to provide a stable luminance, so that the gray levels are not influenced by fluctuations in light intensity.

The digital image correlation method shows good results, but it is difficult to give an indication of the overall accuracy of those results. The accuracy of the other results strongly depends on the accuracy of the digital image correlation algorithm and the equipment used. The post calculations provide more results than most other test methods and will improve the understanding of the phenomena occurring during drop impact.
The results provide a good basis for the evaluation of finite element simulations, input for finite element simulations, and new design rules.

5.2 Results evaluation

The conducted tests give extensive insight into the phenomena that take place during drop impact, because many different parameters were obtained, which could be combined to give a good basis for conclusions. Two impact orientations have been examined: a horizontal drop and a vertical drop. The comparison of different places on the phone during a horizontal drop showed that the impact parameters at different positions can vary considerably. Depending on the orientation there can be differences from 1200 [G] to 1900 [G] at different places on the phone. Accelerations in a vertical drop from 1.5 meters can be as high as 2600 [G], which is higher than the accelerations measured during a horizontal drop.

Conclusions on strains are difficult to obtain because of the irregular strain patterns. The calculation of the bending points out which spots bend most and can indicate weak spots. Impact times are difficult to analyze because of the interference of different acceleration peaks. Results such as dotplots, rotations and bending increase the knowledge of the impact behavior, and the recording itself is very valuable for examining the exact movement of the phone. In this study all these results are combined, which provides good insight into the impact behavior of a mobile phone.

Chapter 6

Recommendations

The method used is a first step in a new way of examining drop tests. The equipment and methods used can still be improved, so that higher accuracy can be achieved and more information can be obtained. The key to higher accuracy is the use of high speed cameras that have a higher resolution or can zoom in on the phone, so that the pixel scale can be decreased. Of the two options a higher resolution is preferred, because it still allows the user to inspect the overall movement of the phone. The consequence of a higher resolution is that the dot size of the speckle used can get too big, and a new method of producing a speckle has to be investigated. Furthermore, more stable light conditions are needed, so that the fluctuations in gray levels can be reduced.

In this study one high speed camera is used, which limits the recordings to 2D movements, and so the number of impact orientations is limited. To examine all impact orientations, 3D movements should be captured. Depending on the equipment, the possibilities of two different ideas can be studied. The best way to study 3D movements is to use 2 synchronized cameras. Another idea is to project a fine grid on the speckle and to determine movements in the third direction by examining the change in the shape of the grid.

The method used was able to capture the movements of the outside of a phone, but during a drop every part will have its own acceleration, and ways to visualize the movement of the different parts of the phone have to be investigated. Also the possibility to drop other products or PCBs can be studied, depending on the requirements of the user. These different tests can require different drop test facilities, and the use of digital image correlation with other drop test facilities has not been examined. The use of a guided drop tester, for which different ideas have already been developed, will give an advantage in the accuracy of the digital image correlation algorithm.
Instead of a speckle, other patterns or objects can also be used for the visualization of displacements. For the analysis of the movements of a human body

or a car, reflecting stickers are often used. During the movement a flashlight is synchronized with one or more high speed cameras, and the reflectors light up in every photo that is taken. Another way to indicate the orientation is the use of line recognition software, which can indicate the shapes and positions of objects. A third way is the use of a regular pattern, which gives an advantage in interpolating the gray levels for achieving sub-pixel accuracy. All those methods have their advantages and disadvantages, which are worth studying.

Forces can be calculated with F = m · a, and the acceleration of every point can be measured. But it is not known how much mass acts on each point of the phone, and so no forces can be calculated. A known mass distribution of the phone would allow the program to calculate the impact force on every part of the phone and is therefore worth further investigation. The stress in the phone can be calculated with σ = ɛ · E, but the Young's modulus of the surface is not known. The Young's modulus can vary over the whole surface and will depend on the shape of the plastic, the way the rest of the phone is attached to the plastic, and the material surrounding the plastic. The different influences on the stiffness distribution have to be taken into account to calculate the Young's modulus and get insight into the stress.

During the inspection of a picture sequence many pixels and many pictures are examined, which results in a high number of correlation coefficients that need to be calculated. When a higher resolution and frame rate are used this number can get excessively big, and techniques to reduce the calculation time have to be implemented in the digital image correlation algorithm. A phenomenon that is not captured with the algorithm used is the change in shape and orientation of the subset.
The effect of this change in shape and orientation of the subset is reduced in the algorithm used by taking, for every picture, the best matching subset as the new reference subset. The capability to detect the change in shape and orientation could provide a higher accuracy in positions and give new insight into the occurring phenomena.

To get rid of the digital nature of the positions, different smoothing algorithms are available. In this study a simple smoothing algorithm is implemented, but more advanced ones exist. Different smoothing algorithms will provide different results, and the effect of the algorithms on the obtained measurements should be investigated in further studies.

Bibliography

[1] JEDEC Solid State Technology Association, Arlington. JESD22-B111: Board level drop test method of components for handheld electronic products, July 2003.

[2] J.G. Kim and Y.K. Park. Experimental verification of drop/impact simulation for a cellular phone. Society for Experimental Mechanics, 44.

[3] Y.C. Ong, V.P.W. Shim, T.C. Chai, and C.T. Lim. Comparison of mechanical response of PCBs subjected to product-level and board-level drop impact tests. EPTC Conference Proceedings, Singapore, December.

[4] H.L.J. Pang, D.X.Q. Shi, X.R. Zhang, and Q.J. Liu. Application of digital speckle correlation to micro-deformation measurement of a flip chip assembly. 53rd Electronic Components and Technology Conference, New Orleans, Louisiana, USA, May.

[5] D.X.Q. Shi, H.L.J. Pang, X.R. Zhang, Q.J. Liu, and M. Ying. In-situ micro-digital image speckle correlation technique for characterization of materials properties and verification of numerical models. IEEE Transactions on Components and Packaging Technologies, 27(4), December.

[6] T.Y. Tee, H.S. Ng, C.T. Lim, E. Pek, and Z.W. Zhong. Impact life prediction modeling of TFBGA packages under board level drop test. Microelectronics Reliability Journal, 44(7).

[7] T.Y. Tee, H.S. Ng, C.T. Lim, E. Pek, and Z.W. Zhong. Board level drop test and simulation of TFBGA packages for telecommunication applications. 53rd Electronic Components and Technology Conference, New Orleans, Louisiana, USA, May.


Appendix A

Picture sequence

Figure A.1: Phone under horizontal impact recorded at 8000 fps


Appendix B

Dotplots

Figure B.1: deformation sequence of phone during impact starting from ms till 1 ms, in steps of ms

Appendix C

Accuracy budgets

scale = 173 [µm], Δscale = 0.45 [µm], integration level = 5
fps | displacement [µm] | velocity [mm/s] | acceleration [G]

scale = 238 [µm], Δscale = 2.2 [µm], integration level = 5
fps | displacement [µm] | velocity [mm/s] | acceleration [G]

Table C.1: Accuracy budgets


Appendix D

Verification data

error [µm] | displacement [µm] | integration level

Table D.1: Mean errors in position

standard deviation | displacement [µm] | integration level

Table D.2: Standard deviation of positions


Appendix E

Bending visualization

[Eighteen panels of the phone shape: horizontal position [pixels] versus vertical position [pixels]]

Figure E.1: bending of phone during impact starting from ms till ms, in steps of ms

Appendix F

Matlab file

H:\Important files drop testing\rundic.m (17 november)

close all;
clear all;

starttime=cputime;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Input variables: %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% for the whole file:
% positive x-direction is from up to down
% positive y-direction is from left to right
%
%   -----> positive y-direction
%   |
%   |
%   v
%   positive x-direction

% specify the directory in which the pictures are placed
dir=char(input('In which directory are the pictures placed? (Indicate like c:\\dir\\subdir\\) ','s'));
path(path,dir);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% loading the images:
% provide the name of the images: leave the last 3 digits out and leave the
% .bmp extension out.
imname=input('What is the name of the first image (without the last 3 numbers and extension, e.g. cam001_c001s )? ','s');

startim=input('What is the frame number of the image that is used as first reference in the sequence? ');
nrofim=input('How many images do you want to correlate? ');

bmp=char('.bmp'); % providing the .bmp extension at the filename
for imnr=startim:startim+nrofim;
    nr=int2str(imnr);
    if imnr<100 % setting the first of the 3 last digits to 0
        nr=['0' nr];
        if imnr<10 % setting the second of the 3 last digits to 0
            nr=['0' nr];
        end
    end
    if imnr==startim
        startimage=[imname nr bmp];
    end
    if imnr==startim+nrofim
        endimage=[imname nr bmp];

    end
    pic(:,:,imnr-startim+1)=double(imread([dir imname nr bmp])); % loading the images in a 3-D array.
end

% Framerate: this is the framerate that is used during the recordings
framerate=input('What is the used framerate [frames/sec]? '); %[frames/second]

% scale factor: what is the size in millimeter of one pixel?
% For this: record a picture of a ruler and count how much mm one pixel is:
% scale=(amount of millimeters)/(amount of pixels)
scale=input('What is the scale factor [mm/pixel]? '); % [mm/pixel]

% Coordinates of first target pixel: What are the coordinates of the pixel you
% want to base the correlation on. Take into account that the surrounding
% area to all directions should be good enough for image correlation. (for
% the size of the surrounding area see variable sub.)
coorx=input('What is the coordinate in vertical direction of the upper left pixel of the examined grid? ');
coory=input('What is the coordinate in horizontal direction of the upper left pixel of the examined grid? ');

% Area: the area of the picture that is examined
areax=input('What is the size of the examined grid in vertical direction? ');
areay=input('What is the size of the examined grid in horizontal direction? ');

% pixstep: what steps are made between the pixels that are examined. For
% example pixstep=5 examines the 1st, the 6th, the 11th, etc. pixel
pixelstepx=input('What is the amount of pixels between every dot in the examined grid in vertical direction? ');
pixelstepy=input('What is the amount of pixels between every dot in the examined grid in horizontal direction? ');

% Determine subset. The subset is the size of the area surrounding the
% target pixel. This area is the area that is used as reference area to
% find the best matching new position of the pixel. An odd number is best,
% so the target pixel is surrounded at all sides by an equal number of
% reference pixels. Take into account that 3/4 of the subset of the first
% (upper left) pixel is out of the examined area (see variable area).
sub=input('What is the size of one side (m) of the subset in pixels? ');

% Determine search range: The search range determines the maximum amount of
% pixels that the particle can have moved in positive or negative
% direction.
rangex=input('What is the size of the search field in vertical direction? ');
rangey=input('What is the size of the search field in horizontal direction? ');

% Determine stepsize: The stepsize is the amount of pixels between every
% step in new possible position (in the next picture). The bigger the
% stepsize, the lower the amount of positions that are examined.

% For the test facility in Philips MDS Shanghai is a framestep of
% necessary. This is the test set up that combines the NDT-2000 drop test
% facility with the Fastcam Proton camera.
% framestep=input('What is the stepsize between every frame? ');
framestep=1;

% parameters for fine search:
% This is the number of interpolations made around the found position
% pixel. The bigger this number is, the more accurately a position can be found.
finenr=input('What is the interpolation level for the fine search? (0=no interpolation) ');

%%%%%%%%%%%%%% end of input %%%%%%%%%%%%%%%%%%%%%%%%%
% images are now placed in the way most viewers present them: (0,0) is
% in the upper left corner. But be careful: in this file the x-direction is
% from up to down and the y-direction is from left to right.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Start of the program:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% Determine the number of pixels that are examined:
nrofpixx=floor(areax/pixelstepx)+1;
nrofpixy=floor(areay/pixelstepy)+1;
nrofpix=nrofpixx*nrofpixy;

% The number of possible positions of each pixel that are examined:
nrofposx=floor(rangex*2/framestep)+1;
nrofposy=floor(rangey*2/framestep)+1;
nrofpos=nrofposx*nrofposy;

% The number of correlation coefficients that are calculated in the coarse
% search.
nrofcor=nrofpix*nrofpos*nrofim;
disp(['The number of correlation coefficients that is calculated during coarse search is ' num2str(nrofcor) '.'])

% Fine search parameters:
% Interpolation points further than half a pixel away from the target
% pixel are not interesting: if the best match were there, another pixel
% would have been chosen during the coarse search.
finestep=(2^(finenr-1)); % maximum number of steps (interpolation points) away from the target pixel
finestepwidth=finestep*2; % 1/finestepwidth indicates the smallest step made
nrofposfine=((1+finestepwidth)^2)-1; % number of positions examined during the fine search
% To prevent bugs in the program later on:
if finenr==0
    nrofposfine=0;
end
disp(['The number of correlation coefficients calculated during a normal fine search is ' num2str(nrofposfine) '.'])
%%%%%%

nrofcortot=(nrofpos+nrofposfine)*nrofpix*nrofim;
disp(['The total number of correlation coefficients calculated with a normal fine search is ' num2str(nrofcortot) '.'])

% Calculation of accuracy level:
stepaccuracy=scale/finestepwidth*1000;
disp(['The step accuracy is ' num2str(stepaccuracy) ' [micrometer].'])

% Make initial matrices of zeros:

% Matrix with the correlation coefficient of every pixel in every frame.
% (nrofim-1, because the first picture is the reference picture and so it
% doesn't have a correlation coefficient.)
Corr=zeros(nrofpixx,nrofpixy,nrofim-1);

% Matrix with x-position [pixels]:
posx=zeros(nrofpixx,nrofpixy,nrofim);
% Matrix with y-position [pixels]:
posy=zeros(nrofpixx,nrofpixy,nrofim);

% Matrix with x-displacement between every frame [pixels]:
dispx=zeros(nrofpixx,nrofpixy,nrofim-1);
% Matrix with y-displacement between every frame [pixels]:
dispy=zeros(nrofpixx,nrofpixy,nrofim-1);

% Put initial positions of every pixel in the position matrix:
for pixelnrx=0:nrofpixx-1
    posx(pixelnrx+1,:,1)=coorx+pixelnrx*pixelstepx;
end
for pixelnry=0:nrofpixy-1
    posy(:,pixelnry+1,1)=coory+pixelnry*pixelstepy;
end
% A matrix that is needed for smoothing is also initialized:
Xcor=posx;

Ycor=posy;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% start of correlation algorithm:      %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
finished_number_of_frames=0
% Every pixel of one frame is examined before moving on to the next frame.
for framenr=1:framestep:nrofim; % number of the reference frame
    % Indicate the pixel that is examined with (pixnrx,pixnry):
    for pixnry=1:nrofpixy; % number of the examined pixel in y-direction
        for pixnrx=1:nrofpixx; % number of the examined pixel in x-direction
            % Calculate the original position of the pixel: this is the
            % position of the origin (coorx,coory) plus the number of
            % pixels that the examined pixel is away from the origin
            % (pixnrx*pixelstepx,pixnry*pixelstepy) plus the displacement
            % of this pixel from the previous frame. round is used to
            % start calculating from the nearest integer point.
            posxvar=round(posx(pixnrx,pixnry,framenr));
            posyvar=round(posy(pixnrx,pixnry,framenr));
            % record the original position:
            if framenr==1
                % Get the reference subset around the examined pixel on
                % position (posxvar,posyvar). This only has to be done at
                % the beginning of every picture sequence, i.e. when a new
                % pixel is examined. The other times, the area found with
                % the highest correlation coefficient is recorded and used
                % as reference in the new picture sequence (see end of
                % fine search).
                Fref=pic(-floor(sub/2)+posxvar:floor(sub/2)+posxvar,-floor(sub/2)+posyvar:floor(sub/2)+posyvar,1);
                fgem=sum(sum(Fref))/(sub^2); % mean value of the reference subset
            else
                Fref=Frefstore(:,:,pixnrx,pixnry);
                fgem=fgemstore(pixnrx,pixnry);
            end

            % Now the reference for the examined pixel is known and the
            % search for a match can begin.

            %%%%%%%%%%%%%%%%%%
            % Coarse search: %
            %%%%%%%%%%%%%%%%%%
            for posnry=-((nrofposy-1)/2):((nrofposy-1)/2); % number of the examined position in y-direction
                for posnrx=-((nrofposx-1)/2):((nrofposx-1)/2); % number of the examined position in x-direction
                    if framenr<nrofim % the last image doesn't have a target image
                        % calculate the new position:
                        newposxvar=posxvar+posnrx*framestep;
                        newposyvar=posyvar+posnry*framestep;
                        % get the subset at the new position:
                        Gref=pic(-floor(sub/2)+newposxvar:floor(sub/2)+newposxvar,-floor(sub/2)+newposyvar:floor(sub/2)+newposyvar,framenr+1);
                        Ggem=sum(sum(Gref))/(sub^2); % mean value of the target subset
                        % calculate the correlation coefficient of the
                        % examined position:
                        corr=(sum(sum((Fref-fgem).*(Gref-Ggem)))/(sqrt(sum(sum((Fref-fgem).^2)))*sqrt(sum(sum((Gref-Ggem).^2)))));
                        % If the new position has a higher correlation
                        % coefficient than the previous position, then this
                        % coefficient becomes the coefficient to beat.
                        if corr>Corr(pixnrx,pixnry,framenr)
                            % store the correlation coefficient:
                            Corr(pixnrx,pixnry,framenr)=corr;
                            % record the new position of the pixel:
                            posx(pixnrx,pixnry,framenr+1)=newposxvar;
                            posy(pixnrx,pixnry,framenr+1)=newposyvar;

                            % Record for the fine search an area that is
                            % one pixel bigger on every side (which
                            % explains the -1 and +1 after newpos...).
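The coefficient computed in the coarse search above is the zero-normalized cross-correlation of the reference and target subsets: the subset means are subtracted, so the score is insensitive to uniform lighting changes. A minimal Python equivalent (the function name and test data are illustrative, not part of the original script):

```python
import numpy as np

def ncc(F, G):
    """Zero-normalized cross-correlation of two equally sized subsets."""
    f = F - F.mean()
    g = G - G.mean()
    return float((f * g).sum() / np.sqrt((f * f).sum() * (g * g).sum()))

rng = np.random.default_rng(0)
F = rng.random((21, 21))
print(round(ncc(F, F), 6))               # a subset matches itself: 1.0
print(round(ncc(F, 2.0 * F + 5.0), 6))   # invariant to gain and offset: 1.0
```

A value of 1 means a perfect match; the coarse search keeps the candidate position with the highest such value.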
                            Grefvar=pic(-floor(sub/2)+newposxvar-1:floor(sub/2)+newposxvar+1,-floor(sub/2)+newposyvar-1:floor(sub/2)+newposyvar+1,framenr+1);
                            if finenr==0
                                Frefstore(:,:,pixnrx,pixnry)=Gref;
                                fgemstore(pixnrx,pixnry)=sum(sum(Gref))/(sub^2);
                            end
                        end
                    end
                end
            end
            if framenr<nrofim
                % record the position found by the coarse search:
                poscoarsex(pixnrx,pixnry)=posx(pixnrx,pixnry,framenr+1);
                poscoarsey(pixnrx,pixnry)=posy(pixnrx,pixnry,framenr+1);
            end

            %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            % start fine search, based on bicubic spline interpolation: %
            %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
            % Fref and fgem stay the same. An interpolation is made on Gref
            % so we can move the deformed image half a pixel:

            if finenr>0 % finenr=0 means no interpolation steps
                if framenr<nrofim % the last image doesn't have a target image
                    %Greffine=Grefvar;
                    Greffine=interp2(Grefvar,finenr,'spline');
                    %Greffinestore(:,:,pixnrx,pixnry)=Greffine;
                    %Greffine=Greffine(2:(sub*2-2),2:(sub*2-2)); % leave the outer positions out
                    for posnry=-finestep:finestep; % number of the examined position in y-direction
                        for posnrx=-finestep:finestep; % number of the examined position in x-direction
                            Greffinevar=Greffine(1+posnrx+finestepwidth:finestepwidth:posnrx-finestepwidth+length(Greffine),1+posnry+finestepwidth:finestepwidth:posnry-finestepwidth+length(Greffine));
                            Greffinevargem=sum(sum(Greffinevar))/(sub^2);
                            corrfine=(sum(sum((Fref-fgem).*(Greffinevar-Greffinevargem)))/(sqrt(sum(sum((Fref-fgem).^2)))*sqrt(sum(sum((Greffinevar-Greffinevargem).^2)))));
                            %ccorr(framenr)=Corr(pixnrx+1,pixnry+1,framenr);
                            %ccorrvar(posnrx+finestep+1,posnry+finestep+1,framenr)=corrvar;
                            if corrfine>=Corr(pixnrx,pixnry,framenr);
                                % store the correlation coefficient:
                                Corr(pixnrx,pixnry,framenr)=corrfine;
                                % record the new position of the pixel:
                                posx(pixnrx,pixnry,framenr+1)=poscoarsex(pixnrx,pixnry)+posnrx/finestepwidth;
                                posy(pixnrx,pixnry,framenr+1)=poscoarsey(pixnrx,pixnry)+posnry/finestepwidth;
                                if nrofpixx<6 | nrofpixy<6
                                    % The acquired area with the highest
                                    % correlation coefficient is used as
                                    % reference for the new image:
                                    Frefstore(:,:,pixnrx,pixnry)=Greffinevar;
                                    fgemstore(pixnrx,pixnry)=Greffinevargem;
                                end
                            end
                        end
                    end
                end
            end
        end
    end

    %%%%%%%%%%%%%%%%%%%%%%%%%%%
    % smoothing the positions %
    %%%%%%%%%%%%%%%%%%%%%%%%%%%

    if framenr<nrofim & finenr>0 & nrofpixx>5 & nrofpixy>5
        % calculating the corrected position of every dot, based on the
        % smoothed grid:
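The fine search above reaches sub-pixel accuracy by interpolating the target subset and re-evaluating the correlation at fractional shifts. As a simplified, library-free illustration of sampling a subset at a fractional center, the sketch below uses bilinear instead of the script's bicubic spline interpolation; all names and the ramp image are invented for the example.

```python
import numpy as np

def sample_subset(img, cx, cy, half):
    """Sample a (2*half+1)^2 subset centred on the fractional position
    (cx, cy) using bilinear interpolation. The original script instead
    upsamples the subset with bicubic splines via interp2."""
    xs = np.arange(-half, half + 1) + cx
    ys = np.arange(-half, half + 1) + cy
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    x0, y0 = np.floor(X).astype(int), np.floor(Y).astype(int)
    dx, dy = X - x0, Y - y0
    # weighted blend of the four surrounding integer pixels
    return ((1 - dx) * (1 - dy) * img[x0, y0]
            + dx * (1 - dy) * img[x0 + 1, y0]
            + (1 - dx) * dy * img[x0, y0 + 1]
            + dx * dy * img[x0 + 1, y0 + 1])

# On a linear intensity ramp I(x,y) = x + 2y bilinear sampling is exact:
img = np.add.outer(np.arange(16.0), 2.0 * np.arange(16.0))
patch = sample_subset(img, 7.5, 6.25, 3)
print(round(float(patch[3, 3]), 4))  # centre value: 7.5 + 2*6.25 = 20.0
```

The fine search then evaluates the correlation coefficient at each fractional shift and keeps the best one, exactly as the coarse search does at integer shifts.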

        [Xcor,Ycor]=smoothgrid(posx(:,:,framenr+1),posy(:,:,framenr+1));
        % rounding the smoothed position to a known integration point:
        Xcorround=(round(Xcor.*finestepwidth))/finestepwidth;
        Ycorround=(round(Ycor.*finestepwidth))/finestepwidth;
        poscorcoarsex=floor(Xcorround);
        poscorcoarsey=floor(Ycorround);
        finestepx=((Xcorround-poscorcoarsex).*finestepwidth);
        finestepy=((Ycorround-poscorcoarsey).*finestepwidth);
        posx(:,:,framenr+1)=Xcor;
        posy(:,:,framenr+1)=Ycor;
        for pixnry=1:nrofpixy; % number of the pixel in y-direction
            for pixnrx=1:nrofpixx; % number of the pixel in x-direction
                Frefcoarsevar=pic(-floor(sub/2)+poscorcoarsex(pixnrx,pixnry):floor(sub/2)+poscorcoarsex(pixnrx,pixnry)+1,-floor(sub/2)+poscorcoarsey(pixnrx,pixnry):floor(sub/2)+poscorcoarsey(pixnrx,pixnry)+1,framenr+1);
                Frefcoarse=interp2(Frefcoarsevar,finenr,'spline');
                %positionsx=[finestepx(pixnrx,pixnry):finestepwidth:finestepx(pixnrx,pixnry)+sub*finestepwidth];
                Frefstore(:,:,pixnrx,pixnry)=Frefcoarse(finestepx(pixnrx,pixnry)+1:finestepwidth:finestepx(pixnrx,pixnry)+1+finestepwidth*(sub-1),finestepy(pixnrx,pixnry)+1:finestepwidth:finestepy(pixnrx,pixnry)+1+finestepwidth*(sub-1));
                fgemstore(pixnrx,pixnry)=sum(sum(Frefstore(:,:,pixnrx,pixnry)))/(sub^2);
            end
        end
    end
    save([dir char('rundic.mat')]);
    finished_number_of_frames=finished_number_of_frames+1
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% end of search algorithm                         %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Corrav=sum(sum(sum(Corr)))/prod(size(Corr));
disp(['The average correlation coefficient is ' num2str(Corrav) '.'])

endtime=cputime;
calculationtime=endtime-starttime;
disp(['The calculation time of the DIC algorithm is ' num2str(calculationtime) ' [s].'])

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Interpolation of the position: %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% To interpolate the results we need to shift the dimensions (shiftdim).
% Shifting once makes the rows columns, the columns the 3rd dimension and
% the 3rd dimension the rows. So the movement in x-direction of the pixel
% on position (a,b) can be seen in shdispx(b,:,a).
shposx=shiftdim(posx,1);

shposy=shiftdim(posy,1);

% Calculation of the velocity and acceleration without interpolation.

% record the displacement:
for vnr=1:nrofim-1;
    dispx(:,:,vnr)=posx(:,:,vnr+1)-posx(:,:,vnr);
    dispy(:,:,vnr)=posy(:,:,vnr+1)-posy(:,:,vnr);
end
% record position [mm]:
posxmm=posx.*scale;
posymm=posy.*scale;
% record displacement [mm]:
dispxmm=dispx.*scale;
dispymm=dispy.*scale;
% record velocity [m/s]:
vxms=dispxmm.*framerate/1000;
vyms=dispymm.*framerate/1000;
% calculate the acceleration [m/s^2]:
for accnr=1:(nrofim-2); % nrofim-2 is the number of accelerations that can be calculated
    axmss(:,:,accnr)=(vxms(:,:,accnr+1)-vxms(:,:,accnr))*framerate; % a=(v_frame2_3-v_frame1_2)*framerate
    aymss(:,:,accnr)=(vyms(:,:,accnr+1)-vyms(:,:,accnr))*framerate;
end
% record acceleration [G]:
axg=axmss./9.81;
ayg=aymss./9.81;

% So:
% the displacements are stored in dispx and dispy [pixels] and in dispxmm and dispymm [mm]
% the correlation coefficients of the displacements are stored in Corr
% the velocities are stored in vxms and vyms [m/s]
% the accelerations are stored in axmss and aymss [m/s^2] and axg and ayg [G]

% Smoothing over time:
intmeth=input('Which smoothing method over time do you want to use: no smoothing (1), Butterworth filter (2)? ');

if intmeth==1 % needed for further calculations
    posxint=posx;
    posyint=posy;
    posxintmm=posxmm;
    posyintmm=posymm;
    dispxint=dispx;
    dispyint=dispy;
    dispxintmm=dispxmm;
    dispyintmm=dispymm;
    vxintms=vxms;
    vyintms=vyms;
    axintmss=axmss;
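The chain above from pixel positions to velocity [m/s] and acceleration [G] is plain finite differencing plus unit conversion. A Python sketch of the same conversions (the function name and the free-fall numbers are made up for illustration):

```python
import numpy as np

def kinematics(pos_px, scale_mm_per_px, framerate):
    """Pixel positions per frame -> displacement [mm], velocity [m/s]
    and acceleration [G], mirroring the finite differences in the script."""
    pos_mm = pos_px * scale_mm_per_px
    disp_mm = np.diff(pos_mm)            # per-frame displacement [mm]
    v_ms = disp_mm * framerate / 1000.0  # mm per frame -> m/s
    a_mss = np.diff(v_ms) * framerate    # (m/s) per frame -> m/s^2
    return disp_mm, v_ms, a_mss / 9.81   # acceleration in G

# Free fall sampled at 1000 fps with 0.1 mm/pixel (made-up numbers):
fps, scale = 1000.0, 0.1
t = np.arange(8) / fps
pos_px = 0.5 * 9.81 * t**2 * 1000.0 / scale  # x = g*t^2/2, in pixels
_, _, a_g = kinematics(pos_px, scale, fps)
print(round(float(a_g.mean()), 6))  # recovers 1 G: 1.0
```

Because each difference loses one sample, n frames give n-1 velocities and n-2 accelerations, matching the nrofim-1 and nrofim-2 loop bounds in the script.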

    ayintmss=aymss;
    axintg=axg;
    ayintg=ayg;
end
if intmeth==2 % Butterworth filter
    orderpol=input('What is the order of the lowpass Butterworth filter? ');
    disp('Asked for is Wn:');
    disp('If Wn is a one-element vector it will be used as the cutoff frequency.');
    disp('In this case Wn must be 0.0 < Wn < 1.0, with 1.0 corresponding to half the frame rate.');
    disp('If Wn is a two-element vector, Wn = [W1 W2], you use an order 2N bandpass filter with passband W1 < W < W2.');
    Wn=input('What is Wn? ');
    [B,A]=butter(orderpol,Wn);
    posxint=zeros(nrofpixx,nrofpixy,nrofim);
    posyint=zeros(nrofpixx,nrofpixy,nrofim);
    for pixnry=1:nrofpixy; % number of the examined pixel in y-direction
        for pixnrx=1:nrofpixx; % number of the examined pixel in x-direction
            posxint(pixnrx,pixnry,:)=filtfilt(B,A,shposx(pixnry,:,pixnrx));
            posyint(pixnrx,pixnry,:)=filtfilt(B,A,shposy(pixnry,:,pixnrx));
        end
    end
    for vnr=1:nrofim-1;
        dispxint(:,:,vnr)=posxint(:,:,vnr+1)-posxint(:,:,vnr);
        dispyint(:,:,vnr)=posyint(:,:,vnr+1)-posyint(:,:,vnr);
    end
    % record position [mm]:
    posxintmm=posxint.*scale;
    posyintmm=posyint.*scale;
    % record displacement [mm]:
    dispxintmm=dispxint.*scale;
    dispyintmm=dispyint.*scale;
    % record velocity [m/s]:
    vxintms=dispxintmm.*framerate/1000;
    vyintms=dispyintmm.*framerate/1000;
    % calculate the acceleration [m/s^2]:
    for accnr=1:(nrofim-2); % nrofim-2 is the number of accelerations that can be calculated
        axintmss(:,:,accnr)=(vxintms(:,:,accnr+1)-vxintms(:,:,accnr))*framerate;
        ayintmss(:,:,accnr)=(vyintms(:,:,accnr+1)-vyintms(:,:,accnr))*framerate;
    end
    % record acceleration [G]:
    axintg=axintmss./9.81;
    ayintg=ayintmss./9.81;
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%% end of calculation of interpolated velocities and G-levels %%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
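The script smooths the position traces with a zero-phase lowpass: the Butterworth filter from butter is run forward and backward over the signal by filtfilt, so the smoothing adds no time delay. The sketch below illustrates only that forward-backward idea with a plain moving average in numpy; it is a stand-in, not the Butterworth filter the script actually uses, and all names are invented.

```python
import numpy as np

def zero_phase_smooth(x, width):
    """Forward-backward moving average: the same run-twice trick that
    filtfilt uses to cancel phase lag (a stand-in for butter/filtfilt)."""
    kernel = np.ones(width) / width
    pad = width  # reflect-pad so the ends are not pulled toward zero
    xp = np.concatenate([x[pad:0:-1], x, x[-2:-pad - 2:-1]])
    fwd = np.convolve(xp, kernel, mode="same")
    bwd = np.convolve(fwd[::-1], kernel, mode="same")[::-1]
    return bwd[pad:pad + len(x)]

# A constant trajectory passes through unchanged (no lag, no bias):
x = np.full(50, 3.0)
print(round(float(zero_phase_smooth(x, 5).max()), 6))  # 3.0
```

Smoothing the positions first, rather than the accelerations, matters because the double finite difference amplifies any pixel-level noise by framerate squared.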

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% saving the used parameters in one file: %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
!del parameters.txt /q
parameters=struct('startimage',{startimage},'endimage',{endimage},'framerate',{framerate},'scale',{scale},'x_coordinate',{coorx},'y_coordinate',{coory},'areax',{areax},'areay',{areay},'pixelstep_x',{pixelstepx},'pixelstep_y',{pixelstepy},'size_of_reference_area',{sub},'maximum_stepping_x_direction',{rangex},'maximum_stepping_y_direction',{rangey},'frame_stepping',{framestep},'number_of_examined_pictures',{nrofim},'number_of_interpolations',{finenr});
diary parameters.txt
parameters
diary off

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% make grid position points visible %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% plot the gridpoints:
plotthegrid=input('Do you want a visualization of the grid in every frame? (0=no, 1=yes) ');
if plotthegrid==1;
    figure
    for framenr=1:nrofim
        posvecx=[];
        for j=1:size(posx,2)
            posvecx=[posvecx; -posx(:,j,framenr)];
        end
        posvecy=[];
        for j=1:size(posy,2)
            posvecy=[posvecy; posy(:,j,framenr)];
        end
        plot(posvecy,posvecx,'.')
        title(['Position of pixels in frame ' num2str(framenr)]);
        pause(0.5)
    end
    close
end

%%%%%%%%%%%%%%%%%%%%%%
% plot the results:  %
%%%%%%%%%%%%%%%%%%%%%%

plotdot=input('Do you want to plot the movement of one particular point? (0=no, 1=yes) ');
if plotdot==1
    a=input('What is the row number of the pixel you want to plot? ');

    b=input('What is the column number of the pixel you want to plot? ');
    c=input('Do you want to plot the position [pixels] (1), velocity [m/s] (2), acceleration [G-levels] (3) or all (4)? ');

    % record the examined coordinate:
    acoor=coorx+(a-1)*pixelstepx;
    bcoor=coory+(b-1)*pixelstepy;

    % To plot the results we need to shift the dimensions (shiftdim).
    % Shifting once makes the rows columns, the columns the 3rd dimension
    % and the 3rd dimension the rows. So the movement in x-direction of the
    % pixel on position (a,b) can be seen in shdispx(b,:,a).

    % make a scale of seconds:
    seconds=(1:nrofim)/framerate;
    if intmeth==1
        secondsint=(1:1/10:nrofim)/framerate;
        if c==1 | c==4
            figure(1)
            shposx=shiftdim(posx,1);
            shposxint=shiftdim(posxint,1);
            plot(seconds,shposx(b,:,a));
            title(['Position of pixel (' num2str(acoor) ',' num2str(bcoor) ').'])
            xlabel('time [s]')
            ylabel('pos [pixels]')
        end
        if c==2 | c==4
            figure(2)
            shvxms=shiftdim(vxms,1);
            shvxintms=shiftdim(vxintms,1);
            plot(seconds(1:nrofim-1),shvxms(b,:,a));
            title(['Velocity of pixel (' num2str(acoor) ',' num2str(bcoor) ').'])
            xlabel('time [s]')
            ylabel('velocity [m/s]')
        end
        if c==3 | c==4
            figure(3)
            shaxg=shiftdim(axg,1);
            shaxintg=shiftdim(axintg,1);
            plot(seconds(1:nrofim-2),shaxg(b,:,a));
            title(['Acceleration of pixel (' num2str(acoor) ',' num2str(bcoor) ').'])
            xlabel('time [s]')
            ylabel('acceleration [G]')
        end
    end
    if intmeth==2
        if c==1 | c==4
            figure(1)
            shposx=shiftdim(posx,1);
            shposxint=shiftdim(posxint,1);
            plot(seconds,shposx(b,:,a),seconds,shposxint(b,:,a),'r');

            title(['Position of pixel (' num2str(acoor) ',' num2str(bcoor) '). Blue: no interpolation in time domain. Red: interpolated line.'])
            xlabel('time [s]')
            ylabel('pos [pixels]')
        end
        if c==2 | c==4
            figure(2)
            shvxms=shiftdim(vxms,1);
            shvxintms=shiftdim(vxintms,1);
            plot(seconds(1:nrofim-1),shvxms(b,:,a),seconds(1:nrofim-1),shvxintms(b,:,a),'r');
            title(['Velocity of pixel (' num2str(acoor) ',' num2str(bcoor) '). Blue: no interpolation in time domain. Red: interpolated line.'])
            xlabel('time [s]')
            ylabel('velocity [m/s]')
        end
        if c==3 | c==4
            figure(3)
            shaxg=shiftdim(axg,1);
            shaxintg=shiftdim(axintg,1);
            plot(seconds(1:nrofim-2),shaxg(b,:,a),seconds(1:nrofim-2),shaxintg(b,:,a),'r');
            title(['Acceleration of pixel (' num2str(acoor) ',' num2str(bcoor) '). Blue: no interpolation in time domain. Red: interpolated line.'])
            xlabel('time [s]')
            ylabel('acceleration [G]')
        end
    end
end

% % plot position of the pixels relative to each other:
% p=1; % plot the position or not? p=0 -> no plot; p=1 -> plot
% if p==1
%     figure;
%     for framenr=1:nrofim-1; % number of the examined frame
%         pcolor(coory:pixelstepy:coory+areay,coorx:pixelstepx:coorx+areax,posx(:,:,framenr));
%         axis image
%         shading interp
%         colorbar('horiz')
%         title(['position_x' num2str(framenr)]);
%         pause(0.5)
%     end
%     pause
%     for framenr=1:nrofim-1; % number of the examined frame
%         pcolor(coory:pixelstepy:coory+areay,coorx:pixelstepx:coorx+areax,posy(:,:,framenr));
%         axis image
%         shading interp
%         colorbar('horiz')
%         title(['position_y' num2str(framenr)]);
%         pause(0.5)
%     end
% end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Calculation of rotation:  %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
seconds=(1:nrofim)/framerate;
if nrofpixy>2
    % based on fitting through one row:
    for framenr=1:nrofim; % number of the examined frame
        for rownr=1:nrofpixx % number of the examined row
            pr=polyfit(posy(rownr,:,framenr),posx(rownr,:,framenr),1); % linear fit through the points
            cr(rownr,framenr)=pr(2); % pr(2) is the constant of the line fit
            dcr(rownr,framenr)=pr(1); % pr(1) is the direction coefficient of the line fit
            angler(rownr,framenr)=atan(pr(1)); % angler stores the angle [rad] of every row
        end
        avdcr(framenr)=sum(dcr(:,framenr))/nrofpixx; % average direction coefficient of all the rows in one image
        avangler(framenr)=sum(angler(:,framenr))/nrofpixx; % average angle of all the rows in one image
        % avcr(framenr)=sum(cr(:,framenr))/nrofpixx; % average constant of all the rows in one image
        if framenr>1 % rotation of the phone between two images, stored in rotationr
            rotationr(framenr-1)=avangler(framenr)-avangler(framenr-1);
        end
    end

    % The final result of the angle of the phone (the average angle of all
    % the fitted lines) is stored as angle:
    angle=avangler/2/pi*360; % [degrees]
    dc=dcr; % [-]
    rotation=rotationr/2/pi*360*framerate; % [degrees/second]

    angleplot=input('Do you want a plot of the angle of the phone? (0=no, 1=yes) ');
    if angleplot==1
        figure
        plot(seconds,angle);
        xlabel('time [s]')
        ylabel('angle of phone [degrees]')
        title('Angle of the phone during the movement')
    end

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%
    % Calculation of bending:  %
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%

    % The average angle that is calculated is going to be the angle of
    % every line fit for every row. Based on this angle a new constant
    % has to be calculated:

    if nrofpixx>1
        for framenr=1:nrofim; % number of the examined frame
            for rownr=1:nrofpixx % number of the examined row
                for colnr=1:nrofpixy; % number of the examined column
                    cvar(rownr,colnr,framenr)=posx(rownr,colnr,framenr)-dc(rownr,framenr)*posy(rownr,colnr,framenr);
                end
                cn(rownr,framenr)=sum(cvar(rownr,:,framenr))/nrofpixy; % stores the new constants (ConstantNew=cn)
            end
        end
        for rownr=1:nrofpixx-1 % number of the examined row
            compr(rownr,:)=(cn(rownr+1,:)-cn(rownr,:));
        end
        % calculate the compression ratio:
        comprratio=compr./pixelstepx;

        % Calculate the distance between the actual position of every
        % pixel and the line that is fitted through the row of pixels:
        for framenr=1:nrofim; % number of the examined frame
            for rownr=1:nrofpixx % number of the examined row
                for colnr=1:nrofpixy; % number of the examined column
                    % calculate the position of the dot based on the line
                    % fit and the horizontal position:
                    lineposx(rownr,colnr,framenr)=dc(rownr,framenr)*posy(rownr,colnr,framenr)+cn(rownr,framenr);
                end
            end
        end
        % calculate the deviation from the fitted line:
        bending=posx-lineposx;

        % Remove the information of the position in the image, but keep
        % the relative position:
        for rownr=1:nrofpixx-1 % number of the examined row
            comprp(rownr,:)=(cn(nrofpixx,:)-cn(rownr,:));
        end
        comprpos=[comprp; zeros(1,nrofim)];

        % Calculate the mode shape. Here the position in the frame and
        % the rotation information is removed.
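The rotation section above fits a straight line through each row of grid points and converts the slope of that fit into an inclination angle. A Python sketch of the per-row fit (the function name and the test data are illustrative):

```python
import numpy as np

def row_angle_deg(ypos, xpos):
    """Fit a line x = p1*y + p2 through one row of grid points and
    return its inclination in degrees, as the rotation section does
    per row before averaging over all rows."""
    p = np.polyfit(ypos, xpos, 1)  # p[0] slope, p[1] constant
    return float(np.degrees(np.arctan(p[0])))

# A row of markers lying on a 5-degree slope (illustrative data):
y = np.linspace(0.0, 100.0, 11)
x = 40.0 + np.tan(np.radians(5.0)) * y
print(round(row_angle_deg(y, x), 4))  # 5.0
```

Averaging the angle over all rows, as the script does, suppresses the scatter of individual marker positions; the frame-to-frame difference of that average, times the frame rate, gives the rotation rate.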
        for framenr=1:nrofim; % number of the examined frame
            for rownr=1:nrofpixx % number of the examined row
                for colnr=1:nrofpixy; % number of the examined column
                    norot(rownr,colnr,framenr)=bending(rownr,colnr,framenr)+comprpos(rownr,framenr);
                end
            end
        end
        % plot the mode shapes:
        modeplot=input('Do you want to visualize the bending of the phone? (0=no, 1=yes) ');
        if modeplot==1
            figure
            for framenr=1:nrofim;
                for rownr=1:nrofpixx;
                    plot(posy(rownr,:,framenr),norot(rownr,:,framenr));
                    hold on
                end
                title(['Phone shape ' num2str(framenr)]);
                %axis image
                axis([coory-10 coory+areay+10 -4 areax+4])
                xlabel('vertical position [pixels]')
                ylabel('horizontal position [pixels]')
                pause(0.5)
                hold off
            end
        end

        % Perhaps for more accurate recordings this result can be used.
        % For the used FASTCAM Ultima 512 in combination with the NDT-2000
        % drop tester the analysis was too inaccurate to make this
        % calculation.
        for framenr=1:nrofim; % number of the examined frame
            for rownr=1:nrofpixx % number of the examined row
                for colnr=1:nrofpixy-1; % number of the examined column
                    lengthline(colnr)=sqrt((posx(rownr,colnr+1,framenr)-posx(rownr,colnr,framenr))^2+(posy(rownr,colnr+1,framenr)-posy(rownr,colnr,framenr))^2);
                end
                lengthrow(rownr,framenr)=sum(lengthline);
            end
        end
        originallength=posy(1,nrofpixy,1)-posy(1,1,1);
        for framenr=1:nrofim; % number of the examined frame
            rellengthrow(:,framenr)=lengthrow(:,framenr)./originallength;
        end
    end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% calculate the deformation %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Straightforward, based on the difference in displacement compared to the
% distance between the points.
for framenr=1:nrofim-1; % number of the examined frame
    for rownr=1:nrofpixx-1 % number of the examined row
        for colnr=1:nrofpixy; % number of the examined column
            epsx(rownr,colnr,framenr)=(dispxint(rownr+1,colnr,framenr)-dispxint(rownr,colnr,framenr))/(posx(rownr+1,colnr,framenr)-posx(rownr,colnr,framenr));

73 H:\Important files drop testing\rundic.m Page november :04: end 741 end 742 end for framenr=1:nrofim-1; % indicates the number of the examined frame; 745 for colnr=1:nrofpixy-1 % indicates the row number that is examined; 746 for rownr=1:nrofpixx; % indicates the column number that is examined; 747 epsy(rownr,colnr,framenr)=(dispyint(rownr,colnr+1,framenr)-dispyint(ro wnr,colnr,framenr))/(posy(rownr,colnr+1,framenr)-posy(rownr,colnr,framenr)); 748 end 749 end 750 end for framenr=1:nrofim-1; % indicates the number of the examined frame; 753 for colnr=1:nrofpixy-1 % indicates the row number that is examined; 754 for rownr=1:nrofpixx-1; % indicates the column number that is examined; 755 gammaxy(rownr,colnr,framenr)=(dispxint(rownr+1,colnr,framenr)-dispxint (rownr+1,colnr,framenr))/(posy(rownr,colnr+1,framenr)-posy(rownr,colnr,framenr))+( dispyint(rownr,colnr+1,framenr)-dispyint(rownr,colnr,framenr))/(posx(rownr+1,colnr,framenr)-posx(rownr,colnr,framenr)); 756 end 757 end 758 end % plot deformation results: 761 d=input('you want to plot the deformation? (0=no, 1=yes) '); 762 if d==1 763 figure; 764 for framenr=1:nrofim-1; % indicates the number of the examined frame; 765 [C,h]=contour(1*pixelstepy:pixelstepy:nrofpixy*pixelstepy,1:pixelstepx:(nr ofpixx-1)*pixelstepx,epsx(:,:,framenr)); 766 clabel(c,h) 767 shading interp 768 axis image 769 caxis([ ]) 770 title(['epsilon_x' num2str(framenr)]); 771 pause 772 end for framenr=1:nrofim-1; % indicates the number of the examined frame; 775 [C,h]=contour(1:pixelstepy:(nrofpixy-1)*pixelstepy,1:pixelstepx:nrofpixx*p ixelstepx,epsy(:,:,framenr)); 776 clabel(c,h) 777 shading interp 778 axis image 779 caxis([ ]) 780 title(['eps_y' num2str(framenr)]); 781 pause 782 end for framenr=1:nrofim-1; % indicates the number of the examined frame; 785 [C,h]=contour(1:pixelstepy:(nrofpixy-1)*pixelstepy,1:pixelstepx:(nrofpixx-

74 H:\Important files drop testing\rundic.m Page november :04:26 1)*pixelstepx,gammaxy(:,:,framenr)); 786 clabel(c,h) 787 shading interp 788 axis image 789 caxis([ ]) 790 title(['gamma_xy' num2str(framenr)]); 791 pause 792 end 793 end % save all the results: 796 save([dir char('rundic.mat')])
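The normal strains at the end of the script are forward differences of displacement over position between neighbouring grid points. A minimal plain-numpy illustration (the function name and the 2 % stretch example are invented for the sketch):

```python
import numpy as np

def normal_strain(disp, pos, axis):
    """eps = d(disp)/d(pos) by forward differences along one grid axis,
    as in the deformation section of the script."""
    return np.diff(disp, axis=axis) / np.diff(pos, axis=axis)

# Uniform 2 % stretch of a 5x5 grid in the x-direction:
x = np.linspace(0.0, 40.0, 5)
posx = np.repeat(x[:, None], 5, axis=1)  # x-position of every grid point
dispx = 0.02 * posx                      # displacement grows linearly in x
eps = normal_strain(dispx, posx, axis=0)
print(round(float(eps.mean()), 6))  # 0.02
```

Because the denominator is the measured point spacing rather than the nominal one, the strain estimate stays consistent even when the grid itself has moved between frames.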

Manual for the NDT-2000 drop test facility in combination with the
FASTCAM Ultima 512 high speed camera

P.L.W. Scheijgrond
Philips Mobile Display Systems, Shanghai, China
Eindhoven University of Technology, Department of Mechanical Engineering,
Eindhoven, The Netherlands

Coaching:
dr. D.X.Q. Shi, Philips Mobile Display Systems, Shanghai, China
ir. W.D. van Driel, Philips Semiconductors, Nijmegen, The Netherlands

Supervisors:
prof.dr. H. Nijmeijer
prof.dr. G.Q. Zhang
Department of Mechanical Engineering, Eindhoven University of Technology,
Eindhoven, The Netherlands

Shanghai, 17th June 2005

Contents

1 Context                               2
2 Used Materials                        3
3 Preparation of test set up            4
    Preparation of camera
    Preparation of software
    Calibration of the camera
    Preparation of phone
    Preparation of light
4 Use of the test set up
    Visualizing the phone
    Choosing the right parameters
    Recording the drop
    Recording the ruler
    Use of the software
        Input
        Output
        Error messages
        Interpretation of output        21

Chapter 1

Context

The reliability of portable electronic products is a major issue for manufacturers and customers. At Philips MDS Shanghai the NDT-2000 is one of the main drop test facilities; it should provide information about the phenomena that occur when a product is dropped on the ground. The drop test facility is capable of dropping a product under different orientations.

There are different possibilities to examine the phenomena occurring during impact. This manual describes how the impact of a mobile phone can be examined by using one FASTCAM Ultima 512 high speed camera. The test set-up can provide information about deformations, strains, G-levels, velocities, energy losses, rotations and bending at arbitrary positions on the recorded side of the phone. At the time of writing this was a new technology, and further research is likely to expand the possibilities of this test set-up.

For further details about the theoretical background, the reader is referred to Digital Image Correlation for Analyzing Portable Electronic Products during Drop Impact Tests, by Pieter L.W. Scheijgrond. This manual focuses on the use of the NDT-2000 in combination with the FASTCAM Ultima 512, but it can also be used as a reference for other drop tests. For more details about the NDT-2000 the reader is referred to the NDT manual; for the FASTCAM Ultima 512, to the manual that comes with the camera.

Chapter 2

Used Materials

The test set-up consists of:

- The NDT-2000 drop tester, produced by Herstad+Piper A/S
- The FASTCAM Ultima 512 high speed camera
- A PC system that can be linked via an IEEE 1394 connection to the high speed camera
- 1 LG-III cold light source with optical fiber
- 1 Fostec LLC light source with optical fiber
- 1 tripod

The software consists of the following files:

- rundic.m. This is the main file, in which the DIC algorithm is implemented.
- smoothgrid.m. This file is used for grid smoothing.
- inpaint_nans.m. This file is used as a subfile for the grid smoothing.
- hillclimbing.m. This file is for advanced users only. In this file the hill climbing method for the fine search is programmed; it can be copied into rundic.m to implement it.
- stepwise_integration.m. This file is for advanced users only. In this file a stepwise integration method for the fine search is programmed; it can be copied into rundic.m to implement it.

Chapter 3

Preparation of test set up

Figure 3.1: A properly installed test set-up.

This chapter describes how a proper test set-up is put together; the rough placement of the parts is covered here. When finished, it should look like figure 3.1.

3.1 Preparation of camera

Before the camera can be used it should be connected and positioned. This is done in the following steps:

- Attach the power cable to the FASTCAM Ultima 512 processor. A red LED on the processor should light up to indicate that the device is supplied with power.
- Attach the IEEE-1394 cable to the FASTCAM Ultima 512 processor and the PC system. An orange LED on the processor should light up to indicate that the device has made a connection with a computer.
- Attach the camera head with lens to the tripod. For recordings of a horizontally orientated phone the camera head should be on top of the tripod. For recordings of a vertically falling phone the ball head of the tripod should be turned 90 degrees, so that it points sideways from the tripod. Turn the ball head by turning the screw next to it counter-clockwise, rotating the ball head in the desired direction, and turning the screw clockwise again.
- Remove the lens cover from the camera head by turning it. Attach a lens of your choice to the camera head.
- Make sure the ball head is properly fixed by re-examining the screw next to the ball head and turning it clockwise. Screw the camera head onto the screw on top of the ball head.
- Place the tripod with the camera head on the ground. Make sure the legs of the tripod are in their widest-spread position and the whole assembly feels stable. The camera can be placed far away from the cabinet of the NDT-2000, which allows the user to record the complete phone, or next to the cabinet, which allows the user to zoom in on a certain part of the phone and get a higher accuracy at that part. Always make sure the phone will not hit the camera head during a drop and that the tripod legs are in their widest-spread position!
- Attach the camera cable to the camera head and the processor.

3.0.2 Preparation of software

The software that is provided with the FASTCAM Ultima 512 comes on a CD-ROM. The camera manual describes how the software needs to be installed on your computer. By following the instructions, the PFV software should be installed properly and is immediately ready to be used. If the software is started and, under Display in the Camera tab, the option Live is chosen, the visible field of the camera appears on the screen.

3.0.3 Calibration of the camera

The camera needs to record a black field as a reference for the rest of the recordings. Therefore a cover needs to be placed on the lens: the cover provided with the lens should be put on the front of the lens. The PFV software then provides an option to calibrate the results: under the Camera tab, press the Option button and then Calibrate. By clicking this button the program makes a recording that will be used as the reference for the color black.

3.0.4 Preparation of phone

Before the speckle is applied to the phone, the user should decide which part of the phone will be examined. When one camera is used, the number of impact orientations is limited and the inspected side should stay in one plane during the impact. To find out which side of the phone should be examined, you might want to make a couple of test drops to see how the phone moves. How to make those recordings can be found in chapter 4.

The side of the phone that will be examined needs to be painted white using a spray can. This should be mat spray, to prevent reflections of the light in a recording. The spraying needs to be done in a well ventilated area, and the sides that do not need to be sprayed have to be covered with tape. To get the best result, spray in several thin layers; let every layer dry for several (3) minutes before spraying the next. When the colors of the phone are no longer visible and a uniform white surface is achieved, the white layer is ready.

For a good speckle the placement and size of the dots is very important. The dots should be about the same size as one pixel in the recording. Dots that are too big will give different pixels in the same area the same gray level, so no contrast can be seen. Dots that are too small will give too weak a contrast, because the pixels will average the gray levels of dark and light dots. For the FASTCAM Ultima 512 the dot size made by a ballpoint pen is correct. To prepare the speckle with a pen, the user should place the dots in a truly random pattern. This is best done by making smooth movements over the phone while placing the dots.

If other cameras are used, the speckle preparation technique mentioned above may not provide a good speckle and another technique has to be applied. In those cases it is advised to examine the speckle before applying it to the phone. The best way to do that is to make a dummy speckle, place it where the phone is most likely to fall, and take a photo with the high speed camera. The recorded image should then be examined with a program that can graph the graylevels of the picture along a certain line (Photoshop or Matlab are suitable for this). The graph should show very sharp peaks that fluctuate between the highest and the lowest possible graylevel within two data points. This check has to be done at several places in the image. If the speckle shows randomly fluctuating graylevels over the whole picture, the speckle can be applied to the phone. An example of a phone with a good speckle is shown in figure 3.2.

Figure 3.2: A phone with a good applied speckle under impact

The covers of the phone are often taped onto the phone to prevent them from popping off during a drop. This tape can be placed at any point on the phone, but not over the recorded speckle. Keep in mind that putting tape on the phone can affect the drop test results.

3.0.5 Preparation of light

The lighting of the phone is a delicate job. The lights need to be placed in such a way that on every part of the phone the contrast between black and white is clearly seen. How to place the optical fibers properly and choose the right light intensity is described later in this manual. The following steps have to be taken to install the cold light sources with optical fibers:

- Attach the cold light sources to the power supply.
- Attach the optical fibers to the cold light sources. For this, unscrew the fixture screw at the light hole (see figure 3.3), put the optical fiber in the hole and tighten the screw again.
- Place the fibers in the corners of the platform and point them towards the phone. To fix the fibers more stably, they are often pushed between the foam and the cabinet, and tape is put around the ends of the fibers to attach them to the concrete tile. For illustration see figure 3.4.

Figure 3.3: The cold light sources with the optical fibers attached (fixture screw indicated)

Figure 3.4: The fixture of the optical fibers on the NDT-2000
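The speckle graylevel check described above (sharp peaks between the graylevel extremes along a line) can be automated instead of read off a Photoshop or Matlab graph. A sketch, assuming the recording is available as an 8-bit grayscale NumPy array (the function name and example values are illustrative):

```python
import numpy as np

def speckle_line_contrast(img, row):
    """For one image row, return the fraction of the 0-255 graylevel range
    spanned, and the mean absolute step between neighboring pixels."""
    line = img[row].astype(float)
    span = (line.max() - line.min()) / 255.0
    steps = np.abs(np.diff(line)).mean()
    return span, steps

# A good speckle row alternates sharply between dark dots and white paint.
good_row = np.tile([10, 245], 32)            # dot / background alternation
img = np.vstack([good_row] * 4).astype(np.uint8)
span, steps = speckle_line_contrast(img, 0)
print(f"span={span:.2f}, mean step={steps:.0f}")  # → span=0.92, mean step=235
```

A span near 1 with large neighbor-to-neighbor steps indicates peaks that swing between the extremes within two data points; low values at any tested row suggest the dots are too small or too large for the pixel size.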

Chapter 4 Use of the test set up

The previous chapter described how to install the test set up. The test set up is now able to take pictures, but to get proper pictures a lot of adjustments are often needed. This chapter describes how to make those adjustments and how to make the recording.

4.0.1 Visualizing the phone

This paragraph describes how to make adjustments to the test set up once the coarse test set up is ready and installed following the procedure described in chapter 3. To start, the PFV software has to be in live mode. The phone has to be put at the impact place. This place is straight below the suction cup from which the phone will be dropped; it can be estimated by placing a ruler vertically along the suction cup or by holding a rope with a weight at the end along the suction cup. Start the procedure with a low framerate, typically 2000 to 8000 fps, so a big image is available. When the fine test set up parameters are found, the framerate can be increased.

Examine the picture in the live screen of the PFV software. If the phone is not visible it can be due to the following reasons:

- The cover is still on the lens.
- The position of the tripod needs to be adjusted.
- The camera head is not correctly orientated on the tripod. A small adjustment is in most cases enough: loosen the screw on the side of the tripod a little, hold the upper part of the tripod with one hand and move the camera head with the other. Keep watching the computer screen while moving the camera head and see if there is any improvement. When the right position is found, tighten the screw again.
- The light intensity is too low. Try turning the knobs on the cold light sources to increase the light intensity.
- The optical fibers do not aim at the telephone. Try to aim the optical fibers at the telephone.
- The shutter time is chosen too high. This can be adjusted in the PFV software under the Camera tab, Shutter button. A drop-down menu appears in which different shutter times can be chosen. A lower shutter time will give a brighter picture than a higher shutter time.

4.0.2 Choosing right parameters

If the phone is visible, the recording often still needs adjustments. This can be an iterative procedure and does not always have to follow the order of the steps described below.

- Check the intensity of the light. The picture should be uniformly lit. The intensity can be adjusted by several parameters: the position and the light intensity of each cold light source determine the amount of light on a certain area of the phone, and there should be a balance between the position and intensity of both light sources. The overall intensity can be adjusted with the diaphragm. When the light adjustments are made, the uniformity of the lighting needs to be checked. This is done by taking one picture of the phone and importing it into software that can put a mask over pixels with a graylevel lower than a certain threshold (Photoshop is an appropriate program for this). When the threshold slider is raised, the phone should disappear uniformly. If this is not the case, this step has to be repeated.
- Make sure the picture is sharp. This can be adjusted by turning the outer ring on the lens. It is best to focus first with the whole phone in the image. After that, a fine focus can be made by zooming in on the phone and making small turns on the ring until the zoomed part is sharp.
- Choose the appropriate shutter speed. For the shutter speed there is a trade-off between motion-blurred images and the amount of information acquired when a photo is taken. At a lower shutter speed the picture can easily get blurred because of the high velocity of the phone, but the more light is caught during one photo, the more accurate the information about the gray level distribution will be. For most cases on the FASTCAM Ultima 512 a shutter speed around seconds is appropriate.
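The threshold test for lighting uniformity described above can be scripted instead of done by hand in Photoshop. A sketch on synthetic data (the function and the phone mask are assumptions for illustration): with uniform lighting, the fraction of phone pixels above the threshold should drop from 1 to 0 over a narrow band of thresholds rather than fading out gradually in one corner first.

```python
import numpy as np

def disappearance_curve(gray, mask, thresholds):
    """Fraction of phone pixels (inside mask) still above each threshold."""
    vals = gray[mask]
    return np.array([(vals > t).mean() for t in thresholds])

# Synthetic uniformly lit phone: graylevels tightly clustered around 200.
rng = np.random.default_rng(1)
gray = rng.normal(200, 5, size=(100, 100))
mask = np.zeros((100, 100), dtype=bool)
mask[20:80, 20:80] = True                       # assumed phone region
thresholds = np.arange(0, 256, 16)
curve = disappearance_curve(gray, mask, thresholds)
# Fully visible at threshold 0, fully gone by threshold 240.
print(round(curve[0], 2), round(curve[-1], 2))  # → 1.0 0.0
```

On a real recording, a curve that stays partial over many threshold steps points at unevenly lit areas, and the light positions or intensities should be rebalanced.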

- Choose the appropriate frame rate. The higher the frame rate, the more data points are available during one drop, but the more inaccurate each data point is. For a drop test the frame rate should in most cases be at least 4000 fps. For more theoretical background, refer to Digital Image Correlation for Analyzing Portable Electronic Products during Drop Impact Tests by P.L.W. Scheijgrond, chapter 3.2.2, Post calculation accuracy.
- Check the position of the phone. When pixels very close to the edge that will touch the ground have to be examined, it is best to include the lowest 10 pixels of the concrete tile in the image. An explanation for this can be found elsewhere in this manual. If the impact place is not correctly filmed, the position of the camera head on the tripod might need to be adjusted: loosen the screw on the side of the tripod a little, hold the upper part of the tripod with one hand and move the camera head with the other. Keep watching the computer screen while moving the camera head until the right position is found.
- Repeat the steps mentioned above until the picture is sharp and the location of the phone and the lighting are properly adjusted.

4.0.3 Recording the drop

Figure 4.1: The air valve of the NDT-2000
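The frame rate trade-off above can be made concrete with free-fall kinematics: a phone dropped from height h hits the ground at v = sqrt(2gh), so at a given frame rate it travels v/fps between consecutive frames. A sketch for the NDT-2000's 1.5 meter drop preset (the function name is illustrative):

```python
import math

def travel_per_frame(height_m, fps):
    """Impact velocity for a free drop from height_m, and the distance
    the phone travels between consecutive frames at the given rate."""
    v = math.sqrt(2 * 9.81 * height_m)   # impact velocity, m/s
    step_mm = v / fps * 1000             # travel per frame, mm
    return v, step_mm

for fps in (4000, 8000, 16000):
    v, step = travel_per_frame(1.5, fps)
    print(f"{fps:5d} fps: impact {v:.2f} m/s, {step:.2f} mm between frames")
```

At 4000 fps the phone from a 1.5 m drop moves roughly 1.4 mm between frames; pushing the rate higher buys more samples of the impact event at the cost of image size and per-frame light.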

Figure 4.2: The PFV software interface (zoom button, point locater, data save tab, camera tab, file viewer tab, format save selection, playback box, playback framerate selection)

Figure 4.3: The remote control of the NDT-2000 (start button, on/off button, emergency stop, 1.5 meter drop, 2 meter drop)

If the test set up is adjusted in the right way, a test can be conducted. After that, the test results need to be examined to see whether they are appropriate input for the software. If the recordings are not appropriate, some adjustments to the test set up may be needed and a new recording has to be made. This chapter describes how to make a drop test and how to analyze the results.


More information

PASS Sample Size Software

PASS Sample Size Software Chapter 945 Introduction This section describes the options that are available for the appearance of a histogram. A set of all these options can be stored as a template file which can be retrieved later.

More information

Date Morning/Afternoon Time allowed: 1 hour 30 minutes

Date Morning/Afternoon Time allowed: 1 hour 30 minutes AS Level Physics B (Advancing Physics) H157/02 Physics in depth Practice Question Paper Date Morning/Afternoon Time allowed: 1 hour 30 minutes You must have: the Data, Formulae and Relationships Booklet

More information

Sensor Calibration Lab

Sensor Calibration Lab Sensor Calibration Lab The lab is organized with an introductory background on calibration and the LED speed sensors. This is followed by three sections describing the three calibration techniques which

More information

A New Elastic-wave-based NDT System for Imaging Defects inside Concrete Structures

A New Elastic-wave-based NDT System for Imaging Defects inside Concrete Structures A New Elastic-wave-based NDT System for Imaging Defects inside Concrete Structures Jian-Hua Tong and Shu-Tao Liao Abstract In this paper, a new elastic-wave-based NDT system was proposed and then applied

More information

Graphing Techniques. Figure 1. c 2011 Advanced Instructional Systems, Inc. and the University of North Carolina 1

Graphing Techniques. Figure 1. c 2011 Advanced Instructional Systems, Inc. and the University of North Carolina 1 Graphing Techniques The construction of graphs is a very important technique in experimental physics. Graphs provide a compact and efficient way of displaying the functional relationship between two experimental

More information

This experiment is under development and thus we appreciate any and all comments as we design an interesting and achievable set of goals.

This experiment is under development and thus we appreciate any and all comments as we design an interesting and achievable set of goals. Experiment 7 Geometrical Optics You will be introduced to ray optics and image formation in this experiment. We will use the optical rail, lenses, and the camera body to quantify image formation and magnification;

More information

Detection of mechanical instability in DI-fluxgate sensors

Detection of mechanical instability in DI-fluxgate sensors Downloaded from orbit.dtu.dk on: Nov 18, 2018 Detection of mechanical instability in DI-fluxgate sensors Pedersen, Lars William; Matzka, Jürgen Published in: Proceedings of the XVth IAGA Workshop on Geomagnetic

More information

Speckle disturbance limit in laserbased cinema projection systems

Speckle disturbance limit in laserbased cinema projection systems Speckle disturbance limit in laserbased cinema projection systems Guy Verschaffelt 1,*, Stijn Roelandt 2, Youri Meuret 2,3, Wendy Van den Broeck 4, Katriina Kilpi 4, Bram Lievens 4, An Jacobs 4, Peter

More information

Determining the Relationship Between the Range and Initial Velocity of an Object Moving in Projectile Motion

Determining the Relationship Between the Range and Initial Velocity of an Object Moving in Projectile Motion Determining the Relationship Between the Range and Initial Velocity of an Object Moving in Projectile Motion Sadaf Fatima, Wendy Mixaynath October 07, 2011 ABSTRACT A small, spherical object (bearing ball)

More information

Published in: Proceedings of the 20th Annual Symposium of the IEEE Photonics Benelux Chapter, November 2015, Brussels, Belgium

Published in: Proceedings of the 20th Annual Symposium of the IEEE Photonics Benelux Chapter, November 2015, Brussels, Belgium A Si3N4 optical ring resonator true time delay for optically-assisted satellite radio beamforming Tessema, N.M.; Cao, Z.; van Zantvoort, J.H.C.; Tangdiongga, E.; Koonen, A.M.J. Published in: Proceedings

More information

Lab 4 Projectile Motion

Lab 4 Projectile Motion b Lab 4 Projectile Motion What You Need To Know: x x v v v o ox ox v v ox at 1 t at a x FIGURE 1 Linear Motion Equations The Physics So far in lab you ve dealt with an object moving horizontally or an

More information

Non resonant slots for wide band 1D scanning arrays

Non resonant slots for wide band 1D scanning arrays Non resonant slots for wide band 1D scanning arrays Bruni, S.; Neto, A.; Maci, S.; Gerini, G. Published in: Proceedings of 2005 IEEE Antennas and Propagation Society International Symposium, 3-8 July 2005,

More information

Demosaicing Algorithms

Demosaicing Algorithms Demosaicing Algorithms Rami Cohen August 30, 2010 Contents 1 Demosaicing 2 1.1 Algorithms............................. 2 1.2 Post Processing.......................... 6 1.3 Performance............................

More information

Mod. 2 p. 1. Prof. Dr. Christoph Kleinn Institut für Waldinventur und Waldwachstum Arbeitsbereich Fernerkundung und Waldinventur

Mod. 2 p. 1. Prof. Dr. Christoph Kleinn Institut für Waldinventur und Waldwachstum Arbeitsbereich Fernerkundung und Waldinventur Histograms of gray values for TM bands 1-7 for the example image - Band 4 and 5 show more differentiation than the others (contrast=the ratio of brightest to darkest areas of a landscape). - Judging from

More information

Using Optics to Optimize Your Machine Vision Application

Using Optics to Optimize Your Machine Vision Application Expert Guide Using Optics to Optimize Your Machine Vision Application Introduction The lens is responsible for creating sufficient image quality to enable the vision system to extract the desired information

More information

Kit for building your own THz Time-Domain Spectrometer

Kit for building your own THz Time-Domain Spectrometer Kit for building your own THz Time-Domain Spectrometer 16/06/2016 1 Table of contents 0. Parts for the THz Kit... 3 1. Delay line... 4 2. Pulse generator and lock-in detector... 5 3. THz antennas... 6

More information

FULL SCALE FAILURE TESTING OF A REINFORCED CONCRETE BRIDGE: PHOTOGRAPHIC STRAIN MONITORING

FULL SCALE FAILURE TESTING OF A REINFORCED CONCRETE BRIDGE: PHOTOGRAPHIC STRAIN MONITORING FULL SCALE FAILURE TESTING OF A REINFORCED CONCRETE BRIDGE: PHOTOGRAPHIC STRAIN MONITORING Gabriel SAS University or Affiliation email address* PhD, Researcher NORUT Narvik AS Lodve Langes gt. 2, N-8504,

More information

Aalborg Universitet. MEMS Tunable Antennas to Address LTE 600 MHz-bands Barrio, Samantha Caporal Del; Morris, Art; Pedersen, Gert F.

Aalborg Universitet. MEMS Tunable Antennas to Address LTE 600 MHz-bands Barrio, Samantha Caporal Del; Morris, Art; Pedersen, Gert F. Aalborg Universitet MEMS Tunable Antennas to Address LTE 6 MHz-bands Barrio, Samantha Caporal Del; Morris, Art; Pedersen, Gert F. Published in: 9th European Conference on Antennas and Propagation (EuCAP),

More information

IRST ANALYSIS REPORT

IRST ANALYSIS REPORT IRST ANALYSIS REPORT Report Prepared by: Everett George Dahlgren Division Naval Surface Warfare Center Electro-Optical Systems Branch (F44) Dahlgren, VA 22448 Technical Revision: 1992-12-17 Format Revision:

More information

EE482: Digital Signal Processing Applications

EE482: Digital Signal Processing Applications Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu EE482: Digital Signal Processing Applications Spring 2014 TTh 14:30-15:45 CBC C222 Lecture 15 Image Processing 14/04/15 http://www.ee.unlv.edu/~b1morris/ee482/

More information

Rapid Array Scanning with the MS2000 Stage

Rapid Array Scanning with the MS2000 Stage Technical Note 124 August 2010 Applied Scientific Instrumentation 29391 W. Enid Rd. Eugene, OR 97402 Rapid Array Scanning with the MS2000 Stage Introduction A common problem for automated microscopy is

More information

Chapter 4 Number Theory

Chapter 4 Number Theory Chapter 4 Number Theory Throughout the study of numbers, students Á should identify classes of numbers and examine their properties. For example, integers that are divisible by 2 are called even numbers

More information

Design Description Document

Design Description Document UNIVERSITY OF ROCHESTER Design Description Document Flat Output Backlit Strobe Dare Bodington, Changchen Chen, Nick Cirucci Customer: Engineers: Advisor committee: Sydor Instruments Dare Bodington, Changchen

More information

A Waveguide Transverse Broad Wall Slot Radiating Between Baffles

A Waveguide Transverse Broad Wall Slot Radiating Between Baffles Downloaded from orbit.dtu.dk on: Aug 25, 2018 A Waveguide Transverse Broad Wall Slot Radiating Between Baffles Dich, Mikael; Rengarajan, S.R. Published in: Proc. of IEEE Antenna and Propagation Society

More information

Chapter 4 MASK Encryption: Results with Image Analysis

Chapter 4 MASK Encryption: Results with Image Analysis 95 Chapter 4 MASK Encryption: Results with Image Analysis This chapter discusses the tests conducted and analysis made on MASK encryption, with gray scale and colour images. Statistical analysis including

More information

Chapter 4: Patterns and Relationships

Chapter 4: Patterns and Relationships Chapter : Patterns and Relationships Getting Started, p. 13 1. a) The factors of 1 are 1,, 3,, 6, and 1. The factors of are 1,,, 7, 1, and. The greatest common factor is. b) The factors of 16 are 1,,,,

More information

Orthonormal bases and tilings of the time-frequency plane for music processing Juan M. Vuletich *

Orthonormal bases and tilings of the time-frequency plane for music processing Juan M. Vuletich * Orthonormal bases and tilings of the time-frequency plane for music processing Juan M. Vuletich * Dept. of Computer Science, University of Buenos Aires, Argentina ABSTRACT Conventional techniques for signal

More information

CoE4TN4 Image Processing. Chapter 3: Intensity Transformation and Spatial Filtering

CoE4TN4 Image Processing. Chapter 3: Intensity Transformation and Spatial Filtering CoE4TN4 Image Processing Chapter 3: Intensity Transformation and Spatial Filtering Image Enhancement Enhancement techniques: to process an image so that the result is more suitable than the original image

More information

Keywords: cylindrical near-field acquisition, mechanical and electrical errors, uncertainty, directivity.

Keywords: cylindrical near-field acquisition, mechanical and electrical errors, uncertainty, directivity. UNCERTAINTY EVALUATION THROUGH SIMULATIONS OF VIRTUAL ACQUISITIONS MODIFIED WITH MECHANICAL AND ELECTRICAL ERRORS IN A CYLINDRICAL NEAR-FIELD ANTENNA MEASUREMENT SYSTEM S. Burgos, M. Sierra-Castañer, F.

More information

Appendix III Graphs in the Introductory Physics Laboratory

Appendix III Graphs in the Introductory Physics Laboratory Appendix III Graphs in the Introductory Physics Laboratory 1. Introduction One of the purposes of the introductory physics laboratory is to train the student in the presentation and analysis of experimental

More information

Real Time Word to Picture Translation for Chinese Restaurant Menus

Real Time Word to Picture Translation for Chinese Restaurant Menus Real Time Word to Picture Translation for Chinese Restaurant Menus Michelle Jin, Ling Xiao Wang, Boyang Zhang Email: mzjin12, lx2wang, boyangz @stanford.edu EE268 Project Report, Spring 2014 Abstract--We

More information

DetectionofMicrostrctureofRoughnessbyOpticalMethod

DetectionofMicrostrctureofRoughnessbyOpticalMethod Global Journal of Researches in Engineering Chemical Engineering Volume 1 Issue Version 1.0 Year 01 Type: Double Blind Peer Reviewed International Research Journal Publisher: Global Journals Inc. (USA)

More information

Motic Live Imaging Module. Windows OS User Manual

Motic Live Imaging Module. Windows OS User Manual Motic Live Imaging Module Windows OS User Manual Motic Live Imaging Module Windows OS User Manual CONTENTS (Linked) Introduction 05 Menus, bars and tools 06 Title bar 06 Menu bar 06 Status bar 07 FPS 07

More information

Sensor Calibration Lab

Sensor Calibration Lab Sensor Calibration Lab The lab is organized with an introductory background on calibration and the LED speed sensors. This is followed by three sections describing the three calibration techniques which

More information

How Stencil Manufacturing Methods Impact Precision and Accuracy

How Stencil Manufacturing Methods Impact Precision and Accuracy How Stencil Manufacturing Methods Impact Precision and Accuracy Ahne Oosterhof & Shane Stafford May 22, 2012 1. Happy Tuesday everyone, and welcome to today s webinar, How Stencil Manufacturing Methods

More information

Refined Slanted-Edge Measurement for Practical Camera and Scanner Testing

Refined Slanted-Edge Measurement for Practical Camera and Scanner Testing Refined Slanted-Edge Measurement for Practical Camera and Scanner Testing Peter D. Burns and Don Williams Eastman Kodak Company Rochester, NY USA Abstract It has been almost five years since the ISO adopted

More information

Project 1: Game of Bricks

Project 1: Game of Bricks Project 1: Game of Bricks Game Description This is a game you play with a ball and a flat paddle. A number of bricks are lined up at the top of the screen. As the ball bounces up and down you use the paddle

More information

OFFSET AND NOISE COMPENSATION

OFFSET AND NOISE COMPENSATION OFFSET AND NOISE COMPENSATION AO 10V 8.1 Offset and fixed pattern noise reduction Offset variation - shading AO 10V 8.2 Row Noise AO 10V 8.3 Offset compensation Global offset calibration Dark level is

More information

Image Enhancement using Histogram Equalization and Spatial Filtering

Image Enhancement using Histogram Equalization and Spatial Filtering Image Enhancement using Histogram Equalization and Spatial Filtering Fari Muhammad Abubakar 1 1 Department of Electronics Engineering Tianjin University of Technology and Education (TUTE) Tianjin, P.R.

More information

WHITE PAPER. Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception

WHITE PAPER. Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Methods for Measuring Flat Panel Display Defects and Mura as Correlated to Human Visual Perception Abstract

More information

Aalborg Universitet. Large-Scale Analysis of Art Proportions Jensen, Karl Kristoffer. Published in: Arts and Technology

Aalborg Universitet. Large-Scale Analysis of Art Proportions Jensen, Karl Kristoffer. Published in: Arts and Technology Aalborg Universitet Large-Scale Analysis of Art Proportions Jensen, Karl Kristoffer Published in: Arts and Technology DOI (link to publication from Publisher): 10.1007/978-3-319-18836-2_16 Creative Commons

More information

The study of combining hive-grid target with sub-pixel analysis for measurement of structural experiment

The study of combining hive-grid target with sub-pixel analysis for measurement of structural experiment icccbe 2010 Nottingham University Press Proceedings of the International Conference on Computing in Civil and Building Engineering W Tizani (Editor) The study of combining hive-grid target with sub-pixel

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION SUPPLEMENTARY INFORMATION doi:0.038/nature727 Table of Contents S. Power and Phase Management in the Nanophotonic Phased Array 3 S.2 Nanoantenna Design 6 S.3 Synthesis of Large-Scale Nanophotonic Phased

More information