Evaluating the stability of SIFT keypoints across cameras


Max Van Kleek
Agent-based Intelligent Reactive Environments, MIT CSAIL

ABSTRACT

Object identification using Scale-Invariant Feature Transform (SIFT) keypoints requires stable keypoint signatures that can be reliably reproduced across variations in lighting, viewing angle, and camera imaging characteristics. While reproducibility under varying lighting and viewing angles has been demonstrated elsewhere, this work empirically evaluates the reproducibility of SIFT keypoints, and the stability of their corresponding feature signatures, across a variety of camera configurations within a controlled lighting and scene arrangement.

INTRODUCTION

The general problem of object recognition has been an important goal of machine vision research since the field's inception. Over the past decade, the development of lower-cost, higher-performing computers that require less power and occupy less physical space, together with advances that have yielded lower-cost and smaller cameras, has made it feasible to embed vision-based systems nearly everywhere, providing people practical assistance on a wider variety of tasks than ever before. At the same time, these applications have placed new demands on vision algorithms: to perform robustly, to recognize a larger variety of objects under a wider variety of lighting and scene conditions with the same or greater reliability than previously demonstrated, and to do so using a greater variety of imaging technologies. Similar advances in communications technology, in particular low-cost, high-bandwidth wireless networking, have made it possible for applications spanning multiple physically distributed computers or resources to coordinate more easily on particular tasks.
As these applications become increasingly pervasive, robustness to variations in imaging devices will also become a necessity. For example, vision-enabled applications intended for home use or for mobile handsets will need to use whatever commodity image capture hardware is already available on users' PCs or cell phones, respectively. Likewise, distributed vision applications may need to pull image data from multiple heterogeneous imaging devices. This paper presents an evaluation of a popular object recognition technique across a number of commodity image capture devices that are readily available today.

BACKGROUND

One technique that has recently gained attention, following successful demonstrations of recognizing both object instances [] and object classes [], is the use of robust local feature descriptors. Techniques based on local features first identify a small set of salient keypoints that are likely to capture interesting information about an object in an image, analyze the image statistics around these keypoints, and associate the resulting set with a particular object. The object may then be identified in any new image by locating keypoints and matching them against those previously associated with objects. In particular, the Scale-Invariant Feature Transform (SIFT), proposed by David Lowe, selects candidate keypoints by searching images at multiple scales for points that are likely to be highly localizable, and then labels each keypoint with a signature derived from the gradients around it. By associating sets of these keypoints and accompanying signatures with an object, the object can later be identified by merely finding the corresponding set of keypoints and features in a new image.
The single most important characteristic of these keypoints and signatures for reliable object recognition is their reproducibility across the image variations that are likely to affect how an object is perceived. That is, the criterion for choosing a particular keypoint should be how likely it is to be detected and consistently identified in future images under variations in lighting, viewing angle and object orientation, lens distortion, and image noise. Lowe empirically analyzed the sensitivity of SIFT keypoints to object rotation, particularly off-axis to the image plane, in a recent paper []. Mikolajczyk et al. examined SIFT keypoint performance under changes in lighting direction and intensity []. However, little prior work has examined keypoint reproducibility across variations in cameras, whether in physical imaging characteristics, such as lens, aperture, and imager configuration, or in performance characteristics of the imager, such as sensitivity, resolution, and noise level. This work attempts to provide initial insight into SIFT performance across a range of cameras through a series of experiments.

Model                      Detector Type  Interface and Price
Logitech QuickCam Express  CMOS, 5x88     USB, -bit color ($5 USD)
Logitech QuickCam Pro      CCD, 6x8       USB, -bit color ($5 USD)
Sony EVI-D                 CCD, NTSC      S-Video, digitized using a Pinnacle Micro DV capture device ($959 USD)
Nikon Coolpix 99           CCD, 8x56      Digital still camera ($5 USD)

Table. Cameras used and their specifications.

EXPERIMENT

Setup

To determine the reproducibility of SIFT keypoints and their associated signatures across camera types, four cameras were selected that varied widely in type, approximate market price, and specifications. As can be seen in the table, the cameras ranged from two widely available low-end USB webcams, to an analog S-Video NTSC video camera digitized into DV, to a high-end consumer-grade digital still camera. Images were either captured directly at the working resolution (for the webcams) or downsampled after capture at the device's native resolution. Each camera in turn was mounted on a tripod in an identical fashion at a fixed location in a room. Five incandescent lights were placed behind the camera, which was pointed at the subject at a fixed distance (one distance for the human subject, another for the toy robot). The pictures were taken against a white wall in the laboratory. For each of the four cameras and the two conditions (human and toy robot), six images were taken: one of the background without a subject (for background subtraction), one with the subject facing front in the center of the image, two with the subject facing approximately 5 and 5 degrees away from fronto-parallel to the left of the camera, and two facing approximately 5 and 5 degrees away from fronto-parallel to the right. Samples of the fronto-parallel pose for each camera are shown in the figure.
Procedure

Prior to SIFT keypoint detection, images from each camera were batch-loaded and converted to greyscale by averaging the red, green, and blue intensities of each pixel. Background images were separated from the rest. Contrast stretching was then performed uniformly across the whole set, by taking the minimum and maximum pixel intensities across all images and scaling the intermediate values to occupy the whole range.

Figure. Front pose for the human subject using (clockwise from the top-left): QuickCam Express, QuickCam Pro, Nikon Coolpix, and Sony NTSC.

Figure. Background elimination (left); mask generated by dilation with a disk of r = 5 pixels (right).

Background elimination

To compute a background model, the mean and variance of each pixel were computed across the background images. To segment foreground from background, each pixel in the remaining images of the set was compared against a normal distribution with the corresponding mean and variance from the model. Pixels that were less likely than a threshold value (ε = .) under this model were labeled as foreground; otherwise the pixel was labeled as background. An example of running this algorithm on an image can be seen in the figure. As can be seen in the left image, our background elimination procedure occasionally left holes where the foreground approached the color of the white wall. To fill in these occasional mistakes, the mask was dilated with a disk structuring element of radius 5 pixels. This effectively completed the regions for all of our test images, as can be seen in the resulting mask on the right.

Keypoint identification

Candidate keypoints were selected by searching a difference-of-Gaussians (DoG) pyramid one octave at a time, selecting local maxima whose intensity exceeds a threshold (t = .8).
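The preprocessing and background-modeling steps above can be sketched as follows. This is a minimal NumPy/SciPy sketch under stated assumptions: the function names are ours, and since the paper's exact likelihood threshold was not recoverable, `eps` below is an illustrative guess rather than the value used in the experiments.

```python
import numpy as np
from scipy import ndimage

def preprocess(rgb_images):
    """Greyscale by channel averaging, then uniform contrast stretching
    computed over the whole image set."""
    grey = [im.astype(float).mean(axis=2) for im in rgb_images]
    lo = min(g.min() for g in grey)
    hi = max(g.max() for g in grey)
    return [(g - lo) / (hi - lo) for g in grey]

def disk(radius):
    """Disk-shaped structuring element for mask dilation."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return xx ** 2 + yy ** 2 <= radius ** 2

def foreground_masks(images, backgrounds, eps=1e-3, radius=5):
    """Per-pixel Gaussian background model: a pixel whose likelihood
    under N(mean, var) falls below eps is foreground; the mask is then
    dilated with a disk of the given radius to fill holes."""
    bg = np.stack(backgrounds).astype(float)
    mu = bg.mean(axis=0)
    var = bg.var(axis=0) + 1e-6          # avoid division by zero
    masks = []
    for im in images:
        p = np.exp(-(im - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        fg = p < eps
        masks.append(ndimage.binary_dilation(fg, structure=disk(radius)))
    return masks
```

Dilating with a disk rather than a square keeps the filled mask roughly isotropic around thin gaps, matching the r = 5 pixel disk described above.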
To filter out edges, the Hessian was computed from the image gradients at each candidate keypoint, and the candidate was rejected if the ratio of the eigenvalues of its Hessian was too large (R = ). Candidates that fell outside the foreground mask (computed

from the background model, above) were also eliminated, to preserve only keypoints pertaining to the foreground.

SIFT feature extraction

SIFT features were computed by first finding the keypoint orientation: the dominant gradient direction over a window (size 6x6) centered at the keypoint. All possible gradient orientations were discretized into a set of N bins (N = 6). The gradient magnitude of each pixel within the window was weighted by a 2D Gaussian of covariance σ (where σ was assigned half the width of the descriptor window) centered over the keypoint, and the weighted gradients were summed. The orientation assigned to the keypoint was the direction of this weighted sum, discretized to the nearest bin. After the keypoint orientation was assigned, the window was divided into q equally sized regions. A weighted histogram of gradient orientations was computed for each of these regions separately, measuring orientations relative to the keypoint orientation. The per-region histograms were then concatenated to form the SIFT feature vector of the keypoint: a single vector of counts of size N·q.

Keypoint matching

After keypoints and feature vectors were computed for two images, we determined how many keypoints were reproduced from one image to the next by matching SIFT feature vectors between the images. For each keypoint in the first image, A, we found the SIFT keypoint in the second image, B, whose feature vector was most similar (in the Euclidean sense), and assigned the keypoint in A that keypoint's label from B. These keypoints were then labeled and plotted with their associated histograms, and compared visually, as can be seen in the figure. To see how keypoints in B corresponded back to those in A, the process was repeated in the other direction, taking every keypoint in B and finding its nearest neighbor in A.
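The descriptor construction and nearest-neighbour matching described above can be sketched in NumPy as follows. The helper names are ours, and the defaults `n_bins=16` and `grid=4` are illustrative assumptions (several of the paper's actual values were lost in transcription), so treat this as a sketch of the technique rather than the experiment's implementation.

```python
import numpy as np

def descriptor(gx, gy, n_bins=16, grid=4):
    """Orientation-histogram descriptor for a square window of gradient
    maps (gx, gy) centered on a keypoint; length n_bins * grid**2."""
    s = gx.shape[0]
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)

    # Gaussian weighting of gradient magnitudes, sigma = half window width
    yy, xx = np.mgrid[0:s, 0:s] - (s - 1) / 2.0
    sigma = s / 2.0
    wmag = mag * np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))

    # Keypoint orientation: dominant weighted gradient direction
    b = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    votes = np.bincount(b.ravel(), weights=wmag.ravel(), minlength=n_bins)
    kp_ori = votes.argmax() * 2 * np.pi / n_bins

    # Per-region histograms of orientations, measured relative to kp_ori
    rel = ((ang - kp_ori) % (2 * np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    cell = s // grid
    hists = [np.bincount(rel[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel(),
                         weights=wmag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel(),
                         minlength=n_bins)
             for i in range(grid) for j in range(grid)]
    return np.concatenate(hists)

def match(desc_a, desc_b):
    """For each descriptor in A, the index of its Euclidean nearest
    neighbour in B (run in both directions for the A->B and B->A passes)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Measuring orientations relative to `kp_ori` is what makes the descriptor rotation-invariant in the image plane, as in the procedure above.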
RESULTS

To evaluate the performance of our version of the algorithm, it was run on pairs of images (which we will call A and B) from our set. Each pair produced four images: keypoints in A with their original labels, keypoints in A labeled with their closest counterparts from B, keypoints in B with their original labels, and keypoints in B labeled with their closest counterparts from A. The keypoints in A with their original labelings were then compared with the keypoints in B labeled with their closest counterparts in A, and vice versa. This output was manually analyzed for five different counts: (1) the number of keypoints detected in A, (2) the number of keypoints detected in B, (3) the intersection of the keypoints detected in A with those detected in B, (4) the number of keypoints from A that were detected in B and correctly labeled, and (5) the number of keypoints in B that were detected in A and correctly labeled. These correspond to columns -7 of the table. To establish a baseline for how well our algorithm functioned, we first studied the reproducibility of keypoints between pairs of images taken with the same camera; the results for each camera appear in the top half of the table. We then studied cross-camera reproducibility for certain combinations of cameras, shown in the second section of the table. The experiments were then repeated with a larger number of gradient bins, this time N = 6, yielding a total SIFT vector size of 8; results from this second set of experiments appear in the corresponding table. The off-axis rotation images (of both 5 and 5 degrees) yielded no keypoint reproducibility, so labeling performance could not be compared; due to time constraints, further analysis of the off-axis images was left for future work.

DISCUSSION

We must be careful about drawing broad conclusions about the performance of SIFT from the results of our experiment, for a number of reasons.
First, due to time and resource constraints, the number of images we were able to experiment with per camera was extremely small. Second, we used only two subjects: a human form wearing black clothing, whose posture differed slightly between shots, and a bright plastic toy robot, which remained perfectly rigid and motionless between conditions. This is by no means representative of the broad potential uses of local feature descriptors. Third, the SIFT algorithm has a large number of parameters. Although we attempted to set them all to sensible values, either by using the same values described by Lowe or by guessing, time constraints prevented us from tuning each parameter to explore its effect on performance. These parameters included the keypoint window size, the Gaussian weighting covariance, the number of gradient bins, the DoG pyramid step size, the number of DoG pyramid levels, the edge detection threshold, and the initial image blurring amount. Finally, as described under future work, we did not have time to evaluate several of the enhancements to the algorithm suggested by Lowe. Nevertheless, comparing the performance of our naive implementation of SIFT across our experimental conditions suggests several hypotheses that may serve as useful predictive heuristics for practically implementing object recognition systems with SIFT. Our first observation was that keypoint detection and label matching performed very differently. Label matching accuracy (the number of matches divided by the number of keypoints reproduced) was consistently higher in same-camera experiments than between different cameras. This trend was not observed for keypoint reproduction alone, which worked approximately equally well in the within-camera and between-camera conditions.
In fact, in the series of experiments with N = 6 in particular, between-camera keypoint reproduction was generally higher than within-camera reproduction! However, since the number of images is so small, this is not a significant result. On the within-camera human task with N = , an average of 9% of keypoints were reproduced within camera, and out of those

Figure. Output of the algorithm comparing the Nikon to the Sony NTSC (N = 6): keypoints from A with their original labelings, and B keypoints labeled with their nearest matches from A, with the associated SIFT histograms.

Table. Correspondence results for the Human task, N = bins/quadrant (6-element SIFT). Columns: Image A: device (image #); Image B: device (image #); # keypoints in A; # keypoints in B; # of A's keypoints detected in B; # of A's keypoints correctly labeled in B; # of B's keypoints correctly labeled in A. Rows cover same-camera pairs for each of the four cameras, followed by between-camera pairs (Nikon vs. Sony NTSC, Nikon vs. QuickCam Pro, Nikon vs. QuickCam Express, and QuickCam Pro vs. QuickCam Express).

Table. Correspondence results for the Human task, N = 6 bins/quadrant (8-element SIFT), with the same columns and camera pairs.

Table. Correspondence results for the Robot task, N = 6 bins/quadrant (8-element SIFT). Same columns; rows cover same-camera pairs for each camera, followed by Nikon vs. Sony NTSC, Sony NTSC vs. QuickCam Pro, and Sony NTSC vs. QuickCam Express.

Figure. Toy robot detection results comparing the Sony NTSC camera and matched keypoints from the QuickCam Pro.

reproduced keypoints, on average 8% were recovered. Between cameras, 9% of the original keypoints were recovered, and of those an average of % were properly labeled. For N = 6, the within-camera condition yielded 9.% correct assignment, whereas between cameras was significantly worse, at .%. In the robot condition on the same-camera task, with N = 6, an average of 6.% of keypoints were recovered, and of those 7% were correctly labeled. In the between-camera task, however, reproducibility was much lower, at %, out of which an average of only 6% were properly labeled. The differences between the human and robot experiments are likely due to a number of factors. First, since the robot was completely rigid, unlike the human, its appearance changed little between shots with the same camera. This likely explains the extremely high reproducibility rate and labeling accuracy in the within-camera experiment. It was somewhat perplexing that neither the labelings nor the reproducibility rate between the identical-looking shots from the same camera was perfect, as we might expect. Whether this was due to an imperceptible shifting of the camera, or some similarly imperceptible difference in how the camera imaged the two instances, we could not tell within our experiment. Between cameras, however, we saw a decrease in both reproducibility and labeling compared to the human condition. There are several explanations for why labeling is more challenging in the robot condition. First, there are many more keypoints, so it is inherently less likely that the right keypoint is chosen by chance. More significant, though, is the effect of an abundance of keypoints with similar statistics: many keypoints correspond to locations on the robot that look like other locations, such as the arms and the knees.
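One way to cope with such repeated, similar-looking structure, anticipating the post-hoc k-means merging of SIFT vectors discussed under Future Work, is to cluster the descriptor vectors so that near-duplicate keypoints share a single label. A minimal sketch in plain NumPy, where `k`, the iteration count, and the naive deterministic initialization are all our assumptions:

```python
import numpy as np

def merge_labels(descriptors, k, iters=20):
    """Cluster SIFT vectors with k-means; keypoints in the same cluster
    receive the same label, so repeated structures (e.g. the robot's
    knees) are no longer forced into distinct identities."""
    X = np.asarray(descriptors, dtype=float)
    centers = X[:k].copy()                      # naive initialization
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its members
        for j in range(k):
            members = X[labels == j]
            if len(members):                    # keep empty clusters fixed
                centers[j] = members.mean(axis=0)
    return labels
```

A production implementation would use a k-means++-style initialization and a convergence test rather than a fixed iteration count, but the label-sharing idea is the same.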
Another difference between the robot task and the human task was that the robot task benefited significantly from increasing the number of gradient bins from N = to N = 6. (There were no correct labelings on the robot task with N = .) By contrast, keypoint reproduction for the human subject fared slightly worse with larger N.

FUTURE WORK

The simple implementation of SIFT we used for these experiments did not include the keypoint management recommended by Lowe []. Namely, for each keypoint detected, our implementation naively created a new SIFT keypoint with an accompanying signature, without comparing the new keypoint's similarity to existing keypoints in our set. For objects that are likely to have recurring, similar-looking regions (for example, the robot's knees), such keypoints should all be assigned the same label. Since the robot had many regions of similar appearance, this is the most likely cause of the low reproducibility observed in the between-camera robot condition. This sort of merging could be done during detection, or by clustering the SIFT vectors post hoc with an algorithm such as k-means. The other recommendation made by Lowe concerned choosing the primary orientation of the keypoint: if the initial weighted vote for the primary orientation is close to a tie, Lowe's improvement creates multiple keypoints, one for each of the tied orientations. This seems a wise choice, as it reduces the susceptibility of the keypoint's direction to noise.

CONCLUSION

This study demonstrated that although SIFT keypoint reproducibility and signature correspondence were sensitive to camera variations, correspondence was still possible. Given the generally low reproducibility of keypoints, applications may need to build robust models that rely on many redundant keypoints.
Lowe has demonstrated that object recognition may be possible for certain applications by identifying only a small number of keypoints in an image. Another interesting result was that there was no clear winner with respect to which single camera performed best in our experiment. The one CMOS-sensor camera, the QuickCam Express, consistently underperformed the rest, which were all CCD-based. Since most digital cameras embedded in mobile devices are CMOS-based, this may have important implications for the use of SIFT-based vision algorithms on such devices.

ACKNOWLEDGEMENTS

I would like to thank Bill Freeman and Xiaoxu Ma and my colleagues for an extremely fun and memorable class.

RESOURCES

The code and data set for this paper may be downloaded at emax/6.869/scift. Please contact the author with any questions or comments.

REFERENCES

1. S. Helmer and D. G. Lowe. Object recognition with many local features. Workshop on Generative Model Based Vision.
2. D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision.
3. K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. Computer Vision and Pattern Recognition.


Multi-Resolution Estimation of Optical Flow on Vehicle Tracking under Unpredictable Environments

Multi-Resolution Estimation of Optical Flow on Vehicle Tracking under Unpredictable Environments , pp.32-36 http://dx.doi.org/10.14257/astl.2016.129.07 Multi-Resolution Estimation of Optical Flow on Vehicle Tracking under Unpredictable Environments Viet Dung Do 1 and Dong-Min Woo 1 1 Department of

More information

IMAGE ENHANCEMENT IN SPATIAL DOMAIN

IMAGE ENHANCEMENT IN SPATIAL DOMAIN A First Course in Machine Vision IMAGE ENHANCEMENT IN SPATIAL DOMAIN By: Ehsan Khoramshahi Definitions The principal objective of enhancement is to process an image so that the result is more suitable

More information

Image analysis. CS/CME/BioE/Biophys/BMI 279 Oct. 31 and Nov. 2, 2017 Ron Dror

Image analysis. CS/CME/BioE/Biophys/BMI 279 Oct. 31 and Nov. 2, 2017 Ron Dror Image analysis CS/CME/BioE/Biophys/BMI 279 Oct. 31 and Nov. 2, 2017 Ron Dror 1 Outline Images in molecular and cellular biology Reducing image noise Mean and Gaussian filters Frequency domain interpretation

More information

Computer Vision Slides curtesy of Professor Gregory Dudek

Computer Vision Slides curtesy of Professor Gregory Dudek Computer Vision Slides curtesy of Professor Gregory Dudek Ioannis Rekleitis Why vision? Passive (emits nothing). Discreet. Energy efficient. Intuitive. Powerful (works well for us, right?) Long and short

More information

An Improved Bernsen Algorithm Approaches For License Plate Recognition

An Improved Bernsen Algorithm Approaches For License Plate Recognition IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) ISSN: 78-834, ISBN: 78-8735. Volume 3, Issue 4 (Sep-Oct. 01), PP 01-05 An Improved Bernsen Algorithm Approaches For License Plate Recognition

More information

Robot Visual Mapper. Hung Dang, Jasdeep Hundal and Ramu Nachiappan. Fig. 1: A typical image of Rovio s environment

Robot Visual Mapper. Hung Dang, Jasdeep Hundal and Ramu Nachiappan. Fig. 1: A typical image of Rovio s environment Robot Visual Mapper Hung Dang, Jasdeep Hundal and Ramu Nachiappan Abstract Mapping is an essential component of autonomous robot path planning and navigation. The standard approach often employs laser

More information

Feature Extraction Techniques for Dorsal Hand Vein Pattern

Feature Extraction Techniques for Dorsal Hand Vein Pattern Feature Extraction Techniques for Dorsal Hand Vein Pattern Pooja Ramsoful, Maleika Heenaye-Mamode Khan Department of Computer Science and Engineering University of Mauritius Mauritius pooja.ramsoful@umail.uom.ac.mu,

More information

Scrabble Board Automatic Detector for Third Party Applications

Scrabble Board Automatic Detector for Third Party Applications Scrabble Board Automatic Detector for Third Party Applications David Hirschberg Computer Science Department University of California, Irvine hirschbd@uci.edu Abstract Abstract Scrabble is a well-known

More information

Supplementary Figures

Supplementary Figures Supplementary Figures Supplementary Figure 1. The schematic of the perceptron. Here m is the index of a pixel of an input pattern and can be defined from 1 to 320, j represents the number of the output

More information

Real- Time Computer Vision and Robotics Using Analog VLSI Circuits

Real- Time Computer Vision and Robotics Using Analog VLSI Circuits 750 Koch, Bair, Harris, Horiuchi, Hsu and Luo Real- Time Computer Vision and Robotics Using Analog VLSI Circuits Christof Koch Wyeth Bair John. Harris Timothy Horiuchi Andrew Hsu Jin Luo Computation and

More information

MIT CSAIL Advances in Computer Vision Fall Problem Set 6: Anaglyph Camera Obscura

MIT CSAIL Advances in Computer Vision Fall Problem Set 6: Anaglyph Camera Obscura MIT CSAIL 6.869 Advances in Computer Vision Fall 2013 Problem Set 6: Anaglyph Camera Obscura Posted: Tuesday, October 8, 2013 Due: Thursday, October 17, 2013 You should submit a hard copy of your work

More information

Forensic Framework. Attributing and Authenticating Evidence. Forensic Framework. Attribution. Forensic source identification

Forensic Framework. Attributing and Authenticating Evidence. Forensic Framework. Attribution. Forensic source identification Attributing and Authenticating Evidence Forensic Framework Collection Identify and collect digital evidence selective acquisition? cloud storage? Generate data subset for examination? Examination of evidence

More information

Imaging Particle Analysis: The Importance of Image Quality

Imaging Particle Analysis: The Importance of Image Quality Imaging Particle Analysis: The Importance of Image Quality Lew Brown Technical Director Fluid Imaging Technologies, Inc. Abstract: Imaging particle analysis systems can derive much more information about

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Part 2: Image Enhancement Digital Image Processing Course Introduction in the Spatial Domain Lecture AASS Learning Systems Lab, Teknik Room T26 achim.lilienthal@tech.oru.se Course

More information

Method to acquire regions of fruit, branch and leaf from image of red apple in orchard

Method to acquire regions of fruit, branch and leaf from image of red apple in orchard Modern Physics Letters B Vol. 31, Nos. 19 21 (2017) 1740039 (7 pages) c World Scientific Publishing Company DOI: 10.1142/S0217984917400395 Method to acquire regions of fruit, branch and leaf from image

More information

Comparing Computer-predicted Fixations to Human Gaze

Comparing Computer-predicted Fixations to Human Gaze Comparing Computer-predicted Fixations to Human Gaze Yanxiang Wu School of Computing Clemson University yanxiaw@clemson.edu Andrew T Duchowski School of Computing Clemson University andrewd@cs.clemson.edu

More information

Optical Performance of Nikon F-Mount Lenses. Landon Carter May 11, Measurement and Instrumentation

Optical Performance of Nikon F-Mount Lenses. Landon Carter May 11, Measurement and Instrumentation Optical Performance of Nikon F-Mount Lenses Landon Carter May 11, 2016 2.671 Measurement and Instrumentation Abstract In photographic systems, lenses are one of the most important pieces of the system

More information

Biometrics Final Project Report

Biometrics Final Project Report Andres Uribe au2158 Introduction Biometrics Final Project Report Coin Counter The main objective for the project was to build a program that could count the coins money value in a picture. The work was

More information

International Journal of Innovative Research in Engineering Science and Technology APRIL 2018 ISSN X

International Journal of Innovative Research in Engineering Science and Technology APRIL 2018 ISSN X HIGH DYNAMIC RANGE OF MULTISPECTRAL ACQUISITION USING SPATIAL IMAGES 1 M.Kavitha, M.Tech., 2 N.Kannan, M.E., and 3 S.Dharanya, M.E., 1 Assistant Professor/ CSE, Dhirajlal Gandhi College of Technology,

More information

A Spatial Mean and Median Filter For Noise Removal in Digital Images

A Spatial Mean and Median Filter For Noise Removal in Digital Images A Spatial Mean and Median Filter For Noise Removal in Digital Images N.Rajesh Kumar 1, J.Uday Kumar 2 Associate Professor, Dept. of ECE, Jaya Prakash Narayan College of Engineering, Mahabubnagar, Telangana,

More information

An Autonomous Vehicle Navigation System using Panoramic Machine Vision Techniques

An Autonomous Vehicle Navigation System using Panoramic Machine Vision Techniques An Autonomous Vehicle Navigation System using Panoramic Machine Vision Techniques Kevin Rushant, Department of Computer Science, University of Sheffield, GB. email: krusha@dcs.shef.ac.uk Libor Spacek,

More information

Automatics Vehicle License Plate Recognition using MATLAB

Automatics Vehicle License Plate Recognition using MATLAB Automatics Vehicle License Plate Recognition using MATLAB Alhamzawi Hussein Ali mezher Faculty of Informatics/University of Debrecen Kassai ut 26, 4028 Debrecen, Hungary. Abstract - The objective of this

More information

Finding people in repeated shots of the same scene

Finding people in repeated shots of the same scene Finding people in repeated shots of the same scene Josef Sivic C. Lawrence Zitnick Richard Szeliski University of Oxford Microsoft Research Abstract The goal of this work is to find all occurrences of

More information

Wavelet-based Image Splicing Forgery Detection

Wavelet-based Image Splicing Forgery Detection Wavelet-based Image Splicing Forgery Detection 1 Tulsi Thakur M.Tech (CSE) Student, Department of Computer Technology, basiltulsi@gmail.com 2 Dr. Kavita Singh Head & Associate Professor, Department of

More information

Computer Vision Based Chess Playing Capabilities for the Baxter Humanoid Robot

Computer Vision Based Chess Playing Capabilities for the Baxter Humanoid Robot International Conference on Control, Robotics, and Automation 2016 Computer Vision Based Chess Playing Capabilities for the Baxter Humanoid Robot Andrew Tzer-Yeu Chen, Kevin I-Kai Wang {andrew.chen, kevin.wang}@auckland.ac.nz

More information

Photographing Long Scenes with Multiviewpoint

Photographing Long Scenes with Multiviewpoint Photographing Long Scenes with Multiviewpoint Panoramas A. Agarwala, M. Agrawala, M. Cohen, D. Salesin, R. Szeliski Presenter: Stacy Hsueh Discussant: VasilyVolkov Motivation Want an image that shows an

More information

37 Game Theory. Bebe b1 b2 b3. a Abe a a A Two-Person Zero-Sum Game

37 Game Theory. Bebe b1 b2 b3. a Abe a a A Two-Person Zero-Sum Game 37 Game Theory Game theory is one of the most interesting topics of discrete mathematics. The principal theorem of game theory is sublime and wonderful. We will merely assume this theorem and use it to

More information

A Real Time Static & Dynamic Hand Gesture Recognition System

A Real Time Static & Dynamic Hand Gesture Recognition System International Journal of Engineering Inventions e-issn: 2278-7461, p-issn: 2319-6491 Volume 4, Issue 12 [Aug. 2015] PP: 93-98 A Real Time Static & Dynamic Hand Gesture Recognition System N. Subhash Chandra

More information

NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT:

NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT: IJCE January-June 2012, Volume 4, Number 1 pp. 59 67 NON UNIFORM BACKGROUND REMOVAL FOR PARTICLE ANALYSIS BASED ON MORPHOLOGICAL STRUCTURING ELEMENT: A COMPARATIVE STUDY Prabhdeep Singh1 & A. K. Garg2

More information

Graz University of Technology (Austria)

Graz University of Technology (Austria) Graz University of Technology (Austria) I am in charge of the Vision Based Measurement Group at Graz University of Technology. The research group is focused on two main areas: Object Category Recognition

More information

Nonuniform multi level crossing for signal reconstruction

Nonuniform multi level crossing for signal reconstruction 6 Nonuniform multi level crossing for signal reconstruction 6.1 Introduction In recent years, there has been considerable interest in level crossing algorithms for sampling continuous time signals. Driven

More information

Digital Image Processing 3/e

Digital Image Processing 3/e Laboratory Projects for Digital Image Processing 3/e by Gonzalez and Woods 2008 Prentice Hall Upper Saddle River, NJ 07458 USA www.imageprocessingplace.com The following sample laboratory projects are

More information

IncuCyte ZOOM Fluorescent Processing Overview

IncuCyte ZOOM Fluorescent Processing Overview IncuCyte ZOOM Fluorescent Processing Overview The IncuCyte ZOOM offers users the ability to acquire HD phase as well as dual wavelength fluorescent images of living cells producing multiplexed data that

More information

UM-Based Image Enhancement in Low-Light Situations

UM-Based Image Enhancement in Low-Light Situations UM-Based Image Enhancement in Low-Light Situations SHWU-HUEY YEN * CHUN-HSIEN LIN HWEI-JEN LIN JUI-CHEN CHIEN Department of Computer Science and Information Engineering Tamkang University, 151 Ying-chuan

More information

RELEASING APERTURE FILTER CONSTRAINTS

RELEASING APERTURE FILTER CONSTRAINTS RELEASING APERTURE FILTER CONSTRAINTS Jakub Chlapinski 1, Stephen Marshall 2 1 Department of Microelectronics and Computer Science, Technical University of Lodz, ul. Zeromskiego 116, 90-924 Lodz, Poland

More information

DESIGN & DEVELOPMENT OF COLOR MATCHING ALGORITHM FOR IMAGE RETRIEVAL USING HISTOGRAM AND SEGMENTATION TECHNIQUES

DESIGN & DEVELOPMENT OF COLOR MATCHING ALGORITHM FOR IMAGE RETRIEVAL USING HISTOGRAM AND SEGMENTATION TECHNIQUES International Journal of Information Technology and Knowledge Management July-December 2011, Volume 4, No. 2, pp. 585-589 DESIGN & DEVELOPMENT OF COLOR MATCHING ALGORITHM FOR IMAGE RETRIEVAL USING HISTOGRAM

More information

Reliable Classification of Partially Occluded Coins

Reliable Classification of Partially Occluded Coins Reliable Classification of Partially Occluded Coins e-mail: L.J.P. van der Maaten P.J. Boon MICC, Universiteit Maastricht P.O. Box 616, 6200 MD Maastricht, The Netherlands telephone: (+31)43-3883901 fax:

More information

A Sorting Image Sensor: An Example of Massively Parallel Intensity to Time Processing for Low Latency Computational Sensors

A Sorting Image Sensor: An Example of Massively Parallel Intensity to Time Processing for Low Latency Computational Sensors Proceedings of the 1996 IEEE International Conference on Robotics and Automation Minneapolis, Minnesota April 1996 A Sorting Image Sensor: An Example of Massively Parallel Intensity to Time Processing

More information

Tiny ImageNet Challenge Investigating the Scaling of Inception Layers for Reduced Scale Classification Problems

Tiny ImageNet Challenge Investigating the Scaling of Inception Layers for Reduced Scale Classification Problems Tiny ImageNet Challenge Investigating the Scaling of Inception Layers for Reduced Scale Classification Problems Emeric Stéphane Boigné eboigne@stanford.edu Jan Felix Heyse heyse@stanford.edu Abstract Scaling

More information

Maine Day in May. 54 Chapter 2: Painterly Techniques for Non-Painters

Maine Day in May. 54 Chapter 2: Painterly Techniques for Non-Painters Maine Day in May 54 Chapter 2: Painterly Techniques for Non-Painters Simplifying a Photograph to Achieve a Hand-Rendered Result Excerpted from Beyond Digital Photography: Transforming Photos into Fine

More information

EC-433 Digital Image Processing

EC-433 Digital Image Processing EC-433 Digital Image Processing Lecture 2 Digital Image Fundamentals Dr. Arslan Shaukat 1 Fundamental Steps in DIP Image Acquisition An image is captured by a sensor (such as a monochrome or color TV camera)

More information

MICA at ImageClef 2013 Plant Identification Task

MICA at ImageClef 2013 Plant Identification Task MICA at ImageClef 2013 Plant Identification Task Thi-Lan LE, Ngoc-Hai PHAM International Research Institute MICA UMI2954 HUST Thi-Lan.LE@mica.edu.vn, Ngoc-Hai.Pham@mica.edu.vn I. Introduction In the framework

More information

The ultimate camera. Computational Photography. Creating the ultimate camera. The ultimate camera. What does it do?

The ultimate camera. Computational Photography. Creating the ultimate camera. The ultimate camera. What does it do? Computational Photography The ultimate camera What does it do? Image from Durand & Freeman s MIT Course on Computational Photography Today s reading Szeliski Chapter 9 The ultimate camera Infinite resolution

More information

A Study of Slanted-Edge MTF Stability and Repeatability

A Study of Slanted-Edge MTF Stability and Repeatability A Study of Slanted-Edge MTF Stability and Repeatability Jackson K.M. Roland Imatest LLC, 2995 Wilderness Place Suite 103, Boulder, CO, USA ABSTRACT The slanted-edge method of measuring the spatial frequency

More information

Image Stabilization System on a Camera Module with Image Composition

Image Stabilization System on a Camera Module with Image Composition Image Stabilization System on a Camera Module with Image Composition Yu-Mau Lin, Chiou-Shann Fuh Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan,

More information

Automatic Locating the Centromere on Human Chromosome Pictures

Automatic Locating the Centromere on Human Chromosome Pictures Automatic Locating the Centromere on Human Chromosome Pictures M. Moradi Electrical and Computer Engineering Department, Faculty of Engineering, University of Tehran, Tehran, Iran moradi@iranbme.net S.

More information

Manifesting a Blackboard Image Restore and Mosaic using Multifeature Registration Algorithm

Manifesting a Blackboard Image Restore and Mosaic using Multifeature Registration Algorithm Manifesting a Blackboard Image Restore and Mosaic using Multifeature Registration Algorithm Priyanka Virendrasinh Jadeja 1, Dr. Dhaval R. Bhojani 2 1 Department of Electronics and Communication Engineering,

More information

CHAPTER 4 LOCATING THE CENTER OF THE OPTIC DISC AND MACULA

CHAPTER 4 LOCATING THE CENTER OF THE OPTIC DISC AND MACULA 90 CHAPTER 4 LOCATING THE CENTER OF THE OPTIC DISC AND MACULA The objective in this chapter is to locate the centre and boundary of OD and macula in retinal images. In Diabetic Retinopathy, location of

More information

Recognizing Panoramas

Recognizing Panoramas Recognizing Panoramas Kevin Luo Stanford University 450 Serra Mall, Stanford, CA 94305 kluo8128@stanford.edu Abstract This project concerns the topic of panorama stitching. Given a set of overlapping photos,

More information

Colour Profiling Using Multiple Colour Spaces

Colour Profiling Using Multiple Colour Spaces Colour Profiling Using Multiple Colour Spaces Nicola Duffy and Gerard Lacey Computer Vision and Robotics Group, Trinity College, Dublin.Ireland duffynn@cs.tcd.ie Abstract This paper presents an original

More information

Fast, Robust Colour Vision for the Monash Humanoid Andrew Price Geoff Taylor Lindsay Kleeman

Fast, Robust Colour Vision for the Monash Humanoid Andrew Price Geoff Taylor Lindsay Kleeman Fast, Robust Colour Vision for the Monash Humanoid Andrew Price Geoff Taylor Lindsay Kleeman Intelligent Robotics Research Centre Monash University Clayton 3168, Australia andrew.price@eng.monash.edu.au

More information

Image analysis. CS/CME/BioE/Biophys/BMI 279 Oct. 31 and Nov. 2, 2017 Ron Dror

Image analysis. CS/CME/BioE/Biophys/BMI 279 Oct. 31 and Nov. 2, 2017 Ron Dror Image analysis CS/CME/BioE/Biophys/BMI 279 Oct. 31 and Nov. 2, 2017 Ron Dror 1 Outline Images in molecular and cellular biology Reducing image noise Mean and Gaussian filters Frequency domain interpretation

More information

Lane Detection in Automotive

Lane Detection in Automotive Lane Detection in Automotive Contents Introduction... 2 Image Processing... 2 Reading an image... 3 RGB to Gray... 3 Mean and Gaussian filtering... 5 Defining our Region of Interest... 6 BirdsEyeView Transformation...

More information

A TWO-PART PREDICTIVE CODER FOR MULTITASK SIGNAL COMPRESSION. Scott Deeann Chen and Pierre Moulin

A TWO-PART PREDICTIVE CODER FOR MULTITASK SIGNAL COMPRESSION. Scott Deeann Chen and Pierre Moulin A TWO-PART PREDICTIVE CODER FOR MULTITASK SIGNAL COMPRESSION Scott Deeann Chen and Pierre Moulin University of Illinois at Urbana-Champaign Department of Electrical and Computer Engineering 5 North Mathews

More information

Chapter 2 Transformation Invariant Image Recognition Using Multilayer Perceptron 2.1 Introduction

Chapter 2 Transformation Invariant Image Recognition Using Multilayer Perceptron 2.1 Introduction Chapter 2 Transformation Invariant Image Recognition Using Multilayer Perceptron 2.1 Introduction A multilayer perceptron (MLP) [52, 53] comprises an input layer, any number of hidden layers and an output

More information

CSC 320 H1S CSC320 Exam Study Guide (Last updated: April 2, 2015) Winter 2015

CSC 320 H1S CSC320 Exam Study Guide (Last updated: April 2, 2015) Winter 2015 Question 1. Suppose you have an image I that contains an image of a left eye (the image is detailed enough that it makes a difference that it s the left eye). Write pseudocode to find other left eyes in

More information

Counting Sugar Crystals using Image Processing Techniques

Counting Sugar Crystals using Image Processing Techniques Counting Sugar Crystals using Image Processing Techniques Bill Seota, Netshiunda Emmanuel, GodsGift Uzor, Risuna Nkolele, Precious Makganoto, David Merand, Andrew Paskaramoorthy, Nouralden, Lucky Daniel

More information

Localization (Position Estimation) Problem in WSN

Localization (Position Estimation) Problem in WSN Localization (Position Estimation) Problem in WSN [1] Convex Position Estimation in Wireless Sensor Networks by L. Doherty, K.S.J. Pister, and L.E. Ghaoui [2] Semidefinite Programming for Ad Hoc Wireless

More information

CSE 564: Scientific Visualization

CSE 564: Scientific Visualization CSE 564: Scientific Visualization Lecture 5: Image Processing Klaus Mueller Stony Brook University Computer Science Department Klaus Mueller, Stony Brook 2003 Image Processing Definitions Purpose: - enhance

More information

Multiresolution Analysis of Connectivity

Multiresolution Analysis of Connectivity Multiresolution Analysis of Connectivity Atul Sajjanhar 1, Guojun Lu 2, Dengsheng Zhang 2, Tian Qi 3 1 School of Information Technology Deakin University 221 Burwood Highway Burwood, VIC 3125 Australia

More information

MATHEMATICAL MODELS Vol. I - Measurements in Mathematical Modeling and Data Processing - William Moran and Barbara La Scala

MATHEMATICAL MODELS Vol. I - Measurements in Mathematical Modeling and Data Processing - William Moran and Barbara La Scala MEASUREMENTS IN MATEMATICAL MODELING AND DATA PROCESSING William Moran and University of Melbourne, Australia Keywords detection theory, estimation theory, signal processing, hypothesis testing Contents.

More information

Photometry. Variable Star Photometry

Photometry. Variable Star Photometry Variable Star Photometry Photometry One of the most basic of astronomical analysis is photometry, or the monitoring of the light output of an astronomical object. Many stars, be they in binaries, interacting,

More information

Eileen Donelan. What s in my Camera Bag? Minimum Camera Macro Lens Cable Release Tripod

Eileen Donelan. What s in my Camera Bag? Minimum Camera Macro Lens Cable Release Tripod Close Up Photography Creating Artistic Floral Images Eileen Donelan Equipment Choices for Close Up Work What s in my Camera Bag? Minimum Camera Macro Lens Cable Release Tripod Additional Light Reflector

More information

Image Enhancement. DD2423 Image Analysis and Computer Vision. Computational Vision and Active Perception School of Computer Science and Communication

Image Enhancement. DD2423 Image Analysis and Computer Vision. Computational Vision and Active Perception School of Computer Science and Communication Image Enhancement DD2423 Image Analysis and Computer Vision Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 15, 2013 Mårten Björkman (CVAP)

More information

TED TED. τfac τpt. A intensity. B intensity A facilitation voltage Vfac. A direction voltage Vright. A output current Iout. Vfac. Vright. Vleft.

TED TED. τfac τpt. A intensity. B intensity A facilitation voltage Vfac. A direction voltage Vright. A output current Iout. Vfac. Vright. Vleft. Real-Time Analog VLSI Sensors for 2-D Direction of Motion Rainer A. Deutschmann ;2, Charles M. Higgins 2 and Christof Koch 2 Technische Universitat, Munchen 2 California Institute of Technology Pasadena,

More information

Project 4 Results http://www.cs.brown.edu/courses/cs129/results/proj4/jcmace/ http://www.cs.brown.edu/courses/cs129/results/proj4/damoreno/ http://www.cs.brown.edu/courses/csci1290/results/proj4/huag/

More information

Detection of Out-Of-Focus Digital Photographs

Detection of Out-Of-Focus Digital Photographs Detection of Out-Of-Focus Digital Photographs Suk Hwan Lim, Jonathan en, Peng Wu Imaging Systems Laboratory HP Laboratories Palo Alto HPL-2005-14 January 20, 2005* digital photographs, outof-focus, sharpness,

More information