Single Camera Catadioptric Stereo System

Abstract

In this paper, we present a framework for a novel catadioptric stereo camera system that uses a single camera and a single lens with conic mirrors. Various possible designs of catadioptric stereo systems satisfying the single-viewpoint constraint have been developed within this framework. The proposed systems are compact, wide-baseline stereo systems with a panoramic view. Their simple structure reduces the problem of misalignment between the camera and the mirrors, which frees the system from a complex search procedure for the epipolar line. The wide baseline enables very accurate 3D reconstruction of the environment. Additionally, the system has all the advantages of a single-camera stereo system that stem from the identical physical characteristics of the camera. The feasibility of the system as a practical stereo sensor has been demonstrated with experiments in an indoor environment.

1. Introduction

With stereo vision, depth perception is possible by establishing the stereo disparity between two images taken from two distinct viewpoints. It has long been an important tool for machine vision applications, and many types of stereo vision systems have been proposed. Among them, an attractive approach is single-camera stereo. In general, the characteristics of two stereo cameras differ slightly: the focal lengths are not exactly the same and the imaging sensors are not accurately aligned. Moreover, differences in the characteristics of the two imaging sensors cause intensity differences between corresponding points in the stereo images. With a single-camera stereo system, these unwanted geometric and chromatic differences can be eliminated, increasing the ability to find correspondences reliably. A single-camera stereo system that uses a pair of planar mirrors has been suggested [1]. A rectified version was developed so that corresponding features automatically lie on the same scan line [2]. The system complexity was further reduced by using an image-splitting prism called a biprism [3]. However, that system suffers from a short baseline, a narrow field of view and chromatic aberration.

Another attractive approach is catadioptric stereo. A catadioptric vision system using mirrors has been a popular means of obtaining panoramic images [4], which contain a full horizontal field of view (FOV). Generally, it consists of coaxially aligned conic mirrors and cameras. A stereo system coaxially combining a pair of catadioptric vision systems was proposed [5]. However, it is bulky and always suffers from the different physical characteristics of the cameras and slight misalignment between the two subsystems. One partial solution is to use double mirrors positioned in front of a single camera [6]. This reduces the number of required cameras, but leads to problems such as difficult calibration and a complex stereo matching process. Another partial solution is the catadioptric stereo sensor using a conical mirror and a beam splitter [7]. This reduces the number of mirrors, but the two orthogonally positioned cameras still make the system suffer from a bulky size, camera misalignment, and different camera characteristics. A novel and effective solution to these problems is to use a single camera and a double lobed mirror [8][9][10], which is a combined coaxial mirror pair.
However, the critical deficiency of that system is its short baseline. In this paper, we propose a compact, wide-baseline catadioptric stereo system with which neither a camera alignment procedure nor a complex search procedure for the epipolar line is necessary. Additionally, the system has all the advantages of a single-camera stereo system that stem from the identical physical characteristics of the camera.

2. Single-viewpoint (SVP) catadioptric image formation

In various computer vision applications, a wide field of view is necessary.

To obtain a wide field of view, catadioptric vision sensors are useful. They are arrangements of cameras and specially configured mirrors. One important design factor is that the shape and the positioning of the mirror should ensure the constraint of a single effective viewpoint. If Z(R) is the profile of the mirror shape, the complete class of solutions is given by [4]

$$\left(Z - c\right)^2 - R^2\left(\frac{k}{2} - 1\right) = \frac{c^2 (k - 2)}{k} \quad (k \ge 2), \qquad \left(Z - c\right)^2 + R^2\left(1 + \frac{2c^2}{k}\right) = \frac{k}{2} + c^2 \quad (k > 0) \qquad (1)$$

where c is half the distance between the desired virtual viewpoint and the effective pinhole of the imaging lens, and k is a constant of integration. The combination of c and k in the solution determines the mirror configuration [4]. The useful mirror shapes for a catadioptric camera are the plane, ellipsoid, hyperboloid and paraboloid; they are used as the elements of the proposed designs. The systems are rotationally symmetric, so considering the radial cross section of the system is sufficient for the design. With SVP optics, we can convert the projected image to a panoramic image seen from the effective viewpoint by establishing the relationship between a world point and its projection onto the image plane.

Hyperboloidal mirrors and ellipsoidal mirrors, which have two foci, satisfy the SVP constraint. As shown in figure 1(a, b), the light ray from the world point going to the first focus ($F_1$), which is also the effective viewpoint, is reflected to the second focus ($F_2$), which is also the effective pinhole, and projected onto the image plane. Given the parameters of the mirror (a, b and c, with $c^2 = a^2 + b^2$ for the hyperboloid and $a^2 = b^2 + c^2$ for the ellipsoid) and the focal length of the camera (f), the relationship between the world point and its projection onto the image plane is

$$r = f \frac{R_M}{Z_M} \qquad (2)$$

where

$$Z_M = m R_M + 2c \qquad (3)$$

and

$$R_M = \frac{b^2\left(mc + a\sqrt{1+m^2}\right)}{a^2 - b^2 m^2} \ \text{(for the hyperboloid)}, \qquad R_M = \frac{b^2\left(a\sqrt{1+m^2} - mc\right)}{a^2 + b^2 m^2} \ \text{(for the ellipsoid)}. \qquad (4)$$

Here, m is the slope of the line from the world point to the focus $F_1(0, 2c)$:

$$m = \frac{Z_w - 2c}{R_w} \quad (R_w > 0) \qquad (5)$$

where the range of m gives the vertical FOV from the virtual viewpoint, which is determined by the mirror configuration.

Figure 1: Schematic of the catadioptric imaging system using (a) a hyperboloidal mirror, (b) an ellipsoidal mirror, and (c) a couple of paraboloidal mirrors. The foci of the mirrors are denoted by $F_1$ (effective viewpoint) and $F_2$ (effective pinhole).
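The projection model of equations (2)-(5) is easy to evaluate numerically. Below is a minimal sketch (not taken from the paper) that maps a world point to its radial image coordinate, assuming the reconstructed forms of equations (2)-(5) above; the mirror parameters, focal length and test point are illustrative placeholders only.

```python
import numpy as np

def project_svp(Rw, Zw, a, b, c, f, mirror="hyperboloid"):
    """Radial image coordinate of a world point (Rw, Zw), Rw > 0.

    Frame: effective pinhole F2 at the origin, effective viewpoint F1 at
    (0, 2c), camera focal length f.  For the hyperboloid c^2 = a^2 + b^2,
    for the ellipsoid a^2 = b^2 + c^2 (foci at heights 0 and 2c).
    Only valid for rays inside the mirror's vertical FOV.
    """
    m = (Zw - 2.0 * c) / Rw                 # slope toward the viewpoint, eq. (5)
    s = np.sqrt(1.0 + m * m)
    if mirror == "hyperboloid":
        RM = b * b * (m * c + a * s) / (a * a - b * b * m * m)   # eq. (4)
    else:                                   # ellipsoid
        RM = b * b * (a * s - m * c) / (a * a + b * b * m * m)   # eq. (4)
    ZM = m * RM + 2.0 * c                   # mirror point on the incoming ray, eq. (3)
    return f * RM / ZM                      # perspective projection, eq. (2)

# Illustrative values only: a = 30 mm, b = 40 mm (so c = 50 mm), f = 12 mm.
r = project_svp(Rw=1000.0, Zw=200.0, a=30.0, b=40.0, c=50.0, f=12.0)
```

The same structure carries over to the coupled-paraboloid case of equations (6)-(9) below, with the paraboloid intersection replacing equation (4).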

Paraboloidal mirrors also satisfy the SVP constraint. However, these mirrors are used as a coupled pair, because every light ray directed at the effective viewpoint is reflected parallel to the axis of rotation and consequently does not converge at a single point. As shown in figure 1(c), the light ray from the world point going to the focus of the first paraboloidal mirror ($F_1$), which is also the effective viewpoint, is vertically reflected, then folded by the second paraboloidal mirror to its focus ($F_2$), which is also the effective pinhole, and projected onto the image plane. Given the parameters of the paraboloids ($p_1$ and $p_2$), the focal length of the camera (f), and the offset of the first paraboloidal mirror (e), the relationship between the world point and its projection onto the image plane is

$$r = f \frac{R_{M_2}}{Z_{M_2}} \qquad (6)$$

where

$$Z_{M_2} = p_2 - \frac{R_{M_2}^2}{4 p_2} \qquad (7)$$

and

$$R_{M_1} = R_{M_2} = 2 p_1 \left(\sqrt{1+m^2} - m\right). \qquad (8)$$

Here, m is the slope of the line from the world point to the focus of the first paraboloidal mirror, $F_1(0, e)$:

$$m = \frac{Z_w - e}{R_w} \quad (R_w > 0). \qquad (9)$$

3. Single camera catadioptric stereo system

Generally, a catadioptric stereo vision system is an extension of the conventional catadioptric vision system, i.e., a coaxial alignment of perspective cameras and conic mirrors. The coaxial configuration of the cameras and mirrors makes the epipolar lines radially collinear, which frees the system from the time-consuming search along complex epipolar curves during stereo matching. A panoramic stereo system based on a double lobed mirror was first suggested in [8] and subsequently improved in [9]. The system is composed of a single camera and a coaxially aligned double lobed mirror. However, its non-SVP optics make the depth analysis complex, and its short baseline results in extremely low depth resolution. Recently, a slight modification of the system was suggested in [10]. The sophisticated scheme of shifting the effective viewpoint of the outer lobe succeeded in widening the baseline without breaking the SVP constraint. However, the modification did not actually widen the vertical baseline, which is the most important parameter for 3D reconstruction of the horizontal panoramic view.

To widen the vertical baseline of the coaxial single-camera catadioptric stereo system, a modification of the double lobed mirror system is possible. Figure 2 shows a schematic of the wide-baseline catadioptric stereo system. There are two different topologies for the design, as shown. One projects the upper view onto the outer rim of the image plane and the lower view onto the center region; the other exchanges the imaging regions by projecting the upper view onto the center region of the image plane through the center hole of the lower mirror.

Figure 2: Schematic of the catadioptric stereo system using hyperboloidal mirrors: (1) and (2) denote the primary and secondary mirrors, respectively. $F_1$ and $F_2$ denote the foci of the mirrors. P denotes the effective pinhole of the camera.

In these stereo configurations, comprising two mirrors and a single camera, mirrors that independently satisfy the SVP constraint are required. The possible types are the hyperboloid and the ellipsoid. By selectively adopting the two kinds of mirrors at positions (1) and (2) in figure 2(a, b), a total of eight stereo configurations are possible.

4. Folded catadioptric stereo system

The systems introduced in the previous section have the advantages of a long baseline and a single-camera configuration. However, their weakness is the overall size.
Generally, the length of such a system must be more than twice the baseline length. A more compact catadioptric stereo system can be obtained by folding the optical path of the upper mirror down to the level of the imaging lens with a supplementary mirror. Figure 3 shows the general form of a single-viewpoint folded system that uses two conic mirrors, (1) and (2).

This system has an equivalent single-mirror system with the same relation between the directions of scene points (φ) and their image coordinates, determined by β [11]. The light rays from the scene directed at the near focus of mirror (1) are reflected by mirror (1) toward its far focus (in the case of a paraboloidal mirror, this direction is vertical). The system is folded by placing another conic mirror (2) between the near and far foci such that the ray is reflected to the point P, which is the effective pinhole.

Figure 3: General form of a single-viewpoint folded catadioptric camera system that uses two conic mirrors.

With this design scheme, there are nine different mirror pairs that construct folded imaging systems with SVPs [11]. In the stereo configurations comprising three mirrors and a single camera, a folded catadioptric system and a mirror that independently satisfy the SVP constraint are required. Then, for each imaging topology shown in Figure 2, there are two choices for the lower mirror (1) and nine different choices for folding the upper mirror (2). In other words, for a single folding scheme, there are two choices for the lower mirror (1) and two choices of topology, (a) or (b) in Figure 2. As a result, thirty-six folded catadioptric stereo configurations are possible in total.

For example, the folding configuration in Figure 3 can be a primary hyperboloidal mirror and a secondary planar mirror. Then we have four different types of design, as shown in Figure 4. Fixing mirrors (1) and (2), we can choose the conic mirror for the upper view (3) to be either a hyperboloid (a, c) or an ellipsoid (b, d). The topology for the image formation can be the upper-view-center configuration (a, b) or the lower-view-center configuration (c, d). In this stereo generation scheme, mirror (3) can be positioned anywhere above the camera along the axis of rotation. In particular, it can be separated and raised above mirror (2) to widen the baseline.

Figure 4: An example of the possible stereo configurations for a single selection of the folded catadioptric system out of the nine choices; in total, 36 configurations are possible.

5. Panoramic image generation and depth computation

All of the single-camera catadioptric stereo systems, which have a coaxial camera-mirror configuration, obtain their stereo inputs as an inner zone and an outer rim, as shown in Figure 5. One of the major advantages of these systems is the simple epipolar geometry. All the epipolar lines are radial in the raw catadioptric image, and they become parallel after the coordinate conversion to the panoramic image, as shown in Figure 5. This simplifies the search process in stereo matching by letting corresponding features lie on the same scan line.

On the other hand, these types of stereo system suffer from the difference in resolution between the two views, which reduces the stereo matching accuracy if the resolution of the input image is not high. The resolution in the circular direction decreases linearly as the sampled point moves from the outer rim toward the center of the imaging circle. Moreover, the radial resolutions of the two views differ according to the mirror profiles.

There was an attempt to achieve a resolution-invariant panoramic view by designing the mirror profile so that every pixel subtends the same solid angle [12]. However, this approach breaks the SVP constraint, and the radial and circular resolutions of the two views still differ in a stereo configuration. To solve this problem, we adopt scale-space theory. Different levels of resolution are created by convolution with Gaussian kernels of different widths [13]:

$$L(x, s) = G(s\sigma) * I(x).$$

Here, I is the image with $x = (x, y)$, G is the Gaussian kernel, s is the scale factor and σ is the reference standard deviation of the Gaussian kernel (σ is chosen as the minimum value that achieves anti-aliasing and noise reduction). Thus, if s is 2, the resulting image is equivalent to a half-resolution version of the image with s = 1. For each view, the scale factor s is defined separately for the radial and circular directions. Its value is determined by the ratio of the resolutions at the two corresponding projected locations for a given vertical angle φ:

$$s_i^r = \max\!\left(1, \frac{dr_i/d\phi}{dr_o/d\phi}\right), \quad s_o^r = \max\!\left(1, \frac{dr_o/d\phi}{dr_i/d\phi}\right), \quad s_i^c = 1, \quad s_o^c = \frac{r_o}{r_i} \qquad (10)$$

where the subscripts i and o denote the inner and outer regions, and the superscripts r and c denote the radial and circular directions, respectively. The applied Gaussian kernel is depicted in Figure 5.

The coordinate conversion from the raw catadioptric image to panoramic images is possible for both spherical and cylindrical coordinates. The sampling schemes are similar except that the vertical axis is the vertical angle φ for the spherical panorama and the height Z for the cylindrical one. For the stereo matching between the two converted panoramic views, any conventional algorithm is applicable. Once correspondences between image points have been established, depth computation in both the spherical and the cylindrical panorama is straightforward by simple triangulation [5]. Figure 6 shows a sampling of the depth resolution in a vertical cross section. The sampling is obtained by computing the depth for every possible pair of image correspondences. The depth resolution mainly depends on the camera resolution, the length of the baseline and the camera-mirror configuration.

Figure 5: Panoramic image generation scheme. (a) Captured image, which contains two different views in the inner and outer regions; the sampling is conducted by applying the Gaussian kernel, whose size is determined by the image resolution. (b) Converted panoramic images for each view; the resulting resolutions of the two are the same.

Figure 6: Depth resolution. Each point represents the estimated position calculated from one of the possible pairs of image correspondences in a single epipolar plane. This instance was generated with the configuration of Figure 4(c), a baseline of 100 mm, and an imaging circle 100 pixels in diameter. The marks on the z axis represent the virtual viewpoints of the stereo system.
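As a concrete illustration of the triangulation step (and of how a depth-resolution sampling like the one in Figure 6 can be generated), here is a minimal sketch that is not taken from the paper. It assumes two virtual viewpoints lying on the mirror axis, separated vertically by the baseline, with each panorama row corresponding to a known elevation angle; the angle range and step are illustrative placeholders.

```python
import numpy as np

def triangulate(phi_top, phi_bottom, baseline):
    """Triangulate one point seen from two coaxial virtual viewpoints.

    phi_top, phi_bottom: elevation angles (rad) of the same scene point in the
    upper-view and lower-view panoramas (the upper viewpoint sits `baseline`
    above the lower one).  Returns (rho, z): horizontal range and height,
    both measured from the lower viewpoint.
    """
    denom = np.tan(phi_bottom) - np.tan(phi_top)   # equals baseline / rho
    rho = baseline / denom
    z = rho * np.tan(phi_bottom)
    return rho, z

# Depth-resolution sampling in one epipolar plane (cf. Figure 6): every pair of
# rows, one per view, yields one reconstructed point.  Values are illustrative.
baseline = 100.0                                    # mm
rows = np.deg2rad(np.linspace(-16.0, 16.0, 161))    # per-row elevation angles
points = [triangulate(pt, pb, baseline)
          for pt in rows for pb in rows
          if np.tan(pb) - np.tan(pt) > 1e-9]
```

The spread of such points directly visualizes how the depth resolution degrades with distance and improves with a longer baseline.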

6. Experiment

To demonstrate the feasibility of the system, an experimental setup was built as shown in Figure 7. It is configured as in Figure 4(c), with one planar and two hyperboloidal mirrors. In general, hyperboloidal mirrors are preferable in that they simplify the system structure and do not compromise the horizontal panoramic view. The baseline was set to 100 mm, which is adequate for indoor use. The mirror system was designed to be 140 mm tall and 80 mm wide (the width of the mirror itself is 60 mm), and it is mounted in front of the camera. The system is designed to provide a high-resolution horizontal view, which is the most important region in typical indoor applications. The vertical field of view was set to more than 16° for both the upward and downward views. A CCD camera with 1200 horizontal scan lines (Point Grey Scorpion) and a lens with a 12 mm focal length were used to capture the image.

Figure 8 shows a catadioptric stereo image of our laboratory captured with this experimental setup. All the corresponding points between the two views are radially collinear. One problem of this system is defocusing: because the system combines two different catadioptric subsystems, their focusing ranges differ considerably, and even within a single view the focusing range varies radially. A partial solution is to reduce the aperture size, which increases the depth of field at the cost of shutter speed.

To convert the input image into the stereo panoramic views, we compute the correspondence between the light ray directions from the virtual viewpoint and the positions in the acquired image using the projective geometry explained in section 2. To speed up the algorithm, these data are saved in a lookup table and referenced for each conversion. The Gaussian smoothing scheme described in section 5 can optionally be applied for better matching accuracy; if the resolution of the input image is very high or real-time performance is required, simple bilinear interpolation is sufficient. Figure 9(b, c) shows the generated panoramic stereo pair, defined in spherical coordinates. The size of the images is 1800x160, which corresponds to a sampling resolution of 0.2°. Stereo matching was conducted with a simple window-based correlation search using the sum of absolute differences (SAD). The correspondence was searched along the same scan line because the epipolar lines are collinear. The window size is 9x9. Additionally, reverse verification [14] is applied to reject probable false matches, which are prone to generate unwanted clutter. Figure 9 shows the resulting disparity map, in which nearby objects such as tables and a robot appear brighter than distant objects such as bookshelves and racks.

In Figure 10, a panorama and its disparity map obtained with two different sampling methods are shown enlarged for comparison: (a) is the result of linear interpolation and (b) that of resolution equalization. We can observe that aliasing is suppressed and the resolutions of the two views are equalized in (b). Accordingly, in stereo matching, the number of points filtered out by the reverse verification (black dots in the disparity map) becomes smaller with resolution equalization. Figures 11(a-e) show the results of 3D reconstruction from the panoramic images and disparity map of Figure 9(a-c). We can observe the 3D sensation as the viewing direction changes from +16° to -16°. Figure 12 is the vertical projection of the 3D reconstruction over a 10 cm height band near the level of the tabletops.
We can observe the linear edges of the nearby tables and the objects on them. Even with a simple stereo matching algorithm, the disparity map and the depth map show that the reconstruction accuracy is quite good except in self-occlusion regions and ambiguous regions (specular, textureless, or repetitively patterned regions).

Figure 7: Single camera catadioptric stereo system for panoramic view. The system is 140 mm tall and 80 mm wide. The length of the baseline is 100 mm. The mirror system is mounted in front of the camera.
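For concreteness, here is a minimal sketch (not from the paper) of the window-based SAD matching and reverse verification used in the experiment. It assumes the two panoramas are arranged so that corresponding points lie on the same row, as stated in the text; the search direction, search range and window size are illustrative placeholders, and `top`/`bottom` stand for the two converted views as float grayscale arrays.

```python
import numpy as np

def sad_disparity(ref, tgt, max_disp=48, win=9, direction=+1):
    """Window-based SAD matching along each scan line (epipolar line).

    For every pixel of `ref`, search `tgt` on the same row at offsets
    d = 0..max_disp in the given direction; return the best offset per pixel
    (-1 where no valid window fits)."""
    h, w = ref.shape
    r = win // 2
    disp = -np.ones((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = ref[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, -1
            for d in range(max_disp + 1):
                xc = x + direction * d
                if xc - r < 0 or xc + r >= w:
                    break                         # candidate window out of bounds
                cand = tgt[y - r:y + r + 1, xc - r:xc + r + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def reverse_verify(d_fwd, d_bwd, direction=+1, tol=1):
    """Reverse verification [14]: keep a match only if the backward search,
    started from the matched pixel, returns (nearly) the same offset."""
    out = d_fwd.copy()
    h, w = d_fwd.shape
    for y in range(h):
        for x in range(w):
            d = d_fwd[y, x]
            if d < 0:
                continue
            xm = x + direction * d                # matched column in the other view
            if not (0 <= xm < w) or abs(d_bwd[y, xm] - d) > tol:
                out[y, x] = -1                    # inconsistent match rejected
    return out

# Usage sketch (the sign of `direction` depends on the actual viewing geometry):
# d_tb = sad_disparity(top, bottom, direction=+1)
# d_bt = sad_disparity(bottom, top, direction=-1)
# d_checked = reverse_verify(d_tb, d_bt, direction=+1)
```

Because the epipolar lines are already axis-aligned after the panoramic conversion, no rectification or epipolar-curve search is needed, which is exactly the simplification the coaxial design provides.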

Figure 8: A catadioptric stereo image captured by the proposed system shown in Figure 7. The resolution of the image is 1600x1200. The epipolar lines in the two different views are radially collinear.

Figure 9: Panoramic stereo images produced from the raw image in Figure 8, and the corresponding disparity map. The disparity map was computed from the two panoramic images, the top view (b) and the bottom view (c), by a simple block-matching algorithm (SAD) with a 9x9 window. The size of the images is 1800x160, covering a field of view of 360° x 32°.

Figure 10: Comparison of sampling methods for panoramic image generation: (a) linear interpolation and (b) resolution equalization. Top: disparity map based on the top view; middle: top view; bottom: bottom view.

Figure 11: Results of 3D reconstruction. The 3D sensation is observable with the change of the viewing direction: (a) +16°, (b) +8°, (c) 0°, (d) -8°, (e) -16°.
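The panoramic views in Figure 9 are produced from the raw catadioptric image by the lookup-table conversion described in the experiment section. The sketch below, which is not from the paper, shows one way such a table can be precomputed and applied with bilinear interpolation; `proj_fn` stands for the projection of equations (2)-(5) or (6)-(9) expressed in pixel units, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def build_lut(pano_w, pano_h, phi_min, phi_max, image_center, proj_fn):
    """Precompute, for every panorama pixel (azimuth column, elevation row),
    the raw-image coordinates it samples.  proj_fn(phi) must return the
    radial image coordinate, in pixels, for a viewing elevation phi."""
    thetas = np.linspace(0.0, 2.0 * np.pi, pano_w, endpoint=False)   # azimuth per column
    phis = np.linspace(phi_min, phi_max, pano_h)                     # elevation per row
    r = np.array([proj_fn(p) for p in phis])                         # one radius per row
    u = image_center[0] + r[:, None] * np.cos(thetas)[None, :]       # raw-image x
    v = image_center[1] + r[:, None] * np.sin(thetas)[None, :]       # raw-image y
    return u, v

def unwarp(raw, u, v):
    """Bilinear resampling of the raw catadioptric image with the lookup table."""
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    du, dv = u - u0, v - v0
    pick = lambda uu, vv: raw[np.clip(vv, 0, raw.shape[0] - 1),
                              np.clip(uu, 0, raw.shape[1] - 1)]
    return ((1 - du) * (1 - dv) * pick(u0, v0) + du * (1 - dv) * pick(u0 + 1, v0) +
            (1 - du) * dv * pick(u0, v0 + 1) + du * dv * pick(u0 + 1, v0 + 1))

# Usage sketch: one table per view (inner and outer), reused for every frame.
# u_top, v_top = build_lut(1800, 160, np.deg2rad(-16), np.deg2rad(16), (800, 600), proj_top)
# top_panorama = unwarp(raw_image.astype(np.float32), u_top, v_top)
```

Precomputing the table once and reusing it for every frame is what makes the conversion fast enough for real-time use, as the paper notes.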

Figure 12: Outline of the environment. The 3D reconstruction obtained in the height range of -10 cm to -20 cm was vertically projected.

7. Conclusion

A framework has been developed for the design of single-camera catadioptric stereo systems. A total of 44 possible stereo configurations satisfying the single-viewpoint constraint are suggested. The single-camera configuration keeps the whole system simple; in addition, the identical physical characteristics of the camera make stereo matching simpler and more accurate. The mirror-folding scheme makes it easy to design a more compact system with a wide baseline. The single-camera stereo scheme and the folding scheme cause a resolution difference between the stereo pair, which we solved by applying scale-space theory. The results demonstrate that the proposed system provides sufficient accuracy for various machine vision applications in which panoramic range measurements are important. It can be widely used for robotic applications (environment recognition, path planning, and obstacle avoidance), virtual reality, surveillance systems, and military applications.

References

[1] J. Gluckman and S. K. Nayar, "Planar Catadioptric Stereo: Geometry and Calibration", Proc. Computer Vision and Pattern Recognition, 1999.
[2] J. Gluckman and S. K. Nayar, "Rectified Catadioptric Stereo Sensors", Proc. Computer Vision and Pattern Recognition, 2000.
[3] D. Lee and I. Kweon, "A Novel Stereo Camera System by a Biprism", IEEE Trans. Robotics & Automation, vol. 16, no. 5, Oct. 2000.
[4] S. Baker and S. Nayar, "A Theory of Single-Viewpoint Catadioptric Image Formation", Int. Journal of Computer Vision, 35(2), 1999.
[5] J. Gluckman, S. K. Nayar and K. J. Thoresz, "Real-Time Omnidirectional and Panoramic Stereo", Proc. Image Understanding Workshop, 1998.
[6] S. A. Nene and S. Nayar, "Stereo with Mirrors", Proc. Int. Conference on Computer Vision, 1998.
[7] S. Lin and R. Bajcsy, "High Resolution Catadioptric Omni-Directional Stereo Sensor for Robot Vision", Proc. Int. Conference on Robotics & Automation, 2003.
[8] D. Southwell, A. Basu, M. Fiala and J. Reyda, "Panoramic Stereo", Proc. Int. Conference on Pattern Recognition, 1996.
[9] M. Fiala and A. Basu, "Panoramic Stereo Reconstruction Using Non-SVP Optics", Proc. Int. Conference on Pattern Recognition, 2002.
[10] E. L. L. Cabral, J. C. de Souza Junior and M. C. Hunold, "Omnidirectional Stereo Vision with a Hyperbolic Double Lobed Mirror", Proc. Int. Conference on Pattern Recognition, 2004.
[11] S. Nayar and V. Peri, "Folded Catadioptric Cameras", Proc. Computer Vision and Pattern Recognition, 1999.
[12] T. L. Conroy and J. B. Moore, "Resolution Invariant Surfaces for Panoramic Vision Systems", Proc. Int. Conference on Computer Vision, 1999.
[13] T. Lindeberg, "Scale-Space Theory: A Basic Tool for Analysing Structures at Different Scales", Journal of Applied Statistics, 21(2), pp. 224-270, 1994.
[14] P. Werth and S. Scherer, "A Novel Bidirectional Framework for Control and Refinement of Area Based Correlation Techniques", Proc. Int. Conference on Pattern Recognition, 2000.
