True Single View Point Cone Mirror Omni-Directional Catadioptric System 1

Shih-Schön Lin, Ruzena Bajcsy
GRASP Laboratory, Computer and Information Science Department, University of Pennsylvania
shschon@grasp.cis.upenn.edu, rbajcsy@nsf.gov (also bajcsy@grasp.cis.upenn.edu)

Abstract

The pinhole camera model is a simplified subset of geometric optics. In special cases, such as image formation by a cone mirror (a degenerate conic section) in an omni-directional view catadioptric system, more complex optical phenomena are involved that the simple pinhole model cannot explain. We show that, using the full geometric optics model, a true single viewpoint cone mirror omni-directional system can be built. We first show how such a system is built, and then show in detail how each optical phenomenon works together to make the system truly single viewpoint. The new system requires only simple off-the-shelf components and still outperforms other single viewpoint omni-systems for many applications.

1. Introduction

With the ideal pinhole camera model in mind, it has been shown that conic-section convex mirrors can be used to create single viewpoint catadioptric omni-directional view systems whose images can be unwarped perspectively without systematic distortions [1]. It has also been shown that the cone is a degenerate shape of the conic-section family that still possesses a single viewpoint [1]. The problem was that, with a true pinhole camera, the single viewpoint of a cone (located at its tip) seemed unusable because all the visible rays seem to be blocked by the cone itself [1], while if the virtual pinhole is placed at some distance away from the tip of the cone, the whole system no longer possesses a single viewpoint [10;1;8]. With this problem in mind, cone mirrors have not been used to generate precisely unwarped perspective images. However, cone mirrors have been used to aid navigation, collision avoidance, and pipe inspection without a single viewpoint [9;10;2;6;8]. In those applications, cone mirror images are used as is, and no attempt is made to perform a perspectively correct image unwarping. A cone mirror has also been used to construct mosaics [5], but that requires a special line-scan camera, time-consuming scanning, and mechanical moving parts.

In real-world applications we use finite-aperture lens systems to simulate the projective properties of a theoretical pinhole camera. Real cameras have useful properties that a pure pinhole lacks. We show in this work that, under the geometric optics image formation model, the single viewpoint of the cone is actually usable, i.e. we can physically place the effective pinhole at the single viewpoint and still get the exact true single viewpoint image we need. The geometric optics image formation model is more powerful than the pinhole camera model, i.e. it is a more accurate description of what really happens in the physical world. For example, when an object is placed inside the front focal point of a real convex lens, it forms an enlarged virtual image. This is completely unexplainable if you treat the lens system as a perspective pinhole: a perspective pinhole camera always forms real images regardless of the object distance and image plane distance, and it can never be out of focus. Baker and Nayar [1] used a simple optical model to analyze some omni-mirrors, but they did not use any optical modeling on cone mirrors. This turns out to be the key deficiency in modeling the cone mirror / perspective camera combination, as we will elaborate shortly. The cone mirror system proposed here uses only an off-the-shelf ordinary lens and CCD camera.
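As a concrete check of the convex-lens example above (an object inside the front focal point producing an enlarged virtual image), the small sketch below evaluates the Gaussian thin lens relation 1/s_o + 1/s_i = 1/f, which appears as Equation 1 in Section 3.2. The focal length and object distance here are assumed illustration values, not parameters of our system; the point is only that the image distance comes out negative (a virtual image) with magnification greater than one, behavior a pure pinhole model cannot produce.

    def thin_lens_image(s_o, f):
        """Gaussian thin lens relation 1/s_o + 1/s_i = 1/f.
        Returns (image distance s_i, lateral magnification m).
        A negative s_i means a virtual image on the same side as the object."""
        s_i = 1.0 / (1.0 / f - 1.0 / s_o)
        m = -s_i / s_o
        return s_i, m

    # Assumed illustration values: a 50 mm lens with the object only 30 mm away,
    # i.e. inside the front focal point.
    s_i, m = thin_lens_image(s_o=30.0, f=50.0)
    print(s_i, m)   # -75.0, 2.5: a virtual, upright image enlarged 2.5x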
The cone mirror is among the simplest shapes to produce, and it has much higher resolution for scenes around the horizon [8]. It adds the least optical distortion to the resulting image because it is the only omni-view mirror with a longitudinally flat surface; all other omni-mirror types are curved in both the longitudinal and lateral directions. The ability to pair with a readily available perspective camera, instead of expensive and complex orthographic cameras, is also a very important advantage over existing omni-view systems.

1 This work has been supported in part by NSF-IIS, ARO/MURI-DAAH, NSF-CDS, DARPA-ITO-DAT, and Advanced Network and Services. Special thanks to members of the MOOSE Project: V. J. Kumar, Kostas Daniilidis, Elli Angelopoulou, Oleg Naroditsky, and Geoffrey Egnal for their invaluable comments and support. Also grateful thanks to members of the GRASP Lab for their wonderful feedback and help.

For example, when you want to build an omni-system outside the visible range, it is often the case that only perspective cameras are available. That is an added reason why we go through all the trouble to prove and build a cone mirror omni-directional camera system.

2. True single viewpoint cone mirror omni-view system

As proved by Baker and Nayar [1], when the viewpoint of a perspective camera coincides with the tip of the cone mirror, you have a single viewpoint omni-view system. The only question then is whether or not you can see an image in such a configuration. The COPIS by Yagi [10] did not put the lens camera viewpoint at the tip of the cone because of this concern and thus did not form a single viewpoint omni-system. It turns out that we can indeed see an image in this single viewpoint configuration. We put the viewpoint of our lens camera right at the tip of the cone mirror and have a working true single viewpoint cone mirror omni-view system. See Figure 1 (geometry), Figure 2 and Figure 3 (full models), and Figure 8 (real experimental setup).

Figure 1: True single viewpoint cone mirror geometry.

In the geometric optics image formation model [3], a plane mirror is an optical system by itself. The cross-section of a cone mirror through its axis of rotation consists of two plane mirrors joined at the tip of the cone. The image formation property of a plane mirror is that it always forms virtual images behind its surface. Notice that the right-side mirror does not actually extend to point b, but that does not prevent the virtual image from forming. Within geometric optics, the position of the virtual image point does not change, either. As long as some portion of the (ideally infinitely large) mirror plane is occupied by a piece of real mirror, the virtual image corresponding to any world point in front of the plane will be formed behind the plane. The positions of the virtual images are not affected by the size of the actual mirror, but their visibility will depend on the size and position of the actual mirror.

Since the virtual images are formed by the real plane mirrors, the second converging optical system must see the virtual image point through the real plane mirror. One can regard the real plane mirror as a window through which to look at the virtual world on the other side of the mirror. The surface of the cone mirror, after it accomplishes the task of forming a virtual world behind it, acts as a window opening for the 2nd converging optical system, i.e. the perspective lens camera, to look into the virtual world. So the cone mirror is not blocking our view; it IS the view itself. Whether the lens camera has its effective pinhole at the cone tip or not, the lens camera sees the virtual image points just like any other real-world points. These virtual points get imaged perspectively the same way as if there were a real point there. In Figure 1, the virtual images a and b formed by the catoptric system are imaged by the dioptric system to form real images α and β respectively.

As proved by Baker and Nayar [1], if we can put the effective pinhole of the lens camera at the tip of the cone and still see an image perspectively, we have a single viewpoint omni-view catadioptric system. We have just shown that we can do this. We put the effective pinhole of the lens camera right at the tip of the cone mirror and construct a true single viewpoint cone mirror omni-directional view catadioptric system; see Figure 8. Full geometric optics ray tracing for the cone mirror combined with Gaussian optics ray tracing for the lens camera yields exactly the same results we just showed.
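The single-viewpoint argument can also be checked numerically in the 2D cross-section. The sketch below (Python; the facet angle and the sample world points are assumed illustration values) places the cone tip at the origin, treats one facet of the cross-section as a plane mirror through the origin, reflects each world point about the facet to obtain its virtual image, and then confirms that the ray from the tip toward that virtual image, reflected back about the same facet, points exactly along the direction from the tip to the original world point. Every mirror ray collected by a pinhole at the tip therefore corresponds to a world ray through that single point.

    import math

    def reflect(p, n):
        """Reflect a 2D point (or direction) p about a plane mirror through the
        origin with unit normal n: p - 2 (p . n) n."""
        d = p[0] * n[0] + p[1] * n[1]
        return (p[0] - 2 * d * n[0], p[1] - 2 * d * n[1])

    def normalize(v):
        s = math.hypot(v[0], v[1])
        return (v[0] / s, v[1] / s)

    # Assumed facet: the mirror line makes 30 degrees with the vertical axis;
    # its unit normal is perpendicular to that line.
    theta = math.radians(30.0)
    mirror_dir = (math.sin(theta), math.cos(theta))
    normal = (-mirror_dir[1], mirror_dir[0])

    for world in [(5.0, 1.0), (3.0, -2.0), (10.0, 4.0)]:   # assumed sample world points
        virtual = reflect(world, normal)            # virtual image on the other side of the facet
        cam_ray = normalize(virtual)                # ray from the tip (origin) toward the virtual image
        back = normalize(reflect(cam_ray, normal))  # that ray reflected back into the real world
        true_dir = normalize(world)                 # direction from the tip to the real world point
        print(back, true_dir)                       # the two directions coincide: single viewpoint at the tip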
Notice that in geometric optics the term ray tracing has a very different meaning from the same term used in the computer graphics literature. Computer graphics ray tracing traces only one ray per world point, while geometric optics ray tracing traces no fewer than 2 rays (ideally infinitely many) per world point. Computer graphics ray tracing is a simplified way of finding image positions that is valid only for focused real images. Analyses based on tracing single rays alone cannot model how the single viewpoint of the cone mirror omni-view system works. Since we need to trace many rays for each world point, we trace only point B in Figure 2 and Figure 3 to avoid confusion (there would be too many rays in one small graph if we traced both points). The interested reader can trace point A and find it imaged exactly at the image position α.

In Figure 2 the tip of the cone mirror is cut off a little, so the cone tip is not physically present. This does not alter the single viewpoint geometry; only the brightness of the single viewpoint image is reduced a little. The two configurations (Figure 2 and Figure 3) yield exactly the same image for scene points that are in focus vertically (geometry-wise; the physical focus settings are different). We explain the details in the following section.

Figure 2: Optical ray tracing using the lens center as the virtual pinhole.

Figure 3: Optical ray tracing using the front focal point of the lens as its virtual pinhole.

3. System Characteristics

3.1 Perfect vertical imaging

Curved mirror surfaces do not form exact image points for all scene points. For example, a parabolic mirror forms a perfect image only for points at infinity, while a hyperbolic mirror forms a perfect virtual image only for one point, its outside focal point. Baker and Nayar [1] give a good analysis of the vertical defocus blur caused by vertically curved mirror surfaces. Because the cone is the only vertically flat omni-view mirror, the cone-based omni-view device is the only device free of vertical defocus blur.

3.2 Dual effective pinholes for lens camera

This applies to most ordinary perspective lens cameras that are in focus. In Figure 4, with a conceptual perspective camera with viewpoint at F and image plane at C (perpendicular to the axis WV), an arbitrary object point O is projected to H via ray 1, while with a second conceptual perspective camera with viewpoint at C and image plane at V (perpendicular to WV), the same object O is projected to I via ray 2. In image space the two image positions are always the same because the lengths CH and VI are always the same. The lengths are the same because, under Gaussian optics conditions, ray 1 goes parallel to VW after being refracted by the lens and hits point I when the object is in focus. Here the conceptual camera at F does not produce any physical image, but the real image seen by the camera at C is exactly the same, since the two are 1:1 orthographic projections of each other.

Figure 4: Thin lens geometry implies dual effective viewpoints.

Gauss developed the theory for paraxial rays only and founded Gaussian optics [3]. Within Gaussian optics, all the rays that come from the same object point in the world and enter the focused optical system converge at the same point. This is an approximation, but in practice it is very close to perfect in many situations. Given that, we can trace any two such rays from a world point and find its image point. For convenience, we usually pick the two most easily traced rays (illustrated numerically in the sketch below):

1. The ray that passes through the front focal point is refracted by the lens and then advances parallel to the optical axis until it hits the image plane.
2. The ray that passes through the center of the lens passes straight through unaltered.
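A minimal numeric version of this two-ray construction is sketched below (Python; the object height, object distance, and focal length are assumed illustration values). Ray 1 is traced through the front focal point and leaves the lens parallel to the axis; ray 2 passes through the lens center undeviated; intersecting them in image space gives the image point, and the resulting image distance agrees with the Gaussian thin lens formula given next as Equation 1.

    def trace_two_rays(h, s_o, f):
        """Find the image of an object point at height h (nonzero), a distance s_o
        in front of a thin lens of focal length f, by intersecting the two
        canonical rays. Lens at x = 0, object at x = -s_o, front focal point at x = -f."""
        # Ray 1: object point -> front focal point; after the lens it runs parallel
        # to the axis at the height where it crossed the lens plane.
        h_lens = -h * f / (s_o - f)
        # Ray 2: object point -> lens center (0, 0); it continues undeviated along
        # y = -(h / s_o) * x. Intersect it with the horizontal line y = h_lens:
        x_image = -h_lens * s_o / h
        y_image = h_lens
        return x_image, y_image

    # Assumed illustration values: 2 mm tall object, 150 mm in front of a 50 mm lens.
    s_o, f, h = 150.0, 50.0, 2.0
    x_i, y_i = trace_two_rays(h, s_o, f)
    s_i_formula = 1.0 / (1.0 / f - 1.0 / s_o)   # Gaussian thin lens prediction
    print(x_i, y_i)       # 75.0, -1.0  (inverted image, 75 mm behind the lens)
    print(s_i_formula)    # 75.0        (same image distance)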

The Gaussian thin lens formula [3] can be derived directly from the same similar triangles we just used to derive the dual effective pinholes:

Equation 1 (Gaussian thin lens formula): 1/s_o + 1/s_i = 1/f

where s_o is the object distance, s_i is the image distance, and f is the focal length. The term focal length may cause some confusion, as much of the computer vision literature uses "focal length" to mean the distance from the viewpoint to the image plane. The term focal length in Gaussian optics is a lens parameter that remains fixed while we change the position of the image plane. The two conceptual cameras have different object distance / image plane distance parameters, but their resulting images are the same. This is why we can use the setup of either Figure 2 or Figure 3 and get the same results. The choice of effective pinhole affects only the values of s_o, s_i, and f, but not the pixel position of any image point (compare Figure 2, Figure 3, and Figure 4). Krishna and Ahuja [4] also use the front focal point as the viewpoint for their panoramic imaging device.

3.3 Image brightness and visibility

Since the profile of a cone mirror consists of two plane mirrors, each point inside the cone should be the virtual image point of two world points simultaneously, one on each side; see Figure 5. In practice, we only see images of points on the same side of the mirror. Figure 5 shows clearly that a point behind the mirror is simultaneously the virtual image position of one world point on each side, but in the same figure we can also see that very few rays from the point on the opposite side can actually reach the image plane, because of occlusion. Also, due to the symmetry of the geometry, for a world point to the right of the central axis, there is always more mirror area on the right side (from the tip of the cone p to point R) than on the opposite side (from p to L) that is useful for imaging. So in practice we always see the same-side point rather than the opposite-side point, because the image intensity of the opposite-side point is far weaker and the same-side image always dominates.

Figure 5: World points on the opposite side form extremely weak images which are not visible in most practical situations.

Figure 6: Cone mirror shape and vertical FOV.

4. Unwarping algorithm

Unlike most existing omni-view systems, the vertical field of view of the cone omni-system is not continuous across the zenith. As shown in Figure 6, if the tip of the cone subtends an angle α and the normal angular FOV (field of view) of the perspective camera is φ, then the upper limit of viewable elevation is α − π/2. Elevation here is defined to be 0 at level, 90 degrees straight up, and −90 degrees straight down, the same convention used for a gun turret. The lower bound of viewable elevation is α − (π/2) − (φ/2).

When unwarping, assuming the camera is perfectly positioned and aligned, we establish a 2D image polar coordinate system. For a given azimuth and elevation angle in the unwarped view, the azimuth matches the polar angle directly. The polar radius variable r is related to the elevation as

Equation 2: r = f_p tan(α − π/2 − θ)

where f_p is the effective focal length (image plane to viewpoint distance) of the real camera in real-camera pixels (i.e. the pixel unit in the original omni-view image, not the pixels in the unwarped image) and θ is the elevation angle of the point in the unwarped view we want to create.
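As a concrete illustration of the mapping, the sketch below (Python) implements Equation 2 together with the elevation bounds derived above. The cone apex angle, camera FOV, effective focal length f_p, and image-center coordinates in the example are assumed placeholder values, not the calibration of the actual prototype.

    import math

    def elevation_range(alpha, phi):
        """Viewable elevation range of the cone omni-system, in radians:
        upper limit alpha - pi/2, lower limit alpha - pi/2 - phi/2 (Section 4)."""
        upper = alpha - math.pi / 2
        lower = alpha - math.pi / 2 - phi / 2
        return lower, upper

    def omni_pixel(azimuth, elevation, alpha, f_p, cx, cy):
        """Map an (azimuth, elevation) ray of the unwarped view to a pixel (u, v)
        of the cone omni-view image, using Equation 2: r = f_p * tan(alpha - pi/2 - elevation)."""
        r = f_p * math.tan(alpha - math.pi / 2 - elevation)
        u = cx + r * math.cos(azimuth)
        v = cy + r * math.sin(azimuth)
        return u, v

    # Assumed placeholder parameters:
    alpha = math.radians(100.0)   # cone apex angle
    phi = math.radians(40.0)      # perspective camera FOV
    f_p = 600.0                   # effective focal length in omni-image pixels
    lo, hi = elevation_range(alpha, phi)
    print("viewable elevation: %.1f to %.1f deg" % (math.degrees(lo), math.degrees(hi)))
    print(omni_pixel(math.radians(45.0), math.radians(0.0), alpha, f_p, cx=320.0, cy=240.0))

Iterating this mapping over all (azimuth, elevation) pixels of the target view and sampling the omni image at the returned (u, v) produces the unwarped perspective or panoramic image.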

5. Experiments

Figure 7: Cone mirror prototype (left); overview of the prototype setup (right).

Figure 8: The tip of the cone mirror actually coincides with the effective pinhole (left); the cone omni-view seen by the camera (right). The left side is a test pattern with dots 4 apart.

Figure 9: Actual perspective image taken by a real perspective camera (left); the unwarped perspective image of the same test pattern, produced from the omni-view above (right).

Figure 10: Cone and parabolic mirrors placed side by side (left); test pattern (right).

Figure 11: Omni-views. Left: cone/perspective; right: parabolic/orthographic.

Figure 12: Unwarped images. Left: cone, more stripes preserved. Right: parabolic, stripes tend to merge and are not separable due to lower vertical resolution.

To demonstrate the ability to correctly unwarp the image into a perspective view, a test pattern of rectangular grids is imaged in an omni-view picture. The right picture in Figure 8 shows the omni-view containing the grids. Figure 9 shows the unwarped result compared to an actual picture taken by a normal perspective camera. There is no visible radial distortion in Figure 9, which means the radial distortion within our FOV is very small; Tsai's algorithm is suitable for compensating small radial lens distortions [7]. For the pattern of 45 grid points, the best match between the two pictures has a mean squared error of pix^2 in the horizontal direction and pix^2 in the vertical direction; the square root of the mean squared distance differences is pix, which is equivalent to 0.5 degree in view angle.

To demonstrate the high vertical angular resolution, we put a custom-made integrated parabolic/orthographic system alongside the cone mirror prototype and let both look at the same color foam pattern placed about 1 m away. With traditional non-omni cameras there is a preferred direction: one can move the camera closer to the scene to improve resolution. In our omni camera case there is no special direction to move closer; if you move the camera closer toward one direction, the scene in the opposite direction becomes farther away at the same time. In other words, the cone mirror easily gets high resolution in all directions in a normal environment, while other omni-camera systems get high resolution only in very confined spaces where everything is close by.

The lens used is a COSMICAR 6 mm 1:1.2 TV lens stopped down to f/11 or f/16. A SONY XC-77 camera is connected through a video box, model PS-12SU, made by CHORI AMERICA. If your lens cannot reach proper focus within its normal adjustment range, use extender rings and/or smaller aperture settings. Small aperture settings in a dark room may require digitally enhancing the brightness of the image.

Some lens sets have their effective viewpoints surrounded by a light shield which blocks most of the FOV. Avoid using such lenses, or remove the light shield before use.

6. Discussion

The cone is the only mirror system that can give a perfect image vertically when you use a perfect lens system. All other longitudinally curved mirrors introduce defocus blur before the light enters any lens system [1], so even with a perfect lens system you cannot get a perfect image with those mirrors. In the rotationally symmetric direction, all omni-mirrors are curved with the same horizontal cross-section shape: circles.

As a degenerate case of the conic section, the cone does not have a full hemispherical view. It sees all 360 degrees in the horizontal direction, but the vertical field of view (FOV) is not continuous over the zenith; see Figure 6. Other omni-mirror systems may in theory have a complete hemispherical view, but in practice the lens camera or secondary mirrors always block the view around the zenith, so no similar system can see the whole hemispherical view anyway. A smaller vertical angular FOV is actually good for many applications, because we get higher angular resolution with the same number of pixels concentrated on the most interesting part of the scene. The region not visible is occupied by the lens camera, the sky, or the floor, all of which are of secondary interest in many applications such as driving a ground vehicle.

7. Conclusion

The cone mirror had been identified as having a single viewpoint before, but that single viewpoint was regarded as useless because previous analyses were done with the idealized pinhole camera geometric model. This work proves that, with more accurate modeling of the optical properties of the real cameras used in the real world, the single viewpoint inherent in the cone geometry is usable. A prototype system based on our theory has been constructed, and experiments confirm our theoretical predictions. The total field of view is smaller but more flexible. The new system is simple to build and maintain, easier to analyze, costs a lot less, and yet yields much higher performance in the peripheral region where the most interesting scenes reside.

References

[1] Baker, S. and Nayar, S. K., "A Theory of Single-Viewpoint Catadioptric Image Formation," International Journal of Computer Vision, vol. 35, no. 2, Nov. 1999.
[2] Bogner, S., "Introduction to panoramic imaging," Proceedings of the IEEE SMC Conference.
[3] Hecht, E., Optics, 3rd ed. Reading, MA, USA: Addison Wesley Longman, Inc., 1998.
[4] Krishna, A. and Ahuja, N., "Panoramic image acquisition," International Conference on Computer Vision and Pattern Recognition, San Francisco, CA.
[5] Nayar, S. K. and Karmarkar, A., "360 x 360 Mosaics," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head Island, South Carolina.
[6] Southwell, D., Vandegriend, B., and Basu, A., "A Conical Mirror Pipeline Inspection System," Proceedings of the 1996 IEEE International Conference on Robotics and Automation, Minneapolis, Minnesota, USA.
[7] Tsai, R. Y., "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses," IEEE Journal of Robotics and Automation, vol. RA-3, no. 4, Aug. 1987.
[8] Yagi, Y., "Omnidirectional sensing and its applications," IEICE Transactions on Information and Systems, vol. E82-D, no. 3, Mar. 1999.
[9] Yagi, Y. and Kawato, S., "Panoramic scene analysis with conic projection," Proceedings of the International Conference on Robots and Systems.
[10] Yagi, Y., Kawato, S., and Tsuji, S., "Real-Time Omnidirectional Image Sensor (COPIS) for Vision-Guided Navigation," IEEE Transactions on Robotics and Automation, vol. 10, no. 1, Feb. 1994.
