Panoramic Mosaicing with a 180 Field of View Lens
CENTER FOR MACHINE PERCEPTION, CZECH TECHNICAL UNIVERSITY

Panoramic Mosaicing with a 180° Field of View Lens

Hynek Bakstein and Tomáš Pajdla {bakstein, pajdla}@cmp.felk.cvut.cz

REPRINT

Hynek Bakstein and Tomáš Pajdla, Panoramic Mosaicing with a 180° Field of View Lens, in Proceedings of the Omnidirectional Vision Workshop, pp. 6-68, June 2002. Copyright: IEEE Computer Society. Available at ftp://cmp.felk.cvut.cz/pub/cmp/articles/bakstein/bakstein-pajdla-omnivis22.pdf

Center for Machine Perception, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University, Technická 2, Prague 6, Czech Republic
Panoramic Mosaicing with a 180° Field of View Lens

Hynek Bakstein, Tomáš Pajdla
Center for Machine Perception, Dept. of Cybernetics, Faculty of Electrical Eng., Czech Technical University, Prague, Czech Republic {bakstein,pajdla}@cmp.felk.cvut.cz

Abstract

We present a technique for 360° x 360° mosaicing with a very wide field of view fish eye lens. Standard camera calibration is extended to lenses with a field of view larger than 180°. We demonstrate the calibration on a Nikon FC-E8 fish eye converter, an example of a low-cost lens with a 183° field of view. We illustrate the use of this lens in one application, the 360° x 360° mosaic, which provides a 360° field of view in both the vertical and the horizontal direction.

1. Introduction

There are many ways to enhance a field of view and obtain an omnidirectional sensor. These approaches include the use of mirrors [2, 6, 5], multicamera devices [13, 1], rotating cameras [19, 18, 7], lenses [4, 23, 18], or combinations of the previous methods [16]. The shape of the mirror determines its field of view, the mapping of the light rays [9, 12], and other features such as the single effective viewpoint [21, 10]. On the other hand, focusing a lens is easier than focusing a mirror, and the resulting setup may be simpler.

We concentrate on the use of a special lens, the Nikon FC-E8 fish eye converter [8], which provides a FOV of 183°. This lens provides an omnidirectional image by itself, but we use it in a practical realization of a 360° x 360° mosaic [16], where the mosaic is composed by rotating an omnidirectional camera. The resulting mosaic then covers 360° in both the horizontal and the vertical direction. We mounted this lens on a Pulnix digital camera equipped with a standard 12.5mm lens, as depicted in Figure 1. Our experiments also show that such a lens provides better results than mirrors, which were often used to build 360° x 360° mosaics [16], and the setup of the mosaicing camera is also simpler.

This work was supported by the following grants: MSM , GACR 12/1/971, MSMT KONTAKT 21/9.

Figure 1. Nikon FC-E8 fish eye converter mounted on a Pulnix digital camera with a standard 12.5mm lens.

For many computer vision tasks, the relationship between the light rays entering the camera and the pixels in the image has to be known. In order to find this relationship, the camera has to be calibrated, and a suitable camera model has to be chosen for this task. It turns out that the pinhole camera model with a planar retina is not sufficient for sensors with a large FOV [14]. An image point (u, v) defines a light ray as the vector connecting the camera center with the image point, which lies on an image plane at a certain distance from the camera center, see Figure 2(a). This is a straightforward approach; however, it limits the field of view of the camera to less than 180°.

Previous approaches to fish eye calibration used a planar retina and the pinhole model [3, 4, 22, 23]. In [20], a stereographic projection was employed, but the experiments were evaluated on lenses with a FOV smaller than 180°. We introduce a spherical retina, see Figure 2(b), and a method for calibration from a single image of one known 3D calibration target with iterative refinement of the parameters of our camera model with a spherical retina. The light rays emanate from
the camera center and are determined by a radially symmetric mapping between the pixel coordinates (u, v) and the angle between the light ray and the optical axis of the camera, as depicted in Figure 2(b).

The main contribution of this work is the introduction of a proper omnidirectional camera model, i.e. the spherical retina, and the choice of a proper projection function, the radially symmetric mapping between the light rays and pixels, for one particular lens. In contrast to other methods [3, 4, 22, 23], we test our approach on a lens with an a priori unknown projection function. This lens is a cheap off-the-shelf lens and therefore does not have to follow precisely any of the projection models listed in [14]. Moreover, this lens has a field of view larger than 180° and thus the standard camera model cannot be used.

In the next section, we introduce a camera model with a spherical retina. Then we discuss various models describing the relationship between the light rays and pixels in Section 3. Section 4 is devoted to the determination of this model for the case of the Nikon FC-E8 converter. A summary of the presented method is given in Section 5. Experimental results are presented in Section 6.

2 Camera Model

The camera model describes how a 3D scene is transformed into a 2D image. It has to incorporate the orientation of the camera with respect to some scene coordinate system and also the way the light rays in the camera centered coordinate system are projected into the image. The orientation is expressed by the extrinsic camera parameters, while the latter relationship is determined by the intrinsic parameters of the camera. Intrinsic parameters can be divided into two groups. The first one includes the parameters of the mapping between the rays and ideal orthogonal square pixels. We will discuss these parameters in the next section.
The second group contains the parameters describing the relationship between ideal orthogonal square pixels and the real pixels of image sensors. Let (u, v) denote the coordinates of a point in the image measured in an orthogonal basis, as shown in Figure 3. CCD chips often have a different spacing between pixels in the vertical and the horizontal direction. This results in images unequally scaled in the horizontal and vertical direction. This distortion causes circles to appear as ellipses in the image, as shown in Figure 3. Therefore, we introduce a parameter β representing the ratio between the scales of the horizontal and the vertical axis. A matrix expression of the distortion can be written in the following form:

K_1 = [ 1  0  u_0 ;  0  β  βv_0 ;  0  0  1 ].   (1)

Figure 2. From image coordinates to light rays: (a) a directional and (b) an omnidirectional camera.

This matrix is a simplified intrinsic calibration matrix of a pinhole camera [11]. The displacement of the center of the image is expressed by the terms u_0 and v_0; the skewness of the image axes is neglected in our case, because cameras usually have orthogonal pixels.

Figure 3. A circle in the image plane is distorted due to a different length of the axes. Therefore we observe an ellipse instead of a circle in the image.

3 Projection Models

Models of the projection between the light rays and the pixels are discussed in this section. The most common approach describes these models by a radially symmetric function that maps the angle θ between the incoming light ray and the optical axis to some distance r from the image center, see Figures 7(a) and 7(b). This function typically has one parameter k. As stated before, the perspective projection, which can be expressed as r = k tan θ, is not suitable for modeling cameras with a large FOV. Several other projection models exist [14]: the stereographic projection r = k tan(θ/2),
the equidistant projection r = kθ, the equisolid angle projection r = k sin(θ/2), and the sine law projection r = k sin θ.

Figure 4 shows graphs of the above projection functions for the angle θ varying from 0 to 180 degrees. The vertical axis of the graph represents the value of the respective model function for the corresponding angle θ. All functions were scaled so that they have a value of 1 at θ = 50°. The figure illustrates the development of the projection functions with varying θ. It can be noticed that the perspective projection cannot cope with angles θ near 90°. It can also be noticed that most of the models can be approximated by the equidistant projection for smaller angles θ. However, when the FOV of the lens increases, the models differ significantly. In the next section we describe a procedure for selecting the appropriate model for the Nikon FC-E8 converter.

Figure 4. Values of the projection functions (perspective, stereographic, sine law, equisolid angle, equidistant) for the angle θ in the range of 0 to 180 degrees. All functions were scaled so that they have a value of 1 at θ = 50°.

4 Model Determination

The model describing the mapping between the pixels and the light rays differs from lens to lens. Some lenses are manufactured so that they follow a certain model; for other lenses this information is unavailable, which is the case of the Nikon FC-E8 converter. Also, the assumption that the light rays emanate from one point does not have to hold for some lenses. This requires additional parameters of the model determining the position of the ray origin. All the above situations can be incorporated into our framework. We demonstrate the procedure on one particular lens.

In order to derive the projection model for the Nikon FC-E8, we investigated how light rays with a constant increment in the angle θ are imaged on the image plane. We performed the following experiment.
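The projection functions listed above can be compared numerically. The following sketch (with k = 1 for every model; the sample angles are illustrative, not the data behind Figure 4) evaluates each candidate:

```python
import numpy as np

# The five radially symmetric projection models of Section 3, with k = 1.
models = {
    "perspective":   lambda t: np.tan(t),
    "stereographic": lambda t: np.tan(t / 2.0),
    "equidistant":   lambda t: t,
    "equisolid":     lambda t: np.sin(t / 2.0),
    "sine law":      lambda t: np.sin(t),
}

# Evaluate below 90 degrees; the perspective model diverges at 90 degrees,
# which is why it cannot describe a lens with a FOV of 180 degrees or more.
theta = np.radians([10.0, 30.0, 60.0, 89.0])
for name, f in models.items():
    print(f"{name:13s}", np.round(f(theta), 3))
```

For small θ all models stay close to the equidistant line r = kθ, while near 90° the perspective value explodes, matching the behavior described for Figure 4.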
The camera was observing a cylinder with circles seen by light rays with a known angle θ, as depicted in Figure 5(a). These circles correspond to an increment in the angle θ set to 5° for rays imaged to the peripheral parts of the image (θ = 90°..70°) and to 10° for rays imaged to the central part of the image. Figure 5(b) shows the grid which, after wrapping around a cylinder, produced the circles. Figure 5(c) shows an image of this cylinder. It can be seen that the circles are imaged to approximate circles and that a constant increment in the angles results in a slowly increasing increment in the radii of the circles in the image. Note that the circles at the border have an angular distance of 5°, while the distance near the center is 10°.

The camera has to be positioned so that its optical axis is identical with the rotational axis of the cylinder, and the circle corresponding to θ = 90° must be imaged precisely. In our case, we used the assumption of radial symmetry and the known field of view of the lens for manual positioning of the lens with respect to the calibration cylinder. This setup is sufficient for the model determination; however, for a full camera calibration, the parameters determining this positioning (rotation and translation with respect to the scene coordinate system) have to be included in the computation, as described in Section 5.

We fitted all of the models mentioned in the previous section to the detected projections of the light rays into the image. The model fit error was the Euclidean distance between the pixels observed in the image and the pixel coordinates predicted by the model function. The stereographic projection with two parameters, r = a tan(θ/b), provided the best fit, but there was still a systematic error, see Figure 6. Therefore, we extended the model, which resulted in a combination of the stereographic projection with the equisolid angle projection.
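The fitting step above can be sketched with a standard least-squares routine. This is a hypothetical reconstruction, not the authors' code: the ground-truth parameters and noise level are invented, and `scipy.optimize.curve_fit` stands in for the Levenberg-Marquardt implementation used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def stereographic(theta, a, b):
    # Two-parameter stereographic model: r = a * tan(theta / b).
    return a * np.tan(theta / b)

def combined(theta, a, b, c, d):
    # Extended model of Eq. (3): stereographic plus an equisolid-angle term.
    return stereographic(theta, a, b) + c * np.sin(theta / d)

# Synthetic "detected" radii from invented ground-truth parameters plus
# noise; real data would be the circle radii found in the cylinder image.
rng = np.random.default_rng(0)
theta = np.radians(np.linspace(5.0, 90.0, 30))
r_obs = combined(theta, 300.0, 2.2, 20.0, 1.0) + rng.normal(0.0, 0.2, 30)

p_st, _ = curve_fit(stereographic, theta, r_obs, p0=(300.0, 2.0))
p_cb, _ = curve_fit(combined, theta, r_obs, p0=(*p_st, 0.0, 1.0), maxfev=10000)

rms_st = np.sqrt(np.mean((stereographic(theta, *p_st) - r_obs) ** 2))
rms_cb = np.sqrt(np.mean((combined(theta, *p_cb) - r_obs) ** 2))
print(rms_st, rms_cb)  # the combined model absorbs the systematic error
```

Note that the combined fit is started from the stereographic optimum with c = 0 and d = 1, mirroring the initialization described in Section 5.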
This improved model is identified by four parameters, see Equation (3), and provides the best fit with no systematic error, as depicted in Figure 6. This holds even for the situation where the parameters were estimated using only one half of the detected points and then used to predict the other half of the points. The prediction error was less than half a pixel in the worst case. An initial fit of the parameters is discussed in the following section.

5 Complete Camera Model

Under the above observations, we can formulate the model of the camera. Provided with a scene point X = (x, y, z)^T, we are able to compute its coordinates X̃ = (x̃, ỹ, z̃)^T in the camera centered coordinate system:

X̃ = RX + T,   (2)
(a) (b) (c) Figure 5. (a) Camera observing a cylinder with a calibration pattern (b) wrapped around the cylinder. Note that the lines correspond to light rays with an increment in the angle θ set to 5° (the bottom 4 intervals) and 10° (the 5 upper intervals). (c) Image of circles with radii set to a tangent of a constantly incremented angle results in concentric circles with an almost constant increment in radii in the image.

Figure 6. Model fit error for the stereographic (a tan(θ/b)) and the combined stereographic and equisolid angle (a tan(θ/b) + c sin(θ/d)) projection.

where R represents a rotation and T stands for a translation. The standard rotation matrix R has three degrees of freedom and T is expressed by the vector T = (t_1, t_2, t_3)^T. Then the angle θ, see Figure 7(a), between the light ray through the point X̃ and the optical axis can be computed. This angle determines the distance r of the pixel from the center of the image:

r = a tan(θ/b) + c sin(θ/d),   (3)

where a, b, c, and d are the parameters of the projection model. Together with the angle ϕ between the light ray reprojected to the xy plane and the x axis of the camera centered coordinate system, the distance r is sufficient to calculate the pixel coordinates u′ = (u′, v′, 1) in some orthogonal image coordinate system, see Figure 7(b), as

u′ = r cos ϕ,   (4)
v′ = r sin ϕ.   (5)

Figure 7. (a) Camera coordinate system and its relationship to the angles θ and ϕ. (b) From polar coordinates (r, ϕ) to orthogonal coordinates (u′, v′).

In this case, the vector u′ does not represent a light ray from the camera center as in a pinhole camera model; instead, it is just a vector augmented by 1 so that we can write an affine transform of the image points compactly by one matrix multiplication (6). Real pixel coordinates u = (u, v, 1) can be obtained as

u = K_1 u′.   (6)

The complete camera model parameters, including the extrinsic and intrinsic parameters, can be recovered from the measured coordinates of calibration points by minimizing

J(R, T, β, u_0, v_0, a, b, c, d) = Σ_{i=1}^{N} ||ũ_i − u_i||²,   (7)
(a) (b) (c) Figure 8. (a) Experimental setup for the half cylinder experiment. (b) One of the images. (c) The calibration target is located 90° left from the camera; note the significant distortion.

where ||·|| denotes the Euclidean norm, N is the number of points, ũ_i are the coordinates of points measured in the image, and u_i are their coordinates reprojected by the camera model. A MATLAB implementation of the Levenberg-Marquardt [15] minimization was employed in order to minimize the objective function (7). The rotation matrix R and the translation vector T, see (2), each have three degrees of freedom. The image center, the scale ratio of the image axes β, and the four parameters of the mapping between the light rays and pixels (3) give 7 intrinsic parameters. This yields a total of 13 parameters of our model.

When minimizing the objective function (7), we initialize the image center to the center of the circle (ellipse) surrounding the image, see Figure 5. This is possible because the Nikon FC-E8 lens is a so called circular fish eye, where this circle is visible. Assuming that the mapping between the light rays and pixels (3) is radially symmetric, the center of this circle should lie approximately at the image center. The parameters of the model were initially set to an ideal stereographic projection, which means that b = 2, c = 0, d = 1, and a was initialized using the ratio between the coordinates of points corresponding to the light rays with the angle θ equal to 0 and 90 degrees. The value of the β parameter was initialized to 1. The initial camera position was set to the center of the scene coordinate system with the z axis coincident with the optical axis of the camera.

6 Experimental Results

We performed two calibration experiments. In the first experiment, the calibration points were located on a cylinder around the optical axis and the camera was looking down into that cylinder, see Figure 5(a). The points had the same depth for the same value of θ.
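Putting Equations (2)-(6) together, the forward projection of a scene point can be sketched as follows. The rotation, translation, calibration matrix, and model parameters below are hypothetical placeholders, not calibrated values:

```python
import numpy as np

def project(X, R, T, K, a, b, c, d):
    """Spherical-retina forward projection, Eqs. (2)-(6)."""
    Xc = R @ X + T                                     # Eq. (2)
    theta = np.arccos(Xc[2] / np.linalg.norm(Xc))      # angle to the optical axis
    phi = np.arctan2(Xc[1], Xc[0])                     # azimuth in the xy plane
    r = a * np.tan(theta / b) + c * np.sin(theta / d)  # Eq. (3)
    u_ideal = np.array([r * np.cos(phi), r * np.sin(phi), 1.0])  # Eqs. (4)-(5)
    return K @ u_ideal                                 # Eq. (6)

R = np.eye(3)                      # camera aligned with the scene frame
T = np.zeros(3)
K = np.array([[1.0, 0.0, 320.0],   # unit aspect ratio, center (320, 240)
              [0.0, 1.0, 240.0],
              [0.0, 0.0, 1.0]])
# A point on the x axis lies 90 degrees off-axis, i.e. on the theta = pi/2 rim.
p = project(np.array([1.0, 0.0, 0.0]), R, T, K, 300.0, 2.0, 0.0, 1.0)
print(np.round(p, 1))  # [620. 240.   1.]   since r = 300 * tan(pi/4) = 300
```

With calibrated parameters, the residuals between such reprojections and the detected points are exactly the terms summed in the objective function (7).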
The second experiment employed a 3D calibration object with points located on a half cylinder. The object was realized such that a line of calibration points was rotated on a turntable, as depicted in Figure 8. Here, the points with the same angle θ had different depths.

The first experimental setup was also used to determine the projection model, as described in Section 4. A total of 72 points was manually detected. One half of the circles of points was used for the estimation of the parameters, while the second half was used to compute the reprojection errors. A similar approach was used in the second experiment, where the number of calibration points was 285. Again, all points were detected manually.

Figure 9 shows the reprojection of the points, computed with the parameters estimated during the calibration, compared with their coordinates detected in the image. The lines representing the errors between the respective points are scaled 20 times to make the distances clearly visible. The same error is shown in Figure 10 for all the points. It can be noticed that the error is small compared to the precision of manual detection, where the images of some lines spanned several pixels while others were too far to be imaged as continuous circles, see Figure 5(c). Therefore we performed another experiment, where the calibration points were checkerboard patterns.

Similar graphs illustrate the results of the second experiment. Figure 11 shows the comparison between the reprojected points and their coordinates detected in the image. Again, the lines representing the distance between these two sets of points are scaled 20 times. Figure 12(a) depicts the reprojection error for each calibration point. Note that the error is bigger for points in the corners of the image, which is natural, since the resolution there is higher and therefore one pixel corresponds to a smaller change in the angle θ.
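The resolution argument in the last sentence can be checked numerically: under the combined model of Equation (3), the derivative dr/dθ grows with θ, so a one-pixel step near the image border corresponds to a smaller change in θ than near the center. A small sketch with hypothetical parameter values:

```python
import numpy as np

# Combined projection model of Eq. (3) with invented parameters.
a, b, c, d = 300.0, 2.0, 20.0, 1.0
r = lambda t: a * np.tan(t / b) + c * np.sin(t / d)

# Central-difference estimate of dr/dtheta (pixels per radian).
theta = np.radians([10.0, 45.0, 85.0])
h = 1e-6
drdtheta = (r(theta + h) - r(theta - h)) / (2.0 * h)
print(np.round(drdtheta, 1))  # grows toward the border (large theta)
```

The increasing pixels-per-radian density at large θ is exactly why corner points show larger pixel reprojection errors for the same angular accuracy.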
To verify the randomness of the reprojection error, we performed the following test. Because the points in the image were detected manually, we suppose that the detection
Figure 9. Reprojection of the points for the cylinder experiment. The distances between the reprojected and the detected points are scaled 20 times.

Figure 11. Reprojection of the points for the half cylinder experiment. The distances between the reprojected and the detected points are scaled 20 times.

Figure 10. Reprojection error for each point for the cylinder experiment.

Figure 12. (a) Reprojection error for each point for the half cylinder experiment. (b) Histogram of the sums of squares of normalized detection errors together with a χ² density marked by the curve.

error has a normal distribution in both image axes. Therefore, a sum of squares of these errors, normalized to unit variance, should be described by a χ² distribution [17]. Figure 12(b) shows a histogram of the detection errors together with a graph of the χ² density. Note that the χ² distribution describes the calibration error distribution well.

Finally, we show that we are able to select the pixels in the image which correspond to the light rays lying in one plane passing through the camera center. The angle θ between these rays and the optical axis equals π/2, and because this situation is circularly symmetric, the corresponding pixels should form a circle centered at the image center (u_0, v_0) obtained by minimizing (7). The radius of the circle is determined from (3) for θ = π/2 and a, b, c, and d obtained by minimizing (7). Due to the difference in the scale of the image axes β, see Equation (1), the pixels form an ellipse, while the image center again corresponds to the center of the ellipse. As noted before, these light rays lie in one plane, which is crucial for the employment of the proposed sensor in a realization of the 360° x 360° mosaic [16].
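The selection of the θ = π/2 pixels can be sketched directly from the calibrated model. The parameter values here are hypothetical placeholders; in a real calibration, a, b, c, d, β, and (u_0, v_0) would come from minimizing (7):

```python
import numpy as np

# Hypothetical calibrated parameters (placeholders, not the paper's values).
a, b, c, d = 300.0, 2.0, 0.0, 1.0
beta, u0, v0 = 1.1, 320.0, 240.0

# Radius of the theta = pi/2 circle in ideal coordinates, from Eq. (3).
r90 = a * np.tan(np.pi / (2.0 * b)) + c * np.sin(np.pi / (2.0 * d))

# Sample the rim; the axis scale beta of Eq. (1) turns the circle into an
# ellipse in real pixel coordinates.
phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
u = u0 + r90 * np.cos(phi)
v = beta * (v0 + r90 * np.sin(phi))
print(round(float(r90), 1), round(float(u.max() - u.min()), 1),
      round(float(v.max() - v.min()), 1))
```

Sampling the image along this ellipse for each rotation step of the camera yields mosaic columns whose corresponding points share image rows.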
The selection of the proper pixels (the ellipse) assures that the corresponding points in the mosaic pair will lie on the same image rows, which simplifies correspondence search algorithms. There are two possible approaches to the selection of light rays with a specific angle θ. The one originally proposed in [16] uses mirrors, see Figure 13(a). The camera-mirror rig setup must be performed very precisely to get reliable results. Moreover, focusing on the mirror is not easy, because one has to focus on a virtual scene, not on the mirror, nor on the real scene. Therefore, we propose another approach employing optics with a FOV larger than 180°, depicted in Figure 13(b).

Figure 14 shows the right and the left eye mosaic, respectively. Note the significant disparity of objects in the scene. Enlarged parts of the mosaic showing one corresponding point can be found in Figures 15(a) and 15(c) for the right mosaic and Figures 15(b) and 15(d) for the left mosaic. These figures represent the worst case, where the difference
(a) (b) Figure 13. Two possible realizations of the 360° x 360° mosaic: (a) a telecentric camera and a conical mirror, (b) a central camera with the Nikon fish eye converter.

between the situation where the corresponding points lie on the same image row was the biggest. Figures 15(a) and 15(b) were acquired using a conical mirror observed by a telecentric lens, while Figures 15(c) and 15(d) are from a mosaic acquired employing the Nikon FC-E8 lens.

Figure 14. Right (upper) and left (lower) eye mosaics.

Notice that in the upper row, where images taken with the use of the mirror are shown, the images are more blurry and the points do not lie on the same image row; this difference is quite significant, although some other points actually were on the same image row. This is in contrast with the images in the bottom row, which were taken with the Nikon FC-E8 converter, where all corresponding points lie on the same image row and the images are sharper. This condition is satisfied for all pixels with a tolerance smaller than 0.5 pixel.

7 Conclusion

We have proposed a camera model for lenses with a FOV larger than 180°. The model is based on the employment of a spherical retina and a radially symmetric mapping between the incoming light rays and the pixels in the image. We proposed a method for the identification of the mapping function, which led to a combination of two mapping functions. A complete calibration procedure, involving a single image of a 3D calibration target, was then presented. Finally, we demonstrated the theory in two experiments and one application, all using the Nikon FC-E8 fish eye converter. We believe that the ability to correctly describe and calibrate the Nikon FC-E8 fish eye lens converter opens a way to many new applications of very wide angle lenses.

References

[1] P. Baker, C. Fermüller, Y. Aloimonos, and R. Pless. A spherical eye from multiple cameras (makes better models of the world). In A. Jacobs and T.
Baldwin, editors, Proceedings of the CVPR 2001 conference, volume 1, Los Alamitos, CA, USA, Dec. 2001. IEEE Computer Society.
[2] S. Baker and S. K. Nayar. A theory of single-viewpoint catadioptric image formation. International Journal of Computer Vision, 35(2), 1999.
[3] A. Basu and S. Licardie. Alternative models for fish-eye lenses. Pattern Recognition Letters, 16(4), 1995.
[4] S. S. Beauchemin, R. Bajcsy, and G. G. A unified procedure for calibrating intrinsic parameters of fish-eye lenses. In Vision Interface (VI 99), May 1999.
[5] R. Benosman, E. Deforas, and J. Devars. A new catadioptric sensor for the panoramic vision of mobile robots. In IEEE Workshop on Omnidirectional Vision (OMNIVIS 2000), Hilton Head, South Carolina, June 2000.
[6] A. M. Bruckstein and T. J. Richardson. Omniview cameras with curved surface mirrors. In IEEE Workshop on Omnidirectional Vision (OMNIVIS 2000), Hilton Head, South Carolina, pages 79-84, June 2000.
[7] J. Chai, S. B. Kang, and H.-Y. Shum. Rendering with nonuniform approximate concentric mosaics. In Second Workshop on Structure from Multiple Images of Large Scale Environments (SMILE), July 2000.
[8] Nikon Corp. Nikon www pages, 2002.
[9] S. Derrien and K. Konolige. Approximating a single effective viewpoint in panoramic imaging devices. In IEEE Workshop on Omnidirectional Vision (OMNIVIS 2000), Hilton Head, South Carolina, pages 85-90, June 2000.
[10] C. Geyer and K. Daniilidis. A unifying theory for central panoramic systems and practical implications. In D. Vernon, editor, European Conference on Computer Vision ECCV 2000, Dublin, Ireland, volume 2, June-July 2000.
[11] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge, UK, 2000.
(a) (b) (c) (d) Figure 15. Detail of a corresponding pair of points, (a) and (c) in the right mosaic and (b) and (d) in the left mosaic, representing the difference from the ideal case, where the corresponding points lie on the same image row. The upper row is the worst case acquired using the mirror, the bottom row is for the Nikon FC-E8 fish eye converter. Note the blurred images and that the points do not lie on the same image row in the case of the mirror, while the lens provides focused and aligned images.

[12] R. A. Hicks and R. Bajcsy. Catadioptric sensors that approximate wide-angle perspective projections. In IEEE Workshop on Omnidirectional Vision (OMNIVIS 2000), Hilton Head, South Carolina, pages 97-103, June 2000.
[13] H. Hua and N. Ahuja. A high-resolution panoramic camera. In A. Jacobs and T. Baldwin, editors, Proceedings of the CVPR 2001 conference, volume 1, Los Alamitos, CA, USA, Dec. 2001. IEEE Computer Society.
[14] M. M. Fleck. Perspective projection: the wrong imaging model. Technical Report TR 95-01, Comp. Sci., U. Iowa, 1995.
[15] J. Moré. The Levenberg-Marquardt algorithm: implementation and theory. In G. A. Watson, editor, Numerical Analysis, number 630 in Lecture Notes in Mathematics. Springer Verlag, 1978.
[16] S. K. Nayar and A. Karmarkar. 360 x 360 mosaics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Hilton Head, South Carolina, volume 2, June 2000.
[17] A. Papoulis. Probability and Statistics. Prentice-Hall, 1990.
[18] S. Peleg and M. Ben-Ezra. Stereo panorama with a single camera. In IEEE Conference on Computer Vision and Pattern Recognition, June 1999.
[19] H.-Y. Shum, A. Kalai, and S. M. Seitz. Omnivergent stereo. In Proc. of the International Conference on Computer Vision (ICCV 99), Kerkyra, Greece, volume 1, pages 22-29, September 1999.
[20] D. E. Stevenson and M. M. Fleck. Robot aerobics: four easy steps to a more flexible calibration. In International Conference on Computer Vision, pages 34-39, 1995.
[21] T. Svoboda, T. Pajdla, and V. Hlaváč.
Epipolar geometry for panoramic cameras. In H. Burkhardt and B. Neumann, editors, the fifth European Conference on Computer Vision, Freiburg, Germany, number 1406 in Lecture Notes in Computer Science, Berlin, Germany, June 1998.
[22] R. Swaminathan and S. Nayar. Non-metric calibration of wide-angle lenses. In DARPA Image Understanding Workshop, 1997.
[23] Y. Xiong and K. Turkowski. Creating image based VR using a self-calibrating fisheye lens. In IEEE Computer Vision and Pattern Recognition (CVPR97), 1997.
Feature Extraction and Pattern Recognition from Fisheye Images in the Spatial Domain Konstantinos K. Delibasis 1 and Ilias Maglogiannis 2 1 Dept. of Computer Science and Biomedical Informatics, Univ. of
More informationThis is an author-deposited version published in: Eprints ID: 3672
This is an author-deposited version published in: http://oatao.univ-toulouse.fr/ Eprints ID: 367 To cite this document: ZHANG Siyuan, ZENOU Emmanuel. Optical approach of a hypercatadioptric system depth
More informationColorado School of Mines. Computer Vision. Professor William Hoff Dept of Electrical Engineering &Computer Science.
Professor William Hoff Dept of Electrical Engineering &Computer Science http://inside.mines.edu/~whoff/ 1 Sensors and Image Formation Imaging sensors and models of image formation Coordinate systems Digital
More informationImage Processing & Projective geometry
Image Processing & Projective geometry Arunkumar Byravan Partial slides borrowed from Jianbo Shi & Steve Seitz Color spaces RGB Red, Green, Blue HSV Hue, Saturation, Value Why HSV? HSV separates luma,
More informationCapturing Omni-Directional Stereoscopic Spherical Projections with a Single Camera
Capturing Omni-Directional Stereoscopic Spherical Projections with a Single Camera Paul Bourke ivec @ University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009 Australia. paul.bourke@uwa.edu.au
More informationMIT CSAIL Advances in Computer Vision Fall Problem Set 6: Anaglyph Camera Obscura
MIT CSAIL 6.869 Advances in Computer Vision Fall 2013 Problem Set 6: Anaglyph Camera Obscura Posted: Tuesday, October 8, 2013 Due: Thursday, October 17, 2013 You should submit a hard copy of your work
More informationLecture 02 Image Formation 1
Institute of Informatics Institute of Neuroinformatics Lecture 02 Image Formation 1 Davide Scaramuzza http://rpg.ifi.uzh.ch 1 Lab Exercise 1 - Today afternoon Room ETH HG E 1.1 from 13:15 to 15:00 Work
More informationImage Formation. World Optics Sensor Signal. Computer Vision. Introduction to. Light (Energy) Source. Surface Imaging Plane. Pinhole Lens.
Image Formation Light (Energy) Source Surface Imaging Plane Pinhole Lens World Optics Sensor Signal B&W Film Color Film TV Camera Silver Density Silver density in three color layers Electrical Today Optics:
More informationFast Focal Length Solution in Partial Panoramic Image Stitching
Fast Focal Length Solution in Partial Panoramic Image Stitching Kirk L. Duffin Northern Illinois University duffin@cs.niu.edu William A. Barrett Brigham Young University barrett@cs.byu.edu Abstract Accurate
More informationMULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS
INFOTEH-JAHORINA Vol. 10, Ref. E-VI-11, p. 892-896, March 2011. MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS Jelena Cvetković, Aleksej Makarov, Sasa Vujić, Vlatacom d.o.o. Beograd Abstract -
More informationProjection. Announcements. Müller-Lyer Illusion. Image formation. Readings Nalwa 2.1
Announcements Mailing list (you should have received messages) Project 1 additional test sequences online Projection Readings Nalwa 2.1 Müller-Lyer Illusion Image formation object film by Pravin Bhat http://www.michaelbach.de/ot/sze_muelue/index.html
More informationImage Formation: Camera Model
Image Formation: Camera Model Ruigang Yang COMP 684 Fall 2005, CS684-IBMR Outline Camera Models Pinhole Perspective Projection Affine Projection Camera with Lenses Digital Image Formation The Human Eye
More informationPanoramic imaging. Ixyzϕθλt. 45 degrees FOV (normal view)
Camera projections Recall the plenoptic function: Panoramic imaging Ixyzϕθλt (,,,,,, ) At any point xyz,, in space, there is a full sphere of possible incidence directions ϕ, θ, covered by 0 ϕ 2π, 0 θ
More informationExtended View Toolkit
Extended View Toolkit Peter Venus Alberstrasse 19 Graz, Austria, 8010 mail@petervenus.de Cyrille Henry France ch@chnry.net Marian Weger Krenngasse 45 Graz, Austria, 8010 mail@marianweger.com Winfried Ritsch
More informationColour correction for panoramic imaging
Colour correction for panoramic imaging Gui Yun Tian Duke Gledhill Dave Taylor The University of Huddersfield David Clarke Rotography Ltd Abstract: This paper reports the problem of colour distortion in
More informationPrinceton University COS429 Computer Vision Problem Set 1: Building a Camera
Princeton University COS429 Computer Vision Problem Set 1: Building a Camera What to submit: You need to submit two files: one PDF file for the report that contains your name, Princeton NetID, all the
More informationDual-fisheye Lens Stitching for 360-degree Imaging & Video. Tuan Ho, PhD. Student Electrical Engineering Dept., UT Arlington
Dual-fisheye Lens Stitching for 360-degree Imaging & Video Tuan Ho, PhD. Student Electrical Engineering Dept., UT Arlington Introduction 360-degree imaging: the process of taking multiple photographs and
More informationProjection. Readings. Szeliski 2.1. Wednesday, October 23, 13
Projection Readings Szeliski 2.1 Projection Readings Szeliski 2.1 Müller-Lyer Illusion by Pravin Bhat Müller-Lyer Illusion by Pravin Bhat http://www.michaelbach.de/ot/sze_muelue/index.html Müller-Lyer
More informationDigital Image Processing
Digital Image Processing Digital Imaging Fundamentals Christophoros Nikou cnikou@cs.uoi.gr Images taken from: R. Gonzalez and R. Woods. Digital Image Processing, Prentice Hall, 2008. Digital Image Processing
More informationFolded catadioptric panoramic lens with an equidistance projection scheme
Folded catadioptric panoramic lens with an equidistance projection scheme Gyeong-il Kweon, Kwang Taek Kim, Geon-hee Kim, and Hyo-sik Kim A new formula for a catadioptric panoramic lens with an equidistance
More informationRemoving Temporal Stationary Blur in Route Panoramas
Removing Temporal Stationary Blur in Route Panoramas Jiang Yu Zheng and Min Shi Indiana University Purdue University Indianapolis jzheng@cs.iupui.edu Abstract The Route Panorama is a continuous, compact
More informationDigital Image Fundamentals. Digital Image Processing. Human Visual System. Contents. Structure Of The Human Eye (cont.) Structure Of The Human Eye
Digital Image Processing 2 Digital Image Fundamentals Digital Imaging Fundamentals Christophoros Nikou cnikou@cs.uoi.gr Images taken from: R. Gonzalez and R. Woods. Digital Image Processing, Prentice Hall,
More informationDigital Image Fundamentals. Digital Image Processing. Human Visual System. Contents. Structure Of The Human Eye (cont.) Structure Of The Human Eye
Digital Image Processing 2 Digital Image Fundamentals Digital Imaging Fundamentals Christophoros Nikou cnikou@cs.uoi.gr Those who wish to succeed must ask the right preliminary questions Aristotle Images
More informationA moment-preserving approach for depth from defocus
A moment-preserving approach for depth from defocus D. M. Tsai and C. T. Lin Machine Vision Lab. Department of Industrial Engineering and Management Yuan-Ze University, Chung-Li, Taiwan, R.O.C. E-mail:
More informationProjection. Projection. Image formation. Müller-Lyer Illusion. Readings. Readings. Let s design a camera. Szeliski 2.1. Szeliski 2.
Projection Projection Readings Szeliski 2.1 Readings Szeliski 2.1 Müller-Lyer Illusion Image formation object film by Pravin Bhat http://www.michaelbach.de/ot/sze_muelue/index.html Let s design a camera
More informationEFFECTS OF PHASE AND AMPLITUDE ERRORS ON QAM SYSTEMS WITH ERROR- CONTROL CODING AND SOFT DECISION DECODING
Clemson University TigerPrints All Theses Theses 8-2009 EFFECTS OF PHASE AND AMPLITUDE ERRORS ON QAM SYSTEMS WITH ERROR- CONTROL CODING AND SOFT DECISION DECODING Jason Ellis Clemson University, jellis@clemson.edu
More informationComputer Vision Slides curtesy of Professor Gregory Dudek
Computer Vision Slides curtesy of Professor Gregory Dudek Ioannis Rekleitis Why vision? Passive (emits nothing). Discreet. Energy efficient. Intuitive. Powerful (works well for us, right?) Long and short
More informationDigital Image Processing
Digital Image Processing Digital Imaging Fundamentals Christophoros Nikou cnikou@cs.uoi.gr Images taken from: R. Gonzalez and R. Woods. Digital Image Processing, Prentice Hall, 2008. Digital Image Processing
More informationCS535 Fall Department of Computer Science Purdue University
Omnidirectional Camera Models CS535 Fall 2010 Daniel G Aliaga Daniel G. Aliaga Department of Computer Science Purdue University A little bit of history Omnidirectional cameras are also called panoramic
More informationUsing Line and Ellipse Features for Rectification of Broadcast Hockey Video
Using Line and Ellipse Features for Rectification of Broadcast Hockey Video Ankur Gupta, James J. Little, Robert J. Woodham Laboratory for Computational Intelligence (LCI) The University of British Columbia
More informationCPSC 425: Computer Vision
1 / 55 CPSC 425: Computer Vision Instructor: Fred Tung ftung@cs.ubc.ca Department of Computer Science University of British Columbia Lecture Notes 2015/2016 Term 2 2 / 55 Menu January 7, 2016 Topics: Image
More informationMEM: Intro to Robotics. Assignment 3I. Due: Wednesday 10/15 11:59 EST
MEM: Intro to Robotics Assignment 3I Due: Wednesday 10/15 11:59 EST 1. Basic Optics You are shopping for a new lens for your Canon D30 digital camera and there are lots of lens options at the store. Your
More informationDynamically Reparameterized Light Fields & Fourier Slice Photography. Oliver Barth, 2009 Max Planck Institute Saarbrücken
Dynamically Reparameterized Light Fields & Fourier Slice Photography Oliver Barth, 2009 Max Planck Institute Saarbrücken Background What we are talking about? 2 / 83 Background What we are talking about?
More informationOn Cosine-fourth and Vignetting Effects in Real Lenses*
On Cosine-fourth and Vignetting Effects in Real Lenses* Manoj Aggarwal Hong Hua Narendra Ahuja University of Illinois at Urbana-Champaign 405 N. Mathews Ave, Urbana, IL 61801, USA { manoj,honghua,ahuja}@vision.ai.uiuc.edu
More informationCameras. CSE 455, Winter 2010 January 25, 2010
Cameras CSE 455, Winter 2010 January 25, 2010 Announcements New Lecturer! Neel Joshi, Ph.D. Post-Doctoral Researcher Microsoft Research neel@cs Project 1b (seam carving) was due on Friday the 22 nd Project
More informationPanorama Photogrammetry for Architectural Applications
Panorama Photogrammetry for Architectural Applications Thomas Luhmann University of Applied Sciences ldenburg Institute for Applied Photogrammetry and Geoinformatics fener Str. 16, D-26121 ldenburg, Germany
More informationAbstract. 1. Introduction and Motivation. 3. Methods. 2. Related Work Omni Directional Stereo Imaging
Abstract This project aims to create a camera system that captures stereoscopic 360 degree panoramas of the real world, and a viewer to render this content in a headset, with accurate spatial sound. 1.
More informationTrue Single View Point Cone Mirror Omni-Directional Catadioptric System 1
True Single View Point Cone Mirror Omni-Directional Catadioptric System 1 Shih-Schön Lin, Ruzena ajcsy GRASP Laoratory, Computer and Information Science Department University of Pennsylvania, shschon@grasp.cis.upenn.edu,
More informationSequential Algorithm for Robust Radiometric Calibration and Vignetting Correction
Sequential Algorithm for Robust Radiometric Calibration and Vignetting Correction Seon Joo Kim and Marc Pollefeys Department of Computer Science University of North Carolina Chapel Hill, NC 27599 {sjkim,
More informationOpto Engineering S.r.l.
TUTORIAL #1 Telecentric Lenses: basic information and working principles On line dimensional control is one of the most challenging and difficult applications of vision systems. On the other hand, besides
More informationLENSES. INEL 6088 Computer Vision
LENSES INEL 6088 Computer Vision Digital camera A digital camera replaces film with a sensor array Each cell in the array is a Charge Coupled Device light-sensitive diode that converts photons to electrons
More informationExtended depth-of-field in Integral Imaging by depth-dependent deconvolution
Extended depth-of-field in Integral Imaging by depth-dependent deconvolution H. Navarro* 1, G. Saavedra 1, M. Martinez-Corral 1, M. Sjöström 2, R. Olsson 2, 1 Dept. of Optics, Univ. of Valencia, E-46100,
More informationPhotographing Long Scenes with Multiviewpoint
Photographing Long Scenes with Multiviewpoint Panoramas A. Agarwala, M. Agrawala, M. Cohen, D. Salesin, R. Szeliski Presenter: Stacy Hsueh Discussant: VasilyVolkov Motivation Want an image that shows an
More informationCatadioptric Omnidirectional Camera *
Catadioptric Omnidirectional Camera * Shree K. Nayar Department of Computer Science, Columbia University New York, New York 10027 Email: nayar@cs.columbia.edu Abstract Conventional video cameras have limited
More informationSimulated Programmable Apertures with Lytro
Simulated Programmable Apertures with Lytro Yangyang Yu Stanford University yyu10@stanford.edu Abstract This paper presents a simulation method using the commercial light field camera Lytro, which allows
More informationProc. of DARPA Image Understanding Workshop, New Orleans, May Omnidirectional Video Camera. Shree K. Nayar
Proc. of DARPA Image Understanding Workshop, New Orleans, May 1997 Omnidirectional Video Camera Shree K. Nayar Department of Computer Science, Columbia University New York, New York 10027 Email: nayar@cs.columbia.edu
More informationA Geometric Correction Method of Plane Image Based on OpenCV
Sensors & Transducers 204 by IFSA Publishing, S. L. http://www.sensorsportal.com A Geometric orrection Method of Plane Image ased on OpenV Li Xiaopeng, Sun Leilei, 2 Lou aiying, Liu Yonghong ollege of
More informationCSE 527: Introduction to Computer Vision
CSE 527: Introduction to Computer Vision Week 2 - Class 2: Vision, Physics, Cameras September 7th, 2017 Today Physics Human Vision Eye Brain Perspective Projection Camera Models Image Formation Digital
More informationImprovement of Accuracy in Remote Gaze Detection for User Wearing Eyeglasses Using Relative Position Between Centers of Pupil and Corneal Sphere
Improvement of Accuracy in Remote Gaze Detection for User Wearing Eyeglasses Using Relative Position Between Centers of Pupil and Corneal Sphere Kiyotaka Fukumoto (&), Takumi Tsuzuki, and Yoshinobu Ebisawa
More informationSimultaneous geometry and color texture acquisition using a single-chip color camera
Simultaneous geometry and color texture acquisition using a single-chip color camera Song Zhang *a and Shing-Tung Yau b a Department of Mechanical Engineering, Iowa State University, Ames, IA, USA 50011;
More informationLecture 2: Geometrical Optics. Geometrical Approximation. Lenses. Mirrors. Optical Systems. Images and Pupils. Aberrations.
Lecture 2: Geometrical Optics Outline 1 Geometrical Approximation 2 Lenses 3 Mirrors 4 Optical Systems 5 Images and Pupils 6 Aberrations Christoph U. Keller, Leiden Observatory, keller@strw.leidenuniv.nl
More informationPanoramic Vision: Sensors, Theory, And Applications (Monographs In Computer Science) READ ONLINE
Panoramic Vision: Sensors, Theory, And Applications (Monographs In Computer Science) READ ONLINE If you are searching for a ebook Panoramic Vision: Sensors, Theory, and Applications (Monographs in Computer
More informationDesign of Temporally Dithered Codes for Increased Depth of Field in Structured Light Systems
Design of Temporally Dithered Codes for Increased Depth of Field in Structured Light Systems Ricardo R. Garcia University of California, Berkeley Berkeley, CA rrgarcia@eecs.berkeley.edu Abstract In recent
More informationExperiment O11e Optical Polarisation
Fakultät für Physik und Geowissenschaften Physikalisches Grundpraktikum Experiment O11e Optical Polarisation Tasks 0. During preparation for the laboratory experiment, familiarize yourself with the function
More informationReal-Time Scanning Goniometric Radiometer for Rapid Characterization of Laser Diodes and VCSELs
Real-Time Scanning Goniometric Radiometer for Rapid Characterization of Laser Diodes and VCSELs Jeffrey L. Guttman, John M. Fleischer, and Allen M. Cary Photon, Inc. 6860 Santa Teresa Blvd., San Jose,
More informationRecognizing Panoramas
Recognizing Panoramas Kevin Luo Stanford University 450 Serra Mall, Stanford, CA 94305 kluo8128@stanford.edu Abstract This project concerns the topic of panorama stitching. Given a set of overlapping photos,
More informationA Comparison Between Camera Calibration Software Toolboxes
2016 International Conference on Computational Science and Computational Intelligence A Comparison Between Camera Calibration Software Toolboxes James Rothenflue, Nancy Gordillo-Herrejon, Ramazan S. Aygün
More informationPractical design and evaluation methods of omnidirectional vision sensors
Practical design and evaluation methods of omnidirectional vision sensors Akira Ohte Osamu Tsuzuki Optical Engineering 51(1), 013005 (January 2012) Practical design and evaluation methods of omnidirectional
More informationEMVA1288 compliant Interpolation Algorithm
Company: BASLER AG Germany Contact: Mrs. Eva Tischendorf E-mail: eva.tischendorf@baslerweb.com EMVA1288 compliant Interpolation Algorithm Author: Jörg Kunze Description of the innovation: Basler invented
More informationReconstructing Virtual Rooms from Panoramic Images
Reconstructing Virtual Rooms from Panoramic Images Dirk Farin, Peter H. N. de With Contact address: Dirk Farin Eindhoven University of Technology (TU/e) Embedded Systems Institute 5600 MB, Eindhoven, The
More informationLecture 2: Geometrical Optics. Geometrical Approximation. Lenses. Mirrors. Optical Systems. Images and Pupils. Aberrations.
Lecture 2: Geometrical Optics Outline 1 Geometrical Approximation 2 Lenses 3 Mirrors 4 Optical Systems 5 Images and Pupils 6 Aberrations Christoph U. Keller, Leiden Observatory, keller@strw.leidenuniv.nl
More informationParallax-Free Long Bone X-ray Image Stitching
Parallax-Free Long Bone X-ray Image Stitching Lejing Wang 1,JoergTraub 1, Simon Weidert 2, Sandro Michael Heining 2, Ekkehard Euler 2, and Nassir Navab 1 1 Chair for Computer Aided Medical Procedures (CAMP),
More informationHomographies and Mosaics
Homographies and Mosaics Jeffrey Martin (jeffrey-martin.com) with a lot of slides stolen from Steve Seitz and Rick Szeliski 15-463: Computational Photography Alexei Efros, CMU, Fall 2011 Why Mosaic? Are
More informationCameras, lenses and sensors
Cameras, lenses and sensors Marc Pollefeys COMP 256 Cameras, lenses and sensors Camera Models Pinhole Perspective Projection Affine Projection Camera with Lenses Sensing The Human Eye Reading: Chapter.
More informationHow do we see the world?
The Camera 1 How do we see the world? Let s design a camera Idea 1: put a piece of film in front of an object Do we get a reasonable image? Credit: Steve Seitz 2 Pinhole camera Idea 2: Add a barrier to
More informationDefense Technical Information Center Compilation Part Notice
UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADPO 11345 TITLE: Measurement of the Spatial Frequency Response [SFR] of Digital Still-Picture Cameras Using a Modified Slanted
More informationHomographies and Mosaics
Homographies and Mosaics Jeffrey Martin (jeffrey-martin.com) CS194: Image Manipulation & Computational Photography with a lot of slides stolen from Alexei Efros, UC Berkeley, Fall 2014 Steve Seitz and
More informationThe Mathematics of the Stewart Platform
The Mathematics of the Stewart Platform The Stewart Platform consists of 2 rigid frames connected by 6 variable length legs. The Base is considered to be the reference frame work, with orthogonal axes
More informationE X P E R I M E N T 12
E X P E R I M E N T 12 Mirrors and Lenses Produced by the Physics Staff at Collin College Copyright Collin College Physics Department. All Rights Reserved. University Physics II, Exp 12: Mirrors and Lenses
More informationRadiometric alignment and vignetting calibration
Radiometric alignment and vignetting calibration Pablo d Angelo University of Bielefeld, Technical Faculty, Applied Computer Science D-33501 Bielefeld, Germany pablo.dangelo@web.de Abstract. This paper
More informationDr F. Cuzzolin 1. September 29, 2015
P00407 Principles of Computer Vision 1 1 Department of Computing and Communication Technologies Oxford Brookes University, UK September 29, 2015 September 29, 2015 1 / 73 Outline of the Lecture 1 2 Basics
More informationThe eye & corrective lenses
Phys 102 Lecture 20 The eye & corrective lenses 1 Today we will... Apply concepts from ray optics & lenses Simple optical instruments the camera & the eye Learn about the human eye Accommodation Myopia,
More information10.1 Curves defined by parametric equations
Outline Section 1: Parametric Equations and Polar Coordinates 1.1 Curves defined by parametric equations 1.2 Calculus with Parametric Curves 1.3 Polar Coordinates 1.4 Areas and Lengths in Polar Coordinates
More informationCoded Aperture for Projector and Camera for Robust 3D measurement
Coded Aperture for Projector and Camera for Robust 3D measurement Yuuki Horita Yuuki Matugano Hiroki Morinaga Hiroshi Kawasaki Satoshi Ono Makoto Kimura Yasuo Takane Abstract General active 3D measurement
More informationSingle-view Metrology and Cameras
Single-view Metrology and Cameras 10/10/17 Computational Photography Derek Hoiem, University of Illinois Project 2 Results Incomplete list of great project pages Haohang Huang: Best presented project;
More informationImage Mosaicing. Jinxiang Chai. Source: faculty.cs.tamu.edu/jchai/cpsc641_spring10/lectures/lecture8.ppt
CSCE 641 Computer Graphics: Image Mosaicing Jinxiang Chai Source: faculty.cs.tamu.edu/jchai/cpsc641_spring10/lectures/lecture8.ppt Outline Image registration - How to break assumptions? 3D-2D registration
More informationON THE REDUCTION OF SUB-PIXEL ERROR IN IMAGE BASED DISPLACEMENT MEASUREMENT
5 XVII IMEKO World Congress Metrology in the 3 rd Millennium June 22 27, 2003, Dubrovnik, Croatia ON THE REDUCTION OF SUB-PIXEL ERROR IN IMAGE BASED DISPLACEMENT MEASUREMENT Alfredo Cigada, Remo Sala,
More information