Panoramic Vision System for an Intelligent Vehicle using a Laser Sensor and Cameras


Min Woo Park, Ph.D. Student, Graduate School of Electrical Engineering and Computer Science, Kyungpook National University, 1370 Sankyuk-dong, Buk-gu, Daegu, South Korea, mwpark@vr.knu.ac.kr

Kyung Ho Jang, Research Professor, Graduate School of Electrical Engineering and Computer Science, Kyungpook National University, 1370 Sankyuk-dong, Buk-gu, Daegu, South Korea, khjang@knu.ac.kr

Soon Ki Jung, Professor, School of Computer Science and Engineering, Kyungpook National University, 1370 Sankyuk-dong, Buk-gu, Daegu, South Korea, skjung@knu.ac.kr

ABSTRACT

We propose a multi-purpose panoramic vision system that eliminates the blind spot and informs the driver of approaching vehicles using three cameras and a laser sensor. A wide-angle camera is attached to the trunk and two cameras are attached under the side-view mirrors to eliminate the blind spot of the vehicle. A laser sensor is attached to the rear left of the vehicle to gather information about the vehicle's surroundings. First, the proposed system displays the laser sensor data in three dimensions and performs the calibration between the laser sensor data and the rear image to find road boundaries and gather information on other vehicles. The proposed system computes the homography between the rear image and the laser sensor data interactively and overlays the laser sensor data on the rear image to improve the driver's recognition. The system then generates a panoramic mosaic view to eliminate the blind spot. In a pre-processing step, the system computes the focus of contraction (FOC) in images taken from the rear view and the homography between each side-view image and the rear view. Next, the system performs the image registration after segmenting the road and background regions using the laser sensor data.
Finally, it generates various views, such as a cylindrical panorama view, a top view, a side view and an information panoramic mosaic view, for displaying varied safety information.

INTRODUCTION

BACKGROUND

Today, vehicles are widely used for transport. The area of human activity is enlarged by vehicle usage, improving the quality of life. On the other hand, however, many people are killed or injured in car accidents. Therefore, present-day vehicles include various safety equipment that helps to prevent car accidents. For example, the lane departure warning system (LDWS) informs the driver when the vehicle is drifting out of its lane. Driver state monitoring (DSM) wakes the driver with an alarm when the driver dozes off. Vehicles that include these kinds of safety devices are called intelligent vehicles [1]. Recently, the main issue in the field of intelligent vehicle design has been safety features. In particular, many assistance devices focus on reducing the driver's mistakes or eliminating the driver's blind spot. Above all, eliminating the blind spot can prevent many car accidents. The driver's blind spot is the area of the road that the driver cannot see [2, 3, 4]. Figure 1 shows the driver's blind spot and dangerous objects. As figure 1 illustrates, a driver cannot identify a motorbike that is encroaching on the blind spot. In this case, car accidents frequently happen. Therefore, to reduce the blind spot, many drivers use assisted mirrors (e.g. convex mirrors) or a wide-angle camera. However, this approach does not eliminate the blind spot of the vehicle but only reduces it. Therefore, we propose a multi-purpose panoramic vision system.

Figure 1. Driver's Blind Spot and Approaching Vehicles

RELATED WORK

Recently, expensive vehicles have included various safety systems. In particular, many drivers are interested in safety devices that eliminate the blind spot [3, 4]. Most such systems use only a wide-angle camera, which lets the driver see a wider area than the rear-view mirror.
However, it is difficult for the driver to interpret the displayed scene because the image from the wide-angle camera has severe distortion. Another approach uses several cameras with standard lenses. In this case, image registration techniques are required to cover the wide viewing area. The Ford Interim Summer 98 project of Washington University generates a panorama image to eliminate the blind spot [5]. Three CCD cameras are used to eliminate the rear blind spot, and the panorama image is generated using a mosaic algorithm. Ford's CamCar uses four cameras to generate panorama images [6]. However, these approaches only eliminate the blind spot of the rear view. The BMW concept car Z22 uses three cameras to eliminate the rear blind spot [7]. The three cameras are attached to the left, right and rear of the vehicle in place of the two side-view mirrors and the rear-view mirror. This system attempts to generate a panorama image, but it does not offer a complete panorama image. General Motors presented the Panoramic Vision™ System of Magna Donnelly, a next-generation rear-view mirror system using two cameras attached to the rear of the vehicle and four side cameras attached to the doors [8]. However, at present it is just a concept design. Nissan also offers the Around View Monitor on the Infiniti EX35 [9]. The Around View Monitor is a parking-assist system for eliminating the blind spot around the vehicle. It has four wide-angle cameras, attached to the front, rear, left and right of the vehicle and pointed toward the road. It performs only lens calibration to generate undistorted images, which are displayed in a top view when the driver parks the car. Another safety system that interests drivers is the blind spot information system (BLIS). This system protects the driver from the blind spot in a different way: it informs the driver of approaching vehicles that are in the blind spot. The BLIS has been included in high-end vehicles from Volvo and Ford [10]. In this paper, we propose a multi-purpose panoramic vision system for eliminating the blind spot and warning of approaching vehicles using two standard cameras, one wide-angle camera and a laser sensor.
In the proposed system, two standard cameras are attached under the side mirrors and one wide-angle camera is attached to the trunk. To cover a wider area, the two standard cameras are panned outward. A laser sensor is attached to the rear of the vehicle to gather varied information about the vehicle's surroundings. To enhance visualization, the input images of the three cameras are registered, so the driver can recognize objects in the blind spot from the displayed image without confusion. The system thus eliminates the blind spot, and approaching vehicles are displayed in the image. The registration algorithm is not a general solution, but the approach used in this study is suitable for eliminating the driver's blind spot. The remainder of this paper is organized as follows: Section 2 describes the proposed system. The laser sensor calibration technique is presented in Section 3. The panoramic vision generation method is presented in Section 4, along with extensive experimental results in Section 5. Finally, concluding remarks are presented in Section 6.

SYSTEM OVERVIEW

In the proposed system, two standard cameras (30-degree field of view) are attached under the side mirrors and one wide-angle camera (120-degree field of view) is attached to the trunk of the vehicle. To visualize a wider area, the two standard side-view cameras are panned outward by about 15~30 degrees. Furthermore, an LD-OEM laser sensor made by SICK is attached to the rear left of the vehicle. The laser sensor measures the distance from the vehicle to other objects. Figure 2 shows the configuration of the proposed system.

Figure 2. System Configuration and Range of Devices

The proposed system provides a multi-purpose panoramic vision system for eliminating the driver's blind spot. Its main functions are panorama image generation around the car and a warning system for vehicles approaching in the blind spot. However, a general registration method cannot be applied because the images acquired by the cameras have different centers of projection (COP). Also, if only the laser sensor data is used, the driver cannot clearly recognize safety hazards. Therefore, we present a novel method for image registration and a method to display the laser sensor data effectively. Figure 3 shows the work flow of the proposed system.

Figure 3. Work Flow of the Proposed System
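The work flow in figure 3 separates one-time pre-processing from the steps that repeat for every captured frame. A minimal structural sketch of that split is given below; the function names and placeholder values are illustrative assumptions, not part of the paper:

```python
# Hypothetical skeleton of the two-part pipeline in figure 3:
# preprocess() runs once, per_frame() runs for every frame while driving.

def preprocess():
    """Executed once at initialization: compute everything that never
    changes while driving. Strings stand in for the actual results."""
    return {
        "unwarp_map": "lens-undistortion lookup table for the rear camera",
        "H_laser": "homography: laser x-z plane -> rear image plane",
        "foc": "focus of contraction in the rear image",
        "H_sides": "homographies: side images -> rear image",
    }

def per_frame(params, rear, left, right, laser_points):
    """Executed per frame: undistort, overlay laser data, segment road vs.
    background, register the three images, render the selected views."""
    steps = ["undistort", "overlay_laser", "segment", "align", "visualize"]
    return {view: (rear, left, right, laser_points, steps)
            for view in ("mosaic", "cylindrical", "top", "side")}

params = preprocess()
views = per_frame(params, rear=None, left=None, right=None, laser_points=[])
print(sorted(views))  # ['cylindrical', 'mosaic', 'side', 'top']
```

The point of the split is the execution-time argument the paper makes repeatedly: everything expensive and static lives in `preprocess`, so the per-frame loop only applies precomputed maps and matrices.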

The proposed system consists of a laser sensor data calibration part and a panoramic mosaic view generation part. The calibration of the laser sensor data consists of a pre-processing step and a visualization step. In the pre-processing step, an undistortion map is computed to correct the lens distortion, and a homography for overlaying the laser sensor data on the rear image is computed. The pre-processing step is executed only once. The visualization step is performed while driving: for each frame, the distance data obtained from the laser sensor is transformed by the homography into the image, and the proposed system then overlays information such as approaching vehicles on the image. The panoramic mosaic view generation consists of a pre-processing step, a segmentation step, an alignment step and a visualization step. In the pre-processing step, the focus of contraction (FOC) and the homographies between the cameras are computed. This step is also executed only once, for system initialization. After the pre-processing step has finished, the other steps are executed while driving. In the segmentation step, the road area and the background area of the input image are separated using the laser sensor data and the FOC. In the alignment step, the side-view images are registered with the rear-view image, separately for the road parts and the background parts of the images. In the visualization step, various views, such as a cylindrical panorama view, a top view, a side view and an information panoramic mosaic view, are generated for visualizing varied safety information, along with a panoramic mosaic view that includes the positions of approaching vehicles.

LASER SENSOR DATA CALIBRATION

The laser sensor measures the distance from the driver's vehicle to other objects. However, the laser sensor data is not easy for a driver to interpret.
Therefore, the laser sensor data is overlaid on the rear image using a homography, because the laser sensor data is not synchronized with the image data at the hardware level. Other information, such as the positions of approaching vehicles or the boundary line of the road, is also augmented on the images to improve the driver's recognition.

PRE-PROCESSING STEP

In this step, an unwarping map is generated for the correction of the lens distortion. Then a homography between the rear image and the laser sensor data is computed for registering the laser sensor data on the rear image. Finally, the homography is adjusted through user interaction to obtain a more accurate homography matrix.

Calibration of lens distortion

To eliminate the rear blind spot, a wide-angle camera with a 120-degree field of view is used. The wide-angle lens has severe radial and tangential distortion. Therefore, the distortion of the rear image is corrected using the distortion coefficients and intrinsic parameters [11, 12]. However, the distortion removal process takes a long execution time, so the unwarping map is computed only once, in the pre-processing step, and is then applied to the rear images while driving.

Homography

The laser sensor data gives only the distance from the vehicle to other objects. This data is used internally by other functions of an intelligent vehicle, and it is also useful for removing the blind spots. However, laser sensor data that is simply displayed is not easily understood by the driver. Therefore, the proposed system overlays the laser sensor data on the rear image for faster recognition of hazards in the blind spots. For the alignment between the rear image and the laser sensor data, a homography matrix is used. A homography is a transform matrix between two-dimensional planes [13]. Because the laser sensor data lies in the x-z plane and the rear image data lies in the x-y plane, the two types of data can be aligned using a homography, which is computed in advance. Computing the homography requires four or more matched features between the laser sensor data and the rear image data. Therefore, we select some features in the laser sensor data and obtain their corresponding features in the rear image. We extract an inlier feature set from the feature correspondences using the RANSAC algorithm [14] and compute the homography from the inlier set. The homography is computed using the following equation:

p' = Hp, i.e. (wu', wv', w)^T = H (u, v, 1)^T with H = [h11 h12 h13; h21 h22 h23; h31 h32 h33], (1)

where p = (u, v, 1)^T and p' = (u', v', 1)^T are corresponding feature points in homogeneous coordinates and H is the homography matrix.

Interactive adjustment of the homography

After computing the homography, we refine it using user interaction. First, we overlay the laser sensor data on the rear image using the current homography.
After that, we adjust the feature points of the laser sensor data interactively until the correct homography is obtained. Whenever we adjust the feature positions of the laser sensor data, the proposed system updates the overlaid image using the refined homography. In this way, we obtain a more accurate homography for the overlay between the rear image and the laser sensor data. Figure 4 shows how a more accurate homography is obtained by interactively adjusting the red points, which are the matched features used to compute the homography.
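The homography estimation described above (four or more correspondences, RANSAC inlier selection, then a fit as in equation (1)) can be sketched in a self-contained way. The implementation below is a minimal direct-linear-transform version with a basic RANSAC loop; the iteration count and pixel threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def dlt_homography(p, q):
    """Least-squares homography H with q ~ H p (equation (1)), via the
    direct linear transform; p, q are (N, 2) arrays of matched points."""
    rows = []
    for (u, v), (up, vp) in zip(p, q):
        rows.append([-u, -v, -1, 0, 0, 0, up * u, up * v, up])
        rows.append([0, 0, 0, -u, -v, -1, vp * u, vp * v, vp])
    _, _, vt = np.linalg.svd(np.array(rows))
    H = vt[-1].reshape(3, 3)          # null vector = homography entries
    return H / H[2, 2]

def ransac_homography(p, q, iters=200, thresh=2.0, seed=0):
    """Keep the 4-point model supported by the most correspondences
    (reprojection error below `thresh`), then refit on all its inliers."""
    rng = np.random.default_rng(seed)
    ph = np.hstack([p, np.ones((len(p), 1))])
    best = None
    for _ in range(iters):
        idx = rng.choice(len(p), 4, replace=False)
        H = dlt_homography(p[idx], q[idx])
        proj = ph @ H.T
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = proj[:, :2] / proj[:, 2:3]
            err = np.linalg.norm(proj - q, axis=1)
        inliers = err < thresh        # NaN/inf compare as False
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return dlt_homography(p[best], q[best]), best
```

Given laser-plane features as `p` and their picked rear-image positions as `q`, `ransac_homography(p, q)` returns both the matrix and the inlier mask, so the interactive-adjustment stage can redisplay which red points currently support the model.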

Figure 4. Interactive Procedure to Compute the Homography

VISUALIZATION STEP

In this step, the visualization is performed while driving by overlaying the laser sensor data. First, the laser sensor data is converted into 2D point data; the 2D points are then overlaid on the rear image.

Display of laser sensor data

The laser sensor data consists of distances and angles captured by the LD-OEM sensor made by SICK. Drivers cannot interpret this raw data, which consists of hexadecimal codes. Therefore, the laser sensor data is displayed on the x-y plane in three-dimensional space using OpenGL.

Visualization

Simply plotting the 2D laser sensor data does not allow the driver to interpret it quickly. Therefore, the proposed system overlays the laser sensor data on the rear image using the refined homography while driving. The overlaid laser sensor data supports several subsidiary functions. The main function is warning about hazardous objects in the blind spots: the proposed system warns the driver about fast-approaching vehicles. This function is called a blind spot information system (BLIS) [10]. Most BLISs give their warning with a sound; the proposed system instead displays fast-approaching hazardous vehicles, together with their distance data, in the image. This improves the driver's ability to recognize hazards in the blind spots. The road region and the background region are segmented in the rear image using the overlaid laser sensor data; the segmented road and background data can then be used for registration between the rear image and the two side images.

PANORAMIC MOSAIC VIEW GENERATION

The two side mirrors and one rear-view mirror attached to a vehicle do not eliminate the blind spot completely. Therefore, the proposed system attaches three cameras to the rear and sides of the vehicle, which eliminate the driver's blind spots. However, the

separated images captured by the three cameras require an effective visualization that the driver can understand easily. Therefore, the proposed system registers the three images. In this case, a general panorama mosaic method cannot be applied because the centers of projection of the three cameras are not the same [15]. Therefore, we present a novel image registration method. The stitched panoramic mosaic image enhances the driver's ability to recognize what is in the blind spots.

PRE-PROCESSING STEP

In this step, the focus of contraction in the rear image is computed first. Then, homographies between the rear image and the two side images are computed. Because the pre-processing step is executed only once, it reduces the execution time of the proposed system.

Focus of contraction

Generally, when a camera moves forward or backward, the optical flow in the captured image expands from, or converges to, a single point. This point is called the focus of expansion (FOE) or the focus of contraction (FOC) [16].

Figure 5. Optical Flow and FOC on the Rear Image

In the proposed system the vehicle moves forward, so the rear camera moves backward. Therefore, the focus of contraction (FOC) is computed using the following equation:

x'^T F x = 0,  F e = 0,  (2)

where F is the fundamental matrix, e is the epipole, and x and x' are a pair of corresponding points. For computing the FOC, feature extraction and matching between rear images is performed using the KLT algorithm [17]. The fundamental matrix is computed from the corresponding points using the eight-point algorithm [13] and RANSAC [14]. Then the epipole is computed using equation (2); in the proposed system, this epipole is the FOC.

Homography between images

Here, homographies between the rear image and the two side images are computed using

the equation (1). Computing the homography between two images requires a set of 2D corresponding points on a 3D plane, so the proposed system assumes that the road is the base plane for registration. Figure 6 shows the geometry between time t and time t−Δt, where Δt is the time the car needs to move from the position of the rear camera to the position of the two side cameras. The panorama image is ultimately generated from the images at time t. However, the homographies (H_r, H_l) cannot be computed directly between the rear image at time t and the two side images at time t because of the scale difference. Therefore, the homographies (H_r,b, H_l,b) are computed between the rear image at time t and the two side images at time t−Δt, together with the homography (H_t,t−Δt) between the rear image at time t and the rear image at time t−Δt, to reduce the effect of the scale difference. The final homographies are computed using the following equations:

H_r = (H_t,t−Δt)^(-1) H_r,b,  H_l = (H_t,t−Δt)^(-1) H_l,b.  (3)

Figure 6. Geometry between time t and time t−Δt

SEGMENTATION STEP

The road and the background areas in the image are segmented using the laser sensor data. First, a line model is fitted with the RANSAC algorithm [14] and the inlier points near the line in the laser sensor data are extracted. Next, the inlier points are transformed by the homography from the laser sensor coordinates to the rear image coordinates. From the transformed inlier set, line models passing through the FOC are extracted; these line models are the road boundaries in the rear image. The road boundaries are then transformed into the two side images using the homographies (H_r, H_l) between the rear image and the side images. The road region and the background region are thus segmented using the road boundaries in the images.

ALIGNMENT STEP

The proposed system cannot apply a general mosaic method because the images do not have a common center of projection.
Therefore, the proposed system performs the registration separately for the road region and the background region. In the registration of the road regions, the homographies (H_r, H_l) are applied to the road regions of the images at time t. In the registration of the background regions, simple homographies cannot be applied because the background does not lie on a common plane. Therefore, the background regions of the side images are assumed to be modeled as shown in figure 7.

Figure 7. Alignment Model

VISUALIZATION STEP

In this step, the driver's blind spots are displayed for various situations. The proposed system displays them with various views, such as the cylindrical panorama view, the top view, the side view and the information panoramic mosaic view, for visualizing varied safety information.

Information panoramic mosaic view

This view is the basic display view. When the vehicle is travelling at over 10 km/h, the information view displays the panoramic mosaic view together with the state of the vehicle, such as its speed or the distance to the nearest approaching vehicle.

Cylindrical panoramic view

Another view is the cylindrical panoramic view, which is made using a cylindrical projection [18]. This view is also used when the vehicle is travelling at over 10 km/h. It presents a complete panorama image to the driver while driving.

Side view

The side view is the extended image based on the images from each side camera. When the driver uses the turn signal to change lanes, this view is displayed to prevent a collision with an approaching vehicle.

Top view

The last view type is the top view, which is made using a simple image warping [19]. The top view is a bird's-eye view that displays the blind spot as if the driver were seeing it from the sky. In this case, the driver can easily distinguish obstacles around the vehicle. This view

is used when the vehicle is travelling below 10 km/h or is reversing for parking. It resembles Nissan's Around View Monitor but is fundamentally different [9]. The Around View Monitor uses four wide-angle cameras, placed around the vehicle and pointed toward the road, to eliminate the blind spots during parking. The proposed system, by contrast, does not change the positions of the attached cameras to generate the top view; it generates the top view using only a three-dimensional image warping technique.

EXPERIMENTAL RESULTS

In all the experiments, three cameras and a laser sensor were mounted on a real vehicle, and data from each device was captured at 70~80 km/h on a real road. First, the proposed system captures the distances of objects around the vehicle using the LD-OEM laser sensor made by SICK. Then, it corrects the distortion of the rear image and computes the homography between the laser sensor data and the rear image.

Figure 8. Visualization of overlapped laser sensor data

Figure 8 illustrates the result of the laser sensor calibration. The left image in the figure shows the visualization of the raw laser sensor data, and the right image shows the laser sensor data overlaid, with the nearest approaching vehicle (red rectangle), on an undistorted rear image. Next, the proposed system generates a panoramic mosaic view from the overlaid rear image and the two side images to eliminate the blind spots. First, the proposed system computes the FOC and the homographies between the rear camera and each side camera in the pre-processing step. Then, it segments the road and background regions using the laser sensor data and the RANSAC algorithm. After the segmentation is completed, the registration is performed with the 3D alignment model. The various panoramic mosaic views are used according to the driver's intention. Figure 9 shows the information panoramic mosaic view, which displays information about the vehicle.
Figure 10 shows the proposed system warning of an approaching vehicle.
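The distortion-correction step above precomputes an unwarping map once in pre-processing and then applies it to every frame while driving. A minimal sketch of that idea, using a simple one-coefficient radial model with illustrative intrinsics (the paper's actual calibration values are not given), might look like:

```python
import numpy as np

def build_unwarp_map(w, h, fx, fy, cx, cy, k1):
    """Precompute, once, the source-pixel lookup table that removes a
    one-coefficient radial distortion (r_d = r * (1 + k1 * r^2))."""
    ys, xs = np.mgrid[0:h, 0:w]
    xn = (xs - cx) / fx               # normalized undistorted coordinates
    yn = (ys - cy) / fy
    r2 = xn * xn + yn * yn
    # where each undistorted pixel must be sampled from in the raw image
    src_x = fx * xn * (1.0 + k1 * r2) + cx
    src_y = fy * yn * (1.0 + k1 * r2) + cy
    return (np.clip(np.rint(src_y), 0, h - 1).astype(int),
            np.clip(np.rint(src_x), 0, w - 1).astype(int))

def unwarp(image, unwarp_map):
    """Per-frame step: one fancy-indexing lookup (nearest neighbour)."""
    my, mx = unwarp_map
    return image[my, mx]

# the map is built once in pre-processing ...
umap = build_unwarp_map(w=320, h=240, fx=200.0, fy=200.0,
                        cx=160.0, cy=120.0, k1=-0.2)
# ... and reused for every captured rear frame while driving
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
undistorted = unwarp(frame, umap)
```

A production system would use the full radial-plus-tangential model with calibrated coefficients and interpolated sampling; the structure, however, is the same: all the trigonometry lives in the one-time map construction, and the per-frame cost is a single table lookup.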

Figure 9. Information Panoramic Mosaic View

Figure 10. Information Panoramic Mosaic View (with an approaching vehicle)

Figure 11 shows another view, made using the cylindrical projection; this view is the most natural for the driver. Figure 12 shows the side view with a warning about an approaching vehicle. Figure 13 shows the top view, as if seen from the sky; it is effective when the vehicle is travelling below 10 km/h or is reversing for parking.

Figure 11. Cylindrical Panorama View
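Cylindrical panorama views like the one in figure 11 are produced by reprojecting a planar image onto a cylinder before stitching. A minimal sketch of that warp (inverse mapping with an assumed focal length and nearest-neighbour sampling, chosen for brevity rather than taken from the paper) could be:

```python
import numpy as np

def cylindrical_warp(image, f):
    """Map a pinhole image onto a cylinder of radius f: for every
    cylindrical pixel, look up the corresponding planar source pixel."""
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    theta = (xs - cx) / f             # angle around the cylinder axis
    hgt = (ys - cy) / f               # height on the cylinder surface
    # back-project each cylinder point onto the original image plane
    src_x = f * np.tan(theta) + cx
    src_y = f * hgt / np.cos(theta) + cy
    out = np.zeros_like(image)
    valid = ((src_x >= 0) & (src_x <= w - 1) &
             (src_y >= 0) & (src_y <= h - 1))
    sx = np.clip(np.rint(src_x), 0, w - 1).astype(int)
    sy = np.clip(np.rint(src_y), 0, h - 1).astype(int)
    out[valid] = image[sy[valid], sx[valid]]
    return out
```

Because the warp is a pure function of image size and focal length, its lookup tables can also be precomputed once, matching the one-time pre-processing pattern the system uses elsewhere.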

Figure 12. Side View

Figure 13. Top View

CONCLUSION

In this paper, we propose a multi-purpose panoramic vision system for eliminating the blind spots that are a major cause of car accidents. The proposed system mounts two cameras, one on each side of the vehicle, a wide-angle camera at the rear of the vehicle and a laser sensor at the rear left of the vehicle to eliminate the blind spots perfectly, and it provides various types of views, depending on the driver's intention, for eliminating the blind spots. The proposed system consists of two subparts: the laser sensor data calibration part and the panoramic mosaic view generation part. In the laser sensor data calibration part, the unwarping map for correcting the lens distortion and the homography for overlaying the laser sensor data on the rear image are computed in the pre-processing step, and other information, such as approaching vehicles, is displayed in the image while driving. In the panoramic mosaic view generation part, the FOC and the homographies between the cameras are computed in pre-processing. The images are segmented by the road boundary into the road region and the background region while driving. Then the registration is performed on the images of the three cameras, and the visualization of the various view types is provided together with the laser sensor data.

The proposed system has the advantage of visualizing the blind spot, but some problems remain. The registration is performed separately for the road regions and the background regions, but this method is not exact for the background: because of the background registration, the registered image shows slight distortion of approaching vehicles. Therefore, future studies will focus on a novel registration method based on multi-perspective panoramic image generation. Also, the experiment could not be conducted in bad weather, such as on a snowy or rainy day, but the proposed system should operate well as long as the image quality is guaranteed. Future research is needed to compensate for the current disadvantages.

REFERENCES

(1) Intelligent Vehicle Technologies. nologies
(2) Blind Spot. d-spots.cfm
(3) Blind Spot (vehicle).
(4) R. Andrew Hicks and Ronald K. Perline, "Blind-spot Problem for Motor Vehicles", Applied Optics, Vol. 44, No. 19, 2005
(5) Ford Interim Summer 98.
(6) Ford's CamCar Technology Eliminates Blind Spots. news/fulls-tory.asp?id=409
(7) The Mechatronic Car - Operating the Z22. /z22.htm
(8) PanoramicVision™ System. /panoramicvisionsystem.asp
(9) NISSAN - Around View Monitor. INTRODUCTION/DETAILS/AVM/
(10) Volvo invents BLIS. spot-info-system-actual-happiness/
(11) S. Nayar, "Catadioptric Omnidirectional Cameras", in Proceedings of the 1997 Conference on Computer Vision and Pattern Recognition (USA, 1997), IEEE Computer Society
(12) Camera Calibration and 3D Reconstruction. mentation/camera_calibration_and_3d_reconstruction.html
(13) R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, USA, 2004
(14) M. Fischler and R. Bolles, "Random Sample Consensus: a Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography", Communications of the ACM, ACM New York, USA, June 1981, Vol. 24, Issue 6

(15) R. Szeliski, Image Alignment and Stitching: a Tutorial, Technical Report MSR-TR, Microsoft Research, USA, December 2004
(16) Didi Sazbon, Hector Rotstein and Ehud Rivlin, "Finding the Focus of Expansion and Estimating Range using Optical Flow Images and a Matched Filter", Machine Vision and Applications, Springer Berlin, Germany, October 2004, Volume 15, Issue 4
(17) Carlo Tomasi and Takeo Kanade, Shape and Motion from Image Streams: a Factorization Method, Technical Report CMU-CS, Carnegie Mellon University, PA, USA, Jan 1991
(18) Kyung Ho Jang, Soon Ki Jung and Minho Lee, "Constructing Cylindrical Panoramic Image using Equidistant Matching", Electronics Letters, September 1999, Volume 34, Issue 20
(19) William R. Mark, Leonard McMillan and Gary Bishop, "Post-rendering 3D Warping", in Proceedings of the 1997 Symposium on Interactive 3D Graphics (USA, 1997), ACM New York


More information

Catadioptric Stereo For Robot Localization

Catadioptric Stereo For Robot Localization Catadioptric Stereo For Robot Localization Adam Bickett CSE 252C Project University of California, San Diego Abstract Stereo rigs are indispensable in real world 3D localization and reconstruction, yet

More information

Homographies and Mosaics

Homographies and Mosaics Homographies and Mosaics Jeffrey Martin (jeffrey-martin.com) CS194: Image Manipulation & Computational Photography with a lot of slides stolen from Alexei Efros, UC Berkeley, Fall 2014 Steve Seitz and

More information

FLASH LiDAR KEY BENEFITS

FLASH LiDAR KEY BENEFITS In 2013, 1.2 million people died in vehicle accidents. That is one death every 25 seconds. Some of these lives could have been saved with vehicles that have a better understanding of the world around them

More information

Midterm Examination CS 534: Computational Photography

Midterm Examination CS 534: Computational Photography Midterm Examination CS 534: Computational Photography November 3, 2015 NAME: SOLUTIONS Problem Score Max Score 1 8 2 8 3 9 4 4 5 3 6 4 7 6 8 13 9 7 10 4 11 7 12 10 13 9 14 8 Total 100 1 1. [8] What are

More information
