Challenges and Solutions for Bundling Multiple DAS Applications on a Single Hardware Platform


Gideon P. Stein, Itay Gat, Gaby Hayon
Mobileye Vision Technologies Ltd., Jerusalem, Israel
gideon.stein@mobileye.com

Abstract: This paper addresses the key challenges in bundling multiple camera-based DAS applications onto the same hardware platform. In particular, we discuss combinations of lane departure warning (LDW), automatic high-beam control (AHC), traffic sign recognition (TSR) and forward collision warning (FCW). The advantages of bundling are cost reduction and the ability to add more functions to the car without increasing the footprint on the car windshield. The challenge in bundling is that the different applications traditionally have different requirements from the image sensor and optics. We show how the algorithms can be modified so that they can all work together by relying less on the particular physics of the camera and making more use of advanced pattern recognition techniques. This shift in algorithm paradigm means an increase in computational requirements. The introduction of new automotive-qualified, high-performance vision processors makes these new algorithms both viable and affordable, paving the way to bundles of applications running on the same platform.

Keywords: Driver assistance systems, camera, vision

1 Introduction

In the last few years we have started to see camera-based driver assistance systems (DAS) entering the market: lane departure warning (LDW), automatic high-beam control (AHC), traffic sign recognition (TSR) and forward collision warning (FCW), to name a few. The key challenge now, apart from expanding the performance envelope of each function, is the bundling of multiple applications on the same hardware platform. The advantage of bundling is cost reduction but, more importantly, it allows more functions to be added to the car without increasing the footprint on the car windshield.
The difficulty in bundling is that the different applications traditionally have different requirements from the image sensor and optics. For example, traditional AHC makes significant use of color information and thus requires a color sensor, while lane detection and traffic sign recognition require the extra sensitivity of a monochrome sensor in low light conditions. To overcome these conflicts, new algorithms need to be developed which put more of the burden on computation and less on the specific physics of the sensor. In particular, we introduce a new AHC algorithm that uses a higher resolution image sensor and requires only very weak color information. This allows the same sensor to be naturally shared with the LDW, TSR and vehicle detection applications, where mostly shape and pattern recognition come into play, with some color information giving a more measured boost in performance. This shift in algorithm paradigm, however, means an increase in computational requirements. The introduction of new automotive-qualified, high-performance vision processors makes these new algorithms both viable and affordable, paving the way to bundles of applications running on the same platform.

To follow the design process faithfully, this paper would first describe the four basic applications (LDW, FCW, TSR and AHC) and the lens and imager requirements of each individual application. It would then show how the requirement spaces intersect, leading to possible solutions that fulfill all the requirements. However, it is often easier to work from a concrete example, so this paper reverses the process: the next section (section 2) describes a particular camera (imager and lens) to be used in the discussion. Section 3 describes each application in detail and explains why this camera configuration is acceptable. The major change was in AHC, where a totally new algorithm was developed to work with this camera (section 3.4).
Section 4 describes the camera exposure control concept, which is based on multiple exposures. We conclude with section 5, where we review the Mobileye EyeQ chip family, which was developed to support multiple DAS applications. Due to space and time constraints, this paper focuses on the nighttime performance of the application bundle. It is the more challenging case since it must include AHC.

2 The Camera Solution

2.1 The Sensor

The example sensor we use for discussion is a wide-VGA sensor. This is the highest resolution sensor available today in an automotive qualified part whose specifications are publicly known. The sensor has a 6µm x 6µm square pixel. It has a global shutter and can switch exposure and gain settings every frame. This is key for the ability to use the multi-exposure approach introduced in section 2.3 and described in more detail in section 4. One unusual aspect of the proposed sensor is that it has only red and clear pixels: the red pixels are arranged as they would be on a standard Bayer pattern, but the green and blue filters are missing. This is done in order to combine most of the advantages of a monochrome sensor, in terms of low light sensitivity and imager resolution, with some color information; in particular, the ability to distinguish between red and white lights.

2.2 The Lens

The lens example discussed is a 5.7mm lens with a low F-number (for example the Marshall 5.7mm F1.6). Some form of IR cutoff filter is required in order to give a consistent response with a variety of windshields, whose transmission characteristics in the IR spectrum can vary widely. The filter selected is an IR cutoff filter with the 50% cutoff set at 700nm rather than the typical 650nm. This filter lets in more light and improves the low light performance in detection of lanes and traffic signs, under halogen headlights in particular, and also increases the detection range for red taillights. The sensor/lens configuration gives a FOV of 0.06° per pixel, or 45° x 29° over the whole sensor. However, due to assembly tolerances, only part of the pixel array is assumed usable, giving an effective FOV of 39° x 24°. Once the sensor and lens have been fixed it is often convenient to specify the lens focal length (F) in pixels.
In this case:

F = 950 pixels (1)

Camera Mounting in the Vehicle

In this example the camera orientation is centered on the horizon. This is a compromise: TSR might prefer a small upward tilt for better detection of overhead signs, but that would be unacceptable for LDW. The camera height is 1.2m.

2.3 Multiple Exposures

No single gain/exposure setting works well for all applications and, in fact, some applications (such as AHC and TSR) require more than one gain/exposure setting on their own. The solution is to sequence through a set of gain/exposure settings: three frames are captured in sequence with a different gain/exposure in each, and a complete set of three frames is captured in 66msec. See section 4 for details.

3 Review of the Individual DAS Applications

This section reviews the basic performance requirements of the four DAS applications and the imager requirements that are derived from the desired performance. It then shows how the suggested camera manages to match these requirements.

3.1 Lane Departure Warning

The lane departure warning system is designed to give a warning in the case of unintentional lane departure [1]. The warning is given when the vehicle crosses or is about to cross the lane marker. Driver intention is determined based on the use of turn signals, change in steering wheel angle, vehicle speed and brake activation. There are various LDW systems available. The Mobileye algorithm is predictive: it computes the time to lane crossing (TLC) based on the change in lane-to-wheel distance and warns when the time to lane crossing is below a certain threshold. Other algorithms give a warning if the wheel is inside a certain zone around the lane marker. In either case, the core of the algorithm is lane detection. The lane markers are detected in the camera image and then, given the known camera geometry and its location relative to the vehicle, the position of the vehicle relative to the lane is computed.
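The predictive TLC logic described above can be sketched in a few lines. This is a minimal sketch with hypothetical distances and rates; the production system also fuses measurements over time and gates on the driver-intent signals listed above.

```python
# Sketch of predictive lane-departure warning via time-to-lane-crossing (TLC).
# Values are hypothetical; the real system filters these estimates over time.

def time_to_lane_crossing(lateral_dist_m, lateral_vel_mps):
    """Time until the wheel reaches the lane marker.

    lateral_dist_m: current wheel-to-marker distance (positive inside the lane)
    lateral_vel_mps: rate at which that distance shrinks (positive = drifting out)
    """
    if lateral_vel_mps <= 0:          # not drifting toward the marker
        return float("inf")
    return lateral_dist_m / lateral_vel_mps

def should_warn(lateral_dist_m, lateral_vel_mps, tlc_threshold_s=0.5):
    """Warn when the predicted time to lane crossing drops below a threshold."""
    return time_to_lane_crossing(lateral_dist_m, lateral_vel_mps) < tlc_threshold_s

print(time_to_lane_crossing(0.30, 0.5))   # 0.6 s until crossing
print(should_warn(0.30, 0.5))             # no warning at a 0.5 s threshold
```

The 0.5 s threshold here is an illustrative assumption, not a value from the text.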
This information is then collected over time, often using a Kalman filter. Wheel-to-lane-marker distance must be given with an accuracy better than ±5cm. With a forward-looking camera this distance is not observed directly but is extrapolated from the forward view. The closer we can observe the road markings, the less we have to extrapolate and the more accurate our estimates will be, especially on curves. Due to the car hood and the location of the camera, the road is seldom visible closer than 6m in front of the wheels; in some cars, with longer hoods, this distance is even greater. The camera orientation and FOV are also contributing factors. In order to see the road 6m in front of the vehicle using a camera mounted at a height of 1.2m,

the camera vertical FOV must extend

α = arctan(1.2/6) = 11.3° (2)

below the horizon. In our case the camera vertical FOV below the horizon is 12°, so the closest visible point on the road is at about 5.7m. This means the limiting factor is most likely to be the car hood, not the camera specifications. The lane departure warning system must work on sharp curves (with radius down to 125m). With a horizontal FOV of 39°, the inner lane markers will still be visible on such curves. The output of the lane detection module can also be used to support other applications. For example, it is used for detecting the current in-path vehicle (CIPV) for headway monitoring and FCW. In order to correctly perform this lane assignment on curves we need to detect lane markings at 50m and beyond. With the proposed camera, a lane mark of width 0.1m will be just under two pixels wide at 50m and can be detected accurately. The expectation from the system is greater than 99% availability when lane markings are visible. This is particularly challenging in low light conditions when the lane markings are not freshly painted (and thus have low contrast with the road) and the only light source is the car's halogen headlights. In these conditions the lane markings are only visible using the higher sensitivity of the clear pixels (i.e. using a monochrome sensor or the proposed red/clear sensor). Note: with the more powerful Xenon HID headlights it is possible to use a standard RGB sensor in most low light conditions.

3.2 Forward Collision Warning

The core technology behind forward collision warning (FCW) and headway distance monitoring is vehicle detection. To reliably detect vehicles in a single image, the system currently requires that a vehicle be at least 13 pixels wide. For a car of width 1.6m, the proposed camera gives initial detection at 115m and multi-frame confirmation at 100m. A narrower FOV would give a greater detection range, but it would reduce the ability to detect passing and cutting-in vehicles.
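The geometry above can be checked in a few lines. All the input values (5.7mm lens, 6µm pixels, 1.2m camera height, 12° of FOV below the horizon) come from the text; this is only a numerical sanity check.

```python
import math

# Check the example camera geometry (values from the text).
F_px = 5.7e-3 / 6e-6                          # eq. (1): focal length in pixels

# Eq. (2): angle below the horizon needed to see the road 6 m ahead
alpha_deg = math.degrees(math.atan(1.2 / 6.0))

# Closest visible road point with 12 degrees of FOV below the horizon
closest_m = 1.2 / math.tan(math.radians(12.0))

# Image width of a 0.1 m lane mark at 50 m range
lane_mark_px = F_px * 0.1 / 50.0

print(round(F_px), round(alpha_deg, 1), round(closest_m, 2), round(lane_mark_px, 1))
```

The computed closest visible point is 5.65m, which the text rounds to 5.7m.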
The FOV of around 40° was found to be almost optimal given the sensor resolution and dimensions. A key component of the FCW algorithm [2] is the estimation of distance from a single camera, as in [3], and the estimation of scale change, from which one can determine the time to contact. Computing either value from a Bayer pattern mosaic would be difficult since no row or column is of uniform color. However, since the red/clear sensor has every second row and column of uniform color (in this case clear), it is possible to find and track vertical and horizontal edges accurately.

3.3 Traffic Sign Recognition

The TSR module is designed to detect all speed limit signs and end-of-speed-limit signs on highways, country roads and in urban settings. In addition, iconic supplemental signs (e.g. rain, on exit (arrow), trailer) must be detected and identified. The signs can be of the regular (fixed) type at the side of the road or electronic overhead signs. Partially occluded, slightly twisted and rotated signs must also be detected. It is important that the system ignore the following: signs on trucks/buses, exit road numbers, minimum speed signs and embedded signs. The TSR module, which focuses on speed limit signs, does not have a specific detection range requirement: the signs only need to be detected before they leave the image. The most difficult case is to detect a 0.8m diameter sign on the side of the road when the vehicle is driving in the center lane of a three-lane highway. In this case the lateral distance between the sign and the vehicle centerline can reach 10m. In the above case, the sign will leave the edge of the image when its image width is:

w = (0.8 × 325)/10 ≈ 26 pixels (3)

where 325 pixels is half the effective image width. This translates to detection at a distance of:

Z = FW/w = (950 × 0.8)/26 ≈ 29m. (4)

Within the time of one frame the car can travel 7m. Therefore, in the worst case, the sign is last seen completely in the image at a distance of 36m, when it is 21 pixels in diameter. An example is shown in figure 1c.
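The same pinhole model gives both the FCW and TSR ranges quoted above. A short sketch follows; the 325-pixel half-width is derived from the 39° effective FOV at 0.06° per pixel, and the computed FCW range of roughly 117m corresponds to the 115m figure quoted in the text.

```python
# Pinhole-projection sketch for the detection-range figures in the text.
F_PX = 950.0                          # focal length in pixels (eq. 1)

def image_width_px(width_m, dist_m):
    """Projected image width of an object under a simple pinhole model."""
    return F_PX * width_m / dist_m

def distance_m(width_m, width_px):
    """Eq. (4): Z = F * W / w."""
    return F_PX * width_m / width_px

# TSR: a 0.8 m sign at 10 m lateral offset reaches the image edge
# (325 px half-width) at this longitudinal distance:
z_edge = F_PX * 10.0 / 325.0          # ~29 m
w_edge = image_width_px(0.8, z_edge)  # eq. (3): ~26 pixels

# FCW: a 1.6 m wide car needs at least 13 px for single-image detection
z_fcw = distance_m(1.6, 13)           # ~117 m (the text quotes 115 m)

print(round(z_edge, 1), round(w_edge), round(z_fcw))
```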
This presents a challenge in terms of imager resolution. The digits on a speed sign are confined to a central region that is half the diameter of the sign, or about 11 pixels in this worst case. For speed signs, which can include up to three digits, this is clearly pushing the limits of OCR technology [4]. Super-resolution techniques using prior models have been suggested [5] to improve the image resolution. We follow that approach and implicitly apply constraints in the training data due to the limited class of objects that must be identified. Tests show that reliable detection is possible with a red/clear sensor for signs down to 18 pixels in outer diameter (figure 1b). It is clear, however, that monochrome imagers will perform significantly better than Bayer pattern color sensors with the same number of pixels, since Bayer mosaics have only 50% of the pixels of uniform response. The red/clear sensor is a compromise, where 75% of the pixels are of uniform response. The impact of using a red/clear sensor was found to be minimal when compared to a full monochrome imager: recognition rates by the classifier dropped by only a fraction of one percent when, in tests, the value

for one out of four pixels of a monochrome sensor was replaced by the interpolation of its eight-way connected neighbors.

Figure 1: (a) A 0.8m diameter sign first recognized at 50m, when it is 15 pixels in the image. (b) Robust recognition at 40m, when it is 19 pixels in diameter. (c) The sign leaves the image at 36m, when it is 21 pixels in diameter. (d) An example of an electronic overhead sign.

Another significant reason why monochrome and red/clear imagers are better than RGB imagers is improved low light performance. At night there is a limit on the allowable camera exposure so as to avoid motion blur, yet the sign must also be bright enough to be readable. This topic is described in more detail in the context of multiple exposures (section 4).

Vehicle Detection Support for TSR

Having multiple applications running on the same ECU does not just create problems; it sometimes provides solutions. A particularly difficult problem in TSR is posed by the speed limit signs on the backs of trucks and buses. They look like regular speed signs at a distance and, since the relative speed to such a sign is small, their image size also increases slowly, like that of a distant sign. So TSR by itself could not solve the problem. The solution was found by using the vehicle detection module: a traffic sign that was detected on the back of a vehicle was considered invalid.

Figure 2: (a) The speed limit sign on the truck is recognized but ignored (marked in green). The circular shape of a tree behind a truck is also an initial candidate. (b) All the speed limit signs on the truck are ignored. The 90 KMH speed limit sign is recognized (marked in blue).
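The degradation test reported above (replacing one of every four monochrome pixel values with an interpolation of its eight-way connected neighbors) can be sketched as a simulation. The mean-of-neighbors interpolation here is an assumption; the paper does not specify the exact interpolation used in its tests.

```python
import numpy as np

# Emulate a red/clear sensor on a monochrome image: replace one pixel in each
# 2x2 block (the "red" position on a Bayer-like grid) with the mean of its
# eight-way connected neighbors. Border pixels are left untouched.
def emulate_red_clear(mono):
    out = mono.astype(float).copy()
    h, w = mono.shape
    for y in range(1, h - 1, 2):
        for x in range(1, w - 1, 2):
            patch = mono[y - 1:y + 2, x - 1:x + 2].astype(float)
            out[y, x] = (patch.sum() - patch[1, 1]) / 8.0
    return out

img = np.arange(36, dtype=np.uint8).reshape(6, 6)   # a smooth intensity ramp
sim = emulate_red_clear(img)
# On a smooth ramp the eight-neighbor mean reproduces the center exactly,
# which is consistent with the small (<1%) classifier drop reported above.
print(np.abs(sim - img).max())
```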
An example can be seen in figure 2.

3.4 Automatic High-beam Control

A typical automatic high-beam control (AHC) system detects the following conditions and switches to low beams:

- Headlights of oncoming vehicles
- Taillights of preceding vehicles
- Street lights or ambient light indicating that high beams are not required
- Vehicle speed

The host vehicle lights are switched back to high beams when none of these conditions exist (often after a specific grace period). One of the key challenges is to avoid false positive detections on reflections from street signs and reflectors and on stationary lights on distant buildings. One approach [6] is to compare images from two sensors: one with a red filter and the second with a cyan filter. The latter will respond only to non-red light sources and will give zero response to red light. By comparing corresponding pixels from the two imagers one

can detect the color of the light source. The number of pixels of each color above a certain intensity is counted and, if the count is above a threshold, the system switches to low beams. The use of low resolution imagers precludes the use of this system for any other application. A second approach [7] uses an RGB sensor to give better color differentiation, and typical light sources can be located in the full CIE color space. This approach can tell the difference between green, yellow and red lights, so a powerful green traffic light will not be mistaken for an oncoming vehicle. Since a single sensor with a Bayer pattern mosaic is used, the lens is defocused so as to spread a light source over multiple pixels. The use of the color mosaic reduces both the effective imager resolution (by 50%) and the low light response (to less than one third). This precludes the use of the same sensor for TSR or LDW. Given that the three other applications already require a high resolution monochrome sensor, a new AHC algorithm was developed that is primarily monochrome but makes use of the high resolution imager that is available. In other words, sophisticated pattern recognition techniques are used on the higher resolution monochrome image to identify light sources instead of relying on color information. Some color information is available from the red/clear sensor, but only when the light source is at medium to short distance and covers a large enough area in the image (typically more than 16 pixels). A detailed description of the AHC algorithm is beyond the scope of this paper; the algorithm highlights are as follows:

- Detect bright spots in the sub-sampled long exposure image and then perform clustering and classification in the full resolution image.
- Classify spots based on brightness, edge shape and internal texture.
- Get further brightness information from the short exposure frames and classify obvious oncoming headlights based on size and brightness.
- Track spots over time and compute the change in size and brightness.
- Pair up matching spots based on similarity of shape, brightness and motion.
- Classify pairs as oncoming or taillights based on distance, brightness and color, and estimate distance.
- Unmatched spots might be motorcycle taillights. For spots above a threshold size and brightness, search for other supporting evidence such as red color and the patch of brightness where the road is illuminated by the motorcycle headlight.

Figure 3a shows a typical highway scene with oncoming vehicles, preceding vehicles, traffic signs and reflectors. In the long exposure frame, even the reflections off the signs on the right can appear as saturated pixels. Figures 3b, 3c and 3d show some characteristic differences between taillights and reflectors which can be used for classification:

- Taillights have a smooth roll-off in brightness at the edges (figure 3c) as opposed to the sharp edges on the rectangular sign (figure 3d).
- The brightness of the circular sign drops off gradually like a light source, but in this case it is due to the dark ring of a speed sign. Any light source that large should be much brighter (see the oncoming lights in figure 3b, for example).
- The red pixels on the circular sign are darker than the saturated clear pixels, further indicating that it is not a red light.
- The two reflectors in figure 3c have sharp edges.
- The two reflectors in figure 3d are not at the same height, and their size/distance ratio does not match a taillight pair, so they cannot be a pair of taillights.

Figure 4 gives an example of how the red pixels can be used to determine the color of a light source.

4 Multiple Exposures

No single gain/exposure setting works well for all applications and, in fact, some applications (such as AHC and TSR) require more than one gain/exposure setting on their own. The solution is to sequence through a set of gain/exposure settings.
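The cycling scheme can be sketched as a simple round-robin over three presets. Only the three-frame structure and the 66msec set time come from the text; the individual exposure and gain values below are illustrative assumptions.

```python
# Round-robin gain/exposure scheduler for the three-frame cycle.
# Preset values are illustrative assumptions, not figures from the text.
PRESETS = [
    {"name": "long",   "exposure_ms": 20.0, "gain": 1.0},  # LDW/FCW, distant taillights
    {"name": "medium", "exposure_ms": 5.0,  "gain": 4.0},  # TSR: little motion blur
    {"name": "short",  "exposure_ms": 1.0,  "gain": 1.0},  # light-source classification
]

def preset_for_frame(frame_idx):
    """Return the gain/exposure preset used for a given frame index."""
    return PRESETS[frame_idx % len(PRESETS)]

for i in range(6):
    p = preset_for_frame(i)
    print(i, p["name"], p["exposure_ms"], p["gain"])
```

Because the sensor can switch gain and exposure every frame, the scheduler only needs to look at the frame index modulo three.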
There are three frames captured in sequence with different gain/exposure settings in each. A complete set of three frames is captured in 66msec. Figure 5 shows an example of such a frame sequence. Figure 5a is a long exposure frame optimized for LDW and FCW. It has a long exposure and low gain to give strong lane markings even in low light conditions. The outline of the vehicle ahead and its point of contact with the ground can also be determined, which is then used to compute distance [3] and time-to-contact [2]. The long exposure can also be used to detect distant taillights for AHC and candidate circles for TSR. However, the long exposure causes motion blur on the traffic sign, so it is not readable. Figure 5b has a medium exposure and high gain. It has little motion blur and is used for TSR. The gain

Figure 3: (a) A typical highway scene with oncoming vehicles, preceding vehicles, traffic signs and reflectors. Panels (b), (c) and (d) highlight some of the features that are used for classification (see text).

Figure 4: Use of the red pixels to determine the color of a light source. (a) An in-path vehicle. The halo of the taillights is smooth since the response of the red and clear pixels to red light is identical. (b) The halo around the oncoming headlights shows a reduced response on the red pixels, indicating that it is a white light source.

Figure 5: A sequence of three different gain/exposure combinations: (a) A long exposure, primarily used for lane and vehicle detection, gives clear lane marks but produces motion blur on traffic signs, and distant oncoming lights tend to blur. (b) A medium exposure, high gain image has no motion blur and is used for TSR. (c) A short exposure is used together with the long exposure frame for accurate interpretation of light sources.

and exposure settings used for the TSR frame are determined dynamically. Once the outline circle of a traffic sign has been detected and tracked in two images (any combination of long exposure frames and TSR frames), it is possible to predict the location and size of the sign in the next image. It is also possible to determine the maximum allowable exposure that will keep the motion blur below a certain limit; it is typically kept under one pixel. In some cases the maximum allowable exposure, combined with the maximum gain, gives an image which is too bright and the traffic sign is saturated. In order to avoid this situation, the brightness of the sign is predicted based on the distance (derived from the image size of the circle), the angle (based on the predicted location of the sign in the image) and the high/low beam status. The gain and exposure are then set accordingly. Further adjustment can be made using closed loop control.

Figure 6: (a) A detail of the lane marking as it appears in the long exposure image. (b) The corresponding point in the TSR image. The lane marking brightness is similar, but note the increased background noise.

Since the TSR frame uses a high gain, it has significant image noise and is not optimal for LDW (see figure 6). However, the TSR frame does give useful information on well marked highways and can effectively double the frame rate on such roads (which are typical for fast moving vehicles). This is an opening for future development which could give a performance boost beyond the current LDW technology. Figure 5c is a short exposure frame that helps in the accurate classification of the light sources detected in the long exposure frame. Figures 7a and 7b show in detail the center of the image in the long and short exposures respectively. The bright cluster detected on the left in figure 7a clearly appears in the short exposure image as two distinct pairs of oncoming headlights, not one close pair of oncoming lights.
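The blur-limited exposure rule for the TSR frame (cap the exposure so the predicted motion blur stays under one pixel) can be sketched as follows. The tracked positions and frame interval in the example are hypothetical.

```python
# Cap the TSR-frame exposure so the predicted motion blur stays below a limit.
def max_exposure_ms(x_prev_px, x_curr_px, frame_dt_ms, blur_limit_px=1.0):
    """Maximum exposure (ms) keeping blur under blur_limit_px, given the
    sign's tracked image positions in two frames frame_dt_ms apart."""
    dx = abs(x_curr_px - x_prev_px)       # apparent motion in pixels per frame
    if dx == 0.0:
        return float("inf")               # stationary in the image: no blur limit
    return blur_limit_px * frame_dt_ms / dx

# A sign edge that moved 8 px between frames 22 ms apart:
print(max_exposure_ms(300.0, 308.0, 22.0))   # 2.75 ms keeps blur under 1 px
```

As described above, the actual system then clips this value against the brightness prediction so the sign does not saturate.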
The cluster on the right taillight of the in-path vehicle is clearly split into a taillight on the close vehicle, which matches the left light, and a taillight on a distant vehicle. The long and short exposure frames also help with the detection and recognition of overhead electronic signs: figure 8 shows how the long exposure is used to detect candidate overhead signs and the short exposure is used to read the content of the signs.

Figure 7: Use of the short exposure frame to help classify the light source clusters detected in the long exposure frame (a).

5 The Mobileye EyeQ Family

The Mobileye EyeQ family was developed to support the computational requirements of multiple DAS applications running concurrently. The first generation, the Mobileye EyeQ, combines two general purpose CPUs and four vision computation elements (VCEs) that provide hardware acceleration for pattern recognition, object tracking and other basic building blocks of computer vision algorithms. The two CPUs and the VCE units can all operate in parallel under the control of one of the CPUs, which is also in charge of camera gain/exposure control and CAN communication with the host vehicle. The Mobileye EyeQ can be found in a number of serial production programs, including the 2008 GM Cadillac CTS and Buick Lucerne, the 2008 Volvo S80, V70 and XC70 models, where it performs vehicle and lane detection, and the newly announced BMW 7 Series, where it performs Speed Limit Information (SLI), High Beam Assist (HBA) and Lane Departure Warning (LDW). The second generation, the Mobileye EyeQ2, follows the same architecture concept but with significant enhancements that give it almost a sixfold increase in computational power. The main goal of the Mobileye EyeQ2 was to be able to add pedestrian detection, and more, to the bundle of applications running concurrently on a single, low cost ECU. Figure 9 shows the block level diagram of the new chip. The main features

are:

- Two floating-point MIPS34K hyper-threaded RISC CPUs running at 333MHz (twice the chip speed)
- 1 MByte of internal SRAM
- Five enhanced Vision Computation Elements (VCEs) with increased computational power and flexibility
- Three new Vector Microcode Processors (VMPs)

Figure 8: (a) The long exposure is used to detect candidate overhead signs. (b) The short exposure is used to read the signs.

The Vector Microcode Processor (VMP) is an innovative design for a coprocessor module for ASICs targeting computer vision applications: it operates very efficiently on 2D patches of data and has some very powerful and unique instructions for vision processing. The VMP is a fully programmable VLIW vector processing unit with local memory but without cache, giving deterministic memory access and single clock instructions. The pipeline is handled by the programmer and compiler at compile time. The Mobileye EyeQ2 will be launched in serial production vehicles starting in 2009, with an application involving a consolidated feature package of lane detection, vehicle detection, pedestrian detection and fusion.

Figure 9: Block level diagram of the Mobileye EyeQ2.

References

[1] ISO 17361:2007, Intelligent transport systems -- Lane departure warning systems -- Performance requirements and test procedures.
[2] O. Mano, G. Stein, E. Dagan and A. Shashua. Forward Collision Warning with a Single Camera. In IEEE Intelligent Vehicles Symposium (IV2004), June 2004, Parma, Italy.
[3] G. P. Stein, O. Mano and A. Shashua. Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy. In IEEE Intelligent Vehicles Symposium (IV2003), June 2003, Columbus, OH.
[4] S. V. Rice, G. Nagy and T. A. Nartker. Optical Character Recognition: An Illustrated Guide to the Frontier. Springer, 1999.
[5] M. F. Tappen, B. C. Russell, and W. T. Freeman.
Efficient Graphical Models for Processing Images. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), June 2004.
[6] J. S. Stam et al. Continuously Variable Headlamp Control. US Patent No. 6,593,698.
[7] K. Schofield et al. Vehicle Headlight Control Using Imaging Sensor. US Patent No. 5,796,094.


LED flicker: Root cause, impact and measurement for automotive imaging applications https://doi.org/10.2352/issn.2470-1173.2018.17.avm-146 2018, Society for Imaging Science and Technology LED flicker: Root cause, impact and measurement for automotive imaging applications Brian Deegan;

More information

A Winning Combination

A Winning Combination A Winning Combination Risk factors Statements in this presentation that refer to future plans and expectations are forward-looking statements that involve a number of risks and uncertainties. Words such

More information

Project. Document identification

Project. Document identification Project GRANT AGREEMENT NO. ACRONYM TITLE CALL FUNDING SCHEME TITLE 248898 2WIDE_SENSE WIDE SPECTRAL BAND & WIDE DYNAMICS MULTIFUNCTIONAL IMAGING SENSOR ENABLING SAFER CAR TRANSPORTATION FP7-ICT-2009.6.1

More information

FLASH LiDAR KEY BENEFITS

FLASH LiDAR KEY BENEFITS In 2013, 1.2 million people died in vehicle accidents. That is one death every 25 seconds. Some of these lives could have been saved with vehicles that have a better understanding of the world around them

More information

ANPR INSTALLATION MANUAL

ANPR INSTALLATION MANUAL ANPR INSTALLATION MANUAL Version 1.1 04/22/2016 ANPR page 2 of 12 1. Camera and scene requirements. 2. How to. 3. Recommendations on mounting and adjusting. 4. How not to. Common mistakes. ANPR page 3

More information

White Paper High Dynamic Range Imaging

White Paper High Dynamic Range Imaging WPE-2015XI30-00 for Machine Vision What is Dynamic Range? Dynamic Range is the term used to describe the difference between the brightest part of a scene and the darkest part of a scene at a given moment

More information

Lecture 19: Depth Cameras. Kayvon Fatahalian CMU : Graphics and Imaging Architectures (Fall 2011)

Lecture 19: Depth Cameras. Kayvon Fatahalian CMU : Graphics and Imaging Architectures (Fall 2011) Lecture 19: Depth Cameras Kayvon Fatahalian CMU 15-869: Graphics and Imaging Architectures (Fall 2011) Continuing theme: computational photography Cheap cameras capture light, extensive processing produces

More information

Development of Hybrid Image Sensor for Pedestrian Detection

Development of Hybrid Image Sensor for Pedestrian Detection AUTOMOTIVE Development of Hybrid Image Sensor for Pedestrian Detection Hiroaki Saito*, Kenichi HatanaKa and toshikatsu HayaSaKi To reduce traffic accidents and serious injuries at intersections, development

More information

23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS. Sergii Bykov Technical Lead Machine Learning 12 Oct 2017

23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS. Sergii Bykov Technical Lead Machine Learning 12 Oct 2017 23270: AUGMENTED REALITY FOR NAVIGATION AND INFORMATIONAL ADAS Sergii Bykov Technical Lead Machine Learning 12 Oct 2017 Product Vision Company Introduction Apostera GmbH with headquarter in Munich, was

More information

Driver Education Classroom and In-Car Curriculum Unit 3 Space Management System

Driver Education Classroom and In-Car Curriculum Unit 3 Space Management System Driver Education Classroom and In-Car Curriculum Unit 3 Space Management System Driver Education Classroom and In-Car Instruction Unit 3-2 Unit Introduction Unit 3 will introduce operator procedural and

More information

MEASUREMENT CAMERA USER GUIDE

MEASUREMENT CAMERA USER GUIDE How to use your Aven camera s imaging and measurement tools Part 1 of this guide identifies software icons for on-screen functions, camera settings and measurement tools. Part 2 provides step-by-step operating

More information

How does prism technology help to achieve superior color image quality?

How does prism technology help to achieve superior color image quality? WHITE PAPER How does prism technology help to achieve superior color image quality? Achieving superior image quality requires real and full color depth for every channel, improved color contrast and color

More information

Quintic Hardware Tutorial Camera Set-Up

Quintic Hardware Tutorial Camera Set-Up Quintic Hardware Tutorial Camera Set-Up 1 All Quintic Live High-Speed cameras are specifically designed to meet a wide range of needs including coaching, performance analysis and research. Quintic LIVE

More information

Vision Lighting Seminar

Vision Lighting Seminar Creators of Evenlite Vision Lighting Seminar Daryl Martin Midwest Sales & Support Manager Advanced illumination 734-213 213-13121312 dmartin@advill.com www.advill.com 2005 1 Objectives Lighting Source

More information

EBU - Tech 3335 : Methods of measuring the imaging performance of television cameras for the purposes of characterisation and setting

EBU - Tech 3335 : Methods of measuring the imaging performance of television cameras for the purposes of characterisation and setting EBU - Tech 3335 : Methods of measuring the imaging performance of television cameras for the purposes of characterisation and setting Alan Roberts, March 2016 SUPPLEMENT 19: Assessment of a Sony a6300

More information

Visione per il veicolo Paolo Medici 2017/ Visual Perception

Visione per il veicolo Paolo Medici 2017/ Visual Perception Visione per il veicolo Paolo Medici 2017/2018 02 Visual Perception Today Sensor Suite for Autonomous Vehicle ADAS Hardware for ADAS Sensor Suite Which sensor do you know? Which sensor suite for Which algorithms

More information

Nova Full-Screen Calibration System

Nova Full-Screen Calibration System Nova Full-Screen Calibration System Version: 5.0 1 Preparation Before the Calibration 1 Preparation Before the Calibration 1.1 Description of Operating Environments Full-screen calibration, which is used

More information

Development of Gaze Detection Technology toward Driver's State Estimation

Development of Gaze Detection Technology toward Driver's State Estimation Development of Gaze Detection Technology toward Driver's State Estimation Naoyuki OKADA Akira SUGIE Itsuki HAMAUE Minoru FUJIOKA Susumu YAMAMOTO Abstract In recent years, the development of advanced safety

More information

PARALLEL ALGORITHMS FOR HISTOGRAM-BASED IMAGE REGISTRATION. Benjamin Guthier, Stephan Kopf, Matthias Wichtlhuber, Wolfgang Effelsberg

PARALLEL ALGORITHMS FOR HISTOGRAM-BASED IMAGE REGISTRATION. Benjamin Guthier, Stephan Kopf, Matthias Wichtlhuber, Wolfgang Effelsberg This is a preliminary version of an article published by Benjamin Guthier, Stephan Kopf, Matthias Wichtlhuber, and Wolfgang Effelsberg. Parallel algorithms for histogram-based image registration. Proc.

More information

English PRO-642. Advanced Features: On-Screen Display

English PRO-642. Advanced Features: On-Screen Display English PRO-642 Advanced Features: On-Screen Display 1 Adjusting the Camera Settings The joystick has a middle button that you click to open the OSD menu. This button is also used to select an option that

More information

ImagesPlus Basic Interface Operation

ImagesPlus Basic Interface Operation ImagesPlus Basic Interface Operation The basic interface operation menu options are located on the File, View, Open Images, Open Operators, and Help main menus. File Menu New The New command creates a

More information

FTA SI-640 High Speed Camera Installation and Use

FTA SI-640 High Speed Camera Installation and Use FTA SI-640 High Speed Camera Installation and Use Last updated November 14, 2005 Installation The required drivers are included with the standard Fta32 Video distribution, so no separate folders exist

More information

COLOR FILTER PATTERNS

COLOR FILTER PATTERNS Sparse Color Filter Pattern Overview Overview The Sparse Color Filter Pattern (or Sparse CFA) is a four-channel alternative for obtaining full-color images from a single image sensor. By adding panchromatic

More information

Digital Camera Sensors

Digital Camera Sensors Digital Camera Sensors Agenda Basic Parts of a Digital Camera The Pixel Camera Sensor Pixels Camera Sensor Sizes Pixel Density CMOS vs. CCD Digital Signal Processors ISO, Noise & Light Sensor Comparison

More information

A Vehicle Speed Measurement System for Nighttime with Camera

A Vehicle Speed Measurement System for Nighttime with Camera Proceedings of the 2nd International Conference on Industrial Application Engineering 2014 A Vehicle Speed Measurement System for Nighttime with Camera Yuji Goda a,*, Lifeng Zhang a,#, Seiichi Serikawa

More information

Basic Optics System OS-8515C

Basic Optics System OS-8515C 40 50 30 60 20 70 10 80 0 90 80 10 20 70 T 30 60 40 50 50 40 60 30 70 20 80 90 90 80 BASIC OPTICS RAY TABLE 10 0 10 70 20 60 50 40 30 Instruction Manual with Experiment Guide and Teachers Notes 012-09900B

More information

Design of Temporally Dithered Codes for Increased Depth of Field in Structured Light Systems

Design of Temporally Dithered Codes for Increased Depth of Field in Structured Light Systems Design of Temporally Dithered Codes for Increased Depth of Field in Structured Light Systems Ricardo R. Garcia University of California, Berkeley Berkeley, CA rrgarcia@eecs.berkeley.edu Abstract In recent

More information

Infrared Night Vision Based Pedestrian Detection System

Infrared Night Vision Based Pedestrian Detection System Infrared Night Vision Based Pedestrian Detection System INTRODUCTION Chia-Yuan Ho, Chiung-Yao Fang, 2007 Department of Computer Science & Information Engineering National Taiwan Normal University Traffic

More information

Exercise questions for Machine vision

Exercise questions for Machine vision Exercise questions for Machine vision This is a collection of exercise questions. These questions are all examination alike which means that similar questions may appear at the written exam. I ve divided

More information

High Performance Imaging Using Large Camera Arrays

High Performance Imaging Using Large Camera Arrays High Performance Imaging Using Large Camera Arrays Presentation of the original paper by Bennett Wilburn, Neel Joshi, Vaibhav Vaish, Eino-Ville Talvala, Emilio Antunez, Adam Barth, Andrew Adams, Mark Horowitz,

More information

Improving Image Quality by Camera Signal Adaptation to Lighting Conditions

Improving Image Quality by Camera Signal Adaptation to Lighting Conditions Improving Image Quality by Camera Signal Adaptation to Lighting Conditions Mihai Negru and Sergiu Nedevschi Technical University of Cluj-Napoca, Computer Science Department Mihai.Negru@cs.utcluj.ro, Sergiu.Nedevschi@cs.utcluj.ro

More information

Image Capture and Problems

Image Capture and Problems Image Capture and Problems A reasonable capture IVR Vision: Flat Part Recognition Fisher lecture 4 slide 1 Image Capture: Focus problems Focus set to one distance. Nearby distances in focus (depth of focus).

More information

Revision of the EU General Safety Regulation and Pedestrian Safety Regulation

Revision of the EU General Safety Regulation and Pedestrian Safety Regulation AC.nl Revision of the EU General Safety Regulation and Pedestrian Safety Regulation 11 September 2018 ETSC isafer Fitting safety as standard Directorate-General for Internal Market, Automotive and Mobility

More information

Photo Editing Workflow

Photo Editing Workflow Photo Editing Workflow WHY EDITING Modern digital photography is a complex process, which starts with the Photographer s Eye, that is, their observational ability, it continues with photo session preparations,

More information

Chapter 12 Image Processing

Chapter 12 Image Processing Chapter 12 Image Processing The distance sensor on your self-driving car detects an object 100 m in front of your car. Are you following the car in front of you at a safe distance or has a pedestrian jumped

More information

Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles

Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles Choosing the Optimum Mix of Sensors for Driver Assistance and Autonomous Vehicles Ali Osman Ors May 2, 2017 Copyright 2017 NXP Semiconductors 1 Sensing Technology Comparison Rating: H = High, M=Medium,

More information

Vixar High Power Array Technology

Vixar High Power Array Technology Vixar High Power Array Technology I. Introduction VCSELs arrays emitting power ranging from 50mW to 10W have emerged as an important technology for applications within the consumer, industrial, automotive

More information

GPI INSTRUMENT PAGES

GPI INSTRUMENT PAGES GPI INSTRUMENT PAGES This document presents a snapshot of the GPI Instrument web pages as of the date of the call for letters of intent. Please consult the GPI web pages themselves for up to the minute

More information

Data Sheet SMX-160 Series USB2.0 Cameras

Data Sheet SMX-160 Series USB2.0 Cameras Data Sheet SMX-160 Series USB2.0 Cameras SMX-160 Series USB2.0 Cameras Data Sheet Revision 3.0 Copyright 2001-2010 Sumix Corporation 4005 Avenida de la Plata, Suite 201 Oceanside, CA, 92056 Tel.: (877)233-3385;

More information

Section 2 concludes that a glare meter based on a digital camera is probably too expensive to develop and produce, and may not be simple in use.

Section 2 concludes that a glare meter based on a digital camera is probably too expensive to develop and produce, and may not be simple in use. Possible development of a simple glare meter Kai Sørensen, 17 September 2012 Introduction, summary and conclusion Disability glare is sometimes a problem in road traffic situations such as: - at road works

More information

ROAD TO THE BEST ALPR IMAGES

ROAD TO THE BEST ALPR IMAGES ROAD TO THE BEST ALPR IMAGES INTRODUCTION Since automatic license plate recognition (ALPR) or automatic number plate recognition (ANPR) relies on optical character recognition (OCR) of images, it makes

More information

Experiments with An Improved Iris Segmentation Algorithm

Experiments with An Improved Iris Segmentation Algorithm Experiments with An Improved Iris Segmentation Algorithm Xiaomei Liu, Kevin W. Bowyer, Patrick J. Flynn Department of Computer Science and Engineering University of Notre Dame Notre Dame, IN 46556, U.S.A.

More information

Face Detection DVR includes one or more channel with face detection algorithm. It

Face Detection DVR includes one or more channel with face detection algorithm. It Face Detection Introduction Face Detection DVR includes one or more channel with face detection algorithm. It can analyze video signal and identify faces in images but ignore other information. Device

More information

Speed Traffic-Sign Recognition Algorithm for Real-Time Driving Assistant System

Speed Traffic-Sign Recognition Algorithm for Real-Time Driving Assistant System R3-11 SASIMI 2013 Proceedings Speed Traffic-Sign Recognition Algorithm for Real-Time Driving Assistant System Masaharu Yamamoto 1), Anh-Tuan Hoang 2), Mutsumi Omori 2), Tetsushi Koide 1) 2). 1) Graduate

More information

An Autonomous Vehicle Navigation System using Panoramic Machine Vision Techniques

An Autonomous Vehicle Navigation System using Panoramic Machine Vision Techniques An Autonomous Vehicle Navigation System using Panoramic Machine Vision Techniques Kevin Rushant, Department of Computer Science, University of Sheffield, GB. email: krusha@dcs.shef.ac.uk Libor Spacek,

More information

Real-Time Face Detection and Tracking for High Resolution Smart Camera System

Real-Time Face Detection and Tracking for High Resolution Smart Camera System Digital Image Computing Techniques and Applications Real-Time Face Detection and Tracking for High Resolution Smart Camera System Y. M. Mustafah a,b, T. Shan a, A. W. Azman a,b, A. Bigdeli a, B. C. Lovell

More information

Positioning Challenges in Cooperative Vehicular Safety Systems

Positioning Challenges in Cooperative Vehicular Safety Systems Positioning Challenges in Cooperative Vehicular Safety Systems Dr. Luca Delgrossi Mercedes-Benz Research & Development North America, Inc. October 15, 2009 Positioning for Automotive Navigation Personal

More information

Lane Detection in Automotive

Lane Detection in Automotive Lane Detection in Automotive Contents Introduction... 2 Image Processing... 2 Reading an image... 3 RGB to Gray... 3 Mean and Gaussian filtering... 5 Defining our Region of Interest... 6 BirdsEyeView Transformation...

More information

Autofocus Problems The Camera Lens

Autofocus Problems The Camera Lens NEWHorenstein.04.Lens.32-55 3/11/05 11:53 AM Page 36 36 4 The Camera Lens Autofocus Problems Autofocus can be a powerful aid when it works, but frustrating when it doesn t. And there are some situations

More information

One Week to Better Photography

One Week to Better Photography One Week to Better Photography Glossary Adobe Bridge Useful application packaged with Adobe Photoshop that previews, organizes and renames digital image files and creates digital contact sheets Adobe Photoshop

More information

Until now, I have discussed the basics of setting

Until now, I have discussed the basics of setting Chapter 3: Shooting Modes for Still Images Until now, I have discussed the basics of setting up the camera for quick shots, using Intelligent Auto mode to take pictures with settings controlled mostly

More information

Transportation Informatics Group, ALPEN-ADRIA University of Klagenfurt. Transportation Informatics Group University of Klagenfurt 3/10/2009 1

Transportation Informatics Group, ALPEN-ADRIA University of Klagenfurt. Transportation Informatics Group University of Klagenfurt 3/10/2009 1 Machine Vision Transportation Informatics Group University of Klagenfurt Alireza Fasih, 2009 3/10/2009 1 Address: L4.2.02, Lakeside Park, Haus B04, Ebene 2, Klagenfurt-Austria Index Driver Fatigue Detection

More information

IJSER. Motion detection done at broad daylight. surrounding. This bright area will also change as. and night has some slight differences.

IJSER. Motion detection done at broad daylight. surrounding. This bright area will also change as. and night has some slight differences. 2014 International Journal of Scientific & Engineering Research, Volume 5, Issue 5, May-2014 1638 Detection Of Moving Object On Any Terrain By Using Image Processing Techniques D. Mohan Ranga Rao, T. Niharika

More information

Coded Aperture for Projector and Camera for Robust 3D measurement

Coded Aperture for Projector and Camera for Robust 3D measurement Coded Aperture for Projector and Camera for Robust 3D measurement Yuuki Horita Yuuki Matugano Hiroki Morinaga Hiroshi Kawasaki Satoshi Ono Makoto Kimura Yasuo Takane Abstract General active 3D measurement

More information

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring

Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Implementation of Adaptive Coded Aperture Imaging using a Digital Micro-Mirror Device for Defocus Deblurring Ashill Chiranjan and Bernardt Duvenhage Defence, Peace, Safety and Security Council for Scientific

More information

License Plate Localisation based on Morphological Operations

License Plate Localisation based on Morphological Operations License Plate Localisation based on Morphological Operations Xiaojun Zhai, Faycal Benssali and Soodamani Ramalingam School of Engineering & Technology University of Hertfordshire, UH Hatfield, UK Abstract

More information

White Paper Focusing more on the forest, and less on the trees

White Paper Focusing more on the forest, and less on the trees White Paper Focusing more on the forest, and less on the trees Why total system image quality is more important than any single component of your next document scanner Contents Evaluating total system

More information

Automatics Vehicle License Plate Recognition using MATLAB

Automatics Vehicle License Plate Recognition using MATLAB Automatics Vehicle License Plate Recognition using MATLAB Alhamzawi Hussein Ali mezher Faculty of Informatics/University of Debrecen Kassai ut 26, 4028 Debrecen, Hungary. Abstract - The objective of this

More information

PHOTOGRAPHING THE ELEMENTS

PHOTOGRAPHING THE ELEMENTS PHOTOGRAPHING THE ELEMENTS PHIL MORGAN FOR SOUTH WEST STORM CHASERS CONTENTS: The basics of exposure: Page 3 ISO: Page 3 Aperture (with examples): Pages 4-7 Shutter speed: Pages 8-9 Exposure overview:

More information

Master thesis: Author: Examiner: Tutor: Duration: 1. Introduction 2. Ghost Categories Figure 1 Ghost categories

Master thesis: Author: Examiner: Tutor: Duration: 1. Introduction 2. Ghost Categories Figure 1 Ghost categories Master thesis: Development of an Algorithm for Ghost Detection in the Context of Stray Light Test Author: Tong Wang Examiner: Prof. Dr. Ing. Norbert Haala Tutor: Dr. Uwe Apel (Robert Bosch GmbH) Duration:

More information

Applying Automated Optical Inspection Ben Dawson, DALSA Coreco Inc., ipd Group (987)

Applying Automated Optical Inspection Ben Dawson, DALSA Coreco Inc., ipd Group (987) Applying Automated Optical Inspection Ben Dawson, DALSA Coreco Inc., ipd Group bdawson@goipd.com (987) 670-2050 Introduction Automated Optical Inspection (AOI) uses lighting, cameras, and vision computers

More information

Reikan FoCal Fully Automatic Test Report

Reikan FoCal Fully Automatic Test Report Focus Calibration and Analysis Software Test run on: 02/02/2016 00:07:17 with FoCal 2.0.6.2416W Report created on: 02/02/2016 00:12:31 with FoCal 2.0.6W Overview Test Information Property Description Data

More information

Use of Photogrammetry for Sensor Location and Orientation

Use of Photogrammetry for Sensor Location and Orientation Use of Photogrammetry for Sensor Location and Orientation Michael J. Dillon and Richard W. Bono, The Modal Shop, Inc., Cincinnati, Ohio David L. Brown, University of Cincinnati, Cincinnati, Ohio In this

More information

Basic Digital Image Processing. The Structure of Digital Images. An Overview of Image Processing. Image Restoration: Line Drop-outs

Basic Digital Image Processing. The Structure of Digital Images. An Overview of Image Processing. Image Restoration: Line Drop-outs Basic Digital Image Processing A Basic Introduction to Digital Image Processing ~~~~~~~~~~ Rev. Ronald J. Wasowski, C.S.C. Associate Professor of Environmental Science University of Portland Portland,

More information

Putting It All Together: Computer Architecture and the Digital Camera

Putting It All Together: Computer Architecture and the Digital Camera 461 Putting It All Together: Computer Architecture and the Digital Camera This book covers many topics in circuit analysis and design, so it is only natural to wonder how they all fit together and how

More information

KEYENCE VKX LASER-SCANNING CONFOCAL MICROSCOPE Standard Operating Procedures (updated Oct 2017)

KEYENCE VKX LASER-SCANNING CONFOCAL MICROSCOPE Standard Operating Procedures (updated Oct 2017) KEYENCE VKX LASER-SCANNING CONFOCAL MICROSCOPE Standard Operating Procedures (updated Oct 2017) 1 Introduction You must be trained to operate the Laser-scanning confocal microscope (LSCM) independently.

More information

Machine Vision Basics

Machine Vision Basics Machine Vision Basics bannerengineering.com Contents The Four-Step Process 2 Machine Vision Components 2 Imager 2 Exposure 3 Gain 3 Contrast 3 Lens 4 Lighting 5 Backlight 5 Ring Light 6 Directional Lighting

More information

Facial Biometric For Performance. Best Practice Guide

Facial Biometric For Performance. Best Practice Guide Facial Biometric For Performance Best Practice Guide Foreword State-of-the-art face recognition systems under controlled lighting condition are proven to be very accurate with unparalleled user-friendliness,

More information

Sharpness, Resolution and Interpolation

Sharpness, Resolution and Interpolation Sharpness, Resolution and Interpolation Introduction There are a lot of misconceptions about resolution, camera pixel count, interpolation and their effect on astronomical images. Some of the confusion

More information

Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography

Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography Applications of Flash and No-Flash Image Pairs in Mobile Phone Photography Xi Luo Stanford University 450 Serra Mall, Stanford, CA 94305 xluo2@stanford.edu Abstract The project explores various application

More information

Reikan FoCal Aperture Sharpness Test Report

Reikan FoCal Aperture Sharpness Test Report Focus Calibration and Analysis Software Test run on: 26/01/2016 17:02:00 with FoCal 2.0.6.2416W Report created on: 26/01/2016 17:03:39 with FoCal 2.0.6W Overview Test Information Property Description Data

More information

Intelligent Traffic Sign Detector: Adaptive Learning Based on Online Gathering of Training Samples

Intelligent Traffic Sign Detector: Adaptive Learning Based on Online Gathering of Training Samples 2011 IEEE Intelligent Vehicles Symposium (IV) Baden-Baden, Germany, June 5-9, 2011 Intelligent Traffic Sign Detector: Adaptive Learning Based on Online Gathering of Training Samples Daisuke Deguchi, Mitsunori

More information

Vandal Proof Camera: v-cam 500 (D-WDR, 650 TVL, Sony Effio-E, 0.05 lx) Vandal Proof Camera: v-cam 500 (D-WDR, 650 TVL, Sony Effio-E, 0.

Vandal Proof Camera: v-cam 500 (D-WDR, 650 TVL, Sony Effio-E, 0.05 lx) Vandal Proof Camera: v-cam 500 (D-WDR, 650 TVL, Sony Effio-E, 0. Vandal Proof Camera: v-cam 500 (D-WDR, 650 TVL, Sony Effio-E, 0.05 lx) Code: M10772 View of the camera View of the inside. Visible OSD keypad (on the left picture) and lens locking screws (on the right).

More information

Reikan FoCal Fully Automatic Test Report

Reikan FoCal Fully Automatic Test Report Focus Calibration and Analysis Software Reikan FoCal Fully Automatic Test Report Test run on: 26/02/2016 17:23:18 with FoCal 2.0.8.2500M Report created on: 26/02/2016 17:28:27 with FoCal 2.0.8M Overview

More information

for D500 (serial number ) with AF-S VR Nikkor 500mm f/4g ED + 1.4x TC Test run on: 20/09/ :57:09 with FoCal

for D500 (serial number ) with AF-S VR Nikkor 500mm f/4g ED + 1.4x TC Test run on: 20/09/ :57:09 with FoCal Powered by Focus Calibration and Analysis Software Test run on: 20/09/2016 12:57:09 with FoCal 2.2.0.2854M Report created on: 20/09/2016 13:04:53 with FoCal 2.2.0M Overview Test Information Property Description

More information

Face Detection System on Ada boost Algorithm Using Haar Classifiers

Face Detection System on Ada boost Algorithm Using Haar Classifiers Vol.2, Issue.6, Nov-Dec. 2012 pp-3996-4000 ISSN: 2249-6645 Face Detection System on Ada boost Algorithm Using Haar Classifiers M. Gopi Krishna, A. Srinivasulu, Prof (Dr.) T.K.Basak 1, 2 Department of Electronics

More information

Speed Traffic-Sign Number Recognition on Low Cost FPGA for Robust Sign Distortion and Illumination Conditions

Speed Traffic-Sign Number Recognition on Low Cost FPGA for Robust Sign Distortion and Illumination Conditions R4-17 SASIMI 2015 Proceedings Speed Traffic-Sign on Low Cost FPGA for Robust Sign Distortion and Illumination Conditions Masaharu Yamamoto 1), Anh-Tuan Hoang 2), Tetsushi Koide 1)2) 1) Graduate School

More information

A Method of Multi-License Plate Location in Road Bayonet Image

A Method of Multi-License Plate Location in Road Bayonet Image A Method of Multi-License Plate Location in Road Bayonet Image Ying Qian The lab of Graphics and Multimedia Chongqing University of Posts and Telecommunications Chongqing, China Zhi Li The lab of Graphics

More information

Reikan FoCal Aperture Sharpness Test Report

Reikan FoCal Aperture Sharpness Test Report Focus Calibration and Analysis Software Reikan FoCal Sharpness Test Report Test run on: 26/01/2016 17:14:35 with FoCal 2.0.6.2416W Report created on: 26/01/2016 17:16:16 with FoCal 2.0.6W Overview Test

More information

Target Range Analysis for the LOFTI Triple Field-of-View Camera

Target Range Analysis for the LOFTI Triple Field-of-View Camera Critical Imaging LLC Tele: 315.732.1544 2306 Bleecker St. www.criticalimaging.net Utica, NY 13501 info@criticalimaging.net Introduction Target Range Analysis for the LOFTI Triple Field-of-View Camera The

More information

Reikan FoCal Fully Automatic Test Report

Reikan FoCal Fully Automatic Test Report Focus Calibration and Analysis Software Reikan FoCal Fully Automatic Test Report Test run on: 08/03/2017 13:52:23 with FoCal 2.4.5.3284M Report created on: 08/03/2017 13:57:35 with FoCal 2.4.5M Overview

More information

WP640 Imaging Colorimeter. Backlit Graphics Panel Analysis

WP640 Imaging Colorimeter. Backlit Graphics Panel Analysis Westboro Photonics 1505 Carling Ave, Suite 301 Ottawa, ON K1V 3L7 Wphotonics.com WP640 Imaging Colorimeter Backlit Graphics Panel Analysis Issued: May 5, 2014 Table of Contents 1.0 WP600 SERIES IMAGING

More information

Number Plate Detection with a Multi-Convolutional Neural Network Approach with Optical Character Recognition for Mobile Devices

Number Plate Detection with a Multi-Convolutional Neural Network Approach with Optical Character Recognition for Mobile Devices J Inf Process Syst, Vol.12, No.1, pp.100~108, March 2016 http://dx.doi.org/10.3745/jips.04.0022 ISSN 1976-913X (Print) ISSN 2092-805X (Electronic) Number Plate Detection with a Multi-Convolutional Neural

More information

Application Note. Digital Low-Light CMOS Camera. NOCTURN Camera: Optimized for Long-Range Observation in Low Light Conditions

Application Note. Digital Low-Light CMOS Camera. NOCTURN Camera: Optimized for Long-Range Observation in Low Light Conditions Digital Low-Light CMOS Camera Application Note NOCTURN Camera: Optimized for Long-Range Observation in Low Light Conditions PHOTONIS Digital Imaging, LLC. 6170 Research Road Suite 208 Frisco, TX USA 75033

More information

AN0503 Using swarm bee LE for Collision Avoidance Systems (CAS)

AN0503 Using swarm bee LE for Collision Avoidance Systems (CAS) AN0503 Using swarm bee LE for Collision Avoidance Systems (CAS) 1.3 NA-14-0267-0019-1.3 Document Information Document Title: Document Version: 1.3 Current Date: 2016-05-18 Print Date: 2016-05-18 Document

More information

ME 6406 MACHINE VISION. Georgia Institute of Technology

ME 6406 MACHINE VISION. Georgia Institute of Technology ME 6406 MACHINE VISION Georgia Institute of Technology Class Information Instructor Professor Kok-Meng Lee MARC 474 Office hours: Tues/Thurs 1:00-2:00 pm kokmeng.lee@me.gatech.edu (404)-894-7402 Class

More information