Implementing Vision Capabilities in Embedded Systems

Jeff Bier
BDTI
2101 Webster St., Suite 1850, Oakland, CA, U.S.A.

Abstract

With the emergence of increasingly capable processors, it's becoming practical to incorporate computer vision capabilities into a wide range of embedded systems, enabling systems to analyze their environment via video inputs. Products like Microsoft's Kinect game controller and Mobileye's driver assistance systems are raising awareness of the incredible potential of embedded vision technology. As a result, many embedded system designers are beginning to think about implementing embedded vision capabilities. In this presentation, we'll explore the potential of embedded vision and introduce some of the key ingredients for implementing it. After examining some example applications, we'll introduce processors, algorithms, tools, and techniques for implementing embedded vision.

I. INTRODUCTION

We use the term "embedded vision" to refer to the use of computer vision technology in embedded systems. Stated another way, embedded vision refers to embedded systems that extract meaning from visual inputs. Similar to the way that wireless communication has become pervasive over the past 10 years, we believe that embedded vision technology will be very widely deployed in the next 10 years.

It's clear that embedded vision technology can bring huge value to a vast range of applications. Two examples are Mobileye's vision-based driver assistance systems, intended to help prevent motor vehicle accidents, and MG International's swimming pool safety system, which helps prevent swimmers from drowning. And for sheer geek appeal, it's hard to beat Intellectual Ventures' laser mosquito zapper, designed to prevent people from contracting malaria.

Just as high-speed wireless connectivity began as an exotic, costly technology, embedded vision technology has so far typically been found in complex, expensive systems, such as a surgical robot for hair transplantation and quality control inspection systems for manufacturing.

Advances in digital integrated circuits were critical in enabling high-speed wireless technology to evolve from exotic to mainstream. When chips got fast enough, inexpensive enough, and energy-efficient enough, high-speed wireless became a mass-market technology. Today one can buy a broadband wireless modem for under $100. Similarly, advances in digital chips are now paving the way for the proliferation of embedded vision into high-volume applications.

Like wireless communication, embedded vision requires lots of processing power, particularly as applications increasingly adopt high-resolution cameras and make use of multiple cameras. Providing that processing power at a cost low enough to enable mass adoption is a big challenge. This challenge is multiplied by the fact that embedded vision applications require a high degree of programmability. In contrast to wireless applications, where standards mean that, for example, algorithms don't vary dramatically from one cell phone handset to another, in embedded vision applications there are great opportunities to get better results and enable valuable features through unique algorithms.

With embedded vision, we believe that the industry is entering a virtuous circle of the sort that has characterized many other digital signal processing application domains.
Although there are few chips dedicated to embedded vision applications today, these applications are increasingly adopting high-performance, cost-effective processing chips developed for other applications, including DSPs, CPUs, FPGAs, and GPUs. As these chips continue to deliver more programmable performance per dollar and per watt, they will enable the creation of more high-volume embedded vision products. Those high-volume applications, in turn, will attract more attention from silicon providers, who will deliver even better performance, efficiency, and programmability.

II. APPLICATIONS

Computer vision research has its origins in the 1960s. In more recent decades, embedded computer vision systems have been deployed in niche applications such as target-tracking for missiles, and automated inspection for manufacturing plants. Now, as lower-cost, lower-power, and higher-performance processors emerge, embedded vision is beginning to appear in high-volume applications. Perhaps the most visible of these is the Microsoft Kinect, a peripheral for the Xbox 360 game console that uses embedded vision to enable users to control video games simply by gesturing and moving their bodies.

Another example of an emerging high-volume embedded vision application is automotive safety systems based on vision. A few automakers, such as Volvo, have begun to install vision-based safety systems in certain models. These systems perform a variety of functions, including warning the driver (and in some cases applying the brakes) when a forward collision is imminent, or when a pedestrian is in danger of being struck.

A third example of an emerging high-volume embedded vision application is smart surveillance cameras, which are cameras with the ability to detect certain kinds of activity. For example, the Archerfish Solo, a consumer-oriented smart surveillance camera, can be programmed to detect people, vehicles, or other motion in user-selected regions of the camera's field of view.

Enabled by the same kinds of chips and algorithms powering the above examples, we expect embedded vision functionality to proliferate into a wide range of products in the next few years. There are obvious places where vision can add tremendous value to equipment in consumer electronics, automotive, entertainment, medical, and retail applications, among others. In other cases, embedded vision will enable the creation of new types of equipment. The purpose of this paper is to introduce some of the practical aspects of embedded vision technology and to inspire system designers to imagine what can be done by incorporating vision capabilities into their designs.

III. ALGORITHMS

Algorithms are the essence of embedded vision. Through algorithms, visual input in the form of raw video or images is transformed into meaningful information that can be acted upon. Computer vision has been the subject of vibrant academic research for decades, and that research has yielded a deep reservoir of algorithms. For many system designers seeking to implement vision capabilities, the challenge at the algorithm level will not be inventing new algorithms, but rather selecting the best existing algorithms for the task at hand, and refining or tuning them to the specific requirements and conditions of that task.

The algorithms that are applicable depend on the nature of the vision processing being performed. Vision applications are generally constructed from a pipelined sequence of algorithms, as shown in Figure 1. Typically, the initial stages are concerned with improving the quality of the image. For example, this may include correcting geometric distortion created by imperfect lenses, enhancing contrast, and stabilizing images to compensate for undesired movement of the camera.

Figure 1. A typical embedded vision algorithm pipeline.

The second set of stages in a typical embedded vision algorithm pipeline is concerned with converting raw images (i.e., collections of pixels) into information about objects. A wide variety of techniques can be used, identifying objects based on edges, motion, color, size, or other attributes. The final set of stages is concerned with making inferences about objects. For example, in an automotive safety application, these algorithms would attempt to distinguish between vehicles, pedestrians, road signs, and other features of the scene.

Generally speaking, vision algorithms are very computationally demanding, since they involve applying complex computations to large amounts of video or image data in real time. There is typically a trade-off between the robustness of the algorithm and the amount of computation required.
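To make this three-stage structure concrete, the following minimal Python sketch shows one way such a pipeline might be organized. It is an illustrative skeleton only; the stage implementations, thresholds, and the "vehicle vs. pedestrian" heuristic are placeholder assumptions, not from the paper, and a real system would substitute application-specific algorithms at each stage.

```python
# A minimal sketch of the three-stage embedded vision pipeline described
# above. Stage implementations here are placeholders for illustration.
import cv2

def improve_image(frame):
    # Stage 1: image quality improvement (e.g., denoising, contrast).
    return cv2.GaussianBlur(frame, (3, 3), 0)

def find_objects(frame):
    # Stage 2: convert pixels into candidate objects. Here: threshold and
    # find contours; real systems may use edges, motion, color, etc.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x signature: returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]

def classify_objects(objects):
    # Stage 3: make inferences about objects (placeholder size heuristic).
    return [("vehicle" if w * h > 5000 else "pedestrian", (x, y, w, h))
            for (x, y, w, h) in objects]

cap = cv2.VideoCapture(0)              # any video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    detections = classify_objects(find_objects(improve_image(frame)))
    print(detections)
```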
A. Algorithm Example: Lens Distortion Correction

Lenses, especially inexpensive ones, tend to introduce geometric distortion into images. This distortion is typically characterized as "barrel" distortion or "pincushion" distortion, as illustrated in Figure 2.

Figure 2. Typical lens distortion. (Based on "Lens Distortion Correction" by Shehrzad Qureshi; used with permission.)

As shown in the figure, this kind of distortion causes lines that are in fact straight to appear curved, and vice-versa. This can thwart vision algorithms. Hence, it is common to apply an algorithm to reverse this distortion. The usual technique is to use a known test pattern to characterize the distortion. From this characterization data, a set of image warping coefficients is generated, which is subsequently used to undistort each frame. In other words, the warping coefficients are computed once and then applied to each frame. This is illustrated in Figure 3.

Figure 3. Lens distortion correction scheme.

One complication that arises with lens distortion correction is that the warping operation will use input data corresponding to pixel locations that do not precisely align with the actual pixel locations in the input frame. To enable this to work, interpolation is used between pixels in the input frame. The more demanding the application, the more precise the interpolation must be, and the more computationally demanding the algorithm.
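As a rough illustration of this compute-once, apply-per-frame structure, here is a Python/OpenCV sketch. OpenCV's calibration-based undistortion is one standard realization of the scheme in Figure 3, not necessarily the one the paper's examples used, and the camera matrix and distortion coefficients below are placeholders that would normally come from a test-pattern calibration step.

```python
# Sketch: lens distortion correction with a one-time map computation.
# Assumes camera_matrix and dist_coeffs were obtained offline by imaging
# a known test pattern (e.g., cv2.findChessboardCorners + cv2.calibrateCamera).
import cv2
import numpy as np

w, h = 1280, 720
camera_matrix = np.array([[1000.0, 0.0, w / 2],
                          [0.0, 1000.0, h / 2],
                          [0.0, 0.0, 1.0]])            # placeholder intrinsics
dist_coeffs = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])   # placeholder barrel distortion

# Computed once: per-pixel source coordinates (the "warping coefficients").
map_x, map_y = cv2.initUndistortRectifyMap(
    camera_matrix, dist_coeffs, None, camera_matrix, (w, h), cv2.CV_32FC1)

def undistort(frame):
    # Applied to every frame: warp with bilinear interpolation, since the
    # source coordinates generally fall between pixel centers.
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```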

For color imaging, the interpolation and warping operations must be performed separately on each color component. For example, a 720p video frame comprises 921,600 pixels, or approximately 2.8 million color components. At 60 frames per second, this corresponds to about 166 million color components per second. If the interpolation and warping operations require 10 processing operations per color component, the distortion correction algorithm will consume 1.66 billion operations per second. (And that's before we've even started trying to interpret the content of the images!)
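The arithmetic above is simple enough to check directly; a few lines of Python reproduce the estimate:

```python
# Back-of-the-envelope compute load for 720p lens distortion correction.
pixels_per_frame = 1280 * 720        # 921,600 pixels
components = pixels_per_frame * 3    # ~2.8 million color components
per_second = components * 60         # ~166 million components/s at 60 fps
ops = per_second * 10                # 10 operations per color component
print(f"{ops / 1e9:.2f} billion operations/s")   # -> 1.66
```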
B. Algorithm Example: Dense Optical Flow

Optical flow is a family of techniques used to estimate the pattern of apparent motion of objects, surfaces, and edges in a video sequence. In vision applications, optical flow is often used to estimate observer and object positions and motion in 3-d space, or to estimate image registration for super-resolution and noise reduction algorithms. Optical flow algorithms typically generate a motion vector for each pixel of a video frame.

Optical flow requires making some assumptions about the video content (this is known as the "aperture problem"). Different algorithms make different assumptions. For example, some algorithms may assume that illumination is constant across the scene, or that motion is smooth.

Many optical flow algorithms exist. They can be roughly divided into the following classes:

- Block-based methods (similar to motion estimation in video compression codecs)
- Differential methods (Lucas-Kanade, Horn-Schunck, Buxton-Buxton, and variations)
- Other methods (discrete optimization, phase correlation)

A key challenge with optical flow algorithms is aliasing, which can cause incorrect results, for example when an object in the scene has a repeating texture pattern, or when motion exceeds algorithmic constraints. Some optical flow algorithms are also sensitive to camera noise. Most optical flow algorithms are computationally intensive.

One popular approach is the Lucas-Kanade method with image pyramid. The Lucas-Kanade method is a differential method of estimating optical flow; it is simple but has significant limitations. For example, it assumes constant illumination and constant motion in a small neighborhood around the pixel position of interest. And it is limited to very small velocity vectors (less than one pixel per frame). Image pyramids are a technique to extend Lucas-Kanade to support faster motion: first, each original frame is sub-sampled to different degrees to create several pyramid levels. The Lucas-Kanade method is used at the top level (lowest resolution), yielding a coarse estimate but supporting greater motion. Lucas-Kanade is then used again at lower levels (higher resolution) to refine the optical flow estimate. This is summarized in Figure 4, and a minimal code sketch of the approach appears below.

Figure 4. Lucas-Kanade optical flow algorithm with image pyramid. Used by permission of Julien Marzat. [2]
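The following Python sketch uses OpenCV's pyramidal Lucas-Kanade implementation, cv2.calcOpticalFlowPyrLK. One assumption to note: OpenCV's routine tracks a chosen set of points rather than literally every pixel, so the sketch samples a regular grid of points to approximate a dense motion field; the pyramid depth is set with maxLevel.

```python
# Sketch: pyramidal Lucas-Kanade optical flow on a regular grid of points.
# prev_gray and next_gray are two consecutive 8-bit grayscale frames.
import cv2
import numpy as np

def grid_flow(prev_gray, next_gray, step=8):
    h, w = prev_gray.shape
    # A regular grid of sample points, approximating a dense field.
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=-1).astype(np.float32)
    pts = pts.reshape(-1, 1, 2)

    # maxLevel=3 gives a 4-level image pyramid, extending Lucas-Kanade
    # beyond its roughly one-pixel-per-frame motion limit at full resolution.
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None,
        winSize=(15, 15), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))

    flow = (new_pts - pts).reshape(-1, 2)   # per-point motion vectors
    return pts.reshape(-1, 2), flow, status.ravel()
```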
C. Algorithm Example: Pedestrian Detection

Pedestrian detection here refers to detecting the presence of people standing or walking, as illustrated in Figure 5. Pedestrian detection might more aptly be called an application rather than an algorithm; it is a complex problem requiring sophisticated algorithms.

Figure 5. Prototype pedestrian detection application implemented on a CPU and an FPGA.

Figure 6. Block diagram of proof-of-concept pedestrian detection application using an FPGA and a CPU.

In Figure 6 we briefly summarize a prototype implementation of a stationary-camera pedestrian detection system implemented using a combination of a CPU and an FPGA. In the figure, the Pre-processing block comprises operations such as scaling and noise reduction, intended to improve the quality of the image. The Image Analysis block incorporates motion detection, pixel statistics such as averages, color information, edge information, etc. At this stage of processing, the image is divided into small blocks. The Object Segmentation step groups blocks having similar statistics and thus creates an object. The statistics used for this purpose are based on user-defined features specified in the hardware configuration file. The Identification and Meta Data Generation block generates analysis results from the identified objects, such as location, size, color information, and statistical information. It puts the analysis results into a structured data format and transmits them to the CPU. Finally, the On-screen Display block receives command information from the host and superimposes graphics on the video image for display.

This prototype system, operating on 720p-resolution video at 60 frames per second, was implemented by BDTI on a combination of a Xilinx Spartan-3A DSP 3400 FPGA and a Texas Instruments OMAP3430 CPU. The total compute load is on the order of hundreds of billions of operations per second.
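The prototype above relies on custom FPGA processing, but for readers who want to experiment on a PC, a standard software baseline is OpenCV's HOG-plus-linear-SVM people detector. To be clear, this is a different technique than the block-statistics pipeline of Figure 6, shown here purely as an accessible starting point; the video file name is a placeholder.

```python
# Sketch: a software pedestrian-detection baseline using OpenCV's
# HOG + linear SVM people detector (not the FPGA/CPU pipeline of Figure 6).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("street.mp4")   # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people; each rect is (x, y, w, h) in pixel coordinates.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) == 27:            # Esc to quit
        break
```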
IV. PROCESSORS

As we've mentioned, vision algorithms typically require high compute performance. And, of course, embedded systems of all kinds are usually required to fit into tight cost and power consumption envelopes. In other digital-signal-processing application domains, such as digital wireless communications, chip designers achieve this challenging combination of high performance, low cost, and low power by using specialized coprocessors and accelerators to implement the most demanding processing tasks in the application. These coprocessors and accelerators are typically not programmable by the chip user, however. This trade-off is often acceptable in wireless applications, where standards mean that there is strong commonality among the algorithms used by different equipment designers.

In vision applications, however, there are no standards constraining the choice of algorithms. On the contrary, there are often many approaches to choose from to solve a particular vision problem. Therefore, vision algorithms are very diverse, and tend to change fairly rapidly over time. As a result, the use of non-programmable accelerators and coprocessors is less attractive for vision applications compared to applications like digital wireless and compression-centric consumer video equipment.

Achieving the combination of high performance, low cost, low power, and programmability is challenging. Special-purpose hardware typically achieves high performance at low cost, but with little programmability. General-purpose CPUs provide programmability, but with weak performance or poor cost- and energy-efficiency.
Demanding embedded vision applications most often use a combination of processing elements, which might include, for example:

- A general-purpose CPU for heuristics, complex decision-making, network access, user interface, storage management, and overall control
- A high-performance DSP-oriented processor for real-time, moderate-rate processing with moderately complex algorithms
- One or more highly parallel engines for pixel-rate processing with simple algorithms

While any processor can in theory be used for embedded vision, the most promising types today are:

- High-performance embedded CPU
- Application-specific standard product (ASSP) in combination with a CPU
- Graphics processing unit (GPU) with a CPU
- DSP processor with accelerator(s) and a CPU
- Mobile application processor
- Field programmable gate array (FPGA) with a CPU

In this section, we'll briefly introduce each of these processor types along with some of their key strengths and weaknesses for embedded vision applications.

A. High-performance embedded CPU

In many cases, embedded CPUs cannot provide enough performance to implement demanding vision algorithms, or cannot do so at acceptable price or power consumption levels. Often, memory bandwidth is a key performance bottleneck, since vision algorithms typically use large amounts of memory bandwidth and don't tend to repeatedly access the same data. The memory systems of embedded CPUs are not designed for these kinds of data flows. However, like most types of processors, embedded CPUs become more powerful over time, and in some cases can provide adequate performance.

And there are some compelling reasons to run vision algorithms on a CPU when possible. First, most embedded systems need a CPU for a variety of functions. If the required vision functionality can be implemented using that CPU, then the complexity of the system is reduced relative to a multiprocessor solution. In addition, most vision algorithms are initially developed on PCs using general-purpose CPUs and their associated software development tools. Similarities between PC CPUs and embedded CPUs (and their associated tools) mean that it is typically easier to create embedded implementations of vision algorithms on embedded CPUs than on other kinds of embedded vision processors. Embedded CPUs are also typically the easiest of these processors to use, due to their relatively straightforward architectures, sophisticated tools, and other application development infrastructure, such as operating systems. An example of an embedded CPU is the Intel Atom E660T.

B. Application-specific standard product (ASSP) in combination with a CPU

Application-specific standard products (ASSPs) are specialized, highly integrated chips tailored for specific applications or application sets. ASSPs may incorporate a CPU, or use a separate CPU chip. By virtue of specialization, ASSPs typically deliver superior cost- and energy-efficiency compared with other types of processing solutions. Among other techniques, ASSPs deliver this efficiency through the use of specialized coprocessors and accelerators. And, because ASSPs are by definition focused on a specific application, they are usually provided with extensive application software. The specialization that enables ASSPs to achieve strong efficiency, however, also leads to their key limitation: lack of flexibility.
An ASSP designed for one application is typically not suitable for another application, even one that is related to the target application. ASSPs use unique architectures, and this can make programming them more difficult than with other kinds of processors. Indeed, some ASSPs are not user-programmable. Another consideration is risk. ASSPs are often delivered by small suppliers, and this may increase the risk that there will be difficulty in supplying the chip, or in delivering successor products that enable system designers to upgrade their designs without having to start from scratch. An example of a vision-oriented ASSP is the PrimeSense PS1080-A2, used in the Microsoft Kinect.

C. Graphics processing unit (GPU) with a CPU

Graphics processing units (GPUs), intended mainly for 3-d graphics, are increasingly capable of being used for other functions, including vision applications. The GPUs used in personal computers today are explicitly intended to be programmable to perform functions other than 3-d graphics. Such GPUs are termed "general-purpose GPUs" or "GPGPUs." GPUs have massive parallel processing horsepower. They are ubiquitous in personal computers. GPU software development tools are readily and freely available, and getting started with GPGPU programming is not terribly complex. For these reasons, GPUs are often the parallel processing engines of first resort for computer vision algorithm developers who develop their algorithms on PCs, and then may need to accelerate execution of their algorithms for simulation or prototyping purposes. GPUs are tightly integrated with general-purpose CPUs, sometimes on the same chip. However, one of the limitations of GPU chips is the limited variety of CPUs with which they are currently integrated, and the limited number of CPU operating systems that support that integration.
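As a small illustration of this acceleration path, OpenCV's transparent API lets a PC-based prototype offload standard operations to a GPU (or other OpenCL device) simply by wrapping images in cv2.UMat. This is one convenient on-ramp among several (CUDA being another), and it assumes an OpenCL-capable device and driver are present:

```python
# Sketch: offloading image operations to the GPU via OpenCV's transparent
# API (OpenCL). If no OpenCL device is available, the same code silently
# runs on the CPU, which is convenient for prototyping.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
u_img = cv2.UMat(img)                  # may live in GPU memory

u_blur = cv2.GaussianBlur(u_img, (7, 7), 1.5)   # runs on GPU when possible
u_edges = cv2.Canny(u_blur, 50, 150)

edges = u_edges.get()                  # copy result back to host memory
print("OpenCL enabled:", cv2.ocl.haveOpenCL())
```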
Today there are low-cost, low-power GPUs designed for products like smart phones and tablets. However, these GPUs are generally not GPGPUs, and therefore using them for applications other than 3-d graphics is very challenging. An example of a GPGPU used in personal computers is the NVIDIA GT240.
D. DSP processor with accelerator(s) and a CPU

Digital signal processors ("DSP processors" or "DSPs") are microprocessors specialized for signal processing algorithms and applications. This specialization typically makes DSPs more efficient than general-purpose CPUs for the kinds of signal processing tasks that are at the heart of vision applications. In addition, DSPs are relatively mature and easy to use compared to other kinds of parallel processors. Unfortunately, while DSPs do deliver higher performance and efficiency than general-purpose CPUs on vision algorithms, they often fail to deliver sufficient performance for demanding algorithms. For this reason, DSPs are often supplemented with one or more coprocessors. A typical DSP chip for vision applications therefore comprises a CPU, a DSP, and multiple coprocessors. This heterogeneous combination can yield excellent performance and efficiency, but can also be difficult to program. Indeed, DSP vendors typically do not enable users to program the coprocessors; rather, the coprocessors run software function libraries developed by the chip supplier. An example of a DSP targeting video applications is the Texas Instruments DM8168.

E. Mobile application processor

A mobile application processor is a highly integrated system-on-chip, typically designed primarily for smart phones but used for other applications as well. Application processors typically comprise a high-performance CPU core and a constellation of specialized co-processors, which may include a DSP, a GPU, a video processing unit (VPU), a 2-d graphics processor, an image acquisition processor, etc. These chips are specifically designed for battery-powered applications, and therefore place a premium on energy efficiency. In addition, because of the growing importance of and activity surrounding smartphone and tablet applications, mobile application processors often have strong software development infrastructure, including low-cost development boards, Linux and Android ports, etc. However, as with the DSP processors discussed in the previous section, the specialized co-processors found in application processors are usually not user-programmable, which limits their utility for vision applications. An example of a mobile application processor is the Freescale i.MX53.

F. Field programmable gate array (FPGA) with a CPU

Field programmable gate arrays (FPGAs) are flexible logic chips that can be reconfigured at the gate and block levels. This flexibility enables the user to craft computation structures that are tailored to the application at hand. It also allows selection of I/O interfaces and on-chip peripherals matched to the application requirements. The ability to customize compute structures, coupled with the massive amount of resources available in modern FPGAs, yields high performance coupled with good cost- and energy-efficiency. However, using FPGAs is essentially a hardware design activity, rather than a software development activity. FPGA design is typically performed using hardware description languages (Verilog or VHDL) at the register transfer level (RTL), a very low level of abstraction. This makes FPGA design time-consuming and expensive compared to using the other types of processors discussed here. However, using FPGAs is getting easier, due to several factors.
First, so-called "IP block" libraries (libraries of reusable FPGA design components) are becoming increasingly capable. In some cases, these libraries directly address vision algorithms; in other cases, they enable supporting functionality, such as video I/O ports or line buffers. Second, FPGA suppliers and their partners increasingly offer reference designs: reusable system designs incorporating FPGAs and targeting specific applications. Third, high-level synthesis tools, which enable designers to implement vision and other algorithms in FPGAs using high-level languages, are increasingly effective. Relatively low-performance CPUs can be implemented by users in the FPGA, and in a few cases, high-performance CPUs are integrated into FPGAs by the manufacturer. An example FPGA that can be used for vision applications is the Xilinx Spartan-6 LX150T.

V. DEVELOPMENT AND TOOLS

Developing embedded vision systems is challenging. One consideration, already mentioned above, is that vision algorithms tend to be very computationally demanding. Squeezing them into low-cost, low-power processors typically requires significant optimization work, which in turn requires a deep understanding of the target processor architecture.

Another key consideration is that vision is a system-level problem. That is, success depends on numerous elements working together, besides the vision algorithms themselves. These include lighting, optics, image sensors, image pre-processing, and image storage sub-systems. Getting these diverse elements working together effectively and efficiently requires multi-disciplinary expertise. There are numerous algorithms available for vision functions, so in many cases it is not necessary to develop algorithms from scratch. But picking the best algorithm for the job, and ensuring that it meets application requirements, can be a large project in itself.

Today, there are many computer vision experts who know little about embedded systems, and many embedded system designers who know little about computer vision. Many projects die in the chasm between these groups. To help bridge this gap, BDTI recently founded the Embedded Vision Alliance [1], an industry partnership dedicated to providing SoC and embedded system engineers with the practical know-how they need to incorporate vision capabilities into their designs.

A. Personal Computers

The personal computer is both a blessing and a curse for embedded vision development. Most embedded vision systems, and virtually all vision algorithms, are initially developed on a personal computer. The PC is a fabulous platform for research and prototyping. It is inexpensive, ubiquitous, and easy to integrate with cameras and displays. In addition, PCs are endowed with extensive application development infrastructure, including basic software development tools, vision-specific software component libraries, domain-specific tools (such as MATLAB), and example applications. The GPUs found in most PCs can also be used to provide parallel processing acceleration for PC-based application prototypes or simulations.

However, the PC is not an ideal platform for implementing most embedded vision systems. Although some applications can be implemented on an embedded PC (a more compact, lower-power cousin to the standard PC), many cannot, due to cost, size, and power considerations. In addition, PCs lack sufficient performance for many real-time vision applications. And, unfortunately, many of the same tools and libraries that make it easy to develop vision algorithms and applications on the PC also make it difficult to create efficient embedded implementations. For example, vision libraries intended for algorithm development and prototyping often do not lend themselves to efficient embedded implementation.

B. OpenCV

OpenCV is a free, open-source computer vision software component library for personal computers, comprising over two thousand algorithms [3]. It was originally developed by Intel and is now maintained by Willow Garage. The OpenCV library, used along with Bradski and Kaehler's book, is a great way to quickly begin experimenting with computer vision. However, OpenCV is not a solution to all vision problems. Some OpenCV functions work better than others. And OpenCV is a library, not a standard, so there is no guarantee that it functions identically on different platforms. In its current form, OpenCV is not particularly well suited to embedded implementation. Ports of OpenCV to non-PC platforms have been made, and more are underway, but there's currently little coherence to these efforts.
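To give a flavor of how quickly one can begin experimenting, the short sketch below uses OpenCV to do simple frame-differencing motion detection of the kind a smart surveillance camera might build on. The camera index, threshold, and pixel-count trigger are illustrative assumptions.

```python
# Sketch: simple frame-differencing motion detection with OpenCV,
# illustrating how little code a first vision experiment requires.
import cv2

cap = cv2.VideoCapture(0)              # default camera
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)     # pixel-wise change since last frame
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(motion) > 500: # crude "activity detected" test
        print("motion detected")
    prev = gray
    cv2.imshow("motion", motion)
    if cv2.waitKey(1) == 27:           # Esc to quit
        break
```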
C. Some Promising Developments

While embedded vision development is challenging, some promising recent industry developments suggest that it is getting easier. For example, the Microsoft Kinect is becoming very popular for vision development. Soon after its release in late 2010, the API for the Kinect was reverse-engineered, enabling engineers to use the Kinect with hosts other than the Xbox 360 game console. The Kinect has been used with PCs and with embedded platforms such as the BeagleBoard.

The XIMEA Currera integrates an embedded PC in a camera. It's not suitable for low-cost, low-power applications, but can be a good fit for low-volume applications like manufacturing inspection.

Several embedded processor vendors have begun to recognize the magnitude of the opportunity for embedded vision, and are developing processors specifically targeting embedded vision applications. In addition, smart phones and tablets have the potential to become effective embedded vision platforms. Application software platforms are emerging for certain embedded vision applications, such as augmented reality and gesture-based UIs. Such software platforms simplify embedded vision application development by providing many of the utility functions commonly required by such applications.

VI. CONCLUSIONS

With embedded vision, we believe that the industry is entering a virtuous circle of the sort that has characterized many other digital signal processing application domains. Although there are few chips dedicated to embedded vision applications today, these applications are increasingly adopting high-performance, cost-effective processing chips developed for other applications, including DSPs, CPUs, FPGAs, and GPUs. As these chips continue to deliver more programmable performance per dollar and per watt, they will enable the creation of more high-volume embedded vision products. Those high-volume applications, in turn, will attract more attention from silicon providers, who will deliver even better performance, efficiency, and programmability.

ACKNOWLEDGMENTS

The author gratefully acknowledges the assistance of Shehrzad Qureshi in providing the information on lens distortion correction used in this paper.

REFERENCES

[1] The Embedded Vision Alliance (website coming in June).
[2] Marzat, Dumortier, and Ducrot, "Real-Time Dense and Accurate Parallel Optical Flow using CUDA."
[3] OpenCV; G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, O'Reilly, 2008.
[4] MATLAB/Octave: P.I. Corke, "Machine Vision Toolbox," IEEE Robotics and Automation Magazine, 12(4), pp. 16-25, November 2005; P.D. Kovesi, MATLAB and Octave Functions for Computer Vision and Image Processing, Centre for Exploration Targeting, School of Earth and Environment, The University of Western Australia.
[5] Visym (beta).
[6] Predator self-learning object tracking algorithm: Z. Kalal, K. Mikolajczyk, and J. Matas, "Forward-Backward Error: Automatic Detection of Tracking Failures," International Conference on Pattern Recognition, 2010.
[7] Vision on GPUs: GPU4vision project, TU Graz.
[8] Lens distortion correction: Luis Alvarez, Luis Gomez, and J. Rafael Sendra, "Algebraic Lens Distortion Model Estimation," Image Processing On Line, 2010. DOI: 10.5201/ipol.2010.ags-alde.


RADAR ANALYST WORKSTATION MODERN, USER-FRIENDLY RADAR TECHNOLOGY IN ERDAS IMAGINE RADAR ANALYST WORKSTATION MODERN, USER-FRIENDLY RADAR TECHNOLOGY IN ERDAS IMAGINE White Paper December 17, 2014 Contents Introduction... 3 IMAGINE Radar Mapping Suite... 3 The Radar Analyst Workstation...

More information

Projection Based HCI (Human Computer Interface) System using Image Processing

Projection Based HCI (Human Computer Interface) System using Image Processing GRD Journals- Global Research and Development Journal for Volume 1 Issue 5 April 2016 ISSN: 2455-5703 Projection Based HCI (Human Computer Interface) System using Image Processing Pankaj Dhome Sagar Dhakane

More information

A GENERIC ARCHITECTURE FOR SMART MULTI-STANDARD SOFTWARE DEFINED RADIO SYSTEMS

A GENERIC ARCHITECTURE FOR SMART MULTI-STANDARD SOFTWARE DEFINED RADIO SYSTEMS A GENERIC ARCHITECTURE FOR SMART MULTI-STANDARD SOFTWARE DEFINED RADIO SYSTEMS S.A. Bassam, M.M. Ebrahimi, A. Kwan, M. Helaoui, M.P. Aflaki, O. Hammi, M. Fattouche, and F.M. Ghannouchi iradio Laboratory,

More information

Figure 1 HDR image fusion example

Figure 1 HDR image fusion example TN-0903 Date: 10/06/09 Using image fusion to capture high-dynamic range (hdr) scenes High dynamic range (HDR) refers to the ability to distinguish details in scenes containing both very bright and relatively

More information

Job Description. Commitment: Must be available to work full-time hours, M-F for weeks beginning Summer of 2018.

Job Description. Commitment: Must be available to work full-time hours, M-F for weeks beginning Summer of 2018. Research Intern Director of Research We are seeking a summer intern to support the team to develop prototype 3D sensing systems based on state-of-the-art sensing technologies along with computer vision

More information

OMR Auto Grading System

OMR Auto Grading System OMR Auto Grading System Nithin T. nithint_11484@aitpune.edu.in Md Nasim mdnasim_11720@aitpune.edu.in T. Raj Shekhar t.rajshekhar_11684@aitpune.edu.in Omendra Singh Gautam omendrsinghgautam_11667@aitpune.edu.in

More information

Architecting Systems of the Future, page 1

Architecting Systems of the Future, page 1 Architecting Systems of the Future featuring Eric Werner interviewed by Suzanne Miller ---------------------------------------------------------------------------------------------Suzanne Miller: Welcome

More information

FACE RECOGNITION BY PIXEL INTENSITY

FACE RECOGNITION BY PIXEL INTENSITY FACE RECOGNITION BY PIXEL INTENSITY Preksha jain & Rishi gupta Computer Science & Engg. Semester-7 th All Saints College Of Technology, Gandhinagar Bhopal. Email Id-Priky0889@yahoo.com Abstract Face Recognition

More information

Making Vehicles Smarter and Safer with Diode Laser-Based 3D Sensing

Making Vehicles Smarter and Safer with Diode Laser-Based 3D Sensing Making Vehicles Smarter and Safer with Diode Laser-Based 3D Sensing www.lumentum.com White Paper There is tremendous development underway to improve vehicle safety through technologies like driver assistance

More information

Hardware-Software Co-Design Cosynthesis and Partitioning

Hardware-Software Co-Design Cosynthesis and Partitioning Hardware-Software Co-Design Cosynthesis and Partitioning EE8205: Embedded Computer Systems http://www.ee.ryerson.ca/~courses/ee8205/ Dr. Gul N. Khan http://www.ee.ryerson.ca/~gnkhan Electrical and Computer

More information

An Area Efficient Decomposed Approximate Multiplier for DCT Applications

An Area Efficient Decomposed Approximate Multiplier for DCT Applications An Area Efficient Decomposed Approximate Multiplier for DCT Applications K.Mohammed Rafi 1, M.P.Venkatesh 2 P.G. Student, Department of ECE, Shree Institute of Technical Education, Tirupati, India 1 Assistant

More information

Partner for Success Secure & Smart Future Home

Partner for Success Secure & Smart Future Home Partner for Success Secure & Smart Future Home Jiang Yanbing Director of Strategy and Market Development Dept. Infineon Technologies China Table of contents 1 About Infineon 2 Make Future Home Smart and

More information

MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS

MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS INFOTEH-JAHORINA Vol. 10, Ref. E-VI-11, p. 892-896, March 2011. MULTIPLE SENSORS LENSLETS FOR SECURE DOCUMENT SCANNERS Jelena Cvetković, Aleksej Makarov, Sasa Vujić, Vlatacom d.o.o. Beograd Abstract -

More information

Imaging with hyperspectral sensors: the right design for your application

Imaging with hyperspectral sensors: the right design for your application Imaging with hyperspectral sensors: the right design for your application Frederik Schönebeck Framos GmbH f.schoenebeck@framos.com June 29, 2017 Abstract In many vision applications the relevant information

More information

Hochperformante Inline-3D-Messung

Hochperformante Inline-3D-Messung Hochperformante Inline-3D-Messung mittels Lichtfeld Dipl.-Ing. Dorothea Heiss Deputy Head of Business Unit High Performance Image Processing Digital Safety & Security Department AIT Austrian Institute

More information

More specifically, I would like to talk about Gallium Nitride and related wide bandgap compound semiconductors.

More specifically, I would like to talk about Gallium Nitride and related wide bandgap compound semiconductors. Good morning everyone, I am Edgar Martinez, Program Manager for the Microsystems Technology Office. Today, it is my pleasure to dedicate the next few minutes talking to you about transformations in future

More information

Real-Time Face Detection and Tracking for High Resolution Smart Camera System

Real-Time Face Detection and Tracking for High Resolution Smart Camera System Digital Image Computing Techniques and Applications Real-Time Face Detection and Tracking for High Resolution Smart Camera System Y. M. Mustafah a,b, T. Shan a, A. W. Azman a,b, A. Bigdeli a, B. C. Lovell

More information

e2v Launches New Onyx 1.3M for Premium Performance in Low Light Conditions

e2v Launches New Onyx 1.3M for Premium Performance in Low Light Conditions e2v Launches New Onyx 1.3M for Premium Performance in Low Light Conditions e2v s Onyx family of image sensors is designed for the most demanding outdoor camera and industrial machine vision applications,

More information

Multi-core Platforms for

Multi-core Platforms for 20 JUNE 2011 Multi-core Platforms for Immersive-Audio Applications Course: Advanced Computer Architectures Teacher: Prof. Cristina Silvano Student: Silvio La Blasca 771338 Introduction on Immersive-Audio

More information

Ben Baker. Sponsored by:

Ben Baker. Sponsored by: Ben Baker Sponsored by: Background Agenda GPU Computing Digital Image Processing at FamilySearch Potential GPU based solutions Performance Testing Results Conclusions and Future Work 2 CPU vs. GPU Architecture

More information

Artificial intelligence, made simple. Written by: Dale Benton Produced by: Danielle Harris

Artificial intelligence, made simple. Written by: Dale Benton Produced by: Danielle Harris Artificial intelligence, made simple Written by: Dale Benton Produced by: Danielle Harris THE ARTIFICIAL INTELLIGENCE MARKET IS SET TO EXPLODE AND NVIDIA, ALONG WITH THE TECHNOLOGY ECOSYSTEM INCLUDING

More information

Image Enhancement using Hardware co-simulation for Biomedical Applications

Image Enhancement using Hardware co-simulation for Biomedical Applications Image Enhancement using Hardware co-simulation for Biomedical Applications Kalyani A. Dakre Dept. of Electronics and Telecommunications P.R. Pote (Patil) college of Engineering and, Management, Amravati,

More information

Wideband Spectral Measurement Using Time-Gated Acquisition Implemented on a User-Programmable FPGA

Wideband Spectral Measurement Using Time-Gated Acquisition Implemented on a User-Programmable FPGA Wideband Spectral Measurement Using Time-Gated Acquisition Implemented on a User-Programmable FPGA By Raajit Lall, Abhishek Rao, Sandeep Hari, and Vinay Kumar Spectral measurements for some of the Multiple

More information

A GENERAL SYSTEM DESIGN & IMPLEMENTATION OF SOFTWARE DEFINED RADIO SYSTEM

A GENERAL SYSTEM DESIGN & IMPLEMENTATION OF SOFTWARE DEFINED RADIO SYSTEM A GENERAL SYSTEM DESIGN & IMPLEMENTATION OF SOFTWARE DEFINED RADIO SYSTEM 1 J. H.VARDE, 2 N.B.GOHIL, 3 J.H.SHAH 1 Electronics & Communication Department, Gujarat Technological University, Ahmadabad, India

More information

Design of Mixed-Signal Microsystems in Nanometer CMOS

Design of Mixed-Signal Microsystems in Nanometer CMOS Design of Mixed-Signal Microsystems in Nanometer CMOS Carl Grace Lawrence Berkeley National Laboratory August 2, 2012 DOE BES Neutron and Photon Detector Workshop Introduction Common themes in emerging

More information