A Self-Driving Robot Using Deep Convolutional Neural Networks on Neuromorphic Hardware


Tiffany Hwu, Jacob Isbell, Nicolas Oros, and Jeffrey Krichmar
Department of Cognitive Sciences, University of California, Irvine, Irvine, California, USA
Northrop Grumman, Redondo Beach, California, USA
Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, USA
BrainChip LLC, Aliso Viejo, California, USA
Department of Computer Sciences, University of California, Irvine, Irvine, California, USA
jkrichma@uci.edu

arXiv: v1 [cs.NE] 4 Nov 2016

Abstract—Neuromorphic computing is a promising solution for reducing the size, weight and power of mobile embedded systems. In this paper, we introduce a realization of such a system by creating the first closed-loop, battery-powered communication system between an IBM TrueNorth NS1e and an autonomous Android-Based Robotics platform. Using this system, we constructed a dataset of path following behavior by manually driving the Android-Based Robotics platform along steep mountain trails and recording video frames from the camera mounted on the robot along with the corresponding motor commands. We used this dataset to train a deep convolutional neural network implemented on the TrueNorth NS1e. The NS1e, which was mounted on the robot and powered by the robot's battery, resulted in a self-driving robot that could successfully traverse a steep mountain path in real time. To our knowledge, this represents the first time the TrueNorth NS1e neuromorphic chip has been embedded on a mobile platform under closed-loop control.

I. INTRODUCTION

As the need for faster, more efficient computing continues to grow, the observed rate of improvement of computing speed shows signs of leveling off [1]. In response, researchers have been looking for new strategies to increase computing power. Neuromorphic hardware is a promising direction for computing, taking a brain-inspired approach to achieve orders of magnitude lower power than traditional von Neumann architectures [2], [3]. Mimicking the computational strategy of the brain, the hardware uses event-driven, massively parallel and distributed processing of information. As a result, the hardware has low size, weight, and power, making it ideal for mobile embedded systems.

In exploring the advantages of neuromorphic hardware, it is important to consider how this approach might be used to address existing needs and applications. One such application is autonomous driving [4]. In order for an autonomous mobile platform to perform effectively, it must be able to process large amounts of information simultaneously, extracting salient features from a stream of sensory data and making decisions about which motor actions to take [5]. In particular, the platform must be able to segment visual scenes into objects such as roads and pedestrians [4]. Deep convolutional networks (CNNs) [6] have proven very effective for many such tasks, including self-driving. For instance, Huval et al. used deep learning on a large dataset of highway driving to perform a variety of functions such as object and lane detection [7]. Recently, Bojarski et al. showed that tasks such as lane detection do not need to be explicitly trained [8]. In their DAVE-2 network, an end-to-end learning scheme was presented in which the network is simply trained to classify images from the car's cameras into steering commands learned from real human driving data. Intermediate tasks such as lane detection were automatically learned within the intermediate layers, saving the work of selecting these tasks by hand. Such networks are well suited to neuromorphic hardware due to the large amount of parallel processing involved.
In fact, many computer vision tasks have already been successfully transferred to the neuromorphic domain, such as handwritten digit recognition [9] and scene segmentation [10]. However, less work has been done embedding

the neuromorphic hardware on mobile platforms. An example includes NENGO simulations embedded on SpiNNaker boards controlling mobile robots [11], [12]. Addressing the challenges of physically connecting these components, as well as creating a data pipeline for communication between the platforms, is an open issue, but worth pursuing given the small size, weight and power of neuromorphic hardware. At the Telluride Neuromorphic Cognition Workshop 2016, we embedded the IBM TrueNorth NS1e [13] on the Android-Based Robotics platform [14] to create a self-driving robot that uses a deep CNN to travel autonomously along an outdoor mountain path. The result of our experiment is a robot that is able to use video frame data to steer along a road in real time with low-powered processing.

II. PLATFORMS

A. IBM TrueNorth

Fig. 1. A) Core connectivity on the TrueNorth. Each neuron on a core connects to every other neuron on the core, and can connect to other cores through input lines. B) The IBM NS1e board. Adapted from [15].

Fig. 2. Left: Side view of the CARLorado. A pan and tilt unit supports the Samsung Galaxy S5 smartphone, which is mounted on a Dagu Wild Thumper chassis. A plastic enclosure holds the IOIO-OTG microcontroller and RoboClaw motor controller. A velcro strip on top of the housing can attach any other small components. Top Right: Front view of the CARLorado. Three front-facing sonars can detect obstacles. Bottom Right: Close-up of the IOIO-OTG and motor controller.

The IBM TrueNorth (Figure 1) is a neuromorphic chip with a multicore array of programmable neurons. Within each core, there are 256 input lines connected to 256 neurons through a 256x256 synaptic crossbar array. Each neuron on a core is connected with every other neuron on the same core through the crossbar, and can communicate with neurons on other cores through their input lines. In our experiment, we used the IBM NS1e board, which contains 4096 cores, 1 million neurons, and 256 million synapses.
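As a rough illustration of the crossbar computation described above, the sketch below sums binary input spikes through a 256x256 trinary weight matrix and thresholds the result. This is a deliberately simplified integrate-and-fire update with made-up weights and threshold, not IBM's actual 23-parameter neuron model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS, N_NEURONS = 256, 256
# Trinary synaptic weights (-1, 0, +1), as on a TrueNorth crossbar.
crossbar = rng.integers(-1, 2, size=(N_INPUTS, N_NEURONS))

def core_tick(input_spikes, potentials, threshold=1):
    """One tick of a simplified integrate-and-fire core.

    input_spikes: binary vector of length N_INPUTS (one bit per input line).
    potentials:   membrane potentials carried over from previous ticks.
    Returns (output_spikes, updated_potentials).
    """
    potentials = potentials + input_spikes @ crossbar  # synaptic integration
    output_spikes = (potentials >= threshold).astype(int)
    potentials[output_spikes == 1] = 0  # reset neurons that fired
    return output_spikes, potentials

spikes_in = rng.integers(0, 2, size=N_INPUTS)
out, pots = core_tick(spikes_in, np.zeros(N_NEURONS))
```

Each neuron's output is binary, so only spike events (not analog values) need to be routed between cores, which is what makes the event-driven hardware efficient.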
An integrate-and-fire neuron model having 23 parameters was used, with trinary synaptic weights of -1, 0, and 1. As the TrueNorth has been used to run many types of deep convolutional networks, and can be powered by an external battery, it served as ideal hardware for this task [15], [16].

B. Android-Based Robotics

The Android-Based Robotics platform (Figure 2) was created at the University of California, Irvine, using entirely off-the-shelf commodity parts and controlled by an Android phone [14]. The robot used in the present experiment, the CARLorado, was constructed from a Dagu Wild-Thumper All-Terrain chassis that could easily travel through difficult outdoor terrain. An IOIO-OTG microcontroller (SparkFun Electronics) communicated through a Bluetooth connection with the Android phone (Samsung Galaxy S5). The phone provided extra sensors such as a built-in accelerometer, gyroscope, compass, and global positioning system (GPS). The IOIO-OTG controlled a pan and tilt unit that held the phone, a motor controller for the robot wheels, and ultrasonic sensors for detecting obstacles. Instructions for building the robot can be found at: jkrichma/abr/. A differential steering technique was used, moving the left and right sides of the robot at different speeds for turning. The modularity of the platform made it easy to add extra units such as the IBM TrueNorth. Software for controlling the robot was written in Java using Android Studio. With various support libraries for the IOIO-OTG, open-source computer vision libraries such as OpenCV, and sample Android-Based Robotics code, it was straightforward to develop intelligent controls.

III. METHODS AND RESULTS

A. Data Collection

First, we created datasets of first-person video footage of the robot and the motor commands issued to it as it was manually driven along a mountain trail in Telluride, Colorado (Figures 5 and 8, top).
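The differential steering scheme mentioned in Section II-B can be sketched as follows. The function name, base speed, and turn ratio are illustrative assumptions, not the actual CARLorado control code:

```python
def differential_drive(command, base_speed=0.5, turn_ratio=0.4):
    """Map a discrete steering command to (left, right) wheel speeds.

    Turning is achieved by driving the two sides of the robot at
    different speeds; going straight drives both sides equally.
    """
    if command == "left":
        return base_speed * turn_ratio, base_speed   # slow the left side
    elif command == "right":
        return base_speed, base_speed * turn_ratio   # slow the right side
    elif command == "forward":
        return base_speed, base_speed
    raise ValueError(f"unknown command: {command}")
```

With the defaults above, `differential_drive("left")` returns `(0.2, 0.5)`, so the left wheels run slower and the robot curves left.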
This was done by creating an app in Android Studio that was run on both a Samsung Galaxy S5 smartphone and a Samsung Nexus 7 tablet (Figure 3). The smartphone was mounted on the pan and tilt unit of the robot with the camera facing ahead. JPEG images captured by the camera of the smartphone were saved to an SD card

at 30 frames per second. The JPEGs had a resolution of 176 by 144 pixels. Through a Wi-Fi direct connection, the video frame data was streamed from the phone to a handheld tablet that controlled the robot. The tablet displayed controls for moving the robot forward and steering the robot left and right. These commands from the tablet (left, right, forward) were streamed to the smartphone via the Wi-Fi direct connection and saved on the smartphone as a text file. A total of 4 datasets were recorded on the same mountain trail, with each dataset recording a round trip of 0.5 km up and down a single trail segment. To account for different lighting conditions, we spread the recordings across two separate days, and on each day we performed one recording in the morning and one in the afternoon. In total we collected approximately 30 minutes of driving data.

Fig. 3. Data collection setup. Video from the smartphone mounted on the robot was sent to the tablet through a Wi-Fi direct connection. A human operator used two joysticks on the touchscreen of the tablet to issue motor commands, which were sent to the phone through the same connection. Video frames and commands were saved to the SD card on the smartphone.

By matching the time stamps of motor commands to video images, we were able to determine which commands corresponded to which images. Images that were not associated with a left, right, or forward movement, such as stopping, were excluded. Due to lack of time, only the first day of data collection was used in actual training.

B. EEDN Framework

Fig. 4. Convolution of layers in a CNN on TrueNorth. Neurons in each layer are arranged in three dimensions, which can be convolved using a filter of weights. Convolution occurs along the first two dimensions, and the third dimension represents different features. This allows the convolution to be divided along the feature dimension into groups (indicated by blue and yellow colors) that can be computed separately on different cores. Adapted from [15].

Fig. 5. The CNN classified images into three classes of motor output: turning left, moving forward, and turning right. Training accuracy was above 90 percent.

Fig. 6. Physical connection of the TrueNorth NS1e and CARLorado. The NS1e is attached to the top of the electronics housing using velcro. The NS1e is powered by running connections from the motor controller within the housing. The motor controller itself is powered by a Ni-MH battery attached to the bottom of the robot chassis.

We used the dataset to train a deep convolutional neural network using the Energy-Efficient Deep Neuromorphic Network (EEDN) framework, a network structure that runs efficiently on the TrueNorth [15]. In summary, a traditional CNN is transferred to the neuromorphic domain by connecting the neurons on the TrueNorth with the same connectivity as the original CNN. Input values to the original CNN are translated into input firing patterns on EEDN, and the resulting firing rates of each neuron correspond to the values seen in the original CNN. To distribute a convolutional operation among cores of the TrueNorth, the layers are divided along the feature dimension into groups (Figure 4). When a neuron targets multiple core inputs, exact duplicates of the neuron and its synaptic weights are created, either on the same core or a different core. The response of each neuron is the binary thresholded sum of synaptic input, in which the trinary weight values are determined by different combinations of two input lines. A more complete explanation of the EEDN flow and the structure of the convolutional network (1-chip version) can be found in [15].
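The time-stamp matching used to build the training set (Section III-A) can be sketched as follows: pair each frame with the most recent motor command, and drop frames whose command is not left, right, or forward. Timestamps and field names here are illustrative, not the actual log format:

```python
import bisect

def label_frames(frame_times, commands):
    """Pair each video frame with the most recent motor command.

    frame_times: sorted list of frame timestamps (ms).
    commands:    sorted list of (timestamp_ms, command) tuples.
    Frames whose governing command is not left/right/forward
    (e.g. stopping) are excluded, as in the paper.
    """
    cmd_times = [t for t, _ in commands]
    labeled = []
    for ft in frame_times:
        i = bisect.bisect_right(cmd_times, ft) - 1  # last command at or before ft
        if i < 0:
            continue  # no command issued yet
        cmd = commands[i][1]
        if cmd in ("left", "right", "forward"):
            labeled.append((ft, cmd))
    return labeled

pairs = label_frames([100, 200, 300, 400],
                     [(90, "forward"), (250, "stop"), (350, "left")])
# frames at 100 and 200 -> "forward"; 300 is dropped ("stop"); 400 -> "left"
```

A split such as the paper's every-fifth-frame test set can then be taken directly from the resulting labeled sequence.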

Fig. 7. Data pipeline for running the CNN. Training is done separately using the MatConvNet package on Titan X GPUs. A Wi-Fi connection between the Android Galaxy S5 and the IBM NS1e transmits spiking data back and forth.

Fig. 8. Mountain trail in Telluride, Colorado. Above: Google Satellite image of the trail (highlighted), Imagery © 2016 Google. Below: Testing CNN performance.

The video frames were preprocessed by down-sampling them to a resolution of 44 by 36 pixels and separating them into red, green, and blue channels. The output is a single layer of three neuron populations, corresponding to the three classes of turning left, going straight, or turning right, as seen in Figure 5. Using the MatConvNet package, a Matlab toolbox for implementing convolutional neural networks, the network was trained to classify images into motor commands. For instance, if the image showed the road to be more towards the left of center, the CNN would learn the human-trained command of steering to the left. To test accuracy, the dataset was split into train and test sets by using every fifth frame as a test frame (in total 20 percent of the dataset). We achieved an accuracy of over 90 percent, which took 10K iterations and a few hours to train. Training was performed separately from the TrueNorth chip, producing trinary synaptic weights (-1, 0, 1) that could be used interchangeably in a traditional CNN or EEDN.

C. Data Pipeline

With the methods used in [15], the weights of the network were transferred to the TrueNorth NS1e. The CNN was able to run on the TrueNorth by feeding input from the camera on the Android Galaxy S5 to the TrueNorth using a TCP/IP connection. In order to achieve this, the phone had to replicate the preprocessing used when training the network. The preprocessing on the phone was achieved by using the Android OpenCV scaling function to downsample the images. Then, the images were separated into red, green, and blue channels. Next, the filter kernels from the first layer of the CNN were pulled from the EEDN training output and applied to the image using a 2D convolution function from the Android OpenCV library. The result of the convolution was thresholded into

binary spiking format, such that any neuron with an activity greater than zero was set to spike. The spiking input to the TrueNorth was sent in XYF format, where X, Y, and F are the three dimensions that describe the identity of a spiking neuron within a layer. At each tick of the TrueNorth NS1e, a frame was fed into the input layer by sending the XYF coordinates of all neurons that spiked for that frame. A detailed diagram of the pipeline is found in Figure 7. Output from the TrueNorth NS1e was sent back to the smartphone through the TCP/IP connection in the form of a class histogram, which indicated the firing activity of the output neurons. The smartphone could then calculate which output neuron was the most active and issue the corresponding motor command to the robot.

D. Physical Connection of Platforms

The TrueNorth was powered by connecting the robot's battery terminals from the motor controller to a two-pin battery connection on the NS1e board. It was then secured with velcro to the top of the housing for the IOIO and motor controller. A picture of the setup is seen in Figure 6. The robot, microcontroller, motor controller, servos, and NS1e were all powered by a single Duratrax NiMH Onyx 7.2V 5000mAh battery.

E. Testing

With this wireless, battery-powered setup, the trained CNN was able to successfully drive the robot on the mountain trail (Figure 8). A wireless hotspot was necessary to create a TCP connection between the TrueNorth NS1e and the Android phone. We placed the robot on the same section of the trail used for training. The robot steered according to the class histograms received from the TrueNorth output, which provided total firing counts for each of the three output neuron populations. Steering was done by using the histogram to determine which output population fired the most, and steering in that direction. As a result, the robot stayed near the center of the trail, steering away from green brush on both sides of the trail.
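Two steps of the pipeline described above, encoding thresholded activations as XYF spike coordinates and decoding the returned class histogram into a steering command, can be sketched as follows. The input dimensions match the paper's 44 by 36 by 3 preprocessed frames, but the function names and class ordering are illustrative assumptions:

```python
import numpy as np

CLASSES = ["left", "forward", "right"]  # assumed ordering of output populations

def to_xyf(activations):
    """Binarize layer activations and list spiking neurons as (X, Y, F) tuples.

    activations: array of shape (X, Y, F), e.g. first-layer feature maps.
    Any neuron with activity greater than zero spikes, as in the paper.
    """
    xs, ys, fs = np.nonzero(activations > 0)
    return list(zip(xs.tolist(), ys.tolist(), fs.tolist()))

def decode_histogram(class_histogram):
    """Pick the motor command whose output population fired the most."""
    return CLASSES[int(np.argmax(class_histogram))]

acts = np.zeros((44, 36, 3))
acts[0, 1, 2] = 0.7  # a single active neuron
spikes = to_xyf(acts)          # [(0, 1, 2)]
command = decode_histogram([12, 40, 3])  # "forward"
```

In the real system, the XYF list for each frame would be sent over the TCP/IP connection once per TrueNorth tick, and the decoded command would drive the differential steering.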
At some points, the robot did travel off the trail and needed to be manually redirected back towards the center of the trail. The robot drove approximately 0.5 km uphill, and then returned 0.5 km downhill with minimal intervention. It should be noted that there was a steep dropoff on the south side of the trail. Therefore, extra care was taken to make sure the robot did not tumble down the mountainside. A video of the path following performance is available online.

IV. DISCUSSION

To the best of our knowledge, the present setup represents the first time the TrueNorth NS1e has been embedded on a mobile platform under closed-loop control. It demonstrated that a low-power neuromorphic chip could communicate with a smartphone in an autonomous system. Furthermore, it showed that a CNN using the EEDN framework was sufficient to achieve a self-driving application. Moreover, this complete system ran in real time and was powered by a single off-the-shelf hobby-grade battery, demonstrating the power efficiency of the TrueNorth NS1e. An extension of this work would require better quantification of the robot's performance. This could be achieved by tracking the number of times the robot had to be manually redirected, or by comparing the CNN classifier accuracy on the training set of images with the classifier accuracy on the images captured in real time. Increasing the amount of training data would likely increase the classifier accuracy, since only 15 minutes of data were used for training, as compared to other self-driving CNNs [7], [8], which have used several days or even weeks of driving data. Our success was due in part to the simplicity of the landscape, with an obvious red hue to the dirt road and a bold green hue for the bordering areas. It would therefore be useful to test the network in more complex settings.
Additionally, while the main purpose of the project was to demonstrate a practical integration of neuromorphic and non-neuromorphic hardware, it would also be useful to calculate the power savings of running the CNN computations on neuromorphic hardware instead of directly on the smartphone.

V. CONCLUSION

In this study, we have demonstrated a novel closed-loop system between a robotic platform and a neuromorphic chip, operating in a rugged outdoor environment. We have shown the advantages of integrating neuromorphic hardware with popular machine learning methods such as deep convolutional neural networks. We have shown that neuromorphic hardware can be integrated with smartphone technology and off-the-shelf components, resulting in a complete autonomous system. The present setup is one of the first demonstrations of using neuromorphic hardware in an autonomous, embedded system.

ACKNOWLEDGMENT

The authors would like to thank Andrew Cassidy and Rodrigo Alvarez-Icaza of IBM for their support. This work was supported by a National Science Foundation Award and Northrop Grumman Aerospace Systems. We also would like to thank the Telluride Neuromorphic Cognition Engineering Workshop, the Institute of Neuromorphic Engineering, and their National Science Foundation, DoD and Industrial Sponsors.

REFERENCES

[1] J. Backus, "Can programming be liberated from the von Neumann style?: a functional style and its algebra of programs," Communications of the ACM, vol. 21, no. 8, 1978.
[2] C. Mead, "Neuromorphic electronic systems," Proceedings of the IEEE, vol. 78, no. 10, 1990.
[3] G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. van Schaik, R. Etienne-Cummings, T. Delbruck, S.-C. Liu, P. Dudek, P. Häfliger, S. Renaud et al., "Neuromorphic silicon neuron circuits," Frontiers in Neuroscience, vol. 5, p. 73, 2011.
[4] S. Thrun, "Toward robotic cars," Communications of the ACM, vol. 53, no. 4, 2010.

[5] J. Levinson, J. Askeland, J. Becker, J. Dolson, D. Held, S. Kammel, J. Z. Kolter, D. Langer, O. Pink, V. Pratt et al., "Towards fully autonomous driving: Systems and algorithms," in Intelligent Vehicles Symposium (IV), 2011 IEEE. IEEE, 2011.
[6] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4.
[7] B. Huval, T. Wang, S. Tandon, J. Kiske, W. Song, J. Pazhayampallil, M. Andriluka, P. Rajpurkar, T. Migimatsu, R. Cheng-Yue et al., "An empirical evaluation of deep learning on highway driving," arXiv preprint.
[8] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang et al., "End to end learning for self-driving cars," arXiv preprint.
[9] J. H. Lee, T. Delbruck, and M. Pfeiffer, "Training deep spiking neural networks using backpropagation," arXiv preprint.
[10] Y. Cao, Y. Chen, and D. Khosla, "Spiking deep convolutional neural networks for energy-efficient object recognition," International Journal of Computer Vision, vol. 113, no. 1.
[11] J. Conradt, F. Galluppi, and T. C. Stewart, "Trainable sensorimotor mapping in a neuromorphic robot," Robotics and Autonomous Systems, vol. 71.
[12] F. Galluppi, C. Denk, M. C. Meiner, T. C. Stewart, L. A. Plana, C. Eliasmith, S. Furber, and J. Conradt, "Event-based neural computing on an autonomous mobile platform," in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014.
[13] P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science, vol. 345, no. 6197, 2014.
[14] N. Oros and J. L. Krichmar, "Smartphone based robotics: Powerful, flexible and inexpensive robots for hobbyists, educators, students and researchers," Center for Embedded Computer Systems, University of California, Irvine, Irvine, California, Tech. Rep.
[15] S. K. Esser, P. A. Merolla, J. V. Arthur, A. S. Cassidy, R. Appuswamy, A. Andreopoulos, D. J. Berg, J. L. McKinstry, T. Melano, D. R. Barch, C. di Nolfo, P. Datta, A. Amir, B. Taba, M. D. Flickner, and D. S. Modha, "Convolutional networks for fast, energy-efficient neuromorphic computing," Proceedings of the National Academy of Sciences, vol. 113, no. 41, 2016.
[16] F. Akopyan, "Design and tool flow of IBM's TrueNorth: an ultra-low power programmable neurosynaptic chip with 1 million neurons," in Proceedings of the 2016 International Symposium on Physical Design. ACM, 2016.


More information

Night-time pedestrian detection via Neuromorphic approach

Night-time pedestrian detection via Neuromorphic approach Night-time pedestrian detection via Neuromorphic approach WOO JOON HAN, IL SONG HAN Graduate School for Green Transportation Korea Advanced Institute of Science and Technology 335 Gwahak-ro, Yuseong-gu,

More information

Creating a 3D environment map from 2D camera images in robotics

Creating a 3D environment map from 2D camera images in robotics Creating a 3D environment map from 2D camera images in robotics J.P. Niemantsverdriet jelle@niemantsverdriet.nl 4th June 2003 Timorstraat 6A 9715 LE Groningen student number: 0919462 internal advisor:

More information

IoT. Indoor Positioning with BLE Beacons. Author: Uday Agarwal

IoT. Indoor Positioning with BLE Beacons. Author: Uday Agarwal IoT Indoor Positioning with BLE Beacons Author: Uday Agarwal Contents Introduction 1 Bluetooth Low Energy and RSSI 2 Factors Affecting RSSI 3 Distance Calculation 4 Approach to Indoor Positioning 5 Zone

More information

GPU ACCELERATED DEEP LEARNING WITH CUDNN

GPU ACCELERATED DEEP LEARNING WITH CUDNN GPU ACCELERATED DEEP LEARNING WITH CUDNN Larry Brown Ph.D. March 2015 AGENDA 1 Introducing cudnn and GPUs 2 Deep Learning Context 3 cudnn V2 4 Using cudnn 2 Introducing cudnn and GPUs 3 HOW GPU ACCELERATION

More information

2D Floor-Mapping Car

2D Floor-Mapping Car CDA 4630 Embedded Systems Final Report Group 4: Camilo Moreno, Ahmed Awada ------------------------------------------------------------------------------------------------------------------------------------------

More information

Embedding Artificial Intelligence into Our Lives

Embedding Artificial Intelligence into Our Lives Embedding Artificial Intelligence into Our Lives Michael Thompson, Synopsys D&R IP-SOC DAYS Santa Clara April 2018 1 Agenda Introduction What AI is and is Not Where AI is being used Rapid Advance of AI

More information

* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged

* Intelli Robotic Wheel Chair for Specialty Operations & Physically Challenged ADVANCED ROBOTICS SOLUTIONS * Intelli Mobile Robot for Multi Specialty Operations * Advanced Robotic Pick and Place Arm and Hand System * Automatic Color Sensing Robot using PC * AI Based Image Capturing

More information

SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY

SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY SMARTPHONE SENSOR BASED GESTURE RECOGNITION LIBRARY Sidhesh Badrinarayan 1, Saurabh Abhale 2 1,2 Department of Information Technology, Pune Institute of Computer Technology, Pune, India ABSTRACT: Gestures

More information

Biologically Inspired Computation

Biologically Inspired Computation Biologically Inspired Computation Deep Learning & Convolutional Neural Networks Joe Marino biologically inspired computation biological intelligence flexible capable of detecting/ executing/reasoning about

More information

III. MATERIAL AND COMPONENTS USED

III. MATERIAL AND COMPONENTS USED Prototype Development of a Smartphone- Controlled Robotic Vehicle with Pick- Place Capability Dheeraj Sharma Electronics and communication department Gian Jyoti Institute Of Engineering And Technology,

More information

A Vehicular Visual Tracking System Incorporating Global Positioning System

A Vehicular Visual Tracking System Incorporating Global Positioning System A Vehicular Visual Tracking System Incorporating Global Positioning System Hsien-Chou Liao and Yu-Shiang Wang Abstract Surveillance system is widely used in the traffic monitoring. The deployment of cameras

More information

Driving Using End-to-End Deep Learning

Driving Using End-to-End Deep Learning Driving Using End-to-End Deep Learning Farzain Majeed farza@knights.ucf.edu Kishan Athrey kishan.athrey@knights.ucf.edu Dr. Mubarak Shah shah@crcv.ucf.edu Abstract This work explores the problem of autonomously

More information

A Simple Design of Clean Robot

A Simple Design of Clean Robot Journal of Computing and Electronic Information Management ISSN: 2413-1660 A Simple Design of Clean Robot Huichao Wu 1, a, Daofang Chen 2, Yunpeng Yin 3 1 College of Optoelectronic Engineering, Chongqing

More information

We Know Where You Are : Indoor WiFi Localization Using Neural Networks Tong Mu, Tori Fujinami, Saleil Bhat

We Know Where You Are : Indoor WiFi Localization Using Neural Networks Tong Mu, Tori Fujinami, Saleil Bhat We Know Where You Are : Indoor WiFi Localization Using Neural Networks Tong Mu, Tori Fujinami, Saleil Bhat Abstract: In this project, a neural network was trained to predict the location of a WiFi transmitter

More information

Hand Gesture Recognition by Means of Region- Based Convolutional Neural Networks

Hand Gesture Recognition by Means of Region- Based Convolutional Neural Networks Contemporary Engineering Sciences, Vol. 10, 2017, no. 27, 1329-1342 HIKARI Ltd, www.m-hikari.com https://doi.org/10.12988/ces.2017.710154 Hand Gesture Recognition by Means of Region- Based Convolutional

More information

The Jigsaw Continuous Sensing Engine for Mobile Phone Applications!

The Jigsaw Continuous Sensing Engine for Mobile Phone Applications! The Jigsaw Continuous Sensing Engine for Mobile Phone Applications! Hong Lu, Jun Yang, Zhigang Liu, Nicholas D. Lane, Tanzeem Choudhury, Andrew T. Campbell" CS Department Dartmouth College Nokia Research

More information

Project Name: SpyBot

Project Name: SpyBot EEL 4924 Electrical Engineering Design (Senior Design) Final Report April 23, 2013 Project Name: SpyBot Team Members: Name: Josh Kurland Name: Parker Karaus Email: joshkrlnd@gmail.com Email: pbkaraus@ufl.edu

More information

Wheeled Mobile Robot Kuzma I

Wheeled Mobile Robot Kuzma I Contemporary Engineering Sciences, Vol. 7, 2014, no. 18, 895-899 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ces.2014.47102 Wheeled Mobile Robot Kuzma I Andrey Sheka 1, 2 1) Department of Intelligent

More information

Autonomous Obstacle Avoiding and Path Following Rover

Autonomous Obstacle Avoiding and Path Following Rover Volume 114 No. 9 2017, 271-281 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu Autonomous Obstacle Avoiding and Path Following Rover ijpam.eu Sandeep Polina

More information

CROSS-LAYER FEATURES IN CONVOLUTIONAL NEURAL NETWORKS FOR GENERIC CLASSIFICATION TASKS. Kuan-Chuan Peng and Tsuhan Chen

CROSS-LAYER FEATURES IN CONVOLUTIONAL NEURAL NETWORKS FOR GENERIC CLASSIFICATION TASKS. Kuan-Chuan Peng and Tsuhan Chen CROSS-LAYER FEATURES IN CONVOLUTIONAL NEURAL NETWORKS FOR GENERIC CLASSIFICATION TASKS Kuan-Chuan Peng and Tsuhan Chen Cornell University School of Electrical and Computer Engineering Ithaca, NY 14850

More information

A Vehicular Visual Tracking System Incorporating Global Positioning System

A Vehicular Visual Tracking System Incorporating Global Positioning System A Vehicular Visual Tracking System Incorporating Global Positioning System Hsien-Chou Liao and Yu-Shiang Wang Abstract Surveillance system is widely used in the traffic monitoring. The deployment of cameras

More information

ECE 599/692 Deep Learning Lecture 19 Beyond BP and CNN

ECE 599/692 Deep Learning Lecture 19 Beyond BP and CNN ECE 599/692 Deep Learning Lecture 19 Beyond BP and CNN Hairong Qi, Gonzalez Family Professor Electrical Engineering and Computer Science University of Tennessee, Knoxville http://www.eecs.utk.edu/faculty/qi

More information

Cedarville University Little Blue

Cedarville University Little Blue Cedarville University Little Blue IGVC Robot Design Report June 2004 Team Members: Silas Gibbs Kenny Keslar Tim Linden Jonathan Struebel Faculty Advisor: Dr. Clint Kohl Table of Contents 1. Introduction...

More information

Semantic Segmentation on Resource Constrained Devices

Semantic Segmentation on Resource Constrained Devices Semantic Segmentation on Resource Constrained Devices Sachin Mehta University of Washington, Seattle In collaboration with Mohammad Rastegari, Anat Caspi, Linda Shapiro, and Hannaneh Hajishirzi Project

More information

Semi-Autonomous Parking for Enhanced Safety and Efficiency

Semi-Autonomous Parking for Enhanced Safety and Efficiency Technical Report 105 Semi-Autonomous Parking for Enhanced Safety and Efficiency Sriram Vishwanath WNCG June 2017 Data-Supported Transportation Operations & Planning Center (D-STOP) A Tier 1 USDOT University

More information

Implementation of STDP in Neuromorphic Analog VLSI

Implementation of STDP in Neuromorphic Analog VLSI Implementation of STDP in Neuromorphic Analog VLSI Chul Kim chk079@eng.ucsd.edu Shangzhong Li shl198@eng.ucsd.edu Department of Bioengineering University of California San Diego La Jolla, CA 92093 Abstract

More information

A software video stabilization system for automotive oriented applications

A software video stabilization system for automotive oriented applications A software video stabilization system for automotive oriented applications A. Broggi, P. Grisleri Dipartimento di Ingegneria dellinformazione Universita degli studi di Parma 43100 Parma, Italy Email: {broggi,

More information

Voice Command Based Robotic Vehicle Control

Voice Command Based Robotic Vehicle Control Voice Command Based Robotic Vehicle Control P R Bhole 1, N L Lokhande 2, Manoj L Patel 3, V D Rathod 4, P R Mahajan 5 1, 2, 3, 4, 5 Department of Electronics & Telecommunication, R C Patel Institute of

More information

Efficient Construction of SIFT Multi-Scale Image Pyramids for Embedded Robot Vision

Efficient Construction of SIFT Multi-Scale Image Pyramids for Embedded Robot Vision Efficient Construction of SIFT Multi-Scale Image Pyramids for Embedded Robot Vision Peter Andreas Entschev and Hugo Vieira Neto Graduate School of Electrical Engineering and Applied Computer Science Federal

More information

KÜNSTLICHE INTELLIGENZ JOBKILLER VON MORGEN?

KÜNSTLICHE INTELLIGENZ JOBKILLER VON MORGEN? KÜNSTLICHE INTELLIGENZ JOBKILLER VON MORGEN? Marc Stampfli https://www.linkedin.com/in/marcstampfli/ https://twitter.com/marc_stampfli E-Mail: mstampfli@nvidia.com INTELLIGENT ROBOTS AND SMART MACHINES

More information

fast blur removal for wearable QR code scanners

fast blur removal for wearable QR code scanners fast blur removal for wearable QR code scanners Gábor Sörös, Stephan Semmler, Luc Humair, Otmar Hilliges ISWC 2015, Osaka, Japan traditional barcode scanning next generation barcode scanning ubiquitous

More information

Senior Design I. Fast Acquisition and Real-time Tracking Vehicle. University of Central Florida

Senior Design I. Fast Acquisition and Real-time Tracking Vehicle. University of Central Florida Senior Design I Fast Acquisition and Real-time Tracking Vehicle University of Central Florida College of Engineering Department of Electrical Engineering Inventors: Seth Rhodes Undergraduate B.S.E.E. Houman

More information

SenseMaker IST Martin McGinnity University of Ulster Neuro-IT, Bonn, June 2004 SenseMaker IST Neuro-IT workshop June 2004 Page 1

SenseMaker IST Martin McGinnity University of Ulster Neuro-IT, Bonn, June 2004 SenseMaker IST Neuro-IT workshop June 2004 Page 1 SenseMaker IST2001-34712 Martin McGinnity University of Ulster Neuro-IT, Bonn, June 2004 Page 1 Project Objectives To design and implement an intelligent computational system, drawing inspiration from

More information

An IoT Based Real-Time Environmental Monitoring System Using Arduino and Cloud Service

An IoT Based Real-Time Environmental Monitoring System Using Arduino and Cloud Service Engineering, Technology & Applied Science Research Vol. 8, No. 4, 2018, 3238-3242 3238 An IoT Based Real-Time Environmental Monitoring System Using Arduino and Cloud Service Saima Zafar Emerging Sciences,

More information

GPU Computing for Cognitive Robotics

GPU Computing for Cognitive Robotics GPU Computing for Cognitive Robotics Martin Peniak, Davide Marocco, Angelo Cangelosi GPU Technology Conference, San Jose, California, 25 March, 2014 Acknowledgements This study was financed by: EU Integrating

More information

Multiband NFC for High-Throughput Wireless Computer Vision Sensor Network

Multiband NFC for High-Throughput Wireless Computer Vision Sensor Network Multiband NFC for High-Throughput Wireless Computer Vision Sensor Network Fei Y. Li, Jason Y. Du 09212020027@fudan.edu.cn Vision sensors lie in the heart of computer vision. In many computer vision applications,

More information

University of Toronto. Companion Robot Security. ECE1778 Winter Wei Hao Chang Apper Alexander Hong Programmer

University of Toronto. Companion Robot Security. ECE1778 Winter Wei Hao Chang Apper Alexander Hong Programmer University of Toronto Companion ECE1778 Winter 2015 Creative Applications for Mobile Devices Wei Hao Chang Apper Alexander Hong Programmer April 9, 2015 Contents 1 Introduction 3 1.1 Problem......................................

More information

Face Detection System on Ada boost Algorithm Using Haar Classifiers

Face Detection System on Ada boost Algorithm Using Haar Classifiers Vol.2, Issue.6, Nov-Dec. 2012 pp-3996-4000 ISSN: 2249-6645 Face Detection System on Ada boost Algorithm Using Haar Classifiers M. Gopi Krishna, A. Srinivasulu, Prof (Dr.) T.K.Basak 1, 2 Department of Electronics

More information

Smart License Plate Recognition Using Optical Character Recognition Based on the Multicopter

Smart License Plate Recognition Using Optical Character Recognition Based on the Multicopter Smart License Plate Recognition Using Optical Character Recognition Based on the Multicopter Sanjaa Bold Department of Computer Hardware and Networking. University of the humanities Ulaanbaatar, Mongolia

More information

ADAS COMPUTER VISION AND AUGMENTED REALITY SOLUTION

ADAS COMPUTER VISION AND AUGMENTED REALITY SOLUTION ENGINEERING ENERGY TELECOM TRAVEL AND AVIATION SOFTWARE FINANCIAL SERVICES ADAS COMPUTER VISION AND AUGMENTED REALITY SOLUTION Sergii Bykov, Technical Lead TECHNOLOGY AUTOMOTIVE Product Vision Road To

More information

VOICE CONTROLLED ROBOT WITH REAL TIME BARRIER DETECTION AND AVERTING

VOICE CONTROLLED ROBOT WITH REAL TIME BARRIER DETECTION AND AVERTING VOICE CONTROLLED ROBOT WITH REAL TIME BARRIER DETECTION AND AVERTING P.NARENDRA ILAYA PALLAVAN 1, S.HARISH 2, C.DHACHINAMOORTHI 3 1Assistant Professor, EIE Department, Bannari Amman Institute of Technology,

More information

Team Autono-Mo. Jacobia. Department of Computer Science and Engineering The University of Texas at Arlington

Team Autono-Mo. Jacobia. Department of Computer Science and Engineering The University of Texas at Arlington Department of Computer Science and Engineering The University of Texas at Arlington Team Autono-Mo Jacobia Architecture Design Specification Team Members: Bill Butts Darius Salemizadeh Lance Storey Yunesh

More information

Perspectives on Neuromorphic Computing

Perspectives on Neuromorphic Computing Perspectives on Neuromorphic Computing Todd Hylton Brain Corporation hylton@braincorporation.com ORNL Neuromorphic Computing Workshop June 29, 2016 Outline Retrospective SyNAPSE Perspective Neuromorphic

More information

SPTF: Smart Photo-Tagging Framework on Smart Phones

SPTF: Smart Photo-Tagging Framework on Smart Phones , pp.123-132 http://dx.doi.org/10.14257/ijmue.2014.9.9.14 SPTF: Smart Photo-Tagging Framework on Smart Phones Hao Xu 1 and Hong-Ning Dai 2* and Walter Hon-Wai Lau 2 1 School of Computer Science and Engineering,

More information

Modern Robotics with OpenCV. Widodo Budiharto

Modern Robotics with OpenCV. Widodo Budiharto Modern Robotics with OpenCV Widodo Budiharto Science Publishing Group 548 Fashion Avenue New York, NY 10018 Published by Science Publishing Group 2014 Copyright Widodo Budiharto 2014 All rights reserved.

More information

Introduction to Mobile Robotics Welcome

Introduction to Mobile Robotics Welcome Introduction to Mobile Robotics Welcome Wolfram Burgard, Michael Ruhnke, Bastian Steder 1 Today This course Robotics in the past and today 2 Organization Wed 14:00 16:00 Fr 14:00 15:00 lectures, discussions

More information

Integrated Driving Aware System in the Real-World: Sensing, Computing and Feedback

Integrated Driving Aware System in the Real-World: Sensing, Computing and Feedback Integrated Driving Aware System in the Real-World: Sensing, Computing and Feedback Jung Wook Park HCI Institute Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA, USA, 15213 jungwoop@andrew.cmu.edu

More information

Automated Planetary Terrain Mapping of Mars Using Image Pattern Recognition

Automated Planetary Terrain Mapping of Mars Using Image Pattern Recognition Automated Planetary Terrain Mapping of Mars Using Image Pattern Recognition Design Document Version 2.0 Team Strata: Sean Baquiro Matthew Enright Jorge Felix Tsosie Schneider 2 Table of Contents 1 Introduction.3

More information

NUST FALCONS. Team Description for RoboCup Small Size League, 2011

NUST FALCONS. Team Description for RoboCup Small Size League, 2011 1. Introduction: NUST FALCONS Team Description for RoboCup Small Size League, 2011 Arsalan Akhter, Muhammad Jibran Mehfooz Awan, Ali Imran, Salman Shafqat, M. Aneeq-uz-Zaman, Imtiaz Noor, Kanwar Faraz,

More information

ISSN No: International Journal & Magazine of Engineering, Technology, Management and Research

ISSN No: International Journal & Magazine of Engineering, Technology, Management and Research Design of Automatic Number Plate Recognition System Using OCR for Vehicle Identification M.Kesab Chandrasen Abstract: Automatic Number Plate Recognition (ANPR) is an image processing technology which uses

More information

Number Plate Detection with a Multi-Convolutional Neural Network Approach with Optical Character Recognition for Mobile Devices

Number Plate Detection with a Multi-Convolutional Neural Network Approach with Optical Character Recognition for Mobile Devices J Inf Process Syst, Vol.12, No.1, pp.100~108, March 2016 http://dx.doi.org/10.3745/jips.04.0022 ISSN 1976-913X (Print) ISSN 2092-805X (Electronic) Number Plate Detection with a Multi-Convolutional Neural

More information

Project Title: Sparse Image Reconstruction with Trainable Image priors

Project Title: Sparse Image Reconstruction with Trainable Image priors Project Title: Sparse Image Reconstruction with Trainable Image priors Project Supervisor(s) and affiliation(s): Stamatis Lefkimmiatis, Skolkovo Institute of Science and Technology (Email: s.lefkimmiatis@skoltech.ru)

More information

A Vehicular Visual Tracking System Incorporating Global Positioning System

A Vehicular Visual Tracking System Incorporating Global Positioning System Vol:5, :6, 20 A Vehicular Visual Tracking System Incorporating Global Positioning System Hsien-Chou Liao and Yu-Shiang Wang International Science Index, Computer and Information Engineering Vol:5, :6,

More information

OpenCV Based Real-Time Video Processing Using Android Smartphone

OpenCV Based Real-Time Video Processing Using Android Smartphone OpenCV Based Real-Time Video Processing Using Android Smartphone Ammar Anuar, Khairul Muzzammil Saipullah, Nurul Atiqah Ismail, Yewguan Soo Abstract as the smarphone industry grows rapidly, the smartphone

More information

Indoor localization using NFC and mobile sensor data corrected using neural net

Indoor localization using NFC and mobile sensor data corrected using neural net Proceedings of the 9 th International Conference on Applied Informatics Eger, Hungary, January 29 February 1, 2014. Vol. 2. pp. 163 169 doi: 10.14794/ICAI.9.2014.2.163 Indoor localization using NFC and

More information

Sensing and Perception

Sensing and Perception Unit D tion Exploring Robotics Spring, 2013 D.1 Why does a robot need sensors? the environment is complex the environment is dynamic enable the robot to learn about current conditions in its environment.

More information

Visual Perception Based Behaviors for a Small Autonomous Mobile Robot

Visual Perception Based Behaviors for a Small Autonomous Mobile Robot Visual Perception Based Behaviors for a Small Autonomous Mobile Robot Scott Jantz and Keith L Doty Machine Intelligence Laboratory Mekatronix, Inc. Department of Electrical and Computer Engineering Gainesville,

More information

Key-Words: - Neural Networks, Cerebellum, Cerebellar Model Articulation Controller (CMAC), Auto-pilot

Key-Words: - Neural Networks, Cerebellum, Cerebellar Model Articulation Controller (CMAC), Auto-pilot erebellum Based ar Auto-Pilot System B. HSIEH,.QUEK and A.WAHAB Intelligent Systems Laboratory, School of omputer Engineering Nanyang Technological University, Blk N4 #2A-32 Nanyang Avenue, Singapore 639798

More information

ECE 477 Digital Systems Senior Design Project Rev 8/09. Homework 5: Theory of Operation and Hardware Design Narrative

ECE 477 Digital Systems Senior Design Project Rev 8/09. Homework 5: Theory of Operation and Hardware Design Narrative ECE 477 Digital Systems Senior Design Project Rev 8/09 Homework 5: Theory of Operation and Hardware Design Narrative Team Code Name: _ATV Group No. 3 Team Member Completing This Homework: Sebastian Hening

More information

Swarm Intelligence W7: Application of Machine- Learning Techniques to Automatic Control Design and Optimization

Swarm Intelligence W7: Application of Machine- Learning Techniques to Automatic Control Design and Optimization Swarm Intelligence W7: Application of Machine- Learning Techniques to Automatic Control Design and Optimization Learning to avoid obstacles Outline Problem encoding using GA and ANN Floreano and Mondada

More information

Automatic Vehicles Detection from High Resolution Satellite Imagery Using Morphological Neural Networks

Automatic Vehicles Detection from High Resolution Satellite Imagery Using Morphological Neural Networks Automatic Vehicles Detection from High Resolution Satellite Imagery Using Morphological Neural Networks HONG ZHENG Research Center for Intelligent Image Processing and Analysis School of Electronic Information

More information

arxiv: v2 [cs.cv] 7 Dec 2016

arxiv: v2 [cs.cv] 7 Dec 2016 Learning from Maps: Visual Common Sense for Autonomous Driving Ari Seff aseff@princeton.edu Jianxiong Xiao profx@autox.ai arxiv:1611.08583v2 [cs.cv] 7 Dec 2016 Abstract Today s autonomous vehicles rely

More information

Biologically Inspired Embodied Evolution of Survival

Biologically Inspired Embodied Evolution of Survival Biologically Inspired Embodied Evolution of Survival Stefan Elfwing 1,2 Eiji Uchibe 2 Kenji Doya 2 Henrik I. Christensen 1 1 Centre for Autonomous Systems, Numerical Analysis and Computer Science, Royal

More information

Optimization of Four-Way Controlled Intersections with Autonomous and Human-Driven Vehicles

Optimization of Four-Way Controlled Intersections with Autonomous and Human-Driven Vehicles Optimization of Four-Way Controlled Intersections with Autonomous and Human-Driven Vehicles Roshni Dhanasekar roshni.dhanasekar@gmail.com Nikhil Kolachalama geomathnikhil903@gmail.com Sheikh Mahmud Elizabeth

More information

AI Application Processing Requirements

AI Application Processing Requirements AI Application Processing Requirements 1 Low Medium High Sensor analysis Activity Recognition (motion sensors) Stress Analysis or Attention Analysis Audio & sound Speech Recognition Object detection Computer

More information

VSI Labs The Build Up of Automated Driving

VSI Labs The Build Up of Automated Driving VSI Labs The Build Up of Automated Driving October - 2017 Agenda Opening Remarks Introduction and Background Customers Solutions VSI Labs Some Industry Content Opening Remarks Automated vehicle systems

More information

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT

MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT MULTI-LAYERED HYBRID ARCHITECTURE TO SOLVE COMPLEX TASKS OF AN AUTONOMOUS MOBILE ROBOT F. TIECHE, C. FACCHINETTI and H. HUGLI Institute of Microtechnology, University of Neuchâtel, Rue de Tivoli 28, CH-2003

More information